Compare commits

...

3 Commits

Author SHA1 Message Date
woikos
205f23fc0c Add message segmentation to NRC protocol spec (v0.48.15)
- Add CHUNK response type for large payload handling
- Document chunking threshold (40KB) accounting for encryption overhead
- Specify chunk message format with messageId, index, total, data fields
- Add sender chunking process with Base64 encoding steps
- Add receiver reassembly process with buffer management
- Document 60-second timeout for incomplete chunk buffers
- Update client/bridge implementation notes with chunking requirements
- Add Smesh as reference implementation for client-side chunking

Files modified:
- docs/NIP-NRC.md: Added Message Segmentation section and updated impl notes
- pkg/version/version: v0.48.14 -> v0.48.15

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-11 11:29:31 +01:00
woikos
489b9f4593 Improve release command VPS deployment docs (v0.48.14)
- Clarify ARM64 build-on-remote approach for relay.orly.dev
- Remove unnecessary git stash from deployment command
- Add note about setcap needing reapplication after binary rebuild
- Use explicit GOPATH and go binary path for clarity

Files modified:
- .claude/commands/release.md: Improved deployment step documentation
- pkg/version/version: v0.48.13 -> v0.48.14

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-11 11:14:20 +01:00
woikos
604d759a6a Fix web UI not showing cached events and add Blossom toggle (v0.48.13)
- Fix fetchEvents() discarding IndexedDB cached events instead of merging with relay results
- Add mergeAndDeduplicateEvents() helper to combine and dedupe events by ID
- Add ORLY_BLOSSOM_ENABLED config option to disable Blossom server
- Make fetch-kinds.js fall back to existing eventKinds.js when network unavailable

Files modified:
- app/web/src/nostr.js: Fix event caching, add merge helper
- app/web/scripts/fetch-kinds.js: Add fallback for network failures
- app/config/config.go: Add BlossomEnabled config field
- app/main.go: Check BlossomEnabled before initializing Blossom server
- pkg/version/version: Bump to v0.48.13

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2026-01-11 04:55:55 +01:00
9 changed files with 175 additions and 41 deletions

.claude/commands/release.md

@@ -49,10 +49,12 @@ If no argument provided, default to `patch`.
     GIT_SSH_COMMAND="ssh -i ~/.ssh/gitmlekudev" git push ssh://mleku@git.mleku.dev:2222/mleku/next.orly.dev.git main --tags
     ```
-11. **Deploy to VPS** by running:
-    ```
-    ssh relay.orly.dev 'cd ~/src/next.orly.dev && git stash && git pull origin main && export PATH=$PATH:~/go/bin && CGO_ENABLED=0 go build -o ~/.local/bin/next.orly.dev && sudo /usr/sbin/setcap cap_net_bind_service=+ep ~/.local/bin/next.orly.dev && sudo systemctl restart orly && ~/.local/bin/next.orly.dev version'
+11. **Deploy to relay.orly.dev** (ARM64):
+    Build on remote (faster than uploading cross-compiled binary due to slow local bandwidth):
+    ```bash
+    ssh relay.orly.dev 'cd ~/src/next.orly.dev && git pull origin main && GOPATH=$HOME CGO_ENABLED=0 ~/go/bin/go build -o ~/.local/bin/next.orly.dev && sudo /usr/sbin/setcap cap_net_bind_service=+ep ~/.local/bin/next.orly.dev && sudo systemctl restart orly && ~/.local/bin/next.orly.dev version'
     ```
+    Note: setcap must be re-applied after each binary rebuild to allow binding to ports 80/443.
 12. **Report completion** with the new version and commit hash

app/config/config.go

@@ -72,7 +72,8 @@ type C struct {
     FollowsThrottlePerEvent time.Duration `env:"ORLY_FOLLOWS_THROTTLE_INCREMENT" default:"200ms" usage:"delay added per event for non-followed users"`
     FollowsThrottleMaxDelay time.Duration `env:"ORLY_FOLLOWS_THROTTLE_MAX" default:"60s" usage:"maximum throttle delay cap"`
-    // Blossom blob storage service level settings
+    // Blossom blob storage service settings
+    BlossomEnabled       bool   `env:"ORLY_BLOSSOM_ENABLED" default:"true" usage:"enable Blossom blob storage server (only works with Badger backend)"`
     BlossomServiceLevels string `env:"ORLY_BLOSSOM_SERVICE_LEVELS" usage:"comma-separated list of service levels in format: name:storage_mb_per_sat_per_month (e.g., basic:1,premium:10)"`
     // Web UI and dev mode settings

app/main.go

@@ -435,7 +435,7 @@ func Run(
     // Initialize Blossom blob storage server (only for Badger backend)
     // MUST be done before UserInterface() which registers routes
-    if badgerDB, ok := db.(*database.D); ok {
+    if badgerDB, ok := db.(*database.D); ok && cfg.BlossomEnabled {
         log.I.F("Badger backend detected, initializing Blossom server...")
         if l.blossomServer, err = initializeBlossomServer(ctx, cfg, badgerDB); err != nil {
             log.E.F("failed to initialize blossom server: %v", err)
@@ -445,6 +445,8 @@
         } else {
             log.W.F("blossom server initialization returned nil without error")
         }
+    } else if !cfg.BlossomEnabled {
+        log.I.F("Blossom server disabled via ORLY_BLOSSOM_ENABLED=false")
     } else {
         log.I.F("Non-Badger backend detected (type: %T), Blossom server not available", db)
     }

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

app/web/scripts/fetch-kinds.js

@@ -6,25 +6,35 @@
 import { fileURLToPath } from 'url';
 import { dirname, join } from 'path';
-import { writeFileSync } from 'fs';
+import { writeFileSync, existsSync } from 'fs';
 
 const __filename = fileURLToPath(import.meta.url);
 const __dirname = dirname(__filename);
 
 const KINDS_URL = 'https://git.mleku.dev/mleku/nostr/raw/branch/main/encoders/kind/kinds.json';
+const OUTPUT_PATH = join(__dirname, '..', 'src', 'eventKinds.js');
 
 async function fetchKinds() {
   console.log(`Fetching kinds from ${KINDS_URL}...`);
-  const response = await fetch(KINDS_URL);
-  if (!response.ok) {
-    throw new Error(`Failed to fetch kinds.json: ${response.status} ${response.statusText}`);
+  try {
+    // fetch() has no `timeout` option; AbortSignal.timeout() is the
+    // supported way to bound the request to 10 seconds
+    const response = await fetch(KINDS_URL, { signal: AbortSignal.timeout(10000) });
+    if (!response.ok) {
+      throw new Error(`HTTP ${response.status} ${response.statusText}`);
+    }
+    const data = await response.json();
+    console.log(`Fetched ${Object.keys(data.kinds).length} kinds (version: ${data.version})`);
+    return data;
+  } catch (error) {
+    // Check if we have an existing eventKinds.js we can use
+    if (existsSync(OUTPUT_PATH)) {
+      console.warn(`Warning: Could not fetch kinds.json (${error.message})`);
+      console.log(`Using existing ${OUTPUT_PATH}`);
+      return null; // Signal to skip generation
+    }
+    throw new Error(`Failed to fetch kinds.json and no existing file: ${error.message}`);
   }
-  const data = await response.json();
-  console.log(`Fetched ${Object.keys(data.kinds).length} kinds (version: ${data.version})`);
-  return data;
 }
 
 function generateEventKinds(data) {
@@ -202,14 +212,18 @@ export const kindCategories = [
 async function main() {
   try {
     const data = await fetchKinds();
+
+    // If fetchKinds returned null, we're using the existing file
+    if (data === null) {
+      console.log('Skipping generation, using existing eventKinds.js');
+      return;
+    }
+
     const kinds = generateEventKinds(data);
     const js = generateJS(kinds, data);
 
     // Write to src/eventKinds.js
-    const outPath = join(__dirname, '..', 'src', 'eventKinds.js');
-    writeFileSync(outPath, js);
-    console.log(`Generated ${outPath} with ${kinds.length} kinds`);
+    writeFileSync(OUTPUT_PATH, js);
+    console.log(`Generated ${OUTPUT_PATH} with ${kinds.length} kinds`);
   } catch (error) {
     console.error('Error:', error.message);
     process.exit(1);

app/web/src/nostr.js

@@ -179,6 +179,28 @@ export class Nip07Signer {
   }
 }
 
+// Merge two event arrays, deduplicating by event id
+// Newer events (by created_at) take precedence for same id
+function mergeAndDeduplicateEvents(cached, relay) {
+  const eventMap = new Map();
+
+  // Add cached events first
+  for (const event of cached) {
+    eventMap.set(event.id, event);
+  }
+
+  // Add/update with relay events (they may be newer)
+  for (const event of relay) {
+    const existing = eventMap.get(event.id);
+    if (!existing || event.created_at >= existing.created_at) {
+      eventMap.set(event.id, event);
+    }
+  }
+
+  // Return sorted by created_at descending (newest first)
+  return Array.from(eventMap.values()).sort((a, b) => b.created_at - a.created_at);
+}
+
 // IndexedDB helpers for unified event storage
 // This provides a local cache that all components can access
 const DB_NAME = "nostrCache";

@@ -573,9 +595,10 @@ export async function fetchEvents(filters, options = {}) {
   } = options;
 
   // Try to get cached events first if requested
+  let cachedEvents = [];
   if (useCache) {
     try {
-      const cachedEvents = await queryEventsFromDB(filters);
+      cachedEvents = await queryEventsFromDB(filters);
       if (cachedEvents.length > 0) {
         console.log(`Found ${cachedEvents.length} cached events in IndexedDB`);
       }

@@ -585,17 +608,19 @@ export async function fetchEvents(filters, options = {}) {
   }
 
   return new Promise((resolve, reject) => {
-    const events = [];
+    const relayEvents = [];
     const timeoutId = setTimeout(() => {
-      console.log(`Timeout reached after ${timeout}ms, returning ${events.length} events`);
+      console.log(`Timeout reached after ${timeout}ms, returning ${relayEvents.length} relay events`);
       sub.close();
       // Store all received events in IndexedDB before resolving
-      if (events.length > 0) {
-        putEvents(events).catch(e => console.warn("Failed to cache events", e));
+      if (relayEvents.length > 0) {
+        putEvents(relayEvents).catch(e => console.warn("Failed to cache events", e));
       }
-      resolve(events);
+      // Merge cached events with relay events, deduplicate by id
+      const mergedEvents = mergeAndDeduplicateEvents(cachedEvents, relayEvents);
+      resolve(mergedEvents);
     }, timeout);
 
     try {

@@ -615,22 +640,25 @@ export async function fetchEvents(filters, options = {}) {
           created_at: event.created_at,
           content_preview: event.content?.substring(0, 50)
         });
-        relayEvents.push(event);
+        relayEvents.push(event);
         // Store event immediately in IndexedDB
         putEvent(event).catch(e => console.warn("Failed to cache event", e));
       },
       oneose() {
-        console.log(`✅ EOSE received for REQ [${subId}], got ${events.length} events`);
+        console.log(`✅ EOSE received for REQ [${subId}], got ${relayEvents.length} relay events`);
        clearTimeout(timeoutId);
        sub.close();
        // Store all events in IndexedDB before resolving
-        if (events.length > 0) {
-          putEvents(events).catch(e => console.warn("Failed to cache events", e));
+        if (relayEvents.length > 0) {
+          putEvents(relayEvents).catch(e => console.warn("Failed to cache events", e));
        }
-        resolve(events);
+        // Merge cached events with relay events, deduplicate by id
+        const mergedEvents = mergeAndDeduplicateEvents(cachedEvents, relayEvents);
+        console.log(`Merged ${cachedEvents.length} cached + ${relayEvents.length} relay = ${mergedEvents.length} total events`);
+        resolve(mergedEvents);
      }
    }
  );
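For context, a minimal usage sketch of the patched `fetchEvents` (the filter and option values here are illustrative, not taken from the codebase):

```javascript
// Fetch kind-1 notes: cached IndexedDB events seed the result set and are
// merged with relay results, deduplicated by id, before the promise resolves.
const events = await fetchEvents([{ kinds: [1], limit: 50 }], {
  useCache: true, // pull cached events from IndexedDB first
  timeout: 5000,  // on timeout, resolve with merged cached + relay events
});
console.log(`Got ${events.length} events, newest first`);
```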

docs/NIP-NRC.md

@@ -137,7 +137,7 @@ Where `payload` is the standard Nostr message array, e.g.:
 The encrypted content structure:
 
 ```json
 {
-  "type": "EVENT" | "OK" | "EOSE" | "NOTICE" | "CLOSED" | "COUNT" | "AUTH",
+  "type": "EVENT" | "OK" | "EOSE" | "NOTICE" | "CLOSED" | "COUNT" | "AUTH" | "CHUNK",
   "payload": <standard_nostr_response_array>
 }
 ```

@@ -150,6 +150,7 @@ Where `payload` is the standard Nostr response array, e.g.:
 - `["CLOSED", "<sub_id>", "<message>"]`
 - `["COUNT", "<sub_id>", {"count": <n>}]`
 - `["AUTH", "<challenge>"]`
+- `[<chunk_object>]` (for CHUNK type, see Message Segmentation)
 
 ### Session Management
@@ -168,6 +169,85 @@ The conversation key is derived from:
 - **Secret-based auth**: ECDH between client's secret key (derived from URI secret) and relay's public key
 - **CAT auth**: ECDH between client's Nostr key and relay's public key
 
+### Message Segmentation
+
+Some Nostr events exceed typical relay message size limits (commonly 64KB). NRC supports message segmentation to handle large payloads by splitting them into multiple chunks.
+
+#### When to Chunk
+
+Senders SHOULD chunk messages when the JSON-serialized response exceeds 40KB. This threshold accounts for:
+
+- NIP-44 encryption overhead (~100 bytes)
+- Base64 encoding expansion (~33%)
+- Event wrapper overhead (tags, signature, etc.)
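For example: a 40KB payload grows to roughly 53.4KB after Base64 expansion (×4/3), which still fits under a 64KB relay limit once the encryption and event wrapper overhead are added.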
+#### Chunk Message Format
+
+When a response is too large, it is split into multiple CHUNK responses:
+
+```json
+{
+  "type": "CHUNK",
+  "payload": [{
+    "type": "CHUNK",
+    "messageId": "<uuid>",
+    "index": 0,
+    "total": 3,
+    "data": "<base64_encoded_chunk>"
+  }]
+}
+```
+
+Fields:
+
+- `messageId`: A unique identifier (UUID) for the chunked message, used to correlate chunks
+- `index`: Zero-based chunk index (0, 1, 2, ...)
+- `total`: Total number of chunks in this message
+- `data`: Base64-encoded segment of the original message
+
+#### Chunking Process (Sender)
+
+1. Serialize the original response message to JSON
+2. If the serialized length exceeds the threshold (40KB), proceed with chunking
+3. Encode the JSON string as UTF-8, then Base64 encode it
+4. Split the Base64 string into chunks of the maximum chunk size
+5. Generate a unique `messageId` (UUID recommended)
+6. Send each chunk as a separate CHUNK response event
+
+Example encoding (JavaScript):
+
+```javascript
+const encoded = btoa(unescape(encodeURIComponent(jsonString)))
+```
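A minimal sender-side sketch of steps 1-6 above; `MAX_CHUNK_SIZE` and `sendResponse` are hypothetical names, since the spec fixes only the 40KB threshold and the chunk fields:

```javascript
const CHUNK_THRESHOLD = 40 * 1024; // 40KB, per "When to Chunk"
const MAX_CHUNK_SIZE = 40 * 1024;  // size of each Base64 slice (implementation choice)

// sendResponse is assumed to encrypt and publish one NRC response event.
function sendPossiblyChunked(response, sendResponse) {
  const jsonString = JSON.stringify(response);
  if (jsonString.length <= CHUNK_THRESHOLD) {
    sendResponse(response); // small enough to send unchunked
    return;
  }
  // UTF-8 encode, then Base64 (same idiom as the encoding example above)
  const encoded = btoa(unescape(encodeURIComponent(jsonString)));
  const total = Math.ceil(encoded.length / MAX_CHUNK_SIZE);
  const messageId = crypto.randomUUID(); // one id shared by all chunks
  for (let index = 0; index < total; index++) {
    const data = encoded.slice(index * MAX_CHUNK_SIZE, (index + 1) * MAX_CHUNK_SIZE);
    sendResponse({
      type: "CHUNK",
      payload: [{ type: "CHUNK", messageId, index, total, data }],
    });
  }
}
```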
+#### Reassembly Process (Receiver)
+
+1. When receiving a CHUNK response, buffer it by `messageId`
+2. Track received chunks by `index`
+3. When all chunks are received (`chunks.size === total`):
+   a. Concatenate chunk data in index order (0, 1, 2, ...)
+   b. Base64 decode the concatenated string
+   c. Parse as UTF-8 JSON to recover the original response
+4. Process the reassembled response as normal
+5. Clean up the chunk buffer
+
+Example decoding (JavaScript):
+
+```javascript
+const jsonString = decodeURIComponent(escape(atob(concatenatedBase64)))
+const response = JSON.parse(jsonString)
+```
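A matching receiver-side sketch of the steps above; `chunkBuffers` and `onResponse` are hypothetical names used only for illustration:

```javascript
// messageId -> { total, chunks: Map<index, data>, lastUpdated }
const chunkBuffers = new Map();

function handleChunk({ messageId, index, total, data }, onResponse) {
  let buf = chunkBuffers.get(messageId);
  if (!buf) {
    buf = { total, chunks: new Map(), lastUpdated: Date.now() };
    chunkBuffers.set(messageId, buf);
  }
  if (buf.chunks.has(index)) return; // ignore duplicate chunks
  buf.chunks.set(index, data);
  buf.lastUpdated = Date.now();
  if (buf.chunks.size === buf.total) {
    // Concatenate in index order, then Base64-decode back to UTF-8 JSON
    const b64 = Array.from({ length: buf.total }, (_, i) => buf.chunks.get(i)).join('');
    const jsonString = decodeURIComponent(escape(atob(b64)));
    chunkBuffers.delete(messageId);
    onResponse(JSON.parse(jsonString));
  }
}
```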
+#### Chunk Buffer Management
+
+Receivers MUST implement chunk buffer cleanup (see the sketch after this list):
+
+- Discard incomplete chunk buffers after 60 seconds of inactivity
+- Limit the number of concurrent incomplete messages to prevent memory exhaustion
+- Log warnings when discarding stale buffers for debugging
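One possible cleanup loop for the rules above, reusing `chunkBuffers` from the previous sketch (the 10-second sweep interval is an implementation choice, not a spec requirement):

```javascript
const CHUNK_BUFFER_TIMEOUT_MS = 60_000; // 60-second rule from the spec

setInterval(() => {
  const now = Date.now();
  for (const [messageId, buf] of chunkBuffers) {
    if (now - buf.lastUpdated > CHUNK_BUFFER_TIMEOUT_MS) {
      console.warn(`Discarding stale chunk buffer ${messageId} (${buf.chunks.size}/${buf.total} chunks)`);
      chunkBuffers.delete(messageId);
    }
  }
}, 10_000);
```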
+#### Ordering and Reliability
+
+- Chunks MAY arrive out of order; receivers MUST reassemble by index
+- Missing chunks result in message loss; the incomplete buffer is eventually discarded
+- Duplicate chunks (same messageId + index) SHOULD be ignored
+- Each chunk is sent as a separate encrypted NRC response event
 
 ### Authentication
 
 #### Secret-Based Authentication

@@ -208,6 +288,9 @@ The conversation key is derived from:
 4. Match responses using the `e` tag (references request event ID)
 5. Handle EOSE by waiting for kind 24892 with type "EOSE" in content
 6. For subscriptions, maintain mapping of internal sub IDs to tunnel session
+7. **Chunking**: Maintain a chunk buffer map keyed by `messageId`
+8. **Chunking**: When receiving CHUNK responses, buffer chunks and reassemble when complete
+9. **Chunking**: Implement 60-second timeout for incomplete chunk buffers
 
 ## Bridge Implementation Notes

@@ -217,10 +300,14 @@ The conversation key is derived from:
 4. Capture all relay responses and wrap in kind 24892
 5. Sign with relay's key and publish to rendezvous relay
 6. Maintain session state for subscription mapping
+7. **Chunking**: Check response size before sending; chunk if > 40KB
+8. **Chunking**: Use consistent messageId (UUID) across all chunks of a message
+9. **Chunking**: Send chunks in order (index 0, 1, 2, ...) for optimal reassembly
 
 ## Reference Implementations
 
-- ORLY Relay: [https://git.mleku.dev/mleku/next.orly.dev](https://git.mleku.dev/mleku/next.orly.dev)
+- ORLY Relay (Bridge): [https://git.mleku.dev/mleku/next.orly.dev](https://git.mleku.dev/mleku/next.orly.dev)
+- Smesh Client: [https://git.mleku.dev/mleku/smesh](https://git.mleku.dev/mleku/smesh)
 
 ## See Also

pkg/version/version

@@ -1 +1 @@
-v0.48.12
+v0.48.15