Compare commits


178 Commits

woikos
489b9f4593 Improve release command VPS deployment docs (v0.48.14)
- Clarify ARM64 build-on-remote approach for relay.orly.dev
- Remove unnecessary git stash from deployment command
- Add note about setcap needing reapplication after binary rebuild
- Use explicit GOPATH and go binary path for clarity

Files modified:
- .claude/commands/release.md: Improved deployment step documentation
- pkg/version/version: v0.48.13 -> v0.48.14

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-11 11:14:20 +01:00
woikos
604d759a6a Fix web UI not showing cached events and add Blossom toggle (v0.48.13)
- Fix fetchEvents() discarding IndexedDB cached events instead of merging with relay results
- Add mergeAndDeduplicateEvents() helper to combine and dedupe events by ID
- Add ORLY_BLOSSOM_ENABLED config option to disable Blossom server
- Make fetch-kinds.js fall back to existing eventKinds.js when network unavailable
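
A minimal Go sketch of the merge-and-dedupe approach (the real helper is JavaScript in app/web/src/nostr.js; the Event shape here is an assumption):

```go
package web

// Event is a stand-in for the web UI's event objects (assumed shape).
type Event struct {
	ID        string
	CreatedAt int64
}

// mergeAndDeduplicateEvents combines cached and relay results, keeping the
// first copy seen for each event ID so cached events are no longer discarded.
func mergeAndDeduplicateEvents(cached, fetched []Event) []Event {
	seen := make(map[string]bool, len(cached)+len(fetched))
	out := make([]Event, 0, len(cached)+len(fetched))
	add := func(evs []Event) {
		for _, ev := range evs {
			if !seen[ev.ID] {
				seen[ev.ID] = true
				out = append(out, ev)
			}
		}
	}
	add(cached)  // IndexedDB results first
	add(fetched) // then relay results
	return out
}
```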

Files modified:
- app/web/src/nostr.js: Fix event caching, add merge helper
- app/web/scripts/fetch-kinds.js: Add fallback for network failures
- app/config/config.go: Add BlossomEnabled config field
- app/main.go: Check BlossomEnabled before initializing Blossom server
- pkg/version/version: Bump to v0.48.13

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2026-01-11 04:55:55 +01:00
woikos
be72b694eb Add BBolt rate limiting and tune Badger defaults for large archives (v0.48.12)
- Increase Badger cache defaults: block 512→1024MB, index 256→512MB
- Increase serial cache defaults: pubkeys 100k→250k, event IDs 500k→1M
- Change ZSTD default from level 1 (fast) to level 3 (balanced)
- Add memory-only rate limiter for BBolt backend
- Add BBolt to database backend docs with scaling recommendations
- Document migration between Badger and BBolt backends

Files modified:
- app/config/config.go: Tuned defaults for large-scale deployments
- main.go: Add BBolt rate limiter support
- pkg/ratelimit/factory.go: Add NewMemoryOnlyLimiter factory
- pkg/ratelimit/memory_monitor.go: New memory-only load monitor
- CLAUDE.md: Add BBolt docs and scaling guide

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-10 11:55:07 +01:00
woikos
61f6027a64 Remove auto-profile creation and add auth config docs (v0.48.11)
- Remove createDefaultProfile() function from nostr.js that auto-created
  placeholder profiles for new users - profiles should not be auto-generated
- Add auth-required configuration caution section to CLAUDE.md documenting
  risks of enabling NIP-42 auth on production relays

Files modified:
- CLAUDE.md: Added auth-required configuration section
- app/web/src/nostr.js: Removed createDefaultProfile and auto-profile logic
- app/web/dist/bundle.js: Rebuilt with changes
- pkg/version/version: v0.48.11

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-10 10:22:56 +01:00
woikos
e7bc9a4a97 Add progressive throttle for follows ACL mode (v0.48.10)
- Add progressive throttle feature for follows ACL mode, allowing
  non-followed users to write with increasing delay instead of blocking
- Delay increases linearly per event (default 200ms) and decays at 1:1
  ratio with elapsed time, capping at configurable max (default 60s)
- Track both IP and pubkey independently to prevent evasion
- Add periodic cleanup to remove fully-decayed throttle entries
- Fix BBolt serial resolver to return proper errors when buckets or
  entries are not found
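
A minimal Go sketch of the decay arithmetic described above; names and shapes are assumptions rather than the actual pkg/acl/follows_throttle.go API:

```go
package acl

import (
	"sync"
	"time"
)

type throttleEntry struct {
	delay    time.Duration
	lastSeen time.Time
}

type followsThrottle struct {
	mu      sync.Mutex
	step    time.Duration             // default 200ms added per event
	max     time.Duration             // default 60s cap
	entries map[string]*throttleEntry // keyed independently by IP and pubkey
}

func newFollowsThrottle(step, max time.Duration) *followsThrottle {
	return &followsThrottle{step: step, max: max, entries: make(map[string]*throttleEntry)}
}

// delayFor returns how long the caller should wait before processing this
// event, then charges one step against the key.
func (t *followsThrottle) delayFor(key string) time.Duration {
	t.mu.Lock()
	defer t.mu.Unlock()
	now := time.Now()
	e, ok := t.entries[key]
	if !ok {
		e = &throttleEntry{lastSeen: now}
		t.entries[key] = e
	}
	// Decay at a 1:1 ratio with time elapsed since the last event.
	if d := e.delay - now.Sub(e.lastSeen); d > 0 {
		e.delay = d
	} else {
		e.delay = 0 // fully decayed entries get removed by periodic cleanup
	}
	wait := e.delay
	e.delay += t.step // linear increase per event
	if e.delay > t.max {
		e.delay = t.max
	}
	e.lastSeen = now
	return wait
}
```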

Files modified:
- app/config/config.go: Add ORLY_FOLLOWS_THROTTLE_* env vars
- app/handle-event.go: Apply throttle delay before event processing
- app/listener.go: Add getFollowsThrottleDelay helper method
- pkg/acl/follows.go: Integrate throttle with follows ACL
- pkg/acl/follows_throttle.go: New progressive throttle implementation
- pkg/bbolt/save-event.go: Return errors from serial lookups
- pkg/version/version: Bump to v0.48.10

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-09 17:39:04 +01:00
woikos
41a3b5c0a5 Fix OOM crash from corrupt compact event data
Add sanity bounds to prevent memory exhaustion when decoding corrupt
events with garbage varint values. Previously, corrupt data could cause
massive allocations (e.g., make([]byte, 2^60)) leading to OOM crashes.

- Add MaxTagsPerEvent (10000), MaxTagElements (100), MaxContentLength (10MB),
  MaxTagElementLength (1MB) limits
- Return sentinel errors for corrupt data instead of logging
- Silently skip corrupt events (caller handles gracefully)
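
A Go sketch of the bounded-allocation pattern, using the limits from this commit; the helper and sentinel names are assumptions:

```go
package database

import (
	"bytes"
	"encoding/binary"
	"errors"
	"io"
)

// Limits from the commit message.
const (
	MaxTagsPerEvent     = 10000
	MaxTagElements      = 100
	MaxContentLength    = 10 * 1024 * 1024 // 10MB
	MaxTagElementLength = 1024 * 1024      // 1MB
)

var ErrCorruptEvent = errors.New("corrupt compact event data")

// readBounded validates a varint length against a sanity limit before
// allocating, so a garbage varint can no longer trigger a huge allocation
// like make([]byte, 1<<60).
func readBounded(r *bytes.Reader, limit uint64) ([]byte, error) {
	n, err := binary.ReadUvarint(r)
	if err != nil || n > limit {
		return nil, ErrCorruptEvent // sentinel error; caller skips the event
	}
	buf := make([]byte, n)
	if _, err := io.ReadFull(r, buf); err != nil {
		return nil, ErrCorruptEvent
	}
	return buf, nil
}
```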

This fixes crash loops on archive.orly.dev, where an OOM during writes
left corrupt events in the bbolt database.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-09 11:46:36 +01:00
woikos
d41c332d06 Add NRC (Nostr Relay Connect) protocol and web UI (v0.48.9)
- Implement NIP-NRC protocol for remote relay access through public relay tunnel
- Add NRC bridge service with NIP-44 encrypted message tunneling
- Add NRC client library for applications
- Add session management with subscription tracking and expiry
- Add URI parsing for nostr+relayconnect:// scheme with secret and CAT auth
- Add NRC API endpoints for connection management (create/list/delete/get-uri)
- Add RelayConnectView.svelte component for managing NRC connections in web UI
- Add NRC database storage for connection secrets and labels
- Add NRC CLI commands (generate, list, revoke)
- Add support for Cashu Access Tokens (CAT) in NRC URIs
- Add ScopeNRC constant for Cashu token scope
- Add wasm build infrastructure and stub files

Files modified:
- app/config/config.go: NRC configuration options
- app/handle-nrc.go: New API handlers for NRC connections
- app/main.go: NRC bridge startup integration
- app/server.go: Register NRC API routes
- app/web/src/App.svelte: Add Relay Connect tab
- app/web/src/RelayConnectView.svelte: New NRC management component
- app/web/src/api.js: NRC API client functions
- main.go: NRC CLI command handlers
- pkg/bunker/acl_adapter.go: Add NRC scope mapping
- pkg/cashu/token/token.go: Add ScopeNRC constant
- pkg/database/nrc.go: NRC connection storage
- pkg/protocol/nrc/: New NRC protocol implementation
- docs/NIP-NRC.md: NIP specification document

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-07 03:40:12 +01:00
woikos
0dac41e35e Add documentation and improve BBolt import memory efficiency (v0.48.8)
- Add README.md table of contents for easier navigation
- Add Curation ACL documentation section to README.md
- Create detailed Curation Mode Guide (docs/CURATION_MODE_GUIDE.md)
- Fix OOM during BBolt index building by closing temp file before build
- Add GC calls before index building to reclaim batch buffer memory
- Improve import-export.go with processJSONLEventsReturningCount
- Add policy-aware import path for sync operations

Files modified:
- README.md: Added TOC and curation ACL documentation
- docs/CURATION_MODE_GUIDE.md: New comprehensive curation mode guide
- pkg/bbolt/import-export.go: Memory-safe import with deferred cleanup
- pkg/bbolt/import-minimal.go: Added GC before index build
- pkg/version/version: Bump to v0.48.8

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-06 15:37:06 +01:00
woikos
2480be3a73 Fix OOM in BuildIndexes by processing in chunks (v0.48.6)
- Process events in 200k chunks instead of loading all at once
- Write indexes to disk after each chunk, then free memory
- Call debug.FreeOSMemory() between chunks to release memory to OS
- Memory usage now ~150-200MB per chunk instead of 5GB+
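
A Go sketch of the chunked build loop described above; the callback shapes are assumptions:

```go
package database

import "runtime/debug"

const chunkSize = 200_000

// buildIndexesChunked processes events in fixed-size chunks so peak memory
// is bounded by one chunk rather than the whole event set.
func buildIndexesChunked(serials []uint64, index func(uint64) error, flush func() error) error {
	for start := 0; start < len(serials); start += chunkSize {
		end := start + chunkSize
		if end > len(serials) {
			end = len(serials)
		}
		for _, s := range serials[start:end] {
			if err := index(s); err != nil {
				return err
			}
		}
		if err := flush(); err != nil { // write this chunk's indexes to disk
			return err
		}
		debug.FreeOSMemory() // hand freed chunk memory back to the OS
	}
	return nil
}
```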

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-06 09:10:50 +01:00
woikos
d363f5da04 Implement BBolt ImportEventsFromReader for migration (v0.48.1)
- Add import-export.go with full JSONL import support for bbolt
- Remove Import/Export/ImportEventsFromReader stubs from stubs.go
- Includes batched write flush after import completion
- Progress logging every 5 seconds during import

Files modified:
- pkg/bbolt/import-export.go: New file with import functionality
- pkg/bbolt/stubs.go: Remove implemented stubs
- pkg/version/version: v0.48.0 -> v0.48.1

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-06 06:57:06 +01:00
woikos
48a2b97b7e Fix: Add bbolt import for factory registration 2026-01-06 06:52:55 +01:00
woikos
9fed1261ad Add BBolt database backend for HDD-optimized archival relays (v0.48.0)
- BBolt B+tree backend with sequential access patterns for spinning disks
- Write batching (5000 events / 128MB / 30s flush) to reduce disk thrashing
- Adjacency list storage for graph data (one key per vertex, not per edge)
- Bloom filter for fast negative edge existence checks (~12MB for 10M edges)
- No query cache (saves RAM, B+tree reads are fast enough on HDD)
- Migration tool: orly migrate --from badger --to bbolt
- Configuration: ORLY_BBOLT_* environment variables
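
A Go sketch of the flush policy implied by those batching thresholds; field names are assumptions:

```go
package bbolt

import "time"

type writeBatch struct {
	events    int
	bytes     int
	lastFlush time.Time
}

// shouldFlush applies the 5000-event / 128MB / 30s policy, whichever
// threshold is crossed first.
func (b *writeBatch) shouldFlush() bool {
	return b.events >= 5000 ||
		b.bytes >= 128<<20 ||
		time.Since(b.lastFlush) >= 30*time.Second
}
```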

Files modified:
- app/config/config.go: Added BBolt configuration options
- main.go: Added migrate subcommand and BBolt config wiring
- pkg/database/factory.go: Added BBolt factory registration
- pkg/bbolt/*: New BBolt database backend implementation

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-06 06:50:58 +01:00
woikos
8dfd25613d Fix corrupted events with zero-filled IDs/pubkeys/sigs (v0.47.1)
- Add validation in GetEventIdBySerial to ensure sei value is 32 bytes
- Fix fallback-to-legacy bug: return error instead of attempting legacy
  unmarshal on compact format data when event ID lookup fails
- Add upfront validation in UnmarshalCompactEvent for eventId length
- Prevents events with all-zero IDs from being returned to clients

Files modified:
- pkg/database/serial_cache.go: Validate sei value is exactly 32 bytes
- pkg/database/fetch-events-by-serials.go: Return error for compact format
  when eventId missing instead of falling back to legacy unmarshal
- pkg/database/fetch-event-by-serial.go: Same fix for single event fetch
- pkg/database/compact_event.go: Validate eventId is 32 bytes upfront
- pkg/version/version: Bump to v0.47.1

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-06 05:51:34 +01:00
woikos
047cdf3472 Add curation ACL mode and complete graph query implementation (v0.47.0)
Curation Mode:
- Three-tier publisher classification: Trusted, Blacklisted, Unclassified
- Per-pubkey rate limiting (default 50/day) for unclassified users
- IP flood protection (default 500/day) with automatic banning
- Event kind allow-listing via categories, ranges, and custom kinds
- Query filtering hides blacklisted pubkey events (admin/owner exempt)
- Web UI for managing trusted/blacklisted pubkeys and configuration
- NIP-86 API endpoints for all curation management operations
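
A Go sketch of the three-tier write decision; names are assumptions, with the real logic in pkg/acl/curating.go:

```go
package acl

type publisherClass int

const (
	unclassified publisherClass = iota
	trusted
	blacklisted
)

// allowWrite maps the three tiers to a write decision.
func allowWrite(class publisherClass, underDailyLimit bool) bool {
	switch class {
	case trusted:
		return true // no per-pubkey rate limit
	case blacklisted:
		return false // rejected outright; their events are also hidden from queries
	default:
		return underDailyLimit // unclassified: default 50 events/day per pubkey
	}
}
```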

Graph Query Extension:
- Complete reference aggregation for Badger and Neo4j backends
- E-tag graph backfill migration (v8) runs automatically on startup
- Configuration options: ORLY_GRAPH_QUERIES_ENABLED, MAX_DEPTH, etc.
- NIP-11 advertisement of graph query capabilities

Files modified:
- app/handle-nip86-curating.go: NIP-86 curation API handlers (new)
- app/web/src/CurationView.svelte: Curation management UI (new)
- app/web/src/kindCategories.js: Kind category definitions (new)
- pkg/acl/curating.go: Curating ACL implementation (new)
- pkg/database/curating-acl.go: Database layer for curation (new)
- pkg/neo4j/graph-refs.go: Neo4j ref collection (new)
- pkg/database/migrations.go: E-tag graph backfill migration
- pkg/protocol/graph/executor.go: Reference aggregation support
- app/handle-event.go: Curation config event processing
- app/handle-req.go: Blacklist filtering for queries
- docs/GRAPH_QUERIES_REMAINING_PLAN.md: Updated completion status

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-05 21:42:17 +01:00
woikos
ea7bc75fac Fix NIP-11 caching and export streaming issues (v0.46.2)
- Fix Content-Type header being set on request instead of response
- Add Vary: Accept header to prevent browser/CDN caching NIP-11 for HTML
- Add periodic flushing during export for HTTP streaming (every 100 events)
- Add initial flush after headers to start streaming immediately
- Add X-Content-Type-Options: nosniff to prevent browser buffering
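
A Go sketch of the streaming-flush pattern, using the standard http.Flusher interface; the handler shape is an assumption:

```go
package app

import "net/http"

// streamExport writes JSONL events, flushing once after the headers and
// then every 100 events so the HTTP stream starts and keeps moving.
func streamExport(w http.ResponseWriter, events <-chan []byte) {
	f, _ := w.(http.Flusher)
	if f != nil {
		f.Flush() // initial flush: start streaming immediately
	}
	n := 0
	for ev := range events {
		w.Write(ev)
		w.Write([]byte("\n"))
		if n++; f != nil && n%100 == 0 {
			f.Flush() // periodic flush during export
		}
	}
	if f != nil {
		f.Flush() // final flush
	}
}
```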

Files modified:
- app/handle-relayinfo.go: Fix header and add Vary: Accept
- app/server.go: Add initial flush and nosniff header for export
- pkg/database/export.go: Add periodic and final flushing for streaming

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-03 07:17:48 +01:00
woikos
2e9cde01f8 Refactor Tor to subprocess mode, enabled by default (v0.46.1)
- Spawn tor binary as subprocess instead of requiring external daemon
- Auto-generate torrc in $ORLY_DATA_DIR/tor/ (userspace, no root)
- Enable Tor by default; gracefully disable if tor binary not found
- Add ORLY_TOR_BINARY and ORLY_TOR_SOCKS config options
- Remove external Tor setup scripts and documentation
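
A Go sketch of the subprocess startup; the real lifecycle management lives in pkg/tor/service.go, and only tor's standard -f config-file flag is assumed:

```go
package tor

import (
	"os/exec"
	"path/filepath"
)

// start spawns the tor binary against a torrc generated under the data
// directory, so no root or external daemon is required.
func start(binary, dataDir string) (*exec.Cmd, error) {
	torrc := filepath.Join(dataDir, "tor", "torrc")
	cmd := exec.Command(binary, "-f", torrc)
	if err := cmd.Start(); err != nil {
		return nil, err // caller logs this and disables Tor gracefully
	}
	return cmd, nil
}
```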

Files modified:
- app/config/config.go: New subprocess-based Tor config options
- app/main.go: Updated Tor initialization for new config
- pkg/tor/service.go: Rewritten for subprocess management
- Removed: deploy/orly-tor.service, docs/TOR_SETUP.md, scripts/tor-*.sh

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-03 06:01:09 +01:00
woikos
25d087697e Add Tor hidden service support and fallback relay profile fetching (v0.46.0)
- Add pkg/tor package for Tor hidden service integration
- Add Tor config options: ORLY_TOR_ENABLED, ORLY_TOR_PORT, ORLY_TOR_HS_DIR, ORLY_TOR_ONION_ADDRESS
- Extend NIP-11 relay info with addresses field for .onion URLs
- Add fallback relays (Damus, nos.lol, nostr.band, purplepag.es) for profile lookups
- Refactor profile fetching to try local relay first, then fallback relays
- Add Tor setup documentation and deployment scripts

Files modified:
- app/config/config.go: Add Tor configuration options
- app/handle-relayinfo.go: Add ExtendedRelayInfo with addresses field
- app/main.go: Initialize and manage Tor service lifecycle
- app/server.go: Add torService field to Server struct
- app/web/src/constants.js: Add FALLBACK_RELAYS
- app/web/src/nostr.js: Add fallback relay profile fetching
- pkg/tor/: New package for Tor hidden service management
- docs/TOR_SETUP.md: Documentation for Tor configuration
- deploy/orly-tor.service: Systemd service for Tor integration
- scripts/tor-*.sh: Setup scripts for Tor development and production

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-03 05:50:03 +01:00
woikos
6056446a73 Add script to enable archive features on deployment 2026-01-02 19:57:19 +01:00
woikos
8a14cec3cd Add archive relay query augmentation and access-based GC (v0.45.0)
- Add async archive relay querying (local results immediate, archives in background)
- Add query caching with filter normalization to avoid repeated requests
- Add session-deduplicated access tracking for events
- Add continuous garbage collection based on access patterns
- Auto-detect storage limit (80% of filesystem) when ORLY_MAX_STORAGE_BYTES=0 (sketched below)
- Support NIP-50 search queries to archive relays

New environment variables:
- ORLY_ARCHIVE_ENABLED: Enable archive relay query augmentation
- ORLY_ARCHIVE_RELAYS: Comma-separated archive relay URLs
- ORLY_ARCHIVE_TIMEOUT_SEC: Archive query timeout
- ORLY_ARCHIVE_CACHE_TTL_HRS: Query deduplication window
- ORLY_GC_ENABLED: Enable access-based garbage collection
- ORLY_MAX_STORAGE_BYTES: Max storage (0=auto 80%)
- ORLY_GC_INTERVAL_SEC: GC check interval
- ORLY_GC_BATCH_SIZE: Events per GC cycle
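
A Go sketch of the 80% auto-detection (Linux-only, via syscall.Statfs; the real detection logic may differ):

```go
package app

import "syscall"

// autoStorageLimit returns 80% of the capacity of the filesystem holding
// the data directory, used when ORLY_MAX_STORAGE_BYTES=0.
func autoStorageLimit(dataDir string) (uint64, error) {
	var st syscall.Statfs_t
	if err := syscall.Statfs(dataDir, &st); err != nil {
		return 0, err
	}
	total := st.Blocks * uint64(st.Bsize)
	return total * 80 / 100, nil
}
```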

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-02 19:35:16 +01:00
0008d33792 Remove bunker (NIP-46) functionality from web UI (v0.44.7)
- Delete BunkerView.svelte component and all bunker UI
- Remove bunker-service.js and bunker-worker.js implementations
- Clean up bunker stores and worker management from stores.js
- Remove getBunkerURL and getBunkerInfo API functions
- Remove bunker tab from navigation and App.svelte imports
- Simplify rollup.config.js by removing bunker-worker build
- Remove NIP46 token scope from cashu-client.js

Files modified:
- app/web/src/BunkerView.svelte: Deleted
- app/web/src/bunker-service.js: Deleted
- app/web/src/bunker-worker.js: Deleted
- app/web/src/stores.js: Removed bunker state and worker functions
- app/web/src/api.js: Removed bunker API functions
- app/web/src/App.svelte: Removed bunker tab and imports
- app/web/rollup.config.js: Simplified to single bundle
- app/web/src/cashu-client.js: Removed NIP46 scope
- pkg/version/version: Bumped to v0.44.7

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-30 11:16:07 +02:00
woikos
ac61e56b61 Move bunker service to Web Worker for persistence (v0.44.6)
- Add bunker-worker.js Web Worker for NIP-46 signing
- Update rollup to build worker as separate bundle
- Move bunker state to stores.js for persistence across tab switches
- Worker maintains WebSocket connection independently of UI lifecycle

Files modified:
- app/web/src/bunker-worker.js: New Web Worker implementation
- app/web/src/stores.js: Added bunker worker state management
- app/web/src/BunkerView.svelte: Use worker instead of inline service
- app/web/rollup.config.js: Build worker bundle separately

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-29 16:24:56 +01:00
woikos
ae024cc784 Fix bunker UI state management issues (v0.44.5)
- Add guard to prevent duplicate service starts
- Fix stale variable references in error handler
- Show token list even when WebSocket temporarily disconnects
- Add logging for bunker service status changes

Files modified:
- app/web/src/BunkerView.svelte: UI state fixes
- app/web/dist/bundle.js: Rebuilt web UI

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-29 15:56:40 +01:00
woikos
e6fa2f15e4 Add persistent keyset storage for Cashu tokens (v0.44.4)
- Add FileStore implementation for keyset persistence
- Keysets now survive server restarts
- Store keysets in JSON file at $ORLY_DATA_DIR/cashu-keysets.json
- Tokens issued before restart remain valid

Files modified:
- pkg/cashu/keyset/file_store.go: New file-based keyset store
- app/main.go: Use FileStore instead of MemoryStore

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-29 15:37:16 +01:00
woikos
e28ab948b0 Add multi-token support for bunker client connections (v0.44.3)
- Each client device now gets its own CAT token
- Tokens can be individually named (editable, defaults to cute names like "jolly-jellyfish")
- Tokens can be individually revoked
- Expandable table rows show QR code and full bunker URL per token
- Separate service token for ORLY's own relay connection
- Add Token button to create additional client tokens

Files modified:
- app/web/src/BunkerView.svelte: Token list UI with expandable details

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-29 15:02:09 +01:00
woikos
3f34eb288d Update nostr lib v1.0.12 with TLS URL scheme fix for NIP-98 2025-12-29 14:33:12 +01:00
woikos
8424f0ca44 Add debugging for NIP-98 auth in cashu mint 2025-12-29 14:17:50 +01:00
woikos
48c6739d25 Enable Cashu access tokens automatically when ACL is active (v0.44.2)
- Add automatic Cashu issuer/verifier initialization when ACL mode is not 'none'
- Use memory store for keyset management with proper TTL configuration
- Import cashuiface package for AllowAllChecker implementation
- ACL handles authorization; CAT provides token-based authentication

Files modified:
- app/main.go: Add Cashu system initialization when ACL active
- pkg/version/version: Bump to v0.44.2

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-29 14:01:54 +01:00
woikos
b837dcb5f0 Fix UTF-8 encoding error in compact event tag marshaling (v0.44.1)
- Fix binary pubkey/event ID values not being detected by tag.Marshal
- Compact event decoder now returns 33-byte values with null terminator
- This allows tag.Marshal to detect and hex-encode binary values correctly
- Fixes "Could not decode a text frame as UTF-8" WebSocket errors

Files modified:
- pkg/database/compact_event.go: Return 33-byte binary with null terminator

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-29 13:39:49 +01:00
woikos
7ed1aea0f1 Add NIP-46 bunker service for remote signing with CAT support (v0.44.0)
- Add bunker-service.js: NIP-46 signer that handles signing requests from remote clients
- Add cashu-client.js: Cashu token minting for bunker authorization
- Update BunkerView.svelte: Add Start/Stop service toggle, CAT token generation, status indicator
- Update App.svelte: Pass userPrivkey to BunkerView for signing
- Add @noble/curves and @noble/hashes dependencies
- Include CAT token in bunker URL format: bunker://<pubkey>?relay=...&secret=...&cat=...
- Improve PWA manifest with maskable icons

Files modified:
- app/web/src/bunker-service.js: NEW - NIP-46 signer implementation
- app/web/src/cashu-client.js: NEW - Cashu token minting client
- app/web/src/BunkerView.svelte: Add service controls and CAT integration
- app/web/src/App.svelte: Add userPrivkey state and prop
- app/web/package.json: Add noble crypto dependencies
- app/web/public/manifest.json: Add maskable icon variants
- pkg/version/version: Bump to v0.44.0

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-29 12:57:13 +01:00
woikos
fdc4496768 Fix favicon.ico to serve favicon.png from embedded web UI
- Update handleFavicon to serve /favicon.png instead of non-existent orly-favicon.png
- Remove orly-favicon.png from rollup copy targets
- Update release command to include setcap before restart

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-29 11:18:35 +01:00
woikos
635457aed3 Add PWA support with offline-first caching (v0.43.1)
- Add web app manifest for standalone installation
- Add service worker with offline-first caching for static assets
- Add network-first caching with fallback for API calls
- Generate PWA icons (192x192, 512x512) from favicon
- Add Apple PWA meta tags for iOS support
- Update rollup config to copy PWA files to dist

Files modified:
- app/web/public/manifest.json: New PWA manifest
- app/web/public/sw.js: New service worker
- app/web/public/icon-192.png: New PWA icon
- app/web/public/icon-512.png: New PWA icon
- app/web/public/index.html: Add manifest link, meta tags, SW registration
- app/web/rollup.config.js: Add PWA files to copy targets
- pkg/version/version: Bump to v0.43.1

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-29 10:50:59 +01:00
f22bf3f388 Add Neo4j memory tuning config and query result limits (v0.43.0)
- Add Neo4j driver config options for memory management:
  - ORLY_NEO4J_MAX_CONN_POOL (default: 25) - connection pool size
  - ORLY_NEO4J_FETCH_SIZE (default: 1000) - records per batch
  - ORLY_NEO4J_MAX_TX_RETRY_SEC (default: 30) - transaction retry timeout
  - ORLY_NEO4J_QUERY_RESULT_LIMIT (default: 10000) - max results per query
- Apply driver settings when creating Neo4j connection (pool size, fetch size, retry time)
- Enforce query result limit as safety cap on all Cypher queries
- Fix QueryForSerials and QueryForIds to preserve LIMIT clauses
- Add comprehensive memory tuning documentation with sizing guidelines
- Add NIP-46 signer-based authentication for bunker connections
- Update go.mod with new dependencies

Files modified:
- app/config/config.go: Add Neo4j driver tuning config vars
- main.go: Pass new config values to database factory
- pkg/database/factory.go: Add Neo4j tuning fields to DatabaseConfig
- pkg/database/factory_wasm.go: Mirror factory.go changes for WASM
- pkg/neo4j/neo4j.go: Apply driver config, add getter methods
- pkg/neo4j/query-events.go: Enforce query result limit, fix LIMIT preservation
- docs/NEO4J_BACKEND.md: Add Memory Tuning section, update Docker example
- CLAUDE.md: Add Neo4j memory tuning quick reference
- app/handle-req.go: NIP-46 signer authentication
- app/publisher.go: HasActiveNIP46Signer check
- pkg/protocol/publish/publisher.go: NIP46SignerChecker interface
- go.mod: Add dependencies

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-29 02:18:05 +02:00
aef9e24e40 Require CAT for NIP-46 bunker connections (v0.42.0)
- Enforce Cashu access token for kind 24133 events when Cashu is enabled and ACL is active
- Reject NIP-46 events without valid token with "restricted: NIP-46 requires Cashu access token"
- Verify token scope is NIP-46 or RELAY

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-28 19:31:39 +02:00
1b17acb50c Add simplified NIP-46 bunker page with click-to-copy QR codes (v0.41.0)
- Add BunkerView with two QR codes: client (bunker://) and signer (nostr+connect://)
- Add click-to-copy functionality on QR codes with visual "Copied!" feedback
- Add CAT requirement warning (only shows when ACL mode is active)
- Remove WireGuard dependencies from bunker page
- Add /api/bunker/info public endpoint for relay URL, ACL mode, CAT status
- Add Cashu token verification for WebSocket connections
- Add kind permission checking for Cashu token scopes
- Add cashuToken field to Listener for connection-level token tracking

Files modified:
- app/handle-bunker.go: New bunker info endpoint (without WireGuard)
- app/handle-event.go: Add Cashu token kind permission check
- app/handle-websocket.go: Extract and verify Cashu token on WS upgrade
- app/listener.go: Add cashuToken field
- app/server.go: Register bunker info endpoint
- app/web/src/BunkerView.svelte: Complete rewrite with QR codes
- app/web/src/api.js: Add getBunkerInfo() function
- pkg/version/version: Bump to v0.41.0

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-28 18:36:04 +02:00
ea4a54c5e7 Add Cashu blind signature access tokens (NIP-XX draft)
Implements privacy-preserving bearer tokens for relay access control using
Cashu-style blind signatures. Tokens prove whitelist membership without
linking issuance to usage.

Features:
- BDHKE crypto primitives (HashToCurve, Blind, Sign, Unblind, Verify); see the equations below
- Keyset management with weekly rotation
- Token format with kind permissions and scope isolation
- Generic issuer/verifier with pluggable authorization
- HTTP endpoints: POST /cashu/mint, GET /cashu/keysets, GET /cashu/info
- ACL adapter bridging ORLY's access control to Cashu AuthzChecker
- Stateless revocation via ACL re-check on each token use
- Two-token rotation for seamless renewal (max 2 weeks after blacklist)
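
For reference, these primitives follow the standard Cashu blind Diffie-Hellman key exchange, with mint private key k, mint public key K = kG, client secret x, and random blinding factor r:

  Y  = HashToCurve(x)     client maps its secret to a curve point
  B' = Y + rG             Blind: hide Y behind the blinding factor
  C' = kB'                Sign: mint signs the blinded point
  C  = C' - rK = kY       Unblind: client recovers the signature on Y

To verify, the mint checks C == k*HashToCurve(x); because it only ever saw B', it cannot link that check back to the issuance.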

Configuration:
- ORLY_CASHU_ENABLED: Enable Cashu tokens
- ORLY_CASHU_TOKEN_TTL: Token validity (default: 1 week)
- ORLY_CASHU_SCOPES: Allowed scopes (relay, nip46, blossom, api)
- ORLY_CASHU_REAUTHORIZE: Re-check ACL on each verification

Files:
- pkg/cashu/bdhke/: Core blind signature cryptography
- pkg/cashu/keyset/: Keyset management and rotation
- pkg/cashu/token/: Token format with kind permissions
- pkg/cashu/issuer/: Token issuance with authorization
- pkg/cashu/verifier/: Token verification with middleware
- pkg/interfaces/cashu/: AuthzChecker, KeysetStore interfaces
- pkg/bunker/acl_adapter.go: ORLY ACL integration
- app/handle-cashu.go: HTTP endpoints
- docs/NIP-XX-CASHU-ACCESS-TOKENS.md: Full specification

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-28 11:30:11 +02:00
2eb523c161 Add git.mleku.dev remote push to release process (v0.40.1)
- Update release command to push to git.mleku.dev using gitmlekudev SSH key
- Add release process documentation to README.md

Files modified:
- .claude/commands/release.md: Add GIT_SSH_COMMAND push to git.mleku.dev
- README.md: Document release process and SSH key configuration
- pkg/version/version: Bump to v0.40.1

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-27 18:07:15 +02:00
e84949140b Add WireGuard VPN with random /31 subnet isolation (v0.40.0)
- Add embedded WireGuard VPN server using wireguard-go + netstack
- Implement deterministic /31 subnet allocation from seed + sequence
- Use Badger's built-in Sequence for atomic counter allocation
- Add NIP-46 bunker server for remote signing over VPN
- Add revoked key tracking and access audit logging for users
- Add Bunker tab to web UI with WireGuard/bunker QR codes
- Support key regeneration with old keypair archiving

New environment variables:
- ORLY_WG_ENABLED: Enable WireGuard VPN server
- ORLY_WG_PORT: UDP port for WireGuard (default 51820)
- ORLY_WG_ENDPOINT: Public endpoint for WireGuard
- ORLY_WG_NETWORK: Base network for subnet pool (default 10.0.0.0/8)
- ORLY_BUNKER_ENABLED: Enable NIP-46 bunker
- ORLY_BUNKER_PORT: WebSocket port for bunker (default 3335)

Files added:
- pkg/wireguard/: WireGuard server, keygen, subnet pool, errors
- pkg/bunker/: NIP-46 bunker server and session handling
- pkg/database/wireguard.go: Peer storage with audit logging
- app/handle-wireguard.go: API endpoints for config/regenerate/audit
- app/wireguard-helpers.go: Key derivation helpers
- app/web/src/BunkerView.svelte: Bunker UI with QR codes

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-27 16:32:48 +02:00
2aa5c16311 Fix base64 encoding to keep padding for Go URLEncoding (v0.39.3)
- Remove padding stripping from URL-safe base64 conversion
- Go's base64.URLEncoding expects padding characters
- Fix applied to LogView.svelte, BlossomView.svelte, and api.js
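
A runnable Go illustration of why the padding matters: base64.URLEncoding requires '=' padding, and only RawURLEncoding accepts its absence:

```go
package main

import (
	"encoding/base64"
	"fmt"
)

func main() {
	tok := base64.URLEncoding.EncodeToString([]byte("nip98-auth-event"))
	fmt.Println(tok) // URL-safe alphabet, ends in "=="

	stripped := tok[:len(tok)-2] // what v0.39.2 was sending
	if _, err := base64.URLEncoding.DecodeString(stripped); err != nil {
		fmt.Println("decode fails without padding:", err)
	}
	if _, err := base64.RawURLEncoding.DecodeString(stripped); err == nil {
		fmt.Println("only RawURLEncoding accepts the unpadded form")
	}
}
```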

Files modified:
- app/web/src/LogView.svelte: Keep padding in auth header
- app/web/src/BlossomView.svelte: Keep padding in auth header
- app/web/src/api.js: Keep padding in auth header
- pkg/version/version: Bump to v0.39.3

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-25 16:10:47 +01:00
ce54a6886a Use URL-safe base64 for NIP-98 auth encoding (v0.39.2)
- Fix base64 encoding to use URL-safe format (- instead of +, _ instead of /)
- Remove padding characters (=) from base64 output
- Apply fix to LogView, BlossomView, and api.js

Files modified:
- app/web/src/LogView.svelte: URL-safe base64 for NIP-98 auth
- app/web/src/BlossomView.svelte: URL-safe base64 for Blossom auth
- app/web/src/api.js: URL-safe base64 for NIP-98 auth
- pkg/version/version: Bump to v0.39.2

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-25 16:03:46 +01:00
05170db4f7 Fix NIP-98 URL mismatch in log viewer (v0.39.1)
- Include query parameters in signed NIP-98 auth URL
- Auth event URL must match actual request URL including ?offset=&limit=

Files modified:
- app/web/src/LogView.svelte: Fix auth URL to include query params
- pkg/version/version: Bump to v0.39.1

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-25 15:54:30 +01:00
d2122801cd Add nurl and vainstr CLI tools (v0.39.0)
- Add nurl: NIP-98 authenticated HTTP client for testing owner APIs
- Add vainstr: vanity npub generator using fast secp256k1 library
- Update CLAUDE.md with documentation for both tools
- Properly handle secp256k1 library loading via p8k.New()

Files modified:
- cmd/nurl/main.go: New NIP-98 HTTP client tool
- cmd/vainstr/main.go: New vanity npub generator
- CLAUDE.md: Added usage documentation for nurl and vainstr
- go.mod/go.sum: Added go-arg dependency for vainstr
- pkg/version/version: Bump to v0.39.0

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-25 14:45:21 +01:00
678a228fb8 Fix log parser to match lol library format (v0.38.1)
The lol library outputs logs in the format:
  1703500000000000ℹ️ message /path/to/file.go:123

Where:
- Timestamp is Unix microseconds
- Level is an emoji (☠️🚨⚠️ℹ️🔎👻)
- Message is the log text
- File:line is the source location

Updated the parser to correctly parse this format.
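
A Go sketch of a parser for this format; the helper name and return shape are assumptions:

```go
package logbuffer

import (
	"strconv"
	"strings"
)

var levelEmojis = []string{"☠️", "🚨", "⚠️", "ℹ️", "🔎", "👻"}

// parseLolLine splits "<unix-micros><emoji> <message> <file>:<line>".
func parseLolLine(line string) (ts int64, level, msg, loc string, ok bool) {
	// The leading run of digits is the Unix-microseconds timestamp.
	i := 0
	for i < len(line) && line[i] >= '0' && line[i] <= '9' {
		i++
	}
	var err error
	if ts, err = strconv.ParseInt(line[:i], 10, 64); err != nil {
		return 0, "", "", "", false
	}
	rest := line[i:]
	for _, e := range levelEmojis {
		if strings.HasPrefix(rest, e) {
			level = e
			rest = strings.TrimSpace(rest[len(e):])
			break
		}
	}
	if level == "" {
		return 0, "", "", "", false
	}
	// The trailing space-separated token is the file:line location.
	if sp := strings.LastIndexByte(rest, ' '); sp >= 0 {
		msg, loc = rest[:sp], rest[sp+1:]
	} else {
		msg = rest
	}
	return ts, level, msg, loc, true
}
```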

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-25 14:11:29 +01:00
02db40de59 Fix log viewer to properly capture logs (v0.38.0)
- Reinitialize lol loggers after wrapping Writer with BufferedWriter
- The lol.Main logger was initialized in init() with os.Stderr directly,
  bypassing the Writer variable, so we now recreate it with the wrapped Writer
- Log level changes now properly affect both the buffer and syslog output

Files modified:
- app/config/config.go: Reinitialize loggers after BufferedWriter setup
- pkg/logbuffer/writer.go: Remove unused stub function

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-25 14:01:36 +01:00
8e5754e799 Add log viewer for relay owners (v0.37.3)
- Add in-memory ring buffer for log storage (configurable via ORLY_LOG_BUFFER_SIZE)
- Add owner-only log viewer in web UI with infinite scroll
- Add log level selector with runtime level changes
- Add clear logs functionality
- Update Blossom refresh button to use 🔄 emoji style

Files modified:
- pkg/logbuffer/buffer.go: Ring buffer implementation
- pkg/logbuffer/writer.go: Buffered writer hook for log capture
- app/config/config.go: Add ORLY_LOG_BUFFER_SIZE env var
- app/handle-logs.go: Log API handlers
- app/server.go: Register log routes
- app/web/src/LogView.svelte: Log viewer component
- app/web/src/App.svelte: Add logs tab (owner-only)
- app/web/src/BlossomView.svelte: Update refresh button style

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-25 13:49:43 +01:00
e4468d305e Improve Blossom UI responsiveness and layout (v0.37.2)
- Show full npub on screens > 720px, truncated on smaller screens
- Make admin users list extend to full width

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-25 13:20:49 +01:00
d3f2ea0f08 Fix Blossom view layout overflow (v0.37.1)
- Use box-sizing instead of explicit width to fix right edge overflow

Files modified:
- pkg/version/version: Bump to v0.37.1

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-25 13:15:13 +01:00
3f07e47ffb Fix Blossom view right edge overflow 2025-12-25 13:10:44 +01:00
aea8fd31e7 Improve Blossom UI with thumbnails and full-width layout (v0.37.0)
- Make Blossom view use full available width
- Add "Upload new files" label with Select Files button on right
- Show image/video thumbnails in file list (48x48px)
- Add emoji icons for audio (🎵) and documents (📄)
- Show full hash on screens > 720px, truncated on smaller

Files modified:
- app/web/src/BlossomView.svelte: UI layout and thumbnail changes
- app/web/dist/*: Rebuilt bundle

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-25 13:07:25 +01:00
0de4137a10 Fix embedded web UI deployment by tracking dist assets (v0.36.23)
- Track bundle.js, bundle.css, and all dist assets in git
- Previously only index.html was tracked, breaking VPS deployments
- Remove debug logging from BlossomView

Files modified:
- app/web/dist/*: Add all build assets to git tracking
- app/web/src/BlossomView.svelte: Remove debug code

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-25 12:44:14 +01:00
042acd9ed2 Track all dist assets and remove debug logging 2025-12-25 12:38:54 +01:00
dddf1ac568 Add bundle.js to git tracking for embedded web UI 2025-12-25 12:34:48 +01:00
d6f2a0f7cf Add visible debug bar for role detection 2025-12-25 12:32:40 +01:00
7c60b63df6 Add debug logging for Blossom admin role detection (v0.36.22)
- Add console.log to trace currentEffectiveRole value in BlossomView
- Add HTML comment showing role and isAdmin values for debugging

Files modified:
- app/web/src/BlossomView.svelte: Add debug logging for role detection

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-25 12:30:15 +01:00
ab2ac1bf4c Add Blossom admin UI for viewing all users' storage (v0.36.21)
- Add ListAllUserStats() storage method to aggregate user blob stats
- Add handleAdminListUsers() handler for admin endpoint
- Add /blossom/admin/users route requiring admin ACL
- Add Admin button to Blossom UI for admin/owner roles
- Add admin view showing all users with file counts and sizes
- Add user detail view to browse individual user's files
- Fetch user profiles (avatar, name) for admin list display

Files modified:
- pkg/blossom/storage.go: Add UserBlobStats struct and ListAllUserStats()
- pkg/blossom/handlers.go: Add handleAdminListUsers() handler
- pkg/blossom/server.go: Add admin/users route
- app/web/src/BlossomView.svelte: Add admin view state, UI, and styles

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-25 12:04:35 +01:00
96209bd8a5 Fix release deploy to use correct binary path (v0.36.20)
- Update deploy command to build to ~/.local/bin/next.orly.dev
- Service uses this path, not ./orly in project directory

Files modified:
- .claude/commands/release.md: Fixed binary output path for VPS deploy

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-25 11:34:20 +01:00
da6008a00e Improve version link visibility and styling in sidebar (v0.36.19)
- Change version link color from muted to readable text color
- Add background color hover effect matching tab styling
- Replace Gitea icon with mug-and-leaf icon
- Rename CSS class from gitea-icon to version-icon

Files modified:
- app/web/src/Sidebar.svelte: Updated version link styling and icon

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-25 11:19:52 +01:00
b6b31cb93f Add version display to web UI sidebar (v0.36.18)
- Add version footer to sidebar bottom-left with Gitea icon link
- Fetch relay version from NIP-11 relay info document
- Link opens https://next.orly.dev in new tab
- Responsive design hides version text on medium screens

Files modified:
- app/web/src/api.js: Add fetchRelayInfo() function
- app/web/src/Sidebar.svelte: Add version display with Gitea SVG icon
- app/web/src/App.svelte: Add relayVersion state and fetch on init
- pkg/version/version: Bump to v0.36.18

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-25 10:08:50 +01:00
77d153a9c7 Add LRU cache for serial lookups with dynamic scaling (v0.36.17)
- Add generic LRUCache[K, V] implementation using container/list for O(1) ops
- Replace random 50% eviction with proper LRU eviction in SerialCache
- Cache now starts empty and grows on demand up to configured limits
- Use [32]byte keys instead of string([]byte) to avoid allocation overhead
- Single-entry eviction at capacity instead of 50% bulk clearing
- Add comprehensive unit tests and benchmarks for LRUCache
- Benchmarks show ~32-34 ns/op with 0 allocations for Get/Put
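
A minimal sketch of the generic LRU shape described above (the real implementation is pkg/database/lrucache.go; this API is an assumption):

```go
package database

import "container/list"

type lruEntry[K comparable, V any] struct {
	key K
	val V
}

// LRUCache uses container/list for O(1) move-to-front and eviction.
type LRUCache[K comparable, V any] struct {
	capacity int
	order    *list.List
	items    map[K]*list.Element
}

func NewLRUCache[K comparable, V any](capacity int) *LRUCache[K, V] {
	return &LRUCache[K, V]{
		capacity: capacity,
		order:    list.New(),
		items:    make(map[K]*list.Element),
	}
}

func (c *LRUCache[K, V]) Get(k K) (V, bool) {
	if el, ok := c.items[k]; ok {
		c.order.MoveToFront(el) // mark as most recently used
		return el.Value.(lruEntry[K, V]).val, true
	}
	var zero V
	return zero, false
}

func (c *LRUCache[K, V]) Put(k K, v V) {
	if el, ok := c.items[k]; ok {
		el.Value = lruEntry[K, V]{k, v}
		c.order.MoveToFront(el)
		return
	}
	if c.order.Len() >= c.capacity {
		// Evict exactly one least-recently-used entry, replacing the old
		// random 50% bulk clearing.
		back := c.order.Back()
		delete(c.items, back.Value.(lruEntry[K, V]).key)
		c.order.Remove(back)
	}
	c.items[k] = c.order.PushFront(lruEntry[K, V]{k, v})
}
```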

Files modified:
- pkg/database/lrucache.go: New generic LRU cache implementation
- pkg/database/lrucache_test.go: Unit tests and benchmarks
- pkg/database/serial_cache.go: Refactored to use LRUCache

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-25 06:25:21 +01:00
eddd05eabf Add memory optimization improvements for reduced GC pressure (v0.36.16)
- Add buffer pool (pkg/database/bufpool) with SmallPool (64B) and MediumPool (1KB)
  for reusing bytes.Buffer instances on hot paths
- Fix escape analysis in index types (uint40, letter, word) by using fixed-size
  arrays instead of make() calls that escape to heap
- Add handler concurrency limiter (ORLY_MAX_HANDLERS_PER_CONN, default 100) to
  prevent unbounded goroutine growth under WebSocket load
- Add pre-allocation hints to Uint40s.Union/Intersection/Difference methods
- Update compact_event.go, save-event.go, serial_cache.go, and
  get-indexes-for-event.go to use pooled buffers
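
A minimal sketch of the pooled-buffer idea, with the 64B size from this commit; the exact pkg/database/bufpool API is an assumption:

```go
package bufpool

import (
	"bytes"
	"sync"
)

// SmallPool hands out small buffers for hot-path key encoding.
var SmallPool = sync.Pool{
	New: func() any {
		b := new(bytes.Buffer)
		b.Grow(64) // 64B starting capacity
		return b
	},
}

// Get returns a ready-to-use buffer; pair every Get with a Put so buffers
// are actually reused instead of feeding GC pressure.
func Get() *bytes.Buffer { return SmallPool.Get().(*bytes.Buffer) }

func Put(b *bytes.Buffer) {
	b.Reset()
	SmallPool.Put(b)
}
```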

Files modified:
- app/config/config.go: Add MaxHandlersPerConnection config
- app/handle-websocket.go: Initialize handler semaphore
- app/listener.go: Add semaphore acquire/release in messageProcessor
- pkg/database/bufpool/pool.go: New buffer pool package
- pkg/database/compact_event.go: Use buffer pool, fix escape analysis
- pkg/database/get-indexes-for-event.go: Reuse single buffer for all indexes
- pkg/database/indexes/types/letter.go: Fixed array in UnmarshalRead
- pkg/database/indexes/types/uint40.go: Fixed arrays, pre-allocation hints
- pkg/database/indexes/types/word.go: Fixed array in UnmarshalRead
- pkg/database/save-event.go: Use buffer pool for key encoding
- pkg/database/serial_cache.go: Use buffer pool for lookups

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-25 06:03:53 +01:00
24383ef1f4 Decompose handle-event.go into DDD domain services (v0.36.15)
Major refactoring of event handling into clean, testable domain services:

- Add pkg/event/validation: JSON hex validation, signature verification,
  timestamp bounds, NIP-70 protected tag validation
- Add pkg/event/authorization: Policy and ACL authorization decisions,
  auth challenge handling, access level determination
- Add pkg/event/routing: Event router registry with ephemeral and delete
  handlers, kind-based dispatch
- Add pkg/event/processing: Event persistence, delivery to subscribers,
  and post-save hooks (ACL reconfig, sync, relay groups)
- Reduce handle-event.go from 783 to 296 lines (62% reduction)
- Add comprehensive unit tests for all new domain services
- Refactor database tests to use shared TestMain setup
- Fix blossom URL test expectations (missing "/" separator)
- Add go-memory-optimization skill and analysis documentation
- Update DDD_ANALYSIS.md to reflect completed decomposition

Files modified:
- app/handle-event.go: Slim orchestrator using domain services
- app/server.go: Service initialization and interface wrappers
- app/handle-event-types.go: Shared types (OkHelper, result types)
- pkg/event/validation/*: New validation service package
- pkg/event/authorization/*: New authorization service package
- pkg/event/routing/*: New routing service package
- pkg/event/processing/*: New processing service package
- pkg/database/*_test.go: Refactored to shared TestMain
- pkg/blossom/http_test.go: Fixed URL format expectations

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-25 05:30:07 +01:00
3e0a94a053 Use Gitea API directly for release creation (v0.36.14)
- Replace tea CLI with direct Gitea API calls
- Add release ID extraction and validation
- Upload assets via API with proper error handling
- Add release verification step

Files modified:
- .gitea/workflows/go.yml: Direct API release creation
- pkg/version/version: Bump to v0.36.14

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-24 14:32:26 +01:00
b61cb114a2 Add error handling to all workflow steps (v0.36.13)
- Add set -e to all steps to fail fast on errors
- Add debug output for environment variables in checkout step
- Log more context to help diagnose CI failures

Files modified:
- .gitea/workflows/go.yml: Comprehensive error handling
- pkg/version/version: Bump to v0.36.13

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-24 14:28:16 +01:00
8b280b5574 Fix release workflow error handling
- Add set -e to exit on any error
- Validate GITEA_TOKEN secret is set before proceeding
- Verify release binaries exist before upload attempt
- Remove error-suppressing || echo patterns
- Add login verification step

Files modified:
- .gitea/workflows/go.yml: Proper error handling for release creation

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-24 14:23:04 +01:00
c9a03db395 Fix Blossom CORS headers and add root-level upload routes (v0.36.12)
- Add proper CORS headers for Blossom endpoints including X-SHA-256,
  X-Content-Length, X-Content-Type headers required by blossom-client-sdk
- Add root-level Blossom routes (/upload, /media, /mirror, /report, /list/)
  for clients like Jumble that expect Blossom at root
- Export BaseURLKey from pkg/blossom for use by app handlers
- Make blossomRootHandler return URLs with /blossom prefix so blob
  downloads work via the registered /blossom/ route
- Remove Access-Control-Allow-Credentials header (not needed for * origin)
- Add Access-Control-Expose-Headers for X-Reason and other response headers

Files modified:
- app/blossom.go: Add blossomRootHandler, use exported BaseURLKey
- app/server.go: Add CORS handling for blossom paths, register root routes
- pkg/blossom/server.go: Fix CORS headers, export BaseURLKey
- pkg/blossom/utils.go: Minor formatting
- pkg/version/version: Bump to v0.36.12

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-24 11:32:52 +01:00
f326ff0307 Bump version to v0.36.11
- Add fixed-size cryptographic types (EventID, Pubkey, Signature)
- Add EventRef type for stack-allocated event references
- Add IDFixed(), PubFixed(), IDHex(), PubHex() methods to IdPkTs
- Update nostr library to v1.0.11

Files modified:
- pkg/version/version: v0.36.10 -> v0.36.11

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-23 14:49:47 +01:00
06063750e7 Add fixed-size type support for IdPkTs and EventRef
- Update nostr dependency to v1.0.11 with new types package
- Add IDFixed(), PubFixed(), IDHex(), PubHex() methods to IdPkTs
- Add EventRef type: 80-byte stack-allocated event reference
- Add ToEventRef()/ToIdPkTs() conversion methods
- Update tests to use IDHex() instead of hex.Enc(r.Id)

EventRef provides:
- Copy-on-assignment semantics (arrays vs slices)
- Zero heap allocations for event reference passing
- Type-safe fixed-size fields (EventID, Pubkey)

Files modified:
- go.mod, go.sum: Update nostr to v1.0.11
- pkg/interfaces/store/store_interface.go: Add methods and EventRef type
- pkg/interfaces/store/store_interface_test.go: New test file
- pkg/database/binary_tag_filter_test.go: Use IDHex()
- pkg/neo4j/fetch-event_test.go: Use IDHex(), PubHex()

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-23 14:47:50 +01:00
0addc61549 Add unicode normalization for word indexing (v0.36.10)
- Add unicode_normalize.go with mappings for small caps and fraktur
- Map 77 decorative unicode characters to ASCII equivalents:
  - Small caps (25 chars): ᴅᴇᴀᴛʜ → death
  - Fraktur lowercase (26 chars): 𝔡𝔢𝔞𝔱𝔥 → death
  - Fraktur uppercase (26 chars): 𝔇𝔈𝔄𝔗ℌ → death
- Fix broken utf8DecodeRuneInString() that failed on multi-byte UTF-8
- Add migration v7 to rebuild word indexes with normalization
- Add comprehensive unit tests for all character mappings
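
A Go sketch with a few of the 77 mappings (the full tables live in pkg/database/unicode_normalize.go):

```go
package database

// toASCII collapses decorative unicode variants to the ASCII letters used
// by the word index; only a handful of the mappings are shown here.
var toASCII = map[rune]rune{
	'ᴅ': 'd', 'ᴇ': 'e', 'ᴀ': 'a', 'ᴛ': 't', 'ʜ': 'h', // small caps
	'𝔡': 'd', '𝔢': 'e', '𝔞': 'a', '𝔱': 't', '𝔥': 'h', // fraktur lowercase
}

// normalizeRune returns the ASCII equivalent of a decorative rune, or the
// rune unchanged when no mapping exists.
func normalizeRune(r rune) rune {
	if ascii, ok := toASCII[r]; ok {
		return ascii
	}
	return r
}
```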

Files modified:
- pkg/database/unicode_normalize.go: New - character mapping tables
- pkg/database/unicode_normalize_test.go: New - unit tests
- pkg/database/tokenize.go: Integrate normalizeRune(), fix UTF-8 decoder
- pkg/database/migrations.go: Add version 7 migration

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-22 18:53:30 +01:00
11d1b6bfd1 Fix fetch-kinds script for Node.js compatibility (v0.36.9)
- Replace import.meta.dirname with fileURLToPath/dirname for Node < 20.11
- Use static imports instead of dynamic imports for fs/path

Files modified:
- app/web/scripts/fetch-kinds.js: Node.js compatibility fix
- pkg/version/version: v0.36.8 -> v0.36.9

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-21 05:17:48 +01:00
636b55e70b Clean up local Claude Code settings (v0.36.8)
- Remove redundant permission entries from .claude/settings.local.json
- Bump version to v0.36.8

Files modified:
- .claude/settings.local.json: Cleanup old permissions
- pkg/version/version: v0.36.7 -> v0.36.8

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-21 05:13:56 +01:00
7f1785a39a Add prebuild script to fetch event kinds from nostr library
- Add scripts/fetch-kinds.js to fetch kinds.json from central source
- Update package.json with prebuild hook to auto-fetch on build
- Regenerate eventKinds.js from https://git.mleku.dev/mleku/nostr/raw/branch/main/encoders/kind/kinds.json
- Now uses a single source of truth for all 184 event kinds

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-21 05:07:34 +01:00
b4c0c4825c Add secure nsec key generation and encryption for web UI (v0.36.7)
- Add nsec-crypto.js library with Argon2id+AES-GCM encryption
- Generate new nsec keys using secure system entropy
- Encrypt nsec with password (~3 sec Argon2id derivation in Web Worker)
- Add unlock flow for returning users with encrypted keys
- Add deriving modal with live timer during key derivation
- Auto-create default profile for new users with ORLY logo avatar
- Fix NIP-42 auth race condition in websocket-auth.js
- Improve header user profile display (avatar fills height, no truncation)
- Add instant light/dark theme colors in HTML head
- Add background box around username/nip05 in settings drawer
- Update CLAUDE.md with nsec-crypto library documentation

Files modified:
- app/web/src/nsec-crypto.js: New encryption library
- app/web/src/LoginModal.svelte: Key gen, encryption, unlock UI
- app/web/src/nostr.js: Default profile creation
- app/web/src/App.svelte: Header and drawer styling
- app/web/public/index.html: Instant theme colors
- CLAUDE.md: Library documentation

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-18 08:40:16 +01:00
602d563a7c Fix WebSocket auth flow and improve header user profile display
- Fix NIP-42 auth race condition: wait for AUTH challenge before authenticating
- Header user profile: avatar fills vertical space, username vertically centered
- Remove username truncation to show full name/npub
- Standardize header height to 3em across all components

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-18 06:25:38 +01:00
606a3ca8c6 Update release command with web rebuild and improved VPS deploy (v0.36.6)
- Add step to rebuild embedded web UI before committing releases
- Fix VPS deploy command to add Go to PATH for non-login shells
- Remove web rebuild from VPS deploy (assets now committed to repo)
- Use && instead of ; for proper error handling in deploy script

Files modified:
- .claude/commands/release.md: Updated release workflow
- pkg/version/version: Bump to v0.36.6

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-17 14:26:42 +01:00
554358ce81 Add VPS auto-deploy step to release command (v0.36.5)
Some checks failed
Go / build-and-release (push) Has been cancelled
- Add step 10 to /release command that SSHes to VPS (10.0.0.1) and
  runs deployment: git stash, git pull, rebuild web UI, restart service
- Enables one-command releases with automatic production deployment

Files modified:
- .claude/commands/release.md: Add VPS deployment step
- pkg/version/version: Bump to v0.36.5

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-17 14:19:44 +01:00
358c8bc931 Replace manual theme toggle with automatic system preference detection (v0.36.4)
Some checks failed
Go / build-and-release (push) Has been cancelled
- Remove sun/moon theme toggle button from header
- Detect system theme preference using window.matchMedia prefers-color-scheme
- Add event listener to automatically switch theme when OS preference changes
- Remove localStorage-based theme persistence in favor of system preference
- Clean up unused theme-toggle-btn CSS styles

Files modified:
- app/web/src/Header.svelte: Remove toggle button, toggleTheme function, and CSS
- app/web/src/App.svelte: Replace localStorage theme init with matchMedia detection
- pkg/version/version: Bump to v0.36.4

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-17 14:11:15 +01:00
1bbbfb5570 Fix WebSocket protocol detection for HTTP deployments (v0.36.3)
Some checks failed
Go / build-and-release (push) Has been cancelled
- Fix minifier optimization bug that caused ws:// protocol detection to
  always return wss:// by using startsWith('https') instead of === 'https:'
- Update App.svelte to use protocol detection in all 5 WebSocket URL
  construction locations (compose, delete, repost, publish functions)
- Update constants.js DEFAULT_RELAYS to use the same minifier-safe pattern
- Enables web UI to work correctly on HTTP-only relay deployments

Files modified:
- app/web/src/App.svelte: Fix 5 hardcoded wss:// URLs with protocol detection
- app/web/src/constants.js: Fix DEFAULT_RELAYS protocol detection
- pkg/version/version: Bump to v0.36.3

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-17 13:52:28 +01:00
0a3e639fee Add event template generator with 140+ Nostr event kinds (v0.36.2)
Some checks failed
Go / build-and-release (push) Has been cancelled
- Add comprehensive eventKinds.js database with all NIPs event kinds
  including templates, descriptions, NIP references, and type flags
- Create EventTemplateSelector.svelte modal with search functionality
  and category filtering (Social, Messaging, Lists, Marketplace, etc.)
- Update ComposeView with "Generate Template" button and error banner
  for displaying permission-aware publish error messages
- Enhance publishEvent() in App.svelte with detailed error handling
  that explains policy restrictions, permission issues, and provides
  actionable guidance for users
- Add permission pre-check to prevent read-only users from attempting
  to publish events
- Update CLAUDE.md with Web UI event templates documentation
- Create docs/WEB_UI_EVENT_TEMPLATES.md with comprehensive user guide

Files modified:
- app/web/src/eventKinds.js (new)
- app/web/src/EventTemplateSelector.svelte (new)
- app/web/src/ComposeView.svelte
- app/web/src/App.svelte
- docs/WEB_UI_EVENT_TEMPLATES.md (new)
- CLAUDE.md
- pkg/version/version

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-16 10:39:02 +01:00
9d6280eab1 Fix duplicate REPORTS relationships in Neo4j backend (v0.36.1)
Some checks failed
Go / build-and-release (push) Has been cancelled
- Change processReport() to use MERGE instead of CREATE for REPORTS
  relationships, deduplicating by (reporter, reported, report_type)
- Add ON CREATE/ON MATCH clauses to preserve newest event data while
  preventing duplicate relationships
- Add getExistingReportEvent() helper to check for existing reports
- Add markReportEventSuperseded() to track superseded events
- Add v4 migration migrateDeduplicateReports() to clean up existing
  duplicate REPORTS relationships in databases
- Add comprehensive tests: TestReportDeduplication with subtests for
  deduplication, different types, and superseded event tracking
- Update WOT_SPEC.md with REPORTS deduplication behavior and correct
  property names (report_type, created_at, created_by_event)
- Bump version to v0.36.1

Fixes: https://git.nostrdev.com/mleku/next.orly.dev/issues/16

Files modified:
- pkg/neo4j/social-event-processor.go: MERGE-based deduplication
- pkg/neo4j/migrations.go: v4 migration for duplicate cleanup
- pkg/neo4j/social-event-processor_test.go: Deduplication tests
- pkg/neo4j/WOT_SPEC.md: Updated REPORTS documentation
- pkg/version/version: Bump to v0.36.1

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-16 10:13:15 +01:00
96bdf5cba2 Implement Tag-based e/p model for Neo4j backend (v0.36.0)
Some checks failed
Go / build-and-release (push) Has been cancelled
- Add unified Tag-based model where e/p tags create intermediate Tag nodes
  with REFERENCES relationships to Event/NostrUser nodes
- Update save-event.go: addPTagsInBatches and addETagsInBatches now create
  Tag nodes with TAGGED_WITH and REFERENCES relationships
- Update delete.go: CheckForDeleted uses Tag traversal for kind 5 detection
- Add v3 migration in migrations.go to convert existing direct REFERENCES
  and MENTIONS relationships to the new Tag-based model
- Create comprehensive test file tag_model_test.go with 15+ test functions
  covering Tag model, filter queries, migrations, and deletion detection
- Update save-event_test.go to verify new Tag-based relationship patterns
- Update WOT_SPEC.md with Tag-Based References documentation section
- Update CLAUDE.md and README.md with Neo4j Tag-based model documentation
- Bump version to v0.36.0

This change enables #e and #p filter queries to work correctly by storing
all tags (including e/p) through intermediate Tag nodes.

Files modified:
- pkg/neo4j/save-event.go: Tag-based e/p relationship creation
- pkg/neo4j/delete.go: Tag traversal for deletion detection
- pkg/neo4j/migrations.go: v3 migration for existing data
- pkg/neo4j/tag_model_test.go: New comprehensive test file
- pkg/neo4j/save-event_test.go: Updated for new model
- pkg/neo4j/WOT_SPEC.md: Tag-Based References documentation
- pkg/neo4j/README.md: Architecture and example queries
- CLAUDE.md: Repository documentation update
- pkg/version/version: Bump to v0.36.0

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-16 09:22:05 +01:00
516ce9c42c Add issue templates, CI workflows, and decentralization plan
Some checks failed
Go / build-and-release (push) Has been cancelled
- Add Gitea issue templates for bug reports and feature requests with
  structured YAML forms for version, database backend, and log level
- Add GitHub Actions CI workflow for automated testing on push/PR
- Add GitHub Actions release workflow for building multi-platform
  binaries on tag push with SHA256 checksums
- Add CONTRIBUTING.md with development setup, PR guidelines, and
  commit message format documentation
- Add DECENTRALIZE_NOSTR.md expansion plan outlining WireGuard tunnel,
  GUI installer, system tray, and proxy server architecture
- Update allowed commands in Claude settings
- Bump version to v0.35.5

Files modified:
- .gitea/issue_template/: Bug report, feature request, and config YAML
- .github/workflows/: CI and release automation workflows
- CONTRIBUTING.md: New contributor guide
- docs/plans/DECENTRALIZE_NOSTR.md: Expansion architecture plan
- .claude/settings.local.json: Updated allowed commands
- pkg/version/version: Version bump to v0.35.5

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-14 20:50:49 +01:00
ed95947971 Add release command and bump version to v0.35.4
Some checks failed
Go / build-and-release (push) Has been cancelled
- Add .claude/commands/release.md slash command for automated release
  workflow with version bumping, commit creation, tagging, and push
- Supports patch and minor version increments with proper validation
- Includes build verification step before committing
- Update settings.local.json with allowed commands from previous session
- Bump version from v0.35.3 to v0.35.4

Files modified:
- .claude/commands/release.md: New release automation command
- .claude/settings.local.json: Updated allowed commands
- pkg/version/version: Version bump to v0.35.4

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-14 19:50:13 +01:00
b58b91cd14 Add ORLY_POLICY_PATH for custom policy file location
Some checks failed
Go / build-and-release (push) Has been cancelled
- Add ORLY_POLICY_PATH environment variable to configure custom policy
  file path, overriding the default ~/.config/ORLY/policy.json location
- Enforce ABSOLUTE paths only - relay panics on startup if relative path
  is provided, preventing common misconfiguration errors
- Update PolicyManager to store and expose configPath for hot-reload saves
- Add ConfigPath() method to P struct delegating to internal PolicyManager
- Update NewWithManager() signature to accept optional custom path parameter
- Add BUG_REPORTS_AND_FEATURE_REQUEST_PROTOCOL.md with issue submission
  guidelines requiring environment details, reproduction steps, and logs
- Update README.md with system requirements (500MB minimum memory) and
  link to bug report protocol
- Update CLAUDE.md and README.md documentation for new ORLY_POLICY_PATH

Files modified:
- app/config/config.go: Add PolicyPath config field
- pkg/policy/policy.go: Add configPath storage and validation
- app/handle-policy-config.go: Use policyManager.ConfigPath()
- app/main.go: Pass cfg.PolicyPath to NewWithManager
- pkg/policy/*_test.go: Update test calls with new parameter
- BUG_REPORTS_AND_FEATURE_REQUEST_PROTOCOL.md: New file
- README.md, CLAUDE.md: Documentation updates

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-14 18:36:04 +01:00
20293046d3 update nostr library version for scheme handling fix
Some checks failed
Go / build-and-release (push) Has been cancelled
2025-12-14 08:25:12 +01:00
a6d969d7e9 bump version
Some checks failed
Go / build-and-release (push) Has been cancelled
2025-12-14 08:20:41 +01:00
a5dc827e15 Fix NIP-11 fetch URL scheme conversion for non-proxied relays
- Convert wss:// to https:// and ws:// to http:// before fetching NIP-11
  documents, fixing failures for users not using HTTPS upgrade proxies
- The fetchNIP11 function was using WebSocket URLs directly for HTTP
  requests, causing scheme mismatch errors

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-14 08:20:09 +01:00
be81b3320e rate limiter test report 2025-12-12 21:59:00 +01:00
f16ab3077f Interim release: documentation updates and rate limiting improvements
- Add applesauce library reference documentation
- Add rate limiting test report for Badger
- Add memory monitoring for rate limiter (platform-specific implementations)
- Enhance PID-controlled adaptive rate limiting
- Update Neo4j and Badger monitors with improved load metrics
- Add docker-compose configuration
- Update README and configuration options

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-12 08:47:25 +01:00
ba84e12ea9 Add _graph extension support to Neo4j driver
Some checks failed
Go / build-and-release (push) Has been cancelled
- Implement TraverseFollows using Cypher path queries on FOLLOWS relationships
- Implement TraverseFollowers using reverse path traversal
- Implement FindMentions using MENTIONS relationships from p-tags
- Implement TraverseThread using REFERENCES relationships from e-tags
  with bidirectional traversal (inbound replies, outbound parents)
- Add GraphAdapter to bridge Neo4j to graph.GraphDatabase interface
- Add GraphResult type implementing graph.GraphResultI for Neo4j
- Initialize graph executor for Neo4j backend in app/main.go

The implementation uses existing Neo4j schema and relationships created
by SaveEvent() - no schema changes required. The _graph extension now
works transparently with either Badger or Neo4j backends.

Bump version to v0.35.0

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-12 07:07:31 +01:00
a816737cd3 Fix NIP-42 AUTH compliance: always respond with OK message
Some checks failed
Go / build-and-release (push) Has been cancelled
- Ensure AUTH handler always sends OK response per NIP-42 specification,
  including for parse failures (uses zero event ID with error reason)
- Add zeroEventID constant for OK responses when event ID cannot be parsed
- Document critical client guidance: clients MUST wait for OK response
  after AUTH before publishing events requiring authentication
- Update nostr skill and CLAUDE.md with NIP-42 AUTH protocol requirements
  for client developers, emphasizing OK response handling
- Add MAX_THINKING_TOKENS setting to Claude configuration

Files modified:
- app/handle-auth.go: Add OK response for AUTH parse failures
- .claude/skills/nostr/SKILL.md: Document AUTH OK response requirements
- CLAUDE.md: Add NIP-42 AUTH Protocol section for client developers
- .claude/settings.local.json: Add MAX_THINKING_TOKENS setting
- pkg/version/version: Bump to v0.34.7

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-12 06:14:24 +01:00
28b41847a6 Generalize PID controller as reusable library with abstract interfaces
- Create pkg/interfaces/pid for generic PID controller interfaces:
  - ProcessVariable: abstract input (value + timestamp)
  - Source: provides process variable samples
  - Output: controller output with P/I/D components and clamping info
  - Controller: generic PID interface with setpoint/gains
  - Tuning: configuration struct for all PID parameters

- Create pkg/pid as standalone PID controller implementation:
  - Thread-safe with mutex protection
  - Low-pass filtered derivative to suppress high-frequency noise
  - Anti-windup on integral term
  - Configurable output clamping
  - Presets for common use cases: rate limiting, PoW difficulty,
    temperature control, motor speed

- Update pkg/ratelimit to use generic pkg/pid.Controller:
  - Limiter now uses pidif.Controller interface
  - Type assertions for monitoring/debugging state access
  - Maintains backward compatibility with existing API

The generic PID package can now be used for any dynamic adjustment
scenario beyond rate limiting, such as blockchain PoW difficulty
adjustment, temperature regulation, or motor speed control.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-11 22:53:04 +01:00
88b0509ad8 Implement PID-controlled adaptive rate limiting for database operations
- Add LoadMonitor interface in pkg/interfaces/loadmonitor/ for database load metrics
- Implement PIDController with filtered derivative to suppress high-frequency noise
  - Proportional (P): immediate response to current error
  - Integral (I): eliminates steady-state offset with anti-windup clamping
  - Derivative (D): rate-of-change prediction with low-pass filtering
- Create BadgerLoadMonitor tracking L0 tables, compaction score, and cache hit ratio
- Create Neo4jLoadMonitor tracking query semaphore usage and latencies
- Add AdaptiveRateLimiter combining PID controllers for reads and writes
- Configure via environment variables:
  - ORLY_RATE_LIMIT_ENABLED: enable/disable rate limiting
  - ORLY_RATE_LIMIT_TARGET_MB: target memory limit (default 1500MB)
  - ORLY_RATE_LIMIT_*_K[PID]: PID gains for reads/writes
  - ORLY_RATE_LIMIT_MAX_*_MS: maximum delays
  - ORLY_RATE_LIMIT_*_TARGET: setpoints for reads/writes
- Integrate rate limiter into Server struct and lifecycle management
- Add comprehensive unit tests for PID controller behavior

Files modified:
- app/config/config.go: Add rate limiting configuration options
- app/main.go: Initialize and start/stop rate limiter
- app/server.go: Add rateLimiter field to Server struct
- main.go: Create rate limiter with appropriate monitor
- pkg/run/run.go: Pass disabled limiter for test instances
- pkg/interfaces/loadmonitor/: New LoadMonitor interface
- pkg/ratelimit/: New PID controller and limiter implementation

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-11 22:45:11 +01:00
afa3dce1c9 Add PID-controlled adaptive rate limiting plan for relay operations
- Design comprehensive rate limiting for both reads (REQ) and writes (EVENT)
- Implement PID controller with filtered derivative to avoid noise amplification
- Apply low-pass filter before derivative computation (bandpass effect)
- Add anti-windup for integral term to prevent saturation
- Support setpoint-based control (target operating point as memory fraction)
- Separate tuning parameters for read vs write operations
- Monitor database-specific metrics (Badger LSM, Neo4j transactions)
- Combine memory pressure (70%) and load level (30%) into process variable
- Include integration examples for WebSocket handlers and import loop
- Add configuration via environment variables with sensible defaults

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-11 22:17:29 +01:00
cbc502a703 Fix broken submodule and add import memory optimization plan
- Remove broken submodule reference for pkg/protocol/blossom/blossom
  and track blossom spec files as regular files instead
- Add IMPORT_MEMORY_OPTIMIZATION_PLAN.md documenting strategies to
  constrain import memory usage to ≤1.5GB through cache reduction,
  batched syncs, batch transactions, and adaptive rate limiting
- Based on test results: 2.1M events imported in 48min at 736 events/sec
  with peak memory of 6.4GB (target is 1.5GB)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-11 21:36:39 +01:00
95271cbc81 Add Neo4j integration tests and query rate-limiting logic
Some checks failed
Go / build-and-release (push) Has been cancelled
Introduce comprehensive integration tests for Neo4j bug fixes covering batching, event relationships, and processing logic. Add rate-limiting to Neo4j queries using semaphores and retry policies to prevent authentication rate limiting and connection exhaustion, ensuring system stability under load.
2025-12-07 00:07:25 +00:00
8ea91e39d8 Add Claude Code skills for web frontend frameworks
- Add Svelte 3/4 skill covering components, reactivity, stores, lifecycle
- Add Rollup skill covering configuration, plugins, code splitting
- Add nostr-tools skill covering event creation, signing, relay communication
- Add applesauce-core skill covering event stores, reactive queries
- Add applesauce-signers skill covering NIP-07/NIP-46 signing abstractions
- Update .gitignore to include .claude/** directory

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-06 06:56:57 +00:00
d3d2d6e643 Add WasmDB support and enhance query/cache/policy systems
Introduced WasmDB as a new IndexedDB backend for WebAssembly environments, replicating Badger's schema for compatibility. Enhanced the query caching system with optional configuration to improve memory usage efficiency. Improved the policy system with new permissive overrides and clarified read vs write applicability for better flexibility.
2025-12-05 22:05:33 +00:00
8bdf1fcd39 Replace search mode with an enhanced filter system
Some checks failed
Go / build-and-release (push) Has been cancelled
Removes the legacy search mode in favor of an improved event filter system. Introduces debounced filter application, JSON-based filter configuration, and a cleaner UI for filtering events, offering greater flexibility and clarity.
2025-12-05 21:16:19 +00:00
930e3eb1b1 Upgrade dependencies and improve UI handling.
Updated "applesauce-core" and "applesauce-signers" to newer versions in lockfile and package.json. Enhanced UI with better button styling and added logic to hide the "policy" tab if not enabled. Included "bun update" in approved commands.
2025-12-05 19:48:34 +00:00
8ef3114f5c Refactor project to modularize constants and utilities.
Moved reusable constants and helper functions to dedicated modules for improved maintainability and reusability. Improved build configuration to differentiate output directories for development and production. Enhanced server error handling and added safeguards for disabled web UI scenarios.
2025-12-05 19:25:13 +00:00
e9173a6894 Update event import process and improve user feedback
Some checks failed
Go / build-and-release (push) Has been cancelled
Simplified event import to run synchronously, ensuring proper resource handling and accurate feedback. Enhanced frontend with real-time import status messages and error handling. Adjusted migrations to handle events individually, improving reliability and granular progress tracking.
2025-12-05 14:42:22 +00:00
c1bd05fb04 Adjust ACL behavior for "none" mode and make query cache optional
Some checks failed
Go / build-and-release (push) Has been cancelled
This commit allows skipping authentication, permission checks, and certain filters (e.g., deletions, expirations) when the ACL mode is set to "none" (open relay mode). It also introduces a configuration option to disable query caching to reduce memory usage. These changes improve operational flexibility for open relay setups and resource-constrained environments.
2025-12-05 11:25:34 +00:00
6b72f1f2b7 Update privileged event filtering to respect ACL mode
Some checks failed
Go / build-and-release (push) Has been cancelled
Privileged events are now filtered based on ACL mode, allowing open access when ACL is "none." Added tests to verify behavior for different ACL modes, ensuring unauthorized and unauthenticated users can only access privileged events when explicitly permitted. Version bumped to v0.34.2.
2025-12-05 10:02:49 +00:00
83c27a52b0 bump v0.34.1
Some checks failed
Go / build-and-release (push) Has been cancelled
2025-12-04 20:10:27 +00:00
1e9c447fe6 Refactor Neo4j tests and improve tag handling in Cypher
Replaces outdated Neo4j test setup with a robust TestMain, shared test database, and utility functions for test data and migrations. Improves Cypher generation for processing e-tags, p-tags, and other tags to ensure compliance with Neo4j syntax. Added integration test script and updated benchmark reports for Badger backend.
2025-12-04 20:09:24 +00:00
6b98c23606 add first draft graph query implementation
Some checks failed
Go / build-and-release (push) Has been cancelled
2025-12-04 09:28:13 +00:00
8dbc19ee9e Add benchmark results for multiple Nostr relay backends
Introduced comprehensive benchmarks for `next-orly-badger`, `next-orly-neo4j`, and `nostr-rs-relay` backends, covering peak throughput, burst patterns, mixed read/write, query, and concurrent query/store tests. Reports include detailed performance metrics (e.g., events/sec, latency, success rates) and are saved in both text and AsciiDoc formats. An aggregate summary is also generated to check consistency across relay implementations.
2025-12-04 05:43:20 +00:00
290fcbf8f0 remove outdated configuration items for obsolete tail packing optimization
Some checks failed
Go / build-and-release (push) Has been cancelled
2025-12-03 21:24:43 +00:00
54ead81791 merge authors/nostruser in neo4j, add compact pubkey/e/p serial refs
Some checks failed
Go / build-and-release (push) Has been cancelled
2025-12-03 20:49:49 +00:00
746523ea78 Add support for read/write permissive overrides in policies
Some checks failed
Go / build-and-release (push) Has been cancelled
Introduce `read_allow_permissive` and `write_allow_permissive` flags in the global rule to override kind whitelists for read or write operations. These flags allow more flexible policy configurations while maintaining blacklist enforcement and preventing conflicting settings. Updated tests and documentation for clarity.
2025-12-03 20:26:49 +00:00
52189633d9 Unify NostrUser and Author nodes; add migrations support
Some checks failed
Go / build-and-release (push) Has been cancelled
Merged 'Author' nodes into 'NostrUser' for unified identity tracking and social graph representation. Introduced migrations framework to handle schema changes, including retroactive updates for existing relationships and constraints. Updated tests, schema definitions, and documentation to reflect these changes.
2025-12-03 20:02:41 +00:00
59247400dc Remove Dgraph support from the codebase.
Some checks failed
Go / build-and-release (push) Has been cancelled
Dgraph-related functionality, configuration, and benchmarks have been removed from the project. This streamlines the codebase to focus on supported backends, specifically eliminating Dgraph references in favor of Neo4j and other implementations. Version bumped to reflect the changes.
2025-12-03 19:33:37 +00:00
7a27c44bc9 Enhance policy system tests and documentation.
Some checks failed
Go / build-and-release (push) Has been cancelled
Added extensive tests for default-permissive access control, read/write follow whitelists, and privileged-only fields. Updated policy documentation with new configuration examples, access control reference, and logging details.
2025-12-03 19:19:36 +00:00
6bd56a30c9 remove dgraph completely
Some checks failed
Go / build-and-release (push) Has been cancelled
2025-12-03 16:44:49 +00:00
880772cab1 Remove Dgraph, check hex field case, reject if any uppercase
Some checks failed
Go / build-and-release (push) Has been cancelled
2025-12-03 16:26:07 +00:00
1851ba39fa fix type and nil panic errors
Some checks failed
Go / build-and-release (push) Has been cancelled
1. Added Err() method to CollectedResult (pkg/neo4j/neo4j.go:68-72):
   - The resultiter.Neo4jResultIterator interface requires Err() error
   - CollectedResult was missing this method, causing the type assertion to fail
   - Since CollectedResult pre-fetches all records, Err() always returns nil
2. Fixed nil pointer dereference in buildCypherQuery (pkg/neo4j/query-events.go:173):
   - Changed if *f.Limit > 0 to if f.Limit != nil && *f.Limit > 0
   - This prevents a panic when filters don't specify a limit
3. Simplified parseEventsFromResult signature (pkg/neo4j/query-events.go:185):
   - Changed from func (n *N) parseEventsFromResult(result any) to accept *CollectedResult directly
   - This eliminates the runtime type assertion since ExecuteRead already returns *CollectedResult
   - Removed the now-unused resultiter import
2025-12-03 12:59:23 +00:00
de290aeb25 implement wasm/js specific database engine
Some checks failed
Go / build-and-release (push) Has been cancelled
2025-12-03 12:31:40 +00:00
0a61f274d5 implement wasm/js specific database engine 2025-12-03 12:31:25 +00:00
c8fac06f24 lint and correct cypher query code 2025-12-03 10:42:32 +00:00
64c6bd8bdd add cypher to cloud skills 2025-12-03 10:25:31 +00:00
58d75bfc5a add version command 2025-12-03 10:23:39 +00:00
69e2c873d8 Refactor for interface clarity and dependency isolation.
Some checks failed
Go / build-and-release (push) Has been cancelled
Replaced inline interface literals with dedicated, documented interface definitions in `pkg/interfaces/`. Introduced `TimeoutError`, `PolicyChecker`, and `Neo4jResultIterator` interfaces to clarify design, improve maintainability, and resolve potential circular dependencies. Updated config and constant usage rules for consistency. Incremented version to v0.31.11.
2025-12-03 06:04:50 +00:00
6c7d55ff7e Update version and add comprehensive Cypher query tests
Some checks failed
Go / build-and-release (push) Has been cancelled
Bumped version to v0.31.10. Added extensive unit and integration tests for Cypher query generation in Neo4j, including validation of WITH clause fixes and handling optional matches for various event tagging scenarios. Ensures robust handling of references, relationships, and syntactical correctness.
2025-12-02 19:29:52 +00:00
3c17e975df Add foundational resources for elliptic curve operations and distributed systems
Added detailed pseudocode for elliptic curve algorithms covering modular arithmetic, point operations, scalar multiplication, and coordinate conversions. Also introduced a comprehensive knowledge base for distributed systems, including CAP theorem, consistency models, consensus protocols (e.g., Paxos, Raft, PBFT, Nakamoto), and fault-tolerant design principles.
2025-12-02 19:14:39 +00:00
feae79af1a fix bug in cypher code that breaks queries
Some checks failed
Go / build-and-release (push) Has been cancelled
2025-12-02 19:10:50 +00:00
ebef8605eb update CLAUDE.md 2025-12-02 18:35:40 +00:00
c5db0abf73 Add policy configuration reference documentation
Introduce a comprehensive reference guide for ORLY policy configuration. This document outlines policy options, validation rules, access control, and debugging methods, serving as the authoritative resource for policy-related behavior.
2025-12-02 18:12:11 +00:00
016e97925a Refactor database configuration to use centralized struct
Some checks failed
Go / build-and-release (push) Has been cancelled
Replaced individual environment variable access with a unified `DatabaseConfig` struct for all database backends. This centralizes configuration management, reduces redundant code, and ensures all options are documented in `app/config/config.go`. Backward compatibility is maintained with default values and retained constructors.
2025-12-02 13:30:50 +00:00
042b47a4d9 Make policy validation write-only and add corresponding tests
Some checks failed
Go / build-and-release (push) Has been cancelled
Updated policy validation logic to apply only to write operations, ensuring constraints like max_expiry_duration and required tags do not affect read operations. Added corresponding test cases to verify behavior for both valid and invalid inputs. This change improves clarity between write and read validation rules.

bump tag to update binary
2025-12-02 12:41:41 +00:00
952ce0285b Validate ISO-8601 duration format for max_expiry_duration
Some checks failed
Go / build-and-release (push) Has been cancelled
Added validation to reject invalid max_expiry_duration formats in policy configs, ensuring compliance with ISO-8601 standards. Updated the `New` function to fail fast on invalid inputs and included detailed error messages for better clarity. Comprehensive tests were added to verify both valid and invalid scenarios.

bump tag to build binary with update
2025-12-02 11:53:52 +00:00
45856f39b4 Update nostr to v1.0.7 with cross-platform crypto support
Some checks failed
Go / build-and-release (push) Has been cancelled
- Bump git.mleku.dev/mleku/nostr from v1.0.4 to v1.0.7
- Add p256k1.mleku.dev as indirect dependency for pure Go crypto
- Remove local replace directive for CI compatibility
- Add WASM/Mobile build plan documentation
- Bump version to v0.31.5

nostr v1.0.7 changes:
- Split crypto/p8k into platform-specific files
- Linux uses libsecp256k1 via purego (fast)
- Other platforms (darwin, windows, android) use pure Go p256k1
- Enables cross-compilation without CGO or native libraries

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-12-02 11:21:28 +00:00
70944d45df Add extensive tests and improve policy configuration handling
Some checks failed
Go / build-and-release (push) Has been cancelled
Introduce comprehensive tests for policy validation logic, including owner and policy admin scenarios. Update `HandlePolicyConfigUpdate` to differentiate permissions for owners and policy admins, enforcing stricter field restrictions and validation flows.
2025-12-02 07:51:59 +00:00
dd8027478c Update version and enhance owner configuration in README
Some checks failed
Go / build-and-release (push) Has been cancelled
Bump version from v0.31.2 to v0.31.3 and improve the README with clearer instructions for setting relay ownership. Introduced a new recommended method for managing owners via `policy.json`, detailed top-level fields, and refined key rule options for better usability and flexibility in cloud environments.
2025-12-01 21:41:47 +00:00
5631c162d9 Add default security configuration and policy recipes
Introduced default security settings with stricter access control, including policies requiring owner/admin privileges by default. Added multiple pre-configured policy recipes, custom validator support, and extended documentation for security, configurations, and use cases.
2025-12-01 21:39:28 +00:00
2166ff7013 Remove subscription_stability_test.go and improve test variable naming
Some checks failed
Go / build-and-release (push) Has been cancelled
Deleted `subscription_stability_test.go` to clean up unused or redundant code. Updated naming in test files for improved readability, replacing `tag` with `tg` for consistency. Also updated the `github.com/klauspost/compress` dependency to v1.18.2.
2025-12-01 18:47:15 +00:00
869006c4c3 Add comprehensive tests for new policy fields and combinations
Some checks failed
Go / build-and-release (push) Has been cancelled
Introduce tests to validate functionality for new policy fields, including `max_expiry_duration`, `protected_required`, `identifier_regex`, and `follows_whitelist_admins`. Also, cover combinations of new and existing fields to ensure compatibility and precedence rules are correctly enforced.

bump to v0.31.2
2025-12-01 18:21:38 +00:00
2e42caee0e Fix .idea directory not being ignored due to allowlist pattern
- Move .idea/ ignore rule after the !*/ allowlist directive
- Add **/.idea/ pattern to catch nested occurrences
- The !*/ rule was re-including directories, overriding the earlier ignore

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-12-01 14:44:31 +00:00
2026591c42 update schema and add doc on updating schema 2025-11-28 06:27:46 +00:00
fb39cb3347 fix go.mod 2025-11-27 22:07:32 +00:00
48b0b6984c Fix directory spider tag loss: size limits and validation
Some checks failed
Go / build-and-release (push) Has been cancelled
- Increase WebSocket message size limit from 500KB to 10MB to prevent
  truncation of large kind 3 follow list events (8000+ follows)
- Add validation in SaveEvent to reject kind 3 events without p tags
  before storage, preventing malformed events from buggy relays
- Implement CleanupKind3WithoutPTags() to remove existing malformed
  kind 3 events at startup
- Add enhanced logging showing tag count and event ID when rejecting
  invalid kind 3 events for better observability
- Create round-trip test proving binary tag encoding preserves p tags
  correctly through JSON→binary→JSON cycle
- Root cause: 500KB limit was truncating large follow lists during
  WebSocket receive, causing tags to be lost or incomplete
- Three-layer defense: prevent at gate (size), validate (save time),
  and cleanup (startup)

Files modified:
- app/handle-websocket.go: Increase DefaultMaxMessageSize to 10MB
- pkg/database/save-event.go: Add kind 3 validation with logging
- pkg/database/cleanup-kind3.go: New cleanup function
- pkg/database/cleanup-kind3_test.go: Round-trip test
- app/main.go: Invoke cleanup at startup
2025-11-27 13:49:33 +00:00
7fedcd24d3 initial draft of hot reload policy 2025-11-27 06:31:34 +00:00
5fbe131755 bump to v0.31.0 directory spider
Some checks failed
Go / build-and-release (push) Has been cancelled
2025-11-27 00:03:23 +00:00
8757b41dd9 add directory spider 2025-11-27 00:02:14 +00:00
1810c8bef3 Fix binary tag value handling for e/p tags across database layer
Some checks failed
Go / build-and-release (push) Has been cancelled
- Update nostr library to v1.0.3 with improved binary tag support
- Replace tag.Value() calls with tag.ValueHex() to handle both binary and hex formats
- Add NormalizeTagValueForHash() for consistent filter tag normalization
- Update QueryPTagGraph to handle binary-encoded and hex-encoded pubkeys
- Fix tag matching in query-events.go using TagValuesMatchUsingTagMethods
- Add filter_utils.go with tag normalization helper functions
- Update delete operations in process-delete.go and neo4j/delete.go
- Fix ACL follows extraction to use ValueHex() for consistent decoding
- Add binary_tag_filter_test.go for testing tag value normalization
- Bump version to v0.30.3
2025-11-26 21:16:46 +00:00
fad39ec201 Add serve mode, fix binary tags, document CLI tools, improve Docker
Some checks failed
Go / build-and-release (push) Has been cancelled
- Add 'serve' subcommand for ephemeral RAM-based relay at /dev/shm with
  open ACL mode for testing and benchmarking
- Fix e-tag and p-tag decoding to use ValueHex()/ValueBinary() methods
  instead of Value() which returns raw bytes for binary-optimized storage
- Document all command-line tools in readme.adoc (relay-tester, benchmark,
  stresstest, blossomtest, aggregator, convert, FIND, policytest, etc.)
- Switch Docker images from Alpine to Debian for proper libsecp256k1
  Schnorr signature and ECDH support required by Nostr
- Upgrade Docker Go version from 1.21 to 1.25
- Add ramdisk mode (--ramdisk) to benchmark script for eliminating disk
  I/O bottlenecks in performance measurements
- Add docker-compose.ramdisk.yml for tmpfs-based benchmark volumes
- Add test coverage for privileged policy with binary-encoded p-tags
- Fix blossom test to expect 200 OK for anonymous uploads when auth is
  not required (RequireAuth=false with ACL mode 'none')
- Update follows ACL to handle both binary and hex p-tag formats
- Grant owner access to all users in serve mode via None ACL
- Add benchmark reports from multi-relay comparison run
- Update CLAUDE.md with binary tag handling documentation
- Bump version to v0.30.2
2025-11-26 09:52:29 +00:00
f1ddad3318 fix policy logic error caused by interface breach
Some checks failed
Go / build-and-release (push) Has been cancelled
2025-11-25 20:46:46 +00:00
0161825be8 bump for social graph feature for neo4j v0.30.0
Some checks failed
Go / build-and-release (push) Has been cancelled
2025-11-25 18:09:51 +00:00
6412edeabb implement preliminary implementation of graph data model 2025-11-25 18:08:44 +00:00
655a7d9473 update workflow to update web app bundle correctly
Some checks failed
Go / build-and-release (push) Has been cancelled
2025-11-25 15:41:01 +00:00
a03af8e05a self-detection elides self url at startup, handles multiple DNS pointers
Some checks failed
Go / build-and-release (push) Has been cancelled
2025-11-25 13:26:37 +00:00
1522bfab2e add relay self-connection via authed pubkey
Some checks failed
Go / build-and-release (push) Has been cancelled
2025-11-25 12:54:37 +00:00
a457d22baf update go.yml workflow 2025-11-25 12:12:08 +00:00
2b8f359a83 fix workflow to fetch libsecp256k1.so
Some checks failed
Go / build-and-release (push) Has been cancelled
2025-11-25 11:04:04 +00:00
2e865c9616 fix workflow to fetch libsecp256k1.so
Some checks failed
Go / build-and-release (push) Has been cancelled
2025-11-25 06:03:22 +00:00
7fe1154391 fix policy load failure to panic, remove fallback case
Some checks failed
Go / build-and-release (push) Has been cancelled
2025-11-25 05:49:05 +00:00
6e4f24329e fix silent fail of loading policy with panic, and bogus fallback logic 2025-11-24 20:24:51 +00:00
da058c37c0 blossom works fully correctly 2025-11-23 12:32:53 +00:00
1c376e6e8d migrate to new nostr library 2025-11-23 08:15:06 +00:00
86cf8b2e35 unignore files that should be there 2025-11-22 20:12:55 +00:00
ef51382760 optimize e and p tags 2025-11-22 19:40:48 +00:00
5c12c467b7 some more gitea 2025-11-21 22:40:03 +00:00
76e9166a04 fix paths 2025-11-21 21:49:50 +00:00
350b4eb393 gitea 2025-11-21 21:47:28 +00:00
b67f7dc900 fix policy to require auth and ignore all reqs before valid auth is made
Some checks failed
Go / build-and-release (push) Has been cancelled
2025-11-21 20:19:24 +00:00
fb65282702 develop registration ratelimit mechanism 2025-11-21 19:13:18 +00:00
ebe0012863 fix auth, read/write whitelisting and rule precedence, bump to v0.29.13
Some checks failed
Go / build-and-release (push) Has been cancelled
Policy System Verification & Testing (Latest Updates)

Authentication & Security:

Verified policy system enforces authentication for all REQ and EVENT messages when enabled

Confirmed AUTH challenges are sent immediately on connection and repeated until authentication succeeds

Validated unauthenticated requests are silently rejected regardless of other policy rules

Access Control Logic:

Confirmed privileged flag only restricts read access (REQ queries), not write operations (EVENT submissions)

Validated read_allow and privileged use OR logic: users get access if EITHER they're in the allow list OR they're a party to the event (author/p-tag)
This design allows both explicit whitelisting and privacy for involved parties

Kind Whitelisting:

Verified kind filtering properly rejects unlisted events in all scenarios:

Explicit kind.whitelist: Only listed kinds accepted, even if rules exist for other kinds

Implicit whitelist (rules only): Only kinds with defined rules accepted

Blacklist mode: Blacklisted kinds rejected, others require rules

Added comprehensive test suite (10 scenarios) covering edge cases and real-world configurations
2025-11-21 16:13:34 +00:00
917bcf0348 fix policy to ignore all req/events without auth 2025-11-21 15:28:07 +00:00
55add34ac1 add rely-sqlite to benchmark
Some checks failed
Go / build-and-release (push) Has been cancelled
2025-11-20 20:55:37 +00:00
00a6a78a41 fix cache to disregard subscription ids
Some checks failed
Go / build-and-release (push) Has been cancelled
2025-11-20 12:30:17 +00:00
1b279087a9 add vertexes between npubs and events, use for p tags 2025-11-20 09:16:54 +00:00
b7417ab5eb create new index that records the links between pubkeys, events, kinds, and inbound/outbound/author 2025-11-20 05:13:56 +00:00
d4e2f48b7e bump to v0.29.10
Some checks failed
Go / build-and-release (push) Has been cancelled
2025-11-19 13:08:00 +00:00
a79beee179 fixed and unified privilege checks across ACLs
Some checks failed
Go / build-and-release (push) Has been cancelled
2025-11-19 13:05:21 +00:00
f89f41b8c4 full benchmark run 2025-11-19 12:22:04 +00:00
be6cd8c740 fixed error comparing hex/binary in pubkey white/blacklist, complete neo4j and tests
Some checks failed
Go / build-and-release (push) Has been cancelled
2025-11-19 11:25:38 +00:00
8b3d03da2c fix workflow setup
Some checks failed
Go / build-and-release (push) Has been cancelled
2025-11-18 20:56:18 +00:00
5bcb8d7f52 upgrade to gitea workflows
Some checks failed
Go / build (push) Has been cancelled
Go / release (push) Has been cancelled
2025-11-18 20:50:05 +00:00
b3b963ecf5 replace github workflows with gitea 2025-11-18 20:46:54 +00:00
d4fb6cbf49 fix handleevents not prompting auth for event publish with auth-required
Some checks failed
Go / build (push) Has been cancelled
Go / release (push) Has been cancelled
2025-11-18 20:26:36 +00:00
962 changed files with 140145 additions and 79444 deletions

@@ -0,0 +1,64 @@
# Release Command
Review all changes in the repository and create a release with a proper commit message and version tag, then push to the remotes.
## Argument: $ARGUMENTS
The argument should be one of:
- `patch` - Bump the patch version (e.g., v0.35.3 -> v0.35.4)
- `minor` - Bump the minor version and reset patch to 0 (e.g., v0.35.3 -> v0.36.0)
If no argument is provided, default to `patch`.
## Steps to perform:
1. **Read the current version** from `pkg/version/version`
2. **Calculate the new version** based on the argument:
- Parse the current version (format: vMAJOR.MINOR.PATCH)
- If `patch`: increment PATCH by 1
- If `minor`: increment MINOR by 1, set PATCH to 0
3. **Update the version file** (`pkg/version/version`) with the new version
4. **Rebuild the embedded web UI** by running:
```
./scripts/update-embedded-web.sh
```
This ensures the latest web UI changes are included in the release.
5. **Review changes** using `git status` and `git diff --stat HEAD`
6. **Compose a commit message** following this format:
- First line: 72 chars max, imperative mood summary
- Blank line
- Bullet points describing each significant change
- "Files modified:" section listing affected files
- Footer with Claude Code attribution
7. **Stage all changes** with `git add -A`
8. **Create the commit** with the composed message
9. **Create a git tag** with the new version (e.g., `v0.36.0`)
10. **Push to remotes** (origin, gitea, and git.mleku.dev) with tags:
```
git push origin main --tags
git push gitea main --tags
GIT_SSH_COMMAND="ssh -i ~/.ssh/gitmlekudev" git push ssh://mleku@git.mleku.dev:2222/mleku/next.orly.dev.git main --tags
```
11. **Deploy to relay.orly.dev** (ARM64):
Build on the remote host (faster than uploading a cross-compiled binary over slow local bandwidth):
```bash
ssh relay.orly.dev 'cd ~/src/next.orly.dev && git pull origin main && GOPATH=$HOME CGO_ENABLED=0 ~/go/bin/go build -o ~/.local/bin/next.orly.dev && sudo /usr/sbin/setcap cap_net_bind_service=+ep ~/.local/bin/next.orly.dev && sudo systemctl restart orly && ~/.local/bin/next.orly.dev version'
```
Note: setcap must be re-applied after each binary rebuild to allow binding to ports 80/443.
12. **Report completion** with the new version and commit hash
## Important:
- Do NOT push to the github remote (push only to origin, gitea, and git.mleku.dev)
- Always verify the build compiles before committing: `CGO_ENABLED=0 go build -o /dev/null ./...`
- If build fails, fix issues before proceeding

@@ -1,103 +1,10 @@
 {
   "permissions": {
-    "allow": [
-      "Skill(skill-creator)",
-      "Bash(cat:*)",
-      "Bash(python3:*)",
-      "Bash(find:*)",
-      "Skill(nostr-websocket)",
-      "Bash(go build:*)",
-      "Bash(chmod:*)",
-      "Bash(journalctl:*)",
-      "Bash(timeout 5 bash -c 'echo [\"\"REQ\"\",\"\"test123\"\",{\"\"kinds\"\":[1],\"\"limit\"\":1}] | websocat ws://localhost:3334':*)",
-      "Bash(pkill:*)",
-      "Bash(timeout 5 bash:*)",
-      "Bash(md5sum:*)",
-      "Bash(timeout 3 bash -c 'echo [\\\"\"REQ\\\"\",\\\"\"test456\\\"\",{\\\"\"kinds\\\"\":[1],\\\"\"limit\\\"\":10}] | websocat ws://localhost:3334')",
-      "Bash(printf:*)",
-      "Bash(websocat:*)",
-      "Bash(go test:*)",
-      "Bash(timeout 180 go test:*)",
-      "WebFetch(domain:github.com)",
-      "WebFetch(domain:raw.githubusercontent.com)",
-      "Bash(/tmp/find help)",
-      "Bash(/tmp/find verify-name example.com)",
-      "Skill(golang)",
-      "Bash(/tmp/find verify-name Bitcoin.Nostr)",
-      "Bash(/tmp/find generate-key)",
-      "Bash(git ls-tree:*)",
-      "Bash(CGO_ENABLED=0 go build:*)",
-      "Bash(CGO_ENABLED=0 go test:*)",
-      "Bash(app/web/dist/index.html)",
-      "Bash(export CGO_ENABLED=0)",
-      "Bash(bash:*)",
-      "Bash(CGO_ENABLED=0 ORLY_LOG_LEVEL=debug go test:*)",
-      "Bash(/tmp/test-policy-script.sh)",
-      "Bash(docker --version:*)",
-      "Bash(mkdir:*)",
-      "Bash(./test-docker-policy/test-policy.sh:*)",
-      "Bash(docker-compose:*)",
-      "Bash(tee:*)",
-      "Bash(docker logs:*)",
-      "Bash(timeout 5 websocat:*)",
-      "Bash(docker exec:*)",
-      "Bash(TESTSIG=\"bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb\":*)",
-      "Bash(echo:*)",
-      "Bash(git rm:*)",
-      "Bash(git add:*)",
-      "Bash(./test-policy.sh:*)",
-      "Bash(docker rm:*)",
-      "Bash(./scripts/docker-policy/test-policy.sh:*)",
-      "Bash(./policytest:*)",
-      "WebSearch",
-      "WebFetch(domain:blog.scottlogic.com)",
-      "WebFetch(domain:eli.thegreenplace.net)",
-      "WebFetch(domain:learn-wasm.dev)",
-      "Bash(curl:*)",
-      "Bash(./build.sh)",
-      "Bash(./pkg/wasm/shell/run.sh:*)",
-      "Bash(./run.sh echo.wasm)",
-      "Bash(./test.sh)",
-      "Bash(ORLY_PPROF=cpu ORLY_LOG_LEVEL=info ORLY_LISTEN=0.0.0.0 ORLY_PORT=3334 ORLY_ADMINS=npub1fjqqy4a93z5zsjwsfxqhc2764kvykfdyttvldkkkdera8dr78vhsmmleku ORLY_OWNERS=npub1fjqqy4a93z5zsjwsfxqhc2764kvykfdyttvldkkkdera8dr78vhsmmleku ORLY_ACL_MODE=follows ORLY_SPIDER_MODE=follows timeout 120 go run:*)",
-      "Bash(go tool pprof:*)",
-      "Bash(go get:*)",
-      "Bash(go mod tidy:*)",
-      "Bash(go list:*)",
-      "Bash(timeout 180 go build:*)",
-      "Bash(timeout 240 go build:*)",
-      "Bash(timeout 300 go build:*)",
-      "Bash(/tmp/orly:*)",
-      "Bash(./orly version:*)",
-      "Bash(git checkout:*)",
-      "Bash(docker ps:*)",
-      "Bash(./run-profile.sh:*)",
-      "Bash(sudo rm:*)",
-      "Bash(docker compose:*)",
-      "Bash(./run-benchmark.sh:*)",
-      "Bash(docker run:*)",
-      "Bash(docker inspect:*)",
-      "Bash(./run-benchmark-clean.sh:*)",
-      "Bash(cd:*)",
-      "Bash(CGO_ENABLED=0 timeout 180 go build:*)",
-      "Bash(/home/mleku/src/next.orly.dev/pkg/dgraph/dgraph.go)",
-      "Bash(ORLY_LOG_LEVEL=debug timeout 60 ./orly:*)",
-      "Bash(ORLY_LOG_LEVEL=debug timeout 30 ./orly:*)",
-      "Bash(killall:*)",
-      "Bash(kill:*)",
-      "Bash(gh repo list:*)",
-      "Bash(gh auth:*)",
-      "Bash(/tmp/backup-github-repos.sh)",
-      "Bash(./benchmark:*)",
-      "Bash(env)",
-      "Bash(./run-badger-benchmark.sh:*)",
-      "Bash(./update-github-vpn.sh:*)",
-      "Bash(dmesg:*)",
-      "Bash(export:*)",
-      "Bash(timeout 60 /tmp/benchmark-fixed:*)",
-      "Bash(/tmp/test-auth-event.sh)"
-    ],
+    "allow": [],
     "deny": [],
-    "ask": []
+    "ask": [],
+    "additionalDirectories": []
   },
-  "outputStyle": "Explanatory"
+  "outputStyle": "Default",
+  "MAX_THINKING_TOKENS": "16000"
 }

@@ -0,0 +1,634 @@
---
name: applesauce-core
description: This skill should be used when working with applesauce-core library for Nostr client development, including event stores, queries, observables, and client utilities. Provides comprehensive knowledge of applesauce patterns for building reactive Nostr applications.
---
# applesauce-core Skill
This skill provides comprehensive knowledge and patterns for working with applesauce-core, a library of reactive utilities and patterns for building Nostr clients.
## When to Use This Skill
Use this skill when:
- Building reactive Nostr applications
- Managing event stores and caches
- Working with observable patterns for Nostr
- Implementing real-time updates
- Building timeline and feed views
- Managing replaceable events
- Working with profiles and metadata
- Creating efficient Nostr queries
## Core Concepts
### applesauce-core Overview
applesauce-core provides:
- **Event stores** - Reactive event caching and management
- **Queries** - Declarative event querying patterns
- **Observables** - RxJS-based reactive patterns
- **Profile helpers** - Profile metadata management
- **Timeline utilities** - Feed and timeline building
- **NIP helpers** - NIP-specific utilities
### Installation
```bash
npm install applesauce-core
```
### Basic Architecture
applesauce-core is built on reactive principles:
- Events are stored in reactive stores
- Queries return observables that update when new events arrive
- Components subscribe to observables for real-time updates
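A minimal sketch of this flow, using the `EventStore` and `createQuery` APIs covered below; it assumes `subscribe` returns an RxJS-style subscription (as the Observables section suggests), and `render` and `incomingEvent` are placeholders:
```javascript
import { EventStore, createQuery } from 'applesauce-core';

// One reactive store holds every event the app knows about
const eventStore = new EventStore();

// A declarative query: emits the current matches, then emits again
// whenever a matching event is added to the store
const notes = createQuery(eventStore, { kinds: [1], limit: 50 });
const subscription = notes.subscribe(events => render(events)); // render is a placeholder

// Feeding the store notifies every subscribed query automatically
eventStore.add(incomingEvent); // incomingEvent is a placeholder

// Clean up when the view goes away (assumes an RxJS-style subscription)
subscription.unsubscribe();
```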
## Event Store
### Creating an Event Store
```javascript
import { EventStore } from 'applesauce-core';
// Create event store
const eventStore = new EventStore();
// Add events
eventStore.add(event1);
eventStore.add(event2);
// Add multiple events
eventStore.addMany([event1, event2, event3]);
// Check if event exists
const exists = eventStore.has(eventId);
// Get event by ID
const event = eventStore.get(eventId);
// Remove event
eventStore.remove(eventId);
// Clear all events
eventStore.clear();
```
### Event Store Queries
```javascript
// Get all events
const allEvents = eventStore.getAll();
// Get events by filter
const filtered = eventStore.filter({
kinds: [1],
authors: [pubkey]
});
// Get events by author
const authorEvents = eventStore.getByAuthor(pubkey);
// Get events by kind
const textNotes = eventStore.getByKind(1);
```
### Replaceable Events
applesauce-core handles replaceable events automatically:
```javascript
// For kind 0 (profile), only latest is kept
eventStore.add(profileEvent1); // stored
eventStore.add(profileEvent2); // replaces if newer
// For parameterized replaceable (30000-39999)
eventStore.add(articleEvent); // keyed by author + kind + d-tag
// Get replaceable event
const profile = eventStore.getReplaceable(0, pubkey);
const article = eventStore.getReplaceable(30023, pubkey, 'article-slug');
```
## Queries
### Query Patterns
```javascript
import { createQuery } from 'applesauce-core';
// Create a query
const query = createQuery(eventStore, {
kinds: [1],
limit: 50
});
// Subscribe to query results
query.subscribe(events => {
console.log('Current events:', events);
});
// Query updates automatically when new events are added
eventStore.add(newEvent); // Subscribers notified
```
### Timeline Query
```javascript
import { TimelineQuery } from 'applesauce-core';
// Create timeline for user's notes
const timeline = new TimelineQuery(eventStore, {
kinds: [1],
authors: [userPubkey]
});
// Get observable of timeline
const timeline$ = timeline.events$;
// Subscribe
timeline$.subscribe(events => {
// Events sorted by created_at, newest first
renderTimeline(events);
});
```
### Profile Query
```javascript
import { ProfileQuery } from 'applesauce-core';
// Query profile metadata
const profileQuery = new ProfileQuery(eventStore, pubkey);
// Get observable
const profile$ = profileQuery.profile$;
profile$.subscribe(profile => {
if (profile) {
console.log('Name:', profile.name);
console.log('Picture:', profile.picture);
}
});
```
## Observables
### Working with RxJS
applesauce-core uses RxJS observables:
```javascript
import { map, filter, distinctUntilChanged } from 'rxjs/operators';
// Transform query results
const names$ = profileQuery.profile$.pipe(
filter(profile => profile !== null),
map(profile => profile.name),
distinctUntilChanged()
);
// Combine multiple observables
import { combineLatest } from 'rxjs';
const combined$ = combineLatest([
timeline$,
profile$
]).pipe(
map(([events, profile]) => ({
events,
authorName: profile?.name
}))
);
```
### Creating Custom Observables
```javascript
import { Observable } from 'rxjs';
function createEventObservable(store, filter) {
return new Observable(subscriber => {
// Initial emit
subscriber.next(store.filter(filter));
// Subscribe to store changes
const unsubscribe = store.onChange(() => {
subscriber.next(store.filter(filter));
});
// Cleanup
return () => unsubscribe();
});
}
```
## Profile Helpers
### Profile Metadata
```javascript
import { parseProfile, ProfileContent } from 'applesauce-core';
// Parse kind 0 content
const profileEvent = await getProfileEvent(pubkey);
const profile = parseProfile(profileEvent);
// Profile fields
console.log(profile.name); // Display name
console.log(profile.about); // Bio
console.log(profile.picture); // Avatar URL
console.log(profile.banner); // Banner image URL
console.log(profile.nip05); // NIP-05 identifier
console.log(profile.lud16); // Lightning address
console.log(profile.website); // Website URL
```
### Profile Store
```javascript
import { ProfileStore } from 'applesauce-core';
const profileStore = new ProfileStore(eventStore);
// Get profile observable
const profile$ = profileStore.getProfile(pubkey);
// Get multiple profiles
const profiles$ = profileStore.getProfiles([pubkey1, pubkey2]);
// Request profile load (triggers fetch if not cached)
profileStore.requestProfile(pubkey);
```
## Timeline Utilities
### Building Feeds
```javascript
import { Timeline } from 'applesauce-core';
// Create timeline
const timeline = new Timeline(eventStore);
// Add filter
timeline.setFilter({
kinds: [1, 6],
authors: followedPubkeys
});
// Get events observable
const events$ = timeline.events$;
// Load more (pagination)
timeline.loadMore(50);
// Refresh (get latest)
timeline.refresh();
```
### Thread Building
```javascript
import { ThreadBuilder } from 'applesauce-core';
// Build thread from root event
const thread = new ThreadBuilder(eventStore, rootEventId);
// Get thread observable
const thread$ = thread.thread$;
thread$.subscribe(threadData => {
console.log('Root:', threadData.root);
console.log('Replies:', threadData.replies);
console.log('Reply count:', threadData.replyCount);
});
```
### Reactions and Zaps
```javascript
import { ReactionStore, ZapStore } from 'applesauce-core';
// Reactions
const reactionStore = new ReactionStore(eventStore);
const reactions$ = reactionStore.getReactions(eventId);
reactions$.subscribe(reactions => {
console.log('Likes:', reactions.likes);
console.log('Custom:', reactions.custom);
});
// Zaps
const zapStore = new ZapStore(eventStore);
const zaps$ = zapStore.getZaps(eventId);
zaps$.subscribe(zaps => {
console.log('Total sats:', zaps.totalAmount);
console.log('Zap count:', zaps.count);
});
```
## NIP Helpers
### NIP-05 Verification
```javascript
import { verifyNip05 } from 'applesauce-core';
// Verify NIP-05
const result = await verifyNip05('alice@example.com', expectedPubkey);
if (result.valid) {
console.log('NIP-05 verified');
} else {
console.log('Verification failed:', result.error);
}
```
### NIP-10 Reply Parsing
```javascript
import { parseReplyTags } from 'applesauce-core';
// Parse reply structure
const parsed = parseReplyTags(event);
console.log('Root event:', parsed.root);
console.log('Reply to:', parsed.reply);
console.log('Mentions:', parsed.mentions);
```
### NIP-65 Relay Lists
```javascript
import { parseRelayList } from 'applesauce-core';
// Parse relay list event (kind 10002)
const relays = parseRelayList(relayListEvent);
console.log('Read relays:', relays.read);
console.log('Write relays:', relays.write);
```
## Integration with nostr-tools
### Using with SimplePool
```javascript
import { SimplePool } from 'nostr-tools';
import { EventStore, TimelineQuery } from 'applesauce-core';
const pool = new SimplePool();
const eventStore = new EventStore();
// Load events into store
pool.subscribeMany(relays, [filter], {
onevent(event) {
eventStore.add(event);
}
});
// Query store reactively
const timeline$ = new TimelineQuery(eventStore, filter).events$;
```
### Publishing Events
```javascript
import { finalizeEvent } from 'nostr-tools';
// Create event
const event = finalizeEvent({
kind: 1,
content: 'Hello!',
created_at: Math.floor(Date.now() / 1000),
tags: []
}, secretKey);
// Add to local store immediately (optimistic update)
eventStore.add(event);
// Publish to relays
await pool.publish(relays, event);
```
## Svelte Integration
### Using in Svelte Components
```svelte
<script>
import { onMount, onDestroy } from 'svelte';
import { EventStore, TimelineQuery } from 'applesauce-core';
export let pubkey;
const eventStore = new EventStore();
let events = [];
let subscription;
onMount(() => {
const timeline = new TimelineQuery(eventStore, {
kinds: [1],
authors: [pubkey]
});
subscription = timeline.events$.subscribe(e => {
events = e;
});
});
onDestroy(() => {
subscription?.unsubscribe();
});
</script>
{#each events as event}
<div class="event">
{event.content}
</div>
{/each}
```
### Svelte Store Adapter
```javascript
import { readable } from 'svelte/store';
// Convert RxJS observable to Svelte store
function fromObservable(observable, initialValue) {
return readable(initialValue, set => {
const subscription = observable.subscribe(set);
return () => subscription.unsubscribe();
});
}
// Usage
const events$ = timeline.events$;
const eventsStore = fromObservable(events$, []);
```
```svelte
<script>
import { eventsStore } from './stores.js';
</script>
{#each $eventsStore as event}
<div>{event.content}</div>
{/each}
```
## Best Practices
### Store Management
1. **Single store instance** - Use one EventStore per app
2. **Clear stale data** - Implement cache limits
3. **Handle replaceable events** - Let store manage deduplication
4. **Unsubscribe** - Clean up subscriptions on component destroy
### Query Optimization
1. **Use specific filters** - Narrow queries perform better
2. **Limit results** - Use limit for initial loads
3. **Cache queries** - Reuse query instances
4. **Debounce updates** - Throttle rapid changes
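For point 4, a minimal sketch using RxJS `debounceTime` (assuming the `timeline.events$` observable and `renderTimeline` from the earlier examples):
```javascript
import { debounceTime } from 'rxjs/operators';

// Re-render at most once every 250 ms while events stream in
const throttled$ = timeline.events$.pipe(debounceTime(250));
throttled$.subscribe(renderTimeline);
```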
### Memory Management
1. **Limit store size** - Implement LRU or time-based eviction (see the sketch after this list)
2. **Clean up observables** - Unsubscribe when done
3. **Use weak references** - For profile caches
4. **Paginate large feeds** - Don't load everything at once
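A minimal time-based eviction sketch for point 1, assuming the app tracks what it adds and that `eventStore.remove(id)` exists (it is used in the optimistic-update example later in this document):
```javascript
// Track insertion times alongside the store
const insertedAt = new Map();

function trackedAdd(eventStore, event) {
  insertedAt.set(event.id, Date.now());
  eventStore.add(event);
}

// Periodically drop events older than maxAgeMs
function evictOlderThan(eventStore, maxAgeMs) {
  const cutoff = Date.now() - maxAgeMs;
  for (const [id, addedAt] of insertedAt) {
    if (addedAt < cutoff) {
      eventStore.remove(id);
      insertedAt.delete(id);
    }
  }
}

// e.g. evict events older than 10 minutes, checking once a minute
setInterval(() => evictOlderThan(eventStore, 10 * 60 * 1000), 60 * 1000);
```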
### Reactive Patterns
1. **Prefer observables** - Over imperative queries
2. **Use operators** - Transform data with RxJS
3. **Combine streams** - For complex views
4. **Handle loading states** - Show placeholders
## Common Patterns
### Event Deduplication
```javascript
// EventStore handles deduplication automatically
eventStore.add(event1);
eventStore.add(event1); // No duplicate
// For manual deduplication
const seen = new Set();
const unique = events.filter(e => {
if (seen.has(e.id)) return false;
seen.add(e.id);
return true;
});
```
### Optimistic Updates
```javascript
async function publishNote(content) {
// Create event
const event = await createEvent(content);
// Add to store immediately (optimistic)
eventStore.add(event);
try {
// Publish to relays
await pool.publish(relays, event);
} catch (error) {
// Remove on failure
eventStore.remove(event.id);
throw error;
}
}
```
### Loading States
```javascript
import { BehaviorSubject, combineLatest } from 'rxjs';
const loading$ = new BehaviorSubject(true);
const events$ = timeline.events$;
const state$ = combineLatest([loading$, events$]).pipe(
map(([loading, events]) => ({
loading,
events,
empty: !loading && events.length === 0
}))
);
// Start loading
loading$.next(true);
await loadEvents();
loading$.next(false);
```
### Infinite Scroll
```javascript
function createInfiniteScroll(timeline, pageSize = 50) {
let loading = false;
async function loadMore() {
if (loading) return;
loading = true;
await timeline.loadMore(pageSize);
loading = false;
}
function onScroll(event) {
const { scrollTop, scrollHeight, clientHeight } = event.target;
if (scrollHeight - scrollTop <= clientHeight * 1.5) {
loadMore();
}
}
return { loadMore, onScroll };
}
```
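A possible hook-up, assuming a scrollable `#feed` element (the selector is illustrative):
```javascript
const { onScroll } = createInfiniteScroll(timeline);
document.querySelector('#feed').addEventListener('scroll', onScroll);
```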
## Troubleshooting
### Common Issues
**Events not updating:**
- Check subscription is active
- Verify events are being added to store
- Ensure filter matches events
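For the second check, one quick (temporary) diagnostic is to wrap `eventStore.add` and log what arrives:
```javascript
// Instrumentation sketch: log every event added to the store
const origAdd = eventStore.add.bind(eventStore);
eventStore.add = event => {
  console.debug('store.add', event.kind, event.id.slice(0, 8));
  return origAdd(event);
};
```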
**Memory growing:**
- Implement store size limits
- Clean up subscriptions
- Use weak references where appropriate
**Slow queries:**
- Add indexes for common queries
- Use more specific filters
- Implement pagination
**Stale data:**
- Implement refresh mechanisms
- Set up real-time subscriptions
- Handle replaceable event updates
## References
- **applesauce GitHub**: https://github.com/hzrd149/applesauce
- **RxJS Documentation**: https://rxjs.dev
- **nostr-tools**: https://github.com/nbd-wtf/nostr-tools
- **Nostr Protocol**: https://github.com/nostr-protocol/nostr
## Related Skills
- **nostr-tools** - Lower-level Nostr operations
- **applesauce-signers** - Event signing abstractions
- **svelte** - Building reactive UIs
- **nostr** - Nostr protocol fundamentals

View File

@@ -0,0 +1,757 @@
---
name: applesauce-signers
description: This skill should be used when working with the applesauce-signers library for Nostr event signing, including NIP-07 browser extensions, NIP-46 remote signing, and custom signer implementations. Provides comprehensive knowledge of signing patterns and signer abstractions.
---
# applesauce-signers Skill
This skill provides comprehensive knowledge and patterns for working with applesauce-signers, a library that provides signing abstractions for Nostr applications.
## When to Use This Skill
Use this skill when:
- Implementing event signing in Nostr applications
- Integrating with NIP-07 browser extensions
- Working with NIP-46 remote signers
- Building custom signer implementations
- Managing signing sessions
- Handling signing requests and permissions
- Implementing multi-signer support
## Core Concepts
### applesauce-signers Overview
applesauce-signers provides:
- **Signer abstraction** - Unified interface for different signers
- **NIP-07 integration** - Browser extension support
- **NIP-46 support** - Remote signing (Nostr Connect)
- **Simple signers** - Direct key signing
- **Permission handling** - Manage signing requests
- **Observable patterns** - Reactive signing states
### Installation
```bash
npm install applesauce-signers
```
### Signer Interface
All signers implement a common interface:
```typescript
interface Signer {
// Get public key
getPublicKey(): Promise<string>;
// Sign event
signEvent(event: UnsignedEvent): Promise<SignedEvent>;
// Encrypt (NIP-04)
nip04Encrypt?(pubkey: string, plaintext: string): Promise<string>;
nip04Decrypt?(pubkey: string, ciphertext: string): Promise<string>;
// Encrypt (NIP-44)
nip44Encrypt?(pubkey: string, plaintext: string): Promise<string>;
nip44Decrypt?(pubkey: string, ciphertext: string): Promise<string>;
}
```
## Simple Signer
### Using Secret Key
```javascript
import { SimpleSigner } from 'applesauce-signers';
import { generateSecretKey } from 'nostr-tools';
// Create signer with existing key
const signer = new SimpleSigner(secretKey);
// Or generate new key
const newSecretKey = generateSecretKey();
const newSigner = new SimpleSigner(newSecretKey);
// Get public key
const pubkey = await signer.getPublicKey();
// Sign event
const unsignedEvent = {
kind: 1,
content: 'Hello Nostr!',
created_at: Math.floor(Date.now() / 1000),
tags: []
};
const signedEvent = await signer.signEvent(unsignedEvent);
```
### NIP-04 Encryption
```javascript
// Encrypt message
const ciphertext = await signer.nip04Encrypt(
recipientPubkey,
'Secret message'
);
// Decrypt message
const plaintext = await signer.nip04Decrypt(
senderPubkey,
ciphertext
);
```
### NIP-44 Encryption
```javascript
// Encrypt with NIP-44 (preferred)
const ciphertext = await signer.nip44Encrypt(
recipientPubkey,
'Secret message'
);
// Decrypt
const plaintext = await signer.nip44Decrypt(
senderPubkey,
ciphertext
);
```
## NIP-07 Signer
### Browser Extension Integration
```javascript
import { Nip07Signer } from 'applesauce-signers';
// Check if extension is available
if (window.nostr) {
const signer = new Nip07Signer();
// Get public key (may prompt user)
const pubkey = await signer.getPublicKey();
// Sign event (prompts user)
const signedEvent = await signer.signEvent(unsignedEvent);
}
```
### Handling Extension Availability
```javascript
function getAvailableSigner() {
if (typeof window !== 'undefined' && window.nostr) {
return new Nip07Signer();
}
return null;
}
// Wait for extension to load
async function waitForExtension(timeout = 3000) {
const start = Date.now();
while (Date.now() - start < timeout) {
if (window.nostr) {
return new Nip07Signer();
}
await new Promise(r => setTimeout(r, 100));
}
return null;
}
```
### Extension Permissions
```javascript
// Some extensions support granular permissions
const signer = new Nip07Signer();
// Request specific permissions
try {
// This varies by extension
await window.nostr.enable();
} catch (error) {
console.log('User denied permission');
}
```
## NIP-46 Remote Signer
### Nostr Connect
```javascript
import { Nip46Signer } from 'applesauce-signers';
// Create remote signer
const signer = new Nip46Signer({
// Remote signer's pubkey
remotePubkey: signerPubkey,
// Relays for communication
relays: ['wss://relay.example.com'],
// Local secret key for encryption
localSecretKey: localSecretKey,
// Optional: custom client name
clientName: 'My Nostr App'
});
// Connect to remote signer
await signer.connect();
// Get public key
const pubkey = await signer.getPublicKey();
// Sign event
const signedEvent = await signer.signEvent(unsignedEvent);
// Disconnect when done
signer.disconnect();
```
### Connection URL
```javascript
// Parse nostrconnect:// URL
function parseNostrConnectUrl(url) {
const parsed = new URL(url);
return {
    // WHATWG URL parsing puts the pubkey in `host` for scheme://pubkey URLs
    pubkey: parsed.host || parsed.pathname.replace(/^\/\//, ''),
relay: parsed.searchParams.get('relay'),
secret: parsed.searchParams.get('secret')
};
}
// Create signer from URL
const { pubkey, relay, secret } = parseNostrConnectUrl(connectUrl);
const signer = new Nip46Signer({
remotePubkey: pubkey,
relays: [relay],
localSecretKey: generateSecretKey(),
secret: secret
});
```
### Bunker URL
```javascript
// Parse bunker:// URL (NIP-46)
function parseBunkerUrl(url) {
const parsed = new URL(url);
return {
    // As above, WHATWG URL parsing puts the pubkey in `host`
    pubkey: parsed.host || parsed.pathname.replace(/^\/\//, ''),
relays: parsed.searchParams.getAll('relay'),
secret: parsed.searchParams.get('secret')
};
}
const { pubkey, relays, secret } = parseBunkerUrl(bunkerUrl);
```
## Signer Management
### Signer Store
```javascript
import { SignerStore } from 'applesauce-signers';
const signerStore = new SignerStore();
// Set active signer
signerStore.setSigner(signer);
// Get active signer
const activeSigner = signerStore.getSigner();
// Clear signer (logout)
signerStore.clearSigner();
// Observable for signer changes
signerStore.signer$.subscribe(signer => {
if (signer) {
console.log('Logged in');
} else {
console.log('Logged out');
}
});
```
### Multi-Account Support
```javascript
class AccountManager {
constructor() {
this.accounts = new Map();
this.activeAccount = null;
}
addAccount(pubkey, signer) {
this.accounts.set(pubkey, signer);
}
removeAccount(pubkey) {
this.accounts.delete(pubkey);
if (this.activeAccount === pubkey) {
this.activeAccount = null;
}
}
switchAccount(pubkey) {
if (this.accounts.has(pubkey)) {
this.activeAccount = pubkey;
return this.accounts.get(pubkey);
}
return null;
}
getActiveSigner() {
return this.activeAccount
? this.accounts.get(this.activeAccount)
: null;
}
}
```
## Custom Signers
### Implementing a Custom Signer
```javascript
class CustomSigner {
constructor(options) {
this.options = options;
}
async getPublicKey() {
// Return public key
return this.options.pubkey;
}
async signEvent(event) {
// Implement signing logic
// Could call external API, hardware wallet, etc.
const signedEvent = await this.externalSign(event);
return signedEvent;
}
async nip04Encrypt(pubkey, plaintext) {
// Implement NIP-04 encryption
throw new Error('NIP-04 not supported');
}
async nip04Decrypt(pubkey, ciphertext) {
throw new Error('NIP-04 not supported');
}
async nip44Encrypt(pubkey, plaintext) {
// Implement NIP-44 encryption
throw new Error('NIP-44 not supported');
}
async nip44Decrypt(pubkey, ciphertext) {
throw new Error('NIP-44 not supported');
}
}
```
### Hardware Wallet Signer
```javascript
// Sketch only: connectToDevice and the device methods are hypothetical
import { getEventHash } from 'nostr-tools';

class HardwareWalletSigner {
constructor(devicePath) {
this.devicePath = devicePath;
}
async connect() {
// Connect to hardware device
this.device = await connectToDevice(this.devicePath);
}
async getPublicKey() {
// Get public key from device
return await this.device.getNostrPubkey();
}
async signEvent(event) {
// Sign on device (user confirms on device)
const signature = await this.device.signNostrEvent(event);
return {
...event,
pubkey: await this.getPublicKey(),
id: getEventHash(event),
sig: signature
};
}
}
```
### Read-Only Signer
```javascript
class ReadOnlySigner {
constructor(pubkey) {
this.pubkey = pubkey;
}
async getPublicKey() {
return this.pubkey;
}
async signEvent(event) {
throw new Error('Read-only mode: cannot sign events');
}
async nip04Encrypt(pubkey, plaintext) {
throw new Error('Read-only mode: cannot encrypt');
}
async nip04Decrypt(pubkey, ciphertext) {
throw new Error('Read-only mode: cannot decrypt');
}
}
```
## Signing Utilities
### Event Creation Helper
```javascript
async function createAndSignEvent(signer, template) {
const pubkey = await signer.getPublicKey();
const event = {
...template,
pubkey,
created_at: template.created_at || Math.floor(Date.now() / 1000)
};
return await signer.signEvent(event);
}
// Usage
const signedNote = await createAndSignEvent(signer, {
kind: 1,
content: 'Hello!',
tags: []
});
```
### Batch Signing
```javascript
async function signEvents(signer, events) {
const signed = [];
for (const event of events) {
const signedEvent = await signer.signEvent(event);
signed.push(signedEvent);
}
return signed;
}
// With parallelization (if signer supports)
async function signEventsParallel(signer, events) {
return Promise.all(
events.map(event => signer.signEvent(event))
);
}
```
## Svelte Integration
### Signer Context
```svelte
<!-- SignerProvider.svelte -->
<script>
import { setContext } from 'svelte';
import { writable } from 'svelte/store';
const signer = writable(null);
setContext('signer', {
signer,
setSigner: (s) => signer.set(s),
clearSigner: () => signer.set(null)
});
</script>
<slot />
```
```svelte
<!-- Component using signer -->
<script>
import { getContext } from 'svelte';
const { signer } = getContext('signer');
async function publishNote(content) {
if (!$signer) {
alert('Please login first');
return;
}
const event = await $signer.signEvent({
kind: 1,
content,
created_at: Math.floor(Date.now() / 1000),
tags: []
});
// Publish event...
}
</script>
```
### Login Component
```svelte
<script>
import { getContext } from 'svelte';
import { Nip07Signer, SimpleSigner } from 'applesauce-signers';
import { nip19 } from 'nostr-tools';
const { setSigner, clearSigner, signer } = getContext('signer');
let nsec = '';
async function loginWithExtension() {
if (window.nostr) {
setSigner(new Nip07Signer());
} else {
alert('No extension found');
}
}
function loginWithNsec() {
try {
const decoded = nip19.decode(nsec);
if (decoded.type === 'nsec') {
setSigner(new SimpleSigner(decoded.data));
nsec = '';
}
} catch (e) {
alert('Invalid nsec');
}
}
function logout() {
clearSigner();
}
</script>
{#if $signer}
<button on:click={logout}>Logout</button>
{:else}
<button on:click={loginWithExtension}>
Login with Extension
</button>
<div>
<input
type="password"
bind:value={nsec}
placeholder="nsec..."
/>
<button on:click={loginWithNsec}>
Login with Key
</button>
</div>
{/if}
```
## Best Practices
### Security
1. **Never store secret keys in plain text** - Use secure storage
2. **Prefer NIP-07** - Let extensions manage keys
3. **Clear keys on logout** - Don't leave in memory
4. **Validate before signing** - Check event content
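For point 4, a minimal validation sketch (the specific checks are illustrative, not a fixed rule set):
```javascript
// Reject obviously malformed or dangerous events before signing
function validateBeforeSigning(event) {
  if (typeof event.kind !== 'number') throw new Error('Missing kind');
  if (typeof event.content !== 'string') throw new Error('Missing content');
  if (!Array.isArray(event.tags)) throw new Error('Tags must be an array');
  // Kind 5 is a deletion request (NIP-09); require explicit confirmation upstream
  if (event.kind === 5) throw new Error('Refusing to sign deletion without confirmation');
  return event;
}
```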
### User Experience
1. **Show signing status** - Loading states (see the sketch after this list)
2. **Handle rejections gracefully** - User may cancel
3. **Provide fallbacks** - Multiple login options
4. **Remember preferences** - Store signer type
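A minimal sketch for point 1: expose a flag the UI can bind a spinner to while signing is in flight:
```javascript
let signing = false;

async function signWithStatus(signer, event) {
  signing = true; // UI shows a spinner while true
  try {
    return await signer.signEvent(event);
  } finally {
    signing = false;
  }
}
```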
### Error Handling
```javascript
async function safeSign(signer, event) {
try {
return await signer.signEvent(event);
} catch (error) {
if (error.message.includes('rejected')) {
console.log('User rejected signing');
return null;
}
if (error.message.includes('timeout')) {
console.log('Signing timed out');
return null;
}
throw error;
}
}
```
### Permission Checking
```javascript
function hasEncryptionSupport(signer) {
return typeof signer.nip04Encrypt === 'function' ||
typeof signer.nip44Encrypt === 'function';
}
function getEncryptionMethod(signer) {
// Prefer NIP-44
if (typeof signer.nip44Encrypt === 'function') {
return 'nip44';
}
if (typeof signer.nip04Encrypt === 'function') {
return 'nip04';
}
return null;
}
```
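A possible usage of `getEncryptionMethod`, preferring NIP-44 when available (`recipientPubkey` and `text` are assumed to be in scope):
```javascript
async function encrypt(signer, recipientPubkey, text) {
  const method = getEncryptionMethod(signer);
  if (method === 'nip44') return signer.nip44Encrypt(recipientPubkey, text);
  if (method === 'nip04') return signer.nip04Encrypt(recipientPubkey, text);
  throw new Error('Signer does not support encryption');
}
```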
## Common Patterns
### Signer Detection
```javascript
async function detectSigners() {
const available = [];
// Check NIP-07
if (typeof window !== 'undefined' && window.nostr) {
available.push({
type: 'nip07',
name: 'Browser Extension',
create: () => new Nip07Signer()
});
}
  // Check stored credentials (note: persisting an nsec in localStorage
  // is insecure; shown only for illustration)
  const storedKey = localStorage.getItem('nsec');
  if (storedKey) {
    available.push({
      type: 'stored',
      name: 'Saved Key',
      // Decode the bech32 nsec to raw key bytes (nip19 from nostr-tools)
      create: () => new SimpleSigner(nip19.decode(storedKey).data)
});
}
return available;
}
```
### Auto-Reconnect for NIP-46
```javascript
class ReconnectingNip46Signer {
constructor(options) {
this.options = options;
this.signer = null;
}
async connect() {
this.signer = new Nip46Signer(this.options);
await this.signer.connect();
}
async signEvent(event) {
try {
return await this.signer.signEvent(event);
} catch (error) {
if (error.message.includes('disconnected')) {
await this.connect();
return await this.signer.signEvent(event);
}
throw error;
}
}
}
```
### Signer Type Persistence
```javascript
const SIGNER_KEY = 'nostr_signer_type';
function saveSigner(type, data) {
localStorage.setItem(SIGNER_KEY, JSON.stringify({ type, data }));
}
async function restoreSigner() {
const saved = localStorage.getItem(SIGNER_KEY);
if (!saved) return null;
const { type, data } = JSON.parse(saved);
switch (type) {
case 'nip07':
if (window.nostr) {
return new Nip07Signer();
}
break;
case 'simple':
// Don't store secret keys!
break;
    case 'nip46': {
      // Braces give the const block scope within the switch
      const signer = new Nip46Signer(data);
      await signer.connect();
      return signer;
    }
}
return null;
}
```
## Troubleshooting
### Common Issues
**Extension not detected:**
- Wait for page load
- Check window.nostr exists
- Verify extension is enabled
**Signing rejected:**
- User cancelled in extension
- Handle gracefully with error message
**NIP-46 connection fails:**
- Check relay is accessible
- Verify remote signer is online
- Check secret matches
**Encryption not supported:**
- Check signer has encrypt methods
- Fall back to alternative method
- Show user appropriate error
## References
- **applesauce GitHub**: https://github.com/hzrd149/applesauce
- **NIP-07 Specification**: https://github.com/nostr-protocol/nips/blob/master/07.md
- **NIP-46 Specification**: https://github.com/nostr-protocol/nips/blob/master/46.md
- **nostr-tools**: https://github.com/nbd-wtf/nostr-tools
## Related Skills
- **nostr-tools** - Event creation and signing utilities
- **applesauce-core** - Event stores and queries
- **nostr** - Nostr protocol fundamentals
- **svelte** - Building Nostr UIs

View File

@@ -0,0 +1,395 @@
---
name: cypher
description: This skill should be used when writing, debugging, or discussing Neo4j Cypher queries. Provides comprehensive knowledge of Cypher syntax, query patterns, performance optimization, and common mistakes. Particularly useful for translating between domain models and graph queries.
---
# Neo4j Cypher Query Language
## Purpose
This skill provides expert-level guidance for writing Neo4j Cypher queries, including syntax, patterns, performance optimization, and common pitfalls. It is particularly tuned for the patterns used in this ORLY Nostr relay codebase.
## When to Use
Activate this skill when:
- Writing Cypher queries for Neo4j
- Debugging Cypher syntax errors
- Optimizing query performance
- Translating Nostr filter queries to Cypher
- Working with graph relationships and traversals
- Creating or modifying schema (indexes, constraints)
## Core Cypher Syntax
### Clause Order (CRITICAL)
Cypher requires clauses in a specific order. Violating this causes syntax errors:
```cypher
// CORRECT order of clauses
MATCH (n:Label) // 1. Pattern matching
WHERE n.prop = value // 2. Filtering
WITH n, count(*) AS cnt // 3. Intermediate results (resets scope)
OPTIONAL MATCH (n)-[r]-() // 4. Optional patterns
CREATE (m:NewNode) // 5. Node/relationship creation
SET n.prop = value // 6. Property updates
DELETE r // 7. Deletions
RETURN n.prop AS result // 8. Return clause
ORDER BY result DESC // 9. Ordering
SKIP 10 LIMIT 20 // 10. Pagination
```
### The WITH Clause (CRITICAL)
The `WITH` clause is required to transition between certain operations:
**Rule: Cannot use MATCH after CREATE without WITH**
```cypher
// WRONG - MATCH after CREATE without WITH
CREATE (e:Event {id: $id})
MATCH (ref:Event {id: $refId}) // ERROR!
CREATE (e)-[:REFERENCES]->(ref)
// CORRECT - Use WITH to carry variables forward
CREATE (e:Event {id: $id})
WITH e
MATCH (ref:Event {id: $refId})
CREATE (e)-[:REFERENCES]->(ref)
```
**Rule: WITH resets the scope**
Variables not included in WITH are no longer accessible:
```cypher
// WRONG - 'a' is lost after WITH
MATCH (a:Author), (e:Event)
WITH e
WHERE a.pubkey = $pubkey // ERROR: 'a' not defined
// CORRECT - Include all needed variables
MATCH (a:Author), (e:Event)
WITH a, e
WHERE a.pubkey = $pubkey
```
### Node and Relationship Patterns
```cypher
// Nodes
(n) // Anonymous node
(n:Label) // Labeled node
(n:Label {prop: value}) // Node with properties
(n:Label:OtherLabel) // Multiple labels
// Relationships
-[r]-> // Directed, anonymous
-[r:TYPE]-> // Typed relationship
-[r:TYPE {prop: value}]-> // With properties
-[r:TYPE|OTHER]-> // Multiple types (OR)
-[*1..3]-> // Variable length (1 to 3 hops)
-[*]-> // Any number of hops
```
### MERGE vs CREATE
**CREATE**: Always creates new nodes/relationships (may create duplicates)
```cypher
CREATE (n:Event {id: $id}) // Creates even if id exists
```
**MERGE**: Finds or creates (idempotent)
```cypher
MERGE (n:Event {id: $id}) // Finds existing or creates new
ON CREATE SET n.created = timestamp()
ON MATCH SET n.accessed = timestamp()
```
**Best Practice**: Use MERGE for reference nodes, CREATE for unique events
```cypher
// Reference nodes - use MERGE (idempotent)
MERGE (author:Author {pubkey: $pubkey})
// Unique events - use CREATE (after checking existence)
CREATE (e:Event {id: $eventId, ...})
```
### OPTIONAL MATCH
Returns NULL for non-matching patterns (like LEFT JOIN):
```cypher
// Find events, with or without tags
MATCH (e:Event)
OPTIONAL MATCH (e)-[:TAGGED_WITH]->(t:Tag)
RETURN e.id, collect(t.value) AS tags
```
### Conditional Creation with FOREACH
To conditionally create relationships:
```cypher
// FOREACH trick for conditional operations
OPTIONAL MATCH (ref:Event {id: $refId})
FOREACH (ignoreMe IN CASE WHEN ref IS NOT NULL THEN [1] ELSE [] END |
CREATE (e)-[:REFERENCES]->(ref)
)
```
### Aggregation Functions
```cypher
count(*) // Count all rows
count(n) // Count non-null values
count(DISTINCT n) // Count unique values
collect(n) // Collect into list
collect(DISTINCT n) // Collect unique values
sum(n.value) // Sum values
avg(n.value) // Average
min(n.value), max(n.value) // Min/max
```
### String Operations
```cypher
// String matching
WHERE n.name STARTS WITH 'prefix'
WHERE n.name ENDS WITH 'suffix'
WHERE n.name CONTAINS 'substring'
WHERE n.name =~ 'regex.*pattern' // Regex
// String functions
toLower(str), toUpper(str)
trim(str), ltrim(str), rtrim(str)
substring(str, start, length)
replace(str, search, replacement)
```
### List Operations
```cypher
// IN clause
WHERE n.kind IN [1, 7, 30023]
WHERE n.pubkey IN $pubkeyList
// List comprehension
[x IN list WHERE x > 0 | x * 2]
// UNWIND - expand list into rows
UNWIND $pubkeys AS pubkey
MERGE (u:User {pubkey: pubkey})
```
### Parameters
Always use parameters for values (security + performance):
```cypher
// CORRECT - parameterized
MATCH (e:Event {id: $eventId})
WHERE e.kind IN $kinds
// WRONG - string interpolation (SQL injection risk!)
MATCH (e:Event {id: '" + eventId + "'})
```
## Schema Management
### Constraints
```cypher
// Uniqueness constraint (also creates index)
CREATE CONSTRAINT event_id_unique IF NOT EXISTS
FOR (e:Event) REQUIRE e.id IS UNIQUE
// Composite uniqueness
CREATE CONSTRAINT card_unique IF NOT EXISTS
FOR (c:Card) REQUIRE (c.customer_id, c.observee_pubkey) IS UNIQUE
// Drop constraint
DROP CONSTRAINT event_id_unique IF EXISTS
```
### Indexes
```cypher
// Single property index
CREATE INDEX event_kind IF NOT EXISTS FOR (e:Event) ON (e.kind)
// Composite index
CREATE INDEX event_kind_created IF NOT EXISTS
FOR (e:Event) ON (e.kind, e.created_at)
// Drop index
DROP INDEX event_kind IF EXISTS
```
## Common Query Patterns
### Find with Filter
```cypher
// Multiple conditions with OR
MATCH (e:Event)
WHERE e.kind IN $kinds
AND (e.id = $id1 OR e.id = $id2)
AND e.created_at >= $since
RETURN e
ORDER BY e.created_at DESC
LIMIT $limit
```
### Graph Traversal
```cypher
// Find events by author
MATCH (e:Event)-[:AUTHORED_BY]->(a:Author {pubkey: $pubkey})
RETURN e
// Find followers of a user
MATCH (follower:NostrUser)-[:FOLLOWS]->(user:NostrUser {pubkey: $pubkey})
RETURN follower.pubkey
// Find mutual follows (friends)
MATCH (a:NostrUser {pubkey: $pubkeyA})-[:FOLLOWS]->(b:NostrUser)
WHERE (b)-[:FOLLOWS]->(a)
RETURN b.pubkey AS mutual_friend
```
### Upsert Pattern
```cypher
MERGE (n:Node {key: $key})
ON CREATE SET
n.created_at = timestamp(),
n.value = $value
ON MATCH SET
n.updated_at = timestamp(),
n.value = $value
RETURN n
```
### Batch Processing with UNWIND
```cypher
// Create multiple nodes from list
UNWIND $items AS item
CREATE (n:Node {id: item.id, value: item.value})
// Create relationships from list
UNWIND $follows AS followed_pubkey
MERGE (followed:NostrUser {pubkey: followed_pubkey})
MERGE (author)-[:FOLLOWS]->(followed)
```
## Performance Optimization
### Index Usage
1. **Start with indexed properties** - Begin MATCH with most selective indexed field
2. **Use composite indexes** - For queries filtering on multiple properties
3. **Profile queries** - Use `PROFILE` prefix to see execution plan
```cypher
PROFILE MATCH (e:Event {kind: 1})
WHERE e.created_at > $since
RETURN e LIMIT 100
```
### Query Optimization Tips
1. **Filter early** - Put WHERE conditions close to MATCH
2. **Limit early** - Use LIMIT as early as possible
3. **Avoid Cartesian products** - Connect patterns or use WITH
4. **Use parameters** - Enables query plan caching
```cypher
// GOOD - Filter and limit early
MATCH (e:Event)
WHERE e.kind IN $kinds AND e.created_at >= $since
WITH e ORDER BY e.created_at DESC LIMIT 100
OPTIONAL MATCH (e)-[:TAGGED_WITH]->(t:Tag)
RETURN e, collect(t)
// BAD - Late filtering
MATCH (e:Event), (t:Tag)
WHERE e.kind IN $kinds
RETURN e, t LIMIT 100
```
## Reference Materials
For detailed information, consult the reference files:
- **references/syntax-reference.md** - Complete Cypher syntax guide with all clause types, operators, and functions
- **references/common-patterns.md** - Project-specific patterns for ORLY Nostr relay including event storage, tag queries, and social graph traversals
- **references/common-mistakes.md** - Frequent Cypher errors and how to avoid them
## ORLY-Specific Patterns
This codebase uses these specific Cypher patterns:
### Event Storage Pattern
```cypher
// Create event with author relationship
MERGE (a:Author {pubkey: $pubkey})
CREATE (e:Event {
id: $eventId,
serial: $serial,
kind: $kind,
created_at: $createdAt,
content: $content,
sig: $sig,
pubkey: $pubkey,
tags: $tags
})
CREATE (e)-[:AUTHORED_BY]->(a)
```
### Tag Query Pattern
```cypher
// Query events by tag (Nostr #<tag> filter)
MATCH (e:Event)-[:TAGGED_WITH]->(t:Tag {type: $tagType})
WHERE t.value IN $tagValues
RETURN e
ORDER BY e.created_at DESC
LIMIT $limit
```
### Social Graph Pattern
```cypher
// Process contact list with diff-based updates
// Mark old as superseded
OPTIONAL MATCH (old:ProcessedSocialEvent {event_id: $old_event_id})
SET old.superseded_by = $new_event_id
// Create tracking node
CREATE (new:ProcessedSocialEvent {
event_id: $new_event_id,
event_kind: 3,
pubkey: $author_pubkey,
created_at: $created_at,
processed_at: timestamp()
})
// Update relationships
MERGE (author:NostrUser {pubkey: $author_pubkey})
WITH author
UNWIND $added_follows AS followed_pubkey
MERGE (followed:NostrUser {pubkey: followed_pubkey})
MERGE (author)-[:FOLLOWS]->(followed)
```
## Official Resources
- Neo4j Cypher Manual: https://neo4j.com/docs/cypher-manual/current/
- Cypher Cheat Sheet: https://neo4j.com/docs/cypher-cheat-sheet/current/
- Query Tuning: https://neo4j.com/docs/cypher-manual/current/query-tuning/

View File

@@ -0,0 +1,381 @@
# Common Cypher Mistakes and How to Avoid Them
## Clause Ordering Errors
### MATCH After CREATE Without WITH
**Error**: `Invalid input 'MATCH': expected ... WITH`
```cypher
// WRONG
CREATE (e:Event {id: $id})
MATCH (ref:Event {id: $refId}) // ERROR!
CREATE (e)-[:REFERENCES]->(ref)
// CORRECT - Use WITH to transition
CREATE (e:Event {id: $id})
WITH e
MATCH (ref:Event {id: $refId})
CREATE (e)-[:REFERENCES]->(ref)
```
**Rule**: After CREATE, you must use WITH before MATCH.
### WHERE After WITH Without Carrying Variables
**Error**: `Variable 'x' not defined`
```cypher
// WRONG - 'a' is lost
MATCH (a:Author), (e:Event)
WITH e
WHERE a.pubkey = $pubkey // ERROR: 'a' not in scope
// CORRECT - Include all needed variables
MATCH (a:Author), (e:Event)
WITH a, e
WHERE a.pubkey = $pubkey
```
**Rule**: WITH resets the scope. Include all variables you need.
### ORDER BY Without Aliased Return
**Error**: `Invalid input 'ORDER': expected ... AS`
```cypher
// WRONG in some contexts
RETURN n.name
ORDER BY n.name
// SAFER - Use alias
RETURN n.name AS name
ORDER BY name
```
## MERGE Mistakes
### MERGE on Complex Pattern Creates Duplicates
```cypher
// DANGEROUS - May create duplicate nodes
MERGE (a:Person {name: 'Alice'})-[:KNOWS]->(b:Person {name: 'Bob'})
// CORRECT - MERGE nodes separately first
MERGE (a:Person {name: 'Alice'})
MERGE (b:Person {name: 'Bob'})
MERGE (a)-[:KNOWS]->(b)
```
**Rule**: MERGE simple patterns, not complex ones.
### MERGE Without Unique Property
```cypher
// DANGEROUS - Will keep creating nodes
MERGE (p:Person) // No unique identifier!
SET p.name = 'Alice'
// CORRECT - Provide unique key
MERGE (p:Person {email: $email})
SET p.name = 'Alice'
```
**Rule**: MERGE must have properties that uniquely identify the node.
### Missing ON CREATE/ON MATCH
```cypher
// LOSES context of whether new or existing
MERGE (p:Person {id: $id})
SET p.updated_at = timestamp() // Always runs
// BETTER - Handle each case
MERGE (p:Person {id: $id})
ON CREATE SET p.created_at = timestamp()
ON MATCH SET p.updated_at = timestamp()
```
## NULL Handling Errors
### Comparing with NULL
```cypher
// WRONG - NULL = NULL is NULL, not true
WHERE n.email = null // Never matches!
// CORRECT
WHERE n.email IS NULL
WHERE n.email IS NOT NULL
```
### NULL in Aggregations
```cypher
// count(NULL) returns 0, collect(NULL) includes NULL
MATCH (n:Person)
OPTIONAL MATCH (n)-[:BOUGHT]->(p:Product)
RETURN n.name, count(p) // count ignores NULL
```
### NULL Propagation in Expressions
```cypher
// Any operation with NULL returns NULL
WHERE n.age + 1 > 21 // If n.age is NULL, whole expression is NULL (falsy)
// Handle with coalesce
WHERE coalesce(n.age, 0) + 1 > 21
```
## List and IN Clause Errors
### Empty List in IN
```cypher
// An empty list never matches
WHERE n.kind IN [] // Always false
// Check for empty list in application code before query
// Or use CASE:
WHERE CASE WHEN size($kinds) > 0 THEN n.kind IN $kinds ELSE true END
```
### IN with NULL Values
```cypher
// NULL in the list causes issues
WHERE n.id IN [1, NULL, 3] // NULL is never equal to anything
// Filter NULLs in application code
```
## Relationship Pattern Errors
### Forgetting Direction
```cypher
// WRONG - Creates both directions
MATCH (a)-[:FOLLOWS]-(b) // Undirected!
// CORRECT - Specify direction
MATCH (a)-[:FOLLOWS]->(b) // a follows b
MATCH (a)<-[:FOLLOWS]-(b) // b follows a
```
### Variable-Length Without Bounds
```cypher
// DANGEROUS - Potentially explosive
MATCH (a)-[*]->(b) // Any length path!
// SAFE - Set bounds
MATCH (a)-[*1..3]->(b) // 1 to 3 hops max
```
### Creating Duplicate Relationships
```cypher
// May create duplicates
CREATE (a)-[:KNOWS]->(b)
// Idempotent
MERGE (a)-[:KNOWS]->(b)
```
## Performance Mistakes
### Cartesian Products
```cypher
// WRONG - Cartesian product
MATCH (a:Person), (b:Product)
WHERE a.id = $personId AND b.id = $productId
CREATE (a)-[:BOUGHT]->(b)
// CORRECT - Single pattern or sequential
MATCH (a:Person {id: $personId})
MATCH (b:Product {id: $productId})
CREATE (a)-[:BOUGHT]->(b)
```
### Late Filtering
```cypher
// SLOW - Filters after collecting everything
MATCH (e:Event)
WITH e
WHERE e.kind = 1 // Should be in MATCH or right after
// FAST - Filter early
MATCH (e:Event)
WHERE e.kind = 1
```
### Missing LIMIT with ORDER BY
```cypher
// SLOW - Sorts all results
MATCH (e:Event)
RETURN e
ORDER BY e.created_at DESC
// FAST - Limits result set
MATCH (e:Event)
RETURN e
ORDER BY e.created_at DESC
LIMIT 100
```
### Unparameterized Queries
```cypher
// WRONG - No query plan caching, injection risk
MATCH (e:Event {id: '" + eventId + "'})
// CORRECT - Use parameters
MATCH (e:Event {id: $eventId})
```
## String Comparison Errors
### Case Sensitivity
```cypher
// Cypher strings are case-sensitive
WHERE n.name = 'alice' // Won't match 'Alice'
// Use toLower/toUpper for case-insensitive
WHERE toLower(n.name) = toLower($name)
// Or use regex with (?i)
WHERE n.name =~ '(?i)alice'
```
### LIKE vs CONTAINS
```cypher
// There's no LIKE in Cypher
WHERE n.name LIKE '%alice%' // ERROR!
// Use CONTAINS, STARTS WITH, ENDS WITH
WHERE n.name CONTAINS 'alice'
WHERE n.name STARTS WITH 'ali'
WHERE n.name ENDS WITH 'ice'
// Or regex for complex patterns
WHERE n.name =~ '.*ali.*ce.*'
```
## Index Mistakes
### Constraint vs Index
```cypher
// Constraint (also creates index, enforces uniqueness)
CREATE CONSTRAINT foo IF NOT EXISTS FOR (n:Node) REQUIRE n.id IS UNIQUE
// Index only (no uniqueness enforcement)
CREATE INDEX bar IF NOT EXISTS FOR (n:Node) ON (n.id)
```
### Index Not Used
```cypher
// Index on n.id won't help here
WHERE toLower(n.id) = $id // Function applied to indexed property!
// Store lowercase if needed, or create computed property
```
### Wrong Composite Index Order
```cypher
// Index on (kind, created_at) won't help query by created_at alone
MATCH (e:Event) WHERE e.created_at > $since // Index not used
// Either create single-property index or query by kind too
CREATE INDEX event_created_at FOR (e:Event) ON (e.created_at)
```
## Transaction Errors
### Read After Write in Same Transaction
```cypher
// In Neo4j, reads in a transaction see the writes
// But be careful with external processes
CREATE (n:Node {id: 'new'})
WITH n
MATCH (m:Node {id: 'new'}) // Will find 'n'
```
### Locks and Deadlocks
```cypher
// MERGE takes locks; avoid complex patterns that might deadlock
// Bad: two MERGEs on same labels in different order
// Session 1: MERGE (a:Person {id: 1}) MERGE (b:Person {id: 2})
// Session 2: MERGE (b:Person {id: 2}) MERGE (a:Person {id: 1})  <- potential deadlock
// Good: consistent ordering
// Session 1: MERGE (a:Person {id: 1}) MERGE (b:Person {id: 2})
// Session 2: MERGE (a:Person {id: 1}) MERGE (b:Person {id: 2})
```
## Type Coercion Issues
### Integer vs String
```cypher
// Types must match
WHERE n.id = 123 // Won't match if n.id is "123"
WHERE n.id = '123' // Won't match if n.id is 123
// Use appropriate parameter types from Go
params["id"] = int64(123) // For integer
params["id"] = "123" // For string
```
### Boolean Handling
```cypher
// Neo4j booleans vs strings
WHERE n.active = true // Boolean
WHERE n.active = 'true' // String - different!
```
## Delete Errors
### Delete Node With Relationships
```cypher
// ERROR - Node still has relationships
MATCH (n:Person {id: $id})
DELETE n
// CORRECT - Delete relationships first
MATCH (n:Person {id: $id})
DETACH DELETE n
```
### Optional Match and Delete
```cypher
// WRONG - DELETE NULL causes no error but also doesn't help
OPTIONAL MATCH (n:Node {id: $id})
DELETE n // If n is NULL, nothing happens silently
// Better - Check existence first or handle in application
MATCH (n:Node {id: $id})
DELETE n
```
## Debugging Tips
1. **Use EXPLAIN** to see query plan without executing
2. **Use PROFILE** to see actual execution metrics
3. **Break complex queries** into smaller parts to isolate issues
4. **Check parameter types** - mismatched types are a common issue
5. **Verify indexes exist** with `SHOW INDEXES`
6. **Check constraints** with `SHOW CONSTRAINTS`

View File

@@ -0,0 +1,397 @@
# Common Cypher Patterns for ORLY Nostr Relay
This reference contains project-specific Cypher patterns used in the ORLY Nostr relay's Neo4j backend.
## Schema Overview
### Node Types
| Label | Purpose | Key Properties |
|-------|---------|----------------|
| `Event` | Nostr events (NIP-01) | `id`, `kind`, `pubkey`, `created_at`, `content`, `sig`, `tags`, `serial` |
| `Author` | Event authors (for NIP-01 queries) | `pubkey` |
| `Tag` | Generic tags | `type`, `value` |
| `NostrUser` | Social graph users (WoT) | `pubkey`, `name`, `about`, `picture`, `nip05` |
| `ProcessedSocialEvent` | Social event tracking | `event_id`, `event_kind`, `pubkey`, `superseded_by` |
| `Marker` | Internal state markers | `key`, `value` |
### Relationship Types
| Type | From | To | Purpose |
|------|------|-----|---------|
| `AUTHORED_BY` | Event | Author | Links event to author |
| `TAGGED_WITH` | Event | Tag | Links event to tags |
| `REFERENCES` | Event | Event | e-tag references |
| `MENTIONS` | Event | Author | p-tag mentions |
| `FOLLOWS` | NostrUser | NostrUser | Contact list (kind 3) |
| `MUTES` | NostrUser | NostrUser | Mute list (kind 10000) |
| `REPORTS` | NostrUser | NostrUser | Reports (kind 1984) |
## Event Storage Patterns
### Create Event with Full Relationships
This pattern creates an event and all related nodes/relationships atomically:
```cypher
// 1. Create or get author
MERGE (a:Author {pubkey: $pubkey})
// 2. Create event node
CREATE (e:Event {
id: $eventId,
serial: $serial,
kind: $kind,
created_at: $createdAt,
content: $content,
sig: $sig,
pubkey: $pubkey,
tags: $tagsJson // JSON string for full tag data
})
// 3. Link to author
CREATE (e)-[:AUTHORED_BY]->(a)
// 4. Process e-tags (event references)
WITH e, a
OPTIONAL MATCH (ref0:Event {id: $eTag_0})
FOREACH (_ IN CASE WHEN ref0 IS NOT NULL THEN [1] ELSE [] END |
CREATE (e)-[:REFERENCES]->(ref0)
)
// 5. Process p-tags (mentions)
WITH e, a
MERGE (mentioned0:Author {pubkey: $pTag_0})
CREATE (e)-[:MENTIONS]->(mentioned0)
// 6. Process other tags
WITH e, a
MERGE (tag0:Tag {type: $tagType_0, value: $tagValue_0})
CREATE (e)-[:TAGGED_WITH]->(tag0)
RETURN e.id AS id
```
### Check Event Existence
```cypher
MATCH (e:Event {id: $id})
RETURN e.id AS id
LIMIT 1
```
### Get Next Serial Number
```cypher
MERGE (m:Marker {key: 'serial'})
ON CREATE SET m.value = 1
ON MATCH SET m.value = m.value + 1
RETURN m.value AS serial
```
## Query Patterns
### Basic Filter Query (NIP-01)
```cypher
MATCH (e:Event)
WHERE e.kind IN $kinds
AND e.pubkey IN $authors
AND e.created_at >= $since
AND e.created_at <= $until
RETURN e.id AS id,
e.kind AS kind,
e.created_at AS created_at,
e.content AS content,
e.sig AS sig,
e.pubkey AS pubkey,
e.tags AS tags,
e.serial AS serial
ORDER BY e.created_at DESC
LIMIT $limit
```
### Query by Event ID (with prefix support)
```cypher
// Exact match
MATCH (e:Event {id: $id})
RETURN e
// Prefix match
MATCH (e:Event)
WHERE e.id STARTS WITH $idPrefix
RETURN e
```
### Query by Tag (#<tag> filter)
```cypher
MATCH (e:Event)
OPTIONAL MATCH (e)-[:TAGGED_WITH]->(t:Tag)
WHERE t.type = $tagType AND t.value IN $tagValues
RETURN DISTINCT e
ORDER BY e.created_at DESC
LIMIT $limit
```
### Count Events
```cypher
MATCH (e:Event)
WHERE e.kind IN $kinds
RETURN count(e) AS count
```
### Query Delete Events Targeting an Event
```cypher
MATCH (target:Event {id: $targetId})
MATCH (e:Event {kind: 5})-[:REFERENCES]->(target)
RETURN e
ORDER BY e.created_at DESC
```
### Replaceable Event Check (kinds 0, 3, 10000-19999)
```cypher
MATCH (e:Event {kind: $kind, pubkey: $pubkey})
WHERE e.created_at < $newCreatedAt
RETURN e.serial AS serial
ORDER BY e.created_at DESC
```
### Parameterized Replaceable Event Check (kinds 30000-39999)
```cypher
MATCH (e:Event {kind: $kind, pubkey: $pubkey})-[:TAGGED_WITH]->(t:Tag {type: 'd', value: $dValue})
WHERE e.created_at < $newCreatedAt
RETURN e.serial AS serial
ORDER BY e.created_at DESC
```
## Social Graph Patterns
### Update Profile (Kind 0)
```cypher
MERGE (user:NostrUser {pubkey: $pubkey})
ON CREATE SET
user.created_at = timestamp(),
user.first_seen_event = $event_id
ON MATCH SET
user.last_profile_update = $created_at
SET
user.name = $name,
user.about = $about,
user.picture = $picture,
user.nip05 = $nip05,
user.lud16 = $lud16,
user.display_name = $display_name
```
### Contact List Update (Kind 3) - Diff-Based
```cypher
// Mark old event as superseded
OPTIONAL MATCH (old:ProcessedSocialEvent {event_id: $old_event_id})
SET old.superseded_by = $new_event_id
// Create new event tracking
CREATE (new:ProcessedSocialEvent {
event_id: $new_event_id,
event_kind: 3,
pubkey: $author_pubkey,
created_at: $created_at,
processed_at: timestamp(),
relationship_count: $total_follows,
superseded_by: null
})
// Get or create author
MERGE (author:NostrUser {pubkey: $author_pubkey})
// Update unchanged relationships to new event
WITH author
OPTIONAL MATCH (author)-[unchanged:FOLLOWS]->(followed:NostrUser)
WHERE unchanged.created_by_event = $old_event_id
AND NOT followed.pubkey IN $removed_follows
SET unchanged.created_by_event = $new_event_id,
unchanged.created_at = $created_at
// Remove old relationships for removed follows
WITH author
OPTIONAL MATCH (author)-[old_follows:FOLLOWS]->(followed:NostrUser)
WHERE old_follows.created_by_event = $old_event_id
AND followed.pubkey IN $removed_follows
DELETE old_follows
// Create new relationships for added follows
WITH author
UNWIND $added_follows AS followed_pubkey
MERGE (followed:NostrUser {pubkey: followed_pubkey})
MERGE (author)-[new_follows:FOLLOWS]->(followed)
ON CREATE SET
new_follows.created_by_event = $new_event_id,
new_follows.created_at = $created_at,
new_follows.relay_received_at = timestamp()
ON MATCH SET
new_follows.created_by_event = $new_event_id,
new_follows.created_at = $created_at
```
### Create Report (Kind 1984)
```cypher
// Create tracking node
CREATE (evt:ProcessedSocialEvent {
event_id: $event_id,
event_kind: 1984,
pubkey: $reporter_pubkey,
created_at: $created_at,
processed_at: timestamp(),
relationship_count: 1,
superseded_by: null
})
// Create users and relationship
MERGE (reporter:NostrUser {pubkey: $reporter_pubkey})
MERGE (reported:NostrUser {pubkey: $reported_pubkey})
CREATE (reporter)-[:REPORTS {
created_by_event: $event_id,
created_at: $created_at,
relay_received_at: timestamp(),
report_type: $report_type
}]->(reported)
```
### Get Latest Social Event for Pubkey
```cypher
MATCH (evt:ProcessedSocialEvent {pubkey: $pubkey, event_kind: $kind})
WHERE evt.superseded_by IS NULL
RETURN evt.event_id AS event_id,
evt.created_at AS created_at,
evt.relationship_count AS relationship_count
ORDER BY evt.created_at DESC
LIMIT 1
```
### Get Follows for Event
```cypher
MATCH (author:NostrUser)-[f:FOLLOWS]->(followed:NostrUser)
WHERE f.created_by_event = $event_id
RETURN collect(followed.pubkey) AS pubkeys
```
## WoT Query Patterns
### Find Mutual Follows
```cypher
MATCH (a:NostrUser {pubkey: $pubkeyA})-[:FOLLOWS]->(b:NostrUser)
WHERE (b)-[:FOLLOWS]->(a)
RETURN b.pubkey AS mutual_friend
```
### Find Followers
```cypher
MATCH (follower:NostrUser)-[:FOLLOWS]->(user:NostrUser {pubkey: $pubkey})
RETURN follower.pubkey, follower.name
```
### Find Following
```cypher
MATCH (user:NostrUser {pubkey: $pubkey})-[:FOLLOWS]->(following:NostrUser)
RETURN following.pubkey, following.name
```
### Hop Distance (Trust Path)
```cypher
MATCH (start:NostrUser {pubkey: $startPubkey})
MATCH (end:NostrUser {pubkey: $endPubkey})
MATCH path = shortestPath((start)-[:FOLLOWS*..6]->(end))
RETURN length(path) AS hops, [n IN nodes(path) | n.pubkey] AS path
```
### Second-Degree Connections
```cypher
MATCH (me:NostrUser {pubkey: $myPubkey})-[:FOLLOWS]->(:NostrUser)-[:FOLLOWS]->(suggested:NostrUser)
WHERE NOT (me)-[:FOLLOWS]->(suggested)
AND suggested.pubkey <> $myPubkey
RETURN suggested.pubkey, count(*) AS commonFollows
ORDER BY commonFollows DESC
LIMIT 20
```
## Schema Management Patterns
### Create Constraint
```cypher
CREATE CONSTRAINT event_id_unique IF NOT EXISTS
FOR (e:Event) REQUIRE e.id IS UNIQUE
```
### Create Index
```cypher
CREATE INDEX event_kind IF NOT EXISTS
FOR (e:Event) ON (e.kind)
```
### Create Composite Index
```cypher
CREATE INDEX event_kind_created_at IF NOT EXISTS
FOR (e:Event) ON (e.kind, e.created_at)
```
### Drop All Data (Testing Only)
```cypher
MATCH (n) DETACH DELETE n
```
## Performance Patterns
### Use EXPLAIN/PROFILE
```cypher
// See query plan without running
EXPLAIN MATCH (e:Event) WHERE e.kind = 1 RETURN e
// Run and see actual metrics
PROFILE MATCH (e:Event) WHERE e.kind = 1 RETURN e
```
### Batch Import with UNWIND
```cypher
UNWIND $events AS evt
CREATE (e:Event {
id: evt.id,
kind: evt.kind,
pubkey: evt.pubkey,
created_at: evt.created_at,
content: evt.content,
sig: evt.sig,
tags: evt.tags
})
```
### Efficient Pagination
```cypher
// Use indexed ORDER BY with WHERE for cursor-based pagination
MATCH (e:Event)
WHERE e.kind = 1 AND e.created_at < $cursor
RETURN e
ORDER BY e.created_at DESC
LIMIT 20
```

View File

@@ -0,0 +1,540 @@
# Cypher Syntax Reference
Complete syntax reference for Neo4j Cypher query language.
## Clause Reference
### Reading Clauses
#### MATCH
Finds patterns in the graph.
```cypher
// Basic node match
MATCH (n:Label)
// Match with properties
MATCH (n:Label {key: value})
// Match relationships
MATCH (a)-[r:RELATES_TO]->(b)
// Match path
MATCH path = (a)-[*1..3]->(b)
```
#### OPTIONAL MATCH
Like MATCH but returns NULL for non-matches (LEFT OUTER JOIN).
```cypher
MATCH (a:Person)
OPTIONAL MATCH (a)-[:KNOWS]->(b:Person)
RETURN a.name, b.name // b.name may be NULL
```
#### WHERE
Filters results.
```cypher
// Comparison operators
WHERE n.age > 21
WHERE n.age >= 21
WHERE n.age < 65
WHERE n.age <= 65
WHERE n.name = 'Alice'
WHERE n.name <> 'Bob'
// Boolean operators
WHERE n.age > 21 AND n.active = true
WHERE n.age < 18 OR n.age > 65
WHERE NOT n.deleted
// NULL checks
WHERE n.email IS NULL
WHERE n.email IS NOT NULL
// Pattern predicates
WHERE (n)-[:KNOWS]->(:Person)
WHERE NOT (n)-[:BLOCKED]->()
WHERE exists((n)-[:FOLLOWS]->())
// String predicates
WHERE n.name STARTS WITH 'A'
WHERE n.name ENDS WITH 'son'
WHERE n.name CONTAINS 'li'
WHERE n.name =~ '(?i)alice.*' // Case-insensitive regex
// List predicates
WHERE n.status IN ['active', 'pending']
WHERE any(x IN n.tags WHERE x = 'important')
WHERE all(x IN n.scores WHERE x > 50)
WHERE none(x IN n.errors WHERE x IS NOT NULL)
WHERE single(x IN n.items WHERE x.primary = true)
```
### Writing Clauses
#### CREATE
Creates nodes and relationships.
```cypher
// Create node
CREATE (n:Label {key: value})
// Create multiple nodes
CREATE (a:Person {name: 'Alice'}), (b:Person {name: 'Bob'})
// Create relationship
CREATE (a)-[r:KNOWS {since: 2020}]->(b)
// Create path
CREATE p = (a)-[:KNOWS]->(b)-[:KNOWS]->(c)
```
#### MERGE
Find or create pattern. **Critical for idempotency**.
```cypher
// MERGE node
MERGE (n:Label {key: $uniqueKey})
// MERGE with ON CREATE / ON MATCH
MERGE (n:Person {email: $email})
ON CREATE SET n.created = timestamp(), n.name = $name
ON MATCH SET n.accessed = timestamp()
// MERGE relationship (both nodes must exist or be in scope)
MERGE (a)-[r:KNOWS]->(b)
ON CREATE SET r.since = date()
```
**MERGE Gotcha**: MERGE on a pattern locks the entire pattern. For relationships, MERGE each node first:
```cypher
// CORRECT
MERGE (a:Person {id: $id1})
MERGE (b:Person {id: $id2})
MERGE (a)-[:KNOWS]->(b)
// RISKY - may create duplicate nodes
MERGE (a:Person {id: $id1})-[:KNOWS]->(b:Person {id: $id2})
```
#### SET
Updates properties.
```cypher
// Set single property
SET n.name = 'Alice'
// Set multiple properties
SET n.name = 'Alice', n.age = 30
// Set from map (replaces all properties)
SET n = {name: 'Alice', age: 30}
// Set from map (adds/updates, keeps existing)
SET n += {name: 'Alice'}
// Set label
SET n:NewLabel
// Remove property
SET n.obsolete = null
```
#### DELETE / DETACH DELETE
Removes nodes and relationships.
```cypher
// Delete relationship
MATCH (a)-[r:KNOWS]->(b)
DELETE r
// Delete node (must have no relationships)
MATCH (n:Orphan)
DELETE n
// Delete node and all relationships
MATCH (n:Person {name: 'Bob'})
DETACH DELETE n
```
#### REMOVE
Removes properties and labels.
```cypher
// Remove property
REMOVE n.temporary
// Remove label
REMOVE n:OldLabel
```
### Projection Clauses
#### RETURN
Specifies output.
```cypher
// Return nodes
RETURN n
// Return properties
RETURN n.name, n.age
// Return with alias
RETURN n.name AS name, n.age AS age
// Return all
RETURN *
// Return distinct
RETURN DISTINCT n.category
// Return expression
RETURN n.price * n.quantity AS total
```
#### WITH
Passes results between query parts. **Critical for multi-part queries**.
```cypher
// Filter and pass
MATCH (n:Person)
WITH n WHERE n.age > 21
RETURN n
// Aggregate and continue
MATCH (n:Person)-[:BOUGHT]->(p:Product)
WITH n, count(p) AS purchases
WHERE purchases > 5
RETURN n.name, purchases
// Order and limit mid-query
MATCH (n:Person)
WITH n ORDER BY n.age DESC LIMIT 10
MATCH (n)-[:LIVES_IN]->(c:City)
RETURN n.name, c.name
```
**WITH resets scope**: Variables not listed in WITH are no longer available.
#### ORDER BY
Sorts results.
```cypher
ORDER BY n.name // Ascending (default)
ORDER BY n.name ASC // Explicit ascending
ORDER BY n.name DESC // Descending
ORDER BY n.lastName, n.firstName // Multiple fields
ORDER BY n.priority DESC, n.name // Mixed
```
#### SKIP and LIMIT
Pagination.
```cypher
// Skip first 10
SKIP 10
// Return only 20
LIMIT 20
// Pagination
ORDER BY n.created_at DESC
SKIP $offset LIMIT $pageSize
```
### Sub-queries
#### CALL (Subquery)
Execute subquery for each row.
```cypher
MATCH (p:Person)
CALL {
WITH p
MATCH (p)-[:BOUGHT]->(prod:Product)
RETURN count(prod) AS purchaseCount
}
RETURN p.name, purchaseCount
```
#### UNION
Combine results from multiple queries.
```cypher
MATCH (n:Person) RETURN n.name AS name
UNION
MATCH (n:Company) RETURN n.name AS name
// UNION ALL keeps duplicates
MATCH (n:Person) RETURN n.name AS name
UNION ALL
MATCH (n:Company) RETURN n.name AS name
```
### Control Flow
#### FOREACH
Iterate over list, execute updates.
```cypher
// Set property on path nodes
MATCH path = (a)-[*]->(b)
FOREACH (n IN nodes(path) | SET n.visited = true)
// Conditional operation (common pattern)
OPTIONAL MATCH (target:Node {id: $id})
FOREACH (_ IN CASE WHEN target IS NOT NULL THEN [1] ELSE [] END |
CREATE (source)-[:LINKS_TO]->(target)
)
```
#### CASE
Conditional expressions.
```cypher
// Simple CASE
RETURN CASE n.status
WHEN 'active' THEN 'A'
WHEN 'pending' THEN 'P'
ELSE 'X'
END AS code
// Generic CASE
RETURN CASE
WHEN n.age < 18 THEN 'minor'
WHEN n.age < 65 THEN 'adult'
ELSE 'senior'
END AS category
```
## Operators
### Comparison
| Operator | Description |
|----------|-------------|
| `=` | Equal |
| `<>` | Not equal |
| `<` | Less than |
| `>` | Greater than |
| `<=` | Less than or equal |
| `>=` | Greater than or equal |
| `IS NULL` | Is null |
| `IS NOT NULL` | Is not null |
### Boolean
| Operator | Description |
|----------|-------------|
| `AND` | Logical AND |
| `OR` | Logical OR |
| `NOT` | Logical NOT |
| `XOR` | Exclusive OR |
### String
| Operator | Description |
|----------|-------------|
| `STARTS WITH` | Prefix match |
| `ENDS WITH` | Suffix match |
| `CONTAINS` | Substring match |
| `=~` | Regex match |
### List
| Operator | Description |
|----------|-------------|
| `IN` | List membership |
| `+` | List concatenation |
### Mathematical
| Operator | Description |
|----------|-------------|
| `+` | Addition |
| `-` | Subtraction |
| `*` | Multiplication |
| `/` | Division |
| `%` | Modulo |
| `^` | Exponentiation |
## Functions
### Aggregation
```cypher
count(*) // Count rows
count(n) // Count non-null
count(DISTINCT n) // Count unique
sum(n.value) // Sum
avg(n.value) // Average
min(n.value) // Minimum
max(n.value) // Maximum
collect(n) // Collect to list
collect(DISTINCT n) // Collect unique
stDev(n.value) // Standard deviation
percentileCont(n.value, 0.5) // Median
```
### Scalar
```cypher
// Type functions
id(n) // Internal node ID (deprecated, use elementId)
elementId(n) // Element ID string
labels(n) // Node labels
type(r) // Relationship type
properties(n) // Property map
// Math
abs(x)
ceil(x)
floor(x)
round(x)
sign(x)
sqrt(x)
rand() // Random 0-1
// String
size(str) // String length
toLower(str)
toUpper(str)
trim(str)
ltrim(str)
rtrim(str)
replace(str, from, to)
substring(str, start, len)
left(str, len)
right(str, len)
split(str, delimiter)
reverse(str)
toString(val)
// Null handling
coalesce(val1, val2, ...) // First non-null
nullIf(val1, val2) // NULL if equal
// Type conversion
toInteger(val)
toFloat(val)
toBoolean(val)
toString(val)
```
### List Functions
```cypher
size(list) // List length
head(list) // First element
tail(list) // All but first
last(list) // Last element
range(start, end) // Create range [start..end]
range(start, end, step)
reverse(list)
keys(map) // Map keys as list
values(map) // Map values as list
// List predicates
any(x IN list WHERE predicate)
all(x IN list WHERE predicate)
none(x IN list WHERE predicate)
single(x IN list WHERE predicate)
// List manipulation
[x IN list WHERE predicate] // Filter
[x IN list | expression] // Map
[x IN list WHERE pred | expr] // Filter and map
reduce(s = initial, x IN list | s + x) // Reduce
```
### Path Functions
```cypher
nodes(path) // Nodes in path
relationships(path) // Relationships in path
length(path) // Number of relationships
shortestPath((a)-[*]-(b))
allShortestPaths((a)-[*]-(b))
```
### Temporal Functions
```cypher
timestamp() // Current Unix timestamp (ms)
datetime() // Current datetime
date() // Current date
time() // Current time
duration({days: 1, hours: 12})
// Components
datetime().year
datetime().month
datetime().day
datetime().hour
// Parsing
date('2024-01-15')
datetime('2024-01-15T10:30:00Z')
```
### Spatial Functions
```cypher
point({x: 1, y: 2})
point({latitude: 37.5, longitude: -122.4})
distance(point1, point2)
```
## Comments
```cypher
// Single line comment
/* Multi-line
comment */
```
## Transaction Control
```cypher
// In procedures/transactions
:begin
:commit
:rollback
```
## Parameter Syntax
```cypher
// Parameter reference
$paramName
// In properties
{key: $value}
// In WHERE
WHERE n.id = $id
// In expressions
RETURN $multiplier * n.value
```

File diff suppressed because it is too large

View File

@@ -0,0 +1,610 @@
# Consensus Protocols - Detailed Reference
Complete specifications and implementation details for major consensus protocols.
## Paxos Complete Specification
### Proposal Numbers
Proposal numbers must be:
- **Unique**: No two proposers use the same number
- **Totally ordered**: Any two can be compared
**Implementation**: `(round_number, proposer_id)` where proposer_id breaks ties.
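As a minimal Go sketch (names are ours, not from any particular Paxos library), such a pair compares the round first and breaks ties with the proposer ID:
```go
// ProposalNumber is unique (per-proposer IDs) and totally ordered.
type ProposalNumber struct {
	Round      uint64
	ProposerID uint64 // must be unique across proposers
}

// Less reports whether p precedes q: compare rounds, then tie-break on ID.
func (p ProposalNumber) Less(q ProposalNumber) bool {
	if p.Round != q.Round {
		return p.Round < q.Round
	}
	return p.ProposerID < q.ProposerID
}

// Next returns a number higher than both p and any round this proposer
// has observed, preserving uniqueness through the proposer ID.
func (p ProposalNumber) Next(observedRound uint64) ProposalNumber {
	round := p.Round
	if observedRound > round {
		round = observedRound
	}
	return ProposalNumber{Round: round + 1, ProposerID: p.ProposerID}
}
```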
### Single-Decree Paxos State
**Proposer state**:
```
proposal_number: int
value: any
```
**Acceptor state (persistent)**:
```
highest_promised: int # Highest proposal number promised
accepted_proposal: int # Number of accepted proposal (0 if none)
accepted_value: any # Value of accepted proposal (null if none)
```
### Message Format
**Prepare** (Phase 1a):
```
{
type: "PREPARE",
proposal_number: n
}
```
**Promise** (Phase 1b):
```
{
type: "PROMISE",
proposal_number: n,
accepted_proposal: m, # null if nothing accepted
accepted_value: v # null if nothing accepted
}
```
**Accept** (Phase 2a):
```
{
type: "ACCEPT",
proposal_number: n,
value: v
}
```
**Accepted** (Phase 2b):
```
{
type: "ACCEPTED",
proposal_number: n,
value: v
}
```
### Proposer Algorithm
```
function propose(value):
    n = generate_proposal_number()
    # Phase 1: Prepare
    promises = []
    for acceptor in acceptors:
        send PREPARE(n) to acceptor
    on each PROMISE reply: add it to promises
    wait until |promises| > |acceptors|/2 or timeout
    if timeout:
        return FAILED
    # Choose value: adopt the highest-numbered accepted value, if any
    highest = promise in promises with greatest accepted_proposal
    if highest.accepted_value is not null:
        value = highest.accepted_value
    # Phase 2: Accept
    accepts = []
    for acceptor in acceptors:
        send ACCEPT(n, value) to acceptor
    on each ACCEPTED reply: add it to accepts
    wait until |accepts| > |acceptors|/2 or timeout
    if timeout:
        return FAILED
    return SUCCESS(value)
```
### Acceptor Algorithm
```
on receive PREPARE(n):
if n > highest_promised:
highest_promised = n
persist(highest_promised)
reply PROMISE(n, accepted_proposal, accepted_value)
else:
# Optionally reply NACK(highest_promised)
ignore or reject
on receive ACCEPT(n, v):
if n >= highest_promised:
highest_promised = n
accepted_proposal = n
accepted_value = v
persist(highest_promised, accepted_proposal, accepted_value)
reply ACCEPTED(n, v)
else:
ignore or reject
```
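A compact Go sketch of this acceptor, assuming the `ProposalNumber` type from the sketch above and a caller-supplied `persist` function; this is illustrative, not a hardened implementation:
```go
type AcceptorState struct {
	HighestPromised  ProposalNumber
	AcceptedProposal *ProposalNumber // nil if nothing accepted yet
	AcceptedValue    []byte
}

type Acceptor struct {
	state   AcceptorState
	persist func(AcceptorState) error // durable write before replying
}

// OnPrepare handles Phase 1a; ok=false means reject (or NACK).
func (a *Acceptor) OnPrepare(n ProposalNumber) (promise AcceptorState, ok bool) {
	if a.state.HighestPromised.Less(n) {
		a.state.HighestPromised = n
		if a.persist(a.state) != nil {
			return AcceptorState{}, false
		}
		return a.state, true // carries any previously accepted proposal/value
	}
	return AcceptorState{}, false
}

// OnAccept handles Phase 2a; accepts when n >= highest promised.
func (a *Acceptor) OnAccept(n ProposalNumber, v []byte) bool {
	if !n.Less(a.state.HighestPromised) {
		a.state.HighestPromised = n
		a.state.AcceptedProposal = &n
		a.state.AcceptedValue = v
		if a.persist(a.state) != nil {
			return false
		}
		return true
	}
	return false
}
```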
### Multi-Paxos Optimization
**Stable leader**:
```
# Leader election (using Paxos or other method)
leader = elect_leader()
# Leader's Phase 1 for all future instances
leader sends PREPARE(n) for instance range [i, ∞)
# For each command:
function propose_as_leader(value, instance):
# Skip Phase 1 if already leader
for acceptor in acceptors:
send ACCEPT(n, value, instance) to acceptor
wait for majority ACCEPTED
return SUCCESS
```
### Paxos Safety Proof Sketch
**Invariant**: If a value v is chosen for instance i, no other value can be chosen.
**Proof**:
1. A value v is chosen → accepted by a majority of acceptors with some proposal number n
2. Any higher-numbered proposal n' must complete Phase 1 with a majority of acceptors
3. Any two majorities intersect → at least one acceptor in n''s Phase 1 quorum has accepted v
4. That acceptor's PROMISE carries v, so the proposer of n' adopts v (the value of the highest-numbered accepted proposal it sees)
5. By induction over proposal numbers, every proposal numbered above n proposes v, so no other value can be chosen
## Raft Complete Specification
### State
**All servers (persistent)**:
```
currentTerm: int # Latest term seen
votedFor: ServerId # Candidate voted for in current term (null if none)
log[]: LogEntry # Log entries
```
**All servers (volatile)**:
```
commitIndex: int # Highest log index known to be committed
lastApplied: int # Highest log index applied to state machine
```
**Leader (volatile, reinitialized after election)**:
```
nextIndex[]: int # For each server, next log index to send
matchIndex[]: int # For each server, highest log index replicated
```
**LogEntry**:
```
{
term: int,
command: any
}
```
### RequestVote RPC
**Request**:
```
{
term: int, # Candidate's term
candidateId: ServerId, # Candidate requesting vote
lastLogIndex: int, # Index of candidate's last log entry
lastLogTerm: int # Term of candidate's last log entry
}
```
**Response**:
```
{
term: int, # currentTerm, for candidate to update itself
voteGranted: bool # True if candidate received vote
}
```
**Receiver implementation**:
```
on receive RequestVote(term, candidateId, lastLogIndex, lastLogTerm):
if term < currentTerm:
return {term: currentTerm, voteGranted: false}
if term > currentTerm:
currentTerm = term
votedFor = null
convert to follower
# Check if candidate's log is at least as up-to-date as ours
ourLastTerm = log[len(log)-1].term if log else 0
ourLastIndex = len(log) - 1
logOK = (lastLogTerm > ourLastTerm) or
(lastLogTerm == ourLastTerm and lastLogIndex >= ourLastIndex)
if (votedFor is null or votedFor == candidateId) and logOK:
votedFor = candidateId
persist(currentTerm, votedFor)
reset election timer
return {term: currentTerm, voteGranted: true}
return {term: currentTerm, voteGranted: false}
```
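The "at least as up-to-date" log test is a frequent source of bugs, so here it is in isolation as a small Go sketch (0-based log, mirroring the pseudocode above):
```go
type LogEntry struct {
	Term    int
	Command any
}

// logUpToDate reports whether a candidate's last entry (term, index) is
// at least as up-to-date as ours: a higher last term wins; on equal
// terms, the longer log wins.
func logUpToDate(lastLogTerm, lastLogIndex int, log []LogEntry) bool {
	ourLastTerm := 0
	ourLastIndex := len(log) - 1
	if len(log) > 0 {
		ourLastTerm = log[ourLastIndex].Term
	}
	if lastLogTerm != ourLastTerm {
		return lastLogTerm > ourLastTerm
	}
	return lastLogIndex >= ourLastIndex
}
```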
### AppendEntries RPC
**Request**:
```
{
term: int, # Leader's term
leaderId: ServerId, # For follower to redirect clients
prevLogIndex: int, # Index of log entry preceding new ones
prevLogTerm: int, # Term of prevLogIndex entry
entries[]: LogEntry, # Log entries to store (empty for heartbeat)
leaderCommit: int # Leader's commitIndex
}
```
**Response**:
```
{
term: int, # currentTerm, for leader to update itself
success: bool # True if follower had matching prevLog entry
}
```
**Receiver implementation**:
```
on receive AppendEntries(term, leaderId, prevLogIndex, prevLogTerm, entries, leaderCommit):
if term < currentTerm:
return {term: currentTerm, success: false}
reset election timer
if term > currentTerm:
currentTerm = term
votedFor = null
convert to follower
# Check log consistency
if prevLogIndex >= len(log) or
(prevLogIndex >= 0 and log[prevLogIndex].term != prevLogTerm):
return {term: currentTerm, success: false}
# Append new entries (handling conflicts)
for i, entry in enumerate(entries):
index = prevLogIndex + 1 + i
if index < len(log):
if log[index].term != entry.term:
# Delete conflicting entry and all following
log = log[:index]
log.append(entry)
else:
log.append(entry)
persist(currentTerm, votedFor, log)
# Update commit index
if leaderCommit > commitIndex:
commitIndex = min(leaderCommit, len(log) - 1)
return {term: currentTerm, success: true}
```
### Leader Behavior
```
on becoming leader:
for each server:
nextIndex[server] = len(log)
matchIndex[server] = 0
start sending heartbeats
on receiving client command:
append entry to local log
persist log
send AppendEntries to all followers
on receiving AppendEntries response from server:
if response.success:
matchIndex[server] = prevLogIndex + len(entries)
nextIndex[server] = matchIndex[server] + 1
# Update commit index
for N from commitIndex+1 to len(log)-1:
if log[N].term == currentTerm and
|{s : matchIndex[s] >= N}| > |servers|/2:
commitIndex = N
else:
nextIndex[server] = max(1, nextIndex[server] - 1)
retry AppendEntries with lower prevLogIndex
on commitIndex update:
while lastApplied < commitIndex:
lastApplied++
apply log[lastApplied].command to state machine
```
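The commit rule is worth seeing on its own: an index commits once a majority has replicated it and it belongs to the leader's current term. A hedged Go sketch (map keys are follower IDs; the leader counts itself):
```go
// advanceCommitIndex returns the highest N > commitIndex with
// log[N].Term == currentTerm that a majority of servers has replicated.
func advanceCommitIndex(log []LogEntry, matchIndex map[string]int,
	commitIndex, currentTerm int) int {
	clusterSize := len(matchIndex) + 1 // followers plus the leader
	for n := len(log) - 1; n > commitIndex; n-- {
		if log[n].Term != currentTerm {
			continue // only current-term entries commit by counting
		}
		count := 1 // the leader always holds its own entry
		for _, m := range matchIndex {
			if m >= n {
				count++
			}
		}
		if count > clusterSize/2 {
			return n
		}
	}
	return commitIndex
}
```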
### Election Timeout
```
on election timeout (follower or candidate):
currentTerm++
convert to candidate
votedFor = self
persist(currentTerm, votedFor)
reset election timer
votes = 1 # Vote for self
for each server except self:
send RequestVote(currentTerm, self, lastLogIndex, lastLogTerm)
wait for responses or timeout:
if received votes > |servers|/2:
become leader
if received AppendEntries from valid leader:
become follower
if timeout:
start new election
```
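Timeouts must be randomized per election round, or followers repeatedly time out together and split the vote; a sketch using the 150–300 ms range suggested in the Raft paper:
```go
import (
	"math/rand"
	"time"
)

// electionTimeout returns a fresh randomized timeout in [150ms, 300ms).
// Each follower draws its own value, so one of them usually fires first
// and wins the election before the others even start one.
func electionTimeout() time.Duration {
	base := 150 * time.Millisecond
	return base + time.Duration(rand.Int63n(int64(base)))
}
```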
## PBFT Complete Specification
### Message Types
**REQUEST**:
```
{
type: "REQUEST",
operation: o, # Operation to execute
timestamp: t, # Client timestamp (for reply matching)
client: c # Client identifier
}
```
**PRE-PREPARE**:
```
{
type: "PRE-PREPARE",
view: v, # Current view number
sequence: n, # Sequence number
digest: d, # Hash of request
request: m # The request message
}
signature(primary)
```
**PREPARE**:
```
{
type: "PREPARE",
view: v,
sequence: n,
digest: d,
replica: i # Sending replica
}
signature(replica_i)
```
**COMMIT**:
```
{
type: "COMMIT",
view: v,
sequence: n,
digest: d,
replica: i
}
signature(replica_i)
```
**REPLY**:
```
{
type: "REPLY",
view: v,
timestamp: t,
client: c,
replica: i,
result: r # Execution result
}
signature(replica_i)
```
### Replica State
```
view: int # Current view
sequence: int # Last assigned sequence number (primary)
log[]: {request, prepares, commits, state} # Log of requests
prepared_certificates: {} # Prepared certificates (pre-prepare + 2f matching prepares)
committed_certificates: {} # Committed certificates (2f+1 commits)
h: int # Low water mark
H: int # High water mark (h + L)
```
### Normal Operation Protocol
**Primary (replica p = v mod n)**:
```
on receive REQUEST(m) from client:
if not primary for current view:
forward to primary
return
n = assign_sequence_number()
d = hash(m)
broadcast PRE-PREPARE(v, n, d, m) to all replicas
add to log
```
**All replicas**:
```
on receive PRE-PREPARE(v, n, d, m) from primary:
if v != current_view:
ignore
if already accepted pre-prepare for (v, n) with different digest:
ignore
if not in_view_as_backup(v):
ignore
if not h < n <= H:
ignore # Outside sequence window
# Valid pre-prepare
add to log
broadcast PREPARE(v, n, d, i) to all replicas
on receive PREPARE(v, n, d, j) from replica j:
if v != current_view:
ignore
add to log[n].prepares
if |log[n].prepares| >= 2f and not already_prepared(v, n, d):
# Prepared certificate complete
mark as prepared
broadcast COMMIT(v, n, d, i) to all replicas
on receive COMMIT(v, n, d, j) from replica j:
if v != current_view:
ignore
add to log[n].commits
if |log[n].commits| >= 2f + 1 and prepared(v, n, d):
# Committed certificate complete
if all requests with sequence < n have been executed:
execute(m)
send REPLY(v, t, c, i, result) to client
```
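The two quorum checks reduce to counting matching messages against f = (n-1)/3 for an n-replica group; a small illustrative Go sketch:
```go
// maxFaults returns f for n replicas, where n = 3f + 1.
func maxFaults(n int) int { return (n - 1) / 3 }

// prepared: a valid PRE-PREPARE plus 2f matching PREPAREs from
// distinct replicas (same view, sequence number, and digest).
func prepared(hasPrePrepare bool, matchingPrepares, n int) bool {
	return hasPrePrepare && matchingPrepares >= 2*maxFaults(n)
}

// committedLocal: the request is prepared here and 2f+1 matching
// COMMITs (the replica's own included) have been collected.
func committedLocal(isPrepared bool, matchingCommits, n int) bool {
	return isPrepared && matchingCommits >= 2*maxFaults(n)+1
}
```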
### View Change Protocol
**Timeout trigger**:
```
on request timeout (no progress in view v):
    stop accepting messages in view v (except CHECKPOINT, VIEW-CHANGE, NEW-VIEW)
    broadcast VIEW-CHANGE(v+1, n, C, P, i)
where:
n = last stable checkpoint sequence number
C = checkpoint certificate (2f+1 checkpoint messages)
P = set of prepared certificates for messages after n
```
**VIEW-CHANGE**:
```
{
type: "VIEW-CHANGE",
view: v, # New view number
sequence: n, # Checkpoint sequence
checkpoints: C, # Checkpoint certificate
prepared: P, # Set of prepared certificates
replica: i
}
signature(replica_i)
```
**New primary (p' = v mod n)**:
```
on receive 2f VIEW-CHANGE messages for view v (2f+1 including its own):
V = set of valid view-change messages
# Compute O: set of requests to re-propose
O = {}
for seq in max_checkpoint_seq(V) to max_seq(V):
if exists prepared certificate for seq in V:
O[seq] = request from certificate
else:
O[seq] = null-request # No-op
broadcast NEW-VIEW(v, V, O)
# Re-run protocol for requests in O
for seq, request in O:
if request != null:
send PRE-PREPARE(v, seq, hash(request), request)
```
**NEW-VIEW**:
```
{
type: "NEW-VIEW",
view: v,
view_changes: V, # 2f+1 view-change messages
pre_prepares: O # Set of pre-prepare messages
}
signature(primary)
```
### Checkpointing
Periodic stable checkpoints to garbage collect logs:
```
every K requests:
state_hash = hash(state_machine_state)
broadcast CHECKPOINT(n, state_hash, i)
on receive 2f+1 CHECKPOINT for (n, d):
if all digests match:
create stable checkpoint
h = n # Move low water mark
garbage_collect(entries < n)
```
## HotStuff Protocol
Linear complexity BFT using threshold signatures.
### Key Innovation
- **Three voting phases**: prepare → pre-commit → commit, followed by decide
- **Pipelining**: Next proposal starts before current finishes
- **Threshold signatures**: Leader aggregates votes, so messages total O(n) instead of O(n²)
### Message Flow
```
Phase 1 (Prepare):
Leader: broadcast PREPARE(v, node)
Replicas: sign and send partial signature to leader
Leader: aggregate into prepare certificate QC
Phase 2 (Pre-commit):
Leader: broadcast PRE-COMMIT(v, QC_prepare)
Replicas: sign and send partial signature
Leader: aggregate into pre-commit certificate
Phase 3 (Commit):
Leader: broadcast COMMIT(v, QC_precommit)
Replicas: sign and send partial signature
Leader: aggregate into commit certificate
Phase 4 (Decide):
Leader: broadcast DECIDE(v, QC_commit)
Replicas: execute and commit
```
### Pipelining
```
Block k: [prepare] [pre-commit] [commit] [decide]
Block k+1: [prepare] [pre-commit] [commit] [decide]
Block k+2: [prepare] [pre-commit] [commit] [decide]
```
Each phase of block k+1 piggybacks on messages for block k.
## Protocol Comparison Matrix
| Feature | Paxos | Raft | PBFT | HotStuff |
|---------|-------|------|------|----------|
| Fault model | Crash | Crash | Byzantine | Byzantine |
| Fault tolerance | f with 2f+1 | f with 2f+1 | f with 3f+1 | f with 3f+1 |
| Message complexity | O(n) | O(n) | O(n²) | O(n) |
| Leader required | No (helps) | Yes | Yes | Yes |
| Phases | 2 | 2 | 3 | 3 |
| View change | Complex | Simple | Complex | Simple |
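The fault-tolerance column translates directly into minimum cluster and quorum sizes; a small Go helper makes the arithmetic concrete (illustrative only):
```go
// clusterAndQuorum returns the minimum cluster size and quorum size
// needed to tolerate f faulty nodes under each fault model.
func clusterAndQuorum(f int, byzantine bool) (cluster, quorum int) {
	if byzantine {
		return 3*f + 1, 2*f + 1 // PBFT, HotStuff
	}
	return 2*f + 1, f + 1 // Paxos, Raft (majority quorums)
}
```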


@@ -0,0 +1,610 @@
# Logical Clocks - Implementation Reference
Detailed implementations and algorithms for causality tracking.
## Lamport Clock Implementation
### Data Structure
```go
type LamportClock struct {
counter uint64
mu sync.Mutex
}
func NewLamportClock() *LamportClock {
return &LamportClock{counter: 0}
}
```
### Operations
```go
// Tick increments clock for local event
func (c *LamportClock) Tick() uint64 {
c.mu.Lock()
defer c.mu.Unlock()
c.counter++
return c.counter
}
// Send returns timestamp for outgoing message
func (c *LamportClock) Send() uint64 {
return c.Tick()
}
// Receive updates clock based on incoming message timestamp
func (c *LamportClock) Receive(msgTime uint64) uint64 {
c.mu.Lock()
defer c.mu.Unlock()
if msgTime > c.counter {
c.counter = msgTime
}
c.counter++
return c.counter
}
// Time returns current clock value without incrementing
func (c *LamportClock) Time() uint64 {
c.mu.Lock()
defer c.mu.Unlock()
return c.counter
}
```
### Usage Example
```go
// Process A
clockA := NewLamportClock()
e1 := clockA.Tick()           // Event 1: time=1
msgTime := clockA.Send()      // Send: time=2
// Process B
clockB := NewLamportClock()
e2 := clockB.Tick()           // Event 2: time=1
e3 := clockB.Receive(msgTime) // Receive: time=3 (max(1,2)+1)
fmt.Println(e1, e2, e3)       // 1 1 3
```
## Vector Clock Implementation
### Data Structure
```go
type VectorClock struct {
clocks map[string]uint64 // processID -> logical time
self string // this process's ID
mu sync.RWMutex
}
func NewVectorClock(processID string, allProcesses []string) *VectorClock {
clocks := make(map[string]uint64)
for _, p := range allProcesses {
clocks[p] = 0
}
return &VectorClock{
clocks: clocks,
self: processID,
}
}
```
### Operations
```go
// Tick increments own clock
func (vc *VectorClock) Tick() map[string]uint64 {
vc.mu.Lock()
defer vc.mu.Unlock()
vc.clocks[vc.self]++
return vc.copy()
}
// Send returns copy of vector for message
func (vc *VectorClock) Send() map[string]uint64 {
return vc.Tick()
}
// Receive merges incoming vector and increments
func (vc *VectorClock) Receive(incoming map[string]uint64) map[string]uint64 {
vc.mu.Lock()
defer vc.mu.Unlock()
// Merge: take max of each component
for pid, time := range incoming {
if time > vc.clocks[pid] {
vc.clocks[pid] = time
}
}
// Increment own clock
vc.clocks[vc.self]++
return vc.copy()
}
// copy returns a copy of the vector
func (vc *VectorClock) copy() map[string]uint64 {
result := make(map[string]uint64)
for k, v := range vc.clocks {
result[k] = v
}
return result
}
```
### Comparison Functions
```go
// Compare returns ordering relationship between two vectors
type Ordering int
const (
Equal Ordering = iota // V1 == V2
HappenedBefore // V1 < V2
HappenedAfter // V1 > V2
Concurrent // V1 || V2
)
func Compare(v1, v2 map[string]uint64) Ordering {
less := false
greater := false
// Get all keys
allKeys := make(map[string]bool)
for k := range v1 {
allKeys[k] = true
}
for k := range v2 {
allKeys[k] = true
}
for k := range allKeys {
t1 := v1[k] // 0 if not present
t2 := v2[k]
if t1 < t2 {
less = true
}
if t1 > t2 {
greater = true
}
}
if !less && !greater {
return Equal
}
if less && !greater {
return HappenedBefore
}
if greater && !less {
return HappenedAfter
}
return Concurrent
}
// IsConcurrent checks if two events are concurrent
func IsConcurrent(v1, v2 map[string]uint64) bool {
return Compare(v1, v2) == Concurrent
}
// HappenedBefore checks if v1 -> v2 (v1 causally precedes v2)
func HappenedBefore(v1, v2 map[string]uint64) bool {
return Compare(v1, v2) == HappenedBefore
}
```
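A short usage sketch tying the pieces together (the output comments assume the implementation above):
```go
procs := []string{"A", "B"}
vcA := NewVectorClock("A", procs)
vcB := NewVectorClock("B", procs)

tA := vcA.Tick() // A: {A:1, B:0}
tB := vcB.Tick() // B: {A:0, B:1}
fmt.Println(Compare(tA, tB) == Concurrent) // true: neither precedes the other

msg := vcA.Send()       // A: {A:2, B:0}
tB2 := vcB.Receive(msg) // B: {A:2, B:2}
fmt.Println(HappenedBefore(msg, tB2)) // true: send precedes receive
```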
## Interval Tree Clock Implementation
### Data Structures
```go
// ID represents the identity tree
type ID struct {
IsLeaf bool
Value int // 0 or 1 for leaves
Left *ID // nil for leaves
Right *ID
}
// Stamp represents the event tree
type Stamp struct {
Base int
Left *Stamp // nil for leaf stamps
Right *Stamp
}
// ITC combines ID and Stamp
type ITC struct {
ID *ID
Stamp *Stamp
}
```
### ID Operations
```go
// NewSeedID creates initial full ID (1)
func NewSeedID() *ID {
return &ID{IsLeaf: true, Value: 1}
}
// Fork splits an ID into two
func (id *ID) Fork() (*ID, *ID) {
if id.IsLeaf {
if id.Value == 0 {
// Cannot fork zero ID
return &ID{IsLeaf: true, Value: 0},
&ID{IsLeaf: true, Value: 0}
}
// Split full ID into left and right halves
return &ID{
IsLeaf: false,
Left: &ID{IsLeaf: true, Value: 1},
Right: &ID{IsLeaf: true, Value: 0},
},
&ID{
IsLeaf: false,
Left: &ID{IsLeaf: true, Value: 0},
Right: &ID{IsLeaf: true, Value: 1},
}
}
// Fork from non-leaf: give half to each
if id.Left.IsLeaf && id.Left.Value == 0 {
// Left is zero, fork right
newRight1, newRight2 := id.Right.Fork()
return &ID{IsLeaf: false, Left: id.Left, Right: newRight1},
&ID{IsLeaf: false, Left: &ID{IsLeaf: true, Value: 0}, Right: newRight2}
}
if id.Right.IsLeaf && id.Right.Value == 0 {
// Right is zero, fork left
newLeft1, newLeft2 := id.Left.Fork()
return &ID{IsLeaf: false, Left: newLeft1, Right: id.Right},
&ID{IsLeaf: false, Left: newLeft2, Right: &ID{IsLeaf: true, Value: 0}}
}
// Both have IDs, split
return &ID{IsLeaf: false, Left: id.Left, Right: &ID{IsLeaf: true, Value: 0}},
&ID{IsLeaf: false, Left: &ID{IsLeaf: true, Value: 0}, Right: id.Right}
}
// Join merges two IDs
func Join(id1, id2 *ID) *ID {
if id1.IsLeaf && id1.Value == 0 {
return id2
}
if id2.IsLeaf && id2.Value == 0 {
return id1
}
if id1.IsLeaf && id2.IsLeaf && id1.Value == 1 && id2.Value == 1 {
return &ID{IsLeaf: true, Value: 1}
}
// Normalize to non-leaf
left1 := id1.Left
right1 := id1.Right
left2 := id2.Left
right2 := id2.Right
if id1.IsLeaf {
left1 = id1
right1 = id1
}
if id2.IsLeaf {
left2 = id2
right2 = id2
}
newLeft := Join(left1, left2)
newRight := Join(right1, right2)
return normalize(&ID{IsLeaf: false, Left: newLeft, Right: newRight})
}
func normalize(id *ID) *ID {
if !id.IsLeaf {
if id.Left.IsLeaf && id.Right.IsLeaf &&
id.Left.Value == id.Right.Value {
return &ID{IsLeaf: true, Value: id.Left.Value}
}
}
return id
}
```
### Stamp Operations
```go
// NewStamp creates initial stamp (0)
func NewStamp() *Stamp {
return &Stamp{Base: 0}
}
// Event increments the stamp for the given ID
func Event(id *ID, stamp *Stamp) *Stamp {
	if id.IsLeaf {
		if id.Value == 1 {
			if stamp.Left == nil && stamp.Right == nil {
				return &Stamp{Base: stamp.Base + 1}
			}
			// Full ID over a non-leaf stamp: fill to the stamp's maximum
			return &Stamp{Base: maxStamp(stamp)}
		}
		return stamp // Cannot increment with zero ID
	}
// Non-leaf ID: fill where we have ID
if id.Left.IsLeaf && id.Left.Value == 1 {
// Have left ID, increment left
newLeft := Event(&ID{IsLeaf: true, Value: 1}, getLeft(stamp))
return normalizeStamp(&Stamp{
Base: stamp.Base,
Left: newLeft,
Right: getRight(stamp),
})
}
if id.Right.IsLeaf && id.Right.Value == 1 {
newRight := Event(&ID{IsLeaf: true, Value: 1}, getRight(stamp))
return normalizeStamp(&Stamp{
Base: stamp.Base,
Left: getLeft(stamp),
Right: newRight,
})
}
	// Both subtrees may carry ID: increment the side with the smaller
	// maximum, skipping a side whose ID is zero
	leftZero := id.Left.IsLeaf && id.Left.Value == 0
	rightZero := id.Right.IsLeaf && id.Right.Value == 0
	leftMax := maxStamp(getLeft(stamp))
	rightMax := maxStamp(getRight(stamp))
	if !leftZero && (rightZero || leftMax <= rightMax) {
		return normalizeStamp(&Stamp{
			Base:  stamp.Base,
			Left:  Event(id.Left, getLeft(stamp)),
			Right: getRight(stamp),
		})
	}
	return normalizeStamp(&Stamp{
		Base:  stamp.Base,
		Left:  getLeft(stamp),
		Right: Event(id.Right, getRight(stamp)),
	})
}
func getLeft(s *Stamp) *Stamp {
if s.Left == nil {
return &Stamp{Base: 0}
}
return s.Left
}
func getRight(s *Stamp) *Stamp {
if s.Right == nil {
return &Stamp{Base: 0}
}
return s.Right
}
func maxStamp(s *Stamp) int {
if s.Left == nil && s.Right == nil {
return s.Base
}
left := 0
right := 0
if s.Left != nil {
left = maxStamp(s.Left)
}
if s.Right != nil {
right = maxStamp(s.Right)
}
max := left
if right > max {
max = right
}
return s.Base + max
}
// JoinStamps merges two stamps by taking the component-wise maximum
func JoinStamps(s1, s2 *Stamp) *Stamp {
	// Leaf-leaf: take the larger base
	if s1.Left == nil && s1.Right == nil && s2.Left == nil && s2.Right == nil {
		base := s1.Base
		if s2.Base > base {
			base = s2.Base
		}
		return &Stamp{Base: base}
	}
	// Order so s1 has the smaller base
	if s1.Base > s2.Base {
		s1, s2 = s2, s1
	}
	// Push s2's base surplus down into its children, then join recursively
	d := s2.Base - s1.Base
	return normalizeStamp(&Stamp{
		Base:  s1.Base,
		Left:  JoinStamps(getLeft(s1), lift(getLeft(s2), d)),
		Right: JoinStamps(getRight(s1), lift(getRight(s2), d)),
	})
}
// lift shifts a stamp's base by d
func lift(s *Stamp, d int) *Stamp {
	return &Stamp{Base: s.Base + d, Left: s.Left, Right: s.Right}
}
func normalizeStamp(s *Stamp) *Stamp {
	if s.Left == nil && s.Right == nil {
		return s
	}
	if s.Left != nil && s.Right != nil {
		// Collapse (n, leaf m, leaf m) -> leaf n+m
		if s.Left.Left == nil && s.Left.Right == nil &&
			s.Right.Left == nil && s.Right.Right == nil &&
			s.Left.Base == s.Right.Base {
			return &Stamp{Base: s.Base + s.Left.Base}
		}
		// Hoist the children's common minimum into the base
		if s.Left.Base > 0 && s.Right.Base > 0 {
			min := s.Left.Base
			if s.Right.Base < min {
				min = s.Right.Base
			}
			return &Stamp{
				Base:  s.Base + min,
				Left:  &Stamp{Base: s.Left.Base - min, Left: s.Left.Left, Right: s.Left.Right},
				Right: &Stamp{Base: s.Right.Base - min, Left: s.Right.Left, Right: s.Right.Right},
			}
		}
	}
	return s
}
```
## Hybrid Logical Clock Implementation
```go
type HLC struct {
l int64 // logical component (physical time)
c int64 // counter
mu sync.Mutex
}
func NewHLC() *HLC {
return &HLC{l: 0, c: 0}
}
type HLCTimestamp struct {
L int64
C int64
}
func (hlc *HLC) physicalTime() int64 {
return time.Now().UnixNano()
}
// Now returns current HLC timestamp for local/send event
func (hlc *HLC) Now() HLCTimestamp {
hlc.mu.Lock()
defer hlc.mu.Unlock()
pt := hlc.physicalTime()
if pt > hlc.l {
hlc.l = pt
hlc.c = 0
} else {
hlc.c++
}
return HLCTimestamp{L: hlc.l, C: hlc.c}
}
// Update updates HLC based on received timestamp
func (hlc *HLC) Update(received HLCTimestamp) HLCTimestamp {
hlc.mu.Lock()
defer hlc.mu.Unlock()
pt := hlc.physicalTime()
if pt > hlc.l && pt > received.L {
hlc.l = pt
hlc.c = 0
} else if received.L > hlc.l {
hlc.l = received.L
hlc.c = received.C + 1
} else if hlc.l > received.L {
hlc.c++
} else { // hlc.l == received.L
if received.C > hlc.c {
hlc.c = received.C + 1
} else {
hlc.c++
}
}
return HLCTimestamp{L: hlc.l, C: hlc.c}
}
// Compare compares two HLC timestamps
func (t1 HLCTimestamp) Compare(t2 HLCTimestamp) int {
if t1.L < t2.L {
return -1
}
if t1.L > t2.L {
return 1
}
if t1.C < t2.C {
return -1
}
if t1.C > t2.C {
return 1
}
return 0
}
```
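A brief usage sketch: the `L` component follows physical time when clocks behave, and `C` absorbs causally related events that land within one physical tick or arrive from a node whose clock runs ahead:
```go
hlcA := NewHLC()
hlcB := NewHLC()

t1 := hlcA.Now()      // timestamp a send on A
t2 := hlcB.Update(t1) // receive on B: t2 > t1 even if B's wall clock lags
t3 := hlcB.Now()      // next local event on B

fmt.Println(t1.Compare(t2) < 0) // true
fmt.Println(t2.Compare(t3) < 0) // true
```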
## Causal Broadcast Implementation
```go
type CausalBroadcast struct {
vc *VectorClock
pending []PendingMessage
deliver func(Message)
mu sync.Mutex
}
type PendingMessage struct {
	Msg       Message
	Sender    string
	Timestamp map[string]uint64
}
func NewCausalBroadcast(processID string, processes []string, deliver func(Message)) *CausalBroadcast {
return &CausalBroadcast{
vc: NewVectorClock(processID, processes),
pending: make([]PendingMessage, 0),
deliver: deliver,
}
}
// Broadcast sends a message to all processes
func (cb *CausalBroadcast) Broadcast(msg Message) map[string]uint64 {
cb.mu.Lock()
defer cb.mu.Unlock()
timestamp := cb.vc.Send()
// Actual network broadcast would happen here
return timestamp
}
// Receive handles an incoming message
func (cb *CausalBroadcast) Receive(msg Message, sender string, timestamp map[string]uint64) {
cb.mu.Lock()
defer cb.mu.Unlock()
	// Add to pending (sender identity is needed for the delivery check)
	cb.pending = append(cb.pending, PendingMessage{Msg: msg, Sender: sender, Timestamp: timestamp})
// Try to deliver pending messages
cb.tryDeliver()
}
func (cb *CausalBroadcast) tryDeliver() {
	changed := true
	for changed {
		changed = false
		for i, pending := range cb.pending {
			if cb.canDeliver(pending.Timestamp, pending.Sender) {
				// Deliver: merge the message timestamp component-wise.
				// No self-increment; delivery is not a new local event.
				for pid, t := range pending.Timestamp {
					if t > cb.vc.clocks[pid] {
						cb.vc.clocks[pid] = t
					}
				}
				cb.deliver(pending.Msg)
				// Remove from pending and rescan from the start
				cb.pending = append(cb.pending[:i], cb.pending[i+1:]...)
				changed = true
				break
			}
		}
	}
}
func (cb *CausalBroadcast) canDeliver(msgVC map[string]uint64, sender string) bool {
	currentVC := cb.vc.clocks
	for pid, msgTime := range msgVC {
		if pid == sender {
			// Must be the next message expected from the sender
			if msgTime != currentVC[pid]+1 {
				return false
			}
		} else {
			// All other causal dependencies must already be delivered
			if msgTime > currentVC[pid] {
				return false
			}
		}
	}
	return true
}
```


@@ -0,0 +1,166 @@
---
name: domain-driven-design
description: This skill should be used when designing software architecture, modeling domains, reviewing code for DDD compliance, identifying bounded contexts, designing aggregates, or discussing strategic and tactical DDD patterns. Provides comprehensive Domain-Driven Design principles, axioms, heuristics, and anti-patterns for building maintainable, domain-centric software systems.
---
# Domain-Driven Design
## Overview
Domain-Driven Design (DDD) is an approach to software development that centers the design on the core business domain. This skill provides principles, patterns, and heuristics for both strategic design (system boundaries and relationships) and tactical design (code-level patterns).
## When to Apply This Skill
- Designing new systems or features with complex business logic
- Identifying and defining bounded contexts
- Modeling aggregates, entities, and value objects
- Reviewing code for DDD pattern compliance
- Decomposing monoliths into services
- Establishing ubiquitous language with domain experts
## Core Axioms
### Axiom 1: The Domain is Supreme
Software exists to solve domain problems. Technical decisions serve the domain, not vice versa. When technical elegance conflicts with domain clarity, domain clarity wins.
### Axiom 2: Language Creates Reality
The ubiquitous language shapes how teams think about the domain. Ambiguous language creates ambiguous software. Invest heavily in precise terminology.
### Axiom 3: Boundaries Enable Autonomy
Explicit boundaries (bounded contexts) allow teams to evolve independently. The cost of integration is worth the benefit of isolation.
### Axiom 4: Models are Imperfect Approximations
No model captures all domain complexity. Accept that models simplify reality. Refine models continuously as understanding deepens.
## Strategic Design Quick Reference
| Pattern | Purpose | Key Heuristic |
|---------|---------|---------------|
| **Bounded Context** | Define linguistic/model boundaries | One team, one language, one model |
| **Context Map** | Document context relationships | Make implicit integrations explicit |
| **Subdomain** | Classify domain areas by value | Core (invest), Supporting (adequate), Generic (outsource) |
| **Ubiquitous Language** | Shared vocabulary | If experts don't use the term, neither should code |
For detailed strategic patterns, consult `references/strategic-patterns.md`.
## Tactical Design Quick Reference
| Pattern | Purpose | Key Heuristic |
|---------|---------|---------------|
| **Entity** | Identity-tracked object | "Same identity = same thing" regardless of attributes |
| **Value Object** | Immutable, identity-less | Equality by value, always immutable, self-validating |
| **Aggregate** | Consistency boundary | Small aggregates, reference by ID, one transaction = one aggregate |
| **Domain Event** | Record state changes | Past tense naming, immutable, contains all relevant data |
| **Repository** | Collection abstraction | One per aggregate root, domain-focused interface |
| **Domain Service** | Stateless operations | When logic doesn't belong to any single entity |
| **Factory** | Complex object creation | When construction logic is complex or variable |
For detailed tactical patterns, consult `references/tactical-patterns.md`.
## Essential Heuristics
### Aggregate Design Heuristics
1. **Protect business invariants inside aggregate boundaries** - If two pieces of data must be consistent, they belong in the same aggregate
2. **Design small aggregates** - Large aggregates cause concurrency issues and slow performance
3. **Reference other aggregates by identity only** - Never hold direct object references across aggregate boundaries
4. **Update one aggregate per transaction** - Eventual consistency across aggregates using domain events
5. **Aggregate roots are the only entry point** - External code never reaches inside to manipulate child entities
### Bounded Context Heuristics
1. **Linguistic boundaries** - When the same word means different things, you have different contexts
2. **Team boundaries** - One context per team enables autonomy
3. **Process boundaries** - Different business processes often indicate different contexts
4. **Data ownership** - Each context owns its data; no shared databases
### Modeling Heuristics
1. **Nouns → Entities or Value Objects** - Things with identity become entities; descriptive things become value objects
2. **Verbs → Domain Services or Methods** - Actions become methods on entities or stateless services
3. **Business rules → Invariants** - Rules the domain must always satisfy become aggregate invariants
4. **Events in domain expert language → Domain Events** - "When X happens" becomes a domain event
## Decision Guides
### Entity vs Value Object
```
Does this thing have a lifecycle and identity that matters?
├─ YES → Is identity based on an ID (not attributes)?
│ ├─ YES → Entity
│ └─ NO → Reconsider; might be Value Object with natural key
└─ NO → Value Object
```
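As an illustration of the two outcomes, a minimal Go sketch (hypothetical names): the value object compares by value and never mutates; the entity compares by ID alone.
```go
package domain

import "errors"

// Money is a value object: immutable, equality by value, self-validating.
type Money struct {
	amountCents int64
	currency    string
}

func NewMoney(amountCents int64, currency string) (Money, error) {
	if currency == "" {
		return Money{}, errors.New("currency is required")
	}
	return Money{amountCents: amountCents, currency: currency}, nil
}

// Add returns a new Money; the operands are never modified.
func (m Money) Add(o Money) (Money, error) {
	if m.currency != o.currency {
		return Money{}, errors.New("cannot add different currencies")
	}
	return Money{amountCents: m.amountCents + o.amountCents, currency: m.currency}, nil
}

// Customer is an entity: identity is the ID; attributes may change
// without the customer becoming a different customer.
type Customer struct {
	ID    string
	Name  string
	Email string
}

func (c *Customer) Equals(other *Customer) bool {
	return c.ID == other.ID
}
```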
### Where Does This Logic Belong?
```
Is this logic stateless?
├─ NO → Does it belong to a single aggregate?
│ ├─ YES → Method on the aggregate/entity
│ └─ NO → Reconsider aggregate boundaries
└─ YES → Does it coordinate multiple aggregates?
├─ YES → Application Service
└─ NO → Does it represent a domain concept?
├─ YES → Domain Service
└─ NO → Infrastructure Service
```
### Should This Be a Separate Bounded Context?
```
Do different stakeholders use different language for this?
├─ YES → Separate bounded context
└─ NO → Does a different team own this?
├─ YES → Separate bounded context
└─ NO → Would a separate model reduce complexity?
├─ YES → Consider separation (but weigh integration cost)
└─ NO → Keep in current context
```
## Anti-Patterns Overview
| Anti-Pattern | Description | Fix |
|--------------|-------------|-----|
| **Anemic Domain Model** | Entities with only getters/setters | Move behavior into domain objects |
| **Big Ball of Mud** | No clear boundaries | Identify bounded contexts |
| **Smart UI** | Business logic in presentation layer | Extract domain layer |
| **Database-Driven Design** | Model follows database schema | Model follows domain, map to database |
| **Leaky Abstractions** | Infrastructure concerns in domain | Dependency inversion, ports and adapters |
| **God Aggregate** | One aggregate does everything | Split by invariant boundaries |
| **Premature Abstraction** | Abstracting before understanding | Concrete first, abstract when patterns emerge |
For detailed anti-patterns and remediation, consult `references/anti-patterns.md`.
## Implementation Checklist
When implementing DDD in a codebase:
- [ ] Ubiquitous language documented and used consistently in code
- [ ] Bounded contexts identified with clear boundaries
- [ ] Context map documenting integration patterns
- [ ] Aggregates designed small with clear invariants
- [ ] Entities have behavior, not just data
- [ ] Value objects are immutable and self-validating
- [ ] Domain events capture important state changes
- [ ] Repositories abstract persistence for aggregate roots
- [ ] No business logic in application services (orchestration only)
- [ ] No infrastructure concerns in domain layer
## Resources
### references/
- `strategic-patterns.md` - Detailed strategic DDD patterns including bounded contexts, context maps, subdomain classification, and ubiquitous language
- `tactical-patterns.md` - Detailed tactical DDD patterns including entities, value objects, aggregates, domain events, repositories, and services
- `anti-patterns.md` - Common DDD anti-patterns, how to identify them, and remediation strategies
To search references for specific topics:
- Bounded contexts: `grep -i "bounded context" references/`
- Aggregate design: `grep -i "aggregate" references/`
- Value objects: `grep -i "value object" references/`


@@ -0,0 +1,853 @@
# DDD Anti-Patterns
This reference documents common anti-patterns encountered when implementing Domain-Driven Design, how to identify them, and remediation strategies.
## Anemic Domain Model
### Description
Entities that are mere data containers with getters and setters, while all business logic lives in "service" classes. The domain model looks like a relational database schema mapped to objects.
### Symptoms
- Entities with only get/set methods and no behavior
- Service classes with methods like `orderService.calculateTotal(order)`
- Business rules scattered across multiple services
- Heavy use of DTOs that mirror entity structure
- "Transaction scripts" in application services
### Example
```typescript
// ANTI-PATTERN: Anemic domain model
class Order {
id: string;
customerId: string;
items: OrderItem[];
status: string;
total: number;
trackingNumber: string;
// Only data access, no behavior
getId(): string { return this.id; }
setStatus(status: string): void { this.status = status; }
getItems(): OrderItem[] { return this.items; }
setTotal(total: number): void { this.total = total; }
}
class OrderService {
// All logic external to the entity
calculateTotal(order: Order): number {
let total = 0;
for (const item of order.getItems()) {
total += item.price * item.quantity;
}
order.setTotal(total);
return total;
}
canShip(order: Order): boolean {
return order.status === 'PAID' && order.getItems().length > 0;
}
ship(order: Order, trackingNumber: string): void {
if (!this.canShip(order)) {
throw new Error('Cannot ship order');
}
order.setStatus('SHIPPED');
order.trackingNumber = trackingNumber;
}
}
```
### Remediation
```typescript
// CORRECT: Rich domain model
class Order {
private _id: OrderId;
private _items: OrderItem[];
private _status: OrderStatus;
private _trackingNumber?: TrackingNumber;
// Behavior lives in the entity
get total(): Money {
return this._items.reduce(
(sum, item) => sum.add(item.subtotal()),
Money.zero()
);
}
canShip(): boolean {
return this._status === OrderStatus.Paid && this._items.length > 0;
}
ship(trackingNumber: TrackingNumber): void {
if (!this.canShip()) {
throw new OrderNotShippableError(this._id, this._status);
}
this._status = OrderStatus.Shipped;
this._trackingNumber = trackingNumber;
}
addItem(item: OrderItem): void {
  this.ensureCanModify();
  this._items.push(item);
}
private ensureCanModify(): void {
  if (this._status === OrderStatus.Shipped) {
    throw new OrderNotModifiableError(this._id);
  }
}
}
// Application service is thin - only orchestration
class OrderApplicationService {
async shipOrder(orderId: OrderId, trackingNumber: TrackingNumber): Promise<void> {
const order = await this.orderRepository.findById(orderId);
order.ship(trackingNumber); // Domain logic in entity
await this.orderRepository.save(order);
}
}
```
### Root Causes
- Developers treating objects as data structures
- Thinking in terms of database tables
- Copying patterns from CRUD applications
- Misunderstanding "service" to mean "all logic goes here"
## God Aggregate
### Description
An aggregate that has grown to encompass too much. It handles multiple concerns, has many child entities, and becomes a performance and concurrency bottleneck.
### Symptoms
- Aggregates with 10+ child entity types
- Long load times due to eager loading everything
- Frequent optimistic concurrency conflicts
- Methods that only touch a small subset of the aggregate
- Difficulty reasoning about invariants
### Example
```typescript
// ANTI-PATTERN: God aggregate
class Customer {
private _id: CustomerId;
private _profile: CustomerProfile;
private _addresses: Address[];
private _paymentMethods: PaymentMethod[];
private _orders: Order[]; // History of all orders!
private _wishlist: WishlistItem[];
private _reviews: Review[];
private _loyaltyPoints: LoyaltyAccount;
private _preferences: Preferences;
private _notifications: Notification[];
private _supportTickets: SupportTicket[];
// Loading this customer loads EVERYTHING
// Updating preferences causes concurrency conflict with order placement
}
```
### Remediation
```typescript
// CORRECT: Small, focused aggregates
class Customer {
private _id: CustomerId;
private _profile: CustomerProfile;
private _defaultAddressId: AddressId;
private _membershipTier: MembershipTier;
}
class CustomerAddressBook {
private _customerId: CustomerId;
private _addresses: Address[];
}
class ShoppingCart {
private _customerId: CustomerId; // Reference by ID
private _items: CartItem[];
}
class Wishlist {
private _customerId: CustomerId; // Reference by ID
private _items: WishlistItem[];
}
class LoyaltyAccount {
private _customerId: CustomerId; // Reference by ID
private _points: Points;
private _transactions: LoyaltyTransaction[];
}
```
### Identification Heuristic
Ask: "Do all these things need to be immediately consistent?" If the answer is no, they probably belong in separate aggregates.
## Aggregate Reference Violation
### Description
Aggregates holding direct object references to other aggregates instead of referencing by identity. Creates implicit coupling and makes it impossible to reason about transactional boundaries.
### Symptoms
- Navigation from one aggregate to another: `order.customer.address`
- Loading an aggregate brings in connected aggregates
- Unclear what gets saved when calling `save()`
- Difficulty implementing eventual consistency
### Example
```typescript
// ANTI-PATTERN: Direct reference
class Order {
private customer: Customer; // Direct reference!
private shippingAddress: Address;
getCustomerEmail(): string {
return this.customer.email; // Navigating through!
}
validate(): void {
// Touching another aggregate's data
if (this.customer.creditLimit < this.total) {
throw new Error('Credit limit exceeded');
}
}
}
```
### Remediation
```typescript
// CORRECT: Reference by identity
class Order {
private _customerId: CustomerId; // ID only!
private _shippingAddress: Address; // Value object copied at order time
// If customer data is needed, it must be explicitly loaded
static create(
customerId: CustomerId,
shippingAddress: Address,
creditLimit: Money // Passed in, not navigated to
): Order {
return new Order(customerId, shippingAddress, creditLimit);
}
}
// Application service coordinates loading if needed
class OrderApplicationService {
async getOrderWithCustomerDetails(orderId: OrderId): Promise<OrderDetails> {
const order = await this.orderRepository.findById(orderId);
const customer = await this.customerRepository.findById(order.customerId);
return new OrderDetails(order, customer);
}
}
```
## Smart UI
### Description
Business logic embedded directly in the user interface layer. Controllers, presenters, or UI components contain domain rules.
### Symptoms
- Validation logic in form handlers
- Business calculations in controllers
- State machines in UI components
- Domain rules duplicated across different UI views
- "If we change the UI framework, we lose the business logic"
### Example
```typescript
// ANTI-PATTERN: Smart UI
class OrderController {
submitOrder(request: Request): Response {
const cart = request.body;
// Business logic in controller!
let total = 0;
for (const item of cart.items) {
total += item.price * item.quantity;
}
// Discount rules in controller!
if (cart.items.length > 10) {
total *= 0.9; // 10% bulk discount
}
if (total > 1000 && !this.hasValidPaymentMethod(cart.customerId)) {
return Response.error('Orders over $1000 require verified payment');
}
// More business rules...
const order = {
customerId: cart.customerId,
items: cart.items,
total: total,
status: 'PENDING'
};
this.database.insert('orders', order);
return Response.ok(order);
}
}
```
### Remediation
```typescript
// CORRECT: UI delegates to domain
class OrderController {
submitOrder(request: Request): Response {
const command = new PlaceOrderCommand(
request.body.customerId,
request.body.items
);
try {
const orderId = this.orderApplicationService.placeOrder(command);
return Response.ok({ orderId });
} catch (error) {
if (error instanceof DomainError) {
return Response.badRequest(error.message);
}
throw error;
}
}
}
// Domain logic in domain layer
class Order {
private calculateTotal(): Money {
const subtotal = this._items.reduce(
(sum, item) => sum.add(item.subtotal()),
Money.zero()
);
return this._discountPolicy.apply(subtotal, this._items.length);
}
}
class BulkDiscountPolicy implements DiscountPolicy {
apply(subtotal: Money, itemCount: number): Money {
if (itemCount > 10) {
return subtotal.multiply(0.9);
}
return subtotal;
}
}
```
## Database-Driven Design
### Description
The domain model is derived from the database schema rather than from domain concepts. Tables become classes; foreign keys become object references; database constraints become business rules.
### Symptoms
- Class names match table names exactly
- Foreign key relationships drive object graph
- ID fields everywhere, even where identity doesn't matter
- `nullable` database columns drive optional properties
- Domain model changes require database migration first
### Example
```typescript
// ANTI-PATTERN: Database-driven model
// Mirrors database schema exactly
class orders {
order_id: number;
customer_id: number;
order_date: Date;
status_cd: string;
shipping_address_id: number;
billing_address_id: number;
total_amt: number;
tax_amt: number;
created_ts: Date;
updated_ts: Date;
}
class order_items {
order_item_id: number;
order_id: number;
product_id: number;
quantity: number;
unit_price: number;
discount_pct: number;
}
```
### Remediation
```typescript
// CORRECT: Domain-driven model
class Order {
private readonly _id: OrderId;
private _status: OrderStatus;
private _items: OrderItem[];
private _shippingAddress: Address; // Value object, not FK
private _billingAddress: Address;
// Domain behavior, not database structure
get total(): Money {
return this._items.reduce(
(sum, item) => sum.add(item.lineTotal()),
Money.zero()
);
}
ship(trackingNumber: TrackingNumber): void {
// Business logic
}
}
// Mapping is infrastructure concern
class OrderRepository {
async save(order: Order): Promise<void> {
// Map rich domain object to database tables
await this.db.query(
'INSERT INTO orders (id, status, shipping_street, shipping_city...) VALUES (...)'
);
}
}
```
### Key Principle
The domain model reflects how domain experts think, not how data is stored. Persistence is an infrastructure detail.
## Leaky Abstractions
### Description
Infrastructure concerns bleeding into the domain layer. Domain objects depend on frameworks, databases, or external services.
### Symptoms
- Domain entities with ORM decorators
- Repository interfaces returning database-specific types
- Domain services making HTTP calls
- Framework annotations on domain objects
- `import { Entity } from 'typeorm'` in domain layer
### Example
```typescript
// ANTI-PATTERN: Infrastructure leaking into domain
import { Entity, Column, PrimaryColumn, ManyToOne } from 'typeorm';
import { IsEmail, IsNotEmpty } from 'class-validator';
@Entity('customers') // ORM in domain!
export class Customer {
@PrimaryColumn()
id: string;
@Column()
@IsNotEmpty() // Validation framework in domain!
name: string;
@Column()
@IsEmail()
email: string;
@ManyToOne(() => Subscription) // ORM relationship in domain!
subscription: Subscription;
}
// Domain service calling external API directly
class ShippingCostService {
async calculateCost(order: Order): Promise<number> {
// HTTP call in domain!
const response = await fetch('https://shipping-api.com/rates', {
  method: 'POST',
  body: JSON.stringify(order)
});
const data = await response.json();
return data.cost;
}
}
```
### Remediation
```typescript
// CORRECT: Clean domain layer
// Domain object - no framework dependencies
class Customer {
private constructor(
private readonly _id: CustomerId,
private readonly _name: CustomerName,
private readonly _email: Email
) {}
static create(name: string, email: string): Customer {
return new Customer(
CustomerId.generate(),
CustomerName.create(name), // Self-validating value object
Email.create(email) // Self-validating value object
);
}
}
// Port (interface) defined in domain
interface ShippingRateProvider {
getRate(destination: Address, weight: Weight): Promise<Money>;
}
// Domain service uses port
class ShippingCostCalculator {
constructor(private rateProvider: ShippingRateProvider) {}
async calculate(order: Order): Promise<Money> {
return this.rateProvider.getRate(
order.shippingAddress,
order.totalWeight()
);
}
}
// Adapter (infrastructure) implements port
class ShippingApiRateProvider implements ShippingRateProvider {
async getRate(destination: Address, weight: Weight): Promise<Money> {
const response = await fetch('https://shipping-api.com/rates', {
  method: 'POST',
  body: JSON.stringify({ destination, weight })
});
const data = await response.json();
return Money.of(data.cost, Currency.USD);
}
}
```
## Shared Database
### Description
Multiple bounded contexts accessing the same database tables. Changes in one context break others. No clear data ownership.
### Symptoms
- Multiple services querying the same tables
- Fear of schema changes because "something else might break"
- Unclear which service is authoritative for data
- Cross-context joins in queries
- Database triggers coordinating contexts
### Example
```typescript
// ANTI-PATTERN: Shared database
// Sales context
class SalesOrderService {
async getOrder(orderId: string) {
return this.db.query(`
SELECT o.*, c.name, c.email, p.name as product_name
FROM orders o
JOIN customers c ON o.customer_id = c.id
JOIN products p ON o.product_id = p.id
WHERE o.id = ?
`, [orderId]);
}
}
// Shipping context - same tables!
class ShippingService {
async getOrdersToShip() {
return this.db.query(`
SELECT o.*, c.address
FROM orders o
JOIN customers c ON o.customer_id = c.id
WHERE o.status = 'PAID'
`);
}
async markShipped(orderId: string) {
// Directly modifying shared table
await this.db.query(
"UPDATE orders SET status = 'SHIPPED' WHERE id = ?",
[orderId]
);
}
}
```
### Remediation
```typescript
// CORRECT: Each context owns its data
// Sales context - owns order creation
class SalesOrderRepository {
async save(order: SalesOrder): Promise<void> {
await this.salesDb.query('INSERT INTO sales_orders...');
// Publish event for other contexts
await this.eventPublisher.publish(
new OrderPlaced(order.id, order.customerId, order.items)
);
}
}
// Shipping context - owns its projection
class ShippingOrderProjection {
// Handles events to build local projection
async handleOrderPlaced(event: OrderPlaced): Promise<void> {
await this.shippingDb.query(`
INSERT INTO shipments (order_id, customer_id, status)
VALUES (?, ?, 'PENDING')
`, [event.orderId, event.customerId]);
}
}
class ShipmentRepository {
async findPendingShipments(): Promise<Shipment[]> {
// Queries only shipping context's data
return this.shippingDb.query(
"SELECT * FROM shipments WHERE status = 'PENDING'"
);
}
}
```
## Premature Abstraction
### Description
Creating abstractions, interfaces, and frameworks before understanding the problem space. Often justified as "flexibility for the future."
### Symptoms
- Interfaces with single implementations
- Generic frameworks solving hypothetical problems
- Heavy use of design patterns without clear benefit
- Configuration systems for things that never change
- "We might need this someday"
### Example
```typescript
// ANTI-PATTERN: Premature abstraction
interface IOrderProcessor<TOrder, TResult> {
process(order: TOrder): Promise<TResult>;
}
interface IOrderValidator<TOrder> {
validate(order: TOrder): ValidationResult;
}
interface IOrderPersister<TOrder> {
persist(order: TOrder): Promise<void>;
}
abstract class AbstractOrderProcessor<TOrder, TResult>
implements IOrderProcessor<TOrder, TResult> {
constructor(
protected validator: IOrderValidator<TOrder>,
protected persister: IOrderPersister<TOrder>,
protected notifier: INotificationService,
protected logger: ILogger,
protected metrics: IMetricsCollector
) {}
async process(order: TOrder): Promise<TResult> {
this.logger.log('Processing order');
this.metrics.increment('orders.processed');
const validation = this.validator.validate(order);
if (!validation.isValid) {
throw new ValidationException(validation.errors);
}
const result = await this.doProcess(order);
await this.persister.persist(order);
await this.notifier.notify(order);
return result;
}
protected abstract doProcess(order: TOrder): Promise<TResult>;
}
// Only one concrete implementation ever created
class StandardOrderProcessor extends AbstractOrderProcessor<Order, OrderResult> {
protected async doProcess(order: Order): Promise<OrderResult> {
// The actual logic is trivial
return new OrderResult(order.id);
}
}
```
### Remediation
```typescript
// CORRECT: Concrete first, abstract when patterns emerge
class OrderService {
async placeOrder(command: PlaceOrderCommand): Promise<OrderId> {
const order = Order.create(command);
if (!order.isValid()) {
throw new InvalidOrderError(order.validationErrors());
}
await this.orderRepository.save(order);
return order.id;
}
}
// Only add abstraction when you have multiple implementations
// and understand the variation points
```
### Heuristic
Wait until you have three similar implementations before abstracting. The right abstraction will be obvious then.
## Big Ball of Mud
### Description
A system without clear architectural boundaries. Everything depends on everything. Changes ripple unpredictably.
### Symptoms
- No clear module boundaries
- Circular dependencies
- Any change might break anything
- "Only Bob understands how this works"
- Integration tests are the only reliable tests
- Fear of refactoring
### Identification
```
# Circular dependency example
OrderService → CustomerService → PaymentService → OrderService
```
### Remediation Strategy
1. **Identify implicit contexts** - Find clusters of related functionality
2. **Define explicit boundaries** - Create modules/packages with clear interfaces
3. **Break cycles** - Introduce events or shared kernel for circular dependencies
4. **Enforce boundaries** - Use architectural tests, linting rules
```typescript
// Step 1: Identify boundaries
// sales/ - order creation, pricing
// fulfillment/ - shipping, tracking
// customer/ - customer management
// shared/ - shared kernel (Money, Address)
// Step 2: Define public interfaces
// sales/index.ts
export { OrderService } from './application/OrderService';
export { OrderPlaced, OrderCancelled } from './domain/events';
// Internal types not exported
// Step 3: Break cycles with events
class OrderService {
async placeOrder(command: PlaceOrderCommand): Promise<OrderId> {
const order = Order.create(command);
await this.orderRepository.save(order);
// Instead of calling PaymentService directly
await this.eventPublisher.publish(new OrderPlaced(order));
return order.id;
}
}
class PaymentEventHandler {
async handleOrderPlaced(event: OrderPlaced): Promise<void> {
await this.paymentService.collectPayment(event.orderId, event.total);
}
}
```
## CRUD-Driven Development
### Description
Treating all domain operations as Create, Read, Update, Delete operations. Loses domain intent and behavior.
### Symptoms
- Endpoints like `PUT /orders/{id}` that accept any field changes
- Service methods like `updateOrder(orderId, updates)`
- Domain events named `OrderUpdated` instead of `OrderShipped`
- No validation of state transitions
- Business operations hidden behind generic updates
### Example
```typescript
// ANTI-PATTERN: CRUD-driven
class OrderController {
@Put('/orders/:id')
async updateOrder(id: string, body: Partial<Order>) {
// Any field can be updated!
return this.orderService.update(id, body);
}
}
class OrderService {
async update(id: string, updates: Partial<Order>): Promise<Order> {
const order = await this.repo.findById(id);
Object.assign(order, updates); // Blindly apply updates
return this.repo.save(order);
}
}
```
### Remediation
```typescript
// CORRECT: Intent-revealing operations
class OrderController {
@Post('/orders/:id/ship')
async shipOrder(id: string, body: ShipOrderRequest) {
return this.orderService.ship(id, body.trackingNumber);
}
@Post('/orders/:id/cancel')
async cancelOrder(id: string, body: CancelOrderRequest) {
return this.orderService.cancel(id, body.reason);
}
}
class OrderService {
async ship(orderId: OrderId, trackingNumber: TrackingNumber): Promise<void> {
const order = await this.repo.findById(orderId);
order.ship(trackingNumber); // Domain logic with validation
await this.repo.save(order);
await this.publish(new OrderShipped(orderId, trackingNumber));
}
async cancel(orderId: OrderId, reason: CancellationReason): Promise<void> {
const order = await this.repo.findById(orderId);
order.cancel(reason); // Validates cancellation is allowed
await this.repo.save(order);
await this.publish(new OrderCancelled(orderId, reason));
}
}
```
## Summary: Detection Checklist
| Anti-Pattern | Key Question |
|--------------|--------------|
| Anemic Domain Model | Do entities have behavior or just data? |
| God Aggregate | Does everything need immediate consistency? |
| Aggregate Reference Violation | Are aggregates holding other aggregates? |
| Smart UI | Would changing UI framework lose business logic? |
| Database-Driven Design | Does model match tables or domain concepts? |
| Leaky Abstractions | Does domain code import infrastructure? |
| Shared Database | Do multiple contexts write to same tables? |
| Premature Abstraction | Are there interfaces with single implementations? |
| Big Ball of Mud | Can any change break anything? |
| CRUD-Driven Development | Are operations generic updates or domain intents? |


@@ -0,0 +1,358 @@
# Strategic DDD Patterns
Strategic DDD patterns address the large-scale structure of a system: how to divide it into bounded contexts, how those contexts relate, and how to prioritize investment across subdomains.
## Bounded Context
### Definition
A Bounded Context is an explicit boundary within which a domain model exists. Inside the boundary, all terms have specific, unambiguous meanings. The same term may mean different things in different bounded contexts.
### Why It Matters
- **Linguistic clarity** - "Customer" in Sales means something different than "Customer" in Shipping
- **Model isolation** - Changes to one model don't cascade across the system
- **Team autonomy** - Teams can work independently within their context
- **Focused complexity** - Each context solves one set of problems well
### Identification Heuristics
1. **Language divergence** - When stakeholders use the same word differently, there's a context boundary
2. **Department boundaries** - Organizational structure often mirrors domain structure
3. **Process boundaries** - End-to-end business processes often define context edges
4. **Data ownership** - Who is the authoritative source for this data?
5. **Change frequency** - Parts that change together should stay together
### Example: E-Commerce Platform
| Context | "Order" means... | "Product" means... |
|---------|------------------|-------------------|
| **Catalog** | N/A | Displayable item with description, images, categories |
| **Inventory** | N/A | Stock keeping unit with quantity and location |
| **Sales** | Shopping cart ready for checkout | Line item with price |
| **Fulfillment** | Shipment to be picked and packed | Physical item to ship |
| **Billing** | Invoice to collect payment | Taxable good |
### Implementation Patterns
#### Separate Deployables
Each bounded context as its own service/application.
```
catalog-service/
├── src/domain/Product.ts
└── src/infrastructure/CatalogRepository.ts
sales-service/
├── src/domain/Product.ts # Different model!
└── src/domain/Order.ts
```
#### Module Boundaries
Bounded contexts as modules within a monolith.
```
src/
├── catalog/
│ └── domain/Product.ts
├── sales/
│ └── domain/Product.ts # Different model!
└── shared/
└── kernel/Money.ts # Shared kernel
```
## Context Map
### Definition
A Context Map is a visual and documented representation of how bounded contexts relate to each other. It makes integration patterns explicit.
### Integration Patterns
#### Partnership
Two contexts develop together with mutual dependencies. Changes are coordinated.
```
┌─────────────┐ Partnership ┌─────────────┐
│ Catalog │◄──────────────────►│ Inventory │
└─────────────┘ └─────────────┘
```
**Use when**: Two teams must succeed or fail together.
#### Shared Kernel
A small, shared model that multiple contexts depend on. Changes require agreement from all consumers.
```
┌─────────────┐ ┌─────────────┐
│ Sales │ │ Billing │
└──────┬──────┘ └──────┬──────┘
│ │
└─────────► Money ◄──────────────┘
(shared kernel)
```
**Use when**: Core concepts genuinely need the same model.
**Danger**: Creates coupling. Keep shared kernels minimal.
#### Customer-Supplier
Upstream context (supplier) provides data/services; downstream context (customer) consumes. Supplier considers customer needs.
```
┌─────────────┐ ┌─────────────┐
│ Catalog │───── supplies ────►│ Sales │
│ (upstream) │ │ (downstream)│
└─────────────┘ └─────────────┘
```
**Use when**: One context clearly serves another, and the supplier is responsive.
#### Conformist
Downstream adopts upstream's model without negotiation. Upstream doesn't accommodate downstream needs.
```
┌─────────────┐ ┌─────────────┐
│ External │───── dictates ────►│ Our App │
│ API │ │ (conformist)│
└─────────────┘ └─────────────┘
```
**Use when**: Upstream won't change (third-party API), and their model is acceptable.
#### Anti-Corruption Layer (ACL)
Translation layer that protects a context from external models. Transforms data at the boundary.
```
┌─────────────┐ ┌───────┐ ┌─────────────┐
│ Legacy │───────►│ ACL │───────►│ New System │
│ System │ └───────┘ └─────────────┘
```
**Use when**: Upstream model would pollute downstream; translation is worth the cost.
```typescript
// Anti-Corruption Layer example
class LegacyOrderAdapter {
constructor(private legacyApi: LegacyOrderApi) {}
translateOrder(legacyOrder: LegacyOrder): Order {
return new Order({
id: OrderId.from(legacyOrder.order_num),
customer: this.translateCustomer(legacyOrder.cust_data),
items: legacyOrder.line_items.map(this.translateLineItem),
// Transform legacy status codes to domain concepts
status: this.mapStatus(legacyOrder.stat_cd),
});
}
private mapStatus(legacyCode: string): OrderStatus {
const mapping: Record<string, OrderStatus> = {
'OP': OrderStatus.Open,
'SH': OrderStatus.Shipped,
'CL': OrderStatus.Closed,
};
return mapping[legacyCode] ?? OrderStatus.Unknown;
}
}
```
#### Open Host Service
A context provides a well-defined protocol/API for others to consume.
```
┌─────────────┐
┌──────────►│ Reports │
│ └─────────────┘
┌───────┴───────┐ ┌─────────────┐
│ Catalog API │──►│ Search │
│ (open host) │ └─────────────┘
└───────┬───────┘ ┌─────────────┐
└──────────►│ Partner │
└─────────────┘
```
**Use when**: Multiple downstream contexts need access; worth investing in a stable API.
#### Published Language
A shared language format (schema) for communication between contexts. Often combined with Open Host Service.
Examples: JSON schemas, Protocol Buffers, GraphQL schemas, industry standards (HL7 for healthcare).
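For instance, the published language for an `OrderPlaced` integration event can be pinned down as a JSON Schema (a hypothetical schema for illustration; the field names and URI are invented):
```json
{
  "$id": "https://example.com/schemas/order-placed.v1.json",
  "title": "OrderPlaced",
  "type": "object",
  "required": ["orderId", "customerId", "totalAmount", "occurredAt"],
  "properties": {
    "orderId": { "type": "string", "format": "uuid" },
    "customerId": { "type": "string", "format": "uuid" },
    "totalAmount": {
      "type": "object",
      "required": ["amount", "currency"],
      "properties": {
        "amount": { "type": "number", "minimum": 0 },
        "currency": { "type": "string", "pattern": "^[A-Z]{3}$" }
      }
    },
    "occurredAt": { "type": "string", "format": "date-time" }
  }
}
```
Versioning the schema (`v1` in the URI) lets downstream contexts migrate on their own schedule.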
#### Separate Ways
Contexts have no integration. Each solves its needs independently.
**Use when**: Integration cost exceeds benefit; duplication is acceptable.
### Context Map Notation
```
┌───────────────────────────────────────────────────────────────┐
│ CONTEXT MAP │
├───────────────────────────────────────────────────────────────┤
│ │
│ ┌─────────┐ Partnership ┌─────────┐ │
│ │ Sales │◄────────────────────────────►│Inventory│ │
│ │ (U,D) │ │ (U,D) │ │
│ └────┬────┘ └────┬────┘ │
│ │ │ │
│ │ Customer/Supplier │ │
│ ▼ │ │
│ ┌─────────┐ │ │
│ │ Billing │◄──────────────────────────────────┘ │
│ │ (D) │ Conformist │
│ └─────────┘ │
│ │
│ Legend: U = Upstream, D = Downstream │
└───────────────────────────────────────────────────────────────┘
```
## Subdomain Classification
### Core Domain
The essential differentiator. This is where competitive advantage lives.
**Characteristics**:
- Unique to this business
- Complex, requires deep expertise
- Frequently changing as business evolves
- Worth significant investment
**Strategy**: Build in-house with best talent. Invest heavily in modeling.
### Supporting Subdomain
Necessary for the business but not a differentiator.
**Characteristics**:
- Important but not unique
- Moderate complexity
- Changes less frequently
- Custom implementation needed
**Strategy**: Build with adequate (not exceptional) investment. May outsource.
### Generic Subdomain
Solved problems with off-the-shelf solutions.
**Characteristics**:
- Common across industries
- Well-understood solutions exist
- Rarely changes
- Not a differentiator
**Strategy**: Buy or use open-source. Don't reinvent.
### Example: E-Commerce Platform
| Subdomain | Type | Strategy |
|-----------|------|----------|
| Product Recommendation Engine | Core | In-house, top talent |
| Inventory Management | Supporting | Build, adequate investment |
| Payment Processing | Generic | Third-party (Stripe, etc.) |
| User Authentication | Generic | Third-party or standard library |
| Shipping Logistics | Supporting | Build or integrate vendor |
| Customer Analytics | Core | In-house, strategic investment |
## Ubiquitous Language
### Definition
A common language shared by developers and domain experts. It appears in conversations, documentation, and code.
### Building Ubiquitous Language
1. **Listen to experts** - Use their terminology, not technical jargon
2. **Challenge vague terms** - "Process the order" → What exactly happens?
3. **Document glossary** - Maintain a living dictionary
4. **Enforce in code** - Class and method names use the language
5. **Refine continuously** - Language evolves with understanding
### Language in Code
```typescript
// Bad: Technical terms
class OrderProcessor {
handleOrderCreation(data: OrderData): void {
this.validateData(data);
this.persistToDatabase(data);
this.sendNotification(data);
}
}
// Good: Ubiquitous language
class OrderTaker {
placeOrder(cart: ShoppingCart): PlacedOrder {
const order = cart.checkout();
order.confirmWith(this.paymentGateway);
this.orderRepository.save(order);
this.domainEvents.publish(new OrderPlaced(order));
return order;
}
}
```
### Glossary Example
| Term | Definition | Context |
|------|------------|---------|
| **Order** | A confirmed purchase with payment collected | Sales |
| **Shipment** | Physical package(s) sent to fulfill an order | Fulfillment |
| **SKU** | Stock Keeping Unit; unique identifier for inventory | Inventory |
| **Cart** | Uncommitted collection of items a customer intends to buy | Sales |
| **Listing** | Product displayed for purchase in the catalog | Catalog |
### Anti-Pattern: Technical Language Leakage
```typescript
// Bad: Database terminology leaks into domain
order.setForeignKeyCustomerId(customerId);
order.persist();
// Bad: HTTP concerns leak into domain
order.deserializeFromJson(request.body);
order.setHttpStatus(200);
// Good: Domain language only
order.placeFor(customer);
orderRepository.save(order);
```
## Strategic Design Decisions
### When to Split a Bounded Context
Split when:
- Different parts need to evolve at different speeds
- Different teams need ownership
- Model complexity is becoming unmanageable
- Language conflicts are emerging within the context
Don't split when:
- Transaction boundaries would become awkward
- Integration cost outweighs isolation benefit
- Single team can handle the complexity
### When to Merge Bounded Contexts
Merge when:
- Integration overhead is excessive
- Same team owns both
- Models are converging naturally
- Separate contexts create artificial complexity
### Dealing with Legacy Systems
1. **Bubble context** - New bounded context with ACL to legacy
2. **Strangler fig** - Gradually replace legacy feature by feature
3. **Conformist** - Accept legacy model if acceptable
4. **Separate ways** - Rebuild independently, migrate data later

View File

@@ -0,0 +1,927 @@
# Tactical DDD Patterns
Tactical DDD patterns are code-level building blocks for implementing a rich domain model. They help express domain concepts in code that mirrors how domain experts think.
## Entity
### Definition
An object defined by its identity rather than its attributes. Two entities with the same attribute values but different identities are different things.
### Characteristics
- Has a unique identifier that persists through state changes
- Identity established at creation, immutable thereafter
- Equality based on identity, not attribute values
- Has a lifecycle (created, modified, potentially deleted)
- Contains behavior relevant to the domain concept it represents
### When to Use
- The object represents something tracked over time
- "Is this the same one?" is a meaningful question
- The object needs to be referenced from other parts of the system
- State changes are important to track
### Implementation
```typescript
// Entity with identity and behavior
class Order {
private readonly _id: OrderId;
private _status: OrderStatus;
private _items: OrderItem[];
private _shippingAddress: Address;
constructor(id: OrderId, items: OrderItem[], shippingAddress: Address) {
this._id = id;
this._items = items;
this._shippingAddress = shippingAddress;
this._status = OrderStatus.Pending;
}
get id(): OrderId {
return this._id;
}
// Behavior, not just data access
confirm(): void {
if (this._items.length === 0) {
throw new EmptyOrderError(this._id);
}
this._status = OrderStatus.Confirmed;
}
ship(trackingNumber: TrackingNumber): void {
if (this._status !== OrderStatus.Confirmed) {
throw new InvalidOrderStateError(this._id, this._status, 'ship');
}
this._status = OrderStatus.Shipped;
// Domain event raised
}
addItem(item: OrderItem): void {
if (this._status !== OrderStatus.Pending) {
throw new OrderModificationError(this._id);
}
this._items.push(item);
}
// Identity-based equality
equals(other: Order): boolean {
return this._id.equals(other._id);
}
}
// Strongly-typed identity
class OrderId {
constructor(private readonly value: string) {
if (!value || value.trim() === '') {
throw new InvalidOrderIdError();
}
}
equals(other: OrderId): boolean {
return this.value === other.value;
}
toString(): string {
return this.value;
}
}
```
### Entity vs Data Structure
```typescript
// Bad: Anemic entity (data structure)
class Order {
id: string;
status: string;
items: Item[];
// Only getters/setters, no behavior
}
// Good: Rich entity with behavior
class Order {
private _id: OrderId;
private _status: OrderStatus;
private _items: OrderItem[];
confirm(): void { /* enforces rules */ }
cancel(reason: CancellationReason): void { /* enforces rules */ }
addItem(item: OrderItem): void { /* enforces rules */ }
}
```
## Value Object
### Definition
An object defined entirely by its attributes. Two value objects with the same attributes are interchangeable. Has no identity.
### Characteristics
- Immutable - once created, never changes
- Equality based on attributes, not identity
- Self-validating - always in a valid state
- Side-effect free - methods return new instances
- Conceptually whole - attributes form a complete concept
### When to Use
- The concept has no lifecycle or identity
- "Are these the same?" means "do they have the same values?"
- Measurement, description, or quantification
- Combinations of attributes that belong together
### Implementation
```typescript
// Value Object: Money
class Money {
private constructor(
private readonly amount: number,
private readonly currency: Currency
) {}
// Factory method with validation
static of(amount: number, currency: Currency): Money {
if (amount < 0) {
throw new NegativeMoneyError(amount);
}
return new Money(amount, currency);
}
// Immutable operations - return new instances
add(other: Money): Money {
this.ensureSameCurrency(other);
return Money.of(this.amount + other.amount, this.currency);
}
subtract(other: Money): Money {
this.ensureSameCurrency(other);
return Money.of(this.amount - other.amount, this.currency);
}
multiply(factor: number): Money {
return Money.of(this.amount * factor, this.currency);
}
// Value-based equality
equals(other: Money): boolean {
return this.amount === other.amount &&
this.currency.equals(other.currency);
}
private ensureSameCurrency(other: Money): void {
if (!this.currency.equals(other.currency)) {
throw new CurrencyMismatchError(this.currency, other.currency);
}
}
}
// Value Object: Address
class Address {
private constructor(
readonly street: string,
readonly city: string,
readonly postalCode: string,
readonly country: Country
) {}
static create(street: string, city: string, postalCode: string, country: Country): Address {
if (!street || !city || !postalCode) {
throw new InvalidAddressError();
}
if (!country.validatePostalCode(postalCode)) {
throw new InvalidPostalCodeError(postalCode, country);
}
return new Address(street, city, postalCode, country);
}
// Returns new instance with modified value
withStreet(newStreet: string): Address {
return Address.create(newStreet, this.city, this.postalCode, this.country);
}
equals(other: Address): boolean {
return this.street === other.street &&
this.city === other.city &&
this.postalCode === other.postalCode &&
this.country.equals(other.country);
}
}
// Value Object: DateRange
class DateRange {
private constructor(
readonly start: Date,
readonly end: Date
) {}
static create(start: Date, end: Date): DateRange {
if (end < start) {
throw new InvalidDateRangeError(start, end);
}
return new DateRange(start, end);
}
contains(date: Date): boolean {
return date >= this.start && date <= this.end;
}
overlaps(other: DateRange): boolean {
return this.start <= other.end && this.end >= other.start;
}
durationInDays(): number {
return Math.floor((this.end.getTime() - this.start.getTime()) / (1000 * 60 * 60 * 24));
}
}
```
### Common Value Objects
| Domain | Value Objects |
|--------|--------------|
| **E-commerce** | Money, Price, Quantity, SKU, Address, PhoneNumber |
| **Healthcare** | BloodPressure, Dosage, DateRange, PatientId |
| **Finance** | AccountNumber, IBAN, TaxId, Percentage |
| **Shipping** | Weight, Dimensions, TrackingNumber, PostalCode |
| **General** | Email, URL, PhoneNumber, Name, Coordinates |
## Aggregate
### Definition
A cluster of entities and value objects with defined boundaries. Has an aggregate root entity that serves as the single entry point. External objects can only reference the root.
### Characteristics
- Defines a transactional consistency boundary
- Aggregate root is the only externally accessible object
- Enforces invariants across the cluster
- Loaded and saved as a unit
- Other aggregates referenced by identity only
### Design Rules
1. **Protect invariants** - All rules that must be consistent are inside the boundary
2. **Small aggregates** - Prefer single-entity aggregates; add children only when invariants require
3. **Reference by identity** - Never hold direct references to other aggregates
4. **Update one per transaction** - Eventual consistency between aggregates
5. **Design around invariants** - Identify what must be immediately consistent
### Implementation
```typescript
// Aggregate: Order (root) with OrderItems (child entities)
class Order {
private readonly _id: OrderId;
private _items: Map<ProductId, OrderItem>;
private _status: OrderStatus;
// Invariant: Order total cannot exceed credit limit
private _creditLimit: Money;
private constructor(
id: OrderId,
creditLimit: Money
) {
this._id = id;
this._items = new Map();
this._status = OrderStatus.Draft;
this._creditLimit = creditLimit;
}
static create(id: OrderId, creditLimit: Money): Order {
return new Order(id, creditLimit);
}
// All modifications go through aggregate root
addItem(productId: ProductId, quantity: Quantity, unitPrice: Money): void {
this.ensureCanModify();
const newItem = OrderItem.create(productId, quantity, unitPrice);
const projectedTotal = this.calculateTotalWith(newItem);
// Invariant enforcement
if (projectedTotal.isGreaterThan(this._creditLimit)) {
throw new CreditLimitExceededError(projectedTotal, this._creditLimit);
}
this._items.set(productId, newItem);
}
removeItem(productId: ProductId): void {
this.ensureCanModify();
this._items.delete(productId);
}
updateItemQuantity(productId: ProductId, newQuantity: Quantity): void {
this.ensureCanModify();
const item = this._items.get(productId);
if (!item) {
throw new ItemNotFoundError(productId);
}
const updatedItem = item.withQuantity(newQuantity);
const projectedTotal = this.calculateTotalWithUpdate(productId, updatedItem);
if (projectedTotal.isGreaterThan(this._creditLimit)) {
throw new CreditLimitExceededError(projectedTotal, this._creditLimit);
}
this._items.set(productId, updatedItem);
}
submit(): OrderSubmitted {
if (this._items.size === 0) {
throw new EmptyOrderError();
}
this._status = OrderStatus.Submitted;
return new OrderSubmitted(this._id, this.total(), new Date());
}
// Read-only access to child entities
get items(): ReadonlyArray<OrderItem> {
return Array.from(this._items.values());
}
total(): Money {
return this.items.reduce(
(sum, item) => sum.add(item.subtotal()),
Money.zero(Currency.USD)
);
}
private ensureCanModify(): void {
if (this._status !== OrderStatus.Draft) {
throw new OrderNotModifiableError(this._id, this._status);
}
}
private calculateTotalWith(newItem: OrderItem): Money {
return this.total().add(newItem.subtotal());
}
private calculateTotalWithUpdate(productId: ProductId, updatedItem: OrderItem): Money {
const currentItem = this._items.get(productId)!;
return this.total().subtract(currentItem.subtotal()).add(updatedItem.subtotal());
}
}
// Child entity - only accessible through aggregate root
class OrderItem {
private constructor(
private readonly _productId: ProductId,
private _quantity: Quantity,
private readonly _unitPrice: Money
) {}
static create(productId: ProductId, quantity: Quantity, unitPrice: Money): OrderItem {
return new OrderItem(productId, quantity, unitPrice);
}
get productId(): ProductId { return this._productId; }
get quantity(): Quantity { return this._quantity; }
get unitPrice(): Money { return this._unitPrice; }
subtotal(): Money {
return this._unitPrice.multiply(this._quantity.value);
}
withQuantity(newQuantity: Quantity): OrderItem {
return new OrderItem(this._productId, newQuantity, this._unitPrice);
}
}
```
### Aggregate Reference Patterns
```typescript
// Bad: Direct object reference across aggregates
class Order {
private customer: Customer; // Holds the entire aggregate!
}
// Good: Reference by identity
class Order {
private customerId: CustomerId;
// If customer data needed, load separately
getCustomerAddress(customerRepository: CustomerRepository): Address {
const customer = customerRepository.findById(this.customerId);
return customer.shippingAddress;
}
}
```
## Domain Event
### Definition
A record of something significant that happened in the domain. Captures state changes that domain experts care about.
### Characteristics
- Named in past tense (OrderPlaced, PaymentReceived)
- Immutable - records historical fact
- Contains all relevant data about what happened
- Published after state change is committed
- May trigger reactions in same or different bounded contexts
### When to Use
- Domain experts talk about "when X happens, Y should happen"
- Need to communicate changes across aggregate boundaries
- Maintaining an audit trail
- Implementing eventual consistency
- Integration with other bounded contexts
### Implementation
```typescript
// Base domain event
abstract class DomainEvent {
readonly occurredAt: Date;
readonly eventId: string;
constructor() {
this.occurredAt = new Date();
this.eventId = generateUUID();
}
abstract get eventType(): string;
}
// Specific domain events
class OrderPlaced extends DomainEvent {
constructor(
readonly orderId: OrderId,
readonly customerId: CustomerId,
readonly totalAmount: Money,
readonly items: ReadonlyArray<OrderItemSnapshot>
) {
super();
}
get eventType(): string {
return 'order.placed';
}
}
class OrderShipped extends DomainEvent {
constructor(
readonly orderId: OrderId,
readonly trackingNumber: TrackingNumber,
readonly carrier: string,
readonly estimatedDelivery: Date
) {
super();
}
get eventType(): string {
return 'order.shipped';
}
}
class PaymentReceived extends DomainEvent {
constructor(
readonly orderId: OrderId,
readonly amount: Money,
readonly paymentMethod: PaymentMethod,
readonly transactionId: string
) {
super();
}
get eventType(): string {
return 'payment.received';
}
}
// Entity raising events
class Order {
private _domainEvents: DomainEvent[] = [];
submit(): void {
// State change
this._status = OrderStatus.Submitted;
// Raise event
this._domainEvents.push(
new OrderPlaced(
this._id,
this._customerId,
this.total(),
this.itemSnapshots()
)
);
}
pullDomainEvents(): DomainEvent[] {
const events = [...this._domainEvents];
this._domainEvents = [];
return events;
}
}
// Event handler
class OrderPlacedHandler {
constructor(
private inventoryService: InventoryService,
private emailService: EmailService
) {}
async handle(event: OrderPlaced): Promise<void> {
// Reserve inventory (different aggregate)
await this.inventoryService.reserveItems(event.items);
// Send confirmation email
await this.emailService.sendOrderConfirmation(
event.customerId,
event.orderId,
event.totalAmount
);
}
}
```
### Event Publishing Patterns
```typescript
// Pattern 1: Collect and dispatch after save
class OrderApplicationService {
async placeOrder(command: PlaceOrderCommand): Promise<OrderId> {
const order = Order.create(command);
await this.orderRepository.save(order);
// Dispatch events after successful save
const events = order.pullDomainEvents();
await this.eventDispatcher.dispatchAll(events);
return order.id;
}
}
// Pattern 2: Outbox pattern (reliable publishing)
class OrderApplicationService {
  async placeOrder(command: PlaceOrderCommand): Promise<OrderId> {
    // Create outside the closure so the id remains in scope for the return
    const order = Order.create(command);
    await this.unitOfWork.transaction(async () => {
      await this.orderRepository.save(order);
      // Save events to outbox in same transaction
      const events = order.pullDomainEvents();
      await this.outbox.saveEvents(events);
    });
    // Separate process publishes from outbox
    return order.id;
  }
}
```
## Repository
### Definition
Mediates between the domain and data mapping layers. Provides collection-like interface for accessing aggregates.
### Characteristics
- One repository per aggregate root
- Interface defined in domain layer, implementation in infrastructure
- Returns fully reconstituted aggregates
- Abstracts persistence concerns from domain
### Interface Design
```typescript
// Domain layer interface
interface OrderRepository {
findById(id: OrderId): Promise<Order | null>;
save(order: Order): Promise<void>;
delete(order: Order): Promise<void>;
// Domain-specific queries
findPendingOrdersFor(customerId: CustomerId): Promise<Order[]>;
findOrdersToShipBefore(deadline: Date): Promise<Order[]>;
}
// Infrastructure implementation
class PostgresOrderRepository implements OrderRepository {
constructor(private db: Database) {}
async findById(id: OrderId): Promise<Order | null> {
const row = await this.db.query(
'SELECT * FROM orders WHERE id = $1',
[id.toString()]
);
if (!row) return null;
const items = await this.db.query(
'SELECT * FROM order_items WHERE order_id = $1',
[id.toString()]
);
return this.reconstitute(row, items);
}
async save(order: Order): Promise<void> {
await this.db.transaction(async (tx) => {
await tx.query(
'INSERT INTO orders (id, status, customer_id) VALUES ($1, $2, $3) ON CONFLICT (id) DO UPDATE SET status = $2',
[order.id.toString(), order.status, order.customerId.toString()]
);
// Save items
for (const item of order.items) {
await tx.query(
'INSERT INTO order_items (order_id, product_id, quantity, unit_price) VALUES ($1, $2, $3, $4) ON CONFLICT DO UPDATE...',
[order.id.toString(), item.productId.toString(), item.quantity.value, item.unitPrice.amount]
);
}
});
}
private reconstitute(orderRow: any, itemRows: any[]): Order {
// Rebuild aggregate from persistence data
return Order.reconstitute({
id: OrderId.from(orderRow.id),
status: OrderStatus[orderRow.status],
customerId: CustomerId.from(orderRow.customer_id),
items: itemRows.map(row => OrderItem.reconstitute({
productId: ProductId.from(row.product_id),
quantity: Quantity.of(row.quantity),
unitPrice: Money.of(row.unit_price, Currency.USD)
}))
});
}
}
```
### Repository vs DAO
```typescript
// DAO: Data-centric, returns raw data
interface OrderDao {
findById(id: string): Promise<OrderRow>;
findItems(orderId: string): Promise<OrderItemRow[]>;
insert(row: OrderRow): Promise<void>;
}
// Repository: Domain-centric, returns aggregates
interface OrderRepository {
findById(id: OrderId): Promise<Order | null>;
save(order: Order): Promise<void>;
}
```
## Domain Service
### Definition
Stateless operations that represent domain concepts but don't naturally belong to any entity or value object.
### When to Use
- The operation involves multiple aggregates
- The operation represents a domain concept
- Putting the operation on an entity would create awkward dependencies
- The operation is stateless
### Examples
```typescript
// Domain Service: Transfer money between accounts
class MoneyTransferService {
transfer(
from: Account,
to: Account,
amount: Money
): TransferResult {
// Involves two aggregates
// Neither account should "own" this operation
if (!from.canWithdraw(amount)) {
return TransferResult.insufficientFunds();
}
from.withdraw(amount);
to.deposit(amount);
return TransferResult.success(
new MoneyTransferred(from.id, to.id, amount)
);
}
}
// Domain Service: Calculate shipping cost
class ShippingCostCalculator {
constructor(
private rateProvider: ShippingRateProvider
) {}
calculate(
items: OrderItem[],
destination: Address,
shippingMethod: ShippingMethod
): Money {
const totalWeight = items.reduce(
(sum, item) => sum.add(item.weight),
Weight.zero()
);
const rate = this.rateProvider.getRate(
destination.country,
shippingMethod
);
return rate.calculateFor(totalWeight);
}
}
// Domain Service: Check inventory availability
class InventoryAvailabilityService {
constructor(
private inventoryRepository: InventoryRepository
) {}
checkAvailability(
items: Array<{ productId: ProductId; quantity: Quantity }>
): AvailabilityResult {
const unavailable: ProductId[] = [];
for (const { productId, quantity } of items) {
const inventory = this.inventoryRepository.findByProductId(productId);
if (!inventory || !inventory.hasAvailable(quantity)) {
unavailable.push(productId);
}
}
return unavailable.length === 0
? AvailabilityResult.allAvailable()
: AvailabilityResult.someUnavailable(unavailable);
}
}
```
### Domain Service vs Application Service
```typescript
// Domain Service: Domain logic, domain types, stateless
class PricingService {
calculateDiscountedPrice(product: Product, customer: Customer): Money {
const basePrice = product.price;
const discount = customer.membershipLevel.discountPercentage;
return basePrice.applyDiscount(discount);
}
}
// Application Service: Orchestration, use cases, transaction boundary
class OrderApplicationService {
constructor(
  private orderRepository: OrderRepository,
  private customerRepository: CustomerRepository,
  private productRepository: ProductRepository,
  private pricingService: PricingService,
  private eventPublisher: EventPublisher
) {}
async createOrder(command: CreateOrderCommand): Promise<OrderId> {
const customer = await this.customerRepository.findById(command.customerId);
const order = Order.create(command.orderId, customer.id);
for (const item of command.items) {
const product = await this.productRepository.findById(item.productId);
const price = this.pricingService.calculateDiscountedPrice(product, customer);
order.addItem(item.productId, item.quantity, price);
}
await this.orderRepository.save(order);
await this.eventPublisher.publish(order.pullDomainEvents());
return order.id;
}
}
```
## Factory
### Definition
Encapsulates complex object or aggregate creation logic. Creates objects in a valid state.
### When to Use
- Construction logic is complex
- Multiple ways to create the same type of object
- Creation involves other objects or services
- Need to enforce invariants at creation time
### Implementation
```typescript
// Factory as static method
class Order {
static create(customerId: CustomerId, creditLimit: Money): Order {
return new Order(
OrderId.generate(),
customerId,
creditLimit,
OrderStatus.Draft,
[]
);
}
static reconstitute(data: OrderData): Order {
// For rebuilding from persistence
return new Order(
data.id,
data.customerId,
data.creditLimit,
data.status,
data.items
);
}
}
// Factory as separate class
class OrderFactory {
constructor(
private creditLimitService: CreditLimitService,
private idGenerator: IdGenerator
) {}
async createForCustomer(customerId: CustomerId): Promise<Order> {
const creditLimit = await this.creditLimitService.getLimit(customerId);
const orderId = this.idGenerator.generate();
return Order.create(orderId, customerId, creditLimit);
}
createFromQuote(quote: Quote): Order {
const order = Order.create(
this.idGenerator.generate(),
quote.customerId,
quote.creditLimit
);
for (const item of quote.items) {
order.addItem(item.productId, item.quantity, item.agreedPrice);
}
return order;
}
}
// Builder pattern for complex construction
class OrderBuilder {
private customerId?: CustomerId;
private items: OrderItemData[] = [];
private shippingAddress?: Address;
private billingAddress?: Address;
forCustomer(customerId: CustomerId): this {
this.customerId = customerId;
return this;
}
withItem(productId: ProductId, quantity: Quantity, price: Money): this {
this.items.push({ productId, quantity, price });
return this;
}
shippingTo(address: Address): this {
this.shippingAddress = address;
return this;
}
billingTo(address: Address): this {
this.billingAddress = address;
return this;
}
build(): Order {
if (!this.customerId) throw new Error('Customer required');
if (!this.shippingAddress) throw new Error('Shipping address required');
if (this.items.length === 0) throw new Error('At least one item required');
const order = Order.create(this.customerId);
order.setShippingAddress(this.shippingAddress);
order.setBillingAddress(this.billingAddress ?? this.shippingAddress);
for (const item of this.items) {
order.addItem(item.productId, item.quantity, item.price);
}
return order;
}
}
```

View File

@@ -0,0 +1,369 @@
---
name: elliptic-curves
description: This skill should be used when working with elliptic curve cryptography, implementing or debugging secp256k1 operations, understanding modular arithmetic and finite fields, or implementing signature schemes like ECDSA and Schnorr. Provides comprehensive knowledge of group theory foundations, curve mathematics, point multiplication algorithms, and cryptographic optimizations.
---
# Elliptic Curve Cryptography
This skill provides deep knowledge of elliptic curve cryptography (ECC), with particular focus on the secp256k1 curve used in Bitcoin and Nostr, including the mathematical foundations and implementation considerations.
## When to Use This Skill
- Implementing or debugging elliptic curve operations
- Working with secp256k1, ECDSA, or Schnorr signatures
- Understanding modular arithmetic and finite field operations
- Optimizing cryptographic code for performance
- Analyzing security properties of curve-based cryptography
## Mathematical Foundations
### Groups in Cryptography
A **group** is a set G with a binary operation (often denoted · or +) satisfying:
1. **Closure**: For all a, b ∈ G, the result a · b is also in G
2. **Associativity**: (a · b) · c = a · (b · c)
3. **Identity**: There exists e ∈ G such that e · a = a · e = a
4. **Inverse**: For each a ∈ G, there exists a⁻¹ such that a · a⁻¹ = e
A **cyclic group** is generated by repeatedly applying the operation to a single element (the generator). The **order** of a group is the number of elements.
**Why groups matter in cryptography**: The discrete logarithm problem—given g and gⁿ, find n—is computationally hard in certain groups, forming the security basis for ECC.
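A toy illustration (far too small to be secure, purely to make the definitions concrete): the multiplicative group ℤ₇* is cyclic with generator 3.
```
Powers of 3 mod 7: 3¹=3, 3²=2, 3³=6, 3⁴=4, 3⁵=5, 3⁶=1
Discrete log: given 3 and 4, find n with 3ⁿ ≡ 4 (mod 7) → n = 4
Brute force is trivial here; for group orders near 2²⁵⁶ it is infeasible.
```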
### Modular Arithmetic
Modular arithmetic constrains calculations to a finite range [0, p-1] for some modulus p:
```
a ≡ b (mod p) means p divides (a - b)
Operations:
- Addition: (a + b) mod p
- Subtraction: (a - b + p) mod p
- Multiplication: (a × b) mod p
- Inverse: a⁻¹ where (a × a⁻¹) ≡ 1 (mod p)
```
**Computing modular inverse**:
- **Fermat's Little Theorem**: If p is prime, a⁻¹ ≡ a^(p-2) (mod p)
- **Extended Euclidean Algorithm**: More efficient for general cases
- **SafeGCD Algorithm**: Constant-time, used in libsecp256k1
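A toy example with p = 7 shows both approaches agreeing:
```
Fermat:          3⁻¹ ≡ 3⁷⁻² = 3⁵ = 243 ≡ 5 (mod 7)
Extended Euclid: 7 = 2·3 + 1 ⇒ 1 = 7 - 2·3 ⇒ 3⁻¹ ≡ -2 ≡ 5 (mod 7)
Check:           3 × 5 = 15 ≡ 1 (mod 7) ✓
```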
### Finite Fields (Galois Fields)
A **finite field** GF(p) or 𝔽ₚ is a field with a finite number of elements where:
- p must be prime (or a prime power for extension fields)
- All arithmetic operations are defined and produce elements within the field
- Every non-zero element has a multiplicative inverse
For cryptographic curves like secp256k1, the field is 𝔽ₚ where p is a 256-bit prime.
**Key property**: The non-zero elements of a finite field form a cyclic group under multiplication.
## Elliptic Curves
### The Curve Equation
An elliptic curve over a finite field 𝔽ₚ is defined by the Weierstrass equation:
```
y² = x³ + ax + b (mod p)
```
The curve must satisfy the non-singularity condition: 4a³ + 27b² ≠ 0
### Points on the Curve
A point P = (x, y) is on the curve if it satisfies the equation. The set of all points, plus a special "point at infinity" O (the identity element), forms an abelian group.
### Point Operations
**Point Addition (P + Q where P ≠ Q)**:
```
λ = (y₂ - y₁) / (x₂ - x₁) (mod p)
x₃ = λ² - x₁ - x₂ (mod p)
y₃ = λ(x₁ - x₃) - y₁ (mod p)
```
**Point Doubling (P + P = 2P)**:
```
λ = (3x₁² + a) / (2y₁) (mod p)
x₃ = λ² - 2x₁ (mod p)
y₃ = λ(x₁ - x₃) - y₁ (mod p)
```
**Point at Infinity**: Acts as the identity element; P + O = P for all P.
**Point Negation**: -P = (x, -y) = (x, p - y)
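To make the formulas concrete, here is a worked addition on y² = x³ + 7 over the toy field 𝔽₁₁ (same curve shape as secp256k1; the tiny field is for illustration only):
```
P = (2, 2): 2² = 4 and 2³ + 7 = 15 ≡ 4 (mod 11) ✓
Q = (3, 1): 1² = 1 and 3³ + 7 = 34 ≡ 1 (mod 11) ✓
λ  = (1 - 2) / (3 - 2) = -1 ≡ 10 (mod 11)
x₃ = λ² - x₁ - x₂ = 100 - 2 - 3 = 95 ≡ 7 (mod 11)
y₃ = λ(x₁ - x₃) - y₁ = 10·(2 - 7) - 2 = -52 ≡ 3 (mod 11)
P + Q = (7, 3); check: 3² = 9 and 7³ + 7 = 350 ≡ 9 (mod 11) ✓
```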
## The secp256k1 Curve
### Parameters
secp256k1 is defined by SECG (Standards for Efficient Cryptography Group):
```
Curve equation: y² = x³ + 7 (a = 0, b = 7)
Prime modulus p:
0xFFFFFFFF FFFFFFFF FFFFFFFF FFFFFFFF FFFFFFFF FFFFFFFF FFFFFFFE FFFFFC2F
= 2²⁵⁶ - 2³² - 977
Group order n:
0xFFFFFFFF FFFFFFFF FFFFFFFF FFFFFFFE BAAEDCE6 AF48A03B BFD25E8C D0364141
Generator point G:
Gx = 0x79BE667E F9DCBBAC 55A06295 CE870B07 029BFCDB 2DCE28D9 59F2815B 16F81798
Gy = 0x483ADA77 26A3C465 5DA4FBFC 0E1108A8 FD17B448 A6855419 9C47D08F FB10D4B8
Cofactor h = 1
```
### Why secp256k1?
1. **Koblitz curve**: a = 0 enables faster computation (no ax term)
2. **Special prime**: p = 2²⁵⁶ - 2³² - 977 allows efficient modular reduction
3. **Deterministic construction**: Not randomly generated, reducing backdoor concerns
4. **~30% faster** than random curves when fully optimized
### Efficient Modular Reduction
The special form of p enables fast reduction without general division:
```
For p = 2²⁵⁶ - 2³² - 977:
To reduce a 512-bit number c = c_high × 2²⁵⁶ + c_low:
c ≡ c_low + c_high × 2³² + c_high × 977 (mod p)
```
## Point Multiplication Algorithms
Scalar multiplication kP (computing P + P + ... + P, k times) is the core operation.
### Double-and-Add (Binary Method)
```
Input: k (scalar), P (point)
Output: kP
R = O (point at infinity)
for i from bit_length(k)-1 down to 0:
R = 2R # Point doubling
if bit i of k is 1:
R = R + P # Point addition
return R
```
**Complexity**: O(log k) point operations
**Vulnerability**: Timing side-channels (different branches for 0/1 bits)
### Montgomery Ladder
Constant-time algorithm that performs the same operations regardless of bit values:
```
Input: k (scalar), P (point)
Output: kP
R0 = O
R1 = P
for i from bit_length(k)-1 down to 0:
if bit i of k is 0:
R1 = R0 + R1
R0 = 2R0
else:
R0 = R0 + R1
R1 = 2R1
return R0
```
**Advantage**: Resistant to simple power analysis and timing attacks.
### Window Methods (w-NAF)
Precompute small multiples of P, then process w bits at a time:
```
w-NAF representation reduces additions by ~1/3 compared to binary
Precomputation table: [P, 3P, 5P, 7P, ...] for w=4
```
### Endomorphism Optimization (GLV Method)
secp256k1 has an efficiently computable endomorphism φ where:
```
φ(x, y) = (βx, y) where β³ ≡ 1 (mod p)
φ(P) = λP where λ³ ≡ 1 (mod n)
```
This allows splitting scalar k into k₁ + k₂λ with smaller k₁, k₂, reducing operations by ~33-50%.
### Multi-Scalar Multiplication (Strauss-Shamir)
For computing k₁P₁ + k₂P₂ (common in signature verification):
```
Process both scalars simultaneously, combining operations
Reduces work compared to separate multiplications
```
## Coordinate Systems
### Affine Coordinates
Standard (x, y) representation. Requires modular inversion for each operation.
### Projective Coordinates
Represent (X:Y:Z) where x = X/Z, y = Y/Z:
- Avoids inversions during intermediate computations
- Only one inversion at the end to convert back to affine
### Jacobian Coordinates
Represent (X:Y:Z) where x = X/Z², y = Y/Z³:
- Fastest for point doubling
- Used extensively in libsecp256k1
### López-Dahab Coordinates
For curves over GF(2ⁿ), optimized for binary field arithmetic.
## Signature Schemes
### ECDSA (Elliptic Curve Digital Signature Algorithm)
**Key Generation**:
```
Private key: d (random integer in [1, n-1])
Public key: Q = dG
```
**Signing message m**:
```
1. Hash: e = H(m) truncated to curve order bit length
2. Random: k ∈ [1, n-1]
3. Compute: (x, y) = kG
4. Calculate: r = x mod n (if r = 0, restart with new k)
5. Calculate: s = k⁻¹(e + rd) mod n (if s = 0, restart)
6. Signature: (r, s)
```
**Verification of signature (r, s) on message m**:
```
1. Check: r, s ∈ [1, n-1]
2. Hash: e = H(m)
3. Compute: w = s⁻¹ mod n
4. Compute: u₁ = ew mod n, u₂ = rw mod n
5. Compute: (x, y) = u₁G + u₂Q
6. Valid if: r ≡ x (mod n)
```
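Why verification works — substitute s = k⁻¹(e + rd) from signing step 5:
```
u₁G + u₂Q = ewG + rw(dG) = w(e + rd)G = s⁻¹(e + rd)G = kG
```
So the x-coordinate of u₁G + u₂Q reduces to r (mod n) exactly when the signature is valid.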
**Security considerations**:
- k MUST be unique per signature (reuse leaks private key)
- Use RFC 6979 for deterministic k derivation
### Schnorr Signatures (BIP-340)
Simpler, more efficient, with provable security.
**Signing message m**:
```
1. Nonce: k ∈ [1, n-1] (BIP-340 derives k deterministically from the key, message, and auxiliary randomness)
2. Compute: R = kG
3. Challenge: e = H(R || Q || m) (BIP-340 uses a tagged hash over x-only coordinates)
4. Response: s = k + ed mod n
5. Signature: (r_x, s) where r_x is the x-coordinate of R
```
**Verification**:
```
1. Compute: e = H(R || Q || m)
2. Check: sG = R + eQ
```
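The check follows directly from the linearity of point multiplication:
```
sG = (k + ed)G = kG + e(dG) = R + eQ
```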
**Advantages over ECDSA**:
- Linear: enables signature aggregation (MuSig)
- Simpler verification (no modular inverse)
- Batch verification support
- Provably secure in Random Oracle Model
## Implementation Considerations
### Constant-Time Operations
To prevent timing attacks:
- Avoid branches dependent on secret data
- Use constant-time comparison functions
- Mask operations to hide data-dependent timing
```go
// BAD: Timing leak
if secretBit == 1 {
doOperation()
}
// GOOD: Constant-time conditional
result = conditionalSelect(secretBit, value1, value0)
```
### Memory Safety
- Zeroize sensitive data after use
- Avoid leaving secrets in registers or cache
- Use secure memory allocation when available
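A minimal Go sketch of best-effort zeroization (hypothetical helper; Go gives no hard guarantee that copies of the data don't linger elsewhere):
```go
package securemem

import "runtime"

// Zeroize overwrites key material in place. This is best-effort: the Go
// runtime may already have copied the bytes (stack growth, GC moves,
// escaped temporaries), so sensitive data should live as briefly and in
// as few places as possible.
func Zeroize(b []byte) {
	for i := range b {
		b[i] = 0
	}
	// Discourage the compiler from treating the wipe as dead stores.
	runtime.KeepAlive(b)
}
```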
### Side-Channel Protections
- **Timing attacks**: Use constant-time algorithms
- **Power analysis**: Montgomery ladder, point blinding
- **Cache attacks**: Avoid table lookups indexed by secrets
### Random Number Generation
- Use cryptographically secure RNG for k in ECDSA
- Consider deterministic k (RFC 6979) for reproducibility
- Validate output is in valid range [1, n-1]
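As a sketch, sampling k in Go without modulo bias (illustrative only; production signing normally uses RFC 6979 or a vetted library):
```go
package keys

import (
	"crypto/rand"
	"math/big"
)

// RandomScalar returns a uniformly random k in [1, n-1] for group order n.
// crypto/rand.Int samples uniformly from [0, max), so drawing from
// [0, n-1) and adding 1 avoids both zero and modulo bias.
func RandomScalar(n *big.Int) (*big.Int, error) {
	max := new(big.Int).Sub(n, big.NewInt(1)) // n - 1
	k, err := rand.Int(rand.Reader, max)      // uniform in [0, n-2]
	if err != nil {
		return nil, err
	}
	return k.Add(k, big.NewInt(1)), nil // shift into [1, n-1]
}
```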
## libsecp256k1 Optimizations
The Bitcoin Core library includes:
1. **Field arithmetic**: 5×52-bit limbs for 64-bit platforms
2. **Scalar arithmetic**: 4×64-bit representation
3. **Endomorphism**: GLV decomposition enabled by default
4. **Batch inversion**: Amortizes expensive inversions
5. **SafeGCD**: Constant-time modular inverse
6. **Precomputed tables**: For generator point multiplications
## Security Properties
### Discrete Logarithm Problem (DLP)
Given P and Q = kP, finding k is computationally infeasible.
**Best known attacks**:
- Generic: Baby-step Giant-step, Pollard's rho: O(√n) operations
- For secp256k1: ~2¹²⁸ operations (128-bit security)
### Curve Security Criteria
- Large prime order subgroup
- Cofactor 1 (no small subgroup attacks)
- Resistant to MOV attack (embedding degree)
- Not anomalous (n ≠ p)
## Common Pitfalls
1. **k reuse in ECDSA**: Immediately leaks private key
2. **Weak random k**: Partially leaks key over multiple signatures
3. **Invalid curve points**: Validate points are on curve
4. **Small subgroup attacks**: Check point order (cofactor = 1 helps)
5. **Timing leaks**: Non-constant-time scalar multiplication
## References
For detailed implementations, see:
- `references/secp256k1-parameters.md` - Full curve parameters
- `references/algorithms.md` - Detailed algorithm pseudocode
- `references/security.md` - Security analysis and attack vectors

View File

@@ -0,0 +1,513 @@
# Elliptic Curve Algorithms
Detailed pseudocode for core elliptic curve operations.
## Field Arithmetic
### Modular Addition
```
function mod_add(a, b, p):
result = a + b
if result >= p:
result = result - p
return result
```
### Modular Subtraction
```
function mod_sub(a, b, p):
if a >= b:
return a - b
else:
return p - b + a
```
### Modular Multiplication
For general case:
```
function mod_mul(a, b, p):
return (a * b) mod p
```
For secp256k1, reduction is optimized by exploiting the special prime form (a pseudo-Mersenne reduction, distinct from general Barrett reduction):
```
function mod_mul_secp256k1(a, b):
    # Compute full 512-bit product
    product = a * b
    # Split into high and low 256-bit parts
    low = product & ((1 << 256) - 1)
    high = product >> 256
    # Fold using 2²⁵⁶ ≡ 2³² + 977 (mod p)
    result = low + high * ((1 << 32) + 977)
    # The result can still exceed 256 bits; fold once more
    low = result & ((1 << 256) - 1)
    high = result >> 256
    result = low + high * ((1 << 32) + 977)
    # At most a few final subtractions remain
    while result >= p:
        result = result - p
    return result
```
### Modular Inverse
**Extended Euclidean Algorithm**:
```
function mod_inverse(a, p):
if a == 0:
error "No inverse exists for 0"
old_r, r = p, a
old_s, s = 0, 1
while r != 0:
quotient = old_r / r
old_r, r = r, old_r - quotient * r
old_s, s = s, old_s - quotient * s
if old_r != 1:
error "No inverse exists"
if old_s < 0:
old_s = old_s + p
return old_s
```
**Fermat's Little Theorem** (for prime p):
```
function mod_inverse_fermat(a, p):
return mod_exp(a, p - 2, p)
```
### Modular Exponentiation (Square-and-Multiply)
```
function mod_exp(base, exp, p):
result = 1
base = base mod p
while exp > 0:
if exp & 1: # exp is odd
result = (result * base) mod p
exp = exp >> 1
base = (base * base) mod p
return result
```
### Modular Square Root
Because secp256k1's p ≡ 3 (mod 4), a simple exponentiation shortcut applies and the general Tonelli-Shanks algorithm is not needed:
```
function mod_sqrt(a, p):
# For p ≡ 3 (mod 4), sqrt(a) = a^((p+1)/4)
return mod_exp(a, (p + 1) / 4, p)
```
## Point Operations
### Point Validation
```
function is_on_curve(P, a, b, p):
if P is infinity:
return true
x, y = P
left = (y * y) mod p
right = (x * x * x + a * x + b) mod p
return left == right
```
### Point Addition (Affine Coordinates)
```
function point_add(P, Q, a, p):
if P is infinity:
return Q
if Q is infinity:
return P
x1, y1 = P
x2, y2 = Q
if x1 == x2:
if y1 == (p - y2) mod p: # P = -Q
return infinity
else: # P == Q
return point_double(P, a, p)
# λ = (y2 - y1) / (x2 - x1)
numerator = mod_sub(y2, y1, p)
denominator = mod_sub(x2, x1, p)
λ = mod_mul(numerator, mod_inverse(denominator, p), p)
# x3 = λ² - x1 - x2
x3 = mod_sub(mod_sub(mod_mul(λ, λ, p), x1, p), x2, p)
# y3 = λ(x1 - x3) - y1
y3 = mod_sub(mod_mul(λ, mod_sub(x1, x3, p), p), y1, p)
return (x3, y3)
```
### Point Doubling (Affine Coordinates)
```
function point_double(P, a, p):
if P is infinity:
return infinity
x, y = P
if y == 0:
return infinity
# λ = (3x² + a) / (2y)
numerator = mod_add(mod_mul(3, mod_mul(x, x, p), p), a, p)
denominator = mod_mul(2, y, p)
λ = mod_mul(numerator, mod_inverse(denominator, p), p)
# x3 = λ² - 2x
x3 = mod_sub(mod_mul(λ, λ, p), mod_mul(2, x, p), p)
# y3 = λ(x - x3) - y
y3 = mod_sub(mod_mul(λ, mod_sub(x, x3, p), p), y, p)
return (x3, y3)
```
### Point Negation
```
function point_negate(P, p):
if P is infinity:
return infinity
x, y = P
return (x, p - y)
```
## Scalar Multiplication
### Double-and-Add (Left-to-Right)
```
function scalar_mult_double_add(k, P, a, p):
if k == 0 or P is infinity:
return infinity
if k < 0:
k = -k
P = point_negate(P, p)
R = infinity
bits = binary_representation(k) # MSB first
for bit in bits:
R = point_double(R, a, p)
if bit == 1:
R = point_add(R, P, a, p)
return R
```
### Montgomery Ladder (Constant-Time)
```
function scalar_mult_montgomery(k, P, a, p):
R0 = infinity
R1 = P
bits = binary_representation(k) # MSB first
for bit in bits:
if bit == 0:
R1 = point_add(R0, R1, a, p)
R0 = point_double(R0, a, p)
else:
R0 = point_add(R0, R1, a, p)
R1 = point_double(R1, a, p)
return R0
```
### w-NAF Scalar Multiplication
```
function compute_wNAF(k, w):
# Convert scalar to width-w Non-Adjacent Form
naf = []
while k > 0:
if k & 1: # k is odd
# Get w-bit window
digit = k mod (1 << w)
if digit >= (1 << (w-1)):
digit = digit - (1 << w)
naf.append(digit)
k = k - digit
else:
naf.append(0)
k = k >> 1
return naf
function scalar_mult_wNAF(k, P, w, a, p):
# Precompute odd multiples: [P, 3P, 5P, ..., (2^(w-1)-1)P]
precomp = [P]
P2 = point_double(P, a, p)
for i in range(1, 1 << (w-2)):  # 2^(w-2) entries: P, 3P, ..., (2^(w-1)-1)P
precomp.append(point_add(precomp[-1], P2, a, p))
# Convert k to w-NAF
naf = compute_wNAF(k, w)
# Compute scalar multiplication
R = infinity
for i in range(len(naf) - 1, -1, -1):
R = point_double(R, a, p)
digit = naf[i]
if digit > 0:
R = point_add(R, precomp[(digit - 1) / 2], a, p)
elif digit < 0:
R = point_add(R, point_negate(precomp[(-digit - 1) / 2], p), a, p)
return R
```
### Shamir's Trick (Multi-Scalar)
For computing k₁P + k₂Q efficiently:
```
function multi_scalar_mult(k1, P, k2, Q, a, p):
# Precompute P + Q
PQ = point_add(P, Q, a, p)
# Get binary representations (same length, padded)
bits1 = binary_representation(k1)
bits2 = binary_representation(k2)
max_len = max(len(bits1), len(bits2))
bits1 = pad_left(bits1, max_len)
bits2 = pad_left(bits2, max_len)
R = infinity
for i in range(max_len):
R = point_double(R, a, p)
b1, b2 = bits1[i], bits2[i]
if b1 == 1 and b2 == 1:
R = point_add(R, PQ, a, p)
elif b1 == 1:
R = point_add(R, P, a, p)
elif b2 == 1:
R = point_add(R, Q, a, p)
return R
```
## Jacobian Coordinates
More efficient for repeated operations.
### Conversion
```
# Affine to Jacobian
function affine_to_jacobian(P):
if P is infinity:
return (1, 1, 0) # Jacobian infinity
x, y = P
return (x, y, 1)
# Jacobian to Affine
function jacobian_to_affine(P, p):
X, Y, Z = P
if Z == 0:
return infinity
Z_inv = mod_inverse(Z, p)
Z_inv2 = mod_mul(Z_inv, Z_inv, p)
Z_inv3 = mod_mul(Z_inv2, Z_inv, p)
x = mod_mul(X, Z_inv2, p)
y = mod_mul(Y, Z_inv3, p)
return (x, y)
```
### Point Doubling (Jacobian)
For curve y² = x³ + 7 (a = 0):
```
function jacobian_double(P, p):
X, Y, Z = P
if Y == 0:
return (1, 1, 0) # infinity
# For a = 0: M = 3*X²
S = mod_mul(4, mod_mul(X, mod_mul(Y, Y, p), p), p)
M = mod_mul(3, mod_mul(X, X, p), p)
X3 = mod_sub(mod_mul(M, M, p), mod_mul(2, S, p), p)
Y3 = mod_sub(mod_mul(M, mod_sub(S, X3, p), p),
             mod_mul(8, mod_mul(mod_mul(Y, Y, p), mod_mul(Y, Y, p), p), p), p)  # M·(S - X3) - 8·Y⁴
Z3 = mod_mul(2, mod_mul(Y, Z, p), p)
return (X3, Y3, Z3)
```
### Point Addition (Jacobian + Affine)
Mixed addition is faster when one point is in affine:
```
function jacobian_add_affine(P, Q, p):
# P in Jacobian (X1, Y1, Z1), Q in affine (x2, y2)
X1, Y1, Z1 = P
x2, y2 = Q
if Z1 == 0:
return affine_to_jacobian(Q)
Z1Z1 = mod_mul(Z1, Z1, p)
U2 = mod_mul(x2, Z1Z1, p)
S2 = mod_mul(y2, mod_mul(Z1, Z1Z1, p), p)
H = mod_sub(U2, X1, p)
if H == 0:
    # Same x-coordinate: P == Q (fall back to doubling) or P == -Q (infinity)
    if mod_sub(S2, Y1, p) == 0:
        return jacobian_double(P, p)
    return (1, 1, 0)  # infinity
HH = mod_mul(H, H, p)
I = mod_mul(4, HH, p)
J = mod_mul(H, I, p)
r = mod_mul(2, mod_sub(S2, Y1, p), p)
V = mod_mul(X1, I, p)
X3 = mod_sub(mod_sub(mod_mul(r, r, p), J, p), mod_mul(2, V, p), p)
Y3 = mod_sub(mod_mul(r, mod_sub(V, X3, p), p), mod_mul(2, mod_mul(Y1, J, p), p), p)
# Z3 = (Z1 + H)² - Z1Z1 - HH = 2·Z1·H
Z3 = mod_sub(mod_sub(mod_mul(mod_add(Z1, H, p), mod_add(Z1, H, p), p), Z1Z1, p), HH, p)
return (X3, Y3, Z3)
```
## GLV Endomorphism (secp256k1)
### Scalar Decomposition
```
# Constants for secp256k1
LAMBDA = 0x5363AD4CC05C30E0A5261C028812645A122E22EA20816678DF02967C1B23BD72
BETA = 0x7AE96A2B657C07106E64479EAC3434E99CF0497512F58995C1396C28719501EE
# Decomposition coefficients (matching references/secp256k1-parameters.md)
A1 = 0x3086D221A7D46BCDE86C90E49284EB15
B1 = -0xE4437ED6010E88286F547FA90ABFE4C3
A2 = 0x114CA50F7A8E2F3F657C1108D9D44CFD8
B2 = A1
function glv_decompose(k, n):
# Compute c1 = round(b2 * k / n)
# Compute c2 = round(-b1 * k / n)
c1 = (B2 * k + n // 2) // n
c2 = (-B1 * k + n // 2) // n
# k1 = k - c1*A1 - c2*A2
# k2 = -c1*B1 - c2*B2
k1 = k - c1 * A1 - c2 * A2
k2 = -c1 * B1 - c2 * B2
return (k1, k2)
function glv_scalar_mult(k, P, p, n):
k1, k2 = glv_decompose(k, n)
# Compute endomorphism: φ(P) = (β*x, y)
x, y = P
phi_P = (mod_mul(BETA, x, p), y)
# Use Shamir's trick: k1*P + k2*φ(P)
return multi_scalar_mult(k1, P, k2, phi_P, 0, p)
```
## Batch Inversion
Amortize expensive inversions over multiple points:
```
function batch_invert(values, p):
n = len(values)
if n == 0:
return []
# Compute cumulative products
products = [values[0]]
for i in range(1, n):
products.append(mod_mul(products[-1], values[i], p))
# Invert the final product
inv = mod_inverse(products[-1], p)
# Compute individual inverses
inverses = [0] * n
for i in range(n - 1, 0, -1):
inverses[i] = mod_mul(inv, products[i - 1], p)
inv = mod_mul(inv, values[i], p)
inverses[0] = inv
return inverses
```
## Key Generation
```
function generate_keypair(G, n, p):
# Generate random private key
d = random_integer(1, n - 1)
# Compute public key
Q = scalar_mult(d, G)
return (d, Q)
```
## Point Compression/Decompression
```
function compress_point(P, p):
if P is infinity:
return bytes([0x00])
x, y = P
prefix = 0x02 if (y % 2 == 0) else 0x03
return bytes([prefix]) + x.to_bytes(32, 'big')
function decompress_point(compressed, a, b, p):
prefix = compressed[0]
if prefix == 0x00:
return infinity
x = int.from_bytes(compressed[1:], 'big')
# Compute y² = x³ + ax + b
y_squared = mod_add(mod_add(mod_mul(x, mod_mul(x, x, p), p),
mod_mul(a, x, p), p), b, p)
# Compute y = sqrt(y²)
y = mod_sqrt(y_squared, p)
# Select correct y based on prefix
if (prefix == 0x02) != (y % 2 == 0):
y = p - y
return (x, y)
```

View File

@@ -0,0 +1,194 @@
# secp256k1 Complete Parameters
## Curve Definition
**Name**: secp256k1 (Standards for Efficient Cryptography, prime field, 256-bit, Koblitz curve #1)
**Equation**: y² = x³ + 7 (mod p)
This is the short Weierstrass form with coefficients a = 0, b = 7.
## Field Parameters
### Prime Modulus p
```
Decimal:
115792089237316195423570985008687907853269984665640564039457584007908834671663
Hexadecimal:
0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEFFFFFC2F
Binary representation:
2²⁵⁶ - 2³² - 2⁹ - 2⁸ - 2⁷ - 2⁶ - 2⁴ - 1
= 2²⁵⁶ - 2³² - 977
```
**Special form benefits**:
- Efficient modular reduction using: c mod p = c_low + c_high × (2³² + 977)
- Near-Mersenne prime enables fast arithmetic
### Group Order n
```
Decimal:
115792089237316195423570985008687907852837564279074904382605163141518161494337
Hexadecimal:
0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
```
The number of points on the curve, including the point at infinity.
### Cofactor h
```
h = 1
```
Cofactor 1 means the group order n equals the curve order, simplifying security analysis and eliminating small subgroup attacks.
## Generator Point G
### Compressed Form
```
02 79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798
```
The 02 prefix indicates the y-coordinate is even.
### Uncompressed Form
```
04 79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798
483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8
```
### Individual Coordinates
**Gx**:
```
Decimal:
55066263022277343669578718895168534326250603453777594175500187360389116729240
Hexadecimal:
0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798
```
**Gy**:
```
Decimal:
32670510020758816978083085130507043184471273380659243275938904335757337482424
Hexadecimal:
0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8
```
## Endomorphism Parameters
secp256k1 has an efficiently computable endomorphism φ: (x, y) → (βx, y).
### β (Beta)
```
Hexadecimal:
0x7AE96A2B657C07106E64479EAC3434E99CF0497512F58995C1396C28719501EE
Property: β³ ≡ 1 (mod p)
```
### λ (Lambda)
```
Hexadecimal:
0x5363AD4CC05C30E0A5261C028812645A122E22EA20816678DF02967C1B23BD72
Property: λ³ ≡ 1 (mod n)
Relationship: φ(P) = λP for all points P
```
### GLV Decomposition Constants
For splitting scalar k into k₁ + k₂λ:
```
a₁ = 0x3086D221A7D46BCDE86C90E49284EB15
b₁ = -0xE4437ED6010E88286F547FA90ABFE4C3
a₂ = 0x114CA50F7A8E2F3F657C1108D9D44CFD8
b₂ = a₁
```
## Derived Constants
### Field Characteristics
```
(p + 1) / 4 = 0x3FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFBFFFFF0C
Used for computing modular square roots via the p ≡ 3 (mod 4) exponentiation shortcut (no general Tonelli-Shanks needed)
```
### Order Characteristics
```
(n - 1) / 2 = 0x7FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF5D576E7357A4501DDFE92F46681B20A0
Used in low-S normalization for ECDSA signatures
```
## Validation Formulas
### Point on Curve Check
For point (x, y), verify:
```
y² ≡ x³ + 7 (mod p)
```
### Generator Verification
Verify G is on curve:
```
Gy² mod p = 0x9C47D08FFB10D4B8 ... (truncated for display)
Gx³ + 7 mod p = same value
```
### Order Verification
Verify nG = O (point at infinity):
```
Computing n × G should yield the identity element
```
## Bit Lengths
| Parameter | Bits | Bytes |
|-----------|------|-------|
| p (prime) | 256 | 32 |
| n (order) | 256 | 32 |
| Private key | 256 | 32 |
| Public key (compressed) | 264 | 33 |
| Public key (uncompressed) | 520 | 65 |
| ECDSA signature | 512 | 64 |
| Schnorr signature | 512 | 64 |
## Security Level
- **Equivalent symmetric key strength**: 128 bits
- **Best known attack complexity**: ~2¹²⁸ operations (Pollard's rho)
- **Safe until**: Quantum computers with ~1500+ logical qubits
## ASN.1 OID
```
1.3.132.0.10
iso(1) identified-organization(3) certicom(132) curve(0) secp256k1(10)
```
## Comparison with Other Curves
| Curve | Field Size | Security | Speed | Use Case |
|-------|------------|----------|-------|----------|
| secp256k1 | 256-bit | 128-bit | Fast (Koblitz) | Bitcoin, Nostr |
| secp256r1 (P-256) | 256-bit | 128-bit | Moderate | TLS, general |
| Curve25519 | 255-bit | ~128-bit | Very fast | Modern crypto |
| secp384r1 (P-384) | 384-bit | 192-bit | Slower | High security |

View File

@@ -0,0 +1,291 @@
# Elliptic Curve Security Analysis
Security properties, attack vectors, and mitigations for elliptic curve cryptography.
## The Discrete Logarithm Problem (ECDLP)
### Definition
Given points P and Q = kP on an elliptic curve, find the scalar k.
**Security assumption**: For properly chosen curves, this problem is computationally infeasible.
### Best Known Attacks
#### Generic Attacks (Work on Any Group)
| Attack | Complexity | Notes |
|--------|------------|-------|
| Baby-step Giant-step | O(√n) space and time | Requires √n storage |
| Pollard's rho | O(√n) time, O(1) space | Practical for large groups |
| Pollard's lambda | O(√n) | When k is in known range |
| Pohlig-Hellman | O(√p) where p is largest prime factor | Exploits factorization of n |
For secp256k1 (n ≈ 2²⁵⁶):
- Generic attack complexity: ~2¹²⁸ operations
- Equivalent to 128-bit symmetric security
#### Curve-Specific Attacks
| Attack | Applicable When | Mitigation |
|--------|-----------------|------------|
| MOV/FR reduction | Low embedding degree | Use curves with high embedding degree |
| Anomalous curve attack | n = p | Ensure n ≠ p |
| GHS attack | Extension field curves | Use prime field curves |
**secp256k1 is immune to all known curve-specific attacks**.
## Side-Channel Attacks
### Timing Attacks
**Vulnerability**: Execution time varies based on secret data.
**Examples**:
- Conditional branches on secret bits
- Early exit conditions
- Variable-time modular operations
**Mitigations**:
- Constant-time algorithms (Montgomery ladder)
- Fixed execution paths
- Dummy operations to equalize timing
### Power Analysis
**Simple Power Analysis (SPA)**: Single trace reveals operations.
- Double-and-add visible as different power signatures
- Mitigation: Montgomery ladder (uniform operations)
**Differential Power Analysis (DPA)**: Statistical analysis of many traces.
- Mitigation: Point blinding, scalar blinding
### Cache Attacks
**FLUSH+RELOAD Attack**:
```
1. Attacker flushes cache line containing lookup table
2. Victim performs table lookup based on secret
3. Attacker measures reload time to determine which entry was accessed
```
**Mitigations**:
- Avoid secret-dependent table lookups
- Use constant-time table access patterns
- Scatter tables to prevent cache line sharing
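For example, a constant-time table scan in Go (a sketch of the pattern, not libsecp256k1's actual code):
```go
package ctlookup

import "crypto/subtle"

// Select returns table[idx] while touching every entry, so the memory
// access pattern does not depend on the secret index.
func Select(table [][32]byte, idx int) [32]byte {
	var out [32]byte
	for i := range table {
		// mask is 1 iff i == idx, computed without branching
		mask := subtle.ConstantTimeEq(int32(i), int32(idx))
		for j := range out {
			out[j] = byte(subtle.ConstantTimeSelect(mask, int(table[i][j]), int(out[j])))
		}
	}
	return out
}
```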
### Electromagnetic (EM) Attacks
Similar to power analysis but captures electromagnetic emissions.
**Mitigations**:
- Shielding
- Same algorithmic protections as power analysis
## Implementation Vulnerabilities
### k-Reuse in ECDSA
**The Sony PS3 Hack (2010)**:
If the same k is used for two signatures (r₁, s₁) and (r₂, s₂) on messages m₁ and m₂:
```
s₁ = k⁻¹(e₁ + rd) mod n
s₂ = k⁻¹(e₂ + rd) mod n
Since k is the same:
s₁ - s₂ = k⁻¹(e₁ - e₂) mod n
k = (e₁ - e₂)(s₁ - s₂)⁻¹ mod n
Once k is known:
d = (s₁k - e₁)r⁻¹ mod n
```
**Mitigation**: Use deterministic k (RFC 6979).
### Weak Random k
Even with unique k values, if the RNG is biased:
- Lattice-based attacks can recover the private key
- A bias of even ~1% in k can be exploited given enough signatures
**Mitigations**:
- Use cryptographically secure RNG
- Use deterministic k (RFC 6979)
- Verify k is in valid range [1, n-1]
### Invalid Curve Attacks
**Attack**: Attacker provides point not on the curve.
- Point may be on a weaker curve
- Operations may leak information
**Mitigation**: Always validate points are on curve:
```
Verify: y² ≡ x³ + ax + b (mod p)
```
### Small Subgroup Attacks
**Attack**: If cofactor h > 1, points of small order exist.
- Attacker sends point of small order
- Response reveals private key mod (small order)
**Mitigation**:
- Use curves with cofactor 1 (secp256k1 has h = 1)
- Multiply received points by cofactor
- Validate point order
### Fault Attacks
**Attack**: Induce computational errors (voltage glitches, radiation).
- Corrupted intermediate values may leak information
- Differential fault analysis can recover keys
**Mitigations**:
- Redundant computations with comparison
- Verify final results
- Hardware protections
## Signature Malleability
### ECDSA Malleability
Given valid signature (r, s), signature (r, n - s) is also valid for the same message.
**Impact**: Transaction ID malleability (historical Bitcoin issue)
**Mitigation**: Enforce low-S normalization:
```
if s > n/2:
s = n - s
```
### Schnorr Non-Malleability
BIP-340 Schnorr signatures are non-malleable by design:
- Use x-only public keys
- Deterministic nonce derivation
## Quantum Threats
### Shor's Algorithm
**Threat**: Polynomial-time discrete log on quantum computers.
- Estimated to require on the order of a few thousand error-corrected logical qubits for secp256k1
- Current quantum computers offer at most a few thousand noisy physical qubits, with no large-scale logical qubits
**Timeline**: Estimated 10-20+ years for cryptographically relevant quantum computers.
### Migration Strategy
1. **Monitor**: Track quantum computing progress
2. **Prepare**: Develop post-quantum alternatives
3. **Hybrid**: Use classical + post-quantum in transition
4. **Migrate**: Full transition when necessary
### Post-Quantum Alternatives
- Lattice-based signatures (CRYSTALS-Dilithium)
- Hash-based signatures (SPHINCS+)
- Code-based cryptography
## Best Practices
### Key Generation
```
DO:
- Use cryptographically secure RNG
- Validate private key is in [1, n-1]
- Verify public key is on curve
- Verify public key is not point at infinity
DON'T:
- Use predictable seeds
- Use truncated random values
- Skip validation
```
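A rejection-sampling sketch of the DO list above (reusing n from earlier; the retry bound is arbitrary):
```go
import (
	"crypto/rand"
	"errors"
	"math/big"
)

// generatePrivateKey draws 32 bytes from a CSPRNG and rejects values
// outside [1, n-1], retrying until one falls in range.
func generatePrivateKey() (*big.Int, error) {
	for i := 0; i < 128; i++ {
		var buf [32]byte
		if _, err := rand.Read(buf[:]); err != nil {
			return nil, err
		}
		d := new(big.Int).SetBytes(buf[:])
		if d.Sign() > 0 && d.Cmp(n) < 0 {
			return d, nil
		}
	}
	return nil, errors.New("no valid key after many retries; RNG suspect")
}
```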
### Signature Generation
```
DO:
- Use RFC 6979 for deterministic k
- Validate all inputs
- Use constant-time operations
- Clear sensitive memory after use
DON'T:
- Reuse k values
- Use weak/biased RNG
- Skip low-S normalization (ECDSA)
```
### Signature Verification
```
DO:
- Validate r, s are in [1, n-1]
- Validate public key is on curve
- Validate public key is not infinity
- Use batch verification when possible
DON'T:
- Skip any validation steps
- Accept malformed signatures
```
### Public Key Handling
```
DO:
- Validate received points are on curve
- Check point is not infinity
- Prefer compressed format for storage
DON'T:
- Accept unvalidated points
- Skip curve membership check
```
## Security Checklist
### Implementation Review
- [ ] All scalar multiplications are constant-time
- [ ] No secret-dependent branches
- [ ] No secret-indexed table lookups
- [ ] Memory is zeroized after use
- [ ] Random k uses CSPRNG or RFC 6979
- [ ] All received points are validated
- [ ] Private keys are in valid range
- [ ] Signatures use low-S normalization
### Operational Security
- [ ] Private keys stored securely (HSM, secure enclave)
- [ ] Key derivation uses proper KDF
- [ ] Backups are encrypted
- [ ] Key rotation policy exists
- [ ] Audit logging enabled
- [ ] Incident response plan exists
## Security Levels Comparison
| Curve | Bits | Symmetric Equivalent | RSA Equivalent |
|-------|------|---------------------|----------------|
| secp192r1 | 192 | 96 | 1536 |
| secp224r1 | 224 | 112 | 2048 |
| secp256k1 | 256 | 128 | 3072 |
| secp384r1 | 384 | 192 | 7680 |
| secp521r1 | 521 | 256 | 15360 |
## References
- NIST SP 800-57: Recommendation for Key Management
- SEC 1: Elliptic Curve Cryptography
- RFC 6979: Deterministic Usage of DSA and ECDSA
- BIP-340: Schnorr Signatures for secp256k1
- SafeCurves: Choosing Safe Curves for Elliptic-Curve Cryptography

View File

@@ -0,0 +1,478 @@
---
name: go-memory-optimization
description: This skill should be used when optimizing Go code for memory efficiency, reducing GC pressure, implementing object pooling, analyzing escape behavior, choosing between fixed-size arrays and slices, designing worker pools, or profiling memory allocations. Provides comprehensive knowledge of Go's memory model, stack vs heap allocation, sync.Pool patterns, goroutine reuse, and GC tuning.
---
# Go Memory Optimization
## Overview
This skill provides guidance on optimizing Go programs for memory efficiency and reduced garbage collection overhead. Topics include stack allocation semantics, fixed-size types, escape analysis, object pooling, goroutine management, and GC tuning.
## Core Principles
### The Allocation Hierarchy
Prefer allocations in this order (fastest to slowest):
1. **Stack allocation** - Zero GC cost, automatic cleanup on function return
2. **Pooled objects** - Amortized allocation cost via sync.Pool
3. **Pre-allocated buffers** - Single allocation, reused across operations
4. **Heap allocation** - GC-managed, use when lifetime exceeds function scope
### When Optimization Matters
Focus memory optimization efforts on:
- Hot paths executed thousands/millions of times per second
- Large objects (>32KB) that stress the GC
- Long-running services where GC pauses affect latency
- Memory-constrained environments
Avoid premature optimization. Profile first with `go tool pprof` to identify actual bottlenecks.
## Fixed-Size Types vs Slices
### Stack Allocation with Arrays
Arrays with known compile-time size can be stack-allocated, avoiding heap entirely:
```go
// HEAP: slice header + backing array escape to heap
func processSlice() []byte {
data := make([]byte, 32)
// ... use data
return data // escapes
}
// STACK: fixed array stays on stack if doesn't escape
func processArray() {
var data [32]byte // stack-allocated
// ... use data
} // automatically cleaned up
```
### Fixed-Size Binary Types Pattern
Define types with explicit sizes for protocol fields, cryptographic values, and identifiers:
```go
// Binary types enforce length and enable stack allocation
type EventID [32]byte // SHA256 hash
type Pubkey [32]byte // Schnorr public key
type Signature [64]byte // Schnorr signature
// Methods operate on value receivers when size permits
func (id EventID) Hex() string {
return hex.EncodeToString(id[:])
}
func (id EventID) IsZero() bool {
return id == EventID{} // efficient zero-value comparison
}
```
### Size Thresholds
| Size | Recommendation |
|------|----------------|
| ≤64 bytes | Pass by value, stack-friendly |
| 65-128 bytes | Consider context; value for read-only, pointer for mutation |
| >128 bytes | Pass by pointer to avoid copy overhead |
### Array to Slice Conversion
Convert fixed arrays to slices only at API boundaries:
```go
type Hash [32]byte

func (h Hash) Bytes() []byte {
	return h[:] // creates slice header, array stays on stack if h does
}

// Prefer methods that accept arrays directly
func VerifySignature(pubkey Pubkey, msg []byte, sig Signature) bool {
	// pubkey and sig are stack-allocated in the caller
	// ... perform the actual verification ...
	return true
}
```
## Escape Analysis
### Understanding Escape
Variables "escape" to the heap when the compiler cannot prove their lifetime is bounded by the stack frame. Check escape behavior with:
```bash
go build -gcflags="-m -m" ./...
```
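For instance, returning a pointer to a local typically produces a `moved to heap` diagnostic (file name and positions here are illustrative):
```go
// esc.go
package esc

func leak() *int {
	x := 42
	return &x // pointer to a local: x is moved to the heap
}
```
```bash
$ go build -gcflags="-m" .
./esc.go:5:2: moved to heap: x
```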
### Common Escape Causes
```go
// 1. Returning pointers to local variables
func escapes1() *int {
x := 42
return &x // x escapes
}
// 2. Storing in interface{}
func escapes2(x int) interface{} {
return x // x escapes (boxed)
}
// 3. Closures capturing by reference
func escapes3() func() int {
x := 42
return func() int { return x } // x escapes
}
// 4. Slice/map with unknown capacity
func escapes4(n int) []byte {
return make([]byte, n) // escapes (size unknown at compile time)
}
// 5. Sending pointers to channels
func escapes5(ch chan *int) {
x := 42
ch <- &x // x escapes
}
```
### Preventing Escape
```go
// 1. Accept pointers, don't return them
func noEscape1(result *[32]byte) {
// caller owns memory, function fills it
copy(result[:], computeHash())
}
// 2. Use fixed-size arrays
func noEscape2() {
var buf [1024]byte // known size, stack-allocated
process(buf[:])
}
// 3. Preallocate with known capacity
func noEscape3() {
buf := make([]byte, 0, 1024) // may stay on stack
// ... append up to 1024 bytes
}
// 4. Avoid interface{} on hot paths
func noEscape4(x int) int {
return x * 2 // no boxing
}
```
## sync.Pool Usage
### Basic Pattern
```go
var bufferPool = sync.Pool{
	New: func() interface{} {
		return make([]byte, 0, 4096)
	},
}

func processRequest(data []byte) {
	buf := bufferPool.Get().([]byte)
	buf = buf[:0] // reset length, keep capacity
	// Note: defer evaluates buf here; if later appends grow it past
	// capacity, the reallocated slice will not return to the pool
	defer bufferPool.Put(buf)
	// use buf...
}
```
### Typed Pool Wrapper
```go
type BufferPool struct {
pool sync.Pool
size int
}
func NewBufferPool(size int) *BufferPool {
return &BufferPool{
pool: sync.Pool{
New: func() interface{} {
b := make([]byte, size)
return &b
},
},
size: size,
}
}
func (p *BufferPool) Get() *[]byte {
return p.pool.Get().(*[]byte)
}
func (p *BufferPool) Put(b *[]byte) {
if b == nil || cap(*b) < p.size {
return // don't pool undersized buffers
}
*b = (*b)[:p.size] // reset to full size
p.pool.Put(b)
}
```
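Typical usage of the wrapper (the pool is shared, not created per call; `process` is a stand-in for real work):
```go
var pagePool = NewBufferPool(4096)

func handle(data []byte) {
	b := pagePool.Get()
	defer pagePool.Put(b)
	n := copy(*b, data)
	process((*b)[:n])
}
```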
### Pool Anti-Patterns
```go
// BAD: Pool of pointers to small values (overhead exceeds benefit)
var intPool = sync.Pool{New: func() interface{} { return new(int) }}

// BAD: Not resetting state before Put
bufPool.Put(buf) // may contain sensitive data

// BAD: Pooling objects with goroutine-local state
var connPool = sync.Pool{...} // connections are stateful

// BAD: Holding pooled objects across long delays (the GC empties pools,
// so reuse is lost and a late Put gains nothing)
obj := pool.Get()
// ... long delay
pool.Put(obj)
```
### When to Use sync.Pool
| Use Case | Pool? | Reason |
|----------|-------|--------|
| Buffers in HTTP handlers | Yes | High allocation rate, short lifetime |
| Encoder/decoder state | Yes | Expensive to initialize |
| Small values (<64 bytes) | No | Pointer overhead exceeds benefit |
| Long-lived objects | No | Pools are for short-lived reuse |
| Objects with cleanup needs | No | Pool provides no finalization |
## Goroutine Pooling
### Worker Pool Pattern
```go
type WorkerPool struct {
jobs chan func()
workers int
wg sync.WaitGroup
}
func NewWorkerPool(workers, queueSize int) *WorkerPool {
p := &WorkerPool{
jobs: make(chan func(), queueSize),
workers: workers,
}
p.wg.Add(workers)
for i := 0; i < workers; i++ {
go p.worker()
}
return p
}
func (p *WorkerPool) worker() {
defer p.wg.Done()
for job := range p.jobs {
job()
}
}
func (p *WorkerPool) Submit(job func()) {
p.jobs <- job
}
func (p *WorkerPool) Shutdown() {
close(p.jobs)
p.wg.Wait()
}
```
### Bounded Concurrency with Semaphore
```go
type Semaphore struct {
sem chan struct{}
}
func NewSemaphore(n int) *Semaphore {
return &Semaphore{sem: make(chan struct{}, n)}
}
func (s *Semaphore) Acquire() { s.sem <- struct{}{} }
func (s *Semaphore) Release() { <-s.sem }
// Usage
sem := NewSemaphore(runtime.GOMAXPROCS(0))
for _, item := range items {
sem.Acquire()
go func(it Item) {
defer sem.Release()
process(it)
}(item)
}
```
### Goroutine Reuse Benefits
| Metric | Spawn per request | Worker pool |
|--------|-------------------|-------------|
| Goroutine creation | O(n) | O(workers) |
| Stack allocation | 2KB × n | 2KB × workers |
| Scheduler overhead | Higher | Lower |
| GC pressure | Higher | Lower |
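Putting the WorkerPool above to work might look like this:
```go
func run() {
	pool := NewWorkerPool(8, 256) // 8 workers, queue of 256 jobs
	for i := 0; i < 1000; i++ {
		i := i // capture loop variable (needed before Go 1.22)
		pool.Submit(func() {
			_ = i * i // stand-in for real work
		})
	}
	pool.Shutdown() // close the queue and wait for workers to drain it
}
```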
## Reducing GC Pressure
### Allocation Reduction Strategies
```go
// 1. Reuse buffers across iterations
buf := make([]byte, 0, 4096)
for _, item := range items {
buf = buf[:0] // reset without reallocation
buf = processItem(buf, item)
}
// 2. Preallocate slices with known length
result := make([]Item, 0, len(input)) // avoid append reallocations
for _, in := range input {
result = append(result, transform(in))
}
// 3. Struct embedding instead of pointer fields
type Event struct {
ID [32]byte // embedded, not *[32]byte
Pubkey [32]byte // single allocation for entire struct
Signature [64]byte
Content string // only string data on heap
}
// 4. String interning for repeated values
var kindStrings = map[int]string{
0: "set_metadata",
1: "text_note",
// ...
}
```
### GC Tuning
```go
import "runtime/debug"
func init() {
// GOGC: target heap growth percentage (default 100)
// Lower = more frequent GC, less memory
// Higher = less frequent GC, more memory
debug.SetGCPercent(50) // GC when heap grows 50%
// GOMEMLIMIT: soft memory limit (Go 1.19+)
// GC becomes more aggressive as limit approaches
debug.SetMemoryLimit(512 << 20) // 512MB limit
}
```
Environment variables:
```bash
GOGC=50 # More aggressive GC
GOMEMLIMIT=512MiB # Soft memory limit
GODEBUG=gctrace=1 # GC trace output
```
### Arena Allocation (Go 1.20+, experimental)
```go
//go:build goexperiment.arenas
import "arena"
func processLargeDataset(data []byte) Result {
	a := arena.NewArena()
	defer a.Free() // bulk free all allocations
	// All allocations from the arena are freed together
	items := arena.MakeSlice[Item](a, 0, 1000)
	// ... process data into items
	// Copy the result out of the arena before Free
	return copyResult(items)
}
```
## Memory Profiling
### Heap Profile
```go
import "runtime/pprof"
func captureHeapProfile() {
f, _ := os.Create("heap.prof")
defer f.Close()
runtime.GC() // get accurate picture
pprof.WriteHeapProfile(f)
}
```
```bash
go tool pprof -http=:8080 heap.prof
go tool pprof -alloc_space heap.prof # total allocations
go tool pprof -inuse_space heap.prof # current usage
```
### Allocation Benchmarks
```go
func BenchmarkAllocation(b *testing.B) {
b.ReportAllocs()
for i := 0; i < b.N; i++ {
result := processData(input)
_ = result
}
}
```
Output interpretation:
```
BenchmarkAllocation-8   1000000   1234 ns/op   256 B/op   3 allocs/op
                                               ↑          ↑
                                               bytes/op   allocations/op
```
### Live Memory Monitoring
```go
func printMemStats() {
var m runtime.MemStats
runtime.ReadMemStats(&m)
fmt.Printf("Alloc: %d MB\n", m.Alloc/1024/1024)
fmt.Printf("TotalAlloc: %d MB\n", m.TotalAlloc/1024/1024)
fmt.Printf("Sys: %d MB\n", m.Sys/1024/1024)
fmt.Printf("NumGC: %d\n", m.NumGC)
fmt.Printf("GCPause: %v\n", time.Duration(m.PauseNs[(m.NumGC+255)%256]))
}
```
## Common Patterns Reference
For detailed code examples and patterns, see `references/patterns.md`:
- Buffer pool implementations
- Zero-allocation JSON encoding
- Memory-efficient string building
- Slice capacity management
- Struct layout optimization
## Checklist for Memory-Critical Code
1. [ ] Profile before optimizing (`go tool pprof`)
2. [ ] Check escape analysis output (`-gcflags="-m"`)
3. [ ] Use fixed-size arrays for known-size data
4. [ ] Implement sync.Pool for frequently allocated objects
5. [ ] Preallocate slices with known capacity
6. [ ] Reuse buffers instead of allocating new ones
7. [ ] Consider struct field ordering for alignment
8. [ ] Benchmark with `-benchmem` flag
9. [ ] Set appropriate GOGC/GOMEMLIMIT for production
10. [ ] Monitor GC behavior with GODEBUG=gctrace=1

View File

@@ -0,0 +1,594 @@
# Go Memory Optimization Patterns
Detailed code examples and patterns for memory-efficient Go programming.
## Buffer Pool Implementations
### Tiered Buffer Pool
For workloads with varying buffer sizes:
```go
type TieredPool struct {
small sync.Pool // 1KB
medium sync.Pool // 16KB
large sync.Pool // 256KB
}
func NewTieredPool() *TieredPool {
return &TieredPool{
small: sync.Pool{New: func() interface{} { return make([]byte, 1024) }},
medium: sync.Pool{New: func() interface{} { return make([]byte, 16384) }},
large: sync.Pool{New: func() interface{} { return make([]byte, 262144) }},
}
}
func (p *TieredPool) Get(size int) []byte {
switch {
case size <= 1024:
return p.small.Get().([]byte)[:size]
case size <= 16384:
return p.medium.Get().([]byte)[:size]
case size <= 262144:
return p.large.Get().([]byte)[:size]
default:
return make([]byte, size) // too large for pool
}
}
func (p *TieredPool) Put(b []byte) {
switch cap(b) {
case 1024:
p.small.Put(b[:cap(b)])
case 16384:
p.medium.Put(b[:cap(b)])
case 262144:
p.large.Put(b[:cap(b)])
}
// Non-standard sizes are not pooled
}
```
### bytes.Buffer Pool
```go
var bufferPool = sync.Pool{
New: func() interface{} {
return new(bytes.Buffer)
},
}
func GetBuffer() *bytes.Buffer {
return bufferPool.Get().(*bytes.Buffer)
}
func PutBuffer(b *bytes.Buffer) {
b.Reset()
bufferPool.Put(b)
}
// Usage
func processData(data []byte) string {
buf := GetBuffer()
defer PutBuffer(buf)
buf.WriteString("prefix:")
buf.Write(data)
buf.WriteString(":suffix")
return buf.String() // allocates new string
}
```
## Zero-Allocation JSON Encoding
### Pre-allocated Encoder
```go
type JSONEncoder struct {
buf []byte
scratch [64]byte // for number formatting
}
func (e *JSONEncoder) Reset() {
e.buf = e.buf[:0]
}
func (e *JSONEncoder) Bytes() []byte {
return e.buf
}
func (e *JSONEncoder) WriteString(s string) {
e.buf = append(e.buf, '"')
for i := 0; i < len(s); i++ {
c := s[i]
switch c {
case '"':
e.buf = append(e.buf, '\\', '"')
case '\\':
e.buf = append(e.buf, '\\', '\\')
case '\n':
e.buf = append(e.buf, '\\', 'n')
case '\r':
e.buf = append(e.buf, '\\', 'r')
case '\t':
e.buf = append(e.buf, '\\', 't')
default:
if c < 0x20 {
e.buf = append(e.buf, '\\', 'u', '0', '0',
hexDigits[c>>4], hexDigits[c&0xf])
} else {
e.buf = append(e.buf, c)
}
}
}
e.buf = append(e.buf, '"')
}
func (e *JSONEncoder) WriteInt(n int64) {
e.buf = strconv.AppendInt(e.buf, n, 10)
}
func (e *JSONEncoder) WriteHex(b []byte) {
e.buf = append(e.buf, '"')
for _, v := range b {
e.buf = append(e.buf, hexDigits[v>>4], hexDigits[v&0xf])
}
e.buf = append(e.buf, '"')
}
var hexDigits = [16]byte{'0', '1', '2', '3', '4', '5', '6', '7',
'8', '9', 'a', 'b', 'c', 'd', 'e', 'f'}
```
### Append-Based Encoding
```go
// AppendJSON appends JSON representation to dst, returning extended slice
func (ev *Event) AppendJSON(dst []byte) []byte {
dst = append(dst, `{"id":"`...)
dst = appendHex(dst, ev.ID[:])
dst = append(dst, `","pubkey":"`...)
dst = appendHex(dst, ev.Pubkey[:])
dst = append(dst, `","created_at":`...)
dst = strconv.AppendInt(dst, ev.CreatedAt, 10)
dst = append(dst, `,"kind":`...)
dst = strconv.AppendInt(dst, int64(ev.Kind), 10)
dst = append(dst, `,"content":`...)
dst = appendJSONString(dst, ev.Content)
dst = append(dst, '}')
return dst
}
// Usage with pre-allocated buffer
func encodeEvents(events []Event) []byte {
// Estimate size: ~500 bytes per event
buf := make([]byte, 0, len(events)*500)
buf = append(buf, '[')
for i, ev := range events {
if i > 0 {
buf = append(buf, ',')
}
buf = ev.AppendJSON(buf)
}
buf = append(buf, ']')
return buf
}
```
## Memory-Efficient String Building
### strings.Builder with Preallocation
```go
func buildQuery(parts []string) string {
	if len(parts) == 0 {
		return ""
	}
	// Calculate total length up front
	total := len(parts) - 1 // for separators
	for _, p := range parts {
		total += len(p)
	}
	var b strings.Builder
	b.Grow(total) // single allocation
	for i, p := range parts {
		if i > 0 {
			b.WriteByte(',')
		}
		b.WriteString(p)
	}
	return b.String()
}
```
### Avoiding String Concatenation
```go
// BAD: O(n^2) allocations
func buildPath(parts []string) string {
result := ""
for _, p := range parts {
result += "/" + p // new allocation each iteration
}
return result
}
// GOOD: O(n) with single allocation
func buildPath(parts []string) string {
if len(parts) == 0 {
return ""
}
n := len(parts) // for slashes
for _, p := range parts {
n += len(p)
}
b := make([]byte, 0, n)
for _, p := range parts {
b = append(b, '/')
b = append(b, p...)
}
return string(b)
}
```
### Unsafe String/Byte Conversion
```go
import "unsafe"
// Zero-allocation string to []byte (read-only!)
func unsafeBytes(s string) []byte {
return unsafe.Slice(unsafe.StringData(s), len(s))
}
// Zero-allocation []byte to string (b must not be modified!)
func unsafeString(b []byte) string {
return unsafe.String(unsafe.SliceData(b), len(b))
}
// Use when:
// 1. Converting string for read-only operations (hashing, comparison)
// 2. Returning []byte from buffer that won't be modified
// 3. Performance-critical paths with careful ownership management
```
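As a usage sketch, hashing a string without an intermediate copy respects the read-only contract, since `sha256.Sum256` only reads its input:
```go
import "crypto/sha256"

// hashString hashes s without first copying it into a []byte.
func hashString(s string) [32]byte {
	return sha256.Sum256(unsafeBytes(s))
}
```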
## Slice Capacity Management
### Append Growth Patterns
```go
// Slice growth roughly doubles while the slice is small, then tapers
// to ~25% per step (threshold 256 elements since Go 1.18; 1024 before)
// BAD: Unknown final size causes multiple reallocations
func collectItems() []Item {
var items []Item
for item := range source {
items = append(items, item) // may reallocate multiple times
}
return items
}
// GOOD: Preallocate when size is known
func collectItems(n int) []Item {
items := make([]Item, 0, n)
for item := range source {
items = append(items, item)
}
return items
}
// GOOD: Use slice header trick for uncertain sizes
func collectItems() []Item {
items := make([]Item, 0, 32) // reasonable initial capacity
for item := range source {
items = append(items, item)
}
// Prevent later appends from growing into the shared backing array
// (a copy is still needed to actually release the excess memory)
return items[:len(items):len(items)]
}
```
### Slice Recycling
```go
// Reuse slice backing array
func processInBatches(items []Item, batchSize int) {
batch := make([]Item, 0, batchSize)
for i, item := range items {
batch = append(batch, item)
if len(batch) == batchSize || i == len(items)-1 {
processBatch(batch)
batch = batch[:0] // reset length, keep capacity
}
}
}
```
### Preventing Slice Memory Leaks
```go
// BAD: Subslice keeps entire backing array alive
func getFirst10(data []byte) []byte {
return data[:10] // entire data array stays in memory
}
// GOOD: Copy to release original array
func getFirst10(data []byte) []byte {
result := make([]byte, 10)
copy(result, data[:10])
return result
}
// Alternative: explicit capacity limit (note: still pins the original array)
func getFirst10(data []byte) []byte {
return data[:10:10] // cap=10, can't accidentally grow into original
}
```
## Struct Layout Optimization
### Field Ordering for Alignment
```go
// BAD: 32 bytes due to padding
type BadLayout struct {
a bool // 1 byte + 7 padding
b int64 // 8 bytes
c bool // 1 byte + 7 padding
d int64 // 8 bytes
}
// GOOD: 24 bytes with optimal ordering
type GoodLayout struct {
b int64 // 8 bytes
d int64 // 8 bytes
a bool // 1 byte
c bool // 1 byte + 6 padding
}
// Rule: Order fields from largest to smallest alignment
```
### Checking Struct Size
```go
func init() {
// Compile-time size assertions
var _ [24]byte = [unsafe.Sizeof(GoodLayout{})]byte{}
// Or runtime check
if unsafe.Sizeof(Event{}) > 256 {
panic("Event struct too large")
}
}
```
### Cache-Line Optimization
```go
const CacheLineSize = 64
// Pad struct to prevent false sharing in concurrent access
type PaddedCounter struct {
value uint64
_ [CacheLineSize - 8]byte // padding
}
type Counters struct {
reads PaddedCounter
writes PaddedCounter
// Each counter on separate cache line
}
```
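With the padding in place, concurrent updates to the two counters touch separate cache lines. A sketch with atomics (same package as `Counters` assumed):
```go
import "sync/atomic"

// recordRead and recordWrite update independent counters without false
// sharing: each hot word sits on its own cache line.
func recordRead(c *Counters)  { atomic.AddUint64(&c.reads.value, 1) }
func recordWrite(c *Counters) { atomic.AddUint64(&c.writes.value, 1) }
```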
## Object Reuse Patterns
### Reset Methods
```go
type Request struct {
Method string
Path string
Headers map[string]string
Body []byte
}
func (r *Request) Reset() {
r.Method = ""
r.Path = ""
// Reuse map, just clear entries
for k := range r.Headers {
delete(r.Headers, k)
}
r.Body = r.Body[:0]
}
var requestPool = sync.Pool{
New: func() interface{} {
return &Request{
Headers: make(map[string]string, 8),
Body: make([]byte, 0, 1024),
}
},
}
```
### Flyweight Pattern
```go
// Share immutable parts across many instances
type Event struct {
kind *Kind // shared, immutable
content string
}
type Kind struct {
ID int
Name string
Description string
}
var kindRegistry = map[int]*Kind{
0: {0, "set_metadata", "User metadata"},
1: {1, "text_note", "Text note"},
// ... pre-allocated, shared across all events
}
func NewEvent(kindID int, content string) Event {
return Event{
kind: kindRegistry[kindID], // no allocation
content: content,
}
}
```
## Channel Patterns for Memory Efficiency
### Buffered Channels as Object Pools
```go
type SimplePool struct {
pool chan *Buffer
}
func NewSimplePool(size int) *SimplePool {
p := &SimplePool{pool: make(chan *Buffer, size)}
for i := 0; i < size; i++ {
p.pool <- NewBuffer()
}
return p
}
func (p *SimplePool) Get() *Buffer {
select {
case b := <-p.pool:
return b
default:
return NewBuffer() // pool empty, allocate new
}
}
func (p *SimplePool) Put(b *Buffer) {
select {
case p.pool <- b:
default:
// pool full, let GC collect
}
}
```
### Batch Processing Channels
```go
// Reduce channel overhead by batching
func batchProcessor(input <-chan Item, batchSize int) <-chan []Item {
output := make(chan []Item)
go func() {
defer close(output)
batch := make([]Item, 0, batchSize)
for item := range input {
batch = append(batch, item)
if len(batch) == batchSize {
output <- batch
batch = make([]Item, 0, batchSize)
}
}
if len(batch) > 0 {
output <- batch
}
}()
return output
}
```
## Advanced Techniques
### Manual Memory Management with mmap
```go
import "golang.org/x/sys/unix"
// Allocate memory outside Go heap
func allocateMmap(size int) ([]byte, error) {
data, err := unix.Mmap(-1, 0, size,
unix.PROT_READ|unix.PROT_WRITE,
unix.MAP_ANON|unix.MAP_PRIVATE)
return data, err
}
func freeMmap(data []byte) error {
return unix.Munmap(data)
}
```
### Inline Arrays in Structs
```go
// Small-size optimization: inline for small, pointer for large
type SmallVec struct {
len int
small [8]int // inline storage for ≤8 elements
large []int // heap storage for >8 elements
}
func (v *SmallVec) Append(x int) {
if v.large != nil {
v.large = append(v.large, x)
v.len++
return
}
if v.len < 8 {
v.small[v.len] = x
v.len++
return
}
// Spill to heap
v.large = make([]int, 9, 16)
copy(v.large, v.small[:])
v.large[8] = x
v.len++
}
```
### Bump Allocator
```go
// Simple arena-style allocator for batch allocations
type BumpAllocator struct {
buf []byte
off int
}
func NewBumpAllocator(size int) *BumpAllocator {
return &BumpAllocator{buf: make([]byte, size)}
}
func (a *BumpAllocator) Alloc(size int) []byte {
if a.off+size > len(a.buf) {
panic("bump allocator exhausted")
}
b := a.buf[a.off : a.off+size]
a.off += size
return b
}
func (a *BumpAllocator) Reset() {
a.off = 0
}
// Usage: allocate many small objects, reset all at once
func processBatch(items []Item) {
arena := NewBumpAllocator(1 << 20) // 1MB
defer arena.Reset()
for _, item := range items {
buf := arena.Alloc(item.Size())
item.Serialize(buf)
}
}
```

View File

@@ -82,6 +82,49 @@ func (f *File) Read(p []byte) (n int, err error) {
}
```
### Interface Design - CRITICAL RULES
**Rule 1: Define interfaces in a dedicated package (e.g., `pkg/interfaces/<name>/`)**
- Interfaces provide isolation between packages and enable dependency inversion
- Keeping interfaces in a dedicated package prevents circular dependencies
- Each interface package should be minimal (just the interface, no implementations)
**Rule 2: NEVER use type assertions with interface literals**
- **NEVER** write `.(interface{ Method() Type })` - this is non-idiomatic and unmaintainable
- Interface literals cannot be documented, tested for satisfaction, or reused
```go
// BAD - interface literal in type assertion (NEVER DO THIS)
if checker, ok := obj.(interface{ Check() bool }); ok {
checker.Check()
}
// GOOD - use defined interface from dedicated package
import "myproject/pkg/interfaces/checker"
if c, ok := obj.(checker.Checker); ok {
c.Check()
}
```
**Rule 3: Resolving Circular Dependencies**
- If a circular dependency occurs, move the interface to `pkg/interfaces/`
- The implementing type stays in its original package
- The consuming code imports only the interface package
- Pattern:
```
      pkg/interfaces/foo/   <- interface definition (no dependencies)
         ↑           ↑
     pkg/bar/    pkg/baz/
  (implements)   (consumes via interface)
```
**Rule 4: Verify interface satisfaction at compile time**
```go
// Add this line to ensure *MyType implements MyInterface
var _ MyInterface = (*MyType)(nil)
```
### Concurrency
Use goroutines and channels for concurrent programming:
@@ -178,6 +221,26 @@ For detailed information, consult the reference files:
- Start comments with the name being described
- Use godoc format
6. **Configuration - CRITICAL**
- **NEVER** use `os.Getenv()` scattered throughout packages
- **ALWAYS** centralize environment variable parsing in a single config package (e.g., `app/config/`)
- Pass configuration via structs, not by reading environment directly
- This ensures discoverability, documentation, and testability of all config options
7. **Constants - CRITICAL**
- **ALWAYS** define named constants for values used more than a few times
- **ALWAYS** define named constants if multiple packages depend on the same value
- Constants shared across packages belong in a dedicated package (e.g., `pkg/constants/`)
- Magic numbers and strings are forbidden
```go
// BAD - magic number
if size > 1024 {
// GOOD - named constant
const MaxBufferSize = 1024
if size > MaxBufferSize {
```
## Common Commands
```bash

View File

@@ -0,0 +1,767 @@
---
name: nostr-tools
description: This skill should be used when working with nostr-tools library for Nostr protocol operations, including event creation, signing, filtering, relay communication, and NIP implementations. Provides comprehensive knowledge of nostr-tools APIs and patterns.
---
# nostr-tools Skill
This skill provides comprehensive knowledge and patterns for working with nostr-tools, the most popular JavaScript/TypeScript library for Nostr protocol development.
## When to Use This Skill
Use this skill when:
- Building Nostr clients or applications
- Creating and signing Nostr events
- Connecting to Nostr relays
- Implementing NIP features
- Working with Nostr keys and cryptography
- Filtering and querying events
- Building relay pools or connections
- Implementing NIP-44/NIP-04 encryption
## Core Concepts
### nostr-tools Overview
nostr-tools provides:
- **Event handling** - Create, sign, verify events
- **Key management** - Generate, convert, encode keys
- **Relay communication** - Connect, subscribe, publish
- **NIP implementations** - NIP-04, NIP-05, NIP-19, NIP-44, etc.
- **Cryptographic operations** - Schnorr signatures, encryption
- **Filter building** - Query events by various criteria
### Installation
```bash
npm install nostr-tools
```
### Basic Imports
```javascript
// Core functionality
import {
SimplePool,
generateSecretKey,
getPublicKey,
finalizeEvent,
verifyEvent
} from 'nostr-tools';
// NIP-specific imports
import { nip04, nip05, nip19, nip44 } from 'nostr-tools';
// Relay operations
import { Relay } from 'nostr-tools/relay';
```
## Key Management
### Generating Keys
```javascript
import { generateSecretKey, getPublicKey } from 'nostr-tools/pure';
import { bytesToHex } from '@noble/hashes/utils';
// Generate new secret key (Uint8Array)
const secretKey = generateSecretKey();
// Derive public key
const publicKey = getPublicKey(secretKey);
console.log('Secret key:', bytesToHex(secretKey));
console.log('Public key:', publicKey); // hex string
```
### Key Encoding (NIP-19)
```javascript
import { nip19 } from 'nostr-tools';
// Encode to bech32
const nsec = nip19.nsecEncode(secretKey);
const npub = nip19.npubEncode(publicKey);
const note = nip19.noteEncode(eventId);
console.log(nsec); // nsec1...
console.log(npub); // npub1...
console.log(note); // note1...
// Decode from bech32
const { type, data } = nip19.decode(npub);
// type: 'npub', data: publicKey (hex)
// Encode profile reference (nprofile)
const nprofile = nip19.nprofileEncode({
pubkey: publicKey,
relays: ['wss://relay.example.com']
});
// Encode event reference (nevent)
const nevent = nip19.neventEncode({
id: eventId,
relays: ['wss://relay.example.com'],
author: publicKey,
kind: 1
});
// Encode address (naddr) for replaceable events
const naddr = nip19.naddrEncode({
identifier: 'my-article',
pubkey: publicKey,
kind: 30023,
relays: ['wss://relay.example.com']
});
```
## Event Operations
### Event Structure
```javascript
// Unsigned event template
const eventTemplate = {
kind: 1,
created_at: Math.floor(Date.now() / 1000),
tags: [],
content: 'Hello Nostr!'
};
// Signed event (after finalizeEvent)
const signedEvent = {
id: '...', // 32-byte sha256 hash as hex
pubkey: '...', // 32-byte public key as hex
created_at: 1234567890,
kind: 1,
tags: [],
content: 'Hello Nostr!',
sig: '...' // 64-byte Schnorr signature as hex
};
```
### Creating and Signing Events
```javascript
import { finalizeEvent, verifyEvent } from 'nostr-tools/pure';
// Create event template
const eventTemplate = {
kind: 1,
created_at: Math.floor(Date.now() / 1000),
tags: [
['p', publicKey], // Mention
['e', eventId, '', 'reply'], // Reply
['t', 'nostr'] // Hashtag
],
content: 'Hello Nostr!'
};
// Sign event
const signedEvent = finalizeEvent(eventTemplate, secretKey);
// Verify event
const isValid = verifyEvent(signedEvent);
console.log('Event valid:', isValid);
```
### Event Kinds
```javascript
// Common event kinds
const KINDS = {
Metadata: 0, // Profile metadata (NIP-01)
Text: 1, // Short text note (NIP-01)
RecommendRelay: 2, // Relay recommendation
Contacts: 3, // Contact list (NIP-02)
EncryptedDM: 4, // Encrypted DM (NIP-04)
EventDeletion: 5, // Delete events (NIP-09)
Repost: 6, // Repost (NIP-18)
Reaction: 7, // Reaction (NIP-25)
ChannelCreation: 40, // Channel (NIP-28)
ChannelMessage: 42, // Channel message
Zap: 9735, // Zap receipt (NIP-57)
Report: 1984, // Report (NIP-56)
RelayList: 10002, // Relay list (NIP-65)
Article: 30023, // Long-form content (NIP-23)
};
```
### Creating Specific Events
```javascript
// Profile metadata (kind 0)
const profileEvent = finalizeEvent({
kind: 0,
created_at: Math.floor(Date.now() / 1000),
tags: [],
content: JSON.stringify({
name: 'Alice',
about: 'Nostr enthusiast',
picture: 'https://example.com/avatar.jpg',
nip05: 'alice@example.com',
lud16: 'alice@getalby.com'
})
}, secretKey);
// Contact list (kind 3)
const contactsEvent = finalizeEvent({
kind: 3,
created_at: Math.floor(Date.now() / 1000),
tags: [
['p', pubkey1, 'wss://relay1.com', 'alice'],
['p', pubkey2, 'wss://relay2.com', 'bob'],
['p', pubkey3, '', 'carol']
],
content: '' // Or JSON relay preferences
}, secretKey);
// Reply to an event
const replyEvent = finalizeEvent({
kind: 1,
created_at: Math.floor(Date.now() / 1000),
tags: [
['e', rootEventId, '', 'root'],
['e', parentEventId, '', 'reply'],
['p', parentEventPubkey]
],
content: 'This is a reply'
}, secretKey);
// Reaction (kind 7)
const reactionEvent = finalizeEvent({
kind: 7,
created_at: Math.floor(Date.now() / 1000),
tags: [
['e', eventId],
['p', eventPubkey]
],
content: '+' // or '-' or emoji
}, secretKey);
// Delete event (kind 5)
const deleteEvent = finalizeEvent({
kind: 5,
created_at: Math.floor(Date.now() / 1000),
tags: [
['e', eventIdToDelete],
['e', anotherEventIdToDelete]
],
content: 'Deletion reason'
}, secretKey);
```
## Relay Communication
### Using SimplePool
SimplePool is the recommended way to interact with multiple relays:
```javascript
import { SimplePool } from 'nostr-tools/pool';
const pool = new SimplePool();
const relays = [
'wss://relay.damus.io',
'wss://nos.lol',
'wss://relay.nostr.band'
];
// Subscribe to events
const subscription = pool.subscribeMany(
relays,
[
{
kinds: [1],
authors: [publicKey],
limit: 10
}
],
{
onevent(event) {
console.log('Received event:', event);
},
oneose() {
console.log('End of stored events');
}
}
);
// Close subscription when done
subscription.close();
// Publish event to all relays
const results = await Promise.allSettled(
pool.publish(relays, signedEvent)
);
// Query events (returns Promise)
const events = await pool.querySync(relays, {
kinds: [0],
authors: [publicKey]
});
// Get single event
const event = await pool.get(relays, {
ids: [eventId]
});
// Close pool when done
pool.close(relays);
```
### Direct Relay Connection
```javascript
import { Relay } from 'nostr-tools/relay';
const relay = await Relay.connect('wss://relay.damus.io');
console.log(`Connected to ${relay.url}`);
// Subscribe
const sub = relay.subscribe([
{
kinds: [1],
limit: 100
}
], {
onevent(event) {
console.log('Event:', event);
},
oneose() {
console.log('EOSE');
sub.close();
}
});
// Publish
await relay.publish(signedEvent);
// Close
relay.close();
```
### Handling Connection States
```javascript
import { Relay } from 'nostr-tools/relay';
const relay = await Relay.connect('wss://relay.example.com');
// Listen for disconnect
relay.onclose = () => {
console.log('Relay disconnected');
};
// Check connection status
console.log('Connected:', relay.connected);
```
## Filters
### Filter Structure
```javascript
const filter = {
// Event IDs
ids: ['abc123...'],
// Authors (pubkeys)
authors: ['pubkey1', 'pubkey2'],
// Event kinds
kinds: [1, 6, 7],
// Tags (single-letter keys)
'#e': ['eventId1', 'eventId2'],
'#p': ['pubkey1'],
'#t': ['nostr', 'bitcoin'],
'#d': ['article-identifier'],
// Time range
since: 1704067200, // Unix timestamp
until: 1704153600,
// Limit results
limit: 100,
// Search (NIP-50, if relay supports)
search: 'nostr protocol'
};
```
### Common Filter Patterns
```javascript
// User's recent posts
const userPosts = {
kinds: [1],
authors: [userPubkey],
limit: 50
};
// User's profile
const userProfile = {
kinds: [0],
authors: [userPubkey]
};
// User's contacts
const userContacts = {
kinds: [3],
authors: [userPubkey]
};
// Replies to an event
const replies = {
kinds: [1],
'#e': [eventId]
};
// Reactions to an event
const reactions = {
kinds: [7],
'#e': [eventId]
};
// Feed from followed users
const feed = {
kinds: [1, 6],
authors: followedPubkeys,
limit: 100
};
// Events mentioning user
const mentions = {
kinds: [1],
'#p': [userPubkey],
limit: 50
};
// Hashtag search
const hashtagEvents = {
kinds: [1],
'#t': ['bitcoin'],
limit: 100
};
// Replaceable event by d-tag
const replaceableEvent = {
kinds: [30023],
authors: [authorPubkey],
'#d': ['article-slug']
};
```
### Multiple Filters
```javascript
// Subscribe with multiple filters (OR logic)
const filters = [
{ kinds: [1], authors: [userPubkey], limit: 20 },
{ kinds: [1], '#p': [userPubkey], limit: 20 }
];
pool.subscribeMany(relays, filters, {
onevent(event) {
// Receives events matching ANY filter
}
});
```
## Encryption
### NIP-04 (Legacy DMs)
```javascript
import { nip04 } from 'nostr-tools';
// Encrypt message
const ciphertext = await nip04.encrypt(
secretKey,
recipientPubkey,
'Hello, this is secret!'
);
// Create encrypted DM event
const dmEvent = finalizeEvent({
kind: 4,
created_at: Math.floor(Date.now() / 1000),
tags: [['p', recipientPubkey]],
content: ciphertext
}, secretKey);
// Decrypt message
const plaintext = await nip04.decrypt(
secretKey,
senderPubkey,
ciphertext
);
```
### NIP-44 (Modern Encryption)
```javascript
import { nip44 } from 'nostr-tools';
// Get conversation key (cache this for multiple messages)
const conversationKey = nip44.getConversationKey(
secretKey,
recipientPubkey
);
// Encrypt
const ciphertext = nip44.encrypt(
'Hello with NIP-44!',
conversationKey
);
// Decrypt
const plaintext = nip44.decrypt(
ciphertext,
conversationKey
);
```
## NIP Implementations
### NIP-05 (DNS Identifier)
```javascript
import { nip05 } from 'nostr-tools';
// Query NIP-05 identifier
const profile = await nip05.queryProfile('alice@example.com');
if (profile) {
console.log('Pubkey:', profile.pubkey);
console.log('Relays:', profile.relays);
}
// Verify NIP-05 for a pubkey
const isValid = await nip05.queryProfile('alice@example.com')
.then(p => p?.pubkey === expectedPubkey);
```
### NIP-10 (Reply Threading)
```javascript
import { nip10 } from 'nostr-tools';
// Parse reply tags
const parsed = nip10.parse(event);
console.log('Root:', parsed.root); // Original event
console.log('Reply:', parsed.reply); // Direct parent
console.log('Mentions:', parsed.mentions); // Other mentions
console.log('Profiles:', parsed.profiles); // Mentioned pubkeys
```
### NIP-21 (nostr: URIs)
```javascript
// Parse nostr: URIs
const uri = 'nostr:npub1...';
const { type, data } = nip19.decode(uri.replace('nostr:', ''));
```
### NIP-27 (Content References)
```javascript
// Parse nostr:npub and nostr:note references in content
const content = 'Check out nostr:npub1abc... and nostr:note1xyz...';
const references = content.match(/nostr:(n[a-z]+1[a-z0-9]+)/g);
references?.forEach(ref => {
const decoded = nip19.decode(ref.replace('nostr:', ''));
console.log(decoded.type, decoded.data);
});
```
### NIP-57 (Zaps)
```javascript
import { nip57 } from 'nostr-tools';
// Validate zap receipt
const zapReceipt = await pool.get(relays, {
kinds: [9735],
'#e': [eventId]
});
// The receipt's description tag carries the original zap request JSON;
// validateZapRequest returns an error string, or null when valid
const zapRequest = zapReceipt.tags.find(t => t[0] === 'description')?.[1];
const validationError = nip57.validateZapRequest(zapRequest);
```
## Utilities
### Hex and Bytes Conversion
```javascript
import { bytesToHex, hexToBytes } from '@noble/hashes/utils';
// Convert secret key to hex
const secretKeyHex = bytesToHex(secretKey);
// Convert hex back to bytes
const secretKeyBytes = hexToBytes(secretKeyHex);
```
### Event ID Calculation
```javascript
import { getEventHash } from 'nostr-tools/pure';
// Calculate event ID without signing
const eventId = getEventHash(unsignedEvent);
```
### Signature Operations
```javascript
import {
getSignature,
verifyEvent
} from 'nostr-tools/pure';
// Sign event data
const signature = getSignature(unsignedEvent, secretKey);
// Verify complete event
const isValid = verifyEvent(signedEvent);
```
## Best Practices
### Connection Management
1. **Use SimplePool** - Manages connections efficiently
2. **Limit concurrent connections** - Don't connect to too many relays
3. **Handle disconnections** - Implement reconnection logic
4. **Close subscriptions** - Always close when done
### Event Handling
1. **Verify events** - Always verify signatures
2. **Deduplicate** - Events may come from multiple relays
3. **Handle replaceable events** - Latest by created_at wins
4. **Validate content** - Don't trust event content blindly
### Key Security
1. **Never expose secret keys** - Keep in secure storage
2. **Use NIP-07 in browsers** - Let extensions handle signing
3. **Validate input** - Check key formats before use
### Performance
1. **Cache events** - Avoid re-fetching
2. **Use filters wisely** - Be specific, use limits
3. **Batch operations** - Combine related queries
4. **Close idle connections** - Free up resources
## Common Patterns
### Building a Feed
```javascript
const pool = new SimplePool();
const relays = ['wss://relay.damus.io', 'wss://nos.lol'];
async function loadFeed(followedPubkeys) {
const events = await pool.querySync(relays, {
kinds: [1, 6],
authors: followedPubkeys,
limit: 100
});
// Sort by timestamp
return events.sort((a, b) => b.created_at - a.created_at);
}
```
### Real-time Updates
```javascript
function subscribeToFeed(followedPubkeys, onEvent) {
return pool.subscribeMany(
relays,
[{ kinds: [1, 6], authors: followedPubkeys }],
{
onevent: onEvent,
oneose() {
console.log('Caught up with stored events');
}
}
);
}
```
### Profile Loading
```javascript
async function loadProfile(pubkey) {
const [metadata] = await pool.querySync(relays, {
kinds: [0],
authors: [pubkey],
limit: 1
});
if (metadata) {
return JSON.parse(metadata.content);
}
return null;
}
```
### Event Deduplication
```javascript
const seenEvents = new Set();
function handleEvent(event) {
if (seenEvents.has(event.id)) {
return; // Skip duplicate
}
seenEvents.add(event.id);
// Process event...
}
```
## Troubleshooting
### Common Issues
**Events not publishing:**
- Check relay is writable
- Verify event is properly signed
- Check relay's accepted kinds
**Subscription not receiving events:**
- Verify filter syntax
- Check relay has matching events
- Ensure subscription isn't closed
**Signature verification fails:**
- Check event structure is correct
- Verify keys are in correct format
- Ensure event hasn't been modified
**NIP-05 lookup fails:**
- Check CORS headers on server
- Verify .well-known path is correct
- Handle network timeouts
## References
- **nostr-tools GitHub**: https://github.com/nbd-wtf/nostr-tools
- **Nostr Protocol**: https://github.com/nostr-protocol/nostr
- **NIPs Repository**: https://github.com/nostr-protocol/nips
- **NIP-01 (Basic Protocol)**: https://github.com/nostr-protocol/nips/blob/master/01.md
## Related Skills
- **nostr** - Nostr protocol fundamentals
- **svelte** - Building Nostr UIs with Svelte
- **applesauce-core** - Higher-level Nostr client utilities
- **applesauce-signers** - Nostr signing abstractions

View File

@@ -150,10 +150,20 @@ Event kind `7` for reactions:
#### NIP-42: Authentication
Client authentication to relays:
- AUTH message from relay (challenge)
- Client responds with event kind `22242` signed auth event
- Proves key ownership
**CRITICAL: Clients MUST wait for OK response after AUTH**
- Relays MUST respond to AUTH with an OK message (same as EVENT)
- An OK with `true` confirms the relay has stored the authenticated pubkey
- An OK with `false` indicates authentication failed:
1. **Alert the user** that authentication failed
2. **Assume the relay will reject** subsequent events requiring auth
3. Check the `reason` field for error details (e.g., "error: failed to parse auth event")
- Do NOT send events requiring authentication until OK `true` is received
- If no OK is received within timeout, assume connection issues and retry or alert user
#### NIP-50: Search
Query filter extension for full-text search:
- `search` field in REQ filters

View File

@@ -0,0 +1,899 @@
---
name: rollup
description: This skill should be used when working with Rollup module bundler, including configuration, plugins, code splitting, and build optimization. Provides comprehensive knowledge of Rollup patterns, plugin development, and bundling strategies.
---
# Rollup Skill
This skill provides comprehensive knowledge and patterns for working with Rollup module bundler effectively.
## When to Use This Skill
Use this skill when:
- Configuring Rollup for web applications
- Setting up Rollup for library builds
- Working with Rollup plugins
- Implementing code splitting
- Optimizing bundle size
- Troubleshooting build issues
- Integrating Rollup with Svelte or other frameworks
- Developing custom Rollup plugins
## Core Concepts
### Rollup Overview
Rollup is a module bundler that:
- **Tree-shakes by default** - Removes unused code automatically
- **ES module focused** - Native ESM output support
- **Plugin-based** - Extensible architecture
- **Multiple outputs** - Generate multiple formats from single input
- **Code splitting** - Dynamic imports for lazy loading
- **Scope hoisting** - Flattens modules for smaller bundles
### Basic Configuration
```javascript
// rollup.config.js
export default {
input: 'src/main.js',
output: {
file: 'dist/bundle.js',
format: 'esm'
}
};
```
### Output Formats
Rollup supports multiple output formats:
| Format | Description | Use Case |
|--------|-------------|----------|
| `esm` | ES modules | Modern browsers, bundlers |
| `cjs` | CommonJS | Node.js |
| `iife` | Self-executing function | Script tags |
| `umd` | Universal Module Definition | CDN, both environments |
| `amd` | Asynchronous Module Definition | RequireJS |
| `system` | SystemJS | SystemJS loader |
## Configuration
### Full Configuration Options
```javascript
// rollup.config.js
import resolve from '@rollup/plugin-node-resolve';
import commonjs from '@rollup/plugin-commonjs';
import terser from '@rollup/plugin-terser';
const production = !process.env.ROLLUP_WATCH;
export default {
// Entry point(s)
input: 'src/main.js',
// Output configuration
output: {
// Output file or directory
file: 'dist/bundle.js',
// Or for code splitting:
// dir: 'dist',
// Output format
format: 'esm',
// Name for IIFE/UMD builds
name: 'MyBundle',
// Sourcemap generation
sourcemap: true,
// Global variables for external imports (IIFE/UMD)
globals: {
jquery: '$'
},
// Banner/footer comments
banner: '/* My library v1.0.0 */',
footer: '/* End of bundle */',
// Chunk naming for code splitting
chunkFileNames: '[name]-[hash].js',
entryFileNames: '[name].js',
// Manual chunks for code splitting
manualChunks: {
vendor: ['lodash', 'moment']
},
// Interop mode for default exports
interop: 'auto',
// Preserve modules structure
preserveModules: false,
// Exports mode
exports: 'auto' // 'default', 'named', 'none', 'auto'
},
// External dependencies (not bundled)
external: ['lodash', /^node:/],
// Plugin array
plugins: [
resolve({
browser: true,
dedupe: ['svelte']
}),
commonjs(),
production && terser()
],
// Watch mode options
watch: {
include: 'src/**',
exclude: 'node_modules/**',
clearScreen: false
},
// Warning handling
onwarn(warning, warn) {
// Skip certain warnings
if (warning.code === 'CIRCULAR_DEPENDENCY') return;
warn(warning);
},
// Preserve entry signatures for code splitting
preserveEntrySignatures: 'strict',
// Treeshake options
treeshake: {
moduleSideEffects: false,
propertyReadSideEffects: false
}
};
```
### Multiple Outputs
```javascript
export default {
input: 'src/main.js',
output: [
{
file: 'dist/bundle.esm.js',
format: 'esm'
},
{
file: 'dist/bundle.cjs.js',
format: 'cjs'
},
{
file: 'dist/bundle.umd.js',
format: 'umd',
name: 'MyLibrary'
}
]
};
```
### Multiple Entry Points
```javascript
export default {
input: {
main: 'src/main.js',
utils: 'src/utils.js'
},
output: {
dir: 'dist',
format: 'esm'
}
};
```
### Array of Configurations
```javascript
export default [
{
input: 'src/main.js',
output: { file: 'dist/main.js', format: 'esm' }
},
{
input: 'src/worker.js',
output: { file: 'dist/worker.js', format: 'iife' }
}
];
```
## Essential Plugins
### @rollup/plugin-node-resolve
Resolve node_modules imports:
```javascript
import resolve from '@rollup/plugin-node-resolve';
export default {
plugins: [
resolve({
// Resolve browser field in package.json
browser: true,
// Prefer built-in modules
preferBuiltins: true,
// Only resolve these extensions
extensions: ['.mjs', '.js', '.json', '.node'],
// Dedupe packages (important for Svelte)
dedupe: ['svelte'],
// Main fields to check in package.json
mainFields: ['module', 'main', 'browser'],
// Export conditions
exportConditions: ['svelte', 'browser', 'module', 'import']
})
]
};
```
### @rollup/plugin-commonjs
Convert CommonJS to ES modules:
```javascript
import commonjs from '@rollup/plugin-commonjs';
export default {
plugins: [
commonjs({
// Include specific modules
include: /node_modules/,
// Exclude specific modules
exclude: ['node_modules/lodash-es/**'],
// Ignore conditional requires
ignoreDynamicRequires: false,
// Transform mixed ES/CJS modules
transformMixedEsModules: true,
// Named exports for specific modules
namedExports: {
'react': ['createElement', 'Component']
}
})
]
};
```
### @rollup/plugin-terser
Minify output:
```javascript
import terser from '@rollup/plugin-terser';
export default {
plugins: [
terser({
compress: {
drop_console: true,
drop_debugger: true
},
mangle: true,
format: {
comments: false
}
})
]
};
```
### rollup-plugin-svelte
Compile Svelte components:
```javascript
import svelte from 'rollup-plugin-svelte';
import css from 'rollup-plugin-css-only';
export default {
plugins: [
svelte({
  // Emit CSS as a separate file
  emitCss: true,
  // Preprocess (SCSS, TypeScript, etc.)
  preprocess: sveltePreprocess(),
  // Compiler options (dev mode lives here in plugin v7+)
  compilerOptions: {
    dev: !production,
    // Custom element mode
    customElement: false
  }
}),
// Extract CSS to separate file
css({ output: 'bundle.css' })
]
};
```
### Other Common Plugins
```javascript
import json from '@rollup/plugin-json';
import replace from '@rollup/plugin-replace';
import alias from '@rollup/plugin-alias';
import image from '@rollup/plugin-image';
import copy from 'rollup-plugin-copy';
import livereload from 'rollup-plugin-livereload';
export default {
plugins: [
// Import JSON files
json(),
// Replace strings in code
replace({
preventAssignment: true,
'process.env.NODE_ENV': JSON.stringify('production'),
'__VERSION__': JSON.stringify('1.0.0')
}),
// Path aliases
alias({
entries: [
{ find: '@', replacement: './src' },
{ find: 'utils', replacement: './src/utils' }
]
}),
// Import images
image(),
// Copy static files
copy({
targets: [
{ src: 'public/*', dest: 'dist' }
]
}),
// Live reload in dev
!production && livereload('dist')
]
};
```
## Code Splitting
### Dynamic Imports
```javascript
// Automatically creates chunks
async function loadFeature() {
const { feature } = await import('./feature.js');
feature();
}
```
Configuration for code splitting:
```javascript
export default {
input: 'src/main.js',
output: {
dir: 'dist',
format: 'esm',
chunkFileNames: 'chunks/[name]-[hash].js'
}
};
```
### Manual Chunks
```javascript
export default {
  output: {
    // Object form: chunk name -> list of modules
    manualChunks: {
      vendor: ['lodash', 'moment']
    }
    // Or use a function for more control (the object and function
    // forms are mutually exclusive):
    // manualChunks(id) {
    //   if (id.includes('node_modules')) {
    //     return 'vendor';
    //   }
    // }
  }
};
```
### Advanced Chunking Strategy
```javascript
export default {
output: {
manualChunks(id, { getModuleInfo }) {
// Separate chunks by feature
if (id.includes('/features/auth/')) {
return 'auth';
}
if (id.includes('/features/dashboard/')) {
return 'dashboard';
}
// Vendor chunks by package
if (id.includes('node_modules')) {
const match = id.match(/node_modules\/([^/]+)/);
if (match) {
const packageName = match[1];
// Group small packages
const smallPackages = ['lodash', 'date-fns'];
if (smallPackages.includes(packageName)) {
return 'vendor-utils';
}
return `vendor-${packageName}`;
}
}
}
}
};
```
## Watch Mode
### Configuration
```javascript
export default {
watch: {
// Files to watch
include: 'src/**',
// Files to ignore
exclude: 'node_modules/**',
// Don't clear screen on rebuild
clearScreen: false,
// Rebuild delay
buildDelay: 0,
// Watch chokidar options
chokidar: {
usePolling: true
}
}
};
```
### CLI Watch Mode
```bash
# Watch mode
rollup -c -w
# With environment variable
ROLLUP_WATCH=true rollup -c
```
## Plugin Development
### Plugin Structure
```javascript
function myPlugin(options = {}) {
return {
// Plugin name (required)
name: 'my-plugin',
// Build hooks
options(inputOptions) {
// Modify input options
return inputOptions;
},
buildStart(inputOptions) {
// Called on build start
},
resolveId(source, importer, options) {
// Custom module resolution
if (source === 'virtual-module') {
return source;
}
return null; // Defer to other plugins
},
load(id) {
// Load module content
if (id === 'virtual-module') {
return 'export default "Hello"';
}
return null;
},
transform(code, id) {
// Transform module code
if (id.endsWith('.txt')) {
return {
code: `export default ${JSON.stringify(code)}`,
map: null
};
}
},
buildEnd(error) {
// Called when build ends
if (error) {
console.error('Build failed:', error);
}
},
// Output generation hooks
renderStart(outputOptions, inputOptions) {
// Called before output generation
},
banner() {
return '/* Custom banner */';
},
footer() {
return '/* Custom footer */';
},
renderChunk(code, chunk, options) {
// Transform output chunk
return code;
},
generateBundle(options, bundle) {
// Modify output bundle
for (const fileName in bundle) {
const chunk = bundle[fileName];
if (chunk.type === 'chunk') {
// Modify chunk
}
}
},
writeBundle(options, bundle) {
// After bundle is written
},
closeBundle() {
// Called when bundle is closed
}
};
}
export default myPlugin;
```
### Plugin with Rollup Utils
```javascript
import { createFilter } from '@rollup/pluginutils';
function myTransformPlugin(options = {}) {
const filter = createFilter(options.include, options.exclude);
return {
name: 'my-transform',
transform(code, id) {
if (!filter(id)) return null;
// Transform code
const transformed = code.replace(/foo/g, 'bar');
return {
code: transformed,
map: null // Or generate sourcemap
};
}
};
}
```
## Svelte Integration
### Complete Svelte Setup
```javascript
// rollup.config.js
import svelte from 'rollup-plugin-svelte';
import commonjs from '@rollup/plugin-commonjs';
import resolve from '@rollup/plugin-node-resolve';
import terser from '@rollup/plugin-terser';
import css from 'rollup-plugin-css-only';
import livereload from 'rollup-plugin-livereload';
const production = !process.env.ROLLUP_WATCH;
function serve() {
let server;
function toExit() {
if (server) server.kill(0);
}
return {
writeBundle() {
if (server) return;
server = require('child_process').spawn(
'npm',
['run', 'start', '--', '--dev'],
{
stdio: ['ignore', 'inherit', 'inherit'],
shell: true
}
);
process.on('SIGTERM', toExit);
process.on('exit', toExit);
}
};
}
export default {
input: 'src/main.js',
output: {
sourcemap: true,
format: 'iife',
name: 'app',
file: 'public/build/bundle.js'
},
plugins: [
svelte({
compilerOptions: {
dev: !production
}
}),
css({ output: 'bundle.css' }),
resolve({
browser: true,
dedupe: ['svelte']
}),
commonjs(),
// Dev server
!production && serve(),
!production && livereload('public'),
// Minify in production
production && terser()
],
watch: {
clearScreen: false
}
};
```
## Best Practices
### Bundle Optimization
1. **Enable tree shaking** - Use ES modules
2. **Mark side effects** - Set `sideEffects` in package.json
3. **Use terser** - Minify production builds
4. **Analyze bundles** - Use rollup-plugin-visualizer
5. **Code split** - Lazy load routes and features
### External Dependencies
```javascript
export default {
// Don't bundle peer dependencies for libraries
external: [
'react',
'react-dom',
/^lodash\//
],
output: {
globals: {
react: 'React',
'react-dom': 'ReactDOM'
}
}
};
```
### Development vs Production
```javascript
const production = !process.env.ROLLUP_WATCH;
export default {
plugins: [
replace({
preventAssignment: true,
'process.env.NODE_ENV': JSON.stringify(
production ? 'production' : 'development'
)
}),
production && terser()
].filter(Boolean)
};
```
### Error Handling
```javascript
export default {
onwarn(warning, warn) {
// Ignore circular dependency warnings
if (warning.code === 'CIRCULAR_DEPENDENCY') {
return;
}
// Ignore unused external imports
if (warning.code === 'UNUSED_EXTERNAL_IMPORT') {
return;
}
// Treat other warnings as errors
if (warning.code === 'UNRESOLVED_IMPORT') {
throw new Error(warning.message);
}
// Use default warning handling
warn(warning);
}
};
```
## Common Patterns
### Library Build
```javascript
import pkg from './package.json';
export default {
input: 'src/index.js',
external: Object.keys(pkg.peerDependencies || {}),
output: [
{
file: pkg.main,
format: 'cjs',
sourcemap: true
},
{
file: pkg.module,
format: 'esm',
sourcemap: true
}
]
};
```
### Application Build
```javascript
export default {
input: 'src/main.js',
output: {
dir: 'dist',
format: 'esm',
chunkFileNames: 'chunks/[name]-[hash].js',
entryFileNames: '[name]-[hash].js',
sourcemap: true
},
plugins: [
// All dependencies bundled
resolve({ browser: true }),
commonjs(),
terser()
]
};
```
### Web Worker Build
```javascript
export default [
// Main application
{
input: 'src/main.js',
output: {
file: 'dist/main.js',
format: 'esm'
},
plugins: [resolve(), commonjs()]
},
// Web worker (IIFE format)
{
input: 'src/worker.js',
output: {
file: 'dist/worker.js',
format: 'iife'
},
plugins: [resolve(), commonjs()]
}
];
```
## Troubleshooting
### Common Issues
**Module not found:**
- Check @rollup/plugin-node-resolve is configured
- Verify package is installed
- Check `external` array
**CommonJS module issues:**
- Add @rollup/plugin-commonjs
- Check `namedExports` configuration
- Try `transformMixedEsModules: true` (see the sketch below)
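A sketch of that last option: `transformMixedEsModules` tells @rollup/plugin-commonjs to also convert files that mix `require()` with `import`/`export`, which it would otherwise leave untouched.
```javascript
import commonjs from '@rollup/plugin-commonjs';
import resolve from '@rollup/plugin-node-resolve';
export default {
  input: 'src/main.js',
  output: { file: 'dist/bundle.js', format: 'esm' },
  plugins: [
    resolve(),
    commonjs({
      // Convert modules that mix require() and import syntax
      transformMixedEsModules: true
    })
  ]
};
```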
**Circular dependencies:**
- Use `onwarn` to suppress or fix
- Refactor to break cycles
- Check import order
**Sourcemaps not working:**
- Set `sourcemap: true` in output
- Ensure plugins pass through maps
- Check browser devtools settings
**Large bundle size:**
- Use rollup-plugin-visualizer
- Check for duplicate dependencies
- Verify tree shaking is working
- Mark unused packages as external
## CLI Reference
```bash
# Basic build
rollup -c
# Watch mode
rollup -c -w
# Custom config
rollup -c rollup.custom.config.js
# Output format
rollup src/main.js --format esm --file dist/bundle.js
# Environment variables
NODE_ENV=production rollup -c
# Silent mode
rollup -c --silent
# Generate bundle stats
rollup -c --perf
```
## References
- **Rollup Documentation**: https://rollupjs.org
- **Plugin Directory**: https://github.com/rollup/plugins
- **Awesome Rollup**: https://github.com/rollup/awesome
- **GitHub**: https://github.com/rollup/rollup
## Related Skills
- **svelte** - Using Rollup with Svelte
- **typescript** - TypeScript compilation with Rollup
- **nostr-tools** - Bundling Nostr applications

File diff suppressed because it is too large

@@ -4,6 +4,7 @@ test-build
*.exe
*.dll
*.so
!libsecp256k1.so
*.dylib
# Test files

.gitea/README.md Normal file

@@ -0,0 +1,84 @@
# Gitea Actions Setup
This directory contains workflows for Gitea Actions, which is a self-hosted CI/CD system compatible with GitHub Actions syntax.
## Workflow: go.yml
The `go.yml` workflow handles building, testing, and releasing the ORLY relay when version tags are pushed.
### Features
- **No external dependencies**: Uses only inline shell commands (no actions from GitHub)
- **Pure Go builds**: Uses CGO_ENABLED=0 with purego for secp256k1
- **Automated releases**: Creates Gitea releases with binaries and checksums
- **Tests included**: Runs the full test suite before building releases
### Prerequisites
1. **Gitea Token**: Add a secret named `GITEA_TOKEN` in your repository settings
- Go to: Repository Settings → Secrets → Add Secret
- Name: `GITEA_TOKEN`
- Value: Your Gitea personal access token with `repo` and `write:packages` permissions
2. **Runner Configuration**: Ensure your Gitea Actions runner is properly configured
- The runner should have access to pull Docker images
- Ubuntu-latest image should be available
### Usage
To create a new release:
```bash
# 1. Update version in pkg/version/version file
echo "v0.29.4" > pkg/version/version
# 2. Commit the version change
git add pkg/version/version
git commit -m "bump to v0.29.4"
# 3. Create and push the tag
git tag v0.29.4
git push origin v0.29.4
# 4. The workflow will automatically:
# - Build the binary
# - Run tests
# - Create a release on your Gitea instance
# - Upload the binary and checksums
```
### Environment Variables
The workflow uses standard Gitea Actions environment variables:
- `GITHUB_WORKSPACE`: Working directory for the job
- `GITHUB_REF_NAME`: Tag name (e.g., v1.2.3)
- `GITHUB_REPOSITORY`: Repository in format `owner/repo`
- `GITHUB_SERVER_URL`: Your Gitea instance URL (e.g., https://git.nostrdev.com)
### Troubleshooting
**Issue**: Workflow fails to clone repository
- **Solution**: Check that the repository is accessible without authentication, or configure runner credentials
**Issue**: Cannot create release
- **Solution**: Verify `GITEA_TOKEN` secret is set correctly with appropriate permissions
**Issue**: Go version not found
- **Solution**: The workflow downloads Go 1.25.3 directly from go.dev; ensure the runner has internet access
### Customization
To modify the workflow:
1. Edit `.gitea/workflows/go.yml`
2. Test changes by pushing a tag (or use `act` locally for testing)
3. Monitor the Actions tab in your Gitea repository for results
## Differences from GitHub Actions
- **Action dependencies**: This workflow doesn't use external actions (like `actions/checkout@v4`) to avoid GitHub dependency
- **Release creation**: Uses the Gitea API directly (via curl) instead of GitHub's release action
- **Inline commands**: All setup and build steps are done with shell scripts
This makes the workflow completely self-contained and independent of external services.


@@ -0,0 +1,118 @@
name: Bug Report
about: Report a bug or unexpected behavior in ORLY relay
title: "[BUG] "
labels:
- bug
body:
- type: markdown
attributes:
value: |
## Bug Report Guidelines
Thank you for taking the time to report a bug. Please fill out the form below to help us understand and reproduce the issue.
**Before submitting:**
- Search [existing issues](https://git.mleku.dev/mleku/next.orly.dev/issues) to avoid duplicates
- Check the [documentation](https://git.mleku.dev/mleku/next.orly.dev) for configuration guidance
- Ensure you're running a recent version of ORLY
- type: input
id: version
attributes:
label: ORLY Version
description: Run `./orly version` to get the version
placeholder: "v0.35.4"
validations:
required: true
- type: dropdown
id: database
attributes:
label: Database Backend
description: Which database backend are you using?
options:
- Badger (default)
- Neo4j
- WasmDB
validations:
required: true
- type: textarea
id: description
attributes:
label: Bug Description
description: A clear and concise description of the bug
placeholder: Describe what happened and what you expected to happen
validations:
required: true
- type: textarea
id: reproduction
attributes:
label: Steps to Reproduce
description: Detailed steps to reproduce the behavior
placeholder: |
1. Start relay with `./orly`
2. Connect with client X
3. Perform action Y
4. Observe error Z
validations:
required: true
- type: textarea
id: expected
attributes:
label: Expected Behavior
description: What did you expect to happen?
validations:
required: true
- type: textarea
id: logs
attributes:
label: Relevant Logs
description: |
Include relevant log output. Set `ORLY_LOG_LEVEL=debug` or `trace` for more detail.
This will be automatically formatted as code.
render: shell
- type: textarea
id: config
attributes:
label: Configuration
description: |
Relevant environment variables or configuration (redact sensitive values).
This will be automatically formatted as code.
render: shell
placeholder: |
ORLY_ACL_MODE=follows
ORLY_POLICY_ENABLED=true
ORLY_DB_TYPE=badger
- type: textarea
id: environment
attributes:
label: Environment
description: Operating system, Go version, etc.
placeholder: |
OS: Linux 6.8.0
Go: 1.25.3
Architecture: amd64
- type: textarea
id: additional
attributes:
label: Additional Context
description: Any other context, screenshots, or information that might help
- type: checkboxes
id: checklist
attributes:
label: Checklist
options:
- label: I have searched existing issues and this is not a duplicate
required: true
- label: I have included version information
required: true
- label: I have included steps to reproduce the issue
required: true


@@ -0,0 +1,8 @@
blank_issues_enabled: false
contact_links:
- name: Documentation
url: https://git.mleku.dev/mleku/next.orly.dev
about: Check the repository documentation before opening an issue
- name: Nostr Protocol (NIPs)
url: https://github.com/nostr-protocol/nips
about: For questions about Nostr protocol specifications


@@ -0,0 +1,118 @@
name: Feature Request
about: Suggest a new feature or enhancement for ORLY relay
title: "[FEATURE] "
labels:
- enhancement
body:
- type: markdown
attributes:
value: |
## Feature Request Guidelines
Thank you for suggesting a feature. Please provide as much detail as possible to help us understand your proposal.
**Before submitting:**
- Search [existing issues](https://git.mleku.dev/mleku/next.orly.dev/issues) to avoid duplicates
- Check if this is covered by an existing [NIP](https://github.com/nostr-protocol/nips)
- Review the [documentation](https://git.mleku.dev/mleku/next.orly.dev) for current capabilities
- type: dropdown
id: category
attributes:
label: Feature Category
description: What area of ORLY does this feature relate to?
options:
- Protocol (NIP implementation)
- Database / Storage
- Performance / Optimization
- Policy / Access Control
- Web UI / Admin Interface
- Deployment / Operations
- API / Integration
- Documentation
- Other
validations:
required: true
- type: textarea
id: problem
attributes:
label: Problem Statement
description: |
What problem does this feature solve? Is this related to a frustration you have?
A clear problem statement helps us understand the motivation.
placeholder: "I'm always frustrated when..."
validations:
required: true
- type: textarea
id: solution
attributes:
label: Proposed Solution
description: |
Describe the solution you'd like. Be specific about expected behavior.
placeholder: "I would like ORLY to..."
validations:
required: true
- type: textarea
id: alternatives
attributes:
label: Alternatives Considered
description: |
Describe any alternative solutions or workarounds you've considered.
placeholder: "I've tried X but it doesn't work because..."
- type: input
id: nip
attributes:
label: Related NIP
description: If this relates to a Nostr Implementation Possibility, provide the NIP number
placeholder: "NIP-XX"
- type: dropdown
id: impact
attributes:
label: Scope of Impact
description: How significant is this feature?
options:
- Minor enhancement (small quality-of-life improvement)
- Moderate feature (adds useful capability)
- Major feature (significant new functionality)
- Breaking change (requires migration or config changes)
validations:
required: true
- type: dropdown
id: contribution
attributes:
label: Willingness to Contribute
description: Would you be willing to help implement this feature?
options:
- "Yes, I can submit a PR"
- "Yes, I can help with testing"
- "No, but I can provide more details"
- "No"
validations:
required: true
- type: textarea
id: additional
attributes:
label: Additional Context
description: |
Any other context, mockups, examples, or references that help explain the feature.
For protocol features, include example event structures or message flows if applicable.
- type: checkboxes
id: checklist
attributes:
label: Checklist
options:
- label: I have searched existing issues and this is not a duplicate
required: true
- label: I have described the problem this feature solves
required: true
- label: I have checked if this relates to an existing NIP
required: false

.gitea/workflows/go.yml Normal file

@@ -0,0 +1,204 @@
# This workflow will build a golang project for Gitea Actions
# Using inline commands to avoid external action dependencies
#
# NOTE: All builds use CGO_ENABLED=0 since p8k library uses purego (not CGO)
# The library dynamically loads libsecp256k1 at runtime via purego
#
# Release Process:
# 1. Update the version in the pkg/version/version file (e.g. v1.2.3)
# 2. Create and push a tag matching the version:
# git tag v1.2.3
# git push origin v1.2.3
# 3. The workflow will automatically:
# - Build binaries for Linux AMD64
# - Run tests
# - Create a Gitea release with the binaries
# - Generate checksums
name: Go
on:
push:
tags:
- "v[0-9]+.[0-9]+.[0-9]+"
jobs:
build-and-release:
runs-on: ubuntu-latest
steps:
- name: Checkout code
run: |
set -e
echo "Cloning repository..."
echo "GITHUB_REF_NAME=${GITHUB_REF_NAME}"
echo "GITHUB_SERVER_URL=${GITHUB_SERVER_URL}"
echo "GITHUB_REPOSITORY=${GITHUB_REPOSITORY}"
echo "GITHUB_WORKSPACE=${GITHUB_WORKSPACE}"
git clone --depth 1 --branch ${GITHUB_REF_NAME} ${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY}.git ${GITHUB_WORKSPACE}
cd ${GITHUB_WORKSPACE}
echo "Cloned successfully. Last commit:"
git log -1
ls -la
- name: Set up Go
run: |
set -e
echo "Setting up Go 1.25.3..."
cd /tmp
wget -q https://go.dev/dl/go1.25.3.linux-amd64.tar.gz
sudo rm -rf /usr/local/go
sudo tar -C /usr/local -xzf go1.25.3.linux-amd64.tar.gz
export PATH=/usr/local/go/bin:$PATH
go version
- name: Set up Bun
run: |
set -e
echo "Installing Bun..."
curl -fsSL https://bun.sh/install | bash
export BUN_INSTALL="$HOME/.bun"
export PATH="$BUN_INSTALL/bin:$PATH"
bun --version
- name: Build Web UI
run: |
set -e
export BUN_INSTALL="$HOME/.bun"
export PATH="$BUN_INSTALL/bin:$PATH"
cd ${GITHUB_WORKSPACE}/app/web
echo "Installing frontend dependencies..."
bun install
echo "Building web app..."
bun run build
echo "Verifying dist directory was created..."
ls -lah dist/
echo "Web UI build complete"
- name: Build (Pure Go + purego)
run: |
set -e
export PATH=/usr/local/go/bin:$PATH
cd ${GITHUB_WORKSPACE}
echo "Building with CGO_ENABLED=0..."
CGO_ENABLED=0 go build -v ./...
- name: Test (Pure Go + purego)
run: |
set -e
export PATH=/usr/local/go/bin:$PATH
cd ${GITHUB_WORKSPACE}
echo "Running tests..."
# libsecp256k1.so is included in the repository
chmod +x libsecp256k1.so
# Set LD_LIBRARY_PATH so tests can find the library
export LD_LIBRARY_PATH=${GITHUB_WORKSPACE}:${LD_LIBRARY_PATH}
# Run tests but don't fail the build on test failures (some tests may need specific env)
CGO_ENABLED=0 go test -v $(go list ./... | grep -v '/cmd/benchmark/external/' | xargs -n1 sh -c 'ls $0/*_test.go 1>/dev/null 2>&1 && echo $0' | grep .) || echo "Some tests failed, continuing..."
- name: Build Release Binaries (Pure Go + purego)
run: |
set -e
export PATH=/usr/local/go/bin:$PATH
cd ${GITHUB_WORKSPACE}
# Extract version from tag (e.g., v1.2.3 -> 1.2.3)
VERSION=${GITHUB_REF_NAME#v}
echo "Building release binaries for version $VERSION (pure Go + purego)"
# Create directory for binaries
mkdir -p release-binaries
# Copy libsecp256k1.so from repository to release binaries
cp libsecp256k1.so release-binaries/libsecp256k1-linux-amd64.so
chmod +x release-binaries/libsecp256k1-linux-amd64.so
# Build for Linux AMD64 (pure Go + purego dynamic loading)
echo "Building Linux AMD64 (pure Go + purego dynamic loading)..."
GOEXPERIMENT=greenteagc,jsonv2 GOOS=linux GOARCH=amd64 CGO_ENABLED=0 \
go build -ldflags "-s -w" -o release-binaries/orly-${VERSION}-linux-amd64 .
# Create checksums
cd release-binaries
sha256sum * > SHA256SUMS.txt
cat SHA256SUMS.txt
cd ..
echo "Release binaries built successfully:"
ls -lh release-binaries/
- name: Create Gitea Release
env:
GITEA_TOKEN: ${{ secrets.GITEA_TOKEN }}
run: |
set -e # Exit on any error
export PATH=/usr/local/go/bin:$PATH
cd ${GITHUB_WORKSPACE}
# Validate GITEA_TOKEN is set
if [ -z "${GITEA_TOKEN}" ]; then
echo "ERROR: GITEA_TOKEN secret is not set!"
echo "Please configure the GITEA_TOKEN secret in repository settings."
exit 1
fi
VERSION=${GITHUB_REF_NAME}
REPO_OWNER=$(echo ${GITHUB_REPOSITORY} | cut -d'/' -f1)
REPO_NAME=$(echo ${GITHUB_REPOSITORY} | cut -d'/' -f2)
echo "Creating release for ${REPO_OWNER}/${REPO_NAME} version ${VERSION}"
# Verify release binaries exist
if [ ! -f "release-binaries/orly-${VERSION#v}-linux-amd64" ]; then
echo "ERROR: Release binary not found!"
ls -la release-binaries/ || echo "release-binaries directory does not exist"
exit 1
fi
# Use Gitea API directly (more reliable than tea CLI)
cd ${GITHUB_WORKSPACE}
API_URL="${GITHUB_SERVER_URL}/api/v1"
echo "Creating release via Gitea API..."
echo "API URL: ${API_URL}/repos/${REPO_OWNER}/${REPO_NAME}/releases"
# Create the release
RELEASE_RESPONSE=$(curl -s -X POST \
-H "Authorization: token ${GITEA_TOKEN}" \
-H "Content-Type: application/json" \
-d "{\"tag_name\": \"${VERSION}\", \"name\": \"Release ${VERSION}\", \"body\": \"Automated release ${VERSION}\"}" \
"${API_URL}/repos/${REPO_OWNER}/${REPO_NAME}/releases")
echo "Release response: ${RELEASE_RESPONSE}"
# Extract release ID
RELEASE_ID=$(echo "${RELEASE_RESPONSE}" | grep -o '"id":[0-9]*' | head -1 | cut -d: -f2)
if [ -z "${RELEASE_ID}" ]; then
echo "ERROR: Failed to create release or extract release ID"
echo "Full response: ${RELEASE_RESPONSE}"
exit 1
fi
echo "Release created with ID: ${RELEASE_ID}"
# Upload assets
for ASSET in release-binaries/orly-${VERSION#v}-linux-amd64 release-binaries/libsecp256k1-linux-amd64.so release-binaries/SHA256SUMS.txt; do
FILENAME=$(basename "${ASSET}")
echo "Uploading ${FILENAME}..."
UPLOAD_RESPONSE=$(curl -s -X POST \
-H "Authorization: token ${GITEA_TOKEN}" \
-F "attachment=@${ASSET}" \
"${API_URL}/repos/${REPO_OWNER}/${REPO_NAME}/releases/${RELEASE_ID}/assets?name=${FILENAME}")
echo "Upload response for ${FILENAME}: ${UPLOAD_RESPONSE}"
done
echo "Release ${VERSION} created successfully with all assets!"
# Verify release exists
VERIFY=$(curl -s -H "Authorization: token ${GITEA_TOKEN}" \
"${API_URL}/repos/${REPO_OWNER}/${REPO_NAME}/releases/tags/${VERSION}")
echo "Verification: ${VERIFY}" | head -c 500

.github/workflows/ci.yaml vendored Normal file

@@ -0,0 +1,53 @@
name: CI
on:
push:
branches: [main, develop]
pull_request:
branches: [main]
jobs:
test:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Set up Go
uses: actions/setup-go@v5
with:
go-version: '1.23'
- name: Download libsecp256k1
run: |
wget -q https://git.mleku.dev/mleku/nostr/raw/branch/main/crypto/p8k/libsecp256k1.so -O libsecp256k1.so
chmod +x libsecp256k1.so
- name: Run tests
run: |
export LD_LIBRARY_PATH="${LD_LIBRARY_PATH:+$LD_LIBRARY_PATH:}$(pwd)"
CGO_ENABLED=0 go test ./...
- name: Build binary
run: |
CGO_ENABLED=0 go build -o orly .
./orly version
lint:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Set up Go
uses: actions/setup-go@v5
with:
go-version: '1.23'
- name: Check go mod tidy
run: |
go mod tidy
git diff --exit-code go.mod go.sum
- name: Run go vet
run: CGO_ENABLED=0 go vet ./...


@@ -1,88 +0,0 @@
# This workflow will build a golang project
# For more information see: https://docs.github.com/en/actions/automating-builds-and-tests/building-and-testing-go
#
# NOTE: All builds use CGO_ENABLED=0 since p8k library uses purego (not CGO)
# The library dynamically loads libsecp256k1 at runtime via purego
#
# Release Process:
# 1. Update the version in the pkg/version/version file (e.g. v1.2.3)
# 2. Create and push a tag matching the version:
# git tag v1.2.3
# git push origin v1.2.3
# 3. The workflow will automatically:
# - Build binaries for multiple platforms (Linux, macOS, Windows)
# - Create a GitHub release with the binaries
# - Generate release notes
name: Go
on:
push:
tags:
- "v[0-9]+.[0-9]+.[0-9]+"
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Set up Go
uses: actions/setup-go@v4
with:
go-version: "1.25"
- name: Build (Pure Go + purego)
run: CGO_ENABLED=0 go build -v ./...
- name: Test (Pure Go + purego)
run: |
# Copy the libsecp256k1.so to root directory so tests can find it
cp pkg/crypto/p8k/libsecp256k1.so .
CGO_ENABLED=0 go test -v $(go list ./... | xargs -n1 sh -c 'ls $0/*_test.go 1>/dev/null 2>&1 && echo $0' | grep .)
release:
needs: build
runs-on: ubuntu-latest
permissions:
contents: write
packages: write
steps:
- uses: actions/checkout@v4
- name: Set up Go
uses: actions/setup-go@v4
with:
go-version: '1.25'
- name: Build Release Binaries (Pure Go + purego)
if: startsWith(github.ref, 'refs/tags/v')
run: |
# Extract version from tag (e.g., v1.2.3 -> 1.2.3)
VERSION=${GITHUB_REF#refs/tags/v}
echo "Building release binaries for version $VERSION (pure Go + purego)"
# Create directory for binaries
mkdir -p release-binaries
# Copy the pre-compiled libsecp256k1.so for Linux AMD64
cp pkg/crypto/p8k/libsecp256k1.so release-binaries/libsecp256k1-linux-amd64.so
# Build for Linux AMD64 (pure Go + purego dynamic loading)
echo "Building Linux AMD64 (pure Go + purego dynamic loading)..."
GOEXPERIMENT=greenteagc,jsonv2 GOOS=linux GOARCH=amd64 CGO_ENABLED=0 \
go build -ldflags "-s -w" -o release-binaries/orly-${VERSION}-linux-amd64 .
# Create checksums
cd release-binaries
sha256sum * > SHA256SUMS.txt
cd ..
- name: Create GitHub Release
if: startsWith(github.ref, 'refs/tags/v')
uses: softprops/action-gh-release@v1
with:
files: release-binaries/*
draft: false
prerelease: false
generate_release_notes: true

.github/workflows/release.yaml vendored Normal file

@@ -0,0 +1,154 @@
name: Release
on:
push:
tags:
- 'v*'
jobs:
build:
runs-on: ubuntu-latest
strategy:
matrix:
include:
- goos: linux
goarch: amd64
platform: linux-amd64
ext: ""
lib: libsecp256k1.so
- goos: linux
goarch: arm64
platform: linux-arm64
ext: ""
lib: libsecp256k1.so
- goos: darwin
goarch: amd64
platform: darwin-amd64
ext: ""
lib: libsecp256k1.dylib
- goos: darwin
goarch: arm64
platform: darwin-arm64
ext: ""
lib: libsecp256k1.dylib
- goos: windows
goarch: amd64
platform: windows-amd64
ext: ".exe"
lib: libsecp256k1.dll
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Set up Go
uses: actions/setup-go@v5
with:
go-version: '1.23'
- name: Set up Node.js
uses: actions/setup-node@v4
with:
node-version: '20'
- name: Install bun
run: |
curl -fsSL https://bun.sh/install | bash
echo "$HOME/.bun/bin" >> $GITHUB_PATH
- name: Build Web UI
run: |
cd app/web
$HOME/.bun/bin/bun install
$HOME/.bun/bin/bun run build
- name: Get version
id: version
run: echo "version=$(cat pkg/version/version)" >> $GITHUB_OUTPUT
- name: Build binary
env:
CGO_ENABLED: 0
GOOS: ${{ matrix.goos }}
GOARCH: ${{ matrix.goarch }}
run: |
VERSION=${{ steps.version.outputs.version }}
OUTPUT="orly-${VERSION}-${{ matrix.platform }}${{ matrix.ext }}"
go build -ldflags "-s -w -X main.version=${VERSION}" -o ${OUTPUT} .
sha256sum ${OUTPUT} > ${OUTPUT}.sha256
- name: Download runtime library
run: |
VERSION=${{ steps.version.outputs.version }}
LIB="${{ matrix.lib }}"
wget -q "https://git.mleku.dev/mleku/nostr/raw/branch/main/crypto/p8k/${LIB}" -O "${LIB}" || true
if [ -f "${LIB}" ]; then
sha256sum "${LIB}" > "${LIB}.sha256"
fi
- name: Upload artifacts
uses: actions/upload-artifact@v4
with:
name: orly-${{ matrix.platform }}
path: |
orly-*
libsecp256k1*
release:
needs: build
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Get version
id: version
run: echo "version=$(cat pkg/version/version)" >> $GITHUB_OUTPUT
- name: Download all artifacts
uses: actions/download-artifact@v4
with:
path: artifacts
merge-multiple: true
- name: Create combined checksums
run: |
cd artifacts
cat *.sha256 | sort -k2 > SHA256SUMS.txt
rm -f *.sha256
- name: List release files
run: ls -la artifacts/
- name: Create Release
uses: softprops/action-gh-release@v1
with:
name: ORLY ${{ steps.version.outputs.version }}
body: |
## ORLY ${{ steps.version.outputs.version }}
### Downloads
Download the appropriate binary for your platform. The `libsecp256k1` library is optional but recommended for better cryptographic performance.
### Installation
1. Download the binary for your platform
2. (Optional) Download the corresponding `libsecp256k1` library
3. Place both files in the same directory
4. Make the binary executable: `chmod +x orly-*`
5. Run: `./orly-*-linux-amd64` (or your platform's binary)
### Verify Downloads
```bash
sha256sum -c SHA256SUMS.txt
```
### Configuration
See the [repository documentation](https://git.mleku.dev/mleku/next.orly.dev) for configuration options.
files: |
artifacts/*
draft: false
prerelease: false

.gitignore vendored

@@ -8,24 +8,10 @@
*
# Especially these
.vscode
.vscode/
.vscode/**
**/.vscode
**/.vscode/**
node_modules
node_modules/
node_modules/**
**/node_modules
**/node_modules/
**/node_modules/**
**/.vscode/
/test*
.idea
.idea/
.idea/**
/.idea/
/.idea/**
/.idea
# and others
/go.work.sum
/secp256k1/
@@ -81,9 +67,7 @@ cmd/benchmark/data
!license
!readme
!*.ico
!.idea/*
!*.xml
!.name
!.gitignore
!version
!out.jsonl
@@ -95,6 +79,7 @@ cmd/benchmark/data
!*.svelte
!.github/**
!.github/workflows/**
!.claude/**
!app/web/dist/**
!app/web/dist/*.js
!app/web/dist/*.js.map
@@ -103,52 +88,30 @@ cmd/benchmark/data
!app/web/dist/*.ico
!app/web/dist/*.png
!app/web/dist/*.svg
!Dockerfile
!Dockerfile*
!.dockerignore
!libsecp256k1.so
# ...even if they are in subdirectories
!*/
# Re-ignore IDE directories (must come after !*/)
.idea/
**/.idea/
# Re-ignore node_modules everywhere (must come after !*/)
node_modules/
**/node_modules/
/blocklist.json
/gui/gui/main.wasm
/gui/gui/index.html
pkg/database/testrealy
/.idea/workspace.xml
/.idea/dictionaries/project.xml
/.idea/shelf/Add_tombstone_handling__enhance_event_ID_logic__update_imports.xml
/.idea/.gitignore
/.idea/misc.xml
/.idea/modules.xml
/.idea/orly.dev.iml
/.idea/vcs.xml
/.idea/codeStyles/codeStyleConfig.xml
/.idea/material_theme_project_new.xml
/.idea/orly.iml
/.idea/go.imports.xml
/.idea/inspectionProfiles/Project_Default.xml
/.idea/.name
/ctxproxy.config.yml
cmd/benchmark/external/**
private*
pkg/protocol/directory-client/node_modules
# Build outputs
build/orly-*
build/libsecp256k1-*
build/SHA256SUMS-*
Dockerfile
/cmd/benchmark/reports/run_20251116_172629/aggregate_report.txt
/cmd/benchmark/reports/run_20251116_172629/next-orly_results.txt
/cmd/benchmark/reports/run_20251116_173450/aggregate_report.txt
/cmd/benchmark/reports/run_20251116_173450/next-orly_results.txt
/cmd/benchmark/reports/run_20251116_173846/aggregate_report.txt
/cmd/benchmark/reports/run_20251116_173846/next-orly_results.txt
/cmd/benchmark/reports/run_20251116_174246/aggregate_report.txt
/cmd/benchmark/reports/run_20251116_174246/next-orly_results.txt
/cmd/benchmark/reports/run_20251116_182250/aggregate_report.txt
/cmd/benchmark/reports/run_20251116_182250/next-orly_results.txt
/cmd/benchmark/reports/run_20251116_203720/aggregate_report.txt
/cmd/benchmark/reports/run_20251116_203720/next-orly_results.txt
/cmd/benchmark/reports/run_20251116_225648/aggregate_report.txt
/cmd/benchmark/reports/run_20251116_225648/next-orly_results.txt
/cmd/benchmark/reports/run_20251116_233547/aggregate_report.txt
/cmd/benchmark/reports/run_20251116_233547/next-orly_results.txt
cmd/benchmark/data
.claude/skills/skill-creator/scripts/__pycache__/


@@ -0,0 +1,442 @@
# Implementation Plan: Directory Spider (Issue #7)
## Overview
Add a new "directory spider" that discovers relays by crawling kind 10002 (relay list) events, expanding outward in hops from whitelisted users, and then fetching essential metadata events (kinds 0, 3, 10000, 10002) from the discovered network.
**Key Characteristics:**
- Runs once per day (configurable)
- Single-threaded, serial operations to minimize load
- 3-hop relay discovery from whitelisted users
- Fetches: kind 0 (profile), 3 (follow list), 10000 (mute list), 10002 (relay list)
---
## Architecture
### New Package Structure
```
pkg/spider/
├── spider.go # Existing follows spider
├── directory.go # NEW: Directory spider implementation
├── directory_test.go # NEW: Tests
└── common.go # NEW: Shared utilities (extract from spider.go)
```
### Core Components
```go
// DirectorySpider manages the daily relay discovery and metadata sync
type DirectorySpider struct {
ctx context.Context
cancel context.CancelFunc
db *database.D
pub publisher.I
// Configuration
interval time.Duration // Default: 24h
maxHops int // Default: 3
// State
running atomic.Bool
lastRun time.Time
// Relay discovery
discoveredRelays map[string]int // URL -> hop distance
processedRelays map[string]bool // Already fetched from
// Callbacks for integration
getSeedPubkeys func() [][]byte // Whitelisted users (from ACL)
}
```
---
## Implementation Phases
### Phase 1: Core Directory Spider Structure
**File:** `pkg/spider/directory.go`
1. **Create DirectorySpider struct** with:
- Context management for cancellation
- Database and publisher references
- Configuration (interval, max hops)
- State tracking (discovered relays, processed relays)
2. **Constructor:** `NewDirectorySpider(ctx, db, pub, interval, maxHops)`
- Initialize maps and state
- Set defaults (24h interval, 3 hops)
3. **Lifecycle methods:**
- `Start()` - Launch main goroutine
- `Stop()` - Cancel context and wait for shutdown
- `TriggerNow()` - Force immediate run (for testing/admin)
### Phase 2: Relay Discovery (3-Hop Expansion)
**Algorithm:**
```
Round 1: Get relay lists from whitelisted users
- Query local DB for kind 10002 events from seed pubkeys
- Extract relay URLs from "r" tags
- Mark as hop 0 relays
Round 2-4 (3 iterations):
- For each relay at current hop level (in serial):
1. Connect to relay
2. Query for ALL kind 10002 events (limit: 5000)
3. Extract new relay URLs
4. Mark as hop N+1 relays
5. Close connection
6. Sleep briefly between relays (rate limiting)
```
**Key Methods:**
```go
// discoverRelays performs the 3-hop relay expansion
func (ds *DirectorySpider) discoverRelays(ctx context.Context) error
// fetchRelayListsFromRelay connects to a relay and fetches kind 10002 events
func (ds *DirectorySpider) fetchRelayListsFromRelay(ctx context.Context, relayURL string) ([]*event.T, error)
// extractRelaysFromEvents parses kind 10002 events and extracts relay URLs
func (ds *DirectorySpider) extractRelaysFromEvents(events []*event.T) []string
```
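For the extraction step, a self-contained sketch that pulls relay URLs out of "r" tags and deduplicates them; it operates on plain `[][]string` tag slices because the exact `event.T` tag accessor is not pinned down in this plan.
```go
import "strings"
// extractRelayURLs collects unique relay URLs from the "r" tags of
// kind 10002 events. The [][]string tag shape is an assumption; the
// real implementation would read tags from *event.T.
func extractRelayURLs(tagSets ...[][]string) []string {
	seen := make(map[string]struct{})
	var urls []string
	for _, tags := range tagSets {
		for _, tag := range tags {
			if len(tag) < 2 || tag[0] != "r" {
				continue
			}
			// Naive normalization: lowercase and strip a trailing slash.
			u := strings.TrimSuffix(strings.ToLower(tag[1]), "/")
			if _, ok := seen[u]; ok {
				continue
			}
			seen[u] = struct{}{}
			urls = append(urls, u)
		}
	}
	return urls
}
```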
### Phase 3: Metadata Fetching
After relay discovery, fetch essential metadata from all discovered relays:
**Kinds to fetch:**
- Kind 0: Profile metadata (replaceable)
- Kind 3: Follow lists (replaceable)
- Kind 10000: Mute lists (replaceable)
- Kind 10002: Relay lists (already have many, but get latest)
**Fetch Strategy:**
```go
// fetchMetadataFromRelays iterates through discovered relays serially
func (ds *DirectorySpider) fetchMetadataFromRelays(ctx context.Context) error {
for relayURL := range ds.discoveredRelays {
// Skip if already processed
if ds.processedRelays[relayURL] {
continue
}
// Fetch each kind type
for _, k := range []int{0, 3, 10000, 10002} {
events, err := ds.fetchKindFromRelay(ctx, relayURL, k)
// Store events...
}
ds.processedRelays[relayURL] = true
// Rate limiting sleep
time.Sleep(500 * time.Millisecond)
}
return nil
}
```
**Query Filters:**
- For replaceable events (0, 3, 10000, 10002): No time filter, let relay return latest
- Limit per query: 1000-5000 events
- Use pagination if relay supports it
### Phase 4: WebSocket Client for Fetching
**Reuse existing patterns from spider.go:**
```go
// fetchFromRelay handles connection, query, and cleanup
func (ds *DirectorySpider) fetchFromRelay(ctx context.Context, relayURL string, f *filter.F) ([]*event.T, error) {
// Create timeout context (30 seconds per relay)
ctx, cancel := context.WithTimeout(ctx, 30*time.Second)
defer cancel()
// Connect using ws.Client (from pkg/protocol/ws)
client, err := ws.NewClient(ctx, relayURL)
if err != nil {
return nil, err
}
defer client.Close()
// Subscribe with filter
sub, err := client.Subscribe(ctx, f)
if err != nil {
return nil, err
}
// Collect events until EOSE or timeout
var events []*event.T
for ev := range sub.Events {
events = append(events, ev)
}
return events, nil
}
```
### Phase 5: Event Storage
**Storage Strategy:**
```go
func (ds *DirectorySpider) storeEvents(ctx context.Context, events []*event.T) (saved, duplicates int) {
for _, ev := range events {
_, err := ds.db.SaveEvent(ctx, ev)
if err != nil {
if errors.Is(err, database.ErrDuplicate) {
duplicates++
continue
}
// Log other errors but continue
log.W.F("failed to save event %s: %v", ev.ID.String(), err)
continue
}
saved++
// Publish to active subscribers
ds.pub.Deliver(ev)
}
return
}
```
### Phase 6: Main Loop
```go
func (ds *DirectorySpider) mainLoop() {
// Calculate time until next run
ticker := time.NewTicker(ds.interval)
defer ticker.Stop()
// Run immediately on start
ds.runOnce()
for {
select {
case <-ds.ctx.Done():
return
case <-ticker.C:
ds.runOnce()
}
}
}
func (ds *DirectorySpider) runOnce() {
if !ds.running.CompareAndSwap(false, true) {
log.I.F("directory spider already running, skipping")
return
}
defer ds.running.Store(false)
log.I.F("starting directory spider run")
start := time.Now()
// Reset state
ds.discoveredRelays = make(map[string]int)
ds.processedRelays = make(map[string]bool)
// Phase 1: Discover relays via 3-hop expansion
if err := ds.discoverRelays(ds.ctx); err != nil {
log.E.F("relay discovery failed: %v", err)
return
}
log.I.F("discovered %d relays", len(ds.discoveredRelays))
// Phase 2: Fetch metadata from all relays
if err := ds.fetchMetadataFromRelays(ds.ctx); err != nil {
log.E.F("metadata fetch failed: %v", err)
return
}
ds.lastRun = time.Now()
log.I.F("directory spider completed in %v", time.Since(start))
}
```
### Phase 7: Configuration
**New environment variables:**
```go
// In app/config/config.go
DirectorySpiderEnabled bool `env:"ORLY_DIRECTORY_SPIDER" default:"false" usage:"enable directory spider for metadata sync"`
DirectorySpiderInterval time.Duration `env:"ORLY_DIRECTORY_SPIDER_INTERVAL" default:"24h" usage:"how often to run directory spider"`
DirectorySpiderMaxHops int `env:"ORLY_DIRECTORY_SPIDER_HOPS" default:"3" usage:"maximum hops for relay discovery"`
```
### Phase 8: Integration with app/main.go
```go
// After existing spider initialization
if badgerDB, ok := db.(*database.D); ok && cfg.DirectorySpiderEnabled {
l.directorySpider, err = spider.NewDirectorySpider(
ctx,
badgerDB,
l.publishers,
cfg.DirectorySpiderInterval,
cfg.DirectorySpiderMaxHops,
)
if err != nil {
return nil, fmt.Errorf("failed to create directory spider: %w", err)
}
// Set callback to get seed pubkeys from ACL
l.directorySpider.SetSeedCallback(func() [][]byte {
// Get whitelisted users from all ACLs
var pubkeys [][]byte
for _, aclInstance := range acl.Registry.ACL {
if follows, ok := aclInstance.(*acl.Follows); ok {
pubkeys = append(pubkeys, follows.GetFollowedPubkeys()...)
}
}
return pubkeys
})
l.directorySpider.Start()
}
```
---
## Self-Relay Detection
Reuse the existing `isSelfRelay()` pattern from spider.go:
```go
func (ds *DirectorySpider) isSelfRelay(relayURL string) bool {
// Use NIP-11 to get relay pubkey
// Compare against our relay identity pubkey
// Cache results to avoid repeated requests
}
```
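A hedged sketch of that check: NIP-11 relay information documents are served over HTTP(S) at the relay's WebSocket URL when requested with an `Accept: application/nostr+json` header, and include the relay's `pubkey` field. The sketch below uses only the standard library; the caching mentioned above is elided.
```go
import (
	"context"
	"encoding/json"
	"net/http"
	"strings"
	"time"
)
// fetchRelayPubkey retrieves the "pubkey" field from a relay's NIP-11
// information document; the caller compares it to our identity pubkey.
func fetchRelayPubkey(ctx context.Context, relayURL string) (string, error) {
	httpURL := strings.Replace(relayURL, "ws://", "http://", 1)
	httpURL = strings.Replace(httpURL, "wss://", "https://", 1)
	ctx, cancel := context.WithTimeout(ctx, 10*time.Second)
	defer cancel()
	req, err := http.NewRequestWithContext(ctx, http.MethodGet, httpURL, nil)
	if err != nil {
		return "", err
	}
	req.Header.Set("Accept", "application/nostr+json")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	var info struct {
		Pubkey string `json:"pubkey"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&info); err != nil {
		return "", err
	}
	return info.Pubkey, nil
}
```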
---
## Error Handling & Resilience
1. **Connection Timeouts:** 30 seconds per relay
2. **Query Timeouts:** 60 seconds per query
3. **Graceful Degradation:** Continue to next relay on failure
4. **Rate Limiting:** 500ms sleep between relays
5. **Memory Limits:** Process events in batches of 1000
6. **Context Cancellation:** Check at each step for shutdown
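For the cancellation point in particular, the idiomatic non-blocking check below can sit between relays and between kind queries:
```go
// Non-blocking shutdown check; return promptly if the spider is stopping.
select {
case <-ctx.Done():
	return ctx.Err()
default:
	// not cancelled; continue with the next relay
}
```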
---
## Testing Strategy
### Unit Tests
```go
// pkg/spider/directory_test.go
func TestExtractRelaysFromEvents(t *testing.T)
func TestDiscoveryHopTracking(t *testing.T)
func TestSelfRelayFiltering(t *testing.T)
```
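As a concrete starting point, the first of these could exercise the standalone tag-parsing helper sketched in Phase 2 above:
```go
import "testing"
func TestExtractRelaysFromEvents(t *testing.T) {
	tags := [][]string{
		{"r", "wss://relay.example.com/"},
		{"r", "wss://relay.example.com"}, // duplicate after normalization
		{"p", "deadbeef"},                // not a relay tag
	}
	got := extractRelayURLs(tags)
	if len(got) != 1 || got[0] != "wss://relay.example.com" {
		t.Fatalf("unexpected result: %v", got)
	}
}
```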
### Integration Tests
```go
func TestDirectorySpiderE2E(t *testing.T) {
// Start test relay
// Populate with kind 10002 events
// Run directory spider
// Verify events fetched and stored
}
```
---
## Logging
Use existing `lol.mleku.dev` logging patterns:
```go
log.I.F("directory spider: starting relay discovery")
log.D.F("directory spider: hop %d, discovered %d new relays", hop, count)
log.W.F("directory spider: failed to connect to %s: %v", url, err)
log.E.F("directory spider: critical error: %v", err)
```
---
## Implementation Order
1. **Phase 1:** Core struct and lifecycle (1-2 hours)
2. **Phase 2:** Relay discovery with hop expansion (2-3 hours)
3. **Phase 3:** Metadata fetching (1-2 hours)
4. **Phase 4:** WebSocket client integration (1 hour)
5. **Phase 5:** Event storage (30 min)
6. **Phase 6:** Main loop and scheduling (1 hour)
7. **Phase 7:** Configuration (30 min)
8. **Phase 8:** Integration with main.go (30 min)
9. **Testing:** Unit and integration tests (2-3 hours)
**Total Estimate:** 10-14 hours
---
## Future Enhancements (Out of Scope)
- Web UI status page for directory spider
- Metrics/stats collection (relays discovered, events fetched)
- Configurable kind list to fetch
- Priority ordering of relays (closer hops first)
- Persistent relay discovery cache between runs
---
## Dependencies
**Existing packages to use:**
- `pkg/protocol/ws` - WebSocket client
- `pkg/database` - Event storage
- `pkg/encoders/filter` - Query filter construction
- `pkg/acl` - Get whitelisted users
- `pkg/sync` - NIP-11 cache for self-detection (if needed)
**No new external dependencies required.**
---
## Follow-up Items (Post-Implementation)
### TODO: Verify Connection Behavior is Not Overly Aggressive
**Issue:** The current implementation creates a **new WebSocket connection for each kind query** when fetching metadata. For each relay, this means:
1. Connect → fetch kind 0 → disconnect
2. Connect → fetch kind 3 → disconnect
3. Connect → fetch kind 10000 → disconnect
4. Connect → fetch kind 10002 → disconnect
This could be seen as aggressive by remote relays and may trigger rate limiting or IP bans.
**Verification needed:**
- [ ] Monitor logs with `ORLY_LOG_LEVEL=debug` to see per-kind fetch results
- [ ] Check if relays are returning events for all 4 kinds or just kind 0
- [ ] Look for WARNING logs about connection failures or rate limiting
- [ ] Verify the 500ms delay between relays is sufficient
**Potential optimization (if needed):**
- Refactor `fetchMetadataFromRelays()` to use a single connection per relay
- Fetch all 4 kinds using multiple subscriptions on one connection
- Example pattern:
```go
client, err := ws.RelayConnect(ctx, relayURL)
defer client.Close()
for _, k := range kindsToFetch {
events, _ := fetchKindOnConnection(client, k)
// ...
}
```
**Priority:** Medium - only optimize if monitoring shows issues with the current approach


@@ -0,0 +1,974 @@
# Implementation Plan: Policy Hot Reload, Follow List Whitelisting, and Web UI
**Issue:** https://git.nostrdev.com/mleku/next.orly.dev/issues/6
## Overview
This plan implements three interconnected features for ORLY's policy system:
1. **Dynamic Policy Configuration** via kind 12345 events (hot reload)
2. **Administrator Follow List Whitelisting** within the policy system
3. **Web Interface** for policy management with JSON editing
## Architecture Summary
### Current System Analysis
**Policy System** ([pkg/policy/policy.go](pkg/policy/policy.go)):
- Policy loaded from `~/.config/ORLY/policy.json` at startup
- `P` struct with unexported `rules` field (map[int]Rule)
- `PolicyManager` manages script runners for external policy scripts
- `LoadFromFile()` method exists for loading policy from disk
- No hot reload mechanism currently exists
**ACL System** ([pkg/acl/follows.go](pkg/acl/follows.go)):
- Separate from policy system
- Manages admin/owner/follows lists for write access control
- Fetches kind 3 events from relays
- Has callback mechanism for updates
**Event Handling** ([app/handle-event.go](app/handle-event.go), lines 213-226):
- Special handling for NIP-43 events (join/leave requests)
- Pattern: Check kind early, process, return early
**Web UI**:
- Svelte-based component architecture
- Tab-based navigation in [app/web/src/App.svelte](app/web/src/App.svelte)
- API endpoints follow `/api/<feature>/<action>` pattern
## Feature 1: Dynamic Policy Configuration (Kind 12345)
### Design
**Event Kind:** 12345 (Relay Policy Configuration)
**Purpose:** Allow admins/owners to update policy configuration via Nostr event
**Security:** Only admins/owners can publish; only visible to admins/owners
**Process Flow:**
1. Admin/owner creates kind 12345 event with JSON policy in `content` field
2. Relay receives event via WebSocket
3. Validate sender is admin/owner
4. Pause policy manager (stop script runners)
5. Parse and validate JSON configuration
6. Apply new policy configuration
7. Persist to `~/.config/ORLY/policy.json`
8. Resume policy manager (restart script runners)
9. Send OK response
### Implementation Steps
#### Step 1.1: Define Kind Constant
**File:** Create `pkg/protocol/policyconfig/policyconfig.go`
```go
package policyconfig
const (
// KindPolicyConfig is a relay-internal event for policy configuration updates
// Only visible to admins and owners
KindPolicyConfig uint16 = 12345
)
```
#### Step 1.2: Add Policy Hot Reload Methods
**File:** [pkg/policy/policy.go](pkg/policy/policy.go)
Add methods to `P` struct:
```go
// Reload loads policy from JSON bytes and applies it to the existing policy instance
// This pauses the policy manager, updates configuration, and resumes
func (p *P) Reload(policyJSON []byte) error
// Pause pauses the policy manager and stops all script runners
func (p *P) Pause() error
// Resume resumes the policy manager and restarts script runners
func (p *P) Resume() error
// SaveToFile persists the current policy configuration to disk
func (p *P) SaveToFile(configPath string) error
```
**Implementation Details:**
- `Reload()` should:
- Call `Pause()` to stop all script runners
- Unmarshal JSON into policy struct (using shadow struct pattern)
- Validate configuration
- Populate binary caches
- Call `SaveToFile()` to persist
- Call `Resume()` to restart scripts
- Return error if any step fails
- `Pause()` should:
- Iterate through `p.manager.runners` map
- Call `Stop()` on each runner
- Set a paused flag on the manager
- `Resume()` should:
- Clear paused flag
- Call `startPolicyIfExists()` to restart default script
- Restart any rule-specific scripts that were running
- `SaveToFile()` should:
- Marshal policy to JSON (using pJSON shadow struct)
- Write atomically to config path (write to temp file, then rename)
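A minimal sketch of that atomic write, independent of the `P` struct (the helper name is hypothetical): write to a temp file in the same directory, then rename over the target, so readers never observe a half-written config.
```go
import (
	"encoding/json"
	"os"
	"path/filepath"
)
// saveJSONAtomically marshals v and writes it to path via a temp file
// plus rename, so a crash mid-write cannot corrupt the existing config.
func saveJSONAtomically(path string, v any) error {
	data, err := json.MarshalIndent(v, "", "  ")
	if err != nil {
		return err
	}
	tmp, err := os.CreateTemp(filepath.Dir(path), ".policy-*.tmp")
	if err != nil {
		return err
	}
	defer os.Remove(tmp.Name()) // no-op once the rename has succeeded
	if _, err := tmp.Write(data); err != nil {
		tmp.Close()
		return err
	}
	if err := tmp.Close(); err != nil {
		return err
	}
	return os.Rename(tmp.Name(), path)
}
```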
#### Step 1.3: Handle Kind 12345 Events
**File:** [app/handle-event.go](app/handle-event.go)
Add handling after NIP-43 special events (after line 226):
```go
// Handle policy configuration update events (kind 12345)
case policyconfig.KindPolicyConfig:
// Process policy config update and return early
if err = l.HandlePolicyConfigUpdate(env.E); chk.E(err) {
log.E.F("failed to process policy config update: %v", err)
if err = Ok.Error(l, env, err.Error()); chk.E(err) {
return
}
return
}
// Send OK response
if err = Ok.Ok(l, env, "policy configuration updated"); chk.E(err) {
return
}
return
```
Create new file: `app/handle-policy-config.go`
```go
// HandlePolicyConfigUpdate processes kind 12345 policy configuration events
// Only admins and owners can update policy configuration
func (l *Listener) HandlePolicyConfigUpdate(ev *event.E) error {
// 1. Verify sender is admin or owner
// 2. Parse JSON from event content
// 3. Validate JSON structure
// 4. Call l.policyManager.Reload(jsonBytes)
// 5. Log success/failure
return nil
}
```
**Security Checks:**
- Verify `ev.Pubkey` is in admins or owners list (see the sketch below)
- Validate JSON syntax before applying
- Catch all errors and return descriptive messages
- Log all policy update attempts (success and failure)
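A sketch of the first check, assuming admin and owner pubkeys are held as raw byte slices (as elsewhere in this plan); the helper shape matches the `isAdminOrOwner` referenced in Step 1.4 below.
```go
import "bytes"
// isAdminOrOwner reports whether pubkey appears in either privileged list.
func isAdminOrOwner(pubkey []byte, admins, owners [][]byte) bool {
	for _, a := range admins {
		if bytes.Equal(pubkey, a) {
			return true
		}
	}
	for _, o := range owners {
		if bytes.Equal(pubkey, o) {
			return true
		}
	}
	return false
}
```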
#### Step 1.4: Query Filtering (Optional)
**File:** [app/handle-req.go](app/handle-req.go)
Add filter to hide kind 12345 from non-admins:
```go
// In handleREQ, after ACL checks:
// Filter out policy config events (kind 12345) for non-admin users
if !isAdminOrOwner(l.authedPubkey.Load(), l.Admins, l.Owners) {
// Remove kind 12345 from filter
for _, f := range filters {
f.Kinds.Remove(policyconfig.KindPolicyConfig)
}
}
```
## Feature 2: Administrator Follow List Whitelisting
### Design
**Purpose:** Enable policy-based follow list whitelisting (separate from ACL follows)
**Use Case:** Policy admins can designate follows who get special policy privileges
**Configuration:**
```json
{
"policy_admins": ["admin_pubkey_hex_1", "admin_pubkey_hex_2"],
"policy_follow_whitelist_enabled": true,
"rules": {
"1": {
"write_allow_follows": true // Allow writes from policy admin follows
}
}
}
```
### Implementation Steps
#### Step 2.1: Extend Policy Configuration Structure
**File:** [pkg/policy/policy.go](pkg/policy/policy.go)
Extend `P` struct:
```go
type P struct {
Kind Kinds `json:"kind"`
rules map[int]Rule
Global Rule `json:"global"`
DefaultPolicy string `json:"default_policy"`
// New fields for follow list whitelisting
PolicyAdmins []string `json:"policy_admins,omitempty"`
PolicyFollowWhitelistEnabled bool `json:"policy_follow_whitelist_enabled,omitempty"`
// Unexported cached data
policyAdminsBin [][]byte // Binary cache for admin pubkeys
policyFollows [][]byte // Cached follow list from policy admins
policyFollowsMx sync.RWMutex // Protect follows list
manager *PolicyManager
}
```
Extend `Rule` struct:
```go
type Rule struct {
// ... existing fields ...
// New field for follow-based whitelisting
WriteAllowFollows bool `json:"write_allow_follows,omitempty"`
ReadAllowFollows bool `json:"read_allow_follows,omitempty"`
}
```
Update `pJSON` shadow struct to include new fields.
#### Step 2.2: Add Follow List Fetching
**File:** [pkg/policy/policy.go](pkg/policy/policy.go)
Add methods:
```go
// FetchPolicyFollows fetches follow lists (kind 3) from database for policy admins
// This is called during policy load and can be called periodically
func (p *P) FetchPolicyFollows(db database.D) error {
p.policyFollowsMx.Lock()
defer p.policyFollowsMx.Unlock()
// Clear existing follows
p.policyFollows = nil
// For each policy admin, query kind 3 events
for _, adminPubkey := range p.policyAdminsBin {
// Build filter for kind 3 from this admin
// Query database for latest kind 3 event
// Extract p-tags from event
// Add to p.policyFollows list
}
return nil
}
// IsPolicyFollow checks if pubkey is in policy admin follows
func (p *P) IsPolicyFollow(pubkey []byte) bool {
p.policyFollowsMx.RLock()
defer p.policyFollowsMx.RUnlock()
for _, follow := range p.policyFollows {
if utils.FastEqual(pubkey, follow) {
return true
}
}
return false
}
```
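The p-tag extraction elided above could look like the following sketch, which decodes hex "p" tags into the binary form the cache stores; the `[][]string` tag shape is again an assumption about the event type, and the stdlib `encoding/hex` stands in for the project's hex encoder.
```go
import "encoding/hex"
// extractFollows decodes the pubkeys in a follow list's "p" tags
// into 32-byte binary form, skipping malformed entries.
func extractFollows(tags [][]string) [][]byte {
	var follows [][]byte
	for _, tag := range tags {
		if len(tag) < 2 || tag[0] != "p" {
			continue
		}
		pk, err := hex.DecodeString(tag[1])
		if err != nil || len(pk) != 32 {
			continue
		}
		follows = append(follows, pk)
	}
	return follows
}
```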
#### Step 2.3: Integrate Follow Checking in Policy Rules
**File:** [pkg/policy/policy.go](pkg/policy/policy.go)
Update `checkRulePolicy()` method (around line 1062):
```go
// In write access checks, after checking write_allow list:
if access == "write" {
// Check if follow-based whitelisting is enabled for this rule
if rule.WriteAllowFollows && p.PolicyFollowWhitelistEnabled {
if p.IsPolicyFollow(loggedInPubkey) {
return true, nil // Allow write from policy admin follow
}
}
// Continue with existing write_allow checks...
}
// Similar for read access:
if access == "read" {
if rule.ReadAllowFollows && p.PolicyFollowWhitelistEnabled {
if p.IsPolicyFollow(loggedInPubkey) {
return true, nil // Allow read from policy admin follow
}
}
// Continue with existing read_allow checks...
}
```
#### Step 2.4: Periodic Follow List Refresh
**File:** [pkg/policy/policy.go](pkg/policy/policy.go)
Add to `NewWithManager()`:
```go
// Start periodic follow list refresh for policy admins
if len(policy.PolicyAdmins) > 0 && policy.PolicyFollowWhitelistEnabled {
go policy.startPeriodicFollowRefresh(ctx)
}
```
Add method:
```go
// startPeriodicFollowRefresh periodically fetches policy admin follow lists
func (p *P) startPeriodicFollowRefresh(ctx context.Context) {
ticker := time.NewTicker(15 * time.Minute) // Refresh every 15 minutes
defer ticker.Stop()
// Fetch immediately on startup
if err := p.FetchPolicyFollows(p.db); err != nil {
log.E.F("failed to fetch policy follows: %v", err)
}
for {
select {
case <-ctx.Done():
return
case <-ticker.C:
if err := p.FetchPolicyFollows(p.db); err != nil {
log.E.F("failed to fetch policy follows: %v", err)
} else {
log.I.F("refreshed policy admin follow lists")
}
}
}
}
```
**Note:** Need to pass database reference to policy manager. Update `NewWithManager()` signature:
```go
func NewWithManager(ctx context.Context, appName string, enabled bool, db *database.D) *P
```
## Feature 3: Web Interface for Policy Management
### Design
**Components:**
1. `PolicyView.svelte` - Main policy management UI
2. API endpoints for policy CRUD operations
3. JSON editor with validation
4. Follow list viewer
**UI Features:**
- View current policy configuration (read-only JSON display)
- Edit policy JSON with syntax highlighting
- Validate JSON before publishing
- Publish kind 12345 event to update policy
- View policy admin pubkeys
- View follow lists for each policy admin
- Add/remove policy admin pubkeys (updates and republishes config)
### Implementation Steps
#### Step 3.1: Create Policy View Component
**File:** `app/web/src/PolicyView.svelte`
Structure:
```svelte
<script>
export let isLoggedIn = false;
export let userRole = "";
export let policyConfig = null;
export let policyAdmins = [];
export let policyFollows = [];
export let isLoadingPolicy = false;
export let policyMessage = "";
export let policyMessageType = "info";
export let policyEditJson = "";
import { createEventDispatcher } from "svelte";
const dispatch = createEventDispatcher();
// Event handlers
function loadPolicy() { dispatch("loadPolicy"); }
function savePolicy() { dispatch("savePolicy"); }
function validatePolicy() { dispatch("validatePolicy"); }
function addPolicyAdmin() { dispatch("addPolicyAdmin"); }
function removePolicyAdmin(pubkey) { dispatch("removePolicyAdmin", pubkey); }
function refreshFollows() { dispatch("refreshFollows"); }
</script>
<div class="policy-view">
<h2>Policy Configuration Management</h2>
{#if isLoggedIn && (userRole === "owner" || userRole === "admin")}
<!-- Policy JSON Editor Section -->
<div class="policy-section">
<h3>Policy Configuration</h3>
<div class="policy-controls">
<button on:click={loadPolicy}>🔄 Reload</button>
<button on:click={validatePolicy}>✅ Validate</button>
<button on:click={savePolicy}>📤 Publish Update</button>
</div>
<textarea
class="policy-editor"
bind:value={policyEditJson}
spellcheck="false"
placeholder="Policy JSON configuration..."
/>
</div>
<!-- Policy Admins Section -->
<div class="policy-admins-section">
<h3>Policy Administrators</h3>
<p class="section-description">
Policy admins can update configuration and their follows get whitelisted
(if policy_follow_whitelist_enabled is true)
</p>
<div class="admin-list">
{#each policyAdmins as admin}
<div class="admin-item">
<span class="admin-pubkey">{admin}</span>
<button
class="remove-btn"
on:click={() => removePolicyAdmin(admin)}
>
Remove
</button>
</div>
{/each}
</div>
<div class="add-admin">
<input
type="text"
placeholder="npub or hex pubkey"
id="new-admin-input"
/>
<button on:click={addPolicyAdmin}>Add Admin</button>
</div>
</div>
<!-- Follow List Section -->
<div class="policy-follows-section">
<h3>Policy Follow Whitelist</h3>
<button on:click={refreshFollows}>🔄 Refresh Follows</button>
<div class="follows-list">
{#if policyFollows.length === 0}
<p class="no-follows">No follows loaded</p>
{:else}
<p class="follows-count">
{policyFollows.length} pubkey(s) in whitelist
</p>
<div class="follows-grid">
{#each policyFollows as follow}
<div class="follow-item">{follow}</div>
{/each}
</div>
{/if}
</div>
</div>
<!-- Message Display -->
{#if policyMessage}
<div class="policy-message {policyMessageType}">
{policyMessage}
</div>
{/if}
{:else}
<div class="access-denied">
<p>Policy management is only available to relay administrators and owners.</p>
{#if !isLoggedIn}
<button on:click={() => dispatch("openLoginModal")}>
Login
</button>
{/if}
</div>
{/if}
</div>
<style>
/* Policy-specific styling */
.policy-view { /* ... */ }
.policy-editor {
width: 100%;
min-height: 400px;
font-family: 'Monaco', 'Courier New', monospace;
font-size: 0.9em;
padding: 1em;
border: 1px solid var(--border-color);
border-radius: 4px;
background: var(--code-bg);
color: var(--code-text);
}
/* ... more styles ... */
</style>
```
#### Step 3.2: Add Policy Tab to Main App
**File:** [app/web/src/App.svelte](app/web/src/App.svelte)
Add state variables (around line 94):
```javascript
// Policy management state
let policyConfig = null;
let policyAdmins = [];
let policyFollows = [];
let isLoadingPolicy = false;
let policyMessage = "";
let policyMessageType = "info";
let policyEditJson = "";
```
Add tab definition in `tabs` array (look for export/import/sprocket tabs):
```javascript
if (isLoggedIn && (userRole === "owner" || userRole === "admin")) {
tabs.push({
id: "policy",
label: "Policy",
icon: "🛡️",
isSearchTab: false
});
}
```
Add component import:
```javascript
import PolicyView from "./PolicyView.svelte";
```
Add view in main content area (look for {#if selectedTab === "sprocket"}):
```svelte
{:else if selectedTab === "policy"}
<PolicyView
{isLoggedIn}
{userRole}
{policyConfig}
{policyAdmins}
{policyFollows}
{isLoadingPolicy}
{policyMessage}
{policyMessageType}
bind:policyEditJson
on:loadPolicy={handleLoadPolicy}
on:savePolicy={handleSavePolicy}
on:validatePolicy={handleValidatePolicy}
on:addPolicyAdmin={handleAddPolicyAdmin}
on:removePolicyAdmin={handleRemovePolicyAdmin}
on:refreshFollows={handleRefreshFollows}
on:openLoginModal={() => (showLoginModal = true)}
/>
```
Add event handlers:
```javascript
async function handleLoadPolicy() {
isLoadingPolicy = true;
policyMessage = "";
try {
const response = await fetch("/api/policy/config", {
credentials: "include"
});
if (!response.ok) {
throw new Error(`Failed to load policy: ${response.statusText}`);
}
const data = await response.json();
policyConfig = data.config;
policyEditJson = JSON.stringify(data.config, null, 2);
policyAdmins = data.config.policy_admins || [];
policyMessage = "Policy loaded successfully";
policyMessageType = "success";
} catch (error) {
policyMessage = `Error loading policy: ${error.message}`;
policyMessageType = "error";
console.error("Error loading policy:", error);
} finally {
isLoadingPolicy = false;
}
}
async function handleSavePolicy() {
isLoadingPolicy = true;
policyMessage = "";
try {
// Validate JSON first
const config = JSON.parse(policyEditJson);
// Publish kind 12345 event via websocket with auth
const event = {
kind: 12345,
content: policyEditJson,
tags: [],
created_at: Math.floor(Date.now() / 1000)
};
const result = await publishEventWithAuth(event, userSigner);
if (result.success) {
policyMessage = "Policy updated successfully";
policyMessageType = "success";
// Reload to get updated config
await handleLoadPolicy();
} else {
throw new Error(result.message || "Failed to publish policy update");
}
} catch (error) {
policyMessage = `Error updating policy: ${error.message}`;
policyMessageType = "error";
console.error("Error updating policy:", error);
} finally {
isLoadingPolicy = false;
}
}
function handleValidatePolicy() {
try {
JSON.parse(policyEditJson);
policyMessage = "Policy JSON is valid ✓";
policyMessageType = "success";
} catch (error) {
policyMessage = `Invalid JSON: ${error.message}`;
policyMessageType = "error";
}
}
async function handleRefreshFollows() {
isLoadingPolicy = true;
policyMessage = "";
try {
const response = await fetch("/api/policy/follows", {
credentials: "include"
});
if (!response.ok) {
throw new Error(`Failed to load follows: ${response.statusText}`);
}
const data = await response.json();
policyFollows = data.follows || [];
policyMessage = `Loaded ${policyFollows.length} follows`;
policyMessageType = "success";
} catch (error) {
policyMessage = `Error loading follows: ${error.message}`;
policyMessageType = "error";
console.error("Error loading follows:", error);
} finally {
isLoadingPolicy = false;
}
}
async function handleAddPolicyAdmin(event) {
// Get input value
const input = document.getElementById("new-admin-input");
const pubkey = input.value.trim();
if (!pubkey) {
policyMessage = "Please enter a pubkey";
policyMessageType = "error";
return;
}
try {
// Convert npub to hex if needed (implement or use nostr library)
// Add to policy_admins array in config
const config = JSON.parse(policyEditJson);
if (!config.policy_admins) {
config.policy_admins = [];
}
if (!config.policy_admins.includes(pubkey)) {
config.policy_admins.push(pubkey);
policyEditJson = JSON.stringify(config, null, 2);
input.value = "";
policyMessage = "Admin added (click Publish to save)";
policyMessageType = "info";
} else {
policyMessage = "Admin already in list";
policyMessageType = "warning";
}
} catch (error) {
policyMessage = `Error adding admin: ${error.message}`;
policyMessageType = "error";
}
}
async function handleRemovePolicyAdmin(event) {
const pubkey = event.detail;
try {
const config = JSON.parse(policyEditJson);
if (config.policy_admins) {
config.policy_admins = config.policy_admins.filter(p => p !== pubkey);
policyEditJson = JSON.stringify(config, null, 2);
policyMessage = "Admin removed (click Publish to save)";
policyMessageType = "info";
}
} catch (error) {
policyMessage = `Error removing admin: ${error.message}`;
policyMessageType = "error";
}
}
```
#### Step 3.3: Add API Endpoints
**File:** [app/server.go](app/server.go)
Add to route registration (around line 245):
```go
// Policy management endpoints (admin/owner only)
s.mux.HandleFunc("/api/policy/config", s.handlePolicyConfig)
s.mux.HandleFunc("/api/policy/follows", s.handlePolicyFollows)
```
Create new file: `app/handle-policy-api.go`
```go
package app
import (
"encoding/json"
"net/http"
"lol.mleku.dev/log"
"git.mleku.dev/mleku/nostr/encoders/hex"
)
// handlePolicyConfig returns the current policy configuration
// GET /api/policy/config
func (s *Server) handlePolicyConfig(w http.ResponseWriter, r *http.Request) {
// Verify authentication
session, err := s.getSession(r)
if err != nil || session == nil {
http.Error(w, "Unauthorized", http.StatusUnauthorized)
return
}
// Verify user is admin or owner
role := s.getUserRole(session.Pubkey)
if role != "admin" && role != "owner" {
http.Error(w, "Forbidden", http.StatusForbidden)
return
}
// Get current policy configuration from policy manager
// This requires adding a method to get the raw config
config := s.policyManager.GetConfig() // Need to implement this
w.Header().Set("Content-Type", "application/json")
json.NewEncoder(w).Encode(map[string]interface{}{
"config": config,
})
}
// handlePolicyFollows returns the policy admin follow lists
// GET /api/policy/follows
func (s *Server) handlePolicyFollows(w http.ResponseWriter, r *http.Request) {
// Verify authentication
session, err := s.getSession(r)
if err != nil || session == nil {
http.Error(w, "Unauthorized", http.StatusUnauthorized)
return
}
// Verify user is admin or owner
role := s.getUserRole(session.Pubkey)
if role != "admin" && role != "owner" {
http.Error(w, "Forbidden", http.StatusForbidden)
return
}
// Get policy follows from policy manager
follows := s.policyManager.GetPolicyFollows() // Need to implement this
// Convert to hex strings for JSON response
followsHex := make([]string, len(follows))
for i, f := range follows {
followsHex[i] = hex.Enc(f)
}
w.Header().Set("Content-Type", "application/json")
json.NewEncoder(w).Encode(map[string]interface{}{
"follows": followsHex,
})
}
```
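Once the routes are wired up, the endpoints can be exercised directly (assuming the session is carried in a cookie; the cookie name here is illustrative):
```bash
curl -s --cookie "session=..." http://localhost:3334/api/policy/config
curl -s --cookie "session=..." http://localhost:3334/api/policy/follows
```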
**Note:** Need to add getter methods to policy manager:
```go
// GetConfig returns the current policy configuration as a map
// File: pkg/policy/policy.go
func (p *P) GetConfig() map[string]interface{} {
// Marshal to JSON and back to get map representation
jsonBytes, _ := json.Marshal(p)
var config map[string]interface{}
json.Unmarshal(jsonBytes, &config)
return config
}
// GetPolicyFollows returns the current policy follow list
func (p *P) GetPolicyFollows() [][]byte {
p.policyFollowsMx.RLock()
defer p.policyFollowsMx.RUnlock()
follows := make([][]byte, len(p.policyFollows))
copy(follows, p.policyFollows)
return follows
}
```
## Testing Strategy
### Unit Tests
1. **Policy Reload Tests** (`pkg/policy/policy_test.go`; see the sketch after this list):
- Test `Reload()` with valid JSON
- Test `Reload()` with invalid JSON
- Test `Pause()` and `Resume()` functionality
- Test `SaveToFile()` atomic write
2. **Follow List Tests** (`pkg/policy/follows_test.go`):
- Test `FetchPolicyFollows()` with mock database
- Test `IsPolicyFollow()` with various inputs
- Test follow list caching and expiry
3. **Handler Tests** (`app/handle-policy-config_test.go`):
- Test kind 12345 handling with admin pubkey
- Test kind 12345 rejection from non-admin
- Test JSON validation errors
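For the invalid-JSON case, a minimal test sketch (assuming `Reload` takes raw JSON bytes and returns an error; the exact method signature is still this plan's assumption):
```go
func TestReloadInvalidJSON(t *testing.T) {
	var p P
	// Malformed JSON must be rejected without touching the active policy.
	err := p.Reload([]byte(`{"not": "closed"`))
	require.Error(t, err)
}
```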
### Integration Tests
1. **End-to-End Policy Update**:
- Publish kind 12345 event as admin
- Verify policy reloaded
- Verify new policy enforced
- Verify policy persisted to disk
2. **Follow Whitelist E2E**:
- Configure policy with follow whitelist enabled
- Add admin pubkey to policy_admins
- Publish kind 3 follow list for admin
- Verify follows can write/read per policy rules
3. **Web UI E2E**:
- Load policy via API
- Edit and publish via UI
- Verify changes applied
- Check follow list display
## Security Considerations
1. **Authorization**:
- Only admins/owners can publish kind 12345
- Only admins/owners can access policy API endpoints
- Policy events only visible to admins/owners in queries
2. **Validation**:
- Strict JSON schema validation before applying
- Rollback mechanism if policy fails to load
- Catch all parsing errors
3. **Audit Trail**:
- Log all policy update attempts
- Store kind 12345 events in database for audit
- Include who changed what and when
4. **Atomic Operations**:
- Pause-update-resume must be atomic
- File writes must be atomic (temp file + rename; see the sketch after this list)
- No partial updates on failure
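The temp-file-plus-rename pattern is only a few lines; a minimal sketch (function name and permissions are illustrative):
```go
func writeFileAtomic(path string, data []byte) error {
	tmp := path + ".tmp"
	if err := os.WriteFile(tmp, data, 0o600); err != nil {
		return err
	}
	// Rename is atomic on POSIX filesystems: readers see either the
	// old policy file or the complete new one, never a partial write.
	return os.Rename(tmp, path)
}
```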
## Migration Path
### Phase 1: Backend Foundation
1. Implement kind 12345 constant
2. Add policy reload methods
3. Add follow list support to policy
4. Test hot reload mechanism
### Phase 2: Event Handling
1. Add kind 12345 handler
2. Add API endpoints
3. Test event flow end-to-end
### Phase 3: Web UI
1. Create PolicyView component
2. Integrate into App.svelte
3. Add JSON editor
4. Test user workflows
### Phase 4: Testing & Documentation
1. Write comprehensive tests
2. Update CLAUDE.md
3. Create user documentation
4. Add examples to docs/
## Open Questions / Decisions Needed
1. **Policy Admin vs Relay Admin**:
- Should policy_admins be separate from ORLY_ADMINS?
- **Recommendation:** Yes, separate. Policy admins manage policy, relay admins manage relay.
2. **Follow List Refresh Frequency**:
- How often to refresh policy admin follows?
- **Recommendation:** 15 minutes (configurable via ORLY_POLICY_FOLLOW_REFRESH)
3. **Backward Compatibility**:
- What happens to relays without policy_admins field?
- **Recommendation:** Fall back to empty list, disabled by default
4. **Database Reference in Policy**:
- Policy needs database reference for follow queries
- **Recommendation:** Pass database to NewWithManager()
5. **Error Handling on Reload Failure**:
- Should failed reload keep old policy or disable policy?
- **Recommendation:** Keep old policy, log error, return error to client
## Success Criteria
1. ✅ Admin can publish kind 12345 event with new policy JSON
2. ✅ Relay receives event, validates sender, reloads policy without restart
3. ✅ Policy persisted to `~/.config/ORLY/policy.json`
4. ✅ Script runners paused during reload, resumed after
5. ✅ Policy admins can be configured in policy JSON
6. ✅ Policy admin follow lists fetched from database
7. ✅ Follow-based whitelisting enforced in policy rules
8. ✅ Web UI displays current policy configuration
9. ✅ Web UI allows editing and validation of policy JSON
10. ✅ Web UI shows policy admin follows
11. ✅ Only admins/owners can access policy management
12. ✅ All tests pass
13. ✅ Documentation updated
## Estimated Effort
- **Backend (Policy + Event Handling):** 8-12 hours
- Policy reload methods: 3-4 hours
- Follow list support: 3-4 hours
- Event handling: 2-3 hours
- Testing: 2-3 hours
- **API Endpoints:** 2-3 hours
- Route setup: 1 hour
- Handler implementation: 1-2 hours
- Testing: 1 hour
- **Web UI:** 6-8 hours
- PolicyView component: 3-4 hours
- App integration: 2-3 hours
- Styling and UX: 2-3 hours
- Testing: 2 hours
- **Documentation & Testing:** 4-6 hours
- Unit tests: 2-3 hours
- Integration tests: 2-3 hours
- Documentation: 2 hours
**Total:** 20-29 hours
## Dependencies
- No external dependencies required
- Uses existing ORLY infrastructure
- Compatible with current policy system
## Next Steps
1. Review and approve this plan
2. Clarify open questions/decisions
3. Begin implementation in phases
4. Iterative testing and refinement


@@ -1,319 +0,0 @@
# Badger Database Migration Guide
## Overview
This guide covers migrating your ORLY relay database when changing Badger configuration parameters, specifically for the VLogPercentile and table size optimizations.
## When Migration is Needed
Based on research of Badger v4 source code and documentation:
### Configuration Changes That DON'T Require Migration
The following options can be changed **without migration**:
- `BlockCacheSize` - Only affects in-memory cache
- `IndexCacheSize` - Only affects in-memory cache
- `NumCompactors` - Runtime setting
- `NumLevelZeroTables` - Affects compaction timing
- `NumMemtables` - Affects write buffering
- `DetectConflicts` - Runtime conflict detection
- `Compression` - New data uses new compression, old data remains as-is
- `BlockSize` - Explicitly stated in Badger source: "Changing BlockSize across DB runs will not break badger"
### Configuration Changes That BENEFIT from Migration
The following options apply to **new writes only** - existing data gradually adopts new settings through compaction:
- `VLogPercentile` - Affects where **new** values are stored (LSM vs vlog)
- `BaseTableSize` - **New** SST files use new size
- `MemTableSize` - Affects new write buffering
- `BaseLevelSize` - Affects new LSM tree structure
- `ValueLogFileSize` - New vlog files use new size
**Migration Impact:** Without migration, existing data remains in its current location (LSM tree or value log). The database will **gradually** adapt through normal compaction, which may take days or weeks depending on write volume.
## Migration Options
### Option 1: No Migration (Let Natural Compaction Handle It)
**Best for:** Low-traffic relays, testing environments
**Pros:**
- No downtime required
- No manual intervention
- Zero risk of data loss
**Cons:**
- Benefits take time to materialize (days/weeks)
- Old data layout persists until natural compaction
- Cache tuning benefits delayed
**Steps:**
1. Update Badger configuration in `pkg/database/database.go`
2. Restart ORLY relay
3. Monitor performance over several days
4. Optionally run manual GC: `db.RunValueLogGC(0.5)` periodically (see the sketch below)
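A sketch of that periodic GC loop, assuming direct access to the underlying `*badger.DB` handle (the hourly interval is illustrative):
```go
ticker := time.NewTicker(time.Hour)
defer ticker.Stop()
for range ticker.C {
	// Keep collecting until Badger reports nothing left to rewrite
	// (RunValueLogGC returns ErrNoRewrite in that case).
	for db.RunValueLogGC(0.5) == nil {
	}
}
```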
### Option 2: Manual Value Log Garbage Collection
**Best for:** Medium-traffic relays wanting faster optimization
**Pros:**
- Faster than natural compaction
- Still safe (no export/import)
- Can run while relay is online
**Cons:**
- Still gradual (hours instead of days)
- CPU/disk intensive during GC
- Partial benefit until GC completes
**Steps:**
1. Update Badger configuration
2. Restart ORLY relay
3. Monitor logs for compaction activity
4. Manually trigger GC if needed (future feature - not currently exposed)
### Option 3: Full Export/Import Migration (RECOMMENDED for Production)
**Best for:** Production relays, large databases, maximum performance
**Pros:**
- Immediate full benefit of new configuration
- Clean database structure
- Predictable migration time
- Reclaims all disk space
**Cons:**
- Requires relay downtime (several hours for large DBs)
- Requires 2x disk space temporarily
- More complex procedure
**Steps:** See detailed procedure below
## Full Migration Procedure (Option 3)
### Prerequisites
1. **Disk space:** At minimum 2.5x current database size
- 1x for current database
- 1x for JSONL export
- 0.5x for new database (will be smaller with compression)
2. **Time estimate:**
- Export: ~100-500 MB/s depending on disk speed
- Import: ~50-200 MB/s with indexing overhead
- Example: 10 GB database = ~10-30 minutes total
3. **Backup:** Ensure you have a recent backup before proceeding
### Step-by-Step Migration
#### 1. Prepare Migration Script
Use the provided `scripts/migrate-badger-config.sh` script (see below).
#### 2. Stop the Relay
```bash
# If using systemd
sudo systemctl stop orly
# If running manually
pkill orly
```
#### 3. Run Migration
```bash
cd ~/src/next.orly.dev
chmod +x scripts/migrate-badger-config.sh
./scripts/migrate-badger-config.sh
```
The script will:
- Export all events to JSONL format
- Move old database to backup location
- Create new database with updated configuration
- Import all events (rebuilds indexes automatically)
- Verify event count matches
#### 4. Verify Migration
```bash
# Check that events were migrated
echo "Old event count:"
cat ~/.local/share/ORLY-backup-*/migration.log | grep "exported.*events"
echo "New event count:"
cat ~/.local/share/ORLY/migration.log | grep "saved.*events"
```
#### 5. Restart Relay
```bash
# If using systemd
sudo systemctl start orly
sudo journalctl -u orly -f
# If running manually
./orly
```
#### 6. Monitor Performance
Watch for improvements in:
- Cache hit ratio (should be >85% with new config)
- Average query latency (should be <3ms for cached events)
- No "Block cache too small" warnings in logs
#### 7. Clean Up (After Verification)
```bash
# Once you confirm everything works (wait 24-48 hours)
rm -rf ~/.local/share/ORLY-backup-*
rm ~/.local/share/ORLY/events-export.jsonl
```
## Migration Script
The migration script is located at `scripts/migrate-badger-config.sh` and handles:
- Automatic export of all events to JSONL
- Safe backup of existing database
- Creation of new database with updated config
- Import and indexing of all events
- Verification of event counts
## Rollback Procedure
If migration fails or performance degrades:
```bash
# Stop the relay
sudo systemctl stop orly # or pkill orly
# Restore old database
rm -rf ~/.local/share/ORLY
mv ~/.local/share/ORLY-backup-* ~/.local/share/ORLY # adjust glob if multiple backups exist
# Restart with old configuration
sudo systemctl start orly
```
## Configuration Changes Summary
### Changes Applied in pkg/database/database.go
```go
// Cache sizes (can change without migration)
opts.BlockCacheSize = 16384 << 20 // 16384 MB (was 512 MB)
opts.IndexCacheSize = 4096 << 20  // 4096 MB (was 256 MB)
// Table sizes (benefit from migration)
opts.BaseTableSize = 8 << 20      // 8 MB (was 64 MB)
opts.MemTableSize = 16 << 20      // 16 MB (was 64 MB)
opts.ValueLogFileSize = 128 << 20 // 128 MB (was 256 MB)
// Inline event optimization (CRITICAL - benefits from migration)
opts.VLogPercentile = 0.99 // was 0.0 (default)
// LSM structure (benefits from migration)
opts.BaseLevelSize = 64 << 20 // 64 MB (was 10 MB default)
// Performance settings (no migration needed)
opts.DetectConflicts = false    // was true
opts.Compression = options.ZSTD // was options.None
opts.NumCompactors = 8          // was 4
opts.NumMemtables = 8           // was 5
```
## Expected Improvements
### Before Migration
- Cache hit ratio: 33%
- Average latency: 9.35ms
- P95 latency: 34.48ms
- Block cache warnings: Yes
### After Migration
- Cache hit ratio: 85-95%
- Average latency: <3ms
- P95 latency: <8ms
- Block cache warnings: No
- Inline events: 3-5x faster reads
## Troubleshooting
### Migration Script Fails
**Error:** "Not enough disk space"
- Free up space or use Option 1 (natural compaction)
- Ensure you have 2.5x current DB size available
**Error:** "Export failed"
- Check database is not corrupted
- Ensure ORLY is stopped
- Check file permissions
**Error:** "Import count mismatch"
- This is informational - some events may be duplicates
- Check logs for specific errors
- Verify core events are present via relay queries
### Performance Not Improved
**After migration, performance is the same:**
1. Verify configuration was actually applied:
```bash
# Check running relay logs for config output
sudo journalctl -u orly | grep -i "block.*cache\|vlog"
```
2. Wait for cache to warm up (2-5 minutes after start)
3. Check if workload changed (different query patterns)
4. Verify disk I/O is not bottleneck:
```bash
iostat -x 5
```
### High CPU During Migration
- This is normal - import rebuilds all indexes
- Migration is single-threaded by design (data consistency)
- Expect 30-60% CPU usage on one core
## Additional Notes
### Compression Impact
The `Compression = options.ZSTD` setting:
- Only compresses **new** data
- Old data remains uncompressed until rewritten by compaction
- Migration forces all data to be rewritten → immediate compression benefit
- Expect 2-3x compression ratio for event data
### VLogPercentile Behavior
With `VLogPercentile = 0.99`:
- **99% of values** stored in LSM tree (fast access)
- **1% of values** stored in value log (large events >100 KB)
- Threshold dynamically adjusted based on value size distribution
- Perfect for ORLY's inline event optimization
### Production Considerations
For production relays:
1. Schedule migration during low-traffic period
2. Notify users of maintenance window
3. Have rollback plan ready
4. Monitor closely for 24-48 hours after migration
5. Keep backup for at least 1 week
## References
- Badger v4 Documentation: https://pkg.go.dev/github.com/dgraph-io/badger/v4
- ORLY Database Package: `pkg/database/database.go`
- Export/Import Implementation: `pkg/database/{export,import}.go`
- Cache Optimization Analysis: `cmd/benchmark/CACHE_OPTIMIZATION_STRATEGY.md`
- Inline Event Optimization: `cmd/benchmark/INLINE_EVENT_OPTIMIZATION.md`


@@ -0,0 +1,254 @@
# Feature Request and Bug Report Protocol
This document describes how to submit effective bug reports and feature requests for ORLY relay. Following these guidelines helps maintainers understand and resolve issues quickly.
## Before Submitting
1. **Search existing issues** - Your issue may already be reported or discussed
2. **Check documentation** - Review `CLAUDE.md`, `docs/`, and `pkg/*/README.md` files
3. **Verify with latest version** - Ensure the issue exists in the current release
4. **Test with default configuration** - Rule out configuration-specific problems
## Bug Reports
### Required Information
**Title**: Concise summary of the problem
- Good: "Kind 3 events with 8000+ follows truncated on save"
- Bad: "Events not saving" or "Bug in database"
**Environment**:
```
ORLY version: (output of ./orly version)
OS: (e.g., Ubuntu 24.04, macOS 14.2)
Go version: (output of go version)
Database backend: (badger/neo4j/wasmdb)
```
**Configuration** (relevant settings only):
```bash
ORLY_DB_TYPE=badger
ORLY_POLICY_ENABLED=true
# Include any non-default settings
```
**Steps to Reproduce**:
1. Start relay with configuration X
2. Connect client and send event Y
3. Query for event with filter Z
4. Observe error/unexpected behavior
**Expected Behavior**: What should happen
**Actual Behavior**: What actually happens
**Logs**: Include relevant log output with `ORLY_LOG_LEVEL=debug` or `trace`
### Minimal Reproduction
The most effective bug reports include a minimal reproduction case:
```bash
# Example: Script that demonstrates the issue
export ORLY_LOG_LEVEL=debug
./orly &
sleep 2
# Send problematic event
echo '["EVENT", {...}]' | websocat ws://localhost:3334
# Show the failure
echo '["REQ", "test", {"kinds": [1]}]' | websocat ws://localhost:3334
```
Or provide a failing test case:
```go
func TestReproduceBug(t *testing.T) {
// Setup
db := setupTestDB(t)
// This should work but fails
event := createTestEvent(kind, content)
err := db.SaveEvent(ctx, event)
require.NoError(t, err)
// Query returns unexpected result
results, err := db.QueryEvents(ctx, filter)
assert.Len(t, results, 1) // Fails: got 0
}
```
## Feature Requests
### Required Information
**Title**: Clear description of the feature
- Good: "Add WebSocket compression support (permessage-deflate)"
- Bad: "Make it faster" or "New feature idea"
**Problem Statement**: What problem does this solve?
```
Currently, clients with high-latency connections experience slow sync times
because event data is transmitted uncompressed. A typical session transfers
50MB of JSON that could be reduced to ~10MB with compression.
```
**Proposed Solution**: How should it work?
```
Add optional permessage-deflate WebSocket extension support:
- New config: ORLY_WS_COMPRESSION=true
- Negotiate compression during WebSocket handshake
- Apply to messages over configurable threshold (default 1KB)
```
**Use Case**: Who benefits and how?
```
- Mobile clients on cellular connections
- Users syncing large follow lists
- Relays with bandwidth constraints
```
**Alternatives Considered** (optional):
```
- Application-level compression: Rejected because it requires client changes
- HTTP/2: Not applicable for WebSocket connections
```
### Implementation Notes (optional)
If you have implementation ideas:
```
Suggested approach:
1. Add compression config to app/config/config.go
2. Modify gorilla/websocket upgrader in app/handle-websocket.go
3. Add compression threshold check before WriteMessage()
Reference: gorilla/websocket has built-in permessage-deflate support
```
## What Makes Reports Effective
**Do**:
- Be specific and factual
- Include version numbers and exact error messages
- Provide reproducible steps
- Attach relevant logs (redact sensitive data)
- Link to related issues or discussions
- Respond to follow-up questions promptly
**Avoid**:
- Vague descriptions ("it doesn't work")
- Multiple unrelated issues in one report
- Assuming the cause without evidence
- Demanding immediate fixes
- Duplicating existing issues
## Issue Labels
When applicable, suggest appropriate labels:
| Label | Use When |
|-------|----------|
| `bug` | Something isn't working as documented |
| `enhancement` | New feature or improvement |
| `performance` | Speed or resource usage issue |
| `documentation` | Docs are missing or incorrect |
| `question` | Clarification needed (not a bug) |
| `good first issue` | Suitable for new contributors |
## Response Expectations
- **Acknowledgment**: Within a few days
- **Triage**: Issue labeled and prioritized
- **Resolution**: Depends on complexity and priority
Complex features may require discussion before implementation. Bug fixes for critical issues are prioritized.
## Following Up
If your issue hasn't received attention:
1. **Check issue status** - It may be labeled or assigned
2. **Add new information** - If you've discovered more details
3. **Politely bump** - A single follow-up comment after 2 weeks is appropriate
4. **Consider contributing** - PRs that fix bugs or implement features are welcome
## Contributing Fixes
If you want to fix a bug or implement a feature yourself:
1. Comment on the issue to avoid duplicate work
2. Follow the coding patterns in `CLAUDE.md`
3. Include tests for your changes
4. Keep PRs focused on a single issue
5. Reference the issue number in your PR
## Security Issues
**Do not report security vulnerabilities in public issues.**
For security-sensitive bugs:
- Contact maintainers directly
- Provide detailed reproduction steps privately
- Allow reasonable time for a fix before disclosure
## Examples
### Good Bug Report
````markdown
## WebSocket disconnects after 60 seconds of inactivity
**Environment**:
- ORLY v0.34.5
- Ubuntu 22.04
- Go 1.25.3
- Badger backend
**Steps to Reproduce**:
1. Connect to relay: `websocat ws://localhost:3334`
2. Send subscription: `["REQ", "test", {"kinds": [1], "limit": 1}]`
3. Wait 60 seconds without sending messages
4. Observe connection closed
**Expected**: Connection remains open (Nostr relays should maintain persistent connections)
**Actual**: Connection closed with code 1000 after exactly 60 seconds
**Logs** (ORLY_LOG_LEVEL=debug):
```
1764783029014485🔎 client timeout, closing connection /app/handle-websocket.go:142
```
**Possible Cause**: May be related to read deadline not being extended on subscription activity
````
### Good Feature Request
````markdown
## Add rate limiting per pubkey
**Problem**:
A single pubkey can flood the relay with events, consuming storage and
bandwidth. Currently there's no way to limit per-author submission rate.
**Proposed Solution**:
Add configurable rate limiting:
```bash
ORLY_RATE_LIMIT_EVENTS_PER_MINUTE=60
ORLY_RATE_LIMIT_BURST=10
```
When exceeded, return OK false with "rate-limited" message per NIP-20.
**Use Case**:
- Public relays protecting against spam
- Community relays with fair-use policies
- Paid relays enforcing subscription tiers
**Alternatives Considered**:
- IP-based limiting: Ineffective because users share IPs and use VPNs
- Global limiting: Punishes all users for one bad actor
````

644
CLAUDE.md

@@ -1,455 +1,279 @@
# CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
ORLY is a high-performance Nostr relay in Go with Badger/Neo4j/WasmDB backends, Svelte web UI, and purego-based secp256k1 crypto.
## Project Overview
ORLY is a high-performance Nostr relay written in Go, designed for personal relays, small communities, and business deployments. It emphasizes low latency, custom cryptography optimizations, and embedded database performance.
**Key Technologies:**
- **Language**: Go 1.25.3+
- **Database**: Badger v4 (embedded key-value store) or DGraph (distributed graph database)
- **Cryptography**: Custom p8k library using purego for secp256k1 operations (no CGO)
- **Web UI**: Svelte frontend embedded in the binary
- **WebSocket**: gorilla/websocket for Nostr protocol
- **Performance**: SIMD-accelerated SHA256 and hex encoding, query result caching with zstd compression
## Build Commands
### Basic Build
```bash
# Build relay binary only
go build -o orly
# Pure Go build (no CGO) - this is the standard approach
CGO_ENABLED=0 go build -o orly
# With web UI
./scripts/update-embedded-web.sh
```
### Build with Web UI
```bash
# Recommended: Use the provided script
./scripts/update-embedded-web.sh
# Manual build
cd app/web
bun install
bun run build
cd ../../
go build -o orly
```
### Development Mode (Web UI Hot Reload)
```bash
# Terminal 1: Start relay with dev proxy
export ORLY_WEB_DISABLE=true
export ORLY_WEB_DEV_PROXY_URL=http://localhost:5173
./orly &
# Terminal 2: Start dev server
cd app/web && bun run dev
```
## Testing
### Run All Tests
```bash
# Standard test run
./scripts/test.sh
# Or manually with purego setup
CGO_ENABLED=0 go test ./...
# Note: libsecp256k1.so must be available for crypto tests
export LD_LIBRARY_PATH="${LD_LIBRARY_PATH:+$LD_LIBRARY_PATH:}$(pwd)/pkg/crypto/p8k"
```
### Run Specific Package Tests
```bash
# Test database package
cd pkg/database && go test -v ./...
# Test protocol package
cd pkg/protocol && go test -v ./...
# Test with specific test function
go test -v -run TestName ./pkg/package
go test -v -run TestSaveEvent ./pkg/database
```
### Quick Reference
```bash
# Run
./orly          # Start relay
./orly identity # Show relay pubkey
./orly version  # Show version
# Web UI dev (hot reload)
ORLY_WEB_DISABLE=true ORLY_WEB_DEV_PROXY_URL=http://localhost:5173 ./orly &
cd app/web && bun run dev
# NIP-98 HTTP debugging (build: go build -o nurl ./cmd/nurl)
NOSTR_SECRET_KEY=nsec1... ./nurl https://relay.example.com/api/logs
NOSTR_SECRET_KEY=nsec1... ./nurl https://relay.example.com/api/logs/clear
./nurl help # Show usage
# Vanity npub generator (build: go build -o vainstr ./cmd/vainstr)
./vainstr mleku end   # Find npub ending with "mleku"
./vainstr orly begin  # Find npub starting with "orly" (after npub1)
./vainstr foo contain # Find npub containing "foo"
./vainstr --threads 4 xyz end # Use 4 threads
```
## Key Environment Variables
| Variable | Default | Description |
|----------|---------|-------------|
| `ORLY_PORT` | 3334 | Server port |
| `ORLY_LOG_LEVEL` | info | trace/debug/info/warn/error |
| `ORLY_DB_TYPE` | badger | badger/bbolt/neo4j/wasmdb |
| `ORLY_POLICY_ENABLED` | false | Enable policy system |
| `ORLY_ACL_MODE` | none | none/follows/managed |
| `ORLY_TLS_DOMAINS` | | Let's Encrypt domains |
| `ORLY_AUTH_TO_WRITE` | false | Require auth for writes |
**Neo4j Memory Tuning** (only when `ORLY_DB_TYPE=neo4j`):
| Variable | Default | Description |
|----------|---------|-------------|
| `ORLY_NEO4J_MAX_CONN_POOL` | 25 | Max connections (lower = less memory) |
| `ORLY_NEO4J_FETCH_SIZE` | 1000 | Records per batch (-1=all) |
| `ORLY_NEO4J_QUERY_RESULT_LIMIT` | 10000 | Max results per query (0=unlimited) |
See `./orly help` for all options. **All env vars MUST be defined in `app/config/config.go`**.
## Architecture
```
main.go → Entry point
app/
server.go → HTTP/WebSocket server
handle-*.go → Nostr message handlers (EVENT, REQ, AUTH, etc.)
config/ → Environment configuration (go-simpler.org/env)
web/ → Svelte frontend (embedded via go:embed)
pkg/
database/ → Database interface + Badger implementation
bbolt/ → BBolt backend (HDD-optimized, B+tree)
neo4j/ → Neo4j backend with WoT extensions
wasmdb/ → WebAssembly IndexedDB backend
protocol/ → Nostr protocol (ws/, auth/, publish/)
encoders/ → Optimized JSON encoding with buffer pools
policy/ → Event filtering/validation
acl/ → Access control (none/follows/managed)
cmd/
relay-tester/ → Protocol compliance testing
benchmark/ → Performance testing
```
## Critical Rules
### 1. Binary-Optimized Tag Storage (MUST READ)
The nostr library stores `e` and `p` tag values as 33-byte binary (not 64-char hex).
```go
// WRONG - may be binary garbage
pubkey := string(tag.T[1])
pt, err := hex.Dec(string(pTag.Value()))
// CORRECT - always use ValueHex()
pubkey := string(pTag.ValueHex()) // Returns lowercase hex
pt, err := hex.Dec(string(pTag.ValueHex()))
// For event.E fields (always binary)
pubkeyHex := hex.Enc(ev.Pubkey[:])
```
**Always normalize to lowercase hex** when storing in Neo4j to prevent duplicates.
### 2. Configuration System
- **ALL env vars in `app/config/config.go`** - never use `os.Getenv()` in packages
- Pass config via structs (e.g., `database.DatabaseConfig`)
- Use `ORLY_` prefix for all variables
### 3. Interface Design
- **Define interfaces in `pkg/interfaces/<name>/`** - prevents circular deps
- **Never use interface literals** in type assertions: `.(interface{ Method() })` is forbidden (example below)
- Existing: `acl/`, `neterr/`, `resultiter/`, `store/`, `publisher/`, `typer/`
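For illustration (the `Type()` method name and the shape of `typer.Typer` are assumptions here; the real interfaces live in `pkg/interfaces/`):
```go
// BAD: anonymous interface literal in a type assertion
if t, ok := v.(interface{ Type() string }); ok { _ = t }
// GOOD: assert against a named interface from pkg/interfaces/typer
if t, ok := v.(typer.Typer); ok { _ = t }
```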
### 4. Constants
Define named constants for repeated values. No magic numbers/strings.
```go
// BAD
if timeout > 30 {
// GOOD
const DefaultTimeoutSeconds = 30
if timeout > DefaultTimeoutSeconds {
```
### 5. Domain Encapsulation
- Use unexported fields for internal state
- Provide public API methods (`IsEnabled()`, `CheckPolicy()`); see the sketch below
- Never change unexported→exported to fix bugs
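A minimal sketch of the rule:
```go
// State stays unexported; callers get behavior, not fields.
type policy struct{ enabled bool }

func (p *policy) IsEnabled() bool { return p.enabled }
```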
### 6. Auth-Required Configuration (CAUTION)
**Be extremely careful when modifying auth-related settings in deployment configs.**
The `ORLY_AUTH_REQUIRED` and `ORLY_AUTH_TO_WRITE` settings control whether clients must authenticate via NIP-42 before interacting with the relay. Changing these on a production relay can:
- **Lock out all existing clients** if they don't support NIP-42 auth
- **Break automated systems** (bots, bridges, scrapers) that depend on anonymous access
- **Cause data sync issues** if upstream relays can't push events
Before enabling auth-required on any deployment:
1. Verify all expected clients support NIP-42
2. Ensure the relay identity key is properly configured
3. Test with a non-production instance first
## Database Backends
| Backend | Use Case | Build |
|---------|----------|-------|
| **Badger** (default) | Single-instance, SSD, high performance | Standard |
| **BBolt** | HDD-optimized, large archives, lower memory | `ORLY_DB_TYPE=bbolt` |
| **Neo4j** | Social graph, WoT queries | `ORLY_DB_TYPE=neo4j` |
| **WasmDB** | Browser/WebAssembly | `GOOS=js GOARCH=wasm` |
All implement `pkg/database.Database` interface.
### Scaling for Large Archives
For archives with millions of events, consider:
**Option 1: Tune Badger (SSD recommended)**
```bash
# Increase caches for larger working set (requires more RAM)
ORLY_DB_BLOCK_CACHE_MB=2048          # 2GB block cache
ORLY_DB_INDEX_CACHE_MB=1024          # 1GB index cache
ORLY_SERIAL_CACHE_PUBKEYS=500000     # 500k pubkeys
ORLY_SERIAL_CACHE_EVENT_IDS=2000000  # 2M event IDs
# Higher compression to reduce disk IO
ORLY_DB_ZSTD_LEVEL=9                 # Best compression ratio
# Enable storage GC with aggressive eviction
ORLY_GC_ENABLED=true
ORLY_GC_BATCH_SIZE=5000
ORLY_MAX_STORAGE_BYTES=107374182400  # 100GB cap
```
**Option 2: Use BBolt for HDD/Low-Memory Deployments**
```bash
ORLY_DB_TYPE=bbolt
# Tune for your HDD
ORLY_BBOLT_BATCH_MAX_EVENTS=10000 # Larger batches for HDD
ORLY_BBOLT_BATCH_MAX_MB=256       # 256MB batch buffer
ORLY_BBOLT_FLUSH_TIMEOUT_SEC=60   # Longer flush interval
ORLY_BBOLT_BLOOM_SIZE_MB=32       # Larger bloom filter
ORLY_BBOLT_MMAP_SIZE_MB=16384     # 16GB mmap (scales with DB size)
```
**Migration Between Backends**
```bash
# Migrate from Badger to BBolt
./orly migrate --from badger --to bbolt
# Migrate with custom target path
./orly migrate --from badger --to bbolt --target-path /mnt/hdd/orly-archive
```
**BBolt vs Badger Trade-offs:**
- BBolt: Lower memory, HDD-friendly, simpler (B+tree), slower random reads
- Badger: Higher memory, SSD-optimized (LSM), faster concurrent access
### Relay Protocol Testing
```bash
# Test relay protocol compliance
go run cmd/relay-tester/main.go -url ws://localhost:3334
# List available tests
go run cmd/relay-tester/main.go -list
# Run specific test
go run cmd/relay-tester/main.go -url ws://localhost:3334 -test "Basic Event"
```
### Benchmarking
```bash
# Run Go benchmarks in specific package
go test -bench=. -benchmem ./pkg/database
# Crypto benchmarks
cd pkg/crypto/p8k && make bench
# Run full relay benchmark suite
cd cmd/benchmark
go run main.go -data-dir /tmp/bench-db -events 10000 -workers 4
# Benchmark reports are saved to cmd/benchmark/reports/
# The benchmark tool tests event storage, queries, and subscription performance
```
## Running the Relay
### Basic Run
```bash
# Build and run
go build -o orly && ./orly
# With environment variables
export ORLY_LOG_LEVEL=debug
export ORLY_PORT=3334
./orly
```
### Get Relay Identity
```bash
# Print relay identity secret and pubkey
./orly identity
```
### Common Configuration
```bash
# TLS with Let's Encrypt
export ORLY_TLS_DOMAINS=relay.example.com
# Admin configuration
export ORLY_ADMINS=npub1...
# Follows ACL mode
export ORLY_ACL_MODE=follows
# Enable sprocket event processing
export ORLY_SPROCKET_ENABLED=true
# Enable policy system
export ORLY_POLICY_ENABLED=true
# Database backend selection (badger or dgraph)
export ORLY_DB_TYPE=badger
export ORLY_DGRAPH_URL=localhost:9080 # Only for dgraph backend
# Query cache configuration (improves REQ response times)
export ORLY_QUERY_CACHE_SIZE_MB=512 # Default: 512MB
export ORLY_QUERY_CACHE_MAX_AGE=5m  # Cache expiry time
# Database cache tuning (for Badger backend)
export ORLY_DB_BLOCK_CACHE_MB=512 # Block cache size
export ORLY_DB_INDEX_CACHE_MB=256 # Index cache size
```
## Logging (lol.mleku.dev)
```go
import "lol.mleku.dev/log"
import "lol.mleku.dev/chk"
log.T.F("trace: %s", msg) // T=Trace, D=Debug, I=Info, W=Warn, E=Error, F=Fatal
if chk.E(err) { return }  // Log + check error
```
## Development Workflows
**Add Nostr handler**: Create `app/handle-<type>.go` → add case in `handle-message.go`
**Add database index**: Define in `pkg/database/indexes/` → add migration → update `save-event.go` → add query builder
**Profiling**: `ORLY_PPROF=cpu ./orly` or `ORLY_PPROF_HTTP=true` for :6060
## Commit Format
```
Fix description in imperative mood (72 chars max)
- Bullet point details
- More details
Files modified:
- path/to/file.go: What changed
```
## Web UI Libraries
### nsec-crypto.js
Secure nsec encryption library at `app/web/src/nsec-crypto.js`. Uses Argon2id + AES-256-GCM.
```js
import { encryptNsec, decryptNsec, isValidNsec, deriveKey } from "./nsec-crypto.js";
// Encrypt nsec with password (~3 sec derivation)
const encrypted = await encryptNsec(nsec, password);
// Decrypt (validates bech32 checksum)
const decrypted = await decryptNsec(encrypted, password);
// Validate nsec format and checksum
if (isValidNsec(nsec)) { ... }
```
**Argon2id parameters**: 4 threads, 8 iterations, 256MB memory, 32-byte output.
**Storage format**: Base64(salt[32] + iv[12] + ciphertext). Validates bech32 on encrypt/decrypt.
## Documentation
| Topic | Location |
|-------|----------|
| Policy config | `docs/POLICY_CONFIGURATION_REFERENCE.md` |
| Policy guide | `docs/POLICY_USAGE_GUIDE.md` |
| Neo4j WoT schema | `pkg/neo4j/WOT_SPEC.md` |
| Neo4j schema changes | `pkg/neo4j/MODIFYING_SCHEMA.md` |
| Event kinds database | `app/web/src/eventKinds.js` |
| Nsec encryption | `app/web/src/nsec-crypto.js` |
## Code Architecture
### Repository Structure
**Root Entry Point:**
- `main.go` - Application entry point with signal handling, profiling setup, and database initialization
- `app/main.go` - Core relay server initialization and lifecycle management
**Core Packages:**
**`app/`** - HTTP/WebSocket server and handlers
- `server.go` - Main Server struct and HTTP request routing
- `handle-*.go` - Nostr protocol message handlers (EVENT, REQ, COUNT, CLOSE, AUTH, DELETE)
- `handle-websocket.go` - WebSocket connection lifecycle and frame handling
- `listener.go` - Network listener setup
- `sprocket.go` - External event processing script manager
- `publisher.go` - Event broadcast to active subscriptions
- `payment_processor.go` - NWC integration for subscription payments
- `blossom.go` - Blob storage service initialization
- `web.go` - Embedded web UI serving and dev proxy
- `config/` - Environment variable configuration using go-simpler.org/env
**`pkg/database/`** - Database abstraction layer with multiple backend support
- `interface.go` - Database interface definition for pluggable backends
- `factory.go` - Database backend selection (Badger or DGraph)
- `database.go` - Badger implementation with cache tuning and query cache
- `save-event.go` - Event storage with index updates
- `query-events.go` - Main query execution engine with filter normalization
- `query-for-*.go` - Specialized query builders for different filter patterns
- `indexes/` - Index key construction for efficient lookups
- `export.go` / `import.go` - Event export/import in JSONL format
- `subscriptions.go` - Active subscription tracking
- `identity.go` - Relay identity key management
- `migrations.go` - Database schema migration runner
**`pkg/protocol/`** - Nostr protocol implementation
- `ws/` - WebSocket message framing and parsing
- `auth/` - NIP-42 authentication challenge/response
- `publish/` - Event publisher for broadcasting to subscriptions
- `relayinfo/` - NIP-11 relay information document
- `directory/` - Distributed directory service (NIP-XX)
- `nwc/` - Nostr Wallet Connect client
- `blossom/` - Blob storage protocol
**`pkg/encoders/`** - Optimized Nostr data encoding/decoding
- `event/` - Event JSON marshaling/unmarshaling with buffer pooling
- `filter/` - Filter parsing and validation
- `bech32encoding/` - npub/nsec/note encoding
- `hex/` - SIMD-accelerated hex encoding using templexxx/xhex
- `timestamp/`, `kind/`, `tag/` - Specialized field encoders
**`pkg/crypto/`** - Cryptographic operations
- `p8k/` - Pure Go secp256k1 using purego (no CGO) to dynamically load libsecp256k1.so
- `secp.go` - Dynamic library loading and function binding
- `schnorr.go` - Schnorr signature operations (NIP-01)
- `ecdh.go` - ECDH for encrypted DMs (NIP-04, NIP-44)
- `recovery.go` - Public key recovery from signatures
- `libsecp256k1.so` - Pre-compiled secp256k1 library
- `keys/` - Key derivation and conversion utilities
- `sha256/` - SIMD-accelerated SHA256 using minio/sha256-simd
**`pkg/acl/`** - Access control systems
- `acl.go` - ACL registry and interface
- `follows.go` - Follows-based whitelist (admins + their follows can write)
- `managed.go` - NIP-86 managed relay with role-based permissions
- `none.go` - Open relay (no restrictions)
**`pkg/policy/`** - Event filtering and validation policies
- Policy configuration loaded from `~/.config/ORLY/policy.json`
- Per-kind size limits, age restrictions, custom scripts
- See `docs/POLICY_USAGE_GUIDE.md` for configuration examples
**`pkg/sync/`** - Distributed synchronization
- `cluster_manager.go` - Active replication between relay peers
- `relay_group_manager.go` - Relay group configuration (NIP-XX)
- `manager.go` - Distributed directory consensus
**`pkg/spider/`** - Event syncing from other relays
- `spider.go` - Spider manager for "follows" mode
- Fetches events from admin relays for followed pubkeys
**`pkg/utils/`** - Shared utilities
- `atomic/` - Extended atomic operations
- `interrupt/` - Signal handling and graceful shutdown
- `apputil/` - Application-level utilities
**Web UI (`app/web/`):**
- Svelte-based admin interface
- Embedded in binary via `go:embed`
- Features: event browser, sprocket management, user admin, settings
**Command-line Tools (`cmd/`):**
- `relay-tester/` - Nostr protocol compliance testing
- `benchmark/` - Multi-relay performance comparison
- `stresstest/` - Load testing tool
- `aggregator/` - Event aggregation utility
- `convert/` - Data format conversion
- `policytest/` - Policy validation testing
### Important Patterns
**Pure Go with Purego:**
- All builds use `CGO_ENABLED=0`
- The p8k crypto library uses `github.com/ebitengine/purego` to dynamically load `libsecp256k1.so` at runtime
- This avoids CGO complexity while maintaining C library performance
- `libsecp256k1.so` must be in `LD_LIBRARY_PATH` or same directory as binary
**Database Backend Selection:**
- Supports multiple backends via `ORLY_DB_TYPE` environment variable
- **Badger** (default): Embedded key-value store with custom indexing, ideal for single-instance deployments
- **DGraph**: Distributed graph database for larger, multi-node deployments
- Backend selected via factory pattern in `pkg/database/factory.go`
- All backends implement the same `Database` interface defined in `pkg/database/interface.go`
**Database Query Pattern:**
- Filters are analyzed in `get-indexes-from-filter.go` to determine optimal query strategy
- Filters are normalized before cache lookup, ensuring identical queries with different field ordering hit the cache
- Different query builders (`query-for-kinds.go`, `query-for-authors.go`, etc.) handle specific filter patterns
- All queries return event serials (uint64) for efficient joining
- Query results cached with zstd level 9 compression (configurable size and TTL)
- Final events fetched via `fetch-events-by-serials.go`
**WebSocket Message Flow:**
1. `handle-websocket.go` accepts connection and spawns goroutine
2. Incoming frames parsed by `pkg/protocol/ws/`
3. Routed to handlers: `handle-event.go`, `handle-req.go`, `handle-count.go`, etc.
4. Events stored via `database.SaveEvent()`
5. Active subscriptions notified via `publishers.Publish()`
**Configuration System:**
- Uses `go-simpler.org/env` for struct tags
- All config in `app/config/config.go` with `ORLY_` prefix
- Supports XDG directories via `github.com/adrg/xdg`
- Default data directory: `~/.local/share/ORLY`
**Event Publishing:**
- `pkg/protocol/publish/` manages publisher registry
- Each WebSocket connection registers its subscriptions
- `publishers.Publish(event)` broadcasts to matching subscribers
- Efficient filter matching without re-querying database
**Embedded Assets:**
- Web UI built to `app/web/dist/`
- Embedded via `//go:embed` directive in `app/web.go`
- Served at root path `/` with API at `/api/*`
## Development Workflow
### Making Changes to Web UI
1. Edit files in `app/web/src/`
2. For hot reload: `cd app/web && bun run dev` (with `ORLY_WEB_DISABLE=true` and `ORLY_WEB_DEV_PROXY_URL=http://localhost:5173`)
3. For production build: `./scripts/update-embedded-web.sh`
### Adding New Nostr Protocol Handlers
1. Create `app/handle-<message-type>.go`
2. Add case in `app/handle-message.go` message router
3. Implement handler following existing patterns
4. Add tests in `app/<handler>_test.go`
### Adding Database Indexes
1. Define index in `pkg/database/indexes/`
2. Add migration in `pkg/database/migrations.go`
3. Update `save-event.go` to populate index
4. Add query builder in `pkg/database/query-for-<index>.go`
5. Update `get-indexes-from-filter.go` to use new index
### Environment Variables for Development
```bash
# Verbose logging
export ORLY_LOG_LEVEL=trace
export ORLY_DB_LOG_LEVEL=debug
# Enable profiling
export ORLY_PPROF=cpu
export ORLY_PPROF_HTTP=true # Serves on :6060
# Health check endpoint
export ORLY_HEALTH_PORT=8080
```
### Profiling
```bash
# CPU profiling
export ORLY_PPROF=cpu
./orly
# Profile written on shutdown
# HTTP pprof server
export ORLY_PPROF_HTTP=true
./orly
# Visit http://localhost:6060/debug/pprof/
# Memory profiling
export ORLY_PPROF=memory
export ORLY_PPROF_PATH=/tmp/profiles
```
## Deployment
### Automated Deployment
```bash
# Deploy with systemd service
./scripts/deploy.sh
```
This script:
1. Installs Go 1.25.0 if needed
2. Builds relay with embedded web UI
3. Installs to `~/.local/bin/orly`
4. Creates systemd service
5. Sets capabilities for port 443 binding
### systemd Service Management
```bash
# Start/stop/restart
sudo systemctl start orly
sudo systemctl stop orly
sudo systemctl restart orly
# Enable on boot
sudo systemctl enable orly
# View logs
sudo journalctl -u orly -f
```
### Manual Deployment
```bash
# Build for production
./scripts/update-embedded-web.sh
# Or build all platforms
./scripts/build-all-platforms.sh
```
## Key Dependencies
- `github.com/dgraph-io/badger/v4` - Embedded database
- `github.com/gorilla/websocket` - WebSocket server
- `github.com/dgraph-io/badger/v4` - Badger DB (LSM, SSD-optimized)
- `go.etcd.io/bbolt` - BBolt DB (B+tree, HDD-optimized)
- `github.com/neo4j/neo4j-go-driver/v5` - Neo4j
- `github.com/gorilla/websocket` - WebSocket
- `github.com/ebitengine/purego` - CGO-free C loading
- `github.com/minio/sha256-simd` - SIMD SHA256
- `github.com/templexxx/xhex` - SIMD hex encoding
- `github.com/ebitengine/purego` - CGO-free C library loading
- `go-simpler.org/env` - Environment variable configuration
- `lol.mleku.dev` - Custom logging library
## Testing Guidelines
- Test files use `_test.go` suffix
- Use `github.com/stretchr/testify` for assertions
- Database tests require temporary database setup (see `pkg/database/testmain_test.go`)
- WebSocket tests should use `relay-tester` package
- Always clean up resources in tests (database, connections, goroutines)
## Performance Considerations
- **Query Cache**: 512MB query result cache (configurable via `ORLY_QUERY_CACHE_SIZE_MB`) with zstd level 9 compression reduces database load for repeated queries
- **Filter Normalization**: Filters are normalized before cache lookup, so identical queries with different field ordering produce cache hits
- **Database Caching**: Tune `ORLY_DB_BLOCK_CACHE_MB` and `ORLY_DB_INDEX_CACHE_MB` for workload (Badger backend only)
- **Query Optimization**: Add indexes for common filter patterns; multiple specialized query builders optimize different filter combinations
- **Batch Operations**: ID lookups and event fetching use batch operations via `GetSerialsByIds` and `FetchEventsBySerials`
- **Memory Pooling**: Use buffer pools in encoders (see `pkg/encoders/event/`)
- **SIMD Operations**: Leverage minio/sha256-simd and templexxx/xhex for cryptographic operations
- **Goroutine Management**: Each WebSocket connection runs in its own goroutine
## Recent Optimizations
ORLY has received several significant performance improvements in recent updates:
### Query Cache System (Latest)
- 512MB query result cache with zstd level 9 compression
- Filter normalization ensures cache hits regardless of filter field ordering
- Configurable size (`ORLY_QUERY_CACHE_SIZE_MB`) and TTL (`ORLY_QUERY_CACHE_MAX_AGE`)
- Dramatically reduces database load for repeated queries (common in Nostr clients)
- Cache key includes normalized filter representation for optimal hit rate
### Badger Cache Tuning
- Optimized block cache (default 512MB, tune via `ORLY_DB_BLOCK_CACHE_MB`)
- Optimized index cache (default 256MB, tune via `ORLY_DB_INDEX_CACHE_MB`)
- Resulted in 10-15% improvement in most benchmark scenarios
- See git history for cache tuning evolution
### Query Execution Improvements
- Multiple specialized query builders for different filter patterns:
- `query-for-kinds.go` - Kind-based queries
- `query-for-authors.go` - Author-based queries
- `query-for-tags.go` - Tag-based queries
- Combination builders for `kinds+authors`, `kinds+tags`, `kinds+authors+tags`
- Batch operations for ID lookups via `GetSerialsByIds`
- Serial-based event fetching for efficiency
- Filter analysis in `get-indexes-from-filter.go` selects optimal strategy
## Release Process
1. Update version in `pkg/version/version` file (e.g., v1.2.3)
2. Create and push tag:
```bash
git tag v1.2.3
git push origin v1.2.3
```
3. GitHub Actions workflow builds binaries for multiple platforms
4. Release created automatically with binaries and checksums

101
CONTRIBUTING.md Normal file

@@ -0,0 +1,101 @@
# Contributing to ORLY
Thank you for your interest in contributing to ORLY! This document outlines the process for reporting bugs, requesting features, and submitting contributions.
**Canonical Repository:** https://git.mleku.dev/mleku/next.orly.dev
## Issue Reporting Policy
### Before Opening an Issue
1. **Search existing issues** to avoid duplicates
2. **Check the documentation** in the repository
3. **Verify your version** - run `./orly version` and ensure you're on a recent release
4. **Review the CLAUDE.md** file for configuration guidance
### Bug Reports
Use the **Bug Report** template when reporting unexpected behavior. A good bug report includes:
- **Version information** - exact ORLY version from `./orly version`
- **Database backend** - Badger, Neo4j, or WasmDB
- **Clear description** - what happened vs. what you expected
- **Reproduction steps** - detailed steps to trigger the bug
- **Logs** - relevant log output (use `ORLY_LOG_LEVEL=debug` or `trace`)
- **Configuration** - relevant environment variables (redact secrets)
#### Log Levels for Debugging
```bash
export ORLY_LOG_LEVEL=trace # Most verbose
export ORLY_LOG_LEVEL=debug # Development debugging
export ORLY_LOG_LEVEL=info # Default
```
### Feature Requests
Use the **Feature Request** template when suggesting new functionality. A good feature request includes:
- **Problem statement** - what problem does this solve?
- **Proposed solution** - specific description of desired behavior
- **Alternatives considered** - workarounds you've tried
- **Related NIP** - if this implements a Nostr protocol specification
- **Impact assessment** - is this a minor tweak or major change?
#### Feature Categories
- **Protocol** - NIP implementations and Nostr protocol features
- **Database** - Storage backends, indexing, query optimization
- **Performance** - Caching, SIMD operations, memory optimization
- **Policy** - Access control, event filtering, validation
- **Web UI** - Admin interface improvements
- **Operations** - Deployment, monitoring, systemd integration
## Code Contributions
### Development Setup
```bash
# Clone the repository
git clone https://git.mleku.dev/mleku/next.orly.dev.git
cd next.orly.dev
# Build
CGO_ENABLED=0 go build -o orly
# Run tests
./scripts/test.sh
# Build with web UI
./scripts/update-embedded-web.sh
```
### Pull Request Guidelines
1. **One feature/fix per PR** - keep changes focused
2. **Write tests** - for new functionality and bug fixes
3. **Follow existing patterns** - match the code style of surrounding code
4. **Update documentation** - if your change affects configuration or behavior
5. **Test your changes** - run `./scripts/test.sh` before submitting
### Commit Message Format
```
Short summary (72 chars max, imperative mood)
- Bullet point describing change 1
- Bullet point describing change 2
Files modified:
- path/to/file1.go: Description of change
- path/to/file2.go: Description of change
```
## Communication
- **Issues:** https://git.mleku.dev/mleku/next.orly.dev/issues
- **Documentation:** https://git.mleku.dev/mleku/next.orly.dev
## License
By contributing to ORLY, you agree that your contributions will be licensed under the same license as the project.

816
DDD_ANALYSIS.md Normal file

@@ -0,0 +1,816 @@
# Domain-Driven Design Analysis: ORLY Relay
This document provides a comprehensive Domain-Driven Design (DDD) analysis of the ORLY Nostr relay codebase, evaluating its alignment with DDD principles and identifying opportunities for improvement.
---
## Key Recommendations Summary
| # | Recommendation | Impact | Effort | Status |
|---|----------------|--------|--------|--------|
| 1 | [Formalize Domain Events](#1-formalize-domain-events) | High | Medium | Pending |
| 2 | [Strengthen Aggregate Boundaries](#2-strengthen-aggregate-boundaries) | High | Medium | Partial |
| 3 | [Extract Application Services](#3-extract-application-services) | Medium | High | Pending |
| 4 | [Establish Ubiquitous Language Glossary](#4-establish-ubiquitous-language-glossary) | Medium | Low | Pending |
| 5 | [Add Domain-Specific Error Types](#5-add-domain-specific-error-types) | Medium | Low | Pending |
| 6 | [Enforce Value Object Immutability](#6-enforce-value-object-immutability) | Low | Low | **Addressed** |
| 7 | [Document Context Map](#7-document-context-map) | Medium | Low | **This Document** |
---
## Table of Contents
1. [Executive Summary](#executive-summary)
2. [Strategic Design Analysis](#strategic-design-analysis)
- [Bounded Contexts](#bounded-contexts)
- [Context Map](#context-map)
- [Subdomain Classification](#subdomain-classification)
3. [Tactical Design Analysis](#tactical-design-analysis)
- [Entities](#entities)
- [Value Objects](#value-objects)
- [Aggregates](#aggregates)
- [Repositories](#repositories)
- [Domain Services](#domain-services)
- [Domain Events](#domain-events)
4. [Anti-Patterns Identified](#anti-patterns-identified)
5. [Detailed Recommendations](#detailed-recommendations)
6. [Implementation Checklist](#implementation-checklist)
7. [Appendix: File References](#appendix-file-references)
---
## Executive Summary
ORLY demonstrates **mature DDD adoption** for a system of its complexity. The codebase exhibits clear bounded context separation, proper repository patterns with multiple backend implementations, and well-designed interface segregation that prevents circular dependencies.
**Strengths:**
- Clear separation between `app/` (application layer) and `pkg/` (domain/infrastructure)
- Repository pattern with three interchangeable backends (Badger, Neo4j, WasmDB)
- Interface-based ACL system with pluggable implementations (None, Follows, Managed)
- Per-connection aggregate isolation in `Listener`
- Strong use of Go interfaces for dependency inversion
- **New:** Immutable `EventRef` value object alongside legacy `IdPkTs`
- **New:** Comprehensive protocol extensions (Blossom, Graph Queries, NIP-43, NIP-86)
- **New:** Distributed sync with cluster replication support
**Areas for Improvement:**
- Domain events are implicit rather than explicit types
- Some aggregates expose mutable state via public fields
- Handler methods mix application orchestration with domain logic
- Ubiquitous language is partially documented
**Overall DDD Maturity Score: 7.5/10** (improved from 7/10)
---
## Strategic Design Analysis
### Bounded Contexts
ORLY organizes code into distinct bounded contexts, each with its own model and language:
#### 1. Event Storage Context (`pkg/database/`)
- **Responsibility:** Persistent storage of Nostr events with indexing and querying
- **Key Abstractions:** `Database` interface (109 lines), `Subscription`, `Payment`, `NIP43Membership`
- **Implementations:** Badger (embedded), Neo4j (graph), WasmDB (browser)
- **File:** `pkg/database/interface.go:17-109`
#### 2. Access Control Context (`pkg/acl/`)
- **Responsibility:** Authorization decisions for read/write operations
- **Key Abstractions:** `I` interface, `Registry`, access levels (none/read/write/admin/owner)
- **Implementations:** `None`, `Follows`, `Managed`
- **Files:** `pkg/acl/acl.go`, `pkg/interfaces/acl/acl.go:21-40`
#### 3. Event Policy Context (`pkg/policy/`)
- **Responsibility:** Event filtering, validation, rate limiting rules, follows-based whitelisting
- **Key Abstractions:** `Rule`, `Kinds`, `P` (PolicyManager)
- **Invariants:** Whitelist/blacklist precedence, size limits, tag requirements, protected events
- **File:** `pkg/policy/policy.go` (extensive, ~1000 lines)
#### 4. Connection Management Context (`app/`)
- **Responsibility:** WebSocket lifecycle, message routing, authentication, flow control
- **Key Abstractions:** `Listener`, `Server`, message handlers, `messageRequest`
- **File:** `app/listener.go:24-52`
#### 5. Protocol Extensions Context (`pkg/protocol/`)
- **Responsibility:** NIP implementations beyond core protocol
- **Subcontexts:**
- **NIP-43 Membership** (`pkg/protocol/nip43/`): Invite-based access control
- **Graph Queries** (`pkg/protocol/graph/`): BFS traversal for follows/followers/threads
- **NWC Payments** (`pkg/protocol/nwc/`): Nostr Wallet Connect integration
- **Blossom** (`pkg/protocol/blossom/`): BUD protocol definitions
- **Directory** (`pkg/protocol/directory/`): Relay directory client
#### 6. Blob Storage Context (`pkg/blossom/`)
- **Responsibility:** Binary blob storage following BUD specifications
- **Key Abstractions:** `Server`, `Storage`, `Blob`, `BlobMeta`
- **Invariants:** SHA-256 hash integrity, MIME type validation, quota enforcement
- **Files:** `pkg/blossom/server.go`, `pkg/blossom/storage.go`
#### 7. Rate Limiting Context (`pkg/ratelimit/`)
- **Responsibility:** Adaptive throttling based on system load using PID controller
- **Key Abstractions:** `Limiter`, `Config`, `OperationType` (Read/Write)
- **Integration:** Memory pressure from database backends via `loadmonitor` interface (sketched below)
- **File:** `pkg/ratelimit/limiter.go`
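A sketch of that seam (the real interface in `pkg/ratelimit` may differ in name and shape):
```go
type LoadMonitor interface {
	// MemoryPressure reports backend memory pressure in [0, 1];
	// the PID controller uses it to throttle reads and writes.
	MemoryPressure() float64
}
```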
#### 8. Distributed Sync Context (`pkg/sync/`)
- **Responsibility:** Federation and replication between relay peers
- **Key Abstractions:** `Manager`, `ClusterManager`, `RelayGroupManager`, `NIP11Cache`
- **Integration:** Serial-number based sync protocol, NIP-11 peer discovery
- **Files:** `pkg/sync/manager.go`, `pkg/sync/cluster.go`, `pkg/sync/relaygroup.go`
#### 9. Spider Context (`pkg/spider/`)
- **Responsibility:** Syncing events from admin relays for followed pubkeys
- **Key Abstractions:** `Spider`, `RelayConnection`, `DirectorySpider`
- **Integration:** Batch subscriptions, rate limit backoff, blackout periods
- **File:** `pkg/spider/spider.go`
### Context Map
```
┌─────────────────────────────────────────────────────────────────────────────┐
│                        Connection Management (app/)                         │
│  ┌─────────────┐    ┌─────────────┐    ┌─────────────┐    ┌─────────────┐   │
│  │   Server    │───▶│  Listener   │───▶│  Handlers   │◀──▶│ Publishers  │   │
│  └─────────────┘    └─────────────┘    └─────────────┘    └─────────────┘   │
└────────┬────────────────────┬────────────────────┬──────────────────────────┘
         │                    │                    │
         │ [Conformist]       │ [Customer-Supplier]│ [Customer-Supplier]
         ▼                    ▼                    ▼
┌────────────────┐   ┌────────────────┐   ┌────────────────┐
│ Access Control │   │ Event Storage  │   │  Event Policy  │
│   (pkg/acl/)   │   │(pkg/database/) │   │ (pkg/policy/)  │
│                │   │                │   │                │
│  Registry ◀────┼───┼───Conformist───┼───┼──▶ Manager     │
└────────────────┘   └────────────────┘   └────────────────┘
         │                    │                    │
         │                    │ [Shared Kernel]    │
         │                    ▼                    │
         │           ┌────────────────┐            │
         │           │  Event Entity  │            │
         │           │(git.mleku.dev/ │◀───────────┘
         │           │  mleku/nostr)  │
         │           └────────────────┘
         │                    │
         │ [Anti-Corruption]  │ [Customer-Supplier]
         ▼                    ▼
┌────────────────┐   ┌────────────────┐   ┌────────────────┐
│ Rate Limiting  │   │    Protocol    │   │  Blob Storage  │
│ (pkg/ratelimit)│   │   Extensions   │   │ (pkg/blossom)  │
│                │   │(pkg/protocol/) │   │                │
└────────────────┘   └────────────────┘   └────────────────┘
         ┌────────────────────┼────────────────────┐
         ▼                    ▼                    ▼
┌────────────────┐   ┌────────────────┐   ┌────────────────┐
│  Distributed   │   │     Spider     │   │ Graph Queries  │
│      Sync      │   │  (pkg/spider)  │   │(pkg/protocol/  │
│  (pkg/sync/)   │   │                │   │    graph/)     │
└────────────────┘   └────────────────┘   └────────────────┘
```
**Integration Patterns Identified:**
| Upstream | Downstream | Pattern | Notes |
|----------|------------|---------|-------|
| nostr library | All contexts | Shared Kernel | Event, Filter, Tag types |
| Database | ACL, Policy, Blossom | Customer-Supplier | Query for follow lists, permissions, blob storage |
| Policy | Handlers, Sync | Conformist | All respect policy decisions |
| ACL | Handlers, Blossom | Conformist | Handlers/Blossom respect access levels |
| Rate Limit | Database | Anti-Corruption | Load monitor abstraction |
| Sync | Database, Policy | Customer-Supplier | Serial-based event replication |
### Subdomain Classification
| Subdomain | Type | Justification |
|-----------|------|---------------|
| Event Storage | **Core** | Central to relay's value proposition |
| Access Control | **Core** | Key differentiator (WoT, follows-based, managed) |
| Event Policy | **Core** | Enables complex filtering rules |
| Graph Queries | **Core** | Unique social graph traversal capabilities |
| NIP-43 Membership | **Core** | Unique invite-based access model |
| Blob Storage (Blossom) | **Core** | Media hosting differentiator |
| Connection Management | **Supporting** | Standard WebSocket infrastructure |
| Rate Limiting | **Supporting** | Operational concern with PID controller |
| Distributed Sync | **Supporting** | Infrastructure for federation |
| Spider | **Supporting** | Data aggregation from external relays |
---
## Tactical Design Analysis
### Entities
Entities are objects with identity that persists across state changes.
#### Listener (Connection Entity)
```go
// app/listener.go:24-52
type Listener struct {
conn *websocket.Conn // Identity: connection handle
challenge atomicutils.Bytes // Auth challenge state
authedPubkey atomicutils.Bytes // Authenticated identity
subscriptions map[string]context.CancelFunc
messageQueue chan messageRequest // Async message processing
droppedMessages atomic.Int64 // Flow control counter
// ... more fields
}
```
- **Identity:** WebSocket connection pointer
- **Lifecycle:** Created on connect, destroyed on disconnect
- **Invariants:** Only one authenticated pubkey per connection; AUTH processed synchronously
#### InviteCode (NIP-43 Entity)
```go
// pkg/protocol/nip43/types.go:26-31
type InviteCode struct {
Code string // Identity: unique code
ExpiresAt time.Time
UsedBy []byte // Tracks consumption
CreatedAt time.Time
}
```
- **Identity:** Unique code string
- **Lifecycle:** Created → Valid → Used/Expired
- **Invariants:** Cannot be reused once consumed
#### Subscription (Payment Entity)
```go
// pkg/database/interface.go (implied by methods)
// GetSubscription, ExtendSubscription, RecordPayment
```
- **Identity:** Pubkey
- **Lifecycle:** Trial → Active → Expired
- **Invariants:** Can only extend if not expired
#### Blob (Blossom Entity)
```go
// pkg/blossom/blob.go (implied)
type BlobMeta struct {
SHA256 string // Identity: content-addressable
Size int64
Type string // MIME type
Uploaded time.Time
Owner []byte // Uploader pubkey
}
```
- **Identity:** SHA-256 hash
- **Lifecycle:** Uploaded → Active → Deleted
- **Invariants:** Hash must match content; owner can delete
### Value Objects
Value objects are immutable and defined by their attributes, not identity.
#### EventRef (Immutable Event Reference) - **NEW**
```go
// pkg/interfaces/store/store_interface.go:99-107
type EventRef struct {
id ntypes.EventID // 32 bytes
pub ntypes.Pubkey // 32 bytes
ts int64 // 8 bytes
ser uint64 // 8 bytes
}
```
- **Equality:** By all fields (fixed-size arrays)
- **Immutability:** Unexported fields, accessor methods return copies
- **Size:** 80 bytes, cache-line friendly, stack-allocated
#### IdPkTs (Legacy Event Reference)
```go
// pkg/interfaces/store/store_interface.go:67-72
type IdPkTs struct {
Id []byte // Event ID
Pub []byte // Pubkey
Ts int64 // Timestamp
Ser uint64 // Serial number
}
```
- **Equality:** By all fields
- **Issue:** Mutable slices (use `ToEventRef()` for immutable version)
- **Migration:** Has `ToEventRef()` and accessors `IDFixed()`, `PubFixed()`
#### Kinds (Policy Specification)
```go
// pkg/policy/policy.go:58-63
type Kinds struct {
Whitelist []int `json:"whitelist,omitempty"`
Blacklist []int `json:"blacklist,omitempty"`
}
```
- **Equality:** By whitelist/blacklist contents
- **Semantics:** Whitelist takes precedence over blacklist
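A minimal sketch of that precedence rule as a predicate (the `Allowed` method is hypothetical; the real evaluation lives in `pkg/policy/policy.go`, and this sketch uses the standard `slices` package):
```go
// Allowed illustrates the documented semantics: a non-empty whitelist is
// authoritative, and the blacklist is consulted only when no whitelist is set.
func (k *Kinds) Allowed(kind int) bool {
	if len(k.Whitelist) > 0 {
		return slices.Contains(k.Whitelist, kind)
	}
	return !slices.Contains(k.Blacklist, kind)
}
```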
#### Rule (Policy Rule)
```go
// pkg/policy/policy.go:75-180
type Rule struct {
Description string
WriteAllow []string
WriteDeny []string
ReadFollowsWhitelist []string
WriteFollowsWhitelist []string
MaxExpiryDuration string
SizeLimit *int64
ContentLimit *int64
Privileged bool
ProtectedRequired bool
ReadAllowPermissive bool
WriteAllowPermissive bool
// ... binary caches
}
```
- **Complexity:** 25+ fields, decomposition candidate
- **Binary caches:** Performance optimization for hex→binary conversion
#### WriteRequest (Message Value)
```go
// pkg/protocol/publish/types.go
type WriteRequest struct {
Data []byte
MsgType int
IsControl bool
IsPing bool
Deadline time.Time
}
```
### Aggregates
Aggregates are clusters of entities/value objects with consistency boundaries.
#### Listener Aggregate
- **Root:** `Listener`
- **Members:** Subscriptions map, auth state, write channel, message queue
- **Boundary:** Per-connection isolation
- **Invariants:**
- Subscriptions must exist before receiving matching events
- AUTH must complete before other messages check authentication
- Message processing uses RWMutex for pause/resume during policy updates
```go
// app/listener.go:226-249 - Aggregate consistency enforcement
l.authProcessing.Lock()
if isAuthMessage {
// Process AUTH synchronously while holding lock
l.HandleMessage(req.data, req.remote)
l.authProcessing.Unlock()
} else {
l.authProcessing.Unlock()
// Process concurrently
}
```
#### Event Aggregate (External)
- **Root:** `event.E` (from nostr library)
- **Members:** Tags, signature, content
- **Invariants:**
- ID must match computed hash
- Signature must be valid
- Timestamp must be within bounds (configurable per-kind)
- **Validation:** `app/handle-event.go`
#### InviteCode Aggregate
- **Root:** `InviteCode`
- **Members:** Code, expiry, usage tracking
- **Invariants:**
- Code uniqueness
- Single-use enforcement
- Expiry validation
#### Blossom Blob Aggregate
- **Root:** `BlobMeta`
- **Members:** Content data, metadata, owner
- **Invariants:**
- SHA-256 integrity
- Size limits
- MIME type restrictions
- Owner-only deletion
### Repositories
The Repository pattern abstracts persistence for aggregate roots.
#### Database Interface (Primary Repository)
```go
// pkg/database/interface.go:17-109
type Database interface {
// Core lifecycle
Path() string
Init(path string) error
Sync() error
Close() error
Ready() <-chan struct{}
// Event persistence (30+ methods)
SaveEvent(c context.Context, ev *event.E) (exists bool, err error)
QueryEvents(c context.Context, f *filter.F) (evs event.S, err error)
DeleteEvent(c context.Context, eid []byte) error
// Subscription management
GetSubscription(pubkey []byte) (*Subscription, error)
ExtendSubscription(pubkey []byte, days int) error
// NIP-43 membership
AddNIP43Member(pubkey []byte, inviteCode string) error
IsNIP43Member(pubkey []byte) (isMember bool, err error)
// Blossom integration
ExtendBlossomSubscription(pubkey []byte, tier string, storageMB int64, daysExtended int) error
GetBlossomStorageQuota(pubkey []byte) (quotaMB int64, err error)
// Query cache
GetCachedJSON(f *filter.F) ([][]byte, bool)
CacheMarshaledJSON(f *filter.F, marshaledJSON [][]byte)
}
```
**Repository Implementations:**
1. **Badger** (`pkg/database/database.go`): Embedded key-value store
2. **Neo4j** (`pkg/neo4j/`): Graph database for social queries
3. **WasmDB** (`pkg/wasmdb/`): Browser IndexedDB for WASM builds
**Interface Segregation:**
```go
// pkg/interfaces/store/store_interface.go:21-38
type I interface {
Pather
io.Closer
Wiper
Querier // QueryForIds
Querent // QueryEvents
Deleter // DeleteEvent
Saver // SaveEvent
Importer
Exporter
Syncer
LogLeveler
EventIdSerialer
Initer
SerialByIder
}
```
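The payoff of this composition is that consumers can depend on exactly the capability they need. A minimal sketch, assuming `Querent` carries the same `QueryEvents` signature quoted from the `Database` interface above and that `event.S` is a slice (the helper itself is hypothetical):
```go
// countMatching only reads events, so it accepts the narrow store.Querent
// rather than the full store.I, keeping its dependency surface minimal.
func countMatching(ctx context.Context, q store.Querent, f *filter.F) (int, error) {
	evs, err := q.QueryEvents(ctx, f)
	if err != nil {
		return 0, err
	}
	return len(evs), nil
}
```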
### Domain Services
Domain services encapsulate logic that doesn't belong to any single entity.
#### ACL Registry (Access Decision Service)
```go
// pkg/acl/acl.go:40-48
func (s *S) GetAccessLevel(pub []byte, address string) (level string)
func (s *S) CheckPolicy(ev *event.E) (allowed bool, err error)
func (s *S) AddFollow(pub []byte)
```
- Delegates to active ACL implementation
- Stateless decision based on pubkey and IP
- Optional `PolicyChecker` interface for custom validation
#### Policy Manager (Event Validation Service)
```go
// pkg/policy/policy.go (P type)
// CheckPolicy evaluates rule chains, scripts, whitelist/blacklist logic
// Supports per-kind rules with follows-based whitelisting
```
- Complex rule evaluation logic
- Script execution for custom validation
- Binary cache optimization for pubkey comparisons
#### InviteManager (Invite Lifecycle Service)
```go
// pkg/protocol/nip43/types.go:34-109
type InviteManager struct {
codes map[string]*InviteCode
expiry time.Duration
}
func (im *InviteManager) GenerateCode() (code string, err error)
func (im *InviteManager) ValidateAndConsume(code string, pubkey []byte) (bool, string)
```
- Manages invite code lifecycle
- Thread-safe with mutex protection
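A hedged reconstruction of the consume path, assuming a mutex field on `InviteManager` (the actual error strings and locking details may differ):
```go
// ValidateAndConsume enforces the single-use invariant: the lock guarantees
// that two concurrent joins cannot both consume the same code.
func (im *InviteManager) ValidateAndConsume(code string, pubkey []byte) (bool, string) {
	im.mu.Lock()
	defer im.mu.Unlock()
	ic, ok := im.codes[code]
	if !ok {
		return false, "invite code not found"
	}
	if time.Now().After(ic.ExpiresAt) {
		return false, "invite code expired"
	}
	if len(ic.UsedBy) > 0 {
		return false, "invite code already used"
	}
	ic.UsedBy = pubkey // consumption is recorded before the lock is released
	return true, ""
}
```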
#### Graph Executor (Query Execution Service)
```go
// pkg/protocol/graph/executor.go:56-60
type Executor struct {
db GraphDatabase
relaySigner signer.I
relayPubkey []byte
}
func (e *Executor) Execute(q *Query) (*event.E, error)
```
- BFS traversal for follows/followers/threads
- Generates relay-signed ephemeral response events
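The traversal itself is ordinary breadth-first search over follow edges with a depth bound. A self-contained sketch of that shape (`followsOf` is a stand-in for whatever adjacency lookup the `GraphDatabase` interface exposes):
```go
// bfsFollows walks follow edges outward from a root pubkey up to maxDepth,
// returning each reachable pubkey exactly once.
func bfsFollows(root string, maxDepth int, followsOf func(string) []string) []string {
	visited := map[string]bool{root: true}
	frontier := []string{root}
	var out []string
	for depth := 0; depth < maxDepth && len(frontier) > 0; depth++ {
		var next []string
		for _, pk := range frontier {
			for _, followed := range followsOf(pk) {
				if !visited[followed] {
					visited[followed] = true
					out = append(out, followed)
					next = append(next, followed)
				}
			}
		}
		frontier = next
	}
	return out
}
```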
#### Rate Limiter (Throttling Service)
```go
// pkg/ratelimit/limiter.go
type Limiter struct { ... }
func (l *Limiter) Wait(ctx context.Context, op OperationType) error
```
- PID controller-based adaptive throttling
- Separate setpoints for read/write operations
- Emergency mode with hysteresis
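In outline, a PID throttle turns the gap between a load setpoint and the measured load into a control output. A self-contained sketch of that control step (gains, units, and state handling are illustrative, not the relay's actual tuning):
```go
// pid accumulates integral and derivative state across control ticks; a
// positive output means "throttle harder" (longer waits for Read/Write ops).
type pid struct {
	kp, ki, kd float64
	integral   float64
	prevErr    float64
}

func (p *pid) step(setpoint, measured, dt float64) float64 {
	err := measured - setpoint // positive when load exceeds the setpoint
	p.integral += err * dt
	deriv := (err - p.prevErr) / dt
	p.prevErr = err
	return p.kp*err + p.ki*p.integral + p.kd*deriv
}
```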
### Domain Events
**Current State:** Domain events are implicit in message flow, not explicit types.
**Implicit Events Identified:**
| Event | Trigger | Effect |
|-------|---------|--------|
| EventPublished | `SaveEvent()` success | `publishers.Deliver()` |
| EventDeleted | Kind 5 processing | Cascade delete targets |
| UserAuthenticated | AUTH envelope accepted | `authedPubkey` set |
| SubscriptionCreated | REQ envelope | Query + stream setup |
| MembershipAdded | NIP-43 join request | ACL update, kind 8000 event |
| MembershipRemoved | NIP-43 leave request | ACL update, kind 8001 event |
| PolicyUpdated | Policy config event | `messagePauseMutex.Lock()` |
| BlobUploaded | Blossom PUT success | Quota updated |
| BlobDeleted | Blossom DELETE | Quota released |
---
## Anti-Patterns Identified
### 1. Large Handler Methods (Partial Anemic Domain Model)
**Location:** `app/handle-event.go` (600+ lines)
**Issue:** The event handling contains:
- Input validation (lowercase hex, JSON structure)
- Policy checking
- ACL verification
- Signature verification
- Persistence
- Event delivery
- Special case handling (delete, ephemeral, NIP-43, NIP-86)
**Impact:** Difficult to test, maintain, and understand. Business rules are embedded in orchestration code.
### 2. Mutable Value Object Fields (Partially Addressed)
**Location:** `pkg/interfaces/store/store_interface.go:67-72`
```go
type IdPkTs struct {
Id []byte // Mutable slice
Pub []byte // Mutable slice
Ts int64
Ser uint64
}
```
**Mitigation:** New `EventRef` type with unexported fields provides immutable alternative.
Use `ToEventRef()` method for safe conversion.
### 3. Global Singleton Registry
**Location:** `pkg/acl/acl.go:10`
```go
var Registry = &S{}
```
**Impact:** Global state makes testing difficult and hides dependencies. Should be injected.
### 4. Missing Domain Events
**Impact:** Side effects are coupled to primary operations. Adding new behaviors (logging, analytics, notifications) requires modifying core handlers.
### 5. Oversized Rule Value Object
**Location:** `pkg/policy/policy.go:75-180`
The `Rule` struct has 25+ fields with binary caches, suggesting decomposition into:
- `AccessRule` (allow/deny lists, follows whitelists)
- `SizeRule` (limits)
- `TimeRule` (expiry, age)
- `ValidationRule` (tags, regex, protected)
---
## Detailed Recommendations
### 1. Formalize Domain Events
**Problem:** Side effects are tightly coupled to primary operations.
**Solution:** Create explicit domain event types and a simple event dispatcher.
```go
// pkg/domain/events/events.go
package events
type DomainEvent interface {
OccurredAt() time.Time
AggregateID() []byte
}
type EventPublished struct {
EventID []byte
Pubkey []byte
Kind int
Timestamp time.Time
}
type MembershipGranted struct {
Pubkey []byte
InviteCode string
Timestamp time.Time
}
type BlobUploaded struct {
SHA256 string
Owner []byte
Size int64
Timestamp time.Time
}
```
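A dispatcher to pair with these types can stay very small. A minimal sketch assuming synchronous in-process delivery (the registry shape and method names are hypothetical; it uses the standard `reflect` package):
```go
// Dispatcher fans a domain event out to every handler registered for its
// concrete type; handlers run synchronously in registration order.
type Dispatcher struct {
	handlers map[reflect.Type][]func(DomainEvent)
}

func NewDispatcher() *Dispatcher {
	return &Dispatcher{handlers: map[reflect.Type][]func(DomainEvent){}}
}

func (d *Dispatcher) Subscribe(sample DomainEvent, h func(DomainEvent)) {
	t := reflect.TypeOf(sample)
	d.handlers[t] = append(d.handlers[t], h)
}

func (d *Dispatcher) Publish(ev DomainEvent) {
	for _, h := range d.handlers[reflect.TypeOf(ev)] {
		h(ev)
	}
}
```
With this in place, analytics or notification handlers can subscribe to `EventPublished` without touching the save path.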
### 2. Strengthen Aggregate Boundaries
**Problem:** Aggregate internals are exposed via public fields.
**Solution:** The Listener already uses behavior methods well. Extend pattern:
```go
func (l *Listener) IsAuthenticated() bool {
return len(l.authedPubkey.Load()) > 0
}
func (l *Listener) AuthenticatedPubkey() []byte {
return l.authedPubkey.Load()
}
```
### 3. Extract Application Services
**Problem:** Handler methods contain mixed concerns.
**Solution:** Extract domain logic into focused application services.
```go
// pkg/application/event_service.go
type EventService struct {
db database.Database
policyMgr *policy.P
aclRegistry *acl.S
eventPublisher EventPublisher
}
func (s *EventService) ProcessIncomingEvent(ctx context.Context, ev *event.E, authedPubkey []byte) (*EventResult, error)
```
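The body of such a service reads as a straight pipeline. A hedged sketch of the orchestration order, reusing signatures quoted elsewhere in this document (`EventResult`, the error messages, and the publisher's `Deliver` method are hypothetical):
```go
func (s *EventService) ProcessIncomingEvent(ctx context.Context, ev *event.E, authedPubkey []byte) (*EventResult, error) {
	// 1. Access control: the authenticated pubkey must hold write access
	//    (the address argument is elided in this sketch).
	switch s.aclRegistry.GetAccessLevel(authedPubkey, "") {
	case "write", "admin", "owner":
	default:
		return nil, errors.New("auth-required: write access denied")
	}
	// 2. Policy: per-kind rules, size limits, follows whitelists.
	if allowed, err := s.aclRegistry.CheckPolicy(ev); err != nil {
		return nil, err
	} else if !allowed {
		return nil, errors.New("blocked: rejected by policy")
	}
	// 3. Persistence: SaveEvent reports whether the event already existed.
	exists, err := s.db.SaveEvent(ctx, ev)
	if err != nil {
		return nil, err
	}
	// 4. Delivery: fan out to live subscriptions only for new events.
	if !exists {
		s.eventPublisher.Deliver(ev)
	}
	return &EventResult{Stored: !exists}, nil
}
```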
### 4. Establish Ubiquitous Language Glossary
**Problem:** Terminology is inconsistent across the codebase.
**Current Inconsistencies:**
- "subscription" (payment) vs "subscription" (REQ filter)
- "pub" vs "pubkey" vs "author"
- "spider" vs "sync" for relay federation
**Solution:** Maintain a `GLOSSARY.md`:
```markdown
# ORLY Ubiquitous Language
| Term | Definition | Code Symbol |
|------|------------|-------------|
| Event | A signed Nostr message | `event.E` |
| Relay | This server | `Server` |
| Connection | WebSocket session | `Listener` |
| Filter | Query criteria for events | `filter.F` |
| **Event Subscription** | Active filter receiving events | `subscriptions map` |
| **Payment Subscription** | Paid access tier | `database.Subscription` |
| Access Level | Permission tier | `acl.Level` |
| Policy | Event validation rules | `policy.Rule` |
| Blob | Binary content (images, media) | `blossom.BlobMeta` |
| Spider | Event aggregator from external relays | `spider.Spider` |
| Sync | Peer-to-peer replication | `sync.Manager` |
```
### 5. Add Domain-Specific Error Types
**Problem:** Errors are strings or generic types.
**Solution:** Create typed domain errors in `pkg/interfaces/neterr/` pattern:
```go
var (
ErrEventInvalid = &DomainError{Code: "EVENT_INVALID"}
ErrEventBlocked = &DomainError{Code: "EVENT_BLOCKED"}
ErrAuthRequired = &DomainError{Code: "AUTH_REQUIRED"}
ErrQuotaExceeded = &DomainError{Code: "QUOTA_EXCEEDED"}
ErrInviteCodeInvalid = &DomainError{Code: "INVITE_INVALID"}
ErrBlobTooLarge = &DomainError{Code: "BLOB_TOO_LARGE"}
)
```
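For these variables to compile, a small error type is needed; a minimal sketch satisfying the standard `error` interface:
```go
// DomainError pairs a stable machine-readable code with an optional
// human-readable message.
type DomainError struct {
	Code    string
	Message string
}

func (e *DomainError) Error() string {
	if e.Message == "" {
		return e.Code
	}
	return e.Code + ": " + e.Message
}
```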
### 6. Enforce Value Object Immutability - **ADDRESSED**
The `EventRef` type now provides an immutable alternative:
```go
// pkg/interfaces/store/store_interface.go:99-153
type EventRef struct {
id ntypes.EventID // unexported
pub ntypes.Pubkey // unexported
ts int64
ser uint64
}
func (r EventRef) ID() ntypes.EventID { return r.id } // Returns copy
func (r EventRef) IDHex() string { return r.id.Hex() }
func (i *IdPkTs) ToEventRef() EventRef // Migration path
```
### 7. Document Context Map - **THIS DOCUMENT**
The context map is now documented in this file with integration patterns.
---
## Implementation Checklist
### Currently Satisfied
- [x] Bounded contexts identified with clear boundaries
- [x] Repositories abstract persistence for aggregate roots
- [x] Multiple repository implementations (Badger/Neo4j/WasmDB)
- [x] Interface segregation prevents circular dependencies
- [x] Configuration centralized (`app/config/config.go`)
- [x] Per-connection aggregate isolation
- [x] Access control as pluggable strategy pattern
- [x] Value objects have immutable alternative (`EventRef`)
- [x] Context map documented
### Needs Attention
- [ ] Ubiquitous language documented and used consistently
- [ ] Domain events capture important state changes (explicit types)
- [ ] Entities have behavior, not just data (more encapsulation)
- [ ] No business logic in application services (handler decomposition)
- [ ] No infrastructure concerns in domain layer
---
## Appendix: File References
### Core Domain Files
| File | Purpose |
|------|---------|
| `pkg/database/interface.go` | Repository interface (109 lines) |
| `pkg/interfaces/acl/acl.go` | ACL interface definition with PolicyChecker |
| `pkg/interfaces/store/store_interface.go` | Store sub-interfaces, IdPkTs, EventRef |
| `pkg/policy/policy.go` | Policy rules and evaluation (~1000 lines) |
| `pkg/protocol/nip43/types.go` | NIP-43 invite management |
| `pkg/protocol/graph/executor.go` | Graph query execution |
### Application Layer Files
| File | Purpose |
|------|---------|
| `app/server.go` | HTTP/WebSocket server setup (1240 lines) |
| `app/listener.go` | Connection aggregate (297 lines) |
| `app/handle-event.go` | EVENT message handler |
| `app/handle-req.go` | REQ message handler |
| `app/handle-auth.go` | AUTH message handler |
| `app/handle-nip43.go` | NIP-43 membership handlers |
| `app/handle-nip86.go` | NIP-86 management handlers |
| `app/handle-policy-config.go` | Policy configuration events |
### Infrastructure Files
| File | Purpose |
|------|---------|
| `pkg/database/database.go` | Badger implementation |
| `pkg/neo4j/` | Neo4j implementation |
| `pkg/wasmdb/` | WasmDB implementation |
| `pkg/blossom/server.go` | Blossom blob storage server |
| `pkg/ratelimit/limiter.go` | PID-based rate limiting |
| `pkg/sync/manager.go` | Distributed sync manager |
| `pkg/sync/cluster.go` | Cluster replication |
| `pkg/spider/spider.go` | Event spider/aggregator |
### Interface Packages
| Package | Purpose |
|---------|---------|
| `pkg/interfaces/acl/` | ACL abstraction |
| `pkg/interfaces/loadmonitor/` | Load monitoring abstraction |
| `pkg/interfaces/neterr/` | Network error types |
| `pkg/interfaces/pid/` | PID controller interface |
| `pkg/interfaces/policy/` | Policy interface |
| `pkg/interfaces/publisher/` | Event publisher interface |
| `pkg/interfaces/resultiter/` | Result iterator interface |
| `pkg/interfaces/store/` | Store interface with IdPkTs, EventRef |
| `pkg/interfaces/typer/` | Type introspection interface |
---
*Generated: 2025-12-24*
*Analysis based on ORLY codebase v0.36.14*

View File

@@ -1,387 +0,0 @@
# Dgraph Database Implementation Status
## Overview
This document tracks the implementation of Dgraph as an alternative database backend for ORLY. The implementation allows switching between Badger (default) and Dgraph via the `ORLY_DB_TYPE` environment variable.
## Completion Status: ✅ STEP 1 COMPLETE - DGRAPH SERVER INTEGRATION + TESTS
**Build Status:** ✅ Successfully compiles with `CGO_ENABLED=0`
**Binary Test:** ✅ ORLY v0.29.0 starts and runs successfully
**Database Backend:** Uses badger by default, dgraph client integration complete
**Dgraph Integration:** ✅ Real dgraph client connection via dgo library
**Test Suite:** ✅ Comprehensive test suite mirroring badger tests
### ✅ Completed Components
1. **Core Infrastructure**
- Database interface abstraction (`pkg/database/interface.go`)
- Database factory with `ORLY_DB_TYPE` configuration
- Dgraph package structure (`pkg/dgraph/`)
- Schema definition for Nostr events, authors, tags, and markers
- Lifecycle management (initialization, shutdown)
2. **Serial Number Generation**
- Atomic counter using Dgraph markers (`pkg/dgraph/serial.go`)
- Automatic initialization on startup
- Thread-safe increment with mutex protection
- Serial numbers assigned during SaveEvent
3. **Event Operations**
- `SaveEvent`: Store events with graph relationships
- `QueryEvents`: DQL query generation from Nostr filters
- `QueryEventsWithOptions`: Support for delete events and versions
- `CountEvents`: Event counting
- `FetchEventBySerial`: Retrieve by serial number
- `DeleteEvent`: Event deletion by ID
- `DeleteEventBySerial`: Event deletion by serial
- `ProcessDelete`: Kind 5 deletion processing
4. **Metadata Storage (Marker-based)**
- `SetMarker`/`GetMarker`/`HasMarker`/`DeleteMarker`: Key-value storage
- Relay identity storage (using markers)
- All metadata stored as special Marker nodes in graph
5. **Subscriptions & Payments**
- `GetSubscription`/`IsSubscriptionActive`/`ExtendSubscription`
- `RecordPayment`/`GetPaymentHistory`
- `ExtendBlossomSubscription`/`GetBlossomStorageQuota`
- `IsFirstTimeUser`
- All implemented using JSON-encoded markers
6. **NIP-43 Invite System**
- `AddNIP43Member`/`RemoveNIP43Member`/`IsNIP43Member`
- `GetNIP43Membership`/`GetAllNIP43Members`
- `StoreInviteCode`/`ValidateInviteCode`/`DeleteInviteCode`
- All implemented using JSON-encoded markers
7. **Import/Export**
- `Import`/`ImportEventsFromReader`/`ImportEventsFromStrings`
- JSONL format support
- Basic `Export` stub
8. **Configuration**
- `ORLY_DB_TYPE` environment variable added
- Factory pattern for database instantiation
- main.go updated to use database.Database interface
9. **Compilation Fixes (Completed)**
- ✅ All interface signatures matched to badger implementation
- ✅ Fixed 100+ type errors in pkg/dgraph package
- ✅ Updated app layer to use database interface instead of concrete types
- ✅ Added type assertions for compatibility with existing managers
- ✅ Project compiles successfully with both badger and dgraph implementations
10. **Dgraph Server Integration (✅ STEP 1 COMPLETE)**
- ✅ Added dgo client library (v230.0.1)
- ✅ Implemented gRPC connection to external dgraph instance
- ✅ Real Query() and Mutate() methods using dgraph client
- ✅ Schema definition and automatic application on startup
- ✅ ORLY_DGRAPH_URL configuration (default: localhost:9080)
- ✅ Proper connection lifecycle management
- ✅ Badger metadata store for local key-value storage
- ✅ Dual-storage architecture: dgraph for events, badger for metadata
11. **Test Suite (✅ COMPLETE)**
- ✅ Test infrastructure (testmain_test.go, helpers_test.go)
- ✅ Comprehensive save-event tests
- ✅ Comprehensive query-events tests
- ✅ Docker-compose setup for dgraph server
- ✅ Automated test scripts (test-dgraph.sh, dgraph-start.sh)
- ✅ Test documentation (DGRAPH_TESTING.md)
- ✅ All tests compile successfully
- ⏳ Tests require running dgraph server to execute
### ⚠️ Remaining Work (For Production Use)
1. **Unimplemented Methods** (Stubs - Not Critical)
- `GetSerialsFromFilter`: Returns "not implemented" error
- `GetSerialsByRange`: Returns "not implemented" error
- `EventIdsBySerial`: Returns "not implemented" error
- These are helper methods that may not be critical for basic operation
2. **📝 STEP 2: DQL Implementation** (Next Priority)
- Update save-event.go to use real Mutate() calls with RDF N-Quads
- Update query-events.go to parse actual DQL responses
- Implement proper event JSON unmarshaling from dgraph responses
- Add error handling for dgraph-specific errors
- Optimize DQL queries for performance
3. **Schema Optimizations**
- Current tag queries are simplified
- Complex tag filters may need refinement
- Consider using Dgraph facets for better tag indexing
4. **📝 STEP 3: Testing** (After DQL Implementation)
- Set up local dgraph instance for testing
- Integration testing with relay-tester
- Performance comparison with Badger
- Memory usage profiling
- Test with actual dgraph server instance
### 📦 Dependencies Added
```bash
go get github.com/dgraph-io/dgo/v230@v230.0.1
go get google.golang.org/grpc@latest
go get github.com/dgraph-io/badger/v4 # For metadata storage
```
All dependencies have been added and `go mod tidy` completed successfully.
### 🔌 Dgraph Server Integration Details
The implementation uses a **client-server architecture**:
1. **Dgraph Server** (External)
- Runs as a separate process (via docker or standalone)
- Default gRPC endpoint: `localhost:9080`
- Configured via `ORLY_DGRAPH_URL` environment variable
2. **ORLY Dgraph Client** (Integrated)
- Uses dgo library for gRPC communication
- Connects on startup, applies Nostr schema automatically
- Query and Mutate methods communicate with dgraph server
3. **Dual Storage Architecture**
- **Dgraph**: Event graph storage (events, authors, tags, relationships)
- **Badger**: Metadata storage (markers, counters, relay identity)
- This hybrid approach leverages strengths of both databases
## Implementation Approach
### Marker-Based Storage
For metadata that doesn't fit the graph model (subscriptions, NIP-43, identity), we use a marker-based approach:
1. **Markers** are special graph nodes with type "Marker"
2. Each marker has:
- `marker.key`: String index for lookup
- `marker.value`: Hex-encoded or JSON-encoded data
3. This provides key-value storage within the graph database
### Serial Number Management
Serial numbers are critical for event ordering. Implementation:
```go
// Serial counter stored as a special marker
const serialCounterKey = "serial_counter"
// Atomic increment with mutex protection
func (d *D) getNextSerial() (uint64, error) {
serialMutex.Lock()
defer serialMutex.Unlock()
// Query current value, increment, save
...
}
```
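Concretely, the read-modify-write cycle under the mutex might look like the following, where `getMarker`/`setMarker` are hypothetical stand-ins for the actual DQL query and mutation:
```go
// getNextSerial is a hedged reconstruction of the increment described above;
// the mutex makes the read-increment-write sequence atomic within a process.
func (d *D) getNextSerial() (uint64, error) {
	serialMutex.Lock()
	defer serialMutex.Unlock()
	raw, err := d.getMarker(serialCounterKey) // read the current counter
	if err != nil {
		return 0, err
	}
	next := binary.BigEndian.Uint64(raw) + 1
	buf := make([]byte, 8)
	binary.BigEndian.PutUint64(buf, next)
	if err := d.setMarker(serialCounterKey, buf); err != nil { // persist
		return 0, err
	}
	return next, nil
}
```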
### Event Storage
Events are stored as graph nodes with relationships:
- **Event nodes**: ID, serial, kind, created_at, content, sig, pubkey, tags
- **Author nodes**: Pubkey with reverse edges to events
- **Tag nodes**: Tag type and value with reverse edges
- **Relationships**: `authored_by`, `references`, `mentions`, `tagged_with`
## Files Created/Modified
### New Files (`pkg/dgraph/`)
- `dgraph.go`: Main implementation, initialization, schema
- `save-event.go`: Event storage with RDF triple generation
- `query-events.go`: Nostr filter to DQL translation
- `fetch-event.go`: Event retrieval methods
- `delete.go`: Event deletion
- `markers.go`: Key-value metadata storage
- `identity.go`: Relay identity management
- `serial.go`: Serial number generation
- `subscriptions.go`: Subscription/payment methods
- `nip43.go`: NIP-43 invite system
- `import-export.go`: Import/export operations
- `logger.go`: Logging adapter
- `utils.go`: Helper functions
- `README.md`: Documentation
### Modified Files
- `pkg/database/interface.go`: Database interface definition
- `pkg/database/factory.go`: Database factory
- `pkg/database/database.go`: Badger compile-time check
- `app/config/config.go`: Added `ORLY_DB_TYPE` config
- `app/server.go`: Changed to use Database interface
- `app/main.go`: Updated to use Database interface
- `main.go`: Added dgraph import and factory usage
## Usage
### Setting Up Dgraph Server
Before using dgraph mode, start a dgraph server:
```bash
# Using docker (recommended)
docker run -d -p 8080:8080 -p 9080:9080 -p 8000:8000 \
-v ~/dgraph:/dgraph \
dgraph/standalone:latest
# Or using docker-compose (see docs/dgraph-docker-compose.yml)
docker-compose up -d dgraph
```
### Environment Configuration
```bash
# Use Badger (default)
./orly
# Use Dgraph with default localhost connection
export ORLY_DB_TYPE=dgraph
./orly
# Use Dgraph with custom server
export ORLY_DB_TYPE=dgraph
export ORLY_DGRAPH_URL=remote.dgraph.server:9080
./orly
# With full configuration
export ORLY_DB_TYPE=dgraph
export ORLY_DGRAPH_URL=localhost:9080
export ORLY_DATA_DIR=/path/to/data
./orly
```
### Data Storage
#### Badger
- Single directory with SST files
- Typical size: 100-500MB for moderate usage
#### Dgraph
- Three subdirectories:
- `p/`: Postings (main data)
- `w/`: Alpha write-ahead log
- `zw/`: Zero write-ahead log
- Typical size: 500MB-2GB overhead + event data
## Performance Considerations
### Memory Usage
- **Badger**: ~100-200MB baseline
- **Dgraph**: ~500MB-1GB baseline
### Query Performance
- **Simple queries** (by ID, kind, author): Dgraph may be slower than Badger
- **Graph traversals** (follows-of-follows): Dgraph significantly faster
- **Full-text search**: Dgraph has built-in support
### Recommendations
1. Use Badger for simple, high-performance relays
2. Use Dgraph for relays needing complex graph queries
3. Consider hybrid approach: Badger primary + Dgraph secondary
## Next Steps to Complete
### ✅ STEP 1: Dgraph Server Integration (COMPLETED)
- ✅ Added dgo client library
- ✅ Implemented gRPC connection
- ✅ Real Query/Mutate methods
- ✅ Schema application
- ✅ Configuration added
### 📝 STEP 2: DQL Implementation (Next Priority)
1. **Update SaveEvent Implementation** (2-3 hours)
- Replace RDF string building with actual Mutate() calls
- Use dgraph's SetNquads for event insertion
- Handle UIDs and references properly
- Add error handling and transaction rollback
2. **Update QueryEvents Implementation** (2-3 hours)
- Parse actual JSON responses from dgraph Query()
- Implement proper event deserialization
- Handle pagination with DQL offset/limit
- Add query optimization for common patterns
3. **Implement Helper Methods** (1-2 hours)
- FetchEventBySerial using DQL
- GetSerialsByIds using DQL
- CountEvents using DQL aggregation
- DeleteEvent using dgraph mutations
### 📝 STEP 3: Testing (After DQL)
1. **Setup Dgraph Test Instance** (30 minutes)
```bash
# Start dgraph server
docker run -d -p 9080:9080 dgraph/standalone:latest
# Test connection
ORLY_DB_TYPE=dgraph ORLY_DGRAPH_URL=localhost:9080 ./orly
```
2. **Basic Functional Testing** (1 hour)
```bash
# Start with dgraph
ORLY_DB_TYPE=dgraph ./orly
# Test with relay-tester
go run cmd/relay-tester/main.go -url ws://localhost:3334
```
3. **Performance Testing** (2 hours)
```bash
# Compare query performance
# Memory profiling
# Load testing
```
## Known Limitations
1. **Subscription Storage**: Uses simple JSON encoding in markers rather than proper graph nodes
2. **Tag Queries**: Simplified implementation may not handle all complex tag filter combinations
3. **Export**: Basic stub - needs full implementation for production use
4. **Migrations**: Not implemented (Dgraph schema changes require manual updates)
## Conclusion
The Dgraph implementation has completed **✅ STEP 1: DGRAPH SERVER INTEGRATION** successfully.
### What Works Now (Step 1 Complete)
- ✅ Full database interface implementation
- ✅ All method signatures match badger implementation
- ✅ Project compiles successfully with `CGO_ENABLED=0`
- ✅ Binary runs and starts successfully
- ✅ Real dgraph client connection via dgo library
- ✅ gRPC communication with external dgraph server
- ✅ Schema application on startup
- ✅ Query() and Mutate() methods implemented
- ✅ ORLY_DGRAPH_URL configuration
- ✅ Dual-storage architecture (dgraph + badger metadata)
### Implementation Status
- **Step 1: Dgraph Server Integration** ✅ COMPLETE
- **Step 2: DQL Implementation** 📝 Next (save-event.go and query-events.go need updates)
- **Step 3: Testing** 📝 After Step 2 (relay-tester, performance benchmarks)
### Architecture Summary
The implementation uses a **client-server architecture** with dual storage:
1. **Dgraph Client** (ORLY)
- Connects to external dgraph via gRPC (default: localhost:9080)
- Applies Nostr schema automatically on startup
- Query/Mutate methods ready for DQL operations
2. **Dgraph Server** (External)
- Run separately via docker or standalone binary
- Stores event graph data (events, authors, tags, relationships)
- Handles all graph queries and mutations
3. **Badger Metadata Store** (Local)
- Stores markers, counters, relay identity
- Provides fast key-value access for non-graph data
- Complements dgraph for hybrid storage benefits
The abstraction layer is complete and the dgraph client integration is functional. Next step is implementing actual DQL query/mutation logic in save-event.go and query-events.go.

64
Dockerfile Normal file
View File

@@ -0,0 +1,64 @@
# Multi-stage Dockerfile for ORLY relay
# Stage 1: Build stage
# Use Debian-based Go image to match runtime stage (avoids musl/glibc linker mismatch)
FROM golang:1.25-bookworm AS builder
# Install build dependencies
RUN apt-get update && apt-get install -y --no-install-recommends git make && rm -rf /var/lib/apt/lists/*
# Set working directory
WORKDIR /build
# Copy go mod files
COPY go.mod go.sum ./
RUN go mod download
# Copy source code
COPY . .
# Build the binary with CGO disabled
RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o orly -ldflags="-w -s" .
# Stage 2: Runtime stage
# Use Debian slim instead of Alpine because Debian's libsecp256k1 includes
# Schnorr signatures (secp256k1_schnorrsig_*) and ECDH which Nostr requires.
# Alpine's libsecp256k1 is built without these modules.
FROM debian:bookworm-slim
# Install runtime dependencies
RUN apt-get update && \
apt-get install -y --no-install-recommends ca-certificates curl libsecp256k1-1 && \
rm -rf /var/lib/apt/lists/*
# Create app user
RUN groupadd -g 1000 orly && \
useradd -m -u 1000 -g orly orly
# Set working directory
WORKDIR /app
# Copy binary (libsecp256k1.so.1 is already installed via apt)
COPY --from=builder /build/orly /app/orly
# Create data directory
RUN mkdir -p /data && chown -R orly:orly /data /app
# Switch to app user
USER orly
# Expose ports
EXPOSE 3334
# Health check
HEALTHCHECK --interval=10s --timeout=5s --start-period=20s --retries=3 \
CMD curl -f http://localhost:3334/ || exit 1
# Set default environment variables
ENV ORLY_LISTEN=0.0.0.0 \
ORLY_PORT=3334 \
ORLY_DATA_DIR=/data \
ORLY_LOG_LEVEL=info
# Run the binary
ENTRYPOINT ["/app/orly"]

43
Dockerfile.relay-tester Normal file
View File

@@ -0,0 +1,43 @@
# Dockerfile for relay-tester
# Use Debian-based Go image to match runtime stage (avoids musl/glibc linker mismatch)
FROM golang:1.25-bookworm AS builder
# Install build dependencies
RUN apt-get update && apt-get install -y --no-install-recommends git && rm -rf /var/lib/apt/lists/*
# Set working directory
WORKDIR /build
# Copy go mod files
COPY go.mod go.sum ./
RUN go mod download
# Copy source code
COPY . .
# Build the relay-tester binary
RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o relay-tester ./cmd/relay-tester
# Runtime stage
# Use Debian slim instead of Alpine because Debian's libsecp256k1 includes
# Schnorr signatures (secp256k1_schnorrsig_*) and ECDH which Nostr requires.
# Alpine's libsecp256k1 is built without these modules.
FROM debian:bookworm-slim
# Install runtime dependencies
RUN apt-get update && \
apt-get install -y --no-install-recommends ca-certificates libsecp256k1-1 && \
rm -rf /var/lib/apt/lists/*
WORKDIR /app
# Copy binary (libsecp256k1.so.1 is already installed via apt)
COPY --from=builder /build/relay-tester /app/relay-tester
# Default relay URL (can be overridden)
ENV RELAY_URL=ws://orly:3334
# Run the relay tester. Shell form is used because exec-form CMD does not
# expand environment variables; ["-url", "${RELAY_URL}"] would pass the
# literal string "${RELAY_URL}" instead of the configured URL.
ENTRYPOINT ["/bin/sh", "-c", "exec /app/relay-tester -url \"$RELAY_URL\""]

702
README.md Normal file
View File

@@ -0,0 +1,702 @@
# next.orly.dev
---
![orly.dev](./docs/orly.png)
![Version v0.24.1](https://img.shields.io/badge/version-v0.24.1-blue.svg)
[![Documentation](https://img.shields.io/badge/godoc-documentation-blue.svg)](https://pkg.go.dev/next.orly.dev)
[![Support this project](https://img.shields.io/badge/donate-geyser_crowdfunding_project_page-orange.svg)](https://geyser.fund/project/orly)
zap me: ⚡mlekudev@getalby.com
follow me on [nostr](https://jumble.social/users/npub1fjqqy4a93z5zsjwsfxqhc2764kvykfdyttvldkkkdera8dr78vhsmmleku)
## Table of Contents
- [Bug Reports & Feature Requests](#%EF%B8%8F-bug-reports--feature-requests)
- [System Requirements](#%EF%B8%8F-system-requirements)
- [About](#about)
- [Performance & Cryptography](#performance--cryptography)
- [Building](#building)
- [Prerequisites](#prerequisites)
- [Basic Build](#basic-build)
- [Building with Web UI](#building-with-web-ui)
- [Core Features](#core-features)
- [Web UI](#web-ui)
- [Sprocket Event Processing](#sprocket-event-processing)
- [Policy System](#policy-system)
- [Deployment](#deployment)
- [Automated Deployment](#automated-deployment)
- [TLS Configuration](#tls-configuration)
- [systemd Service Management](#systemd-service-management)
- [Remote Deployment](#remote-deployment)
- [Configuration](#configuration)
- [Firewall Configuration](#firewall-configuration)
- [Monitoring](#monitoring)
- [Testing](#testing)
- [Command-Line Tools](#command-line-tools)
- [Access Control](#access-control)
- [Follows ACL](#follows-acl)
- [Curation ACL](#curation-acl)
- [Cluster Replication](#cluster-replication)
- [Developer Notes](#developer-notes)
## ⚠️ Bug Reports & Feature Requests
**Bug reports and feature requests that do not follow the protocol will not be accepted.**
Before submitting any issue, you must read and follow [BUG_REPORTS_AND_FEATURE_REQUEST_PROTOCOL.md](./BUG_REPORTS_AND_FEATURE_REQUEST_PROTOCOL.md).
Requirements:
- **Bug reports**: Include environment details, reproduction steps, expected/actual behavior, and logs
- **Feature requests**: Include problem statement, proposed solution, and use cases
- **Both**: Search existing issues first, verify with latest version, provide minimal reproduction
Issues missing required information will be closed without review.
## ⚠️ System Requirements
> **IMPORTANT: ORLY requires a minimum of 500MB of free memory to operate.**
>
> The relay uses adaptive PID-controlled rate limiting to manage memory pressure. By default, it will:
> - Auto-detect available system memory at startup
> - Target 66% of available memory, capped at 1.5GB for optimal performance
> - **Fail to start** if less than 500MB is available
>
> You can override the memory target with `ORLY_RATE_LIMIT_TARGET_MB` (e.g., `ORLY_RATE_LIMIT_TARGET_MB=2000` for 2GB).
>
> To disable rate limiting (not recommended): `ORLY_RATE_LIMIT_ENABLED=false`
## About
ORLY is a nostr relay written from the ground up to be performant, low latency, and built with a number of features designed to make it well suited for:
- personal relays
- small community relays
- business deployments and RaaS (Relay as a Service) with a nostr-native NWC client to allow accepting payments through NWC capable lightning nodes
- high availability clusters for reliability and/or providing a unified data set across multiple regions
## Performance & Cryptography
ORLY leverages high-performance libraries and custom optimizations for exceptional speed:
- **SIMD Libraries**: Uses [minio/sha256-simd](https://github.com/minio/sha256-simd) for accelerated SHA256 hashing
- **p256k1 Cryptography**: Implements [p256k1.mleku.dev](https://github.com/p256k1/p256k1) for fast elliptic curve operations optimized for nostr
- **Fast Message Encoders**: High-performance encoding/decoding with [templexxx/xhex](https://github.com/templexxx/xhex) for SIMD-accelerated hex operations
The encoders achieve **24% faster JSON marshaling**, **16% faster canonical encoding**, and **54-91% reduction in memory allocations** through custom buffer pre-allocation and zero-allocation optimization techniques.
ORLY uses the fast embedded [badger](https://github.com/hypermodeinc/badger) key-value store with a schema designed for high-performance querying and event storage.
## Building
ORLY is a standard Go application that can be built using the Go toolchain.
### Prerequisites
- Go 1.25.3 or later
- Git
- For web UI: [Bun](https://bun.sh/) JavaScript runtime
### Basic Build
To build the relay binary only:
```bash
git clone <repository-url>
cd next.orly.dev
go build -o orly
```
### Building with Web UI
To build with the embedded web interface:
```bash
# Build the Svelte web application
cd app/web
bun install
bun run build
# Build the Go binary from project root
cd ../../
go build -o orly
```
The recommended way to build and embed the web UI is using the provided script:
```bash
./scripts/update-embedded-web.sh
```
This script will:
- Build the Svelte app in `app/web` to `app/web/dist` using Bun (preferred) or fall back to npm/yarn/pnpm
- Run `go install` from the repository root so the binary picks up the new embedded assets
- Automatically detect and use the best available JavaScript package manager
For manual builds, you can also use:
```bash
#!/bin/bash
# build.sh
echo "Building Svelte app..."
cd app/web
bun install
bun run build
echo "Building Go binary..."
cd ../../
go build -o orly
echo "Build complete!"
```
Make it executable with `chmod +x build.sh` and run with `./build.sh`.
## Core Features
### Web UI
ORLY includes a modern web-based user interface built with [Svelte](https://svelte.dev/) for relay management and monitoring.
- **Secure Authentication**: Nostr key pair authentication with challenge-response
- **Event Management**: Browse, export, import, and search events
- **User Administration**: Role-based permissions (guest, user, admin, owner)
- **Sprocket Management**: Upload and monitor event processing scripts
- **Real-time Updates**: Live event streaming and system monitoring
- **Responsive Design**: Works on desktop and mobile devices
- **Dark/Light Themes**: Persistent theme preferences
The web UI is embedded in the relay binary and accessible at the relay's root path.
#### Web UI Development
For development with hot-reloading, ORLY can proxy web requests to a local dev server while still handling WebSocket relay connections and API requests.
**Environment Variables:**
- `ORLY_WEB_DISABLE` - Set to `true` to disable serving the embedded web UI
- `ORLY_WEB_DEV_PROXY_URL` - URL of the dev server to proxy web requests to (e.g., `localhost:8080`)
**Setup:**
1. Start the dev server (in one terminal):
```bash
cd app/web
bun install
bun run dev
```
Note the port sirv is listening on (e.g., `http://localhost:8080`).
2. Start the relay with dev proxy enabled (in another terminal):
```bash
export ORLY_WEB_DISABLE=true
export ORLY_WEB_DEV_PROXY_URL=localhost:8080
./orly
```
The relay will:
- Handle WebSocket connections at `/` for Nostr protocol
- Handle API requests at `/api/*`
- Proxy all other requests (HTML, JS, CSS, assets) to the dev server
**With a reverse proxy/tunnel:**
If you're running behind a reverse proxy or tunnel (e.g., Caddy, nginx, Cloudflare Tunnel), the setup is the same. The relay listens locally and your reverse proxy forwards traffic to it:
```
Browser ↔ Reverse Proxy ↔ ORLY (port 3334) ↔ Dev Server (port 8080)
                               ↑
                         WebSocket/API
```
Example with the relay on port 3334 and sirv on port 8080:
```bash
# Terminal 1: Dev server
cd app/web && bun run dev
# Output: Your application is ready~!
# Local: http://localhost:8080
# Terminal 2: Relay
export ORLY_WEB_DISABLE=true
export ORLY_WEB_DEV_PROXY_URL=localhost:8080
export ORLY_PORT=3334
./orly
```
**Disabling the web UI without a proxy:**
If you only want to disable the embedded web UI (without proxying to a dev server), just set `ORLY_WEB_DISABLE=true` without setting `ORLY_WEB_DEV_PROXY_URL`. The relay will return 404 for web UI requests while still handling WebSocket and API requests.
### Sprocket Event Processing
ORLY includes a powerful sprocket system for external event processing scripts. Sprocket scripts enable custom filtering, validation, and processing logic for Nostr events before storage.
- **Real-time Processing**: Scripts receive events via stdin and respond with JSONL decisions
- **Three Actions**: `accept`, `reject`, or `shadowReject` events based on custom logic
- **Automatic Recovery**: Failed scripts are automatically disabled with periodic recovery attempts
- **Web UI Management**: Upload, configure, and monitor scripts through the admin interface
```bash
export ORLY_SPROCKET_ENABLED=true
export ORLY_APP_NAME="ORLY"
# Place script at ~/.config/ORLY/sprocket.sh
```
For detailed configuration and examples, see the [sprocket documentation](docs/sprocket/).
### Policy System
ORLY includes a comprehensive policy system for fine-grained control over event storage and retrieval. Configure custom validation rules, access controls, size limits, and age restrictions.
- **Access Control**: Allow/deny based on pubkeys, roles, or social relationships
- **Content Filtering**: Size limits, age validation, and custom rules
- **Script Integration**: Execute custom scripts for complex policy logic
- **Real-time Enforcement**: Policies applied to both read and write operations
```bash
export ORLY_POLICY_ENABLED=true
# Default policy file: ~/.config/ORLY/policy.json
# OPTIONAL: Use a custom policy file location
# WARNING: ORLY_POLICY_PATH MUST be an ABSOLUTE path (starting with /)
# Relative paths will be REJECTED and the relay will fail to start
export ORLY_POLICY_PATH=/etc/orly/policy.json
```
For detailed configuration and examples, see the [Policy Usage Guide](docs/POLICY_USAGE_GUIDE.md).
## Deployment
ORLY includes an automated deployment script that handles Go installation, dependency setup, building, and systemd service configuration.
### Automated Deployment
The deployment script (`scripts/deploy.sh`) provides a complete setup solution:
```bash
# Clone the repository
git clone <repository-url>
cd next.orly.dev
# Run the deployment script
./scripts/deploy.sh
```
The script will:
1. **Install Go 1.25.3** if not present (in `~/.local/go`)
2. **Configure environment** by creating `~/.goenv` and updating `~/.bashrc`
3. **Build the relay** with embedded web UI using `update-embedded-web.sh`
4. **Set capabilities** for port 443 binding (requires sudo)
5. **Install binary** to `~/.local/bin/orly`
6. **Create systemd service** and enable it
After deployment, reload your shell environment:
```bash
source ~/.bashrc
```
### TLS Configuration
ORLY supports automatic TLS certificate management with Let's Encrypt and custom certificates:
```bash
# Enable TLS with Let's Encrypt for specific domains
export ORLY_TLS_DOMAINS=relay.example.com,backup.relay.example.com
# Optional: Use custom certificates (will load .pem and .key files)
export ORLY_CERTS=/path/to/cert1,/path/to/cert2
# When TLS domains are configured, ORLY will:
# - Listen on port 443 for HTTPS/WSS
# - Listen on port 80 for ACME challenges
# - Ignore ORLY_PORT setting
```
Certificate files should be named with `.pem` and `.key` extensions:
- `/path/to/cert1.pem` (certificate)
- `/path/to/cert1.key` (private key)
### systemd Service Management
The deployment script creates a systemd service for easy management:
```bash
# Start the service
sudo systemctl start orly
# Stop the service
sudo systemctl stop orly
# Restart the service
sudo systemctl restart orly
# Enable service to start on boot
sudo systemctl enable orly --now
# Disable service from starting on boot
sudo systemctl disable orly --now
# Check service status
sudo systemctl status orly
# View service logs
sudo journalctl -u orly -f
# View recent logs
sudo journalctl -u orly --since "1 hour ago"
```
### Remote Deployment
You can deploy ORLY on a remote server using SSH:
```bash
# Deploy to a VPS with SSH key authentication
ssh user@your-server.com << 'EOF'
# Clone and deploy
git clone <repository-url>
cd next.orly.dev
./scripts/deploy.sh
# Configure your relay
echo 'export ORLY_TLS_DOMAINS=relay.example.com' >> ~/.bashrc
echo 'export ORLY_ADMINS=npub1your_admin_key_here' >> ~/.bashrc
# Start the service
sudo systemctl start orly --now
EOF
# Check deployment status
ssh user@your-server.com 'sudo systemctl status orly'
```
### Configuration
After deployment, configure your relay by setting environment variables in your shell profile:
```bash
# Add to ~/.bashrc or ~/.profile
export ORLY_TLS_DOMAINS=relay.example.com
export ORLY_ADMINS=npub1your_admin_key
export ORLY_ACL_MODE=follows
export ORLY_APP_NAME="MyRelay"
```
Then restart the service:
```bash
source ~/.bashrc
sudo systemctl restart orly
```
### Firewall Configuration
Ensure your firewall allows the necessary ports:
```bash
# For TLS-enabled relays
sudo ufw allow 80/tcp # HTTP (ACME challenges)
sudo ufw allow 443/tcp # HTTPS/WSS
# For non-TLS relays
sudo ufw allow 3334/tcp # Default ORLY port
# Enable firewall if not already enabled
sudo ufw enable
```
### Monitoring
Monitor your relay using systemd and standard Linux tools:
```bash
# Service status and logs
sudo systemctl status orly
sudo journalctl -u orly -f
# Resource usage
htop
sudo ss -tulpn | grep orly
# Disk usage (database grows over time)
du -sh ~/.local/share/ORLY/
# Check TLS certificates (if using Let's Encrypt)
ls -la ~/.local/share/ORLY/autocert/
```
## Testing
ORLY includes comprehensive testing tools for protocol validation and performance testing.
- **Protocol Testing**: Use `relay-tester` for Nostr protocol compliance validation
- **Stress Testing**: Performance testing under various load conditions
- **Benchmark Suite**: Comparative performance testing across relay implementations
For detailed testing instructions, multi-relay testing scenarios, and advanced usage, see the [Relay Testing Guide](docs/RELAY_TESTING_GUIDE.md).
The benchmark suite provides comprehensive performance testing and comparison across multiple relay implementations, including throughput, latency, and memory usage metrics.
## Command-Line Tools
ORLY includes several command-line utilities in the `cmd/` directory for testing, debugging, and administration.
### relay-tester
Nostr protocol compliance testing tool. Validates that a relay correctly implements the Nostr protocol specification.
```bash
# Run all protocol compliance tests
go run ./cmd/relay-tester -url ws://localhost:3334
# List available tests
go run ./cmd/relay-tester -list
# Run specific test
go run ./cmd/relay-tester -url ws://localhost:3334 -test "Basic Event"
# Output results as JSON
go run ./cmd/relay-tester -url ws://localhost:3334 -json
```
### benchmark
Comprehensive relay performance benchmarking tool. Tests event storage, queries, and subscription performance with detailed latency metrics (P90, P95, P99).
```bash
# Run benchmarks against local database
go run ./cmd/benchmark -data-dir /tmp/bench-db -events 10000 -workers 4
# Run benchmarks against a running relay
go run ./cmd/benchmark -relay ws://localhost:3334 -events 5000
# Use different database backends
go run ./cmd/benchmark -dgraph -events 10000
go run ./cmd/benchmark -neo4j -events 10000
```
The `cmd/benchmark/` directory also includes Docker Compose configurations for comparative benchmarks across multiple relay implementations (strfry, nostr-rs-relay, khatru, etc.).
### stresstest
Load testing tool for evaluating relay performance under sustained high-traffic conditions. Generates events with random content and tags to simulate realistic workloads.
```bash
# Run stress test with 10 concurrent workers
go run ./cmd/stresstest -url ws://localhost:3334 -workers 10 -duration 60s
# Generate events with random p-tags (up to 100 per event)
go run ./cmd/stresstest -url ws://localhost:3334 -workers 5
```
### blossomtest
Tests the Blossom blob storage protocol (BUD-01/BUD-02) implementation. Validates upload, download, and authentication flows.
```bash
# Test with generated key
go run ./cmd/blossomtest -url http://localhost:3334 -size 1024
# Test with specific nsec
go run ./cmd/blossomtest -url http://localhost:3334 -nsec nsec1...
# Test anonymous uploads (no authentication)
go run ./cmd/blossomtest -url http://localhost:3334 -no-auth
```
### aggregator
Event aggregation utility that fetches events from multiple relays using bloom filters for deduplication. Useful for syncing events across relays with memory-efficient duplicate detection.
```bash
go run ./cmd/aggregator -relays wss://relay1.com,wss://relay2.com -output events.jsonl
```
### convert
Key format conversion utility. Converts between hex and bech32 (npub/nsec) formats for Nostr keys.
```bash
# Convert npub to hex
go run ./cmd/convert npub1abc...
# Convert hex to npub
go run ./cmd/convert 0123456789abcdef...
# Convert secret key (nsec or hex) - outputs both nsec and derived npub
go run ./cmd/convert --secret nsec1xyz...
```
### FIND
Free Internet Name Daemon - CLI tool for the distributed naming system. Manages name registration, transfers, and certificate issuance.
```bash
# Validate a name format
go run ./cmd/FIND verify-name example.nostr
# Generate a new key pair
go run ./cmd/FIND generate-key
# Create a registration proposal
go run ./cmd/FIND register myname.nostr
# Transfer a name to a new owner
go run ./cmd/FIND transfer myname.nostr npub1newowner...
```
### policytest
Tests the policy system for event write control. Validates that policy rules correctly allow or reject events based on kind, pubkey, and other criteria.
```bash
go run ./cmd/policytest -url ws://localhost:3334 -type event -kind 4678
go run ./cmd/policytest -url ws://localhost:3334 -type req -kind 1
go run ./cmd/policytest -url ws://localhost:3334 -type publish-and-query -count 5
```
### policyfiltertest
Tests policy-based filtering with authorized and unauthorized pubkeys. Validates access control rules for specific users.
```bash
go run ./cmd/policyfiltertest -url ws://localhost:3334 \
-allowed-pubkey <hex> -allowed-sec <hex> \
-unauthorized-pubkey <hex> -unauthorized-sec <hex>
```
### subscription-test
Tests WebSocket subscription stability over extended periods. Monitors for dropped subscriptions and connection issues.
```bash
# Run subscription stability test for 60 seconds
go run ./cmd/subscription-test -url ws://localhost:3334 -duration 60 -kind 1
# With verbose output
go run ./cmd/subscription-test -url ws://localhost:3334 -duration 120 -v
```
### subscription-test-simple
Simplified subscription stability test that verifies subscriptions remain active without dropping over the test duration.
```bash
go run ./cmd/subscription-test-simple -url ws://localhost:3334 -duration 120
```
## Access Control
ORLY provides four ACL (Access Control List) modes to control who can publish events to your relay:
| Mode | Description | Best For |
|------|-------------|----------|
| `none` | Open relay, anyone can write | Public relays |
| `follows` | Write access based on admin follow lists | Personal/community relays |
| `managed` | Explicit allow/deny lists via NIP-86 API | Private relays |
| `curating` | Three-tier classification with rate limiting | Curated community relays |
```bash
export ORLY_ACL_MODE=follows # or: none, managed, curating
```
### Follows ACL
The follows ACL system provides flexible relay access control based on social relationships in the Nostr network.
```bash
export ORLY_ACL_MODE=follows
export ORLY_ADMINS=npub1fjqqy4a93z5zsjwsfxqhc2764kvykfdyttvldkkkdera8dr78vhsmmleku
./orly
```
The system grants write access to users followed by designated admins, with read-only access for others. Follow lists update dynamically as admins modify their relationships.
### Curation ACL
The curation ACL mode provides sophisticated content curation with a three-tier publisher classification system:
- **Trusted**: Unlimited publishing, bypass rate limits
- **Blacklisted**: Blocked from publishing, invisible to regular users
- **Unclassified**: Rate-limited publishing (default 50 events/day)
Key features:
- **Kind whitelisting**: Only allow specific event kinds (e.g., social, DMs, longform)
- **IP-based flood protection**: Auto-ban IPs that exceed rate limits
- **Spam flagging**: Mark events as spam without deleting
- **Web UI management**: Configure via the built-in curation interface
```bash
export ORLY_ACL_MODE=curating
export ORLY_OWNERS=npub1your_owner_key
./orly
```
After starting, publish a configuration event (kind 30078) to enable the relay. The web UI at `/#curation` provides a complete management interface.
For detailed configuration and API documentation, see the [Curation Mode Guide](docs/CURATION_MODE_GUIDE.md).
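For orientation, the handler in this changeset dispatches on kind 30078 events carrying a `d` tag of `curating-config`. A skeletal, hypothetical sketch of the envelope shape only; the `content` payload below is a placeholder, not the actual schema (which the guide above documents):
```json
{
  "kind": 30078,
  "tags": [["d", "curating-config"]],
  "content": "{ ... curation settings per the guide ... }"
}
```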
### Cluster Replication
ORLY supports distributed relay clusters using active replication. When configured with peer relays, ORLY automatically synchronizes events between cluster members by polling peers over HTTP.
```bash
export ORLY_RELAY_PEERS=https://peer1.example.com,https://peer2.example.com
export ORLY_CLUSTER_ADMINS=npub1cluster_admin_key
```
**Privacy Considerations:** By default, ORLY propagates all events including privileged events (DMs, gift wraps, etc.) to cluster peers for complete synchronization. This ensures no data loss but may expose private communications to other relay operators in your cluster.
To enhance privacy, you can disable propagation of privileged events:
```bash
export ORLY_CLUSTER_PROPAGATE_PRIVILEGED_EVENTS=false
```
**Important:** When disabled, privileged events will not be replicated to peer relays. This provides better privacy but means these events will only be available on the originating relay. Users should be aware that accessing their privileged events may require connecting directly to the relay where they were originally published.
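Putting both settings together, a privacy-conscious cluster member would typically be configured as:
```bash
export ORLY_RELAY_PEERS=https://peer1.example.com,https://peer2.example.com
export ORLY_CLUSTER_ADMINS=npub1cluster_admin_key
export ORLY_CLUSTER_PROPAGATE_PRIVILEGED_EVENTS=false
./orly
```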
## Developer Notes
### Binary-Optimized Tag Storage
The nostr library (`git.mleku.dev/mleku/nostr/encoders/tag`) uses binary optimization for `e` and `p` tags to reduce memory usage and improve comparison performance.
When events are unmarshaled from JSON, 64-character hex values in e/p tags are converted to a 33-byte binary format (a 32-byte hash plus a single null terminator byte).
**Important:** When working with e/p tag values in code:
- **DO NOT** use `tag.Value()` directly - it returns raw bytes which may be binary, not hex
- **ALWAYS** use `tag.ValueHex()` to get a hex string regardless of storage format
- **Use** `tag.ValueBinary()` to get raw 32-byte binary (returns nil if not binary-encoded)
```go
// CORRECT: Use ValueHex() for hex decoding
pt, err := hex.Dec(string(pTag.ValueHex()))
// WRONG: Value() may return binary bytes, not hex
pt, err := hex.Dec(string(pTag.Value())) // Will fail for binary-encoded tags!
```
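When the raw 32 bytes are needed (for example, to build a filter), `ValueBinary()` can skip the hex round-trip entirely. A minimal sketch of the pattern, assuming the tag type is `*tag.T` as used elsewhere in this changeset; the helper name is illustrative, not part of the library:
```go
// eventIDBytes returns the 32-byte value of an e/p tag whether it is
// stored in binary form or as a hex string.
func eventIDBytes(t *tag.T) ([]byte, error) {
	// Fast path: tag already holds the 32-byte binary form
	// (ValueBinary returns nil when the value is not binary-encoded).
	if b := t.ValueBinary(); b != nil {
		return b, nil
	}
	// Fallback: ValueHex always yields hex, so decode it.
	return hex.Dec(string(t.ValueHex()))
}
```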
### Release Process
The `/release` command pushes to multiple git remotes. To push to git.mleku.dev with the dedicated SSH key, ensure the `gitmlekudev` key is configured:
```bash
# SSH key should be at ~/.ssh/gitmlekudev
# The release command uses GIT_SSH_COMMAND to specify this key:
GIT_SSH_COMMAND="ssh -i ~/.ssh/gitmlekudev" git push ssh://mleku@git.mleku.dev:2222/mleku/next.orly.dev.git main --tags
```
Remotes pushed during release:
- `origin` - Primary remote
- `gitea` - Gitea mirror
- `git.mleku.dev` - Using `gitmlekudev` SSH key

View File

@@ -42,12 +42,23 @@ func (s *Server) blossomHandler(w http.ResponseWriter, r *http.Request) {
if !strings.HasPrefix(r.URL.Path, "/") {
r.URL.Path = "/" + r.URL.Path
}
// Set baseURL in request context for blossom server to use
// Use the exported key type from the blossom package
baseURL := s.ServiceURL(r) + "/blossom"
type baseURLKey struct{}
r = r.WithContext(context.WithValue(r.Context(), baseURLKey{}, baseURL))
r = r.WithContext(context.WithValue(r.Context(), blossom.BaseURLKey{}, baseURL))
s.blossomServer.Handler().ServeHTTP(w, r)
}
// blossomRootHandler handles blossom requests at root level (for clients like Jumble)
// Note: Even though requests come to root-level paths like /upload, we return URLs
// with /blossom prefix because that's where the blob download handlers are registered.
func (s *Server) blossomRootHandler(w http.ResponseWriter, r *http.Request) {
// Set baseURL with /blossom prefix so returned blob URLs point to working handlers
baseURL := s.ServiceURL(r) + "/blossom"
r = r.WithContext(context.WithValue(r.Context(), blossom.BaseURLKey{}, baseURL))
s.blossomServer.Handler().ServeHTTP(w, r)
}

View File

@@ -1,5 +1,13 @@
// Package config provides a go-simpler.org/env configuration table and helpers
// for working with the list of key/value lists stored in .env files.
//
// IMPORTANT: This file is the SINGLE SOURCE OF TRUTH for all environment variables.
// All configuration options MUST be defined here with proper `env` struct tags.
// Never use os.Getenv() directly in other packages - pass configuration via structs.
// This ensures all options appear in `./orly help` output and are documented.
//
// For database backends, use GetDatabaseConfigValues() to extract database-specific
// settings, then construct a database.DatabaseConfig in the caller (e.g., main.go).
package config
import (
@@ -16,6 +24,8 @@ import (
"go-simpler.org/env"
lol "lol.mleku.dev"
"lol.mleku.dev/chk"
"lol.mleku.dev/log"
"next.orly.dev/pkg/logbuffer"
"next.orly.dev/pkg/version"
)
@@ -31,9 +41,11 @@ type C struct {
EnableShutdown bool `env:"ORLY_ENABLE_SHUTDOWN" default:"false" usage:"if true, expose /shutdown on the health port to gracefully stop the process (for profiling)"`
LogLevel string `env:"ORLY_LOG_LEVEL" default:"info" usage:"relay log level: fatal error warn info debug trace"`
DBLogLevel string `env:"ORLY_DB_LOG_LEVEL" default:"info" usage:"database log level: fatal error warn info debug trace"`
DBBlockCacheMB int `env:"ORLY_DB_BLOCK_CACHE_MB" default:"512" usage:"Badger block cache size in MB (higher improves read hit ratio)"`
DBIndexCacheMB int `env:"ORLY_DB_INDEX_CACHE_MB" default:"256" usage:"Badger index cache size in MB (improves index lookup performance)"`
DBBlockCacheMB int `env:"ORLY_DB_BLOCK_CACHE_MB" default:"1024" usage:"Badger block cache size in MB (higher improves read hit ratio, increase for large archives)"`
DBIndexCacheMB int `env:"ORLY_DB_INDEX_CACHE_MB" default:"512" usage:"Badger index cache size in MB (improves index lookup performance, increase for large archives)"`
DBZSTDLevel int `env:"ORLY_DB_ZSTD_LEVEL" default:"3" usage:"Badger ZSTD compression level (1=fast/500MB/s, 3=balanced, 9=best ratio/slower, 0=disable)"`
LogToStdout bool `env:"ORLY_LOG_TO_STDOUT" default:"false" usage:"log to stdout instead of stderr"`
LogBufferSize int `env:"ORLY_LOG_BUFFER_SIZE" default:"10000" usage:"number of log entries to keep in memory for web UI viewing (0 disables)"`
Pprof string `env:"ORLY_PPROF" usage:"enable pprof in modes: cpu,memory,allocation,heap,block,goroutine,threadcreate,mutex"`
PprofPath string `env:"ORLY_PPROF_PATH" usage:"optional directory to write pprof profiles into (inside container); default is temporary dir"`
PprofHTTP bool `env:"ORLY_PPROF_HTTP" default:"false" usage:"if true, expose net/http/pprof on port 6060"`
@@ -41,7 +53,7 @@ type C struct {
IPBlacklist []string `env:"ORLY_IP_BLACKLIST" usage:"comma-separated list of IP addresses to block; matches on prefixes to allow subnets, e.g. 192.168 = 192.168.0.0/16"`
Admins []string `env:"ORLY_ADMINS" usage:"comma-separated list of admin npubs"`
Owners []string `env:"ORLY_OWNERS" usage:"comma-separated list of owner npubs, who have full control of the relay for wipe and restart and other functions"`
ACLMode string `env:"ORLY_ACL_MODE" usage:"ACL mode: follows, managed (nip-86), none" default:"none"`
ACLMode string `env:"ORLY_ACL_MODE" usage:"ACL mode: follows, managed (nip-86), curating, none" default:"none"`
AuthRequired bool `env:"ORLY_AUTH_REQUIRED" usage:"require authentication for all requests (works with managed ACL)" default:"false"`
AuthToWrite bool `env:"ORLY_AUTH_TO_WRITE" usage:"require authentication only for write operations (EVENT), allow REQ/COUNT without auth" default:"false"`
BootstrapRelays []string `env:"ORLY_BOOTSTRAP_RELAYS" usage:"comma-separated list of bootstrap relay URLs for initial sync"`
@@ -55,7 +67,13 @@ type C struct {
ClusterAdmins []string `env:"ORLY_CLUSTER_ADMINS" usage:"comma-separated list of npubs authorized to manage cluster membership"`
FollowListFrequency time.Duration `env:"ORLY_FOLLOW_LIST_FREQUENCY" usage:"how often to fetch admin follow lists (default: 1h)" default:"1h"`
// Blossom blob storage service level settings
// Progressive throttle for follows ACL mode - allows non-followed users to write with increasing delay
FollowsThrottleEnabled bool `env:"ORLY_FOLLOWS_THROTTLE" default:"false" usage:"enable progressive delay for non-followed users in follows ACL mode"`
FollowsThrottlePerEvent time.Duration `env:"ORLY_FOLLOWS_THROTTLE_INCREMENT" default:"200ms" usage:"delay added per event for non-followed users"`
FollowsThrottleMaxDelay time.Duration `env:"ORLY_FOLLOWS_THROTTLE_MAX" default:"60s" usage:"maximum throttle delay cap"`
// Blossom blob storage service settings
BlossomEnabled bool `env:"ORLY_BLOSSOM_ENABLED" default:"true" usage:"enable Blossom blob storage server (only works with Badger backend)"`
BlossomServiceLevels string `env:"ORLY_BLOSSOM_SERVICE_LEVELS" usage:"comma-separated list of service levels in format: name:storage_mb_per_sat_per_month (e.g., basic:1,premium:10)"`
// Web UI and dev mode settings
@@ -68,7 +86,13 @@ type C struct {
// Spider settings
SpiderMode string `env:"ORLY_SPIDER_MODE" default:"none" usage:"spider mode for syncing events: none, follows"`
PolicyEnabled bool `env:"ORLY_POLICY_ENABLED" default:"false" usage:"enable policy-based event processing (configuration found in $HOME/.config/ORLY/policy.json)"`
// Directory Spider settings
DirectorySpiderEnabled bool `env:"ORLY_DIRECTORY_SPIDER" default:"false" usage:"enable directory spider for metadata sync (kinds 0, 3, 10000, 10002)"`
DirectorySpiderInterval time.Duration `env:"ORLY_DIRECTORY_SPIDER_INTERVAL" default:"24h" usage:"how often to run directory spider"`
DirectorySpiderMaxHops int `env:"ORLY_DIRECTORY_SPIDER_HOPS" default:"3" usage:"maximum hops for relay discovery from seed users"`
PolicyEnabled bool `env:"ORLY_POLICY_ENABLED" default:"false" usage:"enable policy-based event processing (default config: $HOME/.config/ORLY/policy.json)"`
PolicyPath string `env:"ORLY_POLICY_PATH" usage:"ABSOLUTE path to policy configuration file (MUST start with /); overrides default location; relative paths are rejected"`
// NIP-43 Relay Access Metadata and Requests
NIP43Enabled bool `env:"ORLY_NIP43_ENABLED" default:"false" usage:"enable NIP-43 relay access metadata and invite system"`
@@ -77,17 +101,114 @@ type C struct {
NIP43InviteExpiry time.Duration `env:"ORLY_NIP43_INVITE_EXPIRY" default:"24h" usage:"how long invite codes remain valid"`
// Database configuration
DBType string `env:"ORLY_DB_TYPE" default:"badger" usage:"database backend to use: badger or dgraph"`
DgraphURL string `env:"ORLY_DGRAPH_URL" default:"localhost:9080" usage:"dgraph gRPC endpoint address (only used when ORLY_DB_TYPE=dgraph)"`
QueryCacheSizeMB int `env:"ORLY_QUERY_CACHE_SIZE_MB" default:"512" usage:"query cache size in MB (caches database query results for faster REQ responses)"`
QueryCacheMaxAge string `env:"ORLY_QUERY_CACHE_MAX_AGE" default:"5m" usage:"maximum age for cached query results (e.g., 5m, 10m, 1h)"`
DBType string `env:"ORLY_DB_TYPE" default:"badger" usage:"database backend to use: badger, bbolt, or neo4j"`
QueryCacheDisabled bool `env:"ORLY_QUERY_CACHE_DISABLED" default:"true" usage:"disable query cache to reduce memory usage (trades memory for query performance)"`
// BBolt configuration (only used when ORLY_DB_TYPE=bbolt)
BboltBatchMaxEvents int `env:"ORLY_BBOLT_BATCH_MAX_EVENTS" default:"5000" usage:"max events before flush (tuned for HDD, only used when ORLY_DB_TYPE=bbolt)"`
BboltBatchMaxMB int `env:"ORLY_BBOLT_BATCH_MAX_MB" default:"128" usage:"max batch size in MB before flush (only used when ORLY_DB_TYPE=bbolt)"`
BboltFlushTimeout int `env:"ORLY_BBOLT_FLUSH_TIMEOUT_SEC" default:"30" usage:"max seconds before flush (only used when ORLY_DB_TYPE=bbolt)"`
BboltBloomSizeMB int `env:"ORLY_BBOLT_BLOOM_SIZE_MB" default:"16" usage:"bloom filter size in MB for edge queries (only used when ORLY_DB_TYPE=bbolt)"`
BboltNoSync bool `env:"ORLY_BBOLT_NO_SYNC" default:"false" usage:"disable fsync for performance (DANGEROUS - data loss risk, only used when ORLY_DB_TYPE=bbolt)"`
BboltMmapSizeMB int `env:"ORLY_BBOLT_MMAP_SIZE_MB" default:"8192" usage:"initial mmap size in MB (only used when ORLY_DB_TYPE=bbolt)"`
QueryCacheSizeMB int `env:"ORLY_QUERY_CACHE_SIZE_MB" default:"512" usage:"query cache size in MB (caches database query results for faster REQ responses)"`
QueryCacheMaxAge string `env:"ORLY_QUERY_CACHE_MAX_AGE" default:"5m" usage:"maximum age for cached query results (e.g., 5m, 10m, 1h)"`
// Neo4j configuration (only used when ORLY_DB_TYPE=neo4j)
Neo4jURI string `env:"ORLY_NEO4J_URI" default:"bolt://localhost:7687" usage:"Neo4j bolt URI (only used when ORLY_DB_TYPE=neo4j)"`
Neo4jUser string `env:"ORLY_NEO4J_USER" default:"neo4j" usage:"Neo4j authentication username (only used when ORLY_DB_TYPE=neo4j)"`
Neo4jPassword string `env:"ORLY_NEO4J_PASSWORD" default:"password" usage:"Neo4j authentication password (only used when ORLY_DB_TYPE=neo4j)"`
// Neo4j driver tuning (memory and connection management)
Neo4jMaxConnPoolSize int `env:"ORLY_NEO4J_MAX_CONN_POOL" default:"25" usage:"max Neo4j connection pool size (driver default: 100, lower reduces memory)"`
Neo4jFetchSize int `env:"ORLY_NEO4J_FETCH_SIZE" default:"1000" usage:"max records per fetch batch (prevents memory overflow, -1=fetch all)"`
Neo4jMaxTxRetrySeconds int `env:"ORLY_NEO4J_MAX_TX_RETRY_SEC" default:"30" usage:"max seconds for retryable transaction attempts"`
Neo4jQueryResultLimit int `env:"ORLY_NEO4J_QUERY_RESULT_LIMIT" default:"10000" usage:"max results returned per query (prevents unbounded memory usage, 0=unlimited)"`
// Advanced database tuning (increase for large archives to reduce cache misses)
SerialCachePubkeys int `env:"ORLY_SERIAL_CACHE_PUBKEYS" default:"250000" usage:"max pubkeys to cache for compact event storage (~8MB memory, increase for large archives)"`
SerialCacheEventIds int `env:"ORLY_SERIAL_CACHE_EVENT_IDS" default:"1000000" usage:"max event IDs to cache for compact event storage (~32MB memory, increase for large archives)"`
// Connection concurrency control
MaxHandlersPerConnection int `env:"ORLY_MAX_HANDLERS_PER_CONN" default:"100" usage:"max concurrent message handlers per WebSocket connection (limits goroutine growth under load)"`
// Adaptive rate limiting (PID-controlled)
RateLimitEnabled bool `env:"ORLY_RATE_LIMIT_ENABLED" default:"true" usage:"enable adaptive PID-controlled rate limiting for database operations"`
RateLimitTargetMB int `env:"ORLY_RATE_LIMIT_TARGET_MB" default:"0" usage:"target memory limit in MB (0=auto-detect: 66% of available, min 500MB)"`
RateLimitWriteKp float64 `env:"ORLY_RATE_LIMIT_WRITE_KP" default:"0.5" usage:"PID proportional gain for write operations"`
RateLimitWriteKi float64 `env:"ORLY_RATE_LIMIT_WRITE_KI" default:"0.1" usage:"PID integral gain for write operations"`
RateLimitWriteKd float64 `env:"ORLY_RATE_LIMIT_WRITE_KD" default:"0.05" usage:"PID derivative gain for write operations (filtered)"`
RateLimitReadKp float64 `env:"ORLY_RATE_LIMIT_READ_KP" default:"0.3" usage:"PID proportional gain for read operations"`
RateLimitReadKi float64 `env:"ORLY_RATE_LIMIT_READ_KI" default:"0.05" usage:"PID integral gain for read operations"`
RateLimitReadKd float64 `env:"ORLY_RATE_LIMIT_READ_KD" default:"0.02" usage:"PID derivative gain for read operations (filtered)"`
RateLimitMaxWriteMs int `env:"ORLY_RATE_LIMIT_MAX_WRITE_MS" default:"1000" usage:"maximum delay for write operations in milliseconds"`
RateLimitMaxReadMs int `env:"ORLY_RATE_LIMIT_MAX_READ_MS" default:"500" usage:"maximum delay for read operations in milliseconds"`
RateLimitWriteTarget float64 `env:"ORLY_RATE_LIMIT_WRITE_TARGET" default:"0.85" usage:"PID setpoint for writes (throttle when load exceeds this, 0.0-1.0)"`
RateLimitReadTarget float64 `env:"ORLY_RATE_LIMIT_READ_TARGET" default:"0.90" usage:"PID setpoint for reads (throttle when load exceeds this, 0.0-1.0)"`
RateLimitEmergencyThreshold float64 `env:"ORLY_RATE_LIMIT_EMERGENCY_THRESHOLD" default:"1.167" usage:"memory pressure ratio (target+1/6) to trigger emergency mode with aggressive throttling"`
RateLimitRecoveryThreshold float64 `env:"ORLY_RATE_LIMIT_RECOVERY_THRESHOLD" default:"0.833" usage:"memory pressure ratio (target-1/6) below which emergency mode exits (hysteresis)"`
RateLimitEmergencyMaxMs int `env:"ORLY_RATE_LIMIT_EMERGENCY_MAX_MS" default:"5000" usage:"maximum delay for writes in emergency mode (milliseconds)"`
// TLS configuration
TLSDomains []string `env:"ORLY_TLS_DOMAINS" usage:"comma-separated list of domains to respond to for TLS"`
Certs []string `env:"ORLY_CERTS" usage:"comma-separated list of paths to certificate root names (e.g., /path/to/cert will load /path/to/cert.pem and /path/to/cert.key)"`
// WireGuard VPN configuration (for secure bunker access)
WGEnabled bool `env:"ORLY_WG_ENABLED" default:"false" usage:"enable embedded WireGuard VPN server for private bunker access"`
WGPort int `env:"ORLY_WG_PORT" default:"51820" usage:"UDP port for WireGuard VPN server"`
WGEndpoint string `env:"ORLY_WG_ENDPOINT" usage:"public IP/domain for WireGuard endpoint (required if WG enabled)"`
WGNetwork string `env:"ORLY_WG_NETWORK" default:"10.73.0.0/16" usage:"WireGuard internal network CIDR"`
// NIP-46 Bunker configuration (remote signing service)
BunkerEnabled bool `env:"ORLY_BUNKER_ENABLED" default:"false" usage:"enable NIP-46 bunker signing service (requires WireGuard)"`
BunkerPort int `env:"ORLY_BUNKER_PORT" default:"3335" usage:"internal port for bunker WebSocket (only accessible via WireGuard)"`
// Tor hidden service configuration (subprocess mode - runs tor binary automatically)
TorEnabled bool `env:"ORLY_TOR_ENABLED" default:"true" usage:"enable Tor hidden service (spawns tor subprocess; disable with false if tor not installed)"`
TorPort int `env:"ORLY_TOR_PORT" default:"3336" usage:"internal port for Tor hidden service traffic"`
TorDataDir string `env:"ORLY_TOR_DATA_DIR" usage:"Tor data directory (default: $ORLY_DATA_DIR/tor)"`
TorBinary string `env:"ORLY_TOR_BINARY" default:"tor" usage:"path to tor binary (default: search in PATH)"`
TorSOCKS int `env:"ORLY_TOR_SOCKS" default:"0" usage:"SOCKS port for outbound Tor connections (0=disabled)"`
// Cashu access token configuration (NIP-XX)
CashuEnabled bool `env:"ORLY_CASHU_ENABLED" default:"false" usage:"enable Cashu blind signature tokens for access control"`
CashuTokenTTL string `env:"ORLY_CASHU_TOKEN_TTL" default:"168h" usage:"token validity duration (default: 1 week)"`
CashuKeysetTTL string `env:"ORLY_CASHU_KEYSET_TTL" default:"168h" usage:"keyset active signing period (default: 1 week)"`
CashuVerifyTTL string `env:"ORLY_CASHU_VERIFY_TTL" default:"504h" usage:"keyset verification period (default: 3 weeks)"`
CashuScopes string `env:"ORLY_CASHU_SCOPES" default:"relay,nip46" usage:"comma-separated list of allowed token scopes"`
CashuReauthorize bool `env:"ORLY_CASHU_REAUTHORIZE" default:"true" usage:"re-check ACL on each token verification for stateless revocation"`
// Nostr Relay Connect (NRC) configuration - tunnel private relay through public relay
NRCEnabled bool `env:"ORLY_NRC_ENABLED" default:"false" usage:"enable NRC bridge to expose this relay through a public rendezvous relay"`
NRCRendezvousURL string `env:"ORLY_NRC_RENDEZVOUS_URL" usage:"WebSocket URL of the public relay to use as rendezvous point (e.g., wss://relay.example.com)"`
NRCAuthorizedKeys string `env:"ORLY_NRC_AUTHORIZED_KEYS" usage:"comma-separated list of authorized client pubkeys (hex) for secret-based auth"`
NRCUseCashu bool `env:"ORLY_NRC_USE_CASHU" default:"false" usage:"use Cashu access tokens for NRC authentication instead of static secrets"`
NRCSessionTimeout string `env:"ORLY_NRC_SESSION_TIMEOUT" default:"30m" usage:"inactivity timeout for NRC sessions"`
// Cluster replication configuration
ClusterPropagatePrivilegedEvents bool `env:"ORLY_CLUSTER_PROPAGATE_PRIVILEGED_EVENTS" default:"true" usage:"propagate privileged events (DMs, gift wraps, etc.) to relay peers for replication"`
// Graph query configuration (NIP-XX)
GraphQueriesEnabled bool `env:"ORLY_GRAPH_QUERIES_ENABLED" default:"true" usage:"enable graph traversal queries (_graph filter extension)"`
GraphMaxDepth int `env:"ORLY_GRAPH_MAX_DEPTH" default:"16" usage:"maximum depth for graph traversal queries (1-16)"`
GraphMaxResults int `env:"ORLY_GRAPH_MAX_RESULTS" default:"10000" usage:"maximum pubkeys/events returned per graph query"`
GraphRateLimitRPM int `env:"ORLY_GRAPH_RATE_LIMIT_RPM" default:"60" usage:"graph queries per minute per connection (0=unlimited)"`
// Archive relay configuration (query augmentation from authoritative archives)
ArchiveEnabled bool `env:"ORLY_ARCHIVE_ENABLED" default:"false" usage:"enable archive relay query augmentation (fetch from archives, cache locally)"`
ArchiveRelays []string `env:"ORLY_ARCHIVE_RELAYS" default:"wss://archive.orly.dev/" usage:"comma-separated list of archive relay URLs for query augmentation"`
ArchiveTimeoutSec int `env:"ORLY_ARCHIVE_TIMEOUT_SEC" default:"30" usage:"timeout in seconds for archive relay queries"`
ArchiveCacheTTLHrs int `env:"ORLY_ARCHIVE_CACHE_TTL_HRS" default:"24" usage:"hours to cache query fingerprints to avoid repeated archive requests"`
// Storage management configuration (access-based garbage collection)
MaxStorageBytes int64 `env:"ORLY_MAX_STORAGE_BYTES" default:"0" usage:"maximum storage in bytes (0=auto-detect 80%% of filesystem)"`
GCEnabled bool `env:"ORLY_GC_ENABLED" default:"true" usage:"enable continuous garbage collection based on access patterns"`
GCIntervalSec int `env:"ORLY_GC_INTERVAL_SEC" default:"60" usage:"seconds between GC runs when storage exceeds limit"`
GCBatchSize int `env:"ORLY_GC_BATCH_SIZE" default:"1000" usage:"number of events to consider per GC run"`
// ServeMode is set programmatically by the 'serve' subcommand to grant full owner
// access to all users (no env tag - internal use only)
ServeMode bool
}
// New creates and initializes a new configuration object for the relay
@@ -130,6 +251,21 @@ func New() (cfg *C, err error) {
if cfg.LogToStdout {
lol.Writer = os.Stdout
}
// Initialize log buffer for web UI viewing
if cfg.LogBufferSize > 0 {
logbuffer.Init(cfg.LogBufferSize)
logbuffer.SetCurrentLevel(cfg.LogLevel)
lol.Writer = logbuffer.NewBufferedWriter(lol.Writer, logbuffer.GlobalBuffer)
// Reinitialize the loggers to use the new wrapped Writer
// The lol.Main logger is initialized in init() with os.Stderr directly,
// so we need to recreate it with the new Writer
l, c, e := lol.New(lol.Writer, 2)
lol.Main.Log = l
lol.Main.Check = c
lol.Main.Errorf = e
// Also update the log package convenience variables
log.F, log.E, log.W, log.I, log.D, log.T = l.F, l.E, l.W, l.I, l.D, l.T
}
lol.SetLogLevel(cfg.LogLevel)
return
}
@@ -193,6 +329,117 @@ func IdentityRequested() (requested bool) {
return
}
// ServeRequested checks if the first command line argument is "serve" and returns
// whether the relay should start in ephemeral serve mode with RAM-based storage.
//
// Return Values
// - requested: true if the 'serve' subcommand was provided, false otherwise.
func ServeRequested() (requested bool) {
if len(os.Args) > 1 {
switch strings.ToLower(os.Args[1]) {
case "serve":
requested = true
}
}
return
}
// VersionRequested checks if the first command line argument is "version" and returns
// whether the version should be printed and the program should exit.
//
// Return Values
// - requested: true if the 'version' subcommand was provided, false otherwise.
func VersionRequested() (requested bool) {
if len(os.Args) > 1 {
switch strings.ToLower(os.Args[1]) {
case "version", "-v", "--v", "-version", "--version":
requested = true
}
}
return
}
// CuratingModeRequested checks if the first command line argument is "curatingmode"
// and returns the owner npub/hex pubkey if provided.
//
// Return Values
// - requested: true if the 'curatingmode' subcommand was provided
// - ownerKey: the npub or hex pubkey provided as the second argument (empty if not provided)
func CuratingModeRequested() (requested bool, ownerKey string) {
if len(os.Args) > 1 {
switch strings.ToLower(os.Args[1]) {
case "curatingmode":
requested = true
if len(os.Args) > 2 {
ownerKey = os.Args[2]
}
}
}
return
}
// MigrateRequested checks if the first command line argument is "migrate"
// and returns the migration parameters.
//
// Return Values
// - requested: true if the 'migrate' subcommand was provided
// - fromType: source database type (badger, bbolt, neo4j)
// - toType: destination database type
// - targetPath: optional target path for destination database
func MigrateRequested() (requested bool, fromType, toType, targetPath string) {
if len(os.Args) > 1 {
switch strings.ToLower(os.Args[1]) {
case "migrate":
requested = true
// Parse --from, --to, --target-path flags
for i := 2; i < len(os.Args); i++ {
arg := os.Args[i]
switch {
case strings.HasPrefix(arg, "--from="):
fromType = strings.TrimPrefix(arg, "--from=")
case strings.HasPrefix(arg, "--to="):
toType = strings.TrimPrefix(arg, "--to=")
case strings.HasPrefix(arg, "--target-path="):
targetPath = strings.TrimPrefix(arg, "--target-path=")
case arg == "--from" && i+1 < len(os.Args):
i++
fromType = os.Args[i]
case arg == "--to" && i+1 < len(os.Args):
i++
toType = os.Args[i]
case arg == "--target-path" && i+1 < len(os.Args):
i++
targetPath = os.Args[i]
}
}
}
}
return
}
// NRCRequested checks if the first command line argument is "nrc" and returns
// the NRC subcommand parameters.
//
// Return Values
// - requested: true if the 'nrc' subcommand was provided
// - subcommand: the NRC subcommand (generate, list, revoke)
// - args: additional arguments for the subcommand
func NRCRequested() (requested bool, subcommand string, args []string) {
if len(os.Args) > 1 {
switch strings.ToLower(os.Args[1]) {
case "nrc":
requested = true
if len(os.Args) > 2 {
subcommand = strings.ToLower(os.Args[2])
if len(os.Args) > 3 {
args = os.Args[3:]
}
}
}
}
return
}
// KV is a key/value pair.
type KV struct{ Key, Value string }
@@ -324,13 +571,20 @@ func PrintHelp(cfg *C, printer io.Writer) {
)
_, _ = fmt.Fprintf(
printer,
`Usage: %s [env|help]
`Usage: %s [env|help|identity|migrate|serve|version]
- env: print environment variables configuring %s
- help: print this help text
- identity: print the relay identity secret and public key
- migrate: migrate data between database backends
Example: %s migrate --from badger --to bbolt
- serve: start ephemeral relay with RAM-based storage at /dev/shm/orlyserve
listening on 0.0.0.0:10547 with 'none' ACL mode (open relay)
useful for testing and benchmarking
- version: print version and exit (also: -v, --v, -version, --version)
`,
cfg.AppName, cfg.AppName,
cfg.AppName, cfg.AppName, cfg.AppName,
)
_, _ = fmt.Fprintf(
printer,
@@ -341,3 +595,267 @@ func PrintHelp(cfg *C, printer io.Writer) {
PrintEnv(cfg, printer)
fmt.Fprintln(printer)
}
// GetDatabaseConfigValues returns the database configuration values as individual fields.
// This avoids circular imports with pkg/database while allowing main.go to construct
// a database.DatabaseConfig with the correct type.
func (cfg *C) GetDatabaseConfigValues() (
dataDir, logLevel string,
blockCacheMB, indexCacheMB, queryCacheSizeMB int,
queryCacheMaxAge time.Duration,
queryCacheDisabled bool,
serialCachePubkeys, serialCacheEventIds int,
zstdLevel int,
neo4jURI, neo4jUser, neo4jPassword string,
neo4jMaxConnPoolSize, neo4jFetchSize, neo4jMaxTxRetrySeconds, neo4jQueryResultLimit int,
) {
// Parse query cache max age from string to duration
queryCacheMaxAge = 5 * time.Minute // Default
if cfg.QueryCacheMaxAge != "" {
if duration, err := time.ParseDuration(cfg.QueryCacheMaxAge); err == nil {
queryCacheMaxAge = duration
}
}
return cfg.DataDir, cfg.DBLogLevel,
cfg.DBBlockCacheMB, cfg.DBIndexCacheMB, cfg.QueryCacheSizeMB,
queryCacheMaxAge,
cfg.QueryCacheDisabled,
cfg.SerialCachePubkeys, cfg.SerialCacheEventIds,
cfg.DBZSTDLevel,
cfg.Neo4jURI, cfg.Neo4jUser, cfg.Neo4jPassword,
cfg.Neo4jMaxConnPoolSize, cfg.Neo4jFetchSize, cfg.Neo4jMaxTxRetrySeconds, cfg.Neo4jQueryResultLimit
}
// GetRateLimitConfigValues returns the rate limiting configuration values.
// This avoids circular imports with pkg/ratelimit while allowing main.go to construct
// a ratelimit.Config with the correct type.
func (cfg *C) GetRateLimitConfigValues() (
enabled bool,
targetMB int,
writeKp, writeKi, writeKd float64,
readKp, readKi, readKd float64,
maxWriteMs, maxReadMs int,
writeTarget, readTarget float64,
emergencyThreshold, recoveryThreshold float64,
emergencyMaxMs int,
) {
return cfg.RateLimitEnabled,
cfg.RateLimitTargetMB,
cfg.RateLimitWriteKp, cfg.RateLimitWriteKi, cfg.RateLimitWriteKd,
cfg.RateLimitReadKp, cfg.RateLimitReadKi, cfg.RateLimitReadKd,
cfg.RateLimitMaxWriteMs, cfg.RateLimitMaxReadMs,
cfg.RateLimitWriteTarget, cfg.RateLimitReadTarget,
cfg.RateLimitEmergencyThreshold, cfg.RateLimitRecoveryThreshold,
cfg.RateLimitEmergencyMaxMs
}
// GetWireGuardConfigValues returns the WireGuard VPN configuration values.
// This avoids circular imports with pkg/wireguard while allowing main.go to construct
// the WireGuard server configuration.
func (cfg *C) GetWireGuardConfigValues() (
enabled bool,
port int,
endpoint string,
network string,
bunkerEnabled bool,
bunkerPort int,
) {
return cfg.WGEnabled,
cfg.WGPort,
cfg.WGEndpoint,
cfg.WGNetwork,
cfg.BunkerEnabled,
cfg.BunkerPort
}
// GetCashuConfigValues returns the Cashu access token configuration values.
// This avoids circular imports with pkg/cashu while allowing main.go to construct
// the Cashu issuer/verifier configuration.
func (cfg *C) GetCashuConfigValues() (
enabled bool,
tokenTTL time.Duration,
keysetTTL time.Duration,
verifyTTL time.Duration,
scopes []string,
reauthorize bool,
) {
// Parse token TTL
tokenTTL = 168 * time.Hour // Default: 1 week
if cfg.CashuTokenTTL != "" {
if d, err := time.ParseDuration(cfg.CashuTokenTTL); err == nil {
tokenTTL = d
}
}
// Parse keyset TTL
keysetTTL = 168 * time.Hour // Default: 1 week
if cfg.CashuKeysetTTL != "" {
if d, err := time.ParseDuration(cfg.CashuKeysetTTL); err == nil {
keysetTTL = d
}
}
// Parse verify TTL
verifyTTL = 504 * time.Hour // Default: 3 weeks
if cfg.CashuVerifyTTL != "" {
if d, err := time.ParseDuration(cfg.CashuVerifyTTL); err == nil {
verifyTTL = d
}
}
// Parse scopes
if cfg.CashuScopes != "" {
scopes = strings.Split(cfg.CashuScopes, ",")
for i := range scopes {
scopes[i] = strings.TrimSpace(scopes[i])
}
}
return cfg.CashuEnabled,
tokenTTL,
keysetTTL,
verifyTTL,
scopes,
cfg.CashuReauthorize
}
// GetArchiveConfigValues returns the archive relay configuration values.
// This avoids circular imports with pkg/archive while allowing main.go to construct
// the archive manager configuration.
func (cfg *C) GetArchiveConfigValues() (
enabled bool,
relays []string,
timeoutSec int,
cacheTTLHrs int,
) {
return cfg.ArchiveEnabled,
cfg.ArchiveRelays,
cfg.ArchiveTimeoutSec,
cfg.ArchiveCacheTTLHrs
}
// GetStorageConfigValues returns the storage management configuration values.
// This avoids circular imports with pkg/storage while allowing main.go to construct
// the garbage collector and access tracker configuration.
func (cfg *C) GetStorageConfigValues() (
maxStorageBytes int64,
gcEnabled bool,
gcIntervalSec int,
gcBatchSize int,
) {
return cfg.MaxStorageBytes,
cfg.GCEnabled,
cfg.GCIntervalSec,
cfg.GCBatchSize
}
// GetTorConfigValues returns the Tor hidden service configuration values.
// This avoids circular imports with pkg/tor while allowing main.go to construct
// the Tor service configuration.
func (cfg *C) GetTorConfigValues() (
enabled bool,
port int,
dataDir string,
binary string,
socksPort int,
) {
dataDir = cfg.TorDataDir
if dataDir == "" {
dataDir = filepath.Join(cfg.DataDir, "tor")
}
return cfg.TorEnabled,
cfg.TorPort,
dataDir,
cfg.TorBinary,
cfg.TorSOCKS
}
// GetGraphConfigValues returns the graph query configuration values.
// This avoids circular imports with pkg/protocol/graph while allowing main.go
// to construct the graph executor configuration.
func (cfg *C) GetGraphConfigValues() (
enabled bool,
maxDepth int,
maxResults int,
rateLimitRPM int,
) {
maxDepth = cfg.GraphMaxDepth
if maxDepth < 1 {
maxDepth = 1
}
if maxDepth > 16 {
maxDepth = 16
}
return cfg.GraphQueriesEnabled,
maxDepth,
cfg.GraphMaxResults,
cfg.GraphRateLimitRPM
}
// GetBboltConfigValues returns the BBolt database configuration values.
// This avoids circular imports with pkg/bbolt while allowing main.go to construct
// the BBolt-specific configuration.
func (cfg *C) GetBboltConfigValues() (
batchMaxEvents int,
batchMaxBytes int64,
flushTimeoutSec int,
bloomSizeMB int,
noSync bool,
mmapSizeBytes int,
) {
return cfg.BboltBatchMaxEvents,
int64(cfg.BboltBatchMaxMB) * 1024 * 1024,
cfg.BboltFlushTimeout,
cfg.BboltBloomSizeMB,
cfg.BboltNoSync,
cfg.BboltMmapSizeMB * 1024 * 1024
}
// GetNRCConfigValues returns the NRC (Nostr Relay Connect) configuration values.
// This avoids circular imports with pkg/protocol/nrc while allowing main.go to construct
// the NRC bridge configuration.
func (cfg *C) GetNRCConfigValues() (
enabled bool,
rendezvousURL string,
authorizedKeys []string,
useCashu bool,
sessionTimeout time.Duration,
) {
// Parse session timeout
sessionTimeout = 30 * time.Minute // Default
if cfg.NRCSessionTimeout != "" {
if d, err := time.ParseDuration(cfg.NRCSessionTimeout); err == nil {
sessionTimeout = d
}
}
// Parse authorized keys
if cfg.NRCAuthorizedKeys != "" {
keys := strings.Split(cfg.NRCAuthorizedKeys, ",")
for _, k := range keys {
k = strings.TrimSpace(k)
if k != "" {
authorizedKeys = append(authorizedKeys, k)
}
}
}
return cfg.NRCEnabled,
cfg.NRCRendezvousURL,
authorizedKeys,
cfg.NRCUseCashu,
sessionTimeout
}
// GetFollowsThrottleConfigValues returns the progressive throttle configuration values
// for the follows ACL mode. This allows non-followed users to write with increasing delay.
func (cfg *C) GetFollowsThrottleConfigValues() (
enabled bool,
perEvent time.Duration,
maxDelay time.Duration,
) {
return cfg.FollowsThrottleEnabled,
cfg.FollowsThrottlePerEvent,
cfg.FollowsThrottleMaxDelay
}

View File

@@ -3,15 +3,27 @@ package app
import (
"lol.mleku.dev/chk"
"lol.mleku.dev/log"
"next.orly.dev/pkg/encoders/envelopes/authenvelope"
"next.orly.dev/pkg/encoders/envelopes/okenvelope"
"next.orly.dev/pkg/protocol/auth"
"git.mleku.dev/mleku/nostr/encoders/envelopes/authenvelope"
"git.mleku.dev/mleku/nostr/encoders/envelopes/okenvelope"
"git.mleku.dev/mleku/nostr/encoders/reason"
"git.mleku.dev/mleku/nostr/protocol/auth"
)
// zeroEventID is used for OK responses when we cannot parse the event ID
var zeroEventID = make([]byte, 32)
func (l *Listener) HandleAuth(b []byte) (err error) {
var rem []byte
env := authenvelope.NewResponse()
if rem, err = env.Unmarshal(b); chk.E(err) {
// NIP-42: AUTH messages MUST be answered with an OK message
// For parse failures, use zero event ID
log.E.F("%s AUTH unmarshal failed: %v", l.remote, err)
if writeErr := okenvelope.NewFrom(
zeroEventID, false, reason.Error.F("failed to parse auth event: %s", err),
).Write(l); chk.E(writeErr) {
return writeErr
}
return
}
defer func() {

app/handle-bunker.go (new file, 83 lines)
View File

@@ -0,0 +1,83 @@
package app
import (
"encoding/json"
"net/http"
"strings"
"git.mleku.dev/mleku/nostr/encoders/bech32encoding"
"git.mleku.dev/mleku/nostr/encoders/hex"
"git.mleku.dev/mleku/nostr/interfaces/signer/p8k"
"lol.mleku.dev/chk"
"lol.mleku.dev/log"
)
// BunkerInfoResponse is returned by the /api/bunker/info endpoint.
type BunkerInfoResponse struct {
RelayURL string `json:"relay_url"` // WebSocket URL for NIP-46 connections
RelayNpub string `json:"relay_npub"` // Relay's npub
RelayPubkey string `json:"relay_pubkey"` // Relay's hex pubkey
ACLMode string `json:"acl_mode"` // Current ACL mode
CashuEnabled bool `json:"cashu_enabled"` // Whether CAT is required
Available bool `json:"available"` // Whether bunker is available
}
// handleBunkerInfo returns bunker connection information.
// This is a public endpoint that doesn't require authentication.
func (s *Server) handleBunkerInfo(w http.ResponseWriter, r *http.Request) {
if r.Method != http.MethodGet {
http.Error(w, "Method not allowed", http.StatusMethodNotAllowed)
return
}
// Get relay identity
relaySecret, err := s.DB.GetOrCreateRelayIdentitySecret()
if chk.E(err) {
log.E.F("failed to get relay identity: %v", err)
http.Error(w, "Failed to get relay identity", http.StatusInternalServerError)
return
}
// Derive public key
sign, err := p8k.New()
if chk.E(err) {
http.Error(w, "Failed to create signer", http.StatusInternalServerError)
return
}
if err := sign.InitSec(relaySecret); chk.E(err) {
http.Error(w, "Failed to initialize signer", http.StatusInternalServerError)
return
}
relayPubkey := sign.Pub()
relayPubkeyHex := hex.Enc(relayPubkey)
// Encode as npub
relayNpubBytes, err := bech32encoding.BinToNpub(relayPubkey)
relayNpub := relayPubkeyHex // fallback to hex if encoding fails
if !chk.E(err) {
relayNpub = string(relayNpubBytes)
}
// Build WebSocket URL from service URL
serviceURL := s.ServiceURL(r)
wsURL := strings.Replace(serviceURL, "https://", "wss://", 1)
wsURL = strings.Replace(wsURL, "http://", "ws://", 1)
// Check if Cashu is enabled
cashuEnabled := s.CashuIssuer != nil
// Bunker is available when ACL mode is not "none"
available := s.Config.ACLMode != "none"
resp := BunkerInfoResponse{
RelayURL: wsURL,
RelayNpub: relayNpub,
RelayPubkey: relayPubkeyHex,
ACLMode: s.Config.ACLMode,
CashuEnabled: cashuEnabled,
Available: available,
}
w.Header().Set("Content-Type", "application/json")
json.NewEncoder(w).Encode(resp)
}

app/handle-cashu.go (new file, 154 lines)
View File

@@ -0,0 +1,154 @@
package app
import (
"encoding/hex"
"encoding/json"
"net/http"
"time"
"lol.mleku.dev/log"
"git.mleku.dev/mleku/nostr/httpauth"
"next.orly.dev/pkg/cashu/issuer"
"next.orly.dev/pkg/cashu/keyset"
"next.orly.dev/pkg/cashu/token"
)
// CashuMintRequest is the request body for token issuance.
type CashuMintRequest struct {
BlindedMessage string `json:"blinded_message"` // Hex-encoded blinded point B_
Scope string `json:"scope"` // Token scope (e.g., "relay", "nip46")
Kinds []int `json:"kinds,omitempty"` // Permitted event kinds
KindRanges [][]int `json:"kind_ranges,omitempty"` // Permitted kind ranges
}
// CashuMintResponse is the response body for token issuance.
type CashuMintResponse struct {
BlindedSignature string `json:"blinded_signature"` // Hex-encoded blinded signature C_
KeysetID string `json:"keyset_id"` // Keyset ID used
Expiry int64 `json:"expiry"` // Token expiration timestamp
MintPubkey string `json:"mint_pubkey"` // Hex-encoded mint public key
}
// handleCashuMint handles POST /cashu/mint - issues a new token.
func (s *Server) handleCashuMint(w http.ResponseWriter, r *http.Request) {
// Check if Cashu is enabled
if s.CashuIssuer == nil {
log.W.F("Cashu mint request but issuer not initialized")
http.Error(w, "Cashu tokens not enabled", http.StatusNotImplemented)
return
}
// Require NIP-98 authentication
valid, pubkey, err := httpauth.CheckAuth(r)
if err != nil {
authHeader := r.Header.Get("Authorization")
if len(authHeader) > 100 {
authHeader = authHeader[:100] + "..."
}
log.W.F("Cashu mint NIP-98 auth error: %v (valid=%v, authHeader=%q)", err, valid, authHeader)
http.Error(w, "NIP-98 authentication required", http.StatusUnauthorized)
return
}
if !valid {
log.W.F("Cashu mint NIP-98 auth invalid signature")
http.Error(w, "NIP-98 authentication required", http.StatusUnauthorized)
return
}
// Parse request body
var req CashuMintRequest
if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
http.Error(w, "Invalid request body", http.StatusBadRequest)
return
}
// Decode blinded message from hex
blindedMsg, err := hex.DecodeString(req.BlindedMessage)
if err != nil {
http.Error(w, "Invalid blinded_message: must be hex", http.StatusBadRequest)
return
}
// Default scope
if req.Scope == "" {
req.Scope = token.ScopeRelay
}
// Issue token
issueReq := &issuer.IssueRequest{
BlindedMessage: blindedMsg,
Pubkey: pubkey,
Scope: req.Scope,
Kinds: req.Kinds,
KindRanges: req.KindRanges,
}
resp, err := s.CashuIssuer.Issue(r.Context(), issueReq, r.RemoteAddr)
if err != nil {
log.W.F("Cashu mint failed for %x: %v", pubkey[:8], err)
http.Error(w, err.Error(), http.StatusForbidden)
return
}
log.D.F("Cashu token issued for %x, scope=%s, keyset=%s", pubkey[:8], req.Scope, resp.KeysetID)
// Return response
mintResp := CashuMintResponse{
BlindedSignature: hex.EncodeToString(resp.BlindedSignature),
KeysetID: resp.KeysetID,
Expiry: resp.Expiry,
MintPubkey: hex.EncodeToString(resp.MintPubkey),
}
w.Header().Set("Content-Type", "application/json")
json.NewEncoder(w).Encode(mintResp)
}
// handleCashuKeysets handles GET /cashu/keysets - returns available keysets.
func (s *Server) handleCashuKeysets(w http.ResponseWriter, r *http.Request) {
if s.CashuIssuer == nil {
http.Error(w, "Cashu tokens not enabled", http.StatusNotImplemented)
return
}
infos := s.CashuIssuer.GetKeysetInfo()
type KeysetsResponse struct {
Keysets []keyset.KeysetInfo `json:"keysets"`
}
w.Header().Set("Content-Type", "application/json")
json.NewEncoder(w).Encode(KeysetsResponse{Keysets: infos})
}
// handleCashuInfo handles GET /cashu/info - returns mint information.
func (s *Server) handleCashuInfo(w http.ResponseWriter, r *http.Request) {
if s.CashuIssuer == nil {
http.Error(w, "Cashu tokens not enabled", http.StatusNotImplemented)
return
}
info := s.CashuIssuer.GetMintInfo(s.Config.AppName)
w.Header().Set("Content-Type", "application/json")
json.NewEncoder(w).Encode(info)
}
// CashuTokenTTL returns the configured token TTL.
func (s *Server) CashuTokenTTL() time.Duration {
enabled, tokenTTL, _, _, _, _ := s.Config.GetCashuConfigValues()
if !enabled {
return 0
}
return tokenTTL
}
// CashuKeysetTTL returns the configured keyset TTL.
func (s *Server) CashuKeysetTTL() time.Duration {
enabled, _, keysetTTL, _, _, _ := s.Config.GetCashuConfigValues()
if !enabled {
return 0
}
return keysetTTL
}

View File

@@ -5,7 +5,7 @@ import (
"lol.mleku.dev/chk"
"lol.mleku.dev/log"
"next.orly.dev/pkg/encoders/envelopes/closeenvelope"
"git.mleku.dev/mleku/nostr/encoders/envelopes/closeenvelope"
)
// HandleClose processes a CLOSE envelope by unmarshalling the request,

View File

@@ -9,10 +9,10 @@ import (
"lol.mleku.dev/chk"
"lol.mleku.dev/log"
"next.orly.dev/pkg/acl"
"next.orly.dev/pkg/crypto/ec/schnorr"
"next.orly.dev/pkg/encoders/envelopes/authenvelope"
"next.orly.dev/pkg/encoders/envelopes/countenvelope"
"next.orly.dev/pkg/utils/normalize"
"git.mleku.dev/mleku/nostr/crypto/ec/schnorr"
"git.mleku.dev/mleku/nostr/encoders/envelopes/authenvelope"
"git.mleku.dev/mleku/nostr/encoders/envelopes/countenvelope"
"git.mleku.dev/mleku/nostr/utils/normalize"
)
// HandleCount processes a COUNT envelope by parsing the request, verifying

View File

@@ -4,14 +4,14 @@ import (
"lol.mleku.dev/chk"
"lol.mleku.dev/log"
"next.orly.dev/pkg/database/indexes/types"
"next.orly.dev/pkg/encoders/envelopes/eventenvelope"
"next.orly.dev/pkg/encoders/event"
"next.orly.dev/pkg/encoders/filter"
"next.orly.dev/pkg/encoders/hex"
"next.orly.dev/pkg/encoders/ints"
"next.orly.dev/pkg/encoders/kind"
"next.orly.dev/pkg/encoders/tag"
"next.orly.dev/pkg/encoders/tag/atag"
"git.mleku.dev/mleku/nostr/encoders/envelopes/eventenvelope"
"git.mleku.dev/mleku/nostr/encoders/event"
"git.mleku.dev/mleku/nostr/encoders/filter"
"git.mleku.dev/mleku/nostr/encoders/hex"
"git.mleku.dev/mleku/nostr/encoders/ints"
"git.mleku.dev/mleku/nostr/encoders/kind"
"git.mleku.dev/mleku/nostr/encoders/tag"
"git.mleku.dev/mleku/nostr/encoders/tag/atag"
utils "next.orly.dev/pkg/utils"
)
@@ -25,7 +25,15 @@ func (l *Listener) HandleDelete(env *eventenvelope.Submission) (err error) {
log.I.F("HandleDelete: processing delete event %0x from pubkey %0x", env.E.ID, env.E.Pubkey)
log.I.F("HandleDelete: delete event tags: %d tags", len(*env.E.Tags))
for i, t := range *env.E.Tags {
log.I.F("HandleDelete: tag %d: %s = %s", i, string(t.Key()), string(t.Value()))
// Use ValueHex() for e/p tags to properly display binary-encoded values
key := string(t.Key())
var val string
if key == "e" || key == "p" {
val = string(t.ValueHex()) // Properly converts binary to hex
} else {
val = string(t.Value())
}
log.I.F("HandleDelete: tag %d: %s = %s", i, key, val)
}
// Debug: log admin and owner lists
@@ -142,20 +150,21 @@ func (l *Listener) HandleDelete(env *eventenvelope.Submission) (err error) {
// if e tags are found, delete them if the author is signer, or one of
// the owners is signer
if utils.FastEqual(t.Key(), []byte("e")) {
val := t.Value()
if len(val) == 0 {
// Use ValueHex() which properly handles both binary-encoded and hex string formats
hexVal := t.ValueHex()
if len(hexVal) == 0 {
log.W.F("HandleDelete: empty e-tag value")
continue
}
log.I.F("HandleDelete: processing e-tag with value: %s", string(val))
var dst []byte
if b, e := hex.Dec(string(val)); chk.E(e) {
log.E.F("HandleDelete: failed to decode hex event ID %s: %v", string(val), e)
log.I.F("HandleDelete: processing e-tag event ID: %s", string(hexVal))
// Decode hex to binary for filter
dst, e := hex.Dec(string(hexVal))
if chk.E(e) {
log.E.F("HandleDelete: failed to decode event ID %s: %v", string(hexVal), e)
continue
} else {
dst = b
log.I.F("HandleDelete: decoded event ID: %0x", dst)
}
f := &filter.F{
Ids: tag.NewFromBytesSlice(dst),
}
@@ -164,7 +173,7 @@ func (l *Listener) HandleDelete(env *eventenvelope.Submission) (err error) {
log.E.F("HandleDelete: failed to get serials from filter: %v", err)
continue
}
log.I.F("HandleDelete: found %d serials for event ID %s", len(sers), string(val))
log.I.F("HandleDelete: found %d serials for event ID %0x", len(sers), dst)
// if found, delete them
if len(sers) > 0 {
// there should be only one event per serial, so we can just

app/handle-event-types.go (new file, 72 lines)
View File

@@ -0,0 +1,72 @@
package app
import (
"git.mleku.dev/mleku/nostr/encoders/envelopes/eventenvelope"
"git.mleku.dev/mleku/nostr/encoders/envelopes/okenvelope"
"git.mleku.dev/mleku/nostr/encoders/reason"
"next.orly.dev/pkg/event/authorization"
"next.orly.dev/pkg/event/routing"
"next.orly.dev/pkg/event/validation"
)
// sendValidationError sends an appropriate OK response for a validation failure.
func (l *Listener) sendValidationError(env eventenvelope.I, result validation.Result) error {
var r []byte
switch result.Code {
case validation.ReasonBlocked:
r = reason.Blocked.F(result.Msg)
case validation.ReasonInvalid:
r = reason.Invalid.F(result.Msg)
case validation.ReasonError:
r = reason.Error.F(result.Msg)
default:
r = reason.Error.F(result.Msg)
}
return okenvelope.NewFrom(env.Id(), false, r).Write(l)
}
// sendAuthorizationDenied sends an appropriate OK response for an authorization denial.
func (l *Listener) sendAuthorizationDenied(env eventenvelope.I, decision authorization.Decision) error {
var r []byte
if decision.RequireAuth {
r = reason.AuthRequired.F(decision.DenyReason)
} else {
r = reason.Blocked.F(decision.DenyReason)
}
return okenvelope.NewFrom(env.Id(), false, r).Write(l)
}
// sendRoutingError sends an appropriate OK response for a routing error.
func (l *Listener) sendRoutingError(env eventenvelope.I, result routing.Result) error {
if result.Error != nil {
return okenvelope.NewFrom(env.Id(), false, reason.Error.F(result.Error.Error())).Write(l)
}
return nil
}
// sendProcessingError sends an appropriate OK response for a processing failure.
func (l *Listener) sendProcessingError(env eventenvelope.I, msg string) error {
return okenvelope.NewFrom(env.Id(), false, reason.Error.F(msg)).Write(l)
}
// sendProcessingBlocked sends an appropriate OK response for a blocked event.
func (l *Listener) sendProcessingBlocked(env eventenvelope.I, msg string) error {
return okenvelope.NewFrom(env.Id(), false, reason.Blocked.F(msg)).Write(l)
}
// sendRawValidationError sends an OK response for raw JSON validation failure (before unmarshal).
// Since we don't have an event ID at this point, we pass nil.
func (l *Listener) sendRawValidationError(result validation.Result) error {
var r []byte
switch result.Code {
case validation.ReasonBlocked:
r = reason.Blocked.F(result.Msg)
case validation.ReasonInvalid:
r = reason.Invalid.F(result.Msg)
case validation.ReasonError:
r = reason.Error.F(result.Msg)
default:
r = reason.Error.F(result.Msg)
}
return okenvelope.NewFrom(nil, false, r).Write(l)
}

View File

@@ -2,25 +2,41 @@ package app
import (
"context"
"fmt"
"strings"
"time"
"lol.mleku.dev/chk"
"lol.mleku.dev/log"
"next.orly.dev/pkg/acl"
"next.orly.dev/pkg/encoders/envelopes/authenvelope"
"next.orly.dev/pkg/encoders/envelopes/eventenvelope"
"next.orly.dev/pkg/encoders/envelopes/okenvelope"
"next.orly.dev/pkg/encoders/hex"
"next.orly.dev/pkg/encoders/kind"
"next.orly.dev/pkg/encoders/reason"
"next.orly.dev/pkg/cashu/token"
"next.orly.dev/pkg/event/routing"
"git.mleku.dev/mleku/nostr/encoders/envelopes/authenvelope"
"git.mleku.dev/mleku/nostr/encoders/envelopes/eventenvelope"
"git.mleku.dev/mleku/nostr/encoders/envelopes/noticeenvelope"
"git.mleku.dev/mleku/nostr/encoders/envelopes/okenvelope"
"git.mleku.dev/mleku/nostr/encoders/event"
"git.mleku.dev/mleku/nostr/encoders/hex"
"git.mleku.dev/mleku/nostr/encoders/kind"
"git.mleku.dev/mleku/nostr/encoders/reason"
"next.orly.dev/pkg/protocol/nip43"
"next.orly.dev/pkg/utils"
)
func (l *Listener) HandleEvent(msg []byte) (err error) {
log.D.F("HandleEvent: START handling event: %s", msg)
// 1. Raw JSON validation (before unmarshal) - use validation service
if result := l.eventValidator.ValidateRawJSON(msg); !result.Valid {
log.W.F("HandleEvent: rejecting event with validation error: %s", result.Msg)
// Send NOTICE to alert client developers about the issue
if noticeErr := noticeenvelope.NewFrom(result.Msg).Write(l); noticeErr != nil {
log.E.F("failed to send NOTICE for validation error: %v", noticeErr)
}
// Send OK false with the error message
if err = l.sendRawValidationError(result); chk.E(err) {
return
}
return nil
}
// decode the envelope
env := eventenvelope.NewSubmission()
log.I.F("HandleEvent: received event message length: %d", len(msg))
@@ -110,105 +126,43 @@ func (l *Listener) HandleEvent(msg []byte) (err error) {
}
}
// Check if policy is enabled and process event through it
if l.policyManager != nil && l.policyManager.Manager != nil && l.policyManager.Manager.IsEnabled() {
// Check policy for write access
allowed, policyErr := l.policyManager.CheckPolicy("write", env.E, l.authedPubkey.Load(), l.remote)
if chk.E(policyErr) {
log.E.F("policy check failed: %v", policyErr)
if err = Ok.Error(
l, env, "policy check failed",
); chk.E(err) {
return
}
return
}
if !allowed {
log.D.F("policy rejected event %0x", env.E.ID)
if err = Ok.Blocked(
l, env, "event blocked by policy",
); chk.E(err) {
return
}
return
}
log.D.F("policy allowed event %0x", env.E.ID)
// Check ACL policy for managed ACL mode, but skip for peer relay sync events
if acl.Registry.Active.Load() == "managed" && !l.isPeerRelayPubkey(l.authedPubkey.Load()) {
allowed, aclErr := acl.Registry.CheckPolicy(env.E)
if chk.E(aclErr) {
log.E.F("ACL policy check failed: %v", aclErr)
if err = Ok.Error(
l, env, "ACL policy check failed",
); chk.E(err) {
return
}
return
}
if !allowed {
log.D.F("ACL policy rejected event %0x", env.E.ID)
if err = Ok.Blocked(
l, env, "event blocked by ACL policy",
); chk.E(err) {
return
}
return
}
log.D.F("ACL policy allowed event %0x", env.E.ID)
}
}
// check the event ID is correct
calculatedId := env.E.GetIDBytes()
if !utils.FastEqual(calculatedId, env.E.ID) {
if err = Ok.Invalid(
l, env, "event id is computed incorrectly, "+
"event has ID %0x, but when computed it is %0x",
env.E.ID, calculatedId,
); chk.E(err) {
return
}
return
}
// validate timestamp - reject events too far in the future (more than 1 hour)
now := time.Now().Unix()
if env.E.CreatedAt > now+3600 {
if err = Ok.Invalid(
l, env,
"timestamp too far in the future",
); chk.E(err) {
// Event validation (ID, timestamp, signature) - use validation service
if result := l.eventValidator.ValidateEvent(env.E); !result.Valid {
if err = l.sendValidationError(env, result); chk.E(err) {
return
}
return
}
// verify the signature
var ok bool
if ok, err = env.Verify(); chk.T(err) {
if err = Ok.Error(
l, env, fmt.Sprintf(
"failed to verify signature: %s",
err.Error(),
),
); chk.E(err) {
return
}
} else if !ok {
if err = Ok.Invalid(
l, env,
"signature is invalid",
); chk.E(err) {
// Check Cashu token kind permissions if a token was provided
if l.cashuToken != nil && !l.cashuToken.IsKindPermitted(int(env.E.Kind)) {
log.W.F("HandleEvent: rejecting event kind %d - not permitted by Cashu token", env.E.Kind)
if err = Ok.Error(l, env, "event kind not permitted by access token"); chk.E(err) {
return
}
return
}
// Require Cashu token for NIP-46 events when Cashu is enabled and ACL is active
const kindNIP46 = 24133
if env.E.Kind == kindNIP46 && l.CashuVerifier != nil && l.Config.ACLMode != "none" {
if l.cashuToken == nil {
log.W.F("HandleEvent: rejecting NIP-46 event - Cashu access token required")
if err = Ok.Error(l, env, "restricted: NIP-46 requires Cashu access token"); chk.E(err) {
return
}
return
}
// Also verify the token has NIP-46 scope
if l.cashuToken.Scope != token.ScopeNIP46 && l.cashuToken.Scope != token.ScopeRelay {
log.W.F("HandleEvent: rejecting NIP-46 event - token scope %q not valid for NIP-46", l.cashuToken.Scope)
if err = Ok.Error(l, env, "restricted: access token scope not valid for NIP-46"); chk.E(err) {
return
}
return
}
}
// Handle NIP-43 special events before ACL checks
switch env.E.Kind {
case nip43.KindJoinRequest:
@@ -223,325 +177,170 @@ func (l *Listener) HandleEvent(msg []byte) (err error) {
log.E.F("failed to process NIP-43 leave request: %v", err)
}
return
}
// check permissions of user
log.I.F(
"HandleEvent: checking ACL permissions for pubkey: %s",
hex.Enc(l.authedPubkey.Load()),
)
// If ACL mode is "none" and no pubkey is set, use the event's pubkey
// But if auth is required or AuthToWrite is enabled, always use the authenticated pubkey
var pubkeyForACL []byte
if len(l.authedPubkey.Load()) == 0 && acl.Registry.Active.Load() == "none" && !l.Config.AuthRequired && !l.Config.AuthToWrite {
pubkeyForACL = env.E.Pubkey
log.I.F(
"HandleEvent: ACL mode is 'none' and auth not required, using event pubkey for ACL check: %s",
hex.Enc(pubkeyForACL),
)
} else {
pubkeyForACL = l.authedPubkey.Load()
}
// If auth is required or AuthToWrite is enabled but user is not authenticated, deny access
if (l.Config.AuthRequired || l.Config.AuthToWrite) && len(l.authedPubkey.Load()) == 0 {
log.D.F("HandleEvent: authentication required for write operations but user not authenticated")
if err = okenvelope.NewFrom(
env.Id(), false,
reason.AuthRequired.F("authentication required for write operations"),
).Write(l); chk.E(err) {
return
}
// Send AUTH challenge to prompt authentication
log.D.F("HandleEvent: sending AUTH challenge to %s", l.remote)
if err = authenvelope.NewChallengeWith(l.challenge.Load()).
Write(l); chk.E(err) {
return
}
return
}
accessLevel := acl.Registry.GetAccessLevel(pubkeyForACL, l.remote)
log.I.F("HandleEvent: ACL access level: %s", accessLevel)
// Skip ACL check for admin/owner delete events
skipACLCheck := false
if env.E.Kind == kind.EventDeletion.K {
// Check if the delete event signer is admin or owner
for _, admin := range l.Admins {
if utils.FastEqual(admin, env.E.Pubkey) {
skipACLCheck = true
log.I.F("HandleEvent: admin delete event - skipping ACL check")
break
}
}
if !skipACLCheck {
for _, owner := range l.Owners {
if utils.FastEqual(owner, env.E.Pubkey) {
skipACLCheck = true
log.I.F("HandleEvent: owner delete event - skipping ACL check")
break
}
}
}
}
if !skipACLCheck {
switch accessLevel {
case "none":
log.D.F(
"handle event: sending 'OK,false,auth-required...' to %s",
l.remote,
)
if err = okenvelope.NewFrom(
env.Id(), false,
reason.AuthRequired.F("auth required for write access"),
).Write(l); chk.E(err) {
// return
}
log.D.F("handle event: sending challenge to %s", l.remote)
if err = authenvelope.NewChallengeWith(l.challenge.Load()).
Write(l); chk.E(err) {
return
}
return
case "read":
log.D.F(
"handle event: sending 'OK,false,auth-required:...' to %s",
l.remote,
)
if err = okenvelope.NewFrom(
env.Id(), false,
reason.AuthRequired.F("auth required for write access"),
).Write(l); chk.E(err) {
return
}
log.D.F("handle event: sending challenge to %s", l.remote)
if err = authenvelope.NewChallengeWith(l.challenge.Load()).
Write(l); chk.E(err) {
return
}
return
case "blocked":
log.D.F(
"handle event: sending 'OK,false,blocked...' to %s",
l.remote,
)
if err = okenvelope.NewFrom(
env.Id(), false,
reason.AuthRequired.F("IP address blocked"),
).Write(l); chk.E(err) {
return
}
return
case "banned":
log.D.F(
"handle event: sending 'OK,false,banned...' to %s",
l.remote,
)
if err = okenvelope.NewFrom(
env.Id(), false,
reason.AuthRequired.F("pubkey banned"),
).Write(l); chk.E(err) {
return
}
return
default:
// user has write access or better, continue
log.I.F("HandleEvent: user has %s access, continuing", accessLevel)
}
} else {
log.I.F("HandleEvent: skipping ACL check for admin/owner delete event")
}
// check if event is ephemeral - if so, deliver and return early
if kind.IsEphemeral(env.E.Kind) {
log.D.F("handling ephemeral event %0x (kind %d)", env.E.ID, env.E.Kind)
// Send OK response for ephemeral events
if err = Ok.Ok(l, env, ""); chk.E(err) {
return
}
// Deliver the event to subscribers immediately
clonedEvent := env.E.Clone()
go l.publishers.Deliver(clonedEvent)
log.D.F("delivered ephemeral event %0x", env.E.ID)
return
}
log.D.F("processing regular event %0x (kind %d)", env.E.ID, env.E.Kind)
// check for protected tag (NIP-70)
protectedTag := env.E.Tags.GetFirst([]byte("-"))
if protectedTag != nil && acl.Registry.Active.Load() != "none" {
// check that the pubkey of the event matches the authed pubkey
if !utils.FastEqual(l.authedPubkey.Load(), env.E.Pubkey) {
if err = Ok.Blocked(
l, env,
"protected tag may only be published by user authed to the same pubkey",
); chk.E(err) {
return
}
return
}
}
// if the event is a delete, process the delete
log.I.F(
"HandleEvent: checking if event is delete - kind: %d, EventDeletion.K: %d",
env.E.Kind, kind.EventDeletion.K,
)
if env.E.Kind == kind.EventDeletion.K {
log.I.F("processing delete event %0x", env.E.ID)
// Store the delete event itself FIRST to ensure it's available for queries
saveCtx, cancel := context.WithTimeout(
context.Background(), 30*time.Second,
)
defer cancel()
log.I.F(
"attempting to save delete event %0x from pubkey %0x", env.E.ID,
env.E.Pubkey,
)
log.I.F("delete event pubkey hex: %s", hex.Enc(env.E.Pubkey))
if _, err = l.DB.SaveEvent(saveCtx, env.E); err != nil {
log.E.F("failed to save delete event %0x: %v", env.E.ID, err)
if strings.HasPrefix(err.Error(), "blocked:") {
errStr := err.Error()[len("blocked: "):len(err.Error())]
if err = Ok.Error(
l, env, errStr,
); chk.E(err) {
return
}
return
}
chk.E(err)
return
}
log.I.F("successfully saved delete event %0x", env.E.ID)
// Now process the deletion (remove target events)
if err = l.HandleDelete(env); err != nil {
log.E.F("HandleDelete failed for event %0x: %v", env.E.ID, err)
if strings.HasPrefix(err.Error(), "blocked:") {
errStr := err.Error()[len("blocked: "):len(err.Error())]
if err = Ok.Error(
l, env, errStr,
); chk.E(err) {
return
}
return
}
// For non-blocked errors, still send OK but log the error
log.W.F("Delete processing failed but continuing: %v", err)
} else {
log.I.F(
"HandleDelete completed successfully for event %0x", env.E.ID,
)
}
// Send OK response for delete events
if err = Ok.Ok(l, env, ""); chk.E(err) {
return
}
// Deliver the delete event to subscribers
clonedEvent := env.E.Clone()
go l.publishers.Deliver(clonedEvent)
log.D.F("processed delete event %0x", env.E.ID)
return
} else {
// check if the event was deleted
// Combine admins and owners for deletion checking
adminOwners := append(l.Admins, l.Owners...)
if err = l.DB.CheckForDeleted(env.E, adminOwners); err != nil {
if strings.HasPrefix(err.Error(), "blocked:") {
errStr := err.Error()[len("blocked: "):len(err.Error())]
if err = Ok.Error(
l, env, errStr,
); chk.E(err) {
return
}
}
}
}
// store the event - use a separate context to prevent cancellation issues
saveCtx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()
// log.I.F("saving event %0x, %s", env.E.ID, env.E.Serialize())
if _, err = l.DB.SaveEvent(saveCtx, env.E); err != nil {
if strings.HasPrefix(err.Error(), "blocked:") {
errStr := err.Error()[len("blocked: "):len(err.Error())]
if err = Ok.Error(
l, env, errStr,
); chk.E(err) {
return
}
return
}
chk.E(err)
return
}
// Handle relay group configuration events
if l.relayGroupMgr != nil {
if err := l.relayGroupMgr.ValidateRelayGroupEvent(env.E); err != nil {
log.W.F("invalid relay group config event %s: %v", hex.Enc(env.E.ID), err)
}
// Process the event and potentially update peer lists
if l.syncManager != nil {
l.relayGroupMgr.HandleRelayGroupEvent(env.E, l.syncManager)
}
}
// Handle cluster membership events (Kind 39108)
if env.E.Kind == 39108 && l.clusterManager != nil {
if err := l.clusterManager.HandleMembershipEvent(env.E); err != nil {
log.W.F("invalid cluster membership event %s: %v", hex.Enc(env.E.ID), err)
}
}
// Update serial for distributed synchronization
if l.syncManager != nil {
l.syncManager.UpdateSerial()
log.D.F("updated serial for event %s", hex.Enc(env.E.ID))
}
// Send a success response storing
if err = Ok.Ok(l, env, ""); chk.E(err) {
return
}
// Deliver the event to subscribers immediately after sending OK response
// Clone the event to prevent corruption when the original is freed
clonedEvent := env.E.Clone()
go l.publishers.Deliver(clonedEvent)
log.D.F("saved event %0x", env.E.ID)
var isNewFromAdmin bool
// Check if event is from admin or owner
for _, admin := range l.Admins {
if utils.FastEqual(admin, env.E.Pubkey) {
isNewFromAdmin = true
break
}
}
if !isNewFromAdmin {
for _, owner := range l.Owners {
if utils.FastEqual(owner, env.E.Pubkey) {
isNewFromAdmin = true
break
}
}
}
if isNewFromAdmin {
log.I.F("new event from admin %0x", env.E.Pubkey)
// if a follow list was saved, reconfigure ACLs now that it is persisted
if env.E.Kind == kind.FollowList.K ||
env.E.Kind == kind.RelayListMetadata.K {
// Run ACL reconfiguration asynchronously to prevent blocking websocket operations
go func() {
if err := acl.Registry.Configure(); chk.E(err) {
log.E.F("failed to reconfigure ACL: %v", err)
}
}()
}
}
case acl.CuratingConfigKind:
// Handle curating configuration events (kind 30078 with d-tag "curating-config")
// Check if this is a curating config event (verify d-tag)
dTag := env.E.Tags.GetFirst([]byte("d"))
if dTag != nil && string(dTag.Value()) == acl.CuratingConfigDTag {
if err = l.HandleCuratingConfigUpdate(env.E); chk.E(err) {
log.E.F("failed to process curating config update: %v", err)
if err = Ok.Error(l, env, err.Error()); chk.E(err) {
return
}
return
}
// Save the event and send OK response
result := l.eventProcessor.Process(context.Background(), env.E)
if result.Error != nil {
log.E.F("failed to save curating config event: %v", result.Error)
}
if err = Ok.Ok(l, env, "curating configuration updated"); chk.E(err) {
return
}
return
}
// Not a curating config event, continue with normal processing
case kind.PolicyConfig.K:
// Handle policy configuration update events (kind 12345)
// Only policy admins can update policy configuration
if err = l.HandlePolicyConfigUpdate(env.E); chk.E(err) {
log.E.F("failed to process policy config update: %v", err)
if err = Ok.Error(l, env, err.Error()); chk.E(err) {
return
}
return
}
// Send OK response
if err = Ok.Ok(l, env, "policy configuration updated"); chk.E(err) {
return
}
return
case kind.FollowList.K:
// Check if this is a follow list update from a policy admin
// If so, refresh the policy follows cache immediately
if l.IsPolicyAdminFollowListEvent(env.E) {
// Process the follow list update (async, don't block)
go func() {
if updateErr := l.HandlePolicyAdminFollowListUpdate(env.E); updateErr != nil {
log.W.F("failed to update policy follows from admin follow list: %v", updateErr)
}
}()
}
// Continue with normal follow list processing (store the event)
}
// Authorization check (policy + ACL) - use authorization service
decision := l.eventAuthorizer.Authorize(env.E, l.authedPubkey.Load(), l.remote, env.E.Kind)
if !decision.Allowed {
log.D.F("HandleEvent: authorization denied: %s (requireAuth=%v)", decision.DenyReason, decision.RequireAuth)
if decision.RequireAuth {
// Send OK false with reason
if err = okenvelope.NewFrom(
env.Id(), false,
reason.AuthRequired.F(decision.DenyReason),
).Write(l); chk.E(err) {
return
}
// Send AUTH challenge
if err = authenvelope.NewChallengeWith(l.challenge.Load()).Write(l); chk.E(err) {
return
}
} else {
// Send OK false with blocked reason
if err = Ok.Blocked(l, env, decision.DenyReason); chk.E(err) {
return
}
}
return
}
log.I.F("HandleEvent: authorized with access level %s", decision.AccessLevel)
// Progressive throttle for follows ACL mode (delays non-followed users)
if delay := l.getFollowsThrottleDelay(env.E); delay > 0 {
log.D.F("HandleEvent: applying progressive throttle delay of %v for %0x from %s",
delay, env.E.Pubkey, l.remote)
select {
case <-l.ctx.Done():
return l.ctx.Err()
case <-time.After(delay):
// Delay completed, continue processing
}
}
// Route special event kinds (ephemeral, etc.) - use routing service
if routeResult := l.eventRouter.Route(env.E, l.authedPubkey.Load()); routeResult.Action != routing.Continue {
if routeResult.Action == routing.Handled {
// Event fully handled by router, send OK and return
log.D.F("event %0x handled by router", env.E.ID)
if err = Ok.Ok(l, env, routeResult.Message); chk.E(err) {
return
}
return
} else if routeResult.Action == routing.Error {
// Router encountered an error
if err = l.sendRoutingError(env, routeResult); chk.E(err) {
return
}
return
}
}
log.D.F("processing regular event %0x (kind %d)", env.E.ID, env.E.Kind)
// NIP-70 protected tag validation - use validation service
if acl.Registry.Active.Load() != "none" {
if result := l.eventValidator.ValidateProtectedTag(env.E, l.authedPubkey.Load()); !result.Valid {
if err = l.sendValidationError(env, result); chk.E(err) {
return
}
return
}
}
// Handle delete events specially - save first, then process deletions
if env.E.Kind == kind.EventDeletion.K {
log.I.F("processing delete event %0x", env.E.ID)
// Save and deliver using processing service
result := l.eventProcessor.Process(context.Background(), env.E)
if result.Blocked {
if err = Ok.Error(l, env, result.BlockMsg); chk.E(err) {
return
}
return
}
if result.Error != nil {
chk.E(result.Error)
return
}
// Process deletion targets (remove referenced events)
if err = l.HandleDelete(env); err != nil {
log.W.F("HandleDelete failed for event %0x: %v", env.E.ID, err)
}
if err = Ok.Ok(l, env, ""); chk.E(err) {
return
}
log.D.F("processed delete event %0x", env.E.ID)
return
}
// Process event: save, run hooks, and deliver to subscribers
result := l.eventProcessor.Process(context.Background(), env.E)
if result.Blocked {
if err = Ok.Error(l, env, result.BlockMsg); chk.E(err) {
return
}
return
}
if result.Error != nil {
chk.E(result.Error)
return
}
// Send success response
if err = Ok.Ok(l, env, ""); chk.E(err) {
return
}
log.D.F("saved event %0x", env.E.ID)
return
}
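
The net effect of this hunk is that HandleEvent now delegates to four injected services (eventAuthorizer, eventRouter, eventValidator, eventProcessor) instead of inlining the ACL, routing, validation, and storage logic. The sketch below shows the shapes those call sites imply; the names and field types are assumptions inferred only from the usage above, not the repository's actual interfaces.

package sketch

// Decision mirrors the fields read from eventAuthorizer.Authorize(...).
type Decision struct {
	Allowed     bool   // false: reject the event
	RequireAuth bool   // true: reply OK=false and send an AUTH challenge
	DenyReason  string // forwarded to the client in the OK envelope
	AccessLevel string // logged on success
}

// RouteAction mirrors routing.Continue / routing.Handled / routing.Error.
type RouteAction int

const (
	Continue RouteAction = iota // fall through to normal processing
	Handled                     // fully handled (e.g. ephemeral events); send OK
	Error                       // routing failed; report via sendRoutingError
)

// RouteResult mirrors the fields read from eventRouter.Route(...).
type RouteResult struct {
	Action  RouteAction
	Message string // OK message when the router fully handled the event
}

// ProcessResult mirrors the fields read from eventProcessor.Process(...),
// which saves the event, runs hooks, and delivers it to subscribers.
type ProcessResult struct {
	Blocked  bool   // storage refused the event with a "blocked:" reason
	BlockMsg string // the reason sent back via Ok.Error
	Error    error  // any other storage failure
}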
@@ -562,3 +361,22 @@ func (l *Listener) isPeerRelayPubkey(pubkey []byte) bool {
return false
}
// HandleCuratingConfigUpdate processes curating configuration events (kind 30078)
func (l *Listener) HandleCuratingConfigUpdate(ev *event.E) error {
// Check if curating ACL is active
if acl.Registry.Type() != "curating" {
return nil // Ignore config events if not in curating mode
}
// Find the curating ACL instance
for _, aclInstance := range acl.Registry.ACL {
if aclInstance.Type() == "curating" {
if curating, ok := aclInstance.(*acl.Curating); ok {
return curating.ProcessConfigEvent(ev)
}
}
}
return nil
}

app/handle-logs.go (new file)

@@ -0,0 +1,185 @@
package app
import (
"encoding/json"
"net/http"
"strconv"
lol "lol.mleku.dev"
"lol.mleku.dev/chk"
"git.mleku.dev/mleku/nostr/httpauth"
"next.orly.dev/pkg/acl"
"next.orly.dev/pkg/logbuffer"
)
// LogsResponse is the response structure for GET /api/logs
type LogsResponse struct {
Logs []logbuffer.LogEntry `json:"logs"`
Total int `json:"total"`
HasMore bool `json:"has_more"`
}
// LogLevelResponse is the response structure for GET /api/logs/level
type LogLevelResponse struct {
Level string `json:"level"`
}
// LogLevelRequest is the request structure for POST /api/logs/level
type LogLevelRequest struct {
Level string `json:"level"`
}
// handleGetLogs handles GET /api/logs
func (s *Server) handleGetLogs(w http.ResponseWriter, r *http.Request) {
if r.Method != http.MethodGet {
http.Error(w, "Method not allowed", http.StatusMethodNotAllowed)
return
}
// Validate NIP-98 authentication
valid, pubkey, err := httpauth.CheckAuth(r)
if chk.E(err) || !valid {
errorMsg := "NIP-98 authentication validation failed"
if err != nil {
errorMsg = err.Error()
}
http.Error(w, errorMsg, http.StatusUnauthorized)
return
}
// Check permissions - require owner level only
accessLevel := acl.Registry.GetAccessLevel(pubkey, r.RemoteAddr)
if accessLevel != "owner" {
http.Error(w, "Owner permission required", http.StatusForbidden)
return
}
// Check if log buffer is available
if logbuffer.GlobalBuffer == nil {
http.Error(w, "Log buffer not enabled", http.StatusServiceUnavailable)
return
}
// Parse query parameters
offset := 0
limit := 100
if offsetStr := r.URL.Query().Get("offset"); offsetStr != "" {
if v, err := strconv.Atoi(offsetStr); err == nil && v >= 0 {
offset = v
}
}
if limitStr := r.URL.Query().Get("limit"); limitStr != "" {
if v, err := strconv.Atoi(limitStr); err == nil && v > 0 && v <= 500 {
limit = v
}
}
// Get logs from buffer
logs := logbuffer.GlobalBuffer.Get(offset, limit)
total := logbuffer.GlobalBuffer.Count()
hasMore := offset+len(logs) < total
response := LogsResponse{
Logs: logs,
Total: total,
HasMore: hasMore,
}
w.Header().Set("Content-Type", "application/json")
json.NewEncoder(w).Encode(response)
}
// handleClearLogs handles POST /api/logs/clear
func (s *Server) handleClearLogs(w http.ResponseWriter, r *http.Request) {
if r.Method != http.MethodPost {
http.Error(w, "Method not allowed", http.StatusMethodNotAllowed)
return
}
// Validate NIP-98 authentication
valid, pubkey, err := httpauth.CheckAuth(r)
if chk.E(err) || !valid {
errorMsg := "NIP-98 authentication validation failed"
if err != nil {
errorMsg = err.Error()
}
http.Error(w, errorMsg, http.StatusUnauthorized)
return
}
// Check permissions - require owner level only
accessLevel := acl.Registry.GetAccessLevel(pubkey, r.RemoteAddr)
if accessLevel != "owner" {
http.Error(w, "Owner permission required", http.StatusForbidden)
return
}
// Check if log buffer is available
if logbuffer.GlobalBuffer == nil {
http.Error(w, "Log buffer not enabled", http.StatusServiceUnavailable)
return
}
// Clear the buffer
logbuffer.GlobalBuffer.Clear()
w.Header().Set("Content-Type", "application/json")
json.NewEncoder(w).Encode(map[string]string{"status": "ok"})
}
// handleLogLevel handles GET and POST /api/logs/level
func (s *Server) handleLogLevel(w http.ResponseWriter, r *http.Request) {
switch r.Method {
case http.MethodGet:
s.handleGetLogLevel(w, r)
case http.MethodPost:
s.handleSetLogLevel(w, r)
default:
http.Error(w, "Method not allowed", http.StatusMethodNotAllowed)
}
}
// handleGetLogLevel handles GET /api/logs/level
func (s *Server) handleGetLogLevel(w http.ResponseWriter, r *http.Request) {
// No auth required for reading log level
level := logbuffer.GetCurrentLevel()
w.Header().Set("Content-Type", "application/json")
json.NewEncoder(w).Encode(LogLevelResponse{Level: level})
}
// handleSetLogLevel handles POST /api/logs/level
func (s *Server) handleSetLogLevel(w http.ResponseWriter, r *http.Request) {
// Validate NIP-98 authentication
valid, pubkey, err := httpauth.CheckAuth(r)
if chk.E(err) || !valid {
errorMsg := "NIP-98 authentication validation failed"
if err != nil {
errorMsg = err.Error()
}
http.Error(w, errorMsg, http.StatusUnauthorized)
return
}
// Check permissions - require owner level only
accessLevel := acl.Registry.GetAccessLevel(pubkey, r.RemoteAddr)
if accessLevel != "owner" {
http.Error(w, "Owner permission required", http.StatusForbidden)
return
}
// Parse request body
var req LogLevelRequest
if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
http.Error(w, "Invalid request body", http.StatusBadRequest)
return
}
// Validate and set log level
level := logbuffer.SetCurrentLevel(req.Level)
lol.SetLogLevel(level)
w.Header().Set("Content-Type", "application/json")
json.NewEncoder(w).Encode(LogLevelResponse{Level: level})
}
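
For orientation, a round trip against this endpoint could look like the exchange below; the host is a placeholder, and the Authorization value is a base64-encoded NIP-98 (kind 27235) event, as verified by httpauth.CheckAuth:

GET /api/logs?offset=0&limit=100 HTTP/1.1
Host: relay.example.com
Authorization: Nostr <base64-encoded kind 27235 event>

HTTP/1.1 200 OK
Content-Type: application/json

{"logs":[{"...":"..."}],"total":240,"has_more":true}

With total=240 and 100 entries returned from offset 0, has_more is true; a client pages by advancing offset until it turns false.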


@@ -8,13 +8,13 @@ import (
"lol.mleku.dev/chk"
"lol.mleku.dev/log"
"next.orly.dev/pkg/encoders/envelopes"
"next.orly.dev/pkg/encoders/envelopes/authenvelope"
"next.orly.dev/pkg/encoders/envelopes/closeenvelope"
"next.orly.dev/pkg/encoders/envelopes/countenvelope"
"next.orly.dev/pkg/encoders/envelopes/eventenvelope"
"next.orly.dev/pkg/encoders/envelopes/noticeenvelope"
"next.orly.dev/pkg/encoders/envelopes/reqenvelope"
"git.mleku.dev/mleku/nostr/encoders/envelopes"
"git.mleku.dev/mleku/nostr/encoders/envelopes/authenvelope"
"git.mleku.dev/mleku/nostr/encoders/envelopes/closeenvelope"
"git.mleku.dev/mleku/nostr/encoders/envelopes/countenvelope"
"git.mleku.dev/mleku/nostr/encoders/envelopes/eventenvelope"
"git.mleku.dev/mleku/nostr/encoders/envelopes/noticeenvelope"
"git.mleku.dev/mleku/nostr/encoders/envelopes/reqenvelope"
)
// validateJSONMessage checks if a message contains invalid control characters
@@ -40,6 +40,11 @@ func validateJSONMessage(msg []byte) (err error) {
}
func (l *Listener) HandleMessage(msg []byte, remote string) {
// Acquire read lock for message processing - allows concurrent processing
// but blocks during policy/follow list updates (which acquire write lock)
l.Server.AcquireMessageProcessingLock()
defer l.Server.ReleaseMessageProcessingLock()
// Handle blacklisted IPs - discard messages but keep connection open until timeout
if l.isBlacklisted {
// Check if timeout has been reached

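The two lock methods added here pair with the PauseMessageProcessing/ResumeMessageProcessing calls used by the policy handlers later in this changeset. A minimal sketch of the pattern, assuming the Server methods wrap a sync.RWMutex (the real implementation may differ):

package sketch

import "sync"

// Readers (message handlers) run concurrently under the read lock; a policy
// or follow-list update takes the write lock, which drains in-flight handlers
// before the configuration is swapped.
type msgLock struct{ mu sync.RWMutex }

func (m *msgLock) AcquireMessageProcessingLock() { m.mu.RLock() }
func (m *msgLock) ReleaseMessageProcessingLock() { m.mu.RUnlock() }
func (m *msgLock) PauseMessageProcessing()       { m.mu.Lock() }
func (m *msgLock) ResumeMessageProcessing()      { m.mu.Unlock() }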

@@ -9,9 +9,9 @@ import (
"lol.mleku.dev/chk"
"lol.mleku.dev/log"
"next.orly.dev/pkg/acl"
"next.orly.dev/pkg/encoders/envelopes/okenvelope"
"next.orly.dev/pkg/encoders/event"
"next.orly.dev/pkg/encoders/hex"
"git.mleku.dev/mleku/nostr/encoders/envelopes/okenvelope"
"git.mleku.dev/mleku/nostr/encoders/event"
"git.mleku.dev/mleku/nostr/encoders/hex"
"next.orly.dev/pkg/protocol/nip43"
)


@@ -7,11 +7,13 @@ import (
"time"
"next.orly.dev/app/config"
"next.orly.dev/pkg/crypto/keys"
"next.orly.dev/pkg/acl"
"git.mleku.dev/mleku/nostr/crypto/keys"
"next.orly.dev/pkg/database"
"next.orly.dev/pkg/encoders/event"
"next.orly.dev/pkg/encoders/tag"
"next.orly.dev/pkg/interfaces/signer/p8k"
"git.mleku.dev/mleku/nostr/encoders/event"
"git.mleku.dev/mleku/nostr/encoders/hex"
"git.mleku.dev/mleku/nostr/encoders/tag"
"git.mleku.dev/mleku/nostr/interfaces/signer/p8k"
"next.orly.dev/pkg/protocol/nip43"
"next.orly.dev/pkg/protocol/publish"
)
@@ -38,24 +40,47 @@ func setupTestListener(t *testing.T) (*Listener, *database.D, func()) {
RelayURL: "wss://test.relay",
Listen: "localhost",
Port: 3334,
ACLMode: "none",
}
server := &Server{
Ctx: ctx,
Config: cfg,
D: db,
DB: db,
publishers: publish.New(NewPublisher(ctx)),
InviteManager: nip43.NewInviteManager(cfg.NIP43InviteExpiry),
cfg: cfg,
db: db,
}
listener := &Listener{
Server: server,
ctx: ctx,
// Configure ACL registry
acl.Registry.SetMode(cfg.ACLMode)
if err = acl.Registry.Configure(cfg, db, ctx); err != nil {
db.Close()
os.RemoveAll(tempDir)
t.Fatalf("failed to configure ACL: %v", err)
}
listener := &Listener{
Server: server,
ctx: ctx,
writeChan: make(chan publish.WriteRequest, 100),
writeDone: make(chan struct{}),
messageQueue: make(chan messageRequest, 100),
processingDone: make(chan struct{}),
subscriptions: make(map[string]context.CancelFunc),
}
// Start write worker and message processor
go listener.writeWorker()
go listener.messageProcessor()
cleanup := func() {
// Close listener channels
close(listener.writeChan)
<-listener.writeDone
close(listener.messageQueue)
<-listener.processingDone
db.Close()
os.RemoveAll(tempDir)
}
@@ -350,8 +375,13 @@ func TestHandleNIP43InviteRequest_ValidRequest(t *testing.T) {
}
adminPubkey := adminSigner.Pub()
// Add admin to server (simulating admin config)
listener.Server.Admins = [][]byte{adminPubkey}
// Add admin to config and reconfigure ACL
adminHex := hex.Enc(adminPubkey)
listener.Server.Config.Admins = []string{adminHex}
acl.Registry.SetMode("none")
if err = acl.Registry.Configure(listener.Server.Config, listener.Server.DB, listener.ctx); err != nil {
t.Fatalf("failed to reconfigure ACL: %v", err)
}
// Handle invite request
inviteEvent, err := listener.Server.HandleNIP43InviteRequest(adminPubkey)


@@ -0,0 +1,593 @@
package app
import (
"context"
"encoding/hex"
"encoding/json"
"io"
"net/http"
"strconv"
"lol.mleku.dev/chk"
"next.orly.dev/pkg/acl"
"next.orly.dev/pkg/database"
"git.mleku.dev/mleku/nostr/httpauth"
)
// handleCuratingNIP86Request handles curating NIP-86 requests with pre-authenticated pubkey.
// This is called from the main NIP-86 handler after authentication.
func (s *Server) handleCuratingNIP86Request(w http.ResponseWriter, r *http.Request, pubkey []byte) {
_ = pubkey // Pubkey already validated by caller
// Get the curating ACL instance
var curatingACL *acl.Curating
for _, aclInstance := range acl.Registry.ACL {
if aclInstance.Type() == "curating" {
if curating, ok := aclInstance.(*acl.Curating); ok {
curatingACL = curating
break
}
}
}
if curatingACL == nil {
http.Error(w, "Curating ACL not available", http.StatusInternalServerError)
return
}
// Read and parse the request
body, err := io.ReadAll(r.Body)
if chk.E(err) {
http.Error(w, "Failed to read request body", http.StatusBadRequest)
return
}
var request NIP86Request
if err := json.Unmarshal(body, &request); chk.E(err) {
http.Error(w, "Invalid JSON request", http.StatusBadRequest)
return
}
// Set response headers
w.Header().Set("Content-Type", "application/json")
// Handle the request based on method
response := s.handleCuratingNIP86Method(request, curatingACL)
// Send response
jsonData, err := json.Marshal(response)
if chk.E(err) {
http.Error(w, "Error generating response", http.StatusInternalServerError)
return
}
w.Write(jsonData)
}
// handleCuratingNIP86Management handles NIP-86 management API requests for curating mode (standalone)
func (s *Server) handleCuratingNIP86Management(w http.ResponseWriter, r *http.Request) {
if r.Method != http.MethodPost {
http.Error(w, "Method not allowed", http.StatusMethodNotAllowed)
return
}
// Check Content-Type
contentType := r.Header.Get("Content-Type")
if contentType != "application/nostr+json+rpc" {
http.Error(w, "Content-Type must be application/nostr+json+rpc", http.StatusBadRequest)
return
}
// Validate NIP-98 authentication
valid, pubkey, err := httpauth.CheckAuth(r)
if chk.E(err) || !valid {
errorMsg := "NIP-98 authentication validation failed"
if err != nil {
errorMsg = err.Error()
}
http.Error(w, errorMsg, http.StatusUnauthorized)
return
}
// Check permissions - require owner or admin level
accessLevel := acl.Registry.GetAccessLevel(pubkey, r.RemoteAddr)
if accessLevel != "owner" && accessLevel != "admin" {
http.Error(w, "Owner or admin permission required", http.StatusForbidden)
return
}
// Check if curating ACL is active
if acl.Registry.Type() != "curating" {
http.Error(w, "Curating ACL mode is not active", http.StatusBadRequest)
return
}
// Delegate to shared request handler
s.handleCuratingNIP86Request(w, r, pubkey)
}
// handleCuratingNIP86Method handles individual NIP-86 methods for curating mode
func (s *Server) handleCuratingNIP86Method(request NIP86Request, curatingACL *acl.Curating) NIP86Response {
dbACL := curatingACL.GetCuratingACL()
switch request.Method {
case "supportedmethods":
return s.handleCuratingSupportedMethods()
case "trustpubkey":
return s.handleTrustPubkey(request.Params, curatingACL)
case "untrustpubkey":
return s.handleUntrustPubkey(request.Params, curatingACL)
case "listtrustedpubkeys":
return s.handleListTrustedPubkeys(dbACL)
case "blacklistpubkey":
return s.handleBlacklistPubkey(request.Params, curatingACL)
case "unblacklistpubkey":
return s.handleUnblacklistPubkey(request.Params, curatingACL)
case "listblacklistedpubkeys":
return s.handleListBlacklistedPubkeys(dbACL)
case "listunclassifiedusers":
return s.handleListUnclassifiedUsers(request.Params, dbACL)
case "markspam":
return s.handleMarkSpam(request.Params, dbACL)
case "unmarkspam":
return s.handleUnmarkSpam(request.Params, dbACL)
case "listspamevents":
return s.handleListSpamEvents(dbACL)
case "deleteevent":
return s.handleDeleteEvent(request.Params)
case "getcuratingconfig":
return s.handleGetCuratingConfig(dbACL)
case "listblockedips":
return s.handleListCuratingBlockedIPs(dbACL)
case "unblockip":
return s.handleUnblockCuratingIP(request.Params, dbACL)
case "isconfigured":
return s.handleIsConfigured(dbACL)
default:
return NIP86Response{Error: "Unknown method: " + request.Method}
}
}
// handleCuratingSupportedMethods returns the list of supported methods for curating mode
func (s *Server) handleCuratingSupportedMethods() NIP86Response {
methods := []string{
"supportedmethods",
"trustpubkey",
"untrustpubkey",
"listtrustedpubkeys",
"blacklistpubkey",
"unblacklistpubkey",
"listblacklistedpubkeys",
"listunclassifiedusers",
"markspam",
"unmarkspam",
"listspamevents",
"deleteevent",
"getcuratingconfig",
"listblockedips",
"unblockip",
"isconfigured",
}
return NIP86Response{Result: methods}
}
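A concrete exchange with this handler, using placeholder values, is a POST whose body follows the NIP-86 JSON-RPC shape parsed above:

POST / HTTP/1.1
Host: relay.example.com
Content-Type: application/nostr+json+rpc
Authorization: Nostr <base64-encoded NIP-98 event>

{"method":"trustpubkey","params":["<64-hex-pubkey>","early supporter"]}

HTTP/1.1 200 OK
Content-Type: application/json

{"result":true}

The optional second param is stored as the note; on failure the handler returns {"error":"..."} instead of a result.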
// handleTrustPubkey adds a pubkey to the trusted list
func (s *Server) handleTrustPubkey(params []interface{}, curatingACL *acl.Curating) NIP86Response {
if len(params) < 1 {
return NIP86Response{Error: "Missing required parameter: pubkey"}
}
pubkey, ok := params[0].(string)
if !ok {
return NIP86Response{Error: "Invalid pubkey parameter"}
}
if len(pubkey) != 64 {
return NIP86Response{Error: "Invalid pubkey format (must be 64 hex characters)"}
}
note := ""
if len(params) > 1 {
if n, ok := params[1].(string); ok {
note = n
}
}
if err := curatingACL.TrustPubkey(pubkey, note); chk.E(err) {
return NIP86Response{Error: "Failed to trust pubkey: " + err.Error()}
}
return NIP86Response{Result: true}
}
// handleUntrustPubkey removes a pubkey from the trusted list
func (s *Server) handleUntrustPubkey(params []interface{}, curatingACL *acl.Curating) NIP86Response {
if len(params) < 1 {
return NIP86Response{Error: "Missing required parameter: pubkey"}
}
pubkey, ok := params[0].(string)
if !ok {
return NIP86Response{Error: "Invalid pubkey parameter"}
}
if err := curatingACL.UntrustPubkey(pubkey); chk.E(err) {
return NIP86Response{Error: "Failed to untrust pubkey: " + err.Error()}
}
return NIP86Response{Result: true}
}
// handleListTrustedPubkeys returns the list of trusted pubkeys
func (s *Server) handleListTrustedPubkeys(dbACL *database.CuratingACL) NIP86Response {
trusted, err := dbACL.ListTrustedPubkeys()
if chk.E(err) {
return NIP86Response{Error: "Failed to list trusted pubkeys: " + err.Error()}
}
result := make([]map[string]interface{}, len(trusted))
for i, t := range trusted {
result[i] = map[string]interface{}{
"pubkey": t.Pubkey,
"note": t.Note,
"added": t.Added.Unix(),
}
}
return NIP86Response{Result: result}
}
// handleBlacklistPubkey adds a pubkey to the blacklist
func (s *Server) handleBlacklistPubkey(params []interface{}, curatingACL *acl.Curating) NIP86Response {
if len(params) < 1 {
return NIP86Response{Error: "Missing required parameter: pubkey"}
}
pubkey, ok := params[0].(string)
if !ok {
return NIP86Response{Error: "Invalid pubkey parameter"}
}
if len(pubkey) != 64 {
return NIP86Response{Error: "Invalid pubkey format (must be 64 hex characters)"}
}
reason := ""
if len(params) > 1 {
if r, ok := params[1].(string); ok {
reason = r
}
}
if err := curatingACL.BlacklistPubkey(pubkey, reason); chk.E(err) {
return NIP86Response{Error: "Failed to blacklist pubkey: " + err.Error()}
}
return NIP86Response{Result: true}
}
// handleUnblacklistPubkey removes a pubkey from the blacklist
func (s *Server) handleUnblacklistPubkey(params []interface{}, curatingACL *acl.Curating) NIP86Response {
if len(params) < 1 {
return NIP86Response{Error: "Missing required parameter: pubkey"}
}
pubkey, ok := params[0].(string)
if !ok {
return NIP86Response{Error: "Invalid pubkey parameter"}
}
if err := curatingACL.UnblacklistPubkey(pubkey); chk.E(err) {
return NIP86Response{Error: "Failed to unblacklist pubkey: " + err.Error()}
}
return NIP86Response{Result: true}
}
// handleListBlacklistedPubkeys returns the list of blacklisted pubkeys
func (s *Server) handleListBlacklistedPubkeys(dbACL *database.CuratingACL) NIP86Response {
blacklisted, err := dbACL.ListBlacklistedPubkeys()
if chk.E(err) {
return NIP86Response{Error: "Failed to list blacklisted pubkeys: " + err.Error()}
}
result := make([]map[string]interface{}, len(blacklisted))
for i, b := range blacklisted {
result[i] = map[string]interface{}{
"pubkey": b.Pubkey,
"reason": b.Reason,
"added": b.Added.Unix(),
}
}
return NIP86Response{Result: result}
}
// handleListUnclassifiedUsers returns unclassified users sorted by event count
func (s *Server) handleListUnclassifiedUsers(params []interface{}, dbACL *database.CuratingACL) NIP86Response {
limit := 100 // Default limit
if len(params) > 0 {
if l, ok := params[0].(float64); ok {
limit = int(l)
}
}
users, err := dbACL.ListUnclassifiedUsers(limit)
if chk.E(err) {
return NIP86Response{Error: "Failed to list unclassified users: " + err.Error()}
}
result := make([]map[string]interface{}, len(users))
for i, u := range users {
result[i] = map[string]interface{}{
"pubkey": u.Pubkey,
"event_count": u.EventCount,
"last_event": u.LastEvent.Unix(),
}
}
return NIP86Response{Result: result}
}
// handleMarkSpam marks an event as spam
func (s *Server) handleMarkSpam(params []interface{}, dbACL *database.CuratingACL) NIP86Response {
if len(params) < 1 {
return NIP86Response{Error: "Missing required parameter: event_id"}
}
eventID, ok := params[0].(string)
if !ok {
return NIP86Response{Error: "Invalid event_id parameter"}
}
if len(eventID) != 64 {
return NIP86Response{Error: "Invalid event_id format (must be 64 hex characters)"}
}
pubkey := ""
if len(params) > 1 {
if p, ok := params[1].(string); ok {
pubkey = p
}
}
reason := ""
if len(params) > 2 {
if r, ok := params[2].(string); ok {
reason = r
}
}
if err := dbACL.MarkEventAsSpam(eventID, pubkey, reason); chk.E(err) {
return NIP86Response{Error: "Failed to mark event as spam: " + err.Error()}
}
return NIP86Response{Result: true}
}
// handleUnmarkSpam removes the spam flag from an event
func (s *Server) handleUnmarkSpam(params []interface{}, dbACL *database.CuratingACL) NIP86Response {
if len(params) < 1 {
return NIP86Response{Error: "Missing required parameter: event_id"}
}
eventID, ok := params[0].(string)
if !ok {
return NIP86Response{Error: "Invalid event_id parameter"}
}
if err := dbACL.UnmarkEventAsSpam(eventID); chk.E(err) {
return NIP86Response{Error: "Failed to unmark event as spam: " + err.Error()}
}
return NIP86Response{Result: true}
}
// handleListSpamEvents returns the list of spam-flagged events
func (s *Server) handleListSpamEvents(dbACL *database.CuratingACL) NIP86Response {
spam, err := dbACL.ListSpamEvents()
if chk.E(err) {
return NIP86Response{Error: "Failed to list spam events: " + err.Error()}
}
result := make([]map[string]interface{}, len(spam))
for i, sp := range spam {
result[i] = map[string]interface{}{
"event_id": sp.EventID,
"pubkey": sp.Pubkey,
"reason": sp.Reason,
"added": sp.Added.Unix(),
}
}
return NIP86Response{Result: result}
}
// handleDeleteEvent permanently deletes an event from the database
func (s *Server) handleDeleteEvent(params []interface{}) NIP86Response {
if len(params) < 1 {
return NIP86Response{Error: "Missing required parameter: event_id"}
}
eventIDHex, ok := params[0].(string)
if !ok {
return NIP86Response{Error: "Invalid event_id parameter"}
}
if len(eventIDHex) != 64 {
return NIP86Response{Error: "Invalid event_id format (must be 64 hex characters)"}
}
// Convert hex to bytes
eventID, err := hex.DecodeString(eventIDHex)
if err != nil {
return NIP86Response{Error: "Invalid event_id hex: " + err.Error()}
}
// Delete from database
if err := s.DB.DeleteEvent(context.Background(), eventID); chk.E(err) {
return NIP86Response{Error: "Failed to delete event: " + err.Error()}
}
return NIP86Response{Result: true}
}
// handleGetCuratingConfig returns the current curating configuration
func (s *Server) handleGetCuratingConfig(dbACL *database.CuratingACL) NIP86Response {
config, err := dbACL.GetConfig()
if chk.E(err) {
return NIP86Response{Error: "Failed to get config: " + err.Error()}
}
result := map[string]interface{}{
"daily_limit": config.DailyLimit,
"first_ban_hours": config.FirstBanHours,
"second_ban_hours": config.SecondBanHours,
"allowed_kinds": config.AllowedKinds,
"allowed_ranges": config.AllowedRanges,
"kind_categories": config.KindCategories,
"config_event_id": config.ConfigEventID,
"config_pubkey": config.ConfigPubkey,
"configured_at": config.ConfiguredAt,
"is_configured": config.ConfigEventID != "",
}
return NIP86Response{Result: result}
}
// handleListCuratingBlockedIPs returns the list of blocked IPs in curating mode
func (s *Server) handleListCuratingBlockedIPs(dbACL *database.CuratingACL) NIP86Response {
blocked, err := dbACL.ListBlockedIPs()
if chk.E(err) {
return NIP86Response{Error: "Failed to list blocked IPs: " + err.Error()}
}
result := make([]map[string]interface{}, len(blocked))
for i, b := range blocked {
result[i] = map[string]interface{}{
"ip": b.IP,
"reason": b.Reason,
"expires_at": b.ExpiresAt.Unix(),
"added": b.Added.Unix(),
}
}
return NIP86Response{Result: result}
}
// handleUnblockCuratingIP unblocks an IP in curating mode
func (s *Server) handleUnblockCuratingIP(params []interface{}, dbACL *database.CuratingACL) NIP86Response {
if len(params) < 1 {
return NIP86Response{Error: "Missing required parameter: ip"}
}
ip, ok := params[0].(string)
if !ok {
return NIP86Response{Error: "Invalid ip parameter"}
}
if err := dbACL.UnblockIP(ip); chk.E(err) {
return NIP86Response{Error: "Failed to unblock IP: " + err.Error()}
}
return NIP86Response{Result: true}
}
// handleIsConfigured checks if curating mode is configured
func (s *Server) handleIsConfigured(dbACL *database.CuratingACL) NIP86Response {
configured, err := dbACL.IsConfigured()
if chk.E(err) {
return NIP86Response{Error: "Failed to check configuration: " + err.Error()}
}
return NIP86Response{Result: configured}
}
// GetKindCategoriesInfo returns information about available kind categories
func GetKindCategoriesInfo() []map[string]interface{} {
categories := []map[string]interface{}{
{
"id": "social",
"name": "Social/Notes",
"description": "Profiles, text notes, follows, reposts, reactions",
"kinds": []int{0, 1, 3, 6, 7, 10002},
},
{
"id": "dm",
"name": "Direct Messages",
"description": "NIP-04 DMs, NIP-17 private messages, gift wraps",
"kinds": []int{4, 14, 1059},
},
{
"id": "longform",
"name": "Long-form Content",
"description": "Articles and drafts",
"kinds": []int{30023, 30024},
},
{
"id": "media",
"name": "Media",
"description": "File metadata, video, audio",
"kinds": []int{1063, 20, 21, 22},
},
{
"id": "marketplace",
"name": "Marketplace",
"description": "Product listings, stalls, auctions",
"kinds": []int{30017, 30018, 30019, 30020, 1021, 1022},
},
{
"id": "groups_nip29",
"name": "Group Messaging (NIP-29)",
"description": "Simple group messages and metadata",
"kinds": []int{9, 10, 11, 12, 9000, 9001, 9002, 39000, 39001, 39002},
},
{
"id": "groups_nip72",
"name": "Communities (NIP-72)",
"description": "Moderated communities and post approvals",
"kinds": []int{34550, 1111, 4550},
},
{
"id": "lists",
"name": "Lists/Bookmarks",
"description": "Mute lists, pins, categorized lists, bookmarks",
"kinds": []int{10000, 10001, 10003, 30000, 30001, 30003},
},
}
return categories
}
// expandKindRange expands a range string like "1000-1999" into individual kinds
func expandKindRange(rangeStr string) []int {
var kinds []int
parts := make([]int, 2)
n, err := parseRange(rangeStr, parts)
if err != nil || n != 2 {
return kinds
}
for i := parts[0]; i <= parts[1]; i++ {
kinds = append(kinds, i)
}
return kinds
}
func parseRange(s string, parts []int) (int, error) {
// Simple parsing of "start-end"
for i, c := range s {
if c == '-' && i > 0 {
start, err := strconv.Atoi(s[:i])
if err != nil {
return 0, err
}
end, err := strconv.Atoi(s[i+1:])
if err != nil {
return 0, err
}
parts[0] = start
parts[1] = end
return 2, nil
}
}
return 0, nil
}
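A quick usage note for the two helpers above (illustrative, not from the repository):

// expandKindRange("30000-30003") -> [30000 30001 30002 30003]
// expandKindRange("oops")        -> nil: parseRange finds no "-" separator
//                                   and returns (0, nil), which the caller
//                                   treats as "no range"

Note that parseRange signals failure through n != 2 rather than a non-nil error, so malformed ranges silently expand to an empty slice.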


@@ -8,7 +8,7 @@ import (
"lol.mleku.dev/chk"
"next.orly.dev/pkg/acl"
"next.orly.dev/pkg/database"
"next.orly.dev/pkg/protocol/httpauth"
"git.mleku.dev/mleku/nostr/httpauth"
)
// NIP86Request represents a NIP-86 JSON-RPC request
@@ -55,9 +55,16 @@ func (s *Server) handleNIP86Management(w http.ResponseWriter, r *http.Request) {
return
}
// Check if managed ACL is active
if acl.Registry.Type() != "managed" {
http.Error(w, "Managed ACL mode is not active", http.StatusBadRequest)
// Dispatch based on ACL mode
aclType := acl.Registry.Type()
switch aclType {
case "curating":
s.handleCuratingNIP86Request(w, r, pubkey)
return
case "managed":
// Continue with managed ACL handling below
default:
http.Error(w, "NIP-86 requires managed or curating ACL mode", http.StatusBadRequest)
return
}


@@ -35,7 +35,7 @@ func TestHandleNIP86Management_Basic(t *testing.T) {
// Setup server
server := &Server{
Config: cfg,
D: db,
DB: db,
Admins: [][]byte{[]byte("admin1")},
Owners: [][]byte{[]byte("owner1")},
}

app/handle-nrc.go (new file)

@@ -0,0 +1,448 @@
package app
import (
"encoding/json"
"net/http"
"strings"
"lol.mleku.dev/chk"
"lol.mleku.dev/log"
"git.mleku.dev/mleku/nostr/crypto/keys"
"git.mleku.dev/mleku/nostr/encoders/hex"
"git.mleku.dev/mleku/nostr/httpauth"
"next.orly.dev/pkg/acl"
"next.orly.dev/pkg/database"
)
// getCashuMintURL returns the Cashu mint URL based on relay configuration.
// Returns empty string if Cashu is not enabled.
func (s *Server) getCashuMintURL() string {
if !s.Config.CashuEnabled || s.CashuIssuer == nil {
return ""
}
// Use configured relay URL with /cashu/mint path
relayURL := strings.TrimSuffix(s.Config.RelayURL, "/")
if relayURL == "" {
return ""
}
return relayURL + "/cashu/mint"
}
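As a worked example (placeholder host): with RelayURL set to "wss://relay.example.com/", the trailing slash is trimmed and the mint URL becomes "wss://relay.example.com/cashu/mint"; when Cashu is disabled or RelayURL is empty, callers receive "" and omit mint_url from their responses.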
// NRCConnectionResponse is the response structure for NRC connection API.
type NRCConnectionResponse struct {
ID string `json:"id"`
Label string `json:"label"`
CreatedAt int64 `json:"created_at"`
LastUsed int64 `json:"last_used"`
UseCashu bool `json:"use_cashu"`
URI string `json:"uri,omitempty"` // Only included when specifically requested
}
// NRCConnectionsResponse is the response for listing all connections.
type NRCConnectionsResponse struct {
Connections []NRCConnectionResponse `json:"connections"`
Config NRCConfigResponse `json:"config"`
}
// NRCConfigResponse contains NRC configuration status.
type NRCConfigResponse struct {
Enabled bool `json:"enabled"`
RendezvousURL string `json:"rendezvous_url"`
MintURL string `json:"mint_url,omitempty"`
RelayPubkey string `json:"relay_pubkey"`
}
// NRCCreateRequest is the request body for creating a connection.
type NRCCreateRequest struct {
Label string `json:"label"`
UseCashu bool `json:"use_cashu"`
}
// handleNRCConnections handles GET /api/nrc/connections
func (s *Server) handleNRCConnections(w http.ResponseWriter, r *http.Request) {
if r.Method != http.MethodGet {
http.Error(w, "Method not allowed", http.StatusMethodNotAllowed)
return
}
// Validate NIP-98 authentication
valid, pubkey, err := httpauth.CheckAuth(r)
if chk.E(err) || !valid {
errorMsg := "NIP-98 authentication validation failed"
if err != nil {
errorMsg = err.Error()
}
http.Error(w, errorMsg, http.StatusUnauthorized)
return
}
// Check permissions - require owner level
accessLevel := acl.Registry.GetAccessLevel(pubkey, r.RemoteAddr)
if accessLevel != "owner" {
http.Error(w, "Owner permission required", http.StatusForbidden)
return
}
// Get database (must be Badger)
badgerDB, ok := s.DB.(*database.D)
if !ok {
http.Error(w, "NRC requires Badger database backend", http.StatusServiceUnavailable)
return
}
// Get all connections
conns, err := badgerDB.GetAllNRCConnections()
if chk.E(err) {
http.Error(w, "Failed to get connections", http.StatusInternalServerError)
return
}
// Get relay identity for config
relaySecretKey, err := s.DB.GetOrCreateRelayIdentitySecret()
if chk.E(err) {
http.Error(w, "Failed to get relay identity", http.StatusInternalServerError)
return
}
relayPubkey, _ := keys.SecretBytesToPubKeyBytes(relaySecretKey)
// Get NRC config values
nrcEnabled, nrcRendezvousURL, _, nrcUseCashu, _ := s.Config.GetNRCConfigValues()
// Build response
response := NRCConnectionsResponse{
Connections: make([]NRCConnectionResponse, 0, len(conns)),
Config: NRCConfigResponse{
Enabled: nrcEnabled,
RendezvousURL: nrcRendezvousURL,
RelayPubkey: string(hex.Enc(relayPubkey)),
},
}
// Add mint URL if Cashu is enabled
mintURL := s.getCashuMintURL()
if nrcUseCashu && mintURL != "" {
response.Config.MintURL = mintURL
}
for _, conn := range conns {
response.Connections = append(response.Connections, NRCConnectionResponse{
ID: conn.ID,
Label: conn.Label,
CreatedAt: conn.CreatedAt,
LastUsed: conn.LastUsed,
UseCashu: conn.UseCashu,
})
}
w.Header().Set("Content-Type", "application/json")
json.NewEncoder(w).Encode(response)
}
// handleNRCCreate handles POST /api/nrc/connections
func (s *Server) handleNRCCreate(w http.ResponseWriter, r *http.Request) {
if r.Method != http.MethodPost {
http.Error(w, "Method not allowed", http.StatusMethodNotAllowed)
return
}
// Validate NIP-98 authentication
valid, pubkey, err := httpauth.CheckAuth(r)
if chk.E(err) || !valid {
errorMsg := "NIP-98 authentication validation failed"
if err != nil {
errorMsg = err.Error()
}
http.Error(w, errorMsg, http.StatusUnauthorized)
return
}
// Check permissions - require owner level
accessLevel := acl.Registry.GetAccessLevel(pubkey, r.RemoteAddr)
if accessLevel != "owner" {
http.Error(w, "Owner permission required", http.StatusForbidden)
return
}
// Get database (must be Badger)
badgerDB, ok := s.DB.(*database.D)
if !ok {
http.Error(w, "NRC requires Badger database backend", http.StatusServiceUnavailable)
return
}
// Parse request body
var req NRCCreateRequest
if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
http.Error(w, "Invalid request body", http.StatusBadRequest)
return
}
// Validate label
req.Label = strings.TrimSpace(req.Label)
if req.Label == "" {
http.Error(w, "Label is required", http.StatusBadRequest)
return
}
// Create the connection
conn, err := badgerDB.CreateNRCConnection(req.Label, req.UseCashu)
if chk.E(err) {
http.Error(w, "Failed to create connection", http.StatusInternalServerError)
return
}
// Get relay identity for URI generation
relaySecretKey, err := s.DB.GetOrCreateRelayIdentitySecret()
if chk.E(err) {
http.Error(w, "Failed to get relay identity", http.StatusInternalServerError)
return
}
relayPubkey, _ := keys.SecretBytesToPubKeyBytes(relaySecretKey)
// Get NRC config values
_, nrcRendezvousURL, _, nrcUseCashu, _ := s.Config.GetNRCConfigValues()
// Get mint URL if Cashu enabled
mintURL := ""
if nrcUseCashu {
mintURL = s.getCashuMintURL()
}
// Generate URI
uri, err := badgerDB.GetNRCConnectionURI(conn, relayPubkey, nrcRendezvousURL, mintURL)
if chk.E(err) {
log.W.F("failed to generate URI for new connection: %v", err)
}
// Update bridge authorized secrets if bridge is running
s.updateNRCBridgeSecrets(badgerDB)
// Build response with URI
response := NRCConnectionResponse{
ID: conn.ID,
Label: conn.Label,
CreatedAt: conn.CreatedAt,
LastUsed: conn.LastUsed,
UseCashu: conn.UseCashu,
URI: uri,
}
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(http.StatusCreated)
json.NewEncoder(w).Encode(response)
}
// handleNRCDelete handles DELETE /api/nrc/connections/{id}
func (s *Server) handleNRCDelete(w http.ResponseWriter, r *http.Request) {
if r.Method != http.MethodDelete {
http.Error(w, "Method not allowed", http.StatusMethodNotAllowed)
return
}
// Validate NIP-98 authentication
valid, pubkey, err := httpauth.CheckAuth(r)
if chk.E(err) || !valid {
errorMsg := "NIP-98 authentication validation failed"
if err != nil {
errorMsg = err.Error()
}
http.Error(w, errorMsg, http.StatusUnauthorized)
return
}
// Check permissions - require owner level
accessLevel := acl.Registry.GetAccessLevel(pubkey, r.RemoteAddr)
if accessLevel != "owner" {
http.Error(w, "Owner permission required", http.StatusForbidden)
return
}
// Get database (must be Badger)
badgerDB, ok := s.DB.(*database.D)
if !ok {
http.Error(w, "NRC requires Badger database backend", http.StatusServiceUnavailable)
return
}
// Extract connection ID from URL path
// URL format: /api/nrc/connections/{id}
path := strings.TrimPrefix(r.URL.Path, "/api/nrc/connections/")
connID := strings.TrimSpace(path)
if connID == "" {
http.Error(w, "Connection ID required", http.StatusBadRequest)
return
}
// Delete the connection
if err := badgerDB.DeleteNRCConnection(connID); chk.E(err) {
http.Error(w, "Failed to delete connection", http.StatusInternalServerError)
return
}
// Update bridge authorized secrets if bridge is running
s.updateNRCBridgeSecrets(badgerDB)
log.I.F("deleted NRC connection: %s", connID)
w.Header().Set("Content-Type", "application/json")
json.NewEncoder(w).Encode(map[string]string{"status": "ok"})
}
// handleNRCGetURI handles GET /api/nrc/connections/{id}/uri
func (s *Server) handleNRCGetURI(w http.ResponseWriter, r *http.Request) {
if r.Method != http.MethodGet {
http.Error(w, "Method not allowed", http.StatusMethodNotAllowed)
return
}
// Validate NIP-98 authentication
valid, pubkey, err := httpauth.CheckAuth(r)
if chk.E(err) || !valid {
errorMsg := "NIP-98 authentication validation failed"
if err != nil {
errorMsg = err.Error()
}
http.Error(w, errorMsg, http.StatusUnauthorized)
return
}
// Check permissions - require owner level
accessLevel := acl.Registry.GetAccessLevel(pubkey, r.RemoteAddr)
if accessLevel != "owner" {
http.Error(w, "Owner permission required", http.StatusForbidden)
return
}
// Get database (must be Badger)
badgerDB, ok := s.DB.(*database.D)
if !ok {
http.Error(w, "NRC requires Badger database backend", http.StatusServiceUnavailable)
return
}
// Extract connection ID from URL path
// URL format: /api/nrc/connections/{id}/uri
path := strings.TrimPrefix(r.URL.Path, "/api/nrc/connections/")
path = strings.TrimSuffix(path, "/uri")
connID := strings.TrimSpace(path)
if connID == "" {
http.Error(w, "Connection ID required", http.StatusBadRequest)
return
}
// Get the connection
conn, err := badgerDB.GetNRCConnection(connID)
if err != nil {
http.Error(w, "Connection not found", http.StatusNotFound)
return
}
// Get relay identity
relaySecretKey, err := s.DB.GetOrCreateRelayIdentitySecret()
if chk.E(err) {
http.Error(w, "Failed to get relay identity", http.StatusInternalServerError)
return
}
relayPubkey, _ := keys.SecretBytesToPubKeyBytes(relaySecretKey)
// Get NRC config values
_, nrcRendezvousURL, _, nrcUseCashu, _ := s.Config.GetNRCConfigValues()
// Get mint URL if Cashu enabled
mintURL := ""
if nrcUseCashu {
mintURL = s.getCashuMintURL()
}
// Generate URI
uri, err := badgerDB.GetNRCConnectionURI(conn, relayPubkey, nrcRendezvousURL, mintURL)
if chk.E(err) {
http.Error(w, "Failed to generate URI", http.StatusInternalServerError)
return
}
w.Header().Set("Content-Type", "application/json")
json.NewEncoder(w).Encode(map[string]string{"uri": uri})
}
// updateNRCBridgeSecrets updates the NRC bridge with current authorized secrets from database.
func (s *Server) updateNRCBridgeSecrets(badgerDB *database.D) {
if s.nrcBridge == nil {
return
}
secrets, err := badgerDB.GetNRCAuthorizedSecrets()
if chk.E(err) {
log.W.F("failed to get NRC authorized secrets: %v", err)
return
}
s.nrcBridge.UpdateAuthorizedSecrets(secrets)
log.D.F("updated NRC bridge with %d authorized secrets", len(secrets))
}
// handleNRCConnectionsRouter routes NRC connection requests.
func (s *Server) handleNRCConnectionsRouter(w http.ResponseWriter, r *http.Request) {
path := r.URL.Path
// Exact match for /api/nrc/connections
if path == "/api/nrc/connections" {
switch r.Method {
case http.MethodGet:
s.handleNRCConnections(w, r)
case http.MethodPost:
s.handleNRCCreate(w, r)
default:
http.Error(w, "Method not allowed", http.StatusMethodNotAllowed)
}
return
}
// Check for /api/nrc/connections/{id}/uri
if strings.HasSuffix(path, "/uri") {
s.handleNRCGetURI(w, r)
return
}
// Otherwise it's /api/nrc/connections/{id}
s.handleNRCDelete(w, r)
}
// handleNRCConfig returns NRC configuration status.
func (s *Server) handleNRCConfig(w http.ResponseWriter, r *http.Request) {
if r.Method != http.MethodGet {
http.Error(w, "Method not allowed", http.StatusMethodNotAllowed)
return
}
// Get NRC config values
nrcEnabled, nrcRendezvousURL, _, nrcUseCashu, _ := s.Config.GetNRCConfigValues()
// Check if Badger is available (NRC requires Badger)
_, badgerAvailable := s.DB.(*database.D)
response := struct {
Enabled bool `json:"enabled"`
BadgerRequired bool `json:"badger_required"`
RendezvousURL string `json:"rendezvous_url,omitempty"`
UseCashu bool `json:"use_cashu"`
MintURL string `json:"mint_url,omitempty"`
}{
Enabled: nrcEnabled && badgerAvailable,
BadgerRequired: !badgerAvailable,
RendezvousURL: nrcRendezvousURL,
UseCashu: nrcUseCashu,
}
// Add mint URL if Cashu is enabled
if nrcUseCashu {
mintURL := s.getCashuMintURL()
if mintURL != "" {
response.MintURL = mintURL
}
}
w.Header().Set("Content-Type", "application/json")
json.NewEncoder(w).Encode(response)
}
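
End to end, creating and then deleting a connection through the router above might look like this; the host, id, and URI values are placeholders:

POST /api/nrc/connections HTTP/1.1
Host: relay.example.com
Content-Type: application/json
Authorization: Nostr <base64-encoded NIP-98 event>

{"label":"admin laptop","use_cashu":false}

HTTP/1.1 201 Created
{"id":"<id>","label":"admin laptop","created_at":1736600000,"last_used":0,"use_cashu":false,"uri":"<connection URI>"}

DELETE /api/nrc/connections/<id> HTTP/1.1
Authorization: Nostr <base64-encoded NIP-98 event>

HTTP/1.1 200 OK
{"status":"ok"}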

app/handle-policy-config.go (new file)

@@ -0,0 +1,345 @@
package app
import (
"bytes"
"fmt"
"lol.mleku.dev/log"
"git.mleku.dev/mleku/nostr/encoders/event"
"git.mleku.dev/mleku/nostr/encoders/filter"
"git.mleku.dev/mleku/nostr/encoders/hex"
"git.mleku.dev/mleku/nostr/encoders/kind"
"git.mleku.dev/mleku/nostr/encoders/tag"
)
// HandlePolicyConfigUpdate processes kind 12345 policy configuration events.
// Owners and policy admins can update policy configuration, with different permissions:
//
// OWNERS can:
// - Modify all fields including owners and policy_admins
// - But owners list must remain non-empty (to prevent lockout)
//
// POLICY ADMINS can:
// - Extend rules (add to allow lists, add new kinds, add blacklists)
// - CANNOT modify owners or policy_admins (protected fields)
// - CANNOT reduce owner-granted permissions
//
// Process flow:
// 1. Check if sender is owner or policy admin
// 2. Validate JSON with appropriate rules for the sender type
// 3. Pause ALL message processing (lock mutex)
// 4. Reload policy (pause policy engine, update, save, resume)
// 5. Resume message processing (unlock mutex)
//
// The message processing mutex is already released by the caller (HandleEvent),
// so we acquire it ourselves for the critical section.
func (l *Listener) HandlePolicyConfigUpdate(ev *event.E) error {
log.I.F("received policy config update from pubkey: %s", hex.Enc(ev.Pubkey))
// 1. Verify sender is owner or policy admin
if l.policyManager == nil {
return fmt.Errorf("policy system is not enabled")
}
isOwner := l.policyManager.IsOwner(ev.Pubkey)
isAdmin := l.policyManager.IsPolicyAdmin(ev.Pubkey)
if !isOwner && !isAdmin {
log.W.F("policy config update rejected: pubkey %s is not an owner or policy admin", hex.Enc(ev.Pubkey))
return fmt.Errorf("only owners and policy administrators can update policy configuration")
}
if isOwner {
log.I.F("owner verified: %s", hex.Enc(ev.Pubkey))
} else {
log.I.F("policy admin verified: %s", hex.Enc(ev.Pubkey))
}
// 2. Parse and validate JSON with appropriate validation rules
policyJSON := []byte(ev.Content)
var validationErr error
if isOwner {
// Owners can modify all fields, but owners list must be non-empty
validationErr = l.policyManager.ValidateOwnerPolicyUpdate(policyJSON)
} else {
// Policy admins have restrictions: can't modify protected fields, can't reduce permissions
validationErr = l.policyManager.ValidatePolicyAdminUpdate(policyJSON, ev.Pubkey)
}
if validationErr != nil {
log.E.F("policy config update validation failed: %v", validationErr)
return fmt.Errorf("invalid policy configuration: %v", validationErr)
}
log.I.F("policy config validation passed")
// Get config path for saving (uses custom path if set, otherwise default)
configPath := l.policyManager.ConfigPath()
// 3. Pause ALL message processing (lock mutex)
// Note: HandleMessage has already released its read lock by the time this
// handler runs, so we can directly acquire the exclusive lock here
log.I.F("pausing message processing for policy update")
l.Server.PauseMessageProcessing()
defer l.Server.ResumeMessageProcessing()
// 4. Reload policy (this will pause policy engine, update, save, and resume)
log.I.F("applying policy configuration update")
var reloadErr error
if isOwner {
reloadErr = l.policyManager.ReloadAsOwner(policyJSON, configPath)
} else {
reloadErr = l.policyManager.ReloadAsPolicyAdmin(policyJSON, configPath, ev.Pubkey)
}
if reloadErr != nil {
log.E.F("policy config update failed: %v", reloadErr)
return fmt.Errorf("failed to apply policy configuration: %v", reloadErr)
}
if isOwner {
log.I.F("policy configuration updated successfully by owner: %s", hex.Enc(ev.Pubkey))
} else {
log.I.F("policy configuration updated successfully by policy admin: %s", hex.Enc(ev.Pubkey))
}
// 5. Message processing mutex will be unlocked by defer
return nil
}
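The content of a kind 12345 (kind.PolicyConfig) event is the policy JSON itself. The full schema is defined by the policy package; a hypothetical minimal payload, using only the field names this handler's validation rules mention (owners, policy_admins) plus an illustrative allow list, might be:

{
  "owners": ["<64-hex-pubkey>"],
  "policy_admins": ["<64-hex-pubkey>"],
  "allowed_kinds": [0, 1, 3, 7]
}

An owner may rewrite any field so long as owners stays non-empty; a policy admin may only extend rules and may not touch owners or policy_admins.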
// HandlePolicyAdminFollowListUpdate processes kind 3 follow list events from policy admins.
// When a policy admin updates their follow list, we immediately refresh the policy follows cache.
//
// Process flow:
// 1. Check if sender is a policy admin
// 2. If yes, extract p-tags from the follow list
// 3. Pause message processing
// 4. Aggregate all policy admin follows and update cache
// 5. Resume message processing
func (l *Listener) HandlePolicyAdminFollowListUpdate(ev *event.E) error {
// Only process if policy system is enabled
if l.policyManager == nil || !l.policyManager.IsEnabled() {
return nil // Not an error, just ignore
}
// Check if sender is a policy admin
if !l.policyManager.IsPolicyAdmin(ev.Pubkey) {
return nil // Not a policy admin, ignore
}
log.I.F("policy admin %s updated their follow list, refreshing policy follows", hex.Enc(ev.Pubkey))
// Extract p-tags from this follow list event
newFollows := extractFollowsFromEvent(ev)
// Pause message processing for atomic update
log.D.F("pausing message processing for follow list update")
l.Server.PauseMessageProcessing()
defer l.Server.ResumeMessageProcessing()
// Re-fetch all policy admin follows from the database and merge in the
// new follows; if the query fails, fall back to the new follows alone
allFollows, err := l.fetchAllPolicyAdminFollows()
if err != nil {
log.W.F("failed to fetch all policy admin follows: %v, using new follows only", err)
allFollows = newFollows
} else {
// Merge with the new follows (deduplicated)
allFollows = mergeFollows(allFollows, newFollows)
}
// Update the policy follows cache
l.policyManager.UpdatePolicyFollows(allFollows)
log.I.F("policy follows cache updated with %d total pubkeys", len(allFollows))
return nil
}
// extractFollowsFromEvent extracts p-tag pubkeys from a kind 3 follow list event.
// Returns binary pubkeys.
func extractFollowsFromEvent(ev *event.E) [][]byte {
var follows [][]byte
pTags := ev.Tags.GetAll([]byte("p"))
for _, pTag := range pTags {
// ValueHex() handles both binary and hex storage formats automatically
pt, err := hex.Dec(string(pTag.ValueHex()))
if err != nil {
continue
}
follows = append(follows, pt)
}
return follows
}
// fetchAllPolicyAdminFollows fetches kind 3 events for all policy admins from the database
// and aggregates their follows.
func (l *Listener) fetchAllPolicyAdminFollows() ([][]byte, error) {
var allFollows [][]byte
seen := make(map[string]bool)
// Get policy admin pubkeys
admins := l.policyManager.GetPolicyAdminsBin()
if len(admins) == 0 {
return nil, fmt.Errorf("no policy admins configured")
}
// For each admin, query their latest kind 3 event
for _, adminPubkey := range admins {
// Build proper filter for kind 3 from this admin
f := filter.New()
f.Authors = tag.NewFromAny(adminPubkey)
f.Kinds = kind.NewS(kind.FollowList)
limit := uint(1)
f.Limit = &limit
// Query the database for kind 3 events from this admin
events, err := l.DB.QueryEvents(l.ctx, f)
if err != nil {
log.W.F("failed to query follows for admin %s: %v", hex.Enc(adminPubkey), err)
continue
}
// events is []*event.E - iterate over the slice
for _, ev := range events {
// Extract p-tags from this follow list
follows := extractFollowsFromEvent(ev)
for _, follow := range follows {
key := string(follow)
if !seen[key] {
seen[key] = true
allFollows = append(allFollows, follow)
}
}
}
}
return allFollows, nil
}
// mergeFollows merges two follow lists, removing duplicates.
func mergeFollows(existing, newFollows [][]byte) [][]byte {
seen := make(map[string]bool)
var result [][]byte
for _, f := range existing {
key := string(f)
if !seen[key] {
seen[key] = true
result = append(result, f)
}
}
for _, f := range newFollows {
key := string(f)
if !seen[key] {
seen[key] = true
result = append(result, f)
}
}
return result
}
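Usage sketch for mergeFollows, where pk1, pk2, pk3 stand for distinct 32-byte pubkeys:

// merged := mergeFollows([][]byte{pk1, pk2}, [][]byte{pk2, pk3})
// -> [pk1 pk2 pk3]: first-seen order is kept, duplicates are dropped

Keying the seen map with string(f) is the idiomatic way to use a byte slice as a set member in Go; the conversion copies the bytes, so later mutation of the inputs cannot corrupt the set.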
// IsPolicyConfigEvent returns true if the event is a policy configuration event (kind 12345)
func IsPolicyConfigEvent(ev *event.E) bool {
return ev.Kind == kind.PolicyConfig.K
}
// IsPolicyAdminFollowListEvent returns true if this is a follow list event from a policy admin.
// Used to detect when we need to refresh the policy follows cache.
func (l *Listener) IsPolicyAdminFollowListEvent(ev *event.E) bool {
// Must be kind 3 (follow list)
if ev.Kind != kind.FollowList.K {
return false
}
// Policy system must be enabled
if l.policyManager == nil || !l.policyManager.IsEnabled() {
return false
}
// Sender must be a policy admin
return l.policyManager.IsPolicyAdmin(ev.Pubkey)
}
// isPolicyAdmin checks if a pubkey is in the list of policy admins
func isPolicyAdmin(pubkey []byte, admins [][]byte) bool {
for _, admin := range admins {
if bytes.Equal(pubkey, admin) {
return true
}
}
return false
}
// InitializePolicyFollows loads the follow lists of all policy admins at startup.
// This should be called after the policy manager is initialized but before
// the relay starts accepting connections.
// It's a method on Server so it can be called from main.go during initialization.
func (s *Server) InitializePolicyFollows() error {
// Skip if policy system is not enabled
if s.policyManager == nil || !s.policyManager.IsEnabled() {
log.D.F("policy system not enabled, skipping follow list initialization")
return nil
}
// Skip if PolicyFollowWhitelistEnabled is false
if !s.policyManager.IsPolicyFollowWhitelistEnabled() {
log.D.F("policy follow whitelist not enabled, skipping follow list initialization")
return nil
}
log.I.F("initializing policy follows from database")
// Get policy admin pubkeys
admins := s.policyManager.GetPolicyAdminsBin()
if len(admins) == 0 {
log.W.F("no policy admins configured, skipping follow list initialization")
return nil
}
var allFollows [][]byte
seen := make(map[string]bool)
// For each admin, query their latest kind 3 event
for _, adminPubkey := range admins {
// Build proper filter for kind 3 from this admin
f := filter.New()
f.Authors = tag.NewFromAny(adminPubkey)
f.Kinds = kind.NewS(kind.FollowList)
limit := uint(1)
f.Limit = &limit
// Query the database for kind 3 events from this admin
events, err := s.DB.QueryEvents(s.Ctx, f)
if err != nil {
log.W.F("failed to query follows for admin %s: %v", hex.Enc(adminPubkey), err)
continue
}
// Extract p-tags from each follow list event
for _, ev := range events {
follows := extractFollowsFromEvent(ev)
for _, follow := range follows {
key := string(follow)
if !seen[key] {
seen[key] = true
allFollows = append(allFollows, follow)
}
}
}
}
// Update the policy follows cache
s.policyManager.UpdatePolicyFollows(allFollows)
log.I.F("policy follows initialized with %d pubkeys from %d admin(s)",
len(allFollows), len(admins))
return nil
}
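A sketch of where this fits at startup; everything around NewWithManager and InitializePolicyFollows is hypothetical wiring, but the ordering constraint is the documented one: the policy manager must exist first, and follows must be loaded before connections are accepted.

    server.policyManager = policy.NewWithManager(ctx, cfg.AppName, cfg.PolicyEnabled, "")
    if err := server.InitializePolicyFollows(); err != nil {
        log.E.F("failed to initialize policy follows: %v", err)
    }
    // ...start accepting connections only after the cache is warm, e.g.:
    // http.ListenAndServe(cfg.Listen, server)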

View File

@@ -9,12 +9,28 @@ import (
"lol.mleku.dev/chk"
"lol.mleku.dev/log"
"next.orly.dev/pkg/acl"
"next.orly.dev/pkg/interfaces/signer/p8k"
"next.orly.dev/pkg/encoders/hex"
"next.orly.dev/pkg/protocol/relayinfo"
"git.mleku.dev/mleku/nostr/interfaces/signer/p8k"
"git.mleku.dev/mleku/nostr/encoders/hex"
"git.mleku.dev/mleku/nostr/relayinfo"
"next.orly.dev/pkg/version"
)
// GraphQueryConfig describes graph query capabilities for NIP-11 advertisement.
type GraphQueryConfig struct {
Enabled bool `json:"enabled"`
MaxDepth int `json:"max_depth"`
MaxResults int `json:"max_results"`
Methods []string `json:"methods"`
}
// ExtendedRelayInfo extends the standard NIP-11 relay info with additional fields.
// The Addresses field contains alternative WebSocket URLs for the relay (e.g., .onion).
type ExtendedRelayInfo struct {
*relayinfo.T
Addresses []string `json:"addresses,omitempty"`
GraphQuery *GraphQueryConfig `json:"graph_query,omitempty"`
}
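Because relayinfo.T is embedded, its fields flatten into the top-level JSON object, and both extension fields are omitempty, so the document stays byte-compatible with plain NIP-11 when neither is set. A minimal sketch (the .onion URL is a placeholder; info is the standard document built in the handler below):

    ext := &ExtendedRelayInfo{
        T:         info, // standard NIP-11 fields flatten into the top level
        Addresses: []string{"ws://example1234567890.onion"},
    }
    b, _ := json.Marshal(ext)
    // b: {"name":...,"supported_nips":[...],...,"addresses":["ws://example1234567890.onion"]}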
// HandleRelayInfo generates and returns a relay information document in JSON
// format based on the server's configuration and supported NIPs.
//
@@ -30,7 +46,8 @@ import (
// Informer interface implementation or predefined server configuration. It
// returns this document as a JSON response to the client.
func (s *Server) HandleRelayInfo(w http.ResponseWriter, r *http.Request) {
r.Header.Set("Content-Type", "application/json")
w.Header().Set("Content-Type", "application/json")
w.Header().Set("Vary", "Accept")
log.D.Ln("handling relay information document")
var info *relayinfo.T
nips := []relayinfo.NIP{
@@ -124,6 +141,10 @@ func (s *Server) HandleRelayInfo(w http.ResponseWriter, r *http.Request) {
}
}
// RestrictedWrites applies when the ACL mode is neither managed/curating nor none
// (e.g., follows mode restricts writes to followed pubkeys)
restrictedWrites := s.Config.ACLMode != "managed" && s.Config.ACLMode != "curating" && s.Config.ACLMode != "none"
info = &relayinfo.T{
Name: name,
Description: description,
@@ -133,11 +154,52 @@ func (s *Server) HandleRelayInfo(w http.ResponseWriter, r *http.Request) {
Version: strings.TrimPrefix(version.V, "v"),
Limitation: relayinfo.Limits{
AuthRequired: s.Config.AuthRequired || s.Config.ACLMode != "none",
RestrictedWrites: s.Config.ACLMode != "managed" && s.Config.ACLMode != "none",
RestrictedWrites: restrictedWrites,
PaymentRequired: s.Config.MonthlyPriceSats > 0,
},
Icon: icon,
}
if err := json.NewEncoder(w).Encode(info); chk.E(err) {
// Build addresses list from config and Tor service
var addresses []string
// Add configured relay addresses
if len(s.Config.RelayAddresses) > 0 {
addresses = append(addresses, s.Config.RelayAddresses...)
}
// Add Tor hidden service address if available
if s.torService != nil {
if onionAddr := s.torService.OnionWSAddress(); onionAddr != "" {
addresses = append(addresses, onionAddr)
}
}
// Build graph query config if enabled
var graphConfig *GraphQueryConfig
if s.graphExecutor != nil && s.Config.GraphQueriesEnabled {
graphEnabled, maxDepth, maxResults, _ := s.Config.GetGraphConfigValues()
if graphEnabled {
graphConfig = &GraphQueryConfig{
Enabled: true,
MaxDepth: maxDepth,
MaxResults: maxResults,
Methods: []string{"follows", "followers", "mentions", "thread"},
}
}
}
// Return extended info if we have addresses or graph query support, otherwise standard info
if len(addresses) > 0 || graphConfig != nil {
extInfo := &ExtendedRelayInfo{
T: info,
Addresses: addresses,
GraphQuery: graphConfig,
}
if err := json.NewEncoder(w).Encode(extInfo); chk.E(err) {
}
} else {
if err := json.NewEncoder(w).Encode(info); chk.E(err) {
}
}
}

View File

@@ -12,22 +12,24 @@ import (
"lol.mleku.dev/chk"
"lol.mleku.dev/log"
"next.orly.dev/pkg/acl"
"next.orly.dev/pkg/encoders/bech32encoding"
"next.orly.dev/pkg/encoders/envelopes/authenvelope"
"next.orly.dev/pkg/encoders/envelopes/closedenvelope"
"next.orly.dev/pkg/encoders/envelopes/eoseenvelope"
"next.orly.dev/pkg/encoders/envelopes/eventenvelope"
"next.orly.dev/pkg/encoders/envelopes/reqenvelope"
"next.orly.dev/pkg/encoders/event"
"next.orly.dev/pkg/encoders/filter"
hexenc "next.orly.dev/pkg/encoders/hex"
"next.orly.dev/pkg/encoders/kind"
"next.orly.dev/pkg/encoders/reason"
"next.orly.dev/pkg/encoders/tag"
"git.mleku.dev/mleku/nostr/encoders/bech32encoding"
"git.mleku.dev/mleku/nostr/encoders/envelopes/authenvelope"
"git.mleku.dev/mleku/nostr/encoders/envelopes/closedenvelope"
"git.mleku.dev/mleku/nostr/encoders/envelopes/eoseenvelope"
"git.mleku.dev/mleku/nostr/encoders/envelopes/eventenvelope"
"git.mleku.dev/mleku/nostr/encoders/envelopes/reqenvelope"
"git.mleku.dev/mleku/nostr/encoders/event"
"git.mleku.dev/mleku/nostr/encoders/filter"
hexenc "git.mleku.dev/mleku/nostr/encoders/hex"
"git.mleku.dev/mleku/nostr/encoders/kind"
"git.mleku.dev/mleku/nostr/encoders/reason"
"git.mleku.dev/mleku/nostr/encoders/tag"
"next.orly.dev/pkg/policy"
"next.orly.dev/pkg/protocol/graph"
"next.orly.dev/pkg/protocol/nip43"
"next.orly.dev/pkg/utils"
"next.orly.dev/pkg/utils/normalize"
"next.orly.dev/pkg/utils/pointers"
"next.orly.dev/pkg/protocol/publish"
"git.mleku.dev/mleku/nostr/utils/normalize"
"git.mleku.dev/mleku/nostr/utils/pointers"
)
func (l *Listener) HandleReq(msg []byte) (err error) {
@@ -51,6 +53,51 @@ func (l *Listener) HandleReq(msg []byte) (err error) {
)
},
)
// NIP-46 signer-based authentication:
// If client is not authenticated and requests kind 24133 with exactly one #p tag,
// check if there's an active signer subscription for that pubkey.
// If so, authenticate the client as that pubkey.
const kindNIP46 = 24133
if len(l.authedPubkey.Load()) == 0 && len(*env.Filters) == 1 {
f := (*env.Filters)[0]
if f != nil && f.Kinds != nil && f.Kinds.Len() == 1 {
isNIP46Kind := false
for _, k := range f.Kinds.K {
if k.K == kindNIP46 {
isNIP46Kind = true
break
}
}
if isNIP46Kind && f.Tags != nil {
pTag := f.Tags.GetFirst([]byte("p"))
// Must have exactly one pubkey in the #p tag
if pTag != nil && pTag.Len() == 2 {
signerPubkey := pTag.Value()
// Convert to binary if hex
var signerPubkeyBin []byte
if len(signerPubkey) == 64 {
signerPubkeyBin, _ = hexenc.Dec(string(signerPubkey))
} else if len(signerPubkey) == 32 {
signerPubkeyBin = signerPubkey
}
if len(signerPubkeyBin) == 32 {
// Check if there's an active signer for this pubkey
if socketPub := l.publishers.GetSocketPublisher(); socketPub != nil {
if checker, ok := socketPub.(publish.NIP46SignerChecker); ok {
if checker.HasActiveNIP46Signer(signerPubkeyBin) {
log.I.F("NIP-46 auth: client %s authenticated via active signer %s",
l.remote, hexenc.Enc(signerPubkeyBin))
l.authedPubkey.Store(signerPubkeyBin)
}
}
}
}
}
}
}
}
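For illustration, the REQ shape that takes this path — a single filter with only kind 24133 and exactly one pubkey in the #p tag (the subscription ID and pubkey are placeholders):

    const exampleNIP46Req = `["REQ","nip46-sub",{"kinds":[24133],"#p":["<64-hex-signer-pubkey>"]}]`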
// send a challenge to the client to auth if an ACL is active, auth is required, or AuthToWrite is enabled
if len(l.authedPubkey.Load()) == 0 && (acl.Registry.Active.Load() != "none" || l.Config.AuthRequired || l.Config.AuthToWrite) {
if err = authenvelope.NewChallengeWith(l.challenge.Load()).
@@ -142,6 +189,92 @@ func (l *Listener) HandleReq(msg []byte) (err error) {
}
}
// Check for NIP-XX graph queries in filters
// Graph queries use the _graph filter extension to traverse the social graph
for _, f := range *env.Filters {
if f != nil && graph.IsGraphQuery(f) {
graphQuery, graphErr := graph.ExtractFromFilter(f)
if graphErr != nil {
log.W.F("invalid _graph query from %s: %v", l.remote, graphErr)
if err = closedenvelope.NewFrom(
env.Subscription,
reason.Error.F("invalid _graph query: %s", graphErr.Error()),
).Write(l); chk.E(err) {
return
}
return
}
if graphQuery != nil {
log.I.F("graph query from %s: method=%s seed=%s depth=%d",
l.remote, graphQuery.Method, graphQuery.Seed, graphQuery.Depth)
// Check if graph executor is available
if l.graphExecutor == nil {
log.W.F("graph query received but executor not initialized")
if err = closedenvelope.NewFrom(
env.Subscription,
reason.Error.F("graph queries not supported on this relay"),
).Write(l); chk.E(err) {
return
}
return
}
// Execute the graph query
resultEvent, execErr := l.graphExecutor.Execute(graphQuery)
if execErr != nil {
log.W.F("graph query execution failed from %s: %v", l.remote, execErr)
if err = closedenvelope.NewFrom(
env.Subscription,
reason.Error.F("graph query failed: %s", execErr.Error()),
).Write(l); chk.E(err) {
return
}
return
}
// Send the result event
var res *eventenvelope.Result
if res, err = eventenvelope.NewResultWith(env.Subscription, resultEvent); chk.E(err) {
return
}
if err = res.Write(l); chk.E(err) {
return
}
// Send EOSE to signal completion
if err = eoseenvelope.NewFrom(env.Subscription).Write(l); chk.E(err) {
return
}
log.I.F("graph query completed for %s: method=%s, returned event kind %d",
l.remote, graphQuery.Method, resultEvent.Kind)
return
}
}
}
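A plausible REQ carrying the _graph extension, inferred from the Method/Seed/Depth fields logged above and the methods advertised in NIP-11; the exact key names are defined in pkg/protocol/graph and may differ:

    const exampleGraphReq = `["REQ","graph-1",{"_graph":{"method":"follows","seed":"<64-hex-pubkey>","depth":2}}]`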
// Filter out policy config events (kind 12345) for non-policy-admin users
// Policy config events should only be visible to policy administrators
if l.policyManager != nil && l.policyManager.IsEnabled() {
isPolicyAdmin := l.policyManager.IsPolicyAdmin(l.authedPubkey.Load())
if !isPolicyAdmin {
// Remove kind 12345 from all filters
for _, f := range *env.Filters {
if f != nil && f.Kinds != nil && f.Kinds.Len() > 0 {
// Create a new kinds list without PolicyConfig
var filteredKinds []*kind.K
for _, k := range f.Kinds.K {
if k.K != kind.PolicyConfig.K {
filteredKinds = append(filteredKinds, k)
}
}
f.Kinds.K = filteredKinds
}
}
}
}
var events event.S
// Create a single context for all filter queries, isolated from the connection context
// to prevent query timeouts from affecting the long-lived websocket connection
@@ -154,11 +287,15 @@ func (l *Listener) HandleReq(msg []byte) (err error) {
// Multi-filter queries are not cached as they're more complex
if env.Filters != nil && len(*env.Filters) == 1 {
f := (*env.Filters)[0]
if cachedJSON, found := l.DB.GetCachedJSON(f); found {
log.D.F("REQ %s: cache HIT, sending %d cached events", env.Subscription, len(cachedJSON))
// Send cached JSON directly
for _, jsonEnvelope := range cachedJSON {
if _, err = l.Write(jsonEnvelope); err != nil {
if cachedEvents, found := l.DB.GetCachedEvents(f); found {
log.D.F("REQ %s: cache HIT, sending %d cached events", env.Subscription, len(cachedEvents))
// Wrap cached events with current subscription ID
for _, ev := range cachedEvents {
var res *eventenvelope.Result
if res, err = eventenvelope.NewResultWith(env.Subscription, ev); chk.E(err) {
return
}
if err = res.Write(l); err != nil {
if !strings.Contains(err.Error(), "context canceled") {
chk.E(err)
}
@@ -170,7 +307,7 @@ func (l *Listener) HandleReq(msg []byte) (err error) {
return
}
// Don't create subscription for cached results with satisfied limits
if f.Limit != nil && len(cachedJSON) >= int(*f.Limit) {
if f.Limit != nil && len(cachedEvents) >= int(*f.Limit) {
log.D.F("REQ %s: limit satisfied by cache, not creating subscription", env.Subscription)
return
}
@@ -348,10 +485,12 @@ func (l *Listener) HandleReq(msg []byte) (err error) {
// Event has private tag and user is authorized - continue to privileged check
}
// Always filter privileged events based on kind, regardless of ACLMode
// Filter privileged events based on kind when ACL is active
// When ACL is "none", skip privileged filtering to allow open access
// Privileged events should only be sent to users who are authenticated and
// are either the event author or listed in p tags
if kind.IsPrivileged(ev.Kind) && accessLevel != "admin" { // admins can see all events
aclActive := acl.Registry.Active.Load() != "none"
if kind.IsPrivileged(ev.Kind) && aclActive && accessLevel != "admin" { // admins can see all events
log.T.C(
func() string {
return fmt.Sprintf(
@@ -360,123 +499,39 @@ func (l *Listener) HandleReq(msg []byte) (err error) {
},
)
pk := l.authedPubkey.Load()
if pk == nil {
// Not authenticated - cannot see privileged events
// Use centralized IsPartyInvolved function for consistent privilege checking
if policy.IsPartyInvolved(ev, pk) {
log.T.C(
func() string {
return fmt.Sprintf(
"privileged event %s denied - not authenticated",
ev.ID,
)
},
)
continue
}
// Check if user is authorized to see this privileged event
authorized := false
if utils.FastEqual(ev.Pubkey, pk) {
authorized = true
log.T.C(
func() string {
return fmt.Sprintf(
"privileged event %s is for logged in pubkey %0x",
"privileged event %s allowed for logged in pubkey %0x",
ev.ID, pk,
)
},
)
} else {
// Check p tags
pTags := ev.Tags.GetAll([]byte("p"))
for _, pTag := range pTags {
var pt []byte
if pt, err = hexenc.Dec(string(pTag.Value())); chk.E(err) {
continue
}
if utils.FastEqual(pt, pk) {
authorized = true
log.T.C(
func() string {
return fmt.Sprintf(
"privileged event %s is for logged in pubkey %0x",
ev.ID, pk,
)
},
)
break
}
}
}
if authorized {
tmp = append(tmp, ev)
} else {
log.T.C(
func() string {
return fmt.Sprintf(
"privileged event %s does not contain the logged in pubkey %0x",
"privileged event %s denied for pubkey %0x (not authenticated or not a party involved)",
ev.ID, pk,
)
},
)
}
} else {
// Check if policy defines this event as privileged (even if not in hardcoded list)
// Policy check will handle this later, but we can skip it here if not authenticated
// to avoid unnecessary processing
if l.policyManager != nil && l.policyManager.Manager != nil && l.policyManager.Manager.IsEnabled() {
rule, hasRule := l.policyManager.Rules[int(ev.Kind)]
if hasRule && rule.Privileged && accessLevel != "admin" {
pk := l.authedPubkey.Load()
if pk == nil {
// Not authenticated - cannot see policy-privileged events
log.T.C(
func() string {
return fmt.Sprintf(
"policy-privileged event %s denied - not authenticated",
ev.ID,
)
},
)
continue
}
// Policy check will verify authorization later, but we need to check
// if user is party to the event here
authorized := false
if utils.FastEqual(ev.Pubkey, pk) {
authorized = true
} else {
// Check p tags
pTags := ev.Tags.GetAll([]byte("p"))
for _, pTag := range pTags {
var pt []byte
if pt, err = hexenc.Dec(string(pTag.Value())); chk.E(err) {
continue
}
if utils.FastEqual(pt, pk) {
authorized = true
break
}
}
}
if !authorized {
log.T.C(
func() string {
return fmt.Sprintf(
"policy-privileged event %s does not contain the logged in pubkey %0x",
ev.ID, pk,
)
},
)
continue
}
}
}
// Policy-defined privileged events are handled by the policy engine
// at line 455+. No early filtering needed here - delegate entirely to
// the policy engine to avoid duplicate logic.
tmp = append(tmp, ev)
}
}
events = tmp
// Apply policy filtering for read access if policy is enabled
if l.policyManager != nil && l.policyManager.Manager != nil && l.policyManager.Manager.IsEnabled() {
if l.policyManager.IsEnabled() {
var policyFilteredEvents event.S
for _, ev := range events {
allowed, policyErr := l.policyManager.CheckPolicy("read", ev, l.authedPubkey.Load(), l.remote)
@@ -547,6 +602,27 @@ func (l *Listener) HandleReq(msg []byte) (err error) {
events = aclFilteredEvents
}
// Apply curating ACL filtering for read access if curating ACL is active
if acl.Registry.Active.Load() == "curating" {
// Find the curating ACL instance
for _, aclInstance := range acl.Registry.ACL {
if aclInstance.Type() == "curating" {
if curatingACL, ok := aclInstance.(*acl.Curating); ok {
var curatingFilteredEvents event.S
for _, ev := range events {
if curatingACL.IsEventVisible(ev, accessLevel) {
curatingFilteredEvents = append(curatingFilteredEvents, ev)
} else {
log.D.F("curating ACL filtered out event %s from blacklisted pubkey", hexenc.Enc(ev.ID))
}
}
events = curatingFilteredEvents
}
break
}
}
}
// Apply private tag filtering - only show events with "private" tags to authorized users
var privateFilteredEvents event.S
authedPubkey := l.authedPubkey.Load()
@@ -586,8 +662,7 @@ func (l *Listener) HandleReq(msg []byte) (err error) {
events = privateFilteredEvents
seen := make(map[string]struct{})
// Collect marshaled JSON for caching (only for single-filter queries)
var marshaledForCache [][]byte
// Cache events for single-filter queries (without subscription ID)
shouldCache := len(*env.Filters) == 1 && len(events) > 0
for _, ev := range events {
@@ -611,17 +686,6 @@ func (l *Listener) HandleReq(msg []byte) (err error) {
return
}
// Get serialized envelope for caching
if shouldCache {
serialized := res.Marshal(nil)
if len(serialized) > 0 {
// Make a copy for the cache
cacheCopy := make([]byte, len(serialized))
copy(cacheCopy, serialized)
marshaledForCache = append(marshaledForCache, cacheCopy)
}
}
if err = res.Write(l); err != nil {
// Don't log context canceled errors as they're expected during shutdown
if !strings.Contains(err.Error(), "context canceled") {
@@ -634,10 +698,11 @@ func (l *Listener) HandleReq(msg []byte) (err error) {
}
// Populate cache after successfully sending all events
if shouldCache && len(marshaledForCache) > 0 {
// Cache the events themselves (not marshaled JSON with subscription ID)
if shouldCache && len(events) > 0 {
f := (*env.Filters)[0]
l.DB.CacheMarshaledJSON(f, marshaledForCache)
log.D.F("REQ %s: cached %d marshaled events", env.Subscription, len(marshaledForCache))
l.DB.CacheEvents(f, events)
log.D.F("REQ %s: cached %d events", env.Subscription, len(events))
}
// write the EOSE to signal to the client that all events found have been
// sent.
@@ -646,6 +711,31 @@ func (l *Listener) HandleReq(msg []byte) (err error) {
Write(l); chk.E(err) {
return
}
// Record access for returned events (for GC access-based ranking)
if l.accessTracker != nil && len(events) > 0 {
go func(evts event.S, connID string) {
for _, ev := range evts {
if ser, err := l.DB.GetSerialById(ev.ID); err == nil && ser != nil {
l.accessTracker.RecordAccess(ser.Get(), connID)
}
}
}(events, l.connectionID)
}
// Trigger archive relay query if enabled (background fetch + stream results)
if l.archiveManager != nil && l.archiveManager.IsEnabled() && len(*env.Filters) > 0 {
// Use first filter for archive query
f := (*env.Filters)[0]
go l.archiveManager.QueryArchive(
string(env.Subscription),
l.connectionID,
f,
seen,
l, // implements EventDeliveryChannel
)
}
// if the query was for just Ids, we know there can't be any more results,
// so cancel the subscription.
cancel := true

View File

@@ -3,6 +3,7 @@ package app
import (
"context"
"crypto/rand"
"fmt"
"net/http"
"strings"
"time"
@@ -10,10 +11,11 @@ import (
"github.com/gorilla/websocket"
"lol.mleku.dev/chk"
"lol.mleku.dev/log"
"next.orly.dev/pkg/encoders/envelopes/authenvelope"
"next.orly.dev/pkg/encoders/hex"
"git.mleku.dev/mleku/nostr/encoders/envelopes/authenvelope"
"git.mleku.dev/mleku/nostr/encoders/hex"
"next.orly.dev/pkg/cashu/token"
"next.orly.dev/pkg/protocol/publish"
"next.orly.dev/pkg/utils/units"
"git.mleku.dev/mleku/nostr/utils/units"
)
const (
@@ -21,7 +23,10 @@ const (
DefaultPongWait = 60 * time.Second
DefaultPingWait = DefaultPongWait / 2
DefaultWriteTimeout = 3 * time.Second
DefaultMaxMessageSize = 512000 // Match khatru's MaxMessageSize
// DefaultMaxMessageSize is the maximum message size for WebSocket connections
// Increased from 512KB to 10MB to support large kind 3 follow lists (10k+ follows)
// and other large events without truncation
DefaultMaxMessageSize = 10 * 1024 * 1024 // 10MB
// ClientMessageSizeLimit is the maximum message size that clients can handle
// This is set to 100MB to allow large messages
ClientMessageSizeLimit = 100 * 1024 * 1024 // 100MB
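Back-of-envelope check on the new limit (a sketch; sizes are approximate):

    // Each ["p","<64-hex>"] tag costs roughly 72 bytes of JSON, so:
    //   10,000 follows  ≈ 720KB (already above the old 512KB cap)
    //   100,000 follows ≈ 7.2MB (still under the new 10MB cap)
    const approxPTagBytes = 72
    var tenKFollowListBytes = 10_000 * approxPTagBytes // ≈ 720KB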
@@ -52,6 +57,12 @@ func (s *Server) HandleWebsocket(w http.ResponseWriter, r *http.Request) {
return
}
whitelist:
// Extract and verify Cashu access token if verifier is configured
var cashuToken *token.Token
if s.CashuVerifier != nil {
cashuToken = s.extractWebSocketToken(r, remote)
}
// Create an independent context for this connection
// This context will be cancelled when the connection closes or server shuts down
ctx, cancel := context.WithCancel(s.Ctx)
@@ -83,18 +94,28 @@ whitelist:
})
defer conn.Close()
// Determine handler semaphore size from config
handlerSemSize := s.Config.MaxHandlersPerConnection
if handlerSemSize <= 0 {
handlerSemSize = 100 // Default if not configured
}
now := time.Now()
listener := &Listener{
ctx: ctx,
cancel: cancel,
Server: s,
conn: conn,
remote: remote,
connectionID: fmt.Sprintf("%s-%d", remote, now.UnixNano()), // Unique connection ID for access tracking
req: r,
startTime: time.Now(),
cashuToken: cashuToken, // Verified Cashu access token (nil if none provided)
startTime: now,
writeChan: make(chan publish.WriteRequest, 100), // Buffered channel for writes
writeDone: make(chan struct{}),
messageQueue: make(chan messageRequest, 100), // Buffered channel for message processing
processingDone: make(chan struct{}),
handlerSem: make(chan struct{}, handlerSemSize), // Limits concurrent handlers
subscriptions: make(map[string]context.CancelFunc),
}
@@ -281,3 +302,54 @@ func (s *Server) Pinger(
}
}
}
// extractWebSocketToken extracts and verifies a Cashu access token from a WebSocket upgrade request.
// Checks query param first (for browser WebSocket clients), then headers.
// Returns nil if no token is provided or if token verification fails.
func (s *Server) extractWebSocketToken(r *http.Request, remote string) *token.Token {
// Try query param first (WebSocket clients often can't set custom headers)
tokenStr := r.URL.Query().Get("token")
// Try X-Cashu-Token header
if tokenStr == "" {
tokenStr = r.Header.Get("X-Cashu-Token")
}
// Try Authorization: Cashu scheme
if tokenStr == "" {
auth := r.Header.Get("Authorization")
if strings.HasPrefix(auth, "Cashu ") {
tokenStr = strings.TrimPrefix(auth, "Cashu ")
}
}
// No token provided - this is fine, connection proceeds without token
if tokenStr == "" {
return nil
}
// Parse the token
tok, err := token.Parse(tokenStr)
if err != nil {
log.W.F("ws %s: invalid Cashu token format: %v", remote, err)
return nil
}
// Verify token - accept both "relay" and "nip46" scopes for WebSocket connections
// NIP-46 connections are also WebSocket-based
ctx := context.Background()
if err := s.CashuVerifier.Verify(ctx, tok, remote); err != nil {
log.W.F("ws %s: Cashu token verification failed: %v", remote, err)
return nil
}
// Check scope - allow "relay" or "nip46"
if tok.Scope != token.ScopeRelay && tok.Scope != token.ScopeNIP46 {
log.W.F("ws %s: Cashu token has invalid scope %q for WebSocket", remote, tok.Scope)
return nil
}
log.D.F("ws %s: verified Cashu token with scope %q, expires %v",
remote, tok.Scope, tok.ExpiresAt())
return tok
}
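A client-side sketch using gorilla/websocket showing both delivery paths; the relay URL and token string are placeholders:

    import (
        "net/http"

        "github.com/gorilla/websocket"
    )

    func dialWithCashuToken() (*websocket.Conn, error) {
        // Query param: the only option for browser WebSocket clients.
        conn, _, err := websocket.DefaultDialer.Dial("wss://relay.example.com/?token=cashuA...", nil)
        if err == nil {
            return conn, nil
        }
        // Header: available to native clients.
        hdr := http.Header{}
        hdr.Set("X-Cashu-Token", "cashuA...")
        conn, _, err = websocket.DefaultDialer.Dial("wss://relay.example.com/", hdr)
        return conn, err
    }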

514
app/handle-wireguard.go Normal file
View File

@@ -0,0 +1,514 @@
package app
import (
"encoding/base64"
"encoding/json"
"fmt"
"net/http"
"git.mleku.dev/mleku/nostr/encoders/bech32encoding"
"git.mleku.dev/mleku/nostr/encoders/hex"
"git.mleku.dev/mleku/nostr/httpauth"
"lol.mleku.dev/chk"
"lol.mleku.dev/log"
"next.orly.dev/pkg/acl"
"next.orly.dev/pkg/database"
)
// WireGuardConfigResponse is returned by the /api/wireguard/config endpoint.
type WireGuardConfigResponse struct {
ConfigText string `json:"config_text"`
Interface WGInterface `json:"interface"`
Peer WGPeer `json:"peer"`
}
// WGInterface represents the [Interface] section of a WireGuard config.
type WGInterface struct {
Address string `json:"address"`
PrivateKey string `json:"private_key"`
}
// WGPeer represents the [Peer] section of a WireGuard config.
type WGPeer struct {
PublicKey string `json:"public_key"`
Endpoint string `json:"endpoint"`
AllowedIPs string `json:"allowed_ips"`
}
// BunkerURLResponse is returned by the /api/bunker/url endpoint.
type BunkerURLResponse struct {
URL string `json:"url"`
RelayNpub string `json:"relay_npub"`
RelayPubkey string `json:"relay_pubkey"`
InternalIP string `json:"internal_ip"`
}
// handleWireGuardConfig returns the user's WireGuard configuration.
// Requires NIP-98 authentication and write+ access.
func (s *Server) handleWireGuardConfig(w http.ResponseWriter, r *http.Request) {
if r.Method != http.MethodGet {
http.Error(w, "Method not allowed", http.StatusMethodNotAllowed)
return
}
// Check if WireGuard is enabled
if !s.Config.WGEnabled {
http.Error(w, "WireGuard is not enabled on this relay", http.StatusNotFound)
return
}
// Check if ACL mode supports WireGuard
if s.Config.ACLMode == "none" {
http.Error(w, "WireGuard requires ACL mode 'follows' or 'managed'", http.StatusForbidden)
return
}
// Validate NIP-98 authentication
valid, pubkey, err := httpauth.CheckAuth(r)
if chk.E(err) || !valid {
http.Error(w, "NIP-98 authentication required", http.StatusUnauthorized)
return
}
// Check user has write+ access
accessLevel := acl.Registry.GetAccessLevel(pubkey, r.RemoteAddr)
if accessLevel != "write" && accessLevel != "admin" && accessLevel != "owner" {
http.Error(w, "Write access required for WireGuard", http.StatusForbidden)
return
}
// Type assert to Badger database for WireGuard methods
badgerDB, ok := s.DB.(*database.D)
if !ok {
http.Error(w, "WireGuard requires Badger database backend", http.StatusInternalServerError)
return
}
// Check subnet pool is available
if s.subnetPool == nil {
http.Error(w, "WireGuard subnet pool not initialized", http.StatusInternalServerError)
return
}
// Get or create WireGuard peer for this user
peer, err := badgerDB.GetOrCreateWireGuardPeer(pubkey, s.subnetPool)
if chk.E(err) {
log.E.F("failed to get/create WireGuard peer: %v", err)
http.Error(w, "Failed to create WireGuard configuration", http.StatusInternalServerError)
return
}
// Derive subnet IPs from sequence
subnet := s.subnetPool.SubnetForSequence(peer.Sequence)
clientIP := subnet.ClientIP.String()
serverIP := subnet.ServerIP.String()
// Get server public key
serverKey, err := badgerDB.GetOrCreateWireGuardServerKey()
if chk.E(err) {
log.E.F("failed to get WireGuard server key: %v", err)
http.Error(w, "WireGuard server not configured", http.StatusInternalServerError)
return
}
serverPubKey, err := deriveWGPublicKey(serverKey)
if chk.E(err) {
log.E.F("failed to derive server public key: %v", err)
http.Error(w, "WireGuard server error", http.StatusInternalServerError)
return
}
// Build endpoint
endpoint := fmt.Sprintf("%s:%d", s.Config.WGEndpoint, s.Config.WGPort)
// Build response
resp := WireGuardConfigResponse{
Interface: WGInterface{
Address: clientIP + "/32",
PrivateKey: base64.StdEncoding.EncodeToString(peer.WGPrivateKey),
},
Peer: WGPeer{
PublicKey: base64.StdEncoding.EncodeToString(serverPubKey),
Endpoint: endpoint,
AllowedIPs: serverIP + "/32", // Only route bunker traffic to this peer's server IP
},
}
// Generate config text
resp.ConfigText = fmt.Sprintf(`[Interface]
Address = %s
PrivateKey = %s
[Peer]
PublicKey = %s
Endpoint = %s
AllowedIPs = %s
PersistentKeepalive = 25
`, resp.Interface.Address, resp.Interface.PrivateKey,
resp.Peer.PublicKey, resp.Peer.Endpoint, resp.Peer.AllowedIPs)
// If WireGuard server is running, add the peer
if s.wireguardServer != nil && s.wireguardServer.IsRunning() {
if err := s.wireguardServer.AddPeer(pubkey, peer.WGPublicKey, clientIP); chk.E(err) {
log.W.F("failed to add peer to running WireGuard server: %v", err)
}
}
w.Header().Set("Content-Type", "application/json")
json.NewEncoder(w).Encode(resp)
}
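Client-side, fetching and saving the config might look like the following sketch, written as the body of a hypothetical helper returning error (assumes encoding/json, net/http, and os imports; the Authorization value stands in for a real base64-encoded NIP-98 event):

    req, _ := http.NewRequest(http.MethodGet, "https://relay.example.com/api/wireguard/config", nil)
    req.Header.Set("Authorization", "Nostr <base64-nip98-event>") // placeholder
    res, err := http.DefaultClient.Do(req)
    if err != nil {
        return err
    }
    defer res.Body.Close()
    var cfg WireGuardConfigResponse
    if err = json.NewDecoder(res.Body).Decode(&cfg); err != nil {
        return err
    }
    // ConfigText is ready to write to a wg-quick file:
    return os.WriteFile("wg0.conf", []byte(cfg.ConfigText), 0600)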
// handleWireGuardRegenerate generates a new WireGuard keypair for the user.
// Requires NIP-98 authentication and write+ access.
func (s *Server) handleWireGuardRegenerate(w http.ResponseWriter, r *http.Request) {
if r.Method != http.MethodPost {
http.Error(w, "Method not allowed", http.StatusMethodNotAllowed)
return
}
// Check if WireGuard is enabled
if !s.Config.WGEnabled {
http.Error(w, "WireGuard is not enabled on this relay", http.StatusNotFound)
return
}
// Check if ACL mode supports WireGuard
if s.Config.ACLMode == "none" {
http.Error(w, "WireGuard requires ACL mode 'follows' or 'managed'", http.StatusForbidden)
return
}
// Validate NIP-98 authentication
valid, pubkey, err := httpauth.CheckAuth(r)
if chk.E(err) || !valid {
http.Error(w, "NIP-98 authentication required", http.StatusUnauthorized)
return
}
// Check user has write+ access
accessLevel := acl.Registry.GetAccessLevel(pubkey, r.RemoteAddr)
if accessLevel != "write" && accessLevel != "admin" && accessLevel != "owner" {
http.Error(w, "Write access required for WireGuard", http.StatusForbidden)
return
}
// Type assert to Badger database for WireGuard methods
badgerDB, ok := s.DB.(*database.D)
if !ok {
http.Error(w, "WireGuard requires Badger database backend", http.StatusInternalServerError)
return
}
// Check subnet pool is available
if s.subnetPool == nil {
http.Error(w, "WireGuard subnet pool not initialized", http.StatusInternalServerError)
return
}
// Remove old peer from running server if exists
oldPeer, err := badgerDB.GetWireGuardPeer(pubkey)
if err == nil && oldPeer != nil && s.wireguardServer != nil && s.wireguardServer.IsRunning() {
s.wireguardServer.RemovePeer(oldPeer.WGPublicKey)
}
// Regenerate keypair
peer, err := badgerDB.RegenerateWireGuardPeer(pubkey, s.subnetPool)
if chk.E(err) {
log.E.F("failed to regenerate WireGuard peer: %v", err)
http.Error(w, "Failed to regenerate WireGuard configuration", http.StatusInternalServerError)
return
}
// Derive subnet IPs from sequence (same sequence as before)
subnet := s.subnetPool.SubnetForSequence(peer.Sequence)
clientIP := subnet.ClientIP.String()
log.I.F("regenerated WireGuard keypair for user: %s", hex.Enc(pubkey[:8]))
// Return success with IP (same subnet as before)
w.Header().Set("Content-Type", "application/json")
json.NewEncoder(w).Encode(map[string]string{
"status": "regenerated",
"assigned_ip": clientIP,
})
}
// handleBunkerURL returns the bunker connection URL.
// Requires NIP-98 authentication and write+ access.
func (s *Server) handleBunkerURL(w http.ResponseWriter, r *http.Request) {
if r.Method != http.MethodGet {
http.Error(w, "Method not allowed", http.StatusMethodNotAllowed)
return
}
// Check if bunker is enabled
if !s.Config.BunkerEnabled {
http.Error(w, "Bunker is not enabled on this relay", http.StatusNotFound)
return
}
// Check if WireGuard is enabled (required for bunker)
if !s.Config.WGEnabled {
http.Error(w, "WireGuard is required for bunker access", http.StatusNotFound)
return
}
// Check if ACL mode supports WireGuard
if s.Config.ACLMode == "none" {
http.Error(w, "Bunker requires ACL mode 'follows' or 'managed'", http.StatusForbidden)
return
}
// Validate NIP-98 authentication
valid, pubkey, err := httpauth.CheckAuth(r)
if chk.E(err) || !valid {
http.Error(w, "NIP-98 authentication required", http.StatusUnauthorized)
return
}
// Check user has write+ access
accessLevel := acl.Registry.GetAccessLevel(pubkey, r.RemoteAddr)
if accessLevel != "write" && accessLevel != "admin" && accessLevel != "owner" {
http.Error(w, "Write access required for bunker", http.StatusForbidden)
return
}
// Type assert to Badger database for WireGuard methods
badgerDB, ok := s.DB.(*database.D)
if !ok {
http.Error(w, "Bunker requires Badger database backend", http.StatusInternalServerError)
return
}
// Check subnet pool is available
if s.subnetPool == nil {
http.Error(w, "WireGuard subnet pool not initialized", http.StatusInternalServerError)
return
}
// Get or create WireGuard peer to get their subnet
peer, err := badgerDB.GetOrCreateWireGuardPeer(pubkey, s.subnetPool)
if chk.E(err) {
log.E.F("failed to get/create WireGuard peer for bunker: %v", err)
http.Error(w, "Failed to get WireGuard configuration", http.StatusInternalServerError)
return
}
// Derive server IP for this peer's subnet
subnet := s.subnetPool.SubnetForSequence(peer.Sequence)
serverIP := subnet.ServerIP.String()
// Get relay identity
relaySecret, err := s.DB.GetOrCreateRelayIdentitySecret()
if chk.E(err) {
log.E.F("failed to get relay identity: %v", err)
http.Error(w, "Failed to get relay identity", http.StatusInternalServerError)
return
}
relayPubkey, err := deriveNostrPublicKey(relaySecret)
if chk.E(err) {
log.E.F("failed to derive relay public key: %v", err)
http.Error(w, "Failed to derive relay public key", http.StatusInternalServerError)
return
}
// Encode as npub
relayNpubBytes, err := bech32encoding.BinToNpub(relayPubkey)
relayNpub := string(relayNpubBytes)
if chk.E(err) {
relayNpub = hex.Enc(relayPubkey) // Fallback to hex
}
// Build bunker URL using this peer's server IP
// Format: bunker://<relay-pubkey-hex>?relay=ws://<server-ip>:3335
relayPubkeyHex := hex.Enc(relayPubkey)
bunkerURL := fmt.Sprintf("bunker://%s?relay=ws://%s:%d",
relayPubkeyHex,
serverIP,
s.Config.BunkerPort,
)
resp := BunkerURLResponse{
URL: bunkerURL,
RelayNpub: relayNpub,
RelayPubkey: relayPubkeyHex,
InternalIP: serverIP,
}
w.Header().Set("Content-Type", "application/json")
json.NewEncoder(w).Encode(resp)
}
// handleWireGuardStatus returns whether WireGuard/Bunker are available.
func (s *Server) handleWireGuardStatus(w http.ResponseWriter, r *http.Request) {
if r.Method != http.MethodGet {
http.Error(w, "Method not allowed", http.StatusMethodNotAllowed)
return
}
resp := map[string]interface{}{
"wireguard_enabled": s.Config.WGEnabled,
"bunker_enabled": s.Config.BunkerEnabled,
"acl_mode": s.Config.ACLMode,
"available": s.Config.WGEnabled && s.Config.ACLMode != "none",
}
if s.wireguardServer != nil {
resp["wireguard_running"] = s.wireguardServer.IsRunning()
resp["peer_count"] = s.wireguardServer.PeerCount()
}
if s.bunkerServer != nil {
resp["bunker_sessions"] = s.bunkerServer.SessionCount()
}
w.Header().Set("Content-Type", "application/json")
json.NewEncoder(w).Encode(resp)
}
// RevokedKeyResponse is the JSON response for revoked keys.
type RevokedKeyResponse struct {
NostrPubkey string `json:"nostr_pubkey"`
WGPublicKey string `json:"wg_public_key"`
Sequence uint32 `json:"sequence"`
ClientIP string `json:"client_ip"`
ServerIP string `json:"server_ip"`
CreatedAt int64 `json:"created_at"`
RevokedAt int64 `json:"revoked_at"`
AccessCount int `json:"access_count"`
LastAccessAt int64 `json:"last_access_at"`
}
// AccessLogResponse is the JSON response for access logs.
type AccessLogResponse struct {
NostrPubkey string `json:"nostr_pubkey"`
WGPublicKey string `json:"wg_public_key"`
Sequence uint32 `json:"sequence"`
ClientIP string `json:"client_ip"`
Timestamp int64 `json:"timestamp"`
RemoteAddr string `json:"remote_addr"`
}
// handleWireGuardAudit returns the user's own revoked keys and access logs.
// This lets users see if their old WireGuard keys are still being used,
// which could indicate a device was left connected or their credentials were copied.
// Requires NIP-98 authentication and write+ access.
func (s *Server) handleWireGuardAudit(w http.ResponseWriter, r *http.Request) {
if r.Method != http.MethodGet {
http.Error(w, "Method not allowed", http.StatusMethodNotAllowed)
return
}
// Check if WireGuard is enabled
if !s.Config.WGEnabled {
http.Error(w, "WireGuard is not enabled on this relay", http.StatusNotFound)
return
}
// Validate NIP-98 authentication
valid, pubkey, err := httpauth.CheckAuth(r)
if chk.E(err) || !valid {
http.Error(w, "NIP-98 authentication required", http.StatusUnauthorized)
return
}
// Check user has write+ access (same as other WireGuard endpoints)
accessLevel := acl.Registry.GetAccessLevel(pubkey, r.RemoteAddr)
if accessLevel != "write" && accessLevel != "admin" && accessLevel != "owner" {
http.Error(w, "Write access required", http.StatusForbidden)
return
}
// Type assert to Badger database for WireGuard methods
badgerDB, ok := s.DB.(*database.D)
if !ok {
http.Error(w, "WireGuard requires Badger database backend", http.StatusInternalServerError)
return
}
// Check subnet pool is available
if s.subnetPool == nil {
http.Error(w, "WireGuard subnet pool not initialized", http.StatusInternalServerError)
return
}
// Get this user's revoked keys only
revokedKeys, err := badgerDB.GetRevokedKeys(pubkey)
if chk.E(err) {
log.E.F("failed to get revoked keys: %v", err)
http.Error(w, "Failed to get revoked keys", http.StatusInternalServerError)
return
}
// Get this user's access logs only
accessLogs, err := badgerDB.GetAccessLogs(pubkey)
if chk.E(err) {
log.E.F("failed to get access logs: %v", err)
http.Error(w, "Failed to get access logs", http.StatusInternalServerError)
return
}
// Convert to response format
var revokedResp []RevokedKeyResponse
for _, key := range revokedKeys {
subnet := s.subnetPool.SubnetForSequence(key.Sequence)
revokedResp = append(revokedResp, RevokedKeyResponse{
NostrPubkey: hex.Enc(key.NostrPubkey),
WGPublicKey: hex.Enc(key.WGPublicKey),
Sequence: key.Sequence,
ClientIP: subnet.ClientIP.String(),
ServerIP: subnet.ServerIP.String(),
CreatedAt: key.CreatedAt,
RevokedAt: key.RevokedAt,
AccessCount: key.AccessCount,
LastAccessAt: key.LastAccessAt,
})
}
var accessResp []AccessLogResponse
for _, logEntry := range accessLogs {
subnet := s.subnetPool.SubnetForSequence(logEntry.Sequence)
accessResp = append(accessResp, AccessLogResponse{
NostrPubkey: hex.Enc(logEntry.NostrPubkey),
WGPublicKey: hex.Enc(logEntry.WGPublicKey),
Sequence: logEntry.Sequence,
ClientIP: subnet.ClientIP.String(),
Timestamp: logEntry.Timestamp,
RemoteAddr: logEntry.RemoteAddr,
})
}
resp := map[string]interface{}{
"revoked_keys": revokedResp,
"access_logs": accessResp,
}
w.Header().Set("Content-Type", "application/json")
json.NewEncoder(w).Encode(resp)
}
// deriveWGPublicKey derives a Curve25519 public key from a private key.
func deriveWGPublicKey(privateKey []byte) ([]byte, error) {
if len(privateKey) != 32 {
return nil, fmt.Errorf("invalid private key length: %d", len(privateKey))
}
// Use wireguard package
return derivePublicKey(privateKey)
}
// deriveNostrPublicKey derives a secp256k1 public key from a secret key.
func deriveNostrPublicKey(secretKey []byte) ([]byte, error) {
if len(secretKey) != 32 {
return nil, fmt.Errorf("invalid secret key length: %d", len(secretKey))
}
// Use nostr library's key derivation
pk, err := deriveSecp256k1PublicKey(secretKey)
if err != nil {
return nil, err
}
return pk, nil
}
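The derivePublicKey helper referenced above is elided here; as a sketch, a standard Curve25519 derivation (the same operation WireGuard performs) would be:

    import "golang.org/x/crypto/curve25519"

    func derivePublicKeySketch(privateKey []byte) ([]byte, error) {
        // X25519 clamps the scalar per RFC 7748 and multiplies by the base point.
        return curve25519.X25519(privateKey, curve25519.Basepoint)
    }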

View File

@@ -0,0 +1,475 @@
package app
import (
"context"
"os"
"path/filepath"
"sync"
"testing"
"time"
"github.com/adrg/xdg"
"git.mleku.dev/mleku/nostr/encoders/event"
"git.mleku.dev/mleku/nostr/encoders/hex"
"git.mleku.dev/mleku/nostr/encoders/kind"
"git.mleku.dev/mleku/nostr/encoders/tag"
"git.mleku.dev/mleku/nostr/interfaces/signer/p8k"
"next.orly.dev/app/config"
"next.orly.dev/pkg/acl"
"next.orly.dev/pkg/database"
"next.orly.dev/pkg/policy"
"next.orly.dev/pkg/protocol/publish"
)
// setupPolicyTestListener creates a test listener with policy system enabled
func setupPolicyTestListener(t *testing.T, policyAdminHex string) (*Listener, *database.D, func()) {
tempDir, err := os.MkdirTemp("", "policy_handler_test_*")
if err != nil {
t.Fatalf("failed to create temp dir: %v", err)
}
// Use a unique app name per test to avoid conflicts
appName := "test-policy-" + filepath.Base(tempDir)
// Create the XDG config directory and default policy file BEFORE creating the policy manager
configDir := filepath.Join(xdg.ConfigHome, appName)
if err := os.MkdirAll(configDir, 0755); err != nil {
os.RemoveAll(tempDir)
t.Fatalf("failed to create config dir: %v", err)
}
// Create initial policy file with admin if provided
var initialPolicy []byte
if policyAdminHex != "" {
initialPolicy = []byte(`{
"default_policy": "allow",
"policy_admins": ["` + policyAdminHex + `"],
"policy_follow_whitelist_enabled": true
}`)
} else {
initialPolicy = []byte(`{"default_policy": "allow"}`)
}
policyPath := filepath.Join(configDir, "policy.json")
if err := os.WriteFile(policyPath, initialPolicy, 0644); err != nil {
os.RemoveAll(tempDir)
os.RemoveAll(configDir)
t.Fatalf("failed to write policy file: %v", err)
}
ctx, cancel := context.WithCancel(context.Background())
db, err := database.New(ctx, cancel, tempDir, "info")
if err != nil {
os.RemoveAll(tempDir)
os.RemoveAll(configDir)
t.Fatalf("failed to open database: %v", err)
}
cfg := &config.C{
PolicyEnabled: true,
RelayURL: "wss://test.relay",
Listen: "localhost",
Port: 3334,
ACLMode: "none",
AppName: appName,
}
// Create policy manager - now config file exists at XDG path
policyManager := policy.NewWithManager(ctx, cfg.AppName, cfg.PolicyEnabled, "")
server := &Server{
Ctx: ctx,
Config: cfg,
DB: db,
publishers: publish.New(NewPublisher(ctx)),
policyManager: policyManager,
cfg: cfg,
db: db,
messagePauseMutex: sync.RWMutex{},
}
// Configure ACL registry
acl.Registry.SetMode(cfg.ACLMode)
if err = acl.Registry.Configure(cfg, db, ctx); err != nil {
db.Close()
os.RemoveAll(tempDir)
os.RemoveAll(configDir)
t.Fatalf("failed to configure ACL: %v", err)
}
listener := &Listener{
Server: server,
ctx: ctx,
writeChan: make(chan publish.WriteRequest, 100),
writeDone: make(chan struct{}),
messageQueue: make(chan messageRequest, 100),
processingDone: make(chan struct{}),
subscriptions: make(map[string]context.CancelFunc),
}
// Start write worker and message processor
go listener.writeWorker()
go listener.messageProcessor()
cleanup := func() {
close(listener.writeChan)
<-listener.writeDone
close(listener.messageQueue)
<-listener.processingDone
db.Close()
os.RemoveAll(tempDir)
os.RemoveAll(configDir)
}
return listener, db, cleanup
}
// createPolicyConfigEvent creates a kind 12345 policy config event
func createPolicyConfigEvent(t *testing.T, signer *p8k.Signer, policyJSON string) *event.E {
ev := event.New()
ev.CreatedAt = time.Now().Unix()
ev.Kind = kind.PolicyConfig.K
ev.Content = []byte(policyJSON)
ev.Tags = tag.NewS()
if err := ev.Sign(signer); err != nil {
t.Fatalf("Failed to sign event: %v", err)
}
return ev
}
// TestHandlePolicyConfigUpdate_ValidAdmin tests policy update from valid admin
// Policy admins can extend rules but cannot modify protected fields (owners, policy_admins)
func TestHandlePolicyConfigUpdate_ValidAdmin(t *testing.T) {
// Create admin signer
adminSigner := p8k.MustNew()
if err := adminSigner.Generate(); err != nil {
t.Fatalf("Failed to generate admin keypair: %v", err)
}
adminHex := hex.Enc(adminSigner.Pub())
listener, _, cleanup := setupPolicyTestListener(t, adminHex)
defer cleanup()
// Create valid policy update event that ONLY extends, doesn't modify protected fields
// Note: policy_admins must stay the same (policy admins cannot change this field)
newPolicyJSON := `{
"default_policy": "allow",
"policy_admins": ["` + adminHex + `"],
"kind": {"whitelist": [1, 3, 7]}
}`
ev := createPolicyConfigEvent(t, adminSigner, newPolicyJSON)
// Handle the event
err := listener.HandlePolicyConfigUpdate(ev)
if err != nil {
t.Errorf("Expected success but got error: %v", err)
}
// Verify policy was updated (kind whitelist was extended)
// Note: default_policy should still be "allow" from original
if listener.policyManager.DefaultPolicy != "allow" {
t.Errorf("Policy was not updated correctly, default_policy = %q, expected 'allow'",
listener.policyManager.DefaultPolicy)
}
}
// TestHandlePolicyConfigUpdate_NonAdmin tests policy update rejection from non-admin
func TestHandlePolicyConfigUpdate_NonAdmin(t *testing.T) {
// Create admin signer
adminSigner := p8k.MustNew()
if err := adminSigner.Generate(); err != nil {
t.Fatalf("Failed to generate admin keypair: %v", err)
}
adminHex := hex.Enc(adminSigner.Pub())
// Create non-admin signer
nonAdminSigner := p8k.MustNew()
if err := nonAdminSigner.Generate(); err != nil {
t.Fatalf("Failed to generate non-admin keypair: %v", err)
}
listener, _, cleanup := setupPolicyTestListener(t, adminHex)
defer cleanup()
// Create policy update event from non-admin
newPolicyJSON := `{"default_policy": "deny"}`
ev := createPolicyConfigEvent(t, nonAdminSigner, newPolicyJSON)
// Handle the event - should be rejected
err := listener.HandlePolicyConfigUpdate(ev)
if err == nil {
t.Error("Expected error for non-admin update but got none")
}
// Verify policy was NOT updated
if listener.policyManager.DefaultPolicy != "allow" {
t.Error("Policy should not have been updated by non-admin")
}
}
// TestHandlePolicyConfigUpdate_InvalidJSON tests rejection of invalid JSON
func TestHandlePolicyConfigUpdate_InvalidJSON(t *testing.T) {
adminSigner := p8k.MustNew()
if err := adminSigner.Generate(); err != nil {
t.Fatalf("Failed to generate admin keypair: %v", err)
}
adminHex := hex.Enc(adminSigner.Pub())
listener, _, cleanup := setupPolicyTestListener(t, adminHex)
defer cleanup()
// Create event with invalid JSON
ev := createPolicyConfigEvent(t, adminSigner, `{"invalid json`)
err := listener.HandlePolicyConfigUpdate(ev)
if err == nil {
t.Error("Expected error for invalid JSON but got none")
}
// Policy should remain unchanged
if listener.policyManager.DefaultPolicy != "allow" {
t.Error("Policy should not have been updated with invalid JSON")
}
}
// TestHandlePolicyConfigUpdate_InvalidPubkey tests rejection of invalid admin pubkeys
func TestHandlePolicyConfigUpdate_InvalidPubkey(t *testing.T) {
adminSigner := p8k.MustNew()
if err := adminSigner.Generate(); err != nil {
t.Fatalf("Failed to generate admin keypair: %v", err)
}
adminHex := hex.Enc(adminSigner.Pub())
listener, _, cleanup := setupPolicyTestListener(t, adminHex)
defer cleanup()
// Try to update with invalid admin pubkey
invalidPolicyJSON := `{
"default_policy": "deny",
"policy_admins": ["not-a-valid-pubkey"]
}`
ev := createPolicyConfigEvent(t, adminSigner, invalidPolicyJSON)
err := listener.HandlePolicyConfigUpdate(ev)
if err == nil {
t.Error("Expected error for invalid admin pubkey but got none")
}
// Policy should remain unchanged
if listener.policyManager.DefaultPolicy != "allow" {
t.Error("Policy should not have been updated with invalid admin pubkey")
}
}
// TestHandlePolicyConfigUpdate_PolicyAdminCannotModifyProtectedFields tests that policy admins
// cannot modify the owners or policy_admins fields (these are protected, owner-only fields)
func TestHandlePolicyConfigUpdate_PolicyAdminCannotModifyProtectedFields(t *testing.T) {
adminSigner := p8k.MustNew()
if err := adminSigner.Generate(); err != nil {
t.Fatalf("Failed to generate admin keypair: %v", err)
}
adminHex := hex.Enc(adminSigner.Pub())
// Create second admin
admin2Hex := "fedcba9876543210fedcba9876543210fedcba9876543210fedcba9876543210"
listener, _, cleanup := setupPolicyTestListener(t, adminHex)
defer cleanup()
// Try to add second admin (policy_admins is a protected field)
newPolicyJSON := `{
"default_policy": "allow",
"policy_admins": ["` + adminHex + `", "` + admin2Hex + `"]
}`
ev := createPolicyConfigEvent(t, adminSigner, newPolicyJSON)
// This should FAIL because policy admins cannot modify the policy_admins field
err := listener.HandlePolicyConfigUpdate(ev)
if err == nil {
t.Error("Expected error when policy admin tries to modify policy_admins (protected field)")
}
// Second admin should NOT be in the list since update was rejected
admin2Bin, _ := hex.Dec(admin2Hex)
if listener.policyManager.IsPolicyAdmin(admin2Bin) {
t.Error("Second admin should NOT have been added - policy_admins is protected")
}
}
// TestHandlePolicyAdminFollowListUpdate tests follow list update from admin
func TestHandlePolicyAdminFollowListUpdate(t *testing.T) {
adminSigner := p8k.MustNew()
if err := adminSigner.Generate(); err != nil {
t.Fatalf("Failed to generate admin keypair: %v", err)
}
adminHex := hex.Enc(adminSigner.Pub())
listener, db, cleanup := setupPolicyTestListener(t, adminHex)
defer cleanup()
// Create a kind 3 follow list event from admin
ev := event.New()
ev.CreatedAt = time.Now().Unix()
ev.Kind = kind.FollowList.K
ev.Content = []byte("")
ev.Tags = tag.NewS()
// Add some follows
follow1Hex := "1111111111111111111111111111111111111111111111111111111111111111"
follow2Hex := "2222222222222222222222222222222222222222222222222222222222222222"
ev.Tags.Append(tag.NewFromAny("p", follow1Hex))
ev.Tags.Append(tag.NewFromAny("p", follow2Hex))
if err := ev.Sign(adminSigner); err != nil {
t.Fatalf("Failed to sign event: %v", err)
}
// Save the event to database first
if _, err := db.SaveEvent(listener.ctx, ev); err != nil {
t.Fatalf("Failed to save follow list event: %v", err)
}
// Handle the follow list update
err := listener.HandlePolicyAdminFollowListUpdate(ev)
if err != nil {
t.Errorf("Expected success but got error: %v", err)
}
// Verify follows were added
follow1Bin, _ := hex.Dec(follow1Hex)
follow2Bin, _ := hex.Dec(follow2Hex)
if !listener.policyManager.IsPolicyFollow(follow1Bin) {
t.Error("Follow 1 should have been added to policy follows")
}
if !listener.policyManager.IsPolicyFollow(follow2Bin) {
t.Error("Follow 2 should have been added to policy follows")
}
}
// TestIsPolicyAdminFollowListEvent tests detection of admin follow list events
func TestIsPolicyAdminFollowListEvent(t *testing.T) {
adminSigner := p8k.MustNew()
if err := adminSigner.Generate(); err != nil {
t.Fatalf("Failed to generate admin keypair: %v", err)
}
adminHex := hex.Enc(adminSigner.Pub())
nonAdminSigner := p8k.MustNew()
if err := nonAdminSigner.Generate(); err != nil {
t.Fatalf("Failed to generate non-admin keypair: %v", err)
}
listener, _, cleanup := setupPolicyTestListener(t, adminHex)
defer cleanup()
// Test admin's kind 3 event
adminFollowEv := event.New()
adminFollowEv.Kind = kind.FollowList.K
adminFollowEv.Tags = tag.NewS()
if err := adminFollowEv.Sign(adminSigner); err != nil {
t.Fatalf("Failed to sign event: %v", err)
}
if !listener.IsPolicyAdminFollowListEvent(adminFollowEv) {
t.Error("Should detect admin's follow list event")
}
// Test non-admin's kind 3 event
nonAdminFollowEv := event.New()
nonAdminFollowEv.Kind = kind.FollowList.K
nonAdminFollowEv.Tags = tag.NewS()
if err := nonAdminFollowEv.Sign(nonAdminSigner); err != nil {
t.Fatalf("Failed to sign event: %v", err)
}
if listener.IsPolicyAdminFollowListEvent(nonAdminFollowEv) {
t.Error("Should not detect non-admin's follow list event")
}
// Test admin's non-kind-3 event
adminOtherEv := event.New()
adminOtherEv.Kind = 1 // Kind 1, not follow list
adminOtherEv.Tags = tag.NewS()
if err := adminOtherEv.Sign(adminSigner); err != nil {
t.Fatalf("Failed to sign event: %v", err)
}
if listener.IsPolicyAdminFollowListEvent(adminOtherEv) {
t.Error("Should not detect admin's non-follow-list event")
}
}
// TestIsPolicyConfigEvent tests detection of policy config events
func TestIsPolicyConfigEvent(t *testing.T) {
signer := p8k.MustNew()
if err := signer.Generate(); err != nil {
t.Fatalf("Failed to generate keypair: %v", err)
}
// Kind 12345 event
policyEv := event.New()
policyEv.Kind = kind.PolicyConfig.K
policyEv.Tags = tag.NewS()
if err := policyEv.Sign(signer); err != nil {
t.Fatalf("Failed to sign event: %v", err)
}
if !IsPolicyConfigEvent(policyEv) {
t.Error("Should detect kind 12345 as policy config event")
}
// Non-policy event
otherEv := event.New()
otherEv.Kind = 1
otherEv.Tags = tag.NewS()
if err := otherEv.Sign(signer); err != nil {
t.Fatalf("Failed to sign event: %v", err)
}
if IsPolicyConfigEvent(otherEv) {
t.Error("Should not detect kind 1 as policy config event")
}
}
// TestMessageProcessingPauseDuringPolicyUpdate tests that message processing is paused
func TestMessageProcessingPauseDuringPolicyUpdate(t *testing.T) {
adminSigner := p8k.MustNew()
if err := adminSigner.Generate(); err != nil {
t.Fatalf("Failed to generate admin keypair: %v", err)
}
adminHex := hex.Enc(adminSigner.Pub())
listener, _, cleanup := setupPolicyTestListener(t, adminHex)
defer cleanup()
// We can't easily mock the mutex, but we can verify the policy update succeeds
// which implies the pause/resume cycle completed
// Note: policy_admins must stay the same (protected field)
newPolicyJSON := `{
"default_policy": "allow",
"policy_admins": ["` + adminHex + `"],
"kind": {"whitelist": [1, 3, 5, 7]}
}`
ev := createPolicyConfigEvent(t, adminSigner, newPolicyJSON)
err := listener.HandlePolicyConfigUpdate(ev)
if err != nil {
t.Errorf("Policy update failed: %v", err)
}
// If we got here without deadlock, the pause/resume cycle worked
// Verify policy was actually updated (kind whitelist was extended)
if listener.policyManager.DefaultPolicy != "allow" {
t.Error("Policy should have been updated")
}
}

View File

@@ -1,6 +1,7 @@
package app
import (
"bytes"
"context"
"net/http"
"strings"
@@ -12,9 +13,10 @@ import (
"lol.mleku.dev/errorf"
"lol.mleku.dev/log"
"next.orly.dev/pkg/acl"
"next.orly.dev/pkg/cashu/token"
"next.orly.dev/pkg/database"
"next.orly.dev/pkg/encoders/event"
"next.orly.dev/pkg/encoders/filter"
"git.mleku.dev/mleku/nostr/encoders/event"
"git.mleku.dev/mleku/nostr/encoders/filter"
"next.orly.dev/pkg/protocol/publish"
"next.orly.dev/pkg/utils"
atomicutils "next.orly.dev/pkg/utils/atomic"
@@ -26,9 +28,11 @@ type Listener struct {
ctx context.Context
cancel context.CancelFunc // Cancel function for this listener's context
remote string
connectionID string // Unique identifier for this connection (for access tracking)
req *http.Request
challenge atomicutils.Bytes
authedPubkey atomicutils.Bytes
cashuToken *token.Token // Verified Cashu access token for this connection (nil if no token)
startTime time.Time
isBlacklisted bool // Marker to identify blacklisted IPs
blacklistTimeout time.Time // When to timeout blacklisted connections
@@ -38,6 +42,8 @@ type Listener struct {
messageQueue chan messageRequest // Buffered channel for message processing
processingDone chan struct{} // Closed when message processor exits
handlerWg sync.WaitGroup // Tracks spawned message handler goroutines
handlerSem chan struct{} // Limits concurrent message handlers per connection
authProcessing sync.RWMutex // Ensures AUTH completes before other messages check authentication
// Flow control counters (atomic for concurrent access)
droppedMessages atomic.Int64 // Messages dropped due to full queue
// Diagnostics: per-connection counters
@@ -107,6 +113,29 @@ func (l *Listener) Write(p []byte) (n int, err error) {
}
}
// SendEvent sends an event to the client. Implements archive.EventDeliveryChannel.
func (l *Listener) SendEvent(ev *event.E) error {
if ev == nil {
return nil
}
// Serialize the event as an EVENT envelope
data := ev.Serialize()
// Use Write to send
_, err := l.Write(data)
return err
}
// IsConnected returns whether the client connection is still active.
// Implements archive.EventDeliveryChannel.
func (l *Listener) IsConnected() bool {
select {
case <-l.ctx.Done():
return false
default:
return l.conn != nil
}
}
// WriteControl sends a control message through the write channel
func (l *Listener) WriteControl(messageType int, data []byte, deadline time.Time) (err error) {
// Defensive: recover from any panic when sending to closed channel
@@ -161,6 +190,12 @@ func (l *Listener) writeWorker() {
return
}
// Skip writes if no connection (unit tests)
if l.conn == nil {
log.T.F("ws->%s skipping write (no connection)", l.remote)
continue
}
// Handle the write request
var err error
if req.IsPing {
@@ -212,14 +247,43 @@ func (l *Listener) messageProcessor() {
return
}
// Process the message in a separate goroutine to avoid blocking
// This allows multiple messages to be processed concurrently (like khatru does)
// Track the goroutine so we can wait for it during cleanup
l.handlerWg.Add(1)
go func(data []byte, remote string) {
defer l.handlerWg.Done()
l.HandleMessage(data, remote)
}(req.data, req.remote)
// Lock immediately to ensure AUTH is processed before subsequent messages
// are dequeued. This prevents race conditions where EVENT checks authentication
// before AUTH completes.
l.authProcessing.Lock()
// Check if this is an AUTH message by looking for the ["AUTH" envelope prefix
isAuthMessage := len(req.data) > 7 && bytes.HasPrefix(req.data, []byte(`["AUTH"`))
if isAuthMessage {
// Process AUTH message synchronously while holding lock
// This blocks the messageProcessor from dequeuing the next message
// until authentication is complete and authedPubkey is set
log.D.F("ws->%s processing AUTH synchronously with lock", req.remote)
l.HandleMessage(req.data, req.remote)
// Unlock after AUTH completes so subsequent messages see updated authedPubkey
l.authProcessing.Unlock()
} else {
// Not AUTH - unlock immediately and process concurrently
// The next message can now be dequeued (possibly another non-AUTH to process concurrently)
l.authProcessing.Unlock()
// Acquire semaphore to limit concurrent handlers (blocking with context awareness)
select {
case l.handlerSem <- struct{}{}:
// Semaphore acquired
case <-l.ctx.Done():
return
}
l.handlerWg.Add(1)
go func(data []byte, remote string) {
defer func() {
<-l.handlerSem // Release semaphore
l.handlerWg.Done()
}()
l.HandleMessage(data, remote)
}(req.data, req.remote)
}
}
}
}
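To make the ordering rule concrete, here is a small self-contained sketch of the frame classification above; isAuthFrame reuses the exact prefix test from the processor, while the surrounding main function is illustrative only:

package main

import (
	"bytes"
	"fmt"
)

// Illustrative only: the same prefix test the message processor uses to
// decide whether a frame must be handled synchronously under the
// authProcessing lock.
func isAuthFrame(data []byte) bool {
	return len(data) > 7 && bytes.HasPrefix(data, []byte(`["AUTH"`))
}

func main() {
	frames := [][]byte{
		[]byte(`["AUTH","challenge-response"]`), // handled synchronously
		[]byte(`["EVENT",{"kind":1}]`),          // handled concurrently
	}
	for _, f := range frames {
		fmt.Printf("%-34s auth=%v\n", f, isAuthFrame(f))
	}
}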
@@ -237,6 +301,22 @@ func (l *Listener) getManagedACL() *database.ManagedACL {
return nil
}
// getFollowsThrottleDelay returns the progressive throttle delay for follows ACL mode.
// Returns 0 if not in follows mode, throttle is disabled, or user is exempt.
func (l *Listener) getFollowsThrottleDelay(ev *event.E) time.Duration {
// Only applies to follows ACL mode
if acl.Registry.Active.Load() != "follows" {
return 0
}
// Find the Follows ACL instance and get the throttle delay
for _, aclInstance := range acl.Registry.ACL {
if follows, ok := aclInstance.(*acl.Follows); ok {
return follows.GetThrottleDelay(ev.Pubkey, l.remote)
}
}
return 0
}
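A hedged sketch of how a caller might honor the returned delay; the placement inside an EVENT handler is an assumption, and waitThrottle is a hypothetical helper, not part of this diff:

package main

import (
	"context"
	"fmt"
	"time"
)

// Hypothetical call site for getFollowsThrottleDelay: wait out the
// returned delay, but abort if the connection context closes first.
func waitThrottle(ctx context.Context, delay time.Duration) bool {
	if delay <= 0 {
		return true
	}
	select {
	case <-time.After(delay):
		return true // throttle window elapsed; continue processing
	case <-ctx.Done():
		return false // connection closed while waiting
	}
}

func main() {
	ok := waitThrottle(context.Background(), 50*time.Millisecond)
	fmt.Println("proceed:", ok)
}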
// QueryEvents queries events using the database QueryEvents method
func (l *Listener) QueryEvents(ctx context.Context, f *filter.F) (event.S, error) {
return l.DB.QueryEvents(ctx, f)

View File

@@ -6,6 +6,7 @@ import (
"net/http"
"os"
"path/filepath"
"strings"
"sync"
"time"
@@ -14,18 +15,34 @@ import (
"lol.mleku.dev/log"
"next.orly.dev/app/config"
"next.orly.dev/pkg/acl"
"next.orly.dev/pkg/crypto/keys"
"git.mleku.dev/mleku/nostr/crypto/keys"
"next.orly.dev/pkg/database"
"next.orly.dev/pkg/encoders/bech32encoding"
"git.mleku.dev/mleku/nostr/encoders/bech32encoding"
"git.mleku.dev/mleku/nostr/encoders/hex"
"next.orly.dev/pkg/neo4j"
"next.orly.dev/pkg/policy"
"next.orly.dev/pkg/protocol/graph"
"next.orly.dev/pkg/protocol/nip43"
"next.orly.dev/pkg/protocol/publish"
"next.orly.dev/pkg/bunker"
"next.orly.dev/pkg/cashu/issuer"
"next.orly.dev/pkg/cashu/keyset"
"next.orly.dev/pkg/cashu/verifier"
"next.orly.dev/pkg/protocol/nrc"
cashuiface "next.orly.dev/pkg/interfaces/cashu"
"next.orly.dev/pkg/ratelimit"
"next.orly.dev/pkg/spider"
"next.orly.dev/pkg/storage"
dsync "next.orly.dev/pkg/sync"
"next.orly.dev/pkg/wireguard"
"next.orly.dev/pkg/archive"
"next.orly.dev/pkg/tor"
"git.mleku.dev/mleku/nostr/interfaces/signer/p8k"
)
func Run(
ctx context.Context, cfg *config.C, db database.Database,
ctx context.Context, cfg *config.C, db database.Database, limiter *ratelimit.Limiter,
) (quit chan struct{}) {
quit = make(chan struct{})
var once sync.Once
@@ -63,14 +80,15 @@ func Run(
}
// start listener
l := &Server{
Ctx: ctx,
Config: cfg,
DB: db,
publishers: publish.New(NewPublisher(ctx)),
Admins: adminKeys,
Owners: ownerKeys,
cfg: cfg,
db: db,
Ctx: ctx,
Config: cfg,
DB: db,
publishers: publish.New(NewPublisher(ctx)),
Admins: adminKeys,
Owners: ownerKeys,
rateLimiter: limiter,
cfg: cfg,
db: db,
}
// Initialize NIP-43 invite manager if enabled
@@ -83,7 +101,181 @@ func Run(
l.sprocketManager = NewSprocketManager(ctx, cfg.AppName, cfg.SprocketEnabled)
// Initialize policy manager
l.policyManager = policy.NewWithManager(ctx, cfg.AppName, cfg.PolicyEnabled)
l.policyManager = policy.NewWithManager(ctx, cfg.AppName, cfg.PolicyEnabled, cfg.PolicyPath)
// Merge policy-defined owners with environment-defined owners
// This allows cloud deployments to add owners via policy.json when env vars cannot be modified
if l.policyManager != nil {
policyOwners := l.policyManager.GetOwnersBin()
if len(policyOwners) > 0 {
// Deduplicate when merging
existingOwners := make(map[string]struct{})
for _, owner := range l.Owners {
existingOwners[string(owner)] = struct{}{}
}
for _, policyOwner := range policyOwners {
if _, exists := existingOwners[string(policyOwner)]; !exists {
l.Owners = append(l.Owners, policyOwner)
existingOwners[string(policyOwner)] = struct{}{}
}
}
log.I.F("merged %d policy-defined owners with %d environment-defined owners (total: %d unique owners)",
len(policyOwners), len(ownerKeys), len(l.Owners))
}
}
// Initialize policy follows from database (load follow lists of policy admins)
// This must be done after policy manager initialization but before accepting connections
if err := l.InitializePolicyFollows(); err != nil {
log.W.F("failed to initialize policy follows: %v", err)
// Continue anyway - follows can be loaded when admins update their follow lists
}
// Cleanup any kind 3 events that lost their p tags (only for Badger backend)
if badgerDB, ok := db.(*database.D); ok {
if err := badgerDB.CleanupKind3WithoutPTags(ctx); chk.E(err) {
log.E.F("failed to cleanup kind 3 events: %v", err)
}
}
// Initialize graph query executor (Badger backend) if enabled
if badgerDB, ok := db.(*database.D); ok && cfg.GraphQueriesEnabled {
// Get relay identity key for signing graph query responses
relaySecretKey, err := badgerDB.GetOrCreateRelayIdentitySecret()
if err != nil {
log.E.F("failed to get relay identity key for graph executor: %v", err)
} else {
// Create the graph adapter and executor
graphAdapter := database.NewGraphAdapter(badgerDB)
if l.graphExecutor, err = graph.NewExecutor(graphAdapter, relaySecretKey); err != nil {
log.E.F("failed to create graph executor: %v", err)
} else {
graphEnabled, maxDepth, maxResults, rateLimitRPM := cfg.GetGraphConfigValues()
log.I.F("graph query executor initialized (Badger backend, enabled=%v, max_depth=%d, max_results=%d, rate_limit=%d/min)",
graphEnabled, maxDepth, maxResults, rateLimitRPM)
}
}
}
// Initialize graph query executor (Neo4j backend) if enabled
if neo4jDB, ok := db.(*neo4j.N); ok && cfg.GraphQueriesEnabled {
// Get relay identity key for signing graph query responses
relaySecretKey, err := neo4jDB.GetOrCreateRelayIdentitySecret()
if err != nil {
log.E.F("failed to get relay identity key for graph executor: %v", err)
} else {
// Create the graph adapter and executor
graphAdapter := neo4j.NewGraphAdapter(neo4jDB)
if l.graphExecutor, err = graph.NewExecutor(graphAdapter, relaySecretKey); err != nil {
log.E.F("failed to create graph executor: %v", err)
} else {
graphEnabled, maxDepth, maxResults, rateLimitRPM := cfg.GetGraphConfigValues()
log.I.F("graph query executor initialized (Neo4j backend, enabled=%v, max_depth=%d, max_results=%d, rate_limit=%d/min)",
graphEnabled, maxDepth, maxResults, rateLimitRPM)
}
}
}
// Initialize Cashu access token system when ACL is active
if cfg.ACLMode != "none" {
// Create keyset manager with file-based store (keysets persist across restarts)
keysetPath := filepath.Join(cfg.DataDir, "cashu-keysets.json")
keysetStore, err := keyset.NewFileStore(keysetPath)
if err != nil {
log.E.F("failed to create Cashu keyset store at %s: %v", keysetPath, err)
} else {
keysetManager := keyset.NewManager(keysetStore, keyset.DefaultActiveWindow, keyset.DefaultVerifyWindow)
// Initialize keyset manager (loads existing keysets or creates new one)
if err := keysetManager.Init(); err != nil {
log.E.F("failed to initialize Cashu keyset manager: %v", err)
} else {
// Create issuer with permissive checker (ACL handles authorization)
issuerCfg := issuer.DefaultConfig()
l.CashuIssuer = issuer.New(keysetManager, cashuiface.AllowAllChecker{}, issuerCfg)
// Create verifier for validating tokens
l.CashuVerifier = verifier.New(keysetManager, cashuiface.AllowAllChecker{}, verifier.DefaultConfig())
log.I.F("Cashu access token system enabled (ACL mode: %s, keysets: %s)", cfg.ACLMode, keysetPath)
}
}
}
// Initialize NRC (Nostr Relay Connect) bridge if enabled
nrcEnabled, nrcRendezvousURL, nrcAuthorizedKeys, nrcUseCashu, nrcSessionTimeout := cfg.GetNRCConfigValues()
if nrcEnabled && nrcRendezvousURL != "" {
// Get relay identity for signing NRC responses
relaySecretKey, err := db.GetOrCreateRelayIdentitySecret()
if err != nil {
log.E.F("failed to get relay identity for NRC bridge: %v", err)
} else {
// Create signer from secret key
relaySigner, sigErr := p8k.New()
if sigErr != nil {
log.E.F("failed to create signer for NRC bridge: %v", sigErr)
} else if sigErr = relaySigner.InitSec(relaySecretKey); sigErr != nil {
log.E.F("failed to init signer for NRC bridge: %v", sigErr)
} else {
// Parse authorized secrets (format: secret:name,secret:name,...)
authorizedSecrets := make(map[string]string)
for _, entry := range nrcAuthorizedKeys {
parts := strings.SplitN(entry, ":", 2)
if len(parts) >= 1 {
secretHex := parts[0]
name := ""
if len(parts) == 2 {
name = parts[1]
}
// Derive pubkey from secret
secretBytes, decErr := hex.Dec(secretHex)
if decErr != nil || len(secretBytes) != 32 {
// Avoid slicing a possibly short string when logging the invalid key
log.W.F("NRC: skipping invalid secret key (%d hex chars)", len(secretHex))
continue
}
derivedSigner, signerErr := p8k.New()
if signerErr != nil {
log.W.F("NRC: failed to create signer: %v", signerErr)
continue
}
if signerErr = derivedSigner.InitSec(secretBytes); signerErr != nil {
log.W.F("NRC: failed to init signer: %v", signerErr)
continue
}
derivedPubkeyHex := string(hex.Enc(derivedSigner.Pub()))
authorizedSecrets[derivedPubkeyHex] = name
}
}
// Construct local relay URL
localRelayURL := fmt.Sprintf("ws://localhost:%d", cfg.Port)
// Create bridge config
bridgeConfig := &nrc.BridgeConfig{
RendezvousURL: nrcRendezvousURL,
LocalRelayURL: localRelayURL,
Signer: relaySigner,
AuthorizedSecrets: authorizedSecrets,
SessionTimeout: nrcSessionTimeout,
}
// Add Cashu verifier if enabled
if nrcUseCashu && l.CashuVerifier != nil {
bridgeConfig.CashuVerifier = l.CashuVerifier
}
// Create and start the bridge
l.nrcBridge = nrc.NewBridge(bridgeConfig)
if err := l.nrcBridge.Start(); err != nil {
log.E.F("failed to start NRC bridge: %v", err)
l.nrcBridge = nil
} else {
log.I.F("NRC bridge started (rendezvous: %s, authorized: %d, cashu: %v)",
nrcRendezvousURL, len(authorizedSecrets), nrcUseCashu && l.CashuVerifier != nil)
}
}
}
}
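The secret:name format parsed above can be illustrated with a standalone sketch; the entries below are made-up placeholders (real values are 64-char hex secrets), and only the SplitN behavior mirrors the bridge code:

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Hypothetical config entries: comma-separated secret:name pairs,
	// where the name after the colon is optional.
	entries := []string{"a1b2c3:laptop", "d4e5f6"}
	for _, entry := range entries {
		parts := strings.SplitN(entry, ":", 2)
		name := ""
		if len(parts) == 2 {
			name = parts[1]
		}
		fmt.Printf("secret=%s name=%q\n", parts[0], name)
	}
}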
// Initialize spider manager based on mode (only for Badger backend)
if badgerDB, ok := db.(*database.D); ok && cfg.SpiderMode != "none" {
@@ -141,6 +333,44 @@ func Run(
}
}
// Initialize directory spider if enabled (only for Badger backend)
if badgerDB, ok := db.(*database.D); ok && cfg.DirectorySpiderEnabled {
if l.directorySpider, err = spider.NewDirectorySpider(
ctx,
badgerDB,
l.publishers,
cfg.DirectorySpiderInterval,
cfg.DirectorySpiderMaxHops,
); chk.E(err) {
log.E.F("failed to create directory spider: %v", err)
} else {
// Set up callback to get seed pubkeys (whitelisted users)
l.directorySpider.SetSeedCallback(func() [][]byte {
var pubkeys [][]byte
// Get followed pubkeys from follows ACL if available
for _, aclInstance := range acl.Registry.ACL {
if aclInstance.Type() == "follows" {
if follows, ok := aclInstance.(*acl.Follows); ok {
pubkeys = append(pubkeys, follows.GetFollowedPubkeys()...)
}
}
}
// Fall back to admin keys if no follows ACL
if len(pubkeys) == 0 {
pubkeys = adminKeys
}
return pubkeys
})
if err = l.directorySpider.Start(); chk.E(err) {
log.E.F("failed to start directory spider: %v", err)
} else {
log.I.F("directory spider started (interval: %v, max hops: %d)",
cfg.DirectorySpiderInterval, cfg.DirectorySpiderMaxHops)
}
}
}
// Initialize relay group manager (only for Badger backend)
if badgerDB, ok := db.(*database.D); ok {
l.relayGroupMgr = dsync.NewRelayGroupManager(badgerDB, cfg.RelayGroupAdmins)
@@ -203,19 +433,125 @@ func Run(
}
}
// Initialize the user interface
l.UserInterface()
// Initialize Blossom blob storage server (only for Badger backend)
if badgerDB, ok := db.(*database.D); ok {
// MUST be done before UserInterface() which registers routes
if badgerDB, ok := db.(*database.D); ok && cfg.BlossomEnabled {
log.I.F("Badger backend detected, initializing Blossom server...")
if l.blossomServer, err = initializeBlossomServer(ctx, cfg, badgerDB); err != nil {
log.E.F("failed to initialize blossom server: %v", err)
// Continue without blossom server
} else if l.blossomServer != nil {
log.I.F("blossom blob storage server initialized")
} else {
log.W.F("blossom server initialization returned nil without error")
}
} else if !cfg.BlossomEnabled {
log.I.F("Blossom server disabled via ORLY_BLOSSOM_ENABLED=false")
} else {
log.I.F("Non-Badger backend detected (type: %T), Blossom server not available", db)
}
// Initialize WireGuard VPN and NIP-46 Bunker (only for Badger backend)
// Requires ACL mode 'follows' or 'managed' - no point for open relays
if badgerDB, ok := db.(*database.D); ok && cfg.WGEnabled && cfg.ACLMode != "none" {
if cfg.WGEndpoint == "" {
log.E.F("WireGuard enabled but ORLY_WG_ENDPOINT not set - skipping")
} else {
// Get or create the subnet pool (restores seed and allocations from DB)
subnetPool, err := badgerDB.GetOrCreateSubnetPool(cfg.WGNetwork)
if err != nil {
log.E.F("failed to create subnet pool: %v", err)
} else {
l.subnetPool = subnetPool
// Get or create WireGuard server key
wgServerKey, err := badgerDB.GetOrCreateWireGuardServerKey()
if err != nil {
log.E.F("failed to get WireGuard server key: %v", err)
} else {
// Create WireGuard server
wgConfig := &wireguard.Config{
Port: cfg.WGPort,
Endpoint: cfg.WGEndpoint,
PrivateKey: wgServerKey,
Network: cfg.WGNetwork,
ServerIP: "10.73.0.1",
}
l.wireguardServer, err = wireguard.New(wgConfig)
if err != nil {
log.E.F("failed to create WireGuard server: %v", err)
} else {
if err = l.wireguardServer.Start(); err != nil {
log.E.F("failed to start WireGuard server: %v", err)
} else {
log.I.F("WireGuard VPN server started on UDP port %d", cfg.WGPort)
// Load existing peers from database and add to server
peers, err := badgerDB.GetAllWireGuardPeers()
if err != nil {
log.W.F("failed to load existing WireGuard peers: %v", err)
} else {
for _, peer := range peers {
// Derive client IP from sequence
subnet := subnetPool.SubnetForSequence(peer.Sequence)
clientIP := subnet.ClientIP.String()
if err := l.wireguardServer.AddPeer(peer.NostrPubkey, peer.WGPublicKey, clientIP); err != nil {
log.W.F("failed to add existing peer: %v", err)
}
}
if len(peers) > 0 {
log.I.F("loaded %d existing WireGuard peers", len(peers))
}
}
// Initialize bunker if enabled
if cfg.BunkerEnabled {
// Get relay identity for signing
relaySecretKey, err := badgerDB.GetOrCreateRelayIdentitySecret()
if err != nil {
log.E.F("failed to get relay identity for bunker: %v", err)
} else {
// Create signer from secret key
relaySigner, sigErr := p8k.New()
if sigErr != nil {
log.E.F("failed to create signer for bunker: %v", sigErr)
} else if sigErr = relaySigner.InitSec(relaySecretKey); sigErr != nil {
log.E.F("failed to init signer for bunker: %v", sigErr)
} else {
relayPubkey := relaySigner.Pub()
bunkerConfig := &bunker.Config{
RelaySigner: relaySigner,
RelayPubkey: relayPubkey[:],
Netstack: l.wireguardServer.GetNetstack(),
ListenAddr: fmt.Sprintf("10.73.0.1:%d", cfg.BunkerPort),
}
l.bunkerServer = bunker.New(bunkerConfig)
if err = l.bunkerServer.Start(); err != nil {
log.E.F("failed to start bunker server: %v", err)
} else {
log.I.F("NIP-46 bunker server started on 10.73.0.1:%d (WireGuard only)", cfg.BunkerPort)
}
}
}
}
}
}
}
}
}
} else if cfg.WGEnabled && cfg.ACLMode == "none" {
log.I.F("WireGuard disabled: requires ACL mode 'follows' or 'managed' (currently: 'none')")
}
// Initialize event domain services (validation, routing, processing)
l.InitEventServices()
// Initialize the user interface (registers routes)
l.UserInterface()
// Ensure a relay identity secret key exists when subscriptions and NWC are enabled
if cfg.SubscriptionEnabled && cfg.NWCUri != "" {
if skb, e := db.GetOrCreateRelayIdentitySecret(); e != nil {
@@ -263,6 +599,74 @@ func Run(
}
}
// Initialize access tracker for storage management (only for Badger backend)
if badgerDB, ok := db.(*database.D); ok {
l.accessTracker = storage.NewAccessTracker(badgerDB, 100000) // 100k dedup cache
l.accessTracker.Start()
log.I.F("access tracker initialized")
// Initialize garbage collector if enabled
maxBytes, gcEnabled, gcIntervalSec, gcBatchSize := cfg.GetStorageConfigValues()
if gcEnabled {
gcCfg := storage.GCConfig{
MaxStorageBytes: maxBytes,
Interval: time.Duration(gcIntervalSec) * time.Second,
BatchSize: gcBatchSize,
MinAgeSec: 3600, // Minimum 1 hour before eviction
}
l.garbageCollector = storage.NewGarbageCollector(ctx, badgerDB, l.accessTracker, gcCfg)
l.garbageCollector.Start()
log.I.F("garbage collector started (interval: %ds, batch: %d)", gcIntervalSec, gcBatchSize)
}
}
// Initialize archive relay manager if enabled
archiveEnabled, archiveRelays, archiveTimeoutSec, archiveCacheTTLHrs := cfg.GetArchiveConfigValues()
if archiveEnabled && len(archiveRelays) > 0 {
archiveCfg := archive.Config{
Enabled: true,
Relays: archiveRelays,
TimeoutSec: archiveTimeoutSec,
CacheTTLHrs: archiveCacheTTLHrs,
}
l.archiveManager = archive.New(ctx, db, archiveCfg)
log.I.F("archive relay manager initialized with %d relays", len(archiveRelays))
}
// Initialize Tor hidden service if enabled (spawns tor subprocess)
torEnabled, torPort, torDataDir, torBinary, torSOCKSPort := cfg.GetTorConfigValues()
if torEnabled {
torCfg := &tor.Config{
Port: torPort,
DataDir: torDataDir,
Binary: torBinary,
SOCKSPort: torSOCKSPort,
Handler: l,
}
var err error
l.torService, err = tor.New(torCfg)
if err != nil {
log.W.F("Tor disabled: %v", err)
} else {
if err = l.torService.Start(); err != nil {
log.W.F("failed to start Tor service: %v", err)
l.torService = nil
} else {
if addr := l.torService.OnionWSAddress(); addr != "" {
log.I.F("Tor hidden service listening on port %d, address: %s", torPort, addr)
} else {
log.I.F("Tor hidden service listening on port %d (waiting for .onion address)", torPort)
}
}
}
}
// Start rate limiter if enabled
if limiter != nil && limiter.IsEnabled() {
limiter.Start()
log.I.F("adaptive rate limiter started")
}
// Wait for database to be ready before accepting requests
log.I.F("waiting for database warmup to complete...")
<-db.Ready()
@@ -354,6 +758,60 @@ func Run(
log.I.F("spider manager stopped")
}
// Stop directory spider if running
if l.directorySpider != nil {
l.directorySpider.Stop()
log.I.F("directory spider stopped")
}
// Stop rate limiter if running
if l.rateLimiter != nil && l.rateLimiter.IsEnabled() {
l.rateLimiter.Stop()
log.I.F("rate limiter stopped")
}
// Stop archive manager if running
if l.archiveManager != nil {
l.archiveManager.Stop()
log.I.F("archive manager stopped")
}
// Stop Tor service if running
if l.torService != nil {
l.torService.Stop()
log.I.F("Tor service stopped")
}
// Stop garbage collector if running
if l.garbageCollector != nil {
l.garbageCollector.Stop()
log.I.F("garbage collector stopped")
}
// Stop access tracker if running
if l.accessTracker != nil {
l.accessTracker.Stop()
log.I.F("access tracker stopped")
}
// Stop bunker server if running
if l.bunkerServer != nil {
l.bunkerServer.Stop()
log.I.F("bunker server stopped")
}
// Stop NRC bridge if running
if l.nrcBridge != nil {
l.nrcBridge.Stop()
log.I.F("NRC bridge stopped")
}
// Stop WireGuard server if running
if l.wireguardServer != nil {
l.wireguardServer.Stop()
log.I.F("WireGuard server stopped")
}
// Create shutdown context with timeout
shutdownCtx, cancelShutdown := context.WithTimeout(context.Background(), 10*time.Second)
defer cancelShutdown()

View File

@@ -5,21 +5,50 @@ import (
"encoding/json"
"net/http"
"net/http/httptest"
"next.orly.dev/pkg/interfaces/signer/p8k"
"git.mleku.dev/mleku/nostr/interfaces/signer/p8k"
"os"
"testing"
"time"
"next.orly.dev/app/config"
"next.orly.dev/pkg/crypto/keys"
"next.orly.dev/pkg/acl"
"git.mleku.dev/mleku/nostr/crypto/keys"
"next.orly.dev/pkg/database"
"next.orly.dev/pkg/encoders/event"
"next.orly.dev/pkg/encoders/tag"
"git.mleku.dev/mleku/nostr/encoders/event"
"git.mleku.dev/mleku/nostr/encoders/hex"
"git.mleku.dev/mleku/nostr/encoders/tag"
"next.orly.dev/pkg/protocol/nip43"
"next.orly.dev/pkg/protocol/publish"
"next.orly.dev/pkg/protocol/relayinfo"
"git.mleku.dev/mleku/nostr/relayinfo"
)
// newTestListener creates a properly initialized Listener for testing
func newTestListener(server *Server, ctx context.Context) *Listener {
listener := &Listener{
Server: server,
ctx: ctx,
writeChan: make(chan publish.WriteRequest, 100),
writeDone: make(chan struct{}),
messageQueue: make(chan messageRequest, 100),
processingDone: make(chan struct{}),
subscriptions: make(map[string]context.CancelFunc),
}
// Start write worker and message processor
go listener.writeWorker()
go listener.messageProcessor()
return listener
}
// closeTestListener properly closes a test listener
func closeTestListener(listener *Listener) {
close(listener.writeChan)
<-listener.writeDone
close(listener.messageQueue)
<-listener.processingDone
}
// setupE2ETest creates a full test server for end-to-end testing
func setupE2ETest(t *testing.T) (*Server, *httptest.Server, func()) {
tempDir, err := os.MkdirTemp("", "nip43_e2e_test_*")
@@ -61,16 +90,28 @@ func setupE2ETest(t *testing.T) (*Server, *httptest.Server, func()) {
}
adminPubkey := adminSigner.Pub()
// Add admin to config for ACL
cfg.Admins = []string{hex.Enc(adminPubkey)}
server := &Server{
Ctx: ctx,
Config: cfg,
D: db,
DB: db,
publishers: publish.New(NewPublisher(ctx)),
Admins: [][]byte{adminPubkey},
InviteManager: nip43.NewInviteManager(cfg.NIP43InviteExpiry),
cfg: cfg,
db: db,
}
// Configure ACL registry
acl.Registry.SetMode(cfg.ACLMode)
if err = acl.Registry.Configure(cfg, db, ctx); err != nil {
db.Close()
os.RemoveAll(tempDir)
t.Fatalf("failed to configure ACL: %v", err)
}
server.mux = http.NewServeMux()
// Set up HTTP handlers
@@ -177,6 +218,7 @@ func TestE2E_CompleteJoinFlow(t *testing.T) {
joinEv := event.New()
joinEv.Kind = nip43.KindJoinRequest
copy(joinEv.Pubkey, userPubkey)
joinEv.Tags = tag.NewS()
joinEv.Tags.Append(tag.NewFromAny("-"))
joinEv.Tags.Append(tag.NewFromAny("claim", inviteCode))
joinEv.CreatedAt = time.Now().Unix()
@@ -186,17 +228,15 @@ func TestE2E_CompleteJoinFlow(t *testing.T) {
}
// Step 3: Process join request
listener := &Listener{
Server: server,
ctx: server.Ctx,
}
listener := newTestListener(server, server.Ctx)
defer closeTestListener(listener)
err = listener.HandleNIP43JoinRequest(joinEv)
if err != nil {
t.Fatalf("failed to handle join request: %v", err)
}
// Step 4: Verify membership
isMember, err := server.D.IsNIP43Member(userPubkey)
isMember, err := server.DB.IsNIP43Member(userPubkey)
if err != nil {
t.Fatalf("failed to check membership: %v", err)
}
@@ -204,7 +244,7 @@ func TestE2E_CompleteJoinFlow(t *testing.T) {
t.Error("user was not added as member")
}
membership, err := server.D.GetNIP43Membership(userPubkey)
membership, err := server.DB.GetNIP43Membership(userPubkey)
if err != nil {
t.Fatalf("failed to get membership: %v", err)
}
@@ -227,10 +267,8 @@ func TestE2E_InviteCodeReuse(t *testing.T) {
t.Fatalf("failed to generate invite code: %v", err)
}
listener := &Listener{
Server: server,
ctx: server.Ctx,
}
listener := newTestListener(server, server.Ctx)
defer closeTestListener(listener)
// First user uses the code
user1Secret, err := keys.GenerateSecretKey()
@@ -249,6 +287,7 @@ func TestE2E_InviteCodeReuse(t *testing.T) {
joinEv1 := event.New()
joinEv1.Kind = nip43.KindJoinRequest
copy(joinEv1.Pubkey, user1Pubkey)
joinEv1.Tags = tag.NewS()
joinEv1.Tags.Append(tag.NewFromAny("-"))
joinEv1.Tags.Append(tag.NewFromAny("claim", code))
joinEv1.CreatedAt = time.Now().Unix()
@@ -263,7 +302,7 @@ func TestE2E_InviteCodeReuse(t *testing.T) {
}
// Verify first user is member
isMember, err := server.D.IsNIP43Member(user1Pubkey)
isMember, err := server.DB.IsNIP43Member(user1Pubkey)
if err != nil {
t.Fatalf("failed to check user1 membership: %v", err)
}
@@ -288,6 +327,7 @@ func TestE2E_InviteCodeReuse(t *testing.T) {
joinEv2 := event.New()
joinEv2.Kind = nip43.KindJoinRequest
copy(joinEv2.Pubkey, user2Pubkey)
joinEv2.Tags = tag.NewS()
joinEv2.Tags.Append(tag.NewFromAny("-"))
joinEv2.Tags.Append(tag.NewFromAny("claim", code))
joinEv2.CreatedAt = time.Now().Unix()
@@ -303,7 +343,7 @@ func TestE2E_InviteCodeReuse(t *testing.T) {
}
// Verify second user is NOT member
isMember, err = server.D.IsNIP43Member(user2Pubkey)
isMember, err = server.DB.IsNIP43Member(user2Pubkey)
if err != nil {
t.Fatalf("failed to check user2 membership: %v", err)
}
@@ -317,10 +357,8 @@ func TestE2E_MembershipListGeneration(t *testing.T) {
server, _, cleanup := setupE2ETest(t)
defer cleanup()
listener := &Listener{
Server: server,
ctx: server.Ctx,
}
listener := newTestListener(server, server.Ctx)
defer closeTestListener(listener)
// Add multiple members
memberCount := 5
@@ -338,7 +376,7 @@ func TestE2E_MembershipListGeneration(t *testing.T) {
members[i] = userPubkey
// Add directly to database for speed
err = server.D.AddNIP43Member(userPubkey, "code")
err = server.DB.AddNIP43Member(userPubkey, "code")
if err != nil {
t.Fatalf("failed to add member %d: %v", i, err)
}
@@ -379,17 +417,15 @@ func TestE2E_ExpiredInviteCode(t *testing.T) {
server := &Server{
Ctx: ctx,
Config: cfg,
D: db,
DB: db,
publishers: publish.New(NewPublisher(ctx)),
InviteManager: nip43.NewInviteManager(cfg.NIP43InviteExpiry),
cfg: cfg,
db: db,
}
listener := &Listener{
Server: server,
ctx: ctx,
}
listener := newTestListener(server, ctx)
defer closeTestListener(listener)
// Generate invite code
code, err := server.InviteManager.GenerateCode()
@@ -417,6 +453,7 @@ func TestE2E_ExpiredInviteCode(t *testing.T) {
joinEv := event.New()
joinEv.Kind = nip43.KindJoinRequest
copy(joinEv.Pubkey, userPubkey)
joinEv.Tags = tag.NewS()
joinEv.Tags.Append(tag.NewFromAny("-"))
joinEv.Tags.Append(tag.NewFromAny("claim", code))
joinEv.CreatedAt = time.Now().Unix()
@@ -445,10 +482,8 @@ func TestE2E_InvalidTimestampRejected(t *testing.T) {
server, _, cleanup := setupE2ETest(t)
defer cleanup()
listener := &Listener{
Server: server,
ctx: server.Ctx,
}
listener := newTestListener(server, server.Ctx)
defer closeTestListener(listener)
// Generate invite code
code, err := server.InviteManager.GenerateCode()
@@ -474,6 +509,7 @@ func TestE2E_InvalidTimestampRejected(t *testing.T) {
joinEv := event.New()
joinEv.Kind = nip43.KindJoinRequest
copy(joinEv.Pubkey, userPubkey)
joinEv.Tags = tag.NewS()
joinEv.Tags.Append(tag.NewFromAny("-"))
joinEv.Tags.Append(tag.NewFromAny("claim", code))
joinEv.CreatedAt = time.Now().Unix() - 700 // More than 10 minutes ago
@@ -489,7 +525,7 @@ func TestE2E_InvalidTimestampRejected(t *testing.T) {
}
// Verify user was NOT added
isMember, err := server.D.IsNIP43Member(userPubkey)
isMember, err := server.DB.IsNIP43Member(userPubkey)
if err != nil {
t.Fatalf("failed to check membership: %v", err)
}
@@ -523,17 +559,15 @@ func BenchmarkJoinRequestProcessing(b *testing.B) {
server := &Server{
Ctx: ctx,
Config: cfg,
D: db,
DB: db,
publishers: publish.New(NewPublisher(ctx)),
InviteManager: nip43.NewInviteManager(cfg.NIP43InviteExpiry),
cfg: cfg,
db: db,
}
listener := &Listener{
Server: server,
ctx: ctx,
}
listener := newTestListener(server, ctx)
defer closeTestListener(listener)
b.ResetTimer()
@@ -547,6 +581,7 @@ func BenchmarkJoinRequestProcessing(b *testing.B) {
joinEv := event.New()
joinEv.Kind = nip43.KindJoinRequest
copy(joinEv.Pubkey, userPubkey)
joinEv.Tags = tag.NewS()
joinEv.Tags.Append(tag.NewFromAny("-"))
joinEv.Tags.Append(tag.NewFromAny("claim", code))
joinEv.CreatedAt = time.Now().Unix()

View File

@@ -1,9 +1,9 @@
package app
import (
"next.orly.dev/pkg/encoders/envelopes/eventenvelope"
"next.orly.dev/pkg/encoders/envelopes/okenvelope"
"next.orly.dev/pkg/encoders/reason"
"git.mleku.dev/mleku/nostr/encoders/envelopes/eventenvelope"
"git.mleku.dev/mleku/nostr/encoders/envelopes/okenvelope"
"git.mleku.dev/mleku/nostr/encoders/reason"
)
// OK represents a function that processes events or operations, using provided

View File

@@ -15,14 +15,14 @@ import (
"lol.mleku.dev/log"
"next.orly.dev/app/config"
"next.orly.dev/pkg/acl"
"next.orly.dev/pkg/interfaces/signer/p8k"
"git.mleku.dev/mleku/nostr/interfaces/signer/p8k"
"next.orly.dev/pkg/database"
"next.orly.dev/pkg/encoders/bech32encoding"
"next.orly.dev/pkg/encoders/event"
"next.orly.dev/pkg/encoders/hex"
"next.orly.dev/pkg/encoders/kind"
"next.orly.dev/pkg/encoders/tag"
"next.orly.dev/pkg/encoders/timestamp"
"git.mleku.dev/mleku/nostr/encoders/bech32encoding"
"git.mleku.dev/mleku/nostr/encoders/event"
"git.mleku.dev/mleku/nostr/encoders/hex"
"git.mleku.dev/mleku/nostr/encoders/kind"
"git.mleku.dev/mleku/nostr/encoders/tag"
"git.mleku.dev/mleku/nostr/encoders/timestamp"
"next.orly.dev/pkg/protocol/nwc"
)

View File

@@ -5,10 +5,10 @@ import (
"testing"
"time"
"next.orly.dev/pkg/encoders/event"
"next.orly.dev/pkg/encoders/hex"
"next.orly.dev/pkg/encoders/kind"
"next.orly.dev/pkg/encoders/tag"
"git.mleku.dev/mleku/nostr/encoders/event"
"git.mleku.dev/mleku/nostr/encoders/hex"
"git.mleku.dev/mleku/nostr/encoders/kind"
"git.mleku.dev/mleku/nostr/encoders/tag"
)
// Test helper to create a test event
@@ -54,9 +54,18 @@ func testPrivilegedEventFiltering(events event.S, authedPubkey []byte, aclMode s
// Check p tags
pTags := ev.Tags.GetAll([]byte("p"))
for _, pTag := range pTags {
var pt []byte
var err error
if pt, err = hex.Dec(string(pTag.Value())); err != nil {
// First try binary format (optimized storage)
if pt := pTag.ValueBinary(); pt != nil {
if bytes.Equal(pt, authedPubkey) {
authorized = true
break
}
continue
}
// Fall back to ValueHex(), which normalizes both binary and hex
// storage formats to a hex string before decoding
pt, err := hex.Dec(string(pTag.ValueHex()))
if err != nil {
continue
}
if bytes.Equal(pt, authedPubkey) {
@@ -395,6 +404,82 @@ func TestPrivilegedEventEdgeCases(t *testing.T) {
}
}
// TestPrivilegedEventsWithACLNone tests that privileged events are accessible
// to anyone when ACL mode is set to "none" (open relay)
func TestPrivilegedEventsWithACLNone(t *testing.T) {
authorPubkey := []byte("author-pubkey-12345")
recipientPubkey := []byte("recipient-pubkey-67")
unauthorizedPubkey := []byte("unauthorized-pubkey")
// Create a privileged event (encrypted DM)
privilegedEvent := createTestEvent(
"event-id-1",
hex.Enc(authorPubkey),
"private message",
kind.EncryptedDirectMessage.K,
createPTag(hex.Enc(recipientPubkey)),
)
tests := []struct {
name string
authedPubkey []byte
aclMode string
accessLevel string
shouldAllow bool
description string
}{
{
name: "ACL none - unauthorized user can see privileged event",
authedPubkey: unauthorizedPubkey,
aclMode: "none",
accessLevel: "write", // default for ACL=none
shouldAllow: true,
description: "When ACL is 'none', privileged events should be visible to anyone",
},
{
name: "ACL none - unauthenticated user can see privileged event",
authedPubkey: nil,
aclMode: "none",
accessLevel: "write", // default for ACL=none
shouldAllow: true,
description: "When ACL is 'none', even unauthenticated users can see privileged events",
},
{
name: "ACL managed - unauthorized user cannot see privileged event",
authedPubkey: unauthorizedPubkey,
aclMode: "managed",
accessLevel: "write",
shouldAllow: false,
description: "When ACL is 'managed', unauthorized users cannot see privileged events",
},
{
name: "ACL follows - unauthorized user cannot see privileged event",
authedPubkey: unauthorizedPubkey,
aclMode: "follows",
accessLevel: "write",
shouldAllow: false,
description: "When ACL is 'follows', unauthorized users cannot see privileged events",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
events := event.S{privilegedEvent}
filtered := testPrivilegedEventFiltering(events, tt.authedPubkey, tt.aclMode, tt.accessLevel)
if tt.shouldAllow {
if len(filtered) != 1 {
t.Errorf("%s: Expected event to be allowed, but it was filtered out. %s", tt.name, tt.description)
}
} else {
if len(filtered) != 0 {
t.Errorf("%s: Expected event to be filtered out, but it was allowed. %s", tt.name, tt.description)
}
}
})
}
}
func TestPrivilegedEventPolicyIntegration(t *testing.T) {
// Test that the policy system also correctly handles privileged events
// This tests the policy.go implementation

View File

@@ -9,12 +9,13 @@ import (
"github.com/gorilla/websocket"
"lol.mleku.dev/log"
"next.orly.dev/pkg/acl"
"next.orly.dev/pkg/encoders/event"
"next.orly.dev/pkg/encoders/filter"
"next.orly.dev/pkg/encoders/hex"
"next.orly.dev/pkg/encoders/kind"
"git.mleku.dev/mleku/nostr/encoders/event"
"git.mleku.dev/mleku/nostr/encoders/filter"
"git.mleku.dev/mleku/nostr/encoders/hex"
"git.mleku.dev/mleku/nostr/encoders/kind"
"next.orly.dev/pkg/interfaces/publisher"
"next.orly.dev/pkg/interfaces/typer"
"next.orly.dev/pkg/policy"
"next.orly.dev/pkg/protocol/publish"
"next.orly.dev/pkg/utils"
)
@@ -183,36 +184,12 @@ func (p *P) Deliver(ev *event.E) {
// either the event pubkey or appears in any 'p' tag of the event.
// Only check authentication if AuthRequired is true (ACL is active)
if kind.IsPrivileged(ev.Kind) && d.sub.AuthRequired {
if len(d.sub.AuthedPubkey) == 0 {
// Not authenticated - cannot see privileged events
log.D.F(
"subscription delivery DENIED for privileged event %s to %s (not authenticated)",
hex.Enc(ev.ID), d.sub.remote,
)
continue
}
pk := d.sub.AuthedPubkey
allowed := false
// Direct author match
if utils.FastEqual(ev.Pubkey, pk) {
allowed = true
} else if ev.Tags != nil {
for _, pTag := range ev.Tags.GetAll([]byte("p")) {
// pTag.Value() returns []byte hex string; decode to bytes
dec, derr := hex.Dec(string(pTag.Value()))
if derr != nil {
continue
}
if utils.FastEqual(dec, pk) {
allowed = true
break
}
}
}
if !allowed {
// Use centralized IsPartyInvolved function for consistent privilege checking
if !policy.IsPartyInvolved(ev, pk) {
log.D.F(
"subscription delivery DENIED for privileged event %s to %s (auth mismatch)",
"subscription delivery DENIED for privileged event %s to %s (not authenticated or not a party involved)",
hex.Enc(ev.ID), d.sub.remote,
)
// Skip delivery for this subscriber
@@ -343,6 +320,67 @@ func (p *P) removeSubscriber(ws *websocket.Conn) {
delete(p.WriteChans, ws)
}
// HasActiveNIP46Signer checks if there's an active subscription for kind 24133
// where the given pubkey is involved (either as author filter or in #p tag filter).
// This is used to authenticate clients by proving a signer is connected for that pubkey.
func (p *P) HasActiveNIP46Signer(signerPubkey []byte) bool {
const kindNIP46 = 24133
p.Mx.RLock()
defer p.Mx.RUnlock()
for _, subs := range p.Map {
for _, sub := range subs {
if sub.S == nil {
continue
}
for _, f := range *sub.S {
if f == nil || f.Kinds == nil {
continue
}
// Check if filter is for kind 24133
hasNIP46Kind := false
for _, k := range f.Kinds.K {
if k.K == kindNIP46 {
hasNIP46Kind = true
break
}
}
if !hasNIP46Kind {
continue
}
// Check if the signer pubkey matches the #p tag filter
if f.Tags != nil {
pTag := f.Tags.GetFirst([]byte("p"))
if pTag != nil && pTag.Len() >= 2 {
for i := 1; i < pTag.Len(); i++ {
tagValue := pTag.T[i]
// Compare - handle both binary and hex formats
if len(tagValue) == 32 && len(signerPubkey) == 32 {
if utils.FastEqual(tagValue, signerPubkey) {
return true
}
} else if len(tagValue) == 64 && len(signerPubkey) == 32 {
// tagValue is hex, signerPubkey is binary
if string(tagValue) == hex.Enc(signerPubkey) {
return true
}
} else if len(tagValue) == 32 && len(signerPubkey) == 64 {
// tagValue is binary, signerPubkey is hex
if hex.Enc(tagValue) == string(signerPubkey) {
return true
}
} else if utils.FastEqual(tagValue, signerPubkey) {
return true
}
}
}
}
}
}
}
return false
}
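A hedged sketch of the intended call pattern, based only on the doc comment above; the surrounding names (publishers, clientPubkey, markAuthenticated, sendAuthChallenge) are assumptions, not part of this diff:

// Hypothetical caller: accept a pubkey as authenticated when a NIP-46
// signer for it holds an open kind-24133 subscription on this relay.
if publishers.HasActiveNIP46Signer(clientPubkey) {
	markAuthenticated(clientPubkey) // assumed helper
} else {
	sendAuthChallenge() // assumed helper: fall back to a NIP-42 AUTH challenge
}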
// canSeePrivateEvent checks if the authenticated user can see an event with a private tag
func (p *P) canSeePrivateEvent(
authedPubkey, privatePubkey []byte, remote string,

View File

@@ -19,17 +19,31 @@ import (
"next.orly.dev/pkg/acl"
"next.orly.dev/pkg/blossom"
"next.orly.dev/pkg/database"
"next.orly.dev/pkg/encoders/event"
"next.orly.dev/pkg/encoders/filter"
"next.orly.dev/pkg/encoders/hex"
"next.orly.dev/pkg/encoders/tag"
"next.orly.dev/pkg/event/authorization"
"next.orly.dev/pkg/event/processing"
"next.orly.dev/pkg/event/routing"
"next.orly.dev/pkg/event/validation"
"git.mleku.dev/mleku/nostr/encoders/event"
"git.mleku.dev/mleku/nostr/encoders/filter"
"git.mleku.dev/mleku/nostr/encoders/hex"
"git.mleku.dev/mleku/nostr/encoders/tag"
"next.orly.dev/pkg/policy"
"next.orly.dev/pkg/protocol/auth"
"next.orly.dev/pkg/protocol/httpauth"
"git.mleku.dev/mleku/nostr/protocol/auth"
"git.mleku.dev/mleku/nostr/httpauth"
"next.orly.dev/pkg/protocol/graph"
"next.orly.dev/pkg/protocol/nip43"
"next.orly.dev/pkg/protocol/publish"
"next.orly.dev/pkg/bunker"
"next.orly.dev/pkg/cashu/issuer"
"next.orly.dev/pkg/cashu/verifier"
"next.orly.dev/pkg/protocol/nrc"
"next.orly.dev/pkg/ratelimit"
"next.orly.dev/pkg/spider"
"next.orly.dev/pkg/storage"
dsync "next.orly.dev/pkg/sync"
"next.orly.dev/pkg/wireguard"
"next.orly.dev/pkg/archive"
"next.orly.dev/pkg/tor"
)
type Server struct {
@@ -48,17 +62,50 @@ type Server struct {
challengeMutex sync.RWMutex
challenges map[string][]byte
paymentProcessor *PaymentProcessor
sprocketManager *SprocketManager
policyManager *policy.P
spiderManager *spider.Spider
syncManager *dsync.Manager
relayGroupMgr *dsync.RelayGroupManager
clusterManager *dsync.ClusterManager
blossomServer *blossom.Server
InviteManager *nip43.InviteManager
cfg *config.C
db database.Database // Changed from *database.D to interface
// Message processing pause mutex for policy/follow list updates
// Use RLock() for normal message processing, Lock() for updates
messagePauseMutex sync.RWMutex
paymentProcessor *PaymentProcessor
sprocketManager *SprocketManager
policyManager *policy.P
spiderManager *spider.Spider
directorySpider *spider.DirectorySpider
syncManager *dsync.Manager
relayGroupMgr *dsync.RelayGroupManager
clusterManager *dsync.ClusterManager
blossomServer *blossom.Server
InviteManager *nip43.InviteManager
graphExecutor *graph.Executor
rateLimiter *ratelimit.Limiter
cfg *config.C
db database.Database // Changed from *database.D to interface
// Domain services for event handling
eventValidator *validation.Service
eventAuthorizer *authorization.Service
eventRouter *routing.DefaultRouter
eventProcessor *processing.Service
// WireGuard VPN and NIP-46 Bunker
wireguardServer *wireguard.Server
bunkerServer *bunker.Server
subnetPool *wireguard.SubnetPool
// Cashu access token system (NIP-XX)
CashuIssuer *issuer.Issuer
CashuVerifier *verifier.Verifier
// NRC (Nostr Relay Connect) bridge for remote relay access
nrcBridge *nrc.Bridge
// Archive relay and storage management
archiveManager *archive.Manager
accessTracker *storage.AccessTracker
garbageCollector *storage.GarbageCollector
// Tor hidden service
torService *tor.Service
}
// isIPBlacklisted checks if an IP address is blacklisted using the managed ACL system
@@ -91,12 +138,33 @@ func (s *Server) isIPBlacklisted(remote string) bool {
}
func (s *Server) ServeHTTP(w http.ResponseWriter, r *http.Request) {
// CORS headers should be handled by the reverse proxy (Caddy/nginx)
// to avoid duplicate headers. If running without a reverse proxy,
// uncomment the CORS configuration below or configure via environment variable.
// Check if this is a blossom-related path (needs CORS headers)
path := r.URL.Path
isBlossomPath := path == "/upload" || path == "/media" ||
path == "/mirror" || path == "/report" ||
strings.HasPrefix(path, "/list/") ||
strings.HasPrefix(path, "/blossom/") ||
(len(path) == 65 && path[0] == '/') // /<sha256> blob downloads
// Handle preflight OPTIONS requests
if r.Method == "OPTIONS" {
// Set CORS headers for all blossom-related requests
if isBlossomPath {
w.Header().Set("Access-Control-Allow-Origin", "*")
w.Header().Set("Access-Control-Allow-Methods", "GET, HEAD, PUT, DELETE, OPTIONS")
w.Header().Set("Access-Control-Allow-Headers", "Authorization, authorization, Content-Type, content-type, X-SHA-256, x-sha-256, X-Content-Length, x-content-length, X-Content-Type, x-content-type, Accept, accept")
w.Header().Set("Access-Control-Expose-Headers", "X-Reason, Content-Length, Content-Type, Accept-Ranges")
w.Header().Set("Access-Control-Max-Age", "86400")
// Handle preflight OPTIONS requests for blossom paths
if r.Method == "OPTIONS" {
w.WriteHeader(http.StatusOK)
return
}
} else if r.Method == "OPTIONS" {
// Handle OPTIONS for non-blossom paths
if s.mux != nil {
s.mux.ServeHTTP(w, r)
return
}
w.WriteHeader(http.StatusOK)
return
}
@@ -131,6 +199,16 @@ func (s *Server) ServeHTTP(w http.ResponseWriter, r *http.Request) {
}
func (s *Server) ServiceURL(req *http.Request) (url string) {
// Use configured RelayURL if available
if s.Config != nil && s.Config.RelayURL != "" {
relayURL := strings.TrimSuffix(s.Config.RelayURL, "/")
// Ensure it has a protocol
if !strings.HasPrefix(relayURL, "http://") && !strings.HasPrefix(relayURL, "https://") {
relayURL = "http://" + relayURL
}
return relayURL
}
proto := req.Header.Get("X-Forwarded-Proto")
if proto == "" {
if req.TLS != nil {
@@ -201,6 +279,15 @@ func (s *Server) UserInterface() {
origDirector(req)
req.Host = target.Host
}
// Suppress noisy "context canceled" errors from browser navigation
s.devProxy.ErrorHandler = func(w http.ResponseWriter, r *http.Request, err error) {
if r.Context().Err() == context.Canceled {
// Browser canceled the request - this is normal, don't log it
return
}
log.Printf("proxy error: %v", err)
http.Error(w, "Bad Gateway", http.StatusBadGateway)
}
}
}
}
@@ -212,7 +299,7 @@ func (s *Server) UserInterface() {
s.challengeMutex.Unlock()
}
// Serve favicon.ico by serving orly-favicon.png
// Serve favicon.ico by serving favicon.png
s.mux.HandleFunc("/favicon.ico", s.handleFavicon)
// Serve the main login interface (and static assets) or proxy in dev mode
@@ -243,6 +330,10 @@ func (s *Server) UserInterface() {
s.mux.HandleFunc("/api/nip86", s.handleNIP86Management)
// ACL mode endpoint
s.mux.HandleFunc("/api/acl-mode", s.handleACLMode)
// Log viewer endpoints (owner only)
s.mux.HandleFunc("/api/logs", s.handleGetLogs)
s.mux.HandleFunc("/api/logs/clear", s.handleClearLogs)
s.mux.HandleFunc("/api/logs/level", s.handleLogLevel)
// Sync endpoints for distributed synchronization
if s.syncManager != nil {
@@ -253,8 +344,17 @@ func (s *Server) UserInterface() {
// Blossom blob storage API endpoint
if s.blossomServer != nil {
// Primary routes under /blossom/
s.mux.HandleFunc("/blossom/", s.blossomHandler)
log.Printf("Blossom blob storage API enabled at /blossom")
// Root-level routes for clients that expect blossom at root (like Jumble)
s.mux.HandleFunc("/upload", s.blossomRootHandler)
s.mux.HandleFunc("/list/", s.blossomRootHandler)
s.mux.HandleFunc("/media", s.blossomRootHandler)
s.mux.HandleFunc("/mirror", s.blossomRootHandler)
s.mux.HandleFunc("/report", s.blossomRootHandler)
log.Printf("Blossom blob storage API enabled at /blossom and root")
} else {
log.Printf("WARNING: Blossom server is nil, routes not registered")
}
// Cluster replication API endpoints
@@ -263,9 +363,31 @@ func (s *Server) UserInterface() {
s.mux.HandleFunc("/cluster/events", s.clusterManager.HandleEventsRange)
log.Printf("Cluster replication API enabled at /cluster")
}
// WireGuard VPN and Bunker API endpoints
// These are always registered but will return errors if not enabled
s.mux.HandleFunc("/api/wireguard/config", s.handleWireGuardConfig)
s.mux.HandleFunc("/api/wireguard/regenerate", s.handleWireGuardRegenerate)
s.mux.HandleFunc("/api/wireguard/status", s.handleWireGuardStatus)
s.mux.HandleFunc("/api/wireguard/audit", s.handleWireGuardAudit)
s.mux.HandleFunc("/api/bunker/url", s.handleBunkerURL)
s.mux.HandleFunc("/api/bunker/info", s.handleBunkerInfo)
// Cashu access token endpoints (NIP-XX)
s.mux.HandleFunc("/cashu/mint", s.handleCashuMint)
s.mux.HandleFunc("/cashu/keysets", s.handleCashuKeysets)
s.mux.HandleFunc("/cashu/info", s.handleCashuInfo)
if s.CashuIssuer != nil {
log.Printf("Cashu access token API enabled at /cashu")
}
// NRC (Nostr Relay Connect) management endpoints
s.mux.HandleFunc("/api/nrc/connections", s.handleNRCConnectionsRouter)
s.mux.HandleFunc("/api/nrc/connections/", s.handleNRCConnectionsRouter)
s.mux.HandleFunc("/api/nrc/config", s.handleNRCConfig)
}
// handleFavicon serves orly-favicon.png as favicon.ico
// handleFavicon serves favicon.png as favicon.ico
func (s *Server) handleFavicon(w http.ResponseWriter, r *http.Request) {
// In dev mode with proxy configured, forward to dev server
if s.devProxy != nil {
@@ -273,14 +395,20 @@ func (s *Server) handleFavicon(w http.ResponseWriter, r *http.Request) {
return
}
// Serve orly-favicon.png as favicon.ico from embedded web app
// If web UI is disabled without a proxy, return 404
if s.Config != nil && s.Config.WebDisableEmbedded {
http.NotFound(w, r)
return
}
// Serve favicon.png as favicon.ico from embedded web app
w.Header().Set("Content-Type", "image/png")
w.Header().Set("Cache-Control", "public, max-age=86400") // Cache for 1 day
// Create a request for orly-favicon.png and serve it
// Create a request for favicon.png and serve it
faviconReq := &http.Request{
Method: "GET",
URL: &url.URL{Path: "/orly-favicon.png"},
URL: &url.URL{Path: "/favicon.png"},
}
ServeEmbeddedWeb(w, faviconReq)
}
@@ -293,6 +421,12 @@ func (s *Server) handleLoginInterface(w http.ResponseWriter, r *http.Request) {
return
}
// If web UI is disabled without a proxy, return 404
if s.Config != nil && s.Config.WebDisableEmbedded {
http.NotFound(w, r)
return
}
// Serve embedded web interface
ServeEmbeddedWeb(w, r)
}
@@ -541,25 +675,28 @@ func (s *Server) handleExport(w http.ResponseWriter, r *http.Request) {
return
}
// Validate NIP-98 authentication
valid, pubkey, err := httpauth.CheckAuth(r)
if chk.E(err) || !valid {
errorMsg := "NIP-98 authentication validation failed"
if err != nil {
errorMsg = err.Error()
// Skip authentication and permission checks when ACL is "none" (open relay mode)
if acl.Registry.Active.Load() != "none" {
// Validate NIP-98 authentication
valid, pubkey, err := httpauth.CheckAuth(r)
if chk.E(err) || !valid {
errorMsg := "NIP-98 authentication validation failed"
if err != nil {
errorMsg = err.Error()
}
http.Error(w, errorMsg, http.StatusUnauthorized)
return
}
http.Error(w, errorMsg, http.StatusUnauthorized)
return
}
// Check permissions - require write, admin, or owner level
accessLevel := acl.Registry.GetAccessLevel(pubkey, r.RemoteAddr)
if accessLevel != "write" && accessLevel != "admin" && accessLevel != "owner" {
http.Error(
w, "Write, admin, or owner permission required",
http.StatusForbidden,
)
return
// Check permissions - require write, admin, or owner level
accessLevel := acl.Registry.GetAccessLevel(pubkey, r.RemoteAddr)
if accessLevel != "write" && accessLevel != "admin" && accessLevel != "owner" {
http.Error(
w, "Write, admin, or owner permission required",
http.StatusForbidden,
)
return
}
}
// Parse pubkeys from request
@@ -610,6 +747,12 @@ func (s *Server) handleExport(w http.ResponseWriter, r *http.Request) {
w.Header().Set(
"Content-Disposition", "attachment; filename=\""+filename+"\"",
)
w.Header().Set("X-Content-Type-Options", "nosniff")
// Flush headers to start streaming immediately
if flusher, ok := w.(http.Flusher); ok {
flusher.Flush()
}
// Stream export
s.DB.Export(s.Ctx, w, pks...)
@@ -710,24 +853,27 @@ func (s *Server) handleImport(w http.ResponseWriter, r *http.Request) {
return
}
// Validate NIP-98 authentication
valid, pubkey, err := httpauth.CheckAuth(r)
if chk.E(err) || !valid {
errorMsg := "NIP-98 authentication validation failed"
if err != nil {
errorMsg = err.Error()
// Skip authentication and permission checks when ACL is "none" (open relay mode)
if acl.Registry.Active.Load() != "none" {
// Validate NIP-98 authentication
valid, pubkey, err := httpauth.CheckAuth(r)
if chk.E(err) || !valid {
errorMsg := "NIP-98 authentication validation failed"
if err != nil {
errorMsg = err.Error()
}
http.Error(w, errorMsg, http.StatusUnauthorized)
return
}
http.Error(w, errorMsg, http.StatusUnauthorized)
return
}
// Check permissions - require admin or owner level
accessLevel := acl.Registry.GetAccessLevel(pubkey, r.RemoteAddr)
if accessLevel != "admin" && accessLevel != "owner" {
http.Error(
w, "Admin or owner permission required", http.StatusForbidden,
)
return
// Check permissions - require admin or owner level
accessLevel := acl.Registry.GetAccessLevel(pubkey, r.RemoteAddr)
if accessLevel != "admin" && accessLevel != "owner" {
http.Error(
w, "Admin or owner permission required", http.StatusForbidden,
)
return
}
}
ct := r.Header.Get("Content-Type")
@@ -1133,3 +1279,250 @@ func (s *Server) updatePeerAdminACL(peerPubkey []byte) {
}
}
}
// =============================================================================
// Event Service Initialization
// =============================================================================
// InitEventServices initializes the domain services for event handling.
// This should be called after the Server is created but before accepting connections.
func (s *Server) InitEventServices() {
// Initialize validation service
s.eventValidator = validation.NewWithConfig(&validation.Config{
MaxFutureSeconds: 3600, // 1 hour
})
// Initialize authorization service
authCfg := &authorization.Config{
AuthRequired: s.Config.AuthRequired,
AuthToWrite: s.Config.AuthToWrite,
Admins: s.Admins,
Owners: s.Owners,
}
s.eventAuthorizer = authorization.New(
authCfg,
s.wrapAuthACLRegistry(),
s.wrapAuthPolicyManager(),
s.wrapAuthSyncManager(),
)
// Initialize router with handlers for special event kinds
s.eventRouter = routing.New()
// Register ephemeral event handler (kinds 20000-29999)
s.eventRouter.RegisterKindCheck(
"ephemeral",
routing.IsEphemeral,
routing.MakeEphemeralHandler(s.publishers),
)
// Initialize processing service
procCfg := &processing.Config{
Admins: s.Admins,
Owners: s.Owners,
WriteTimeout: 30 * time.Second,
}
s.eventProcessor = processing.New(procCfg, s.wrapDB(), s.publishers)
// Wire up optional dependencies to processing service
if s.rateLimiter != nil {
s.eventProcessor.SetRateLimiter(s.wrapRateLimiter())
}
if s.syncManager != nil {
s.eventProcessor.SetSyncManager(s.wrapSyncManager())
}
if s.relayGroupMgr != nil {
s.eventProcessor.SetRelayGroupManager(s.wrapRelayGroupManager())
}
if s.clusterManager != nil {
s.eventProcessor.SetClusterManager(s.wrapClusterManager())
}
s.eventProcessor.SetACLRegistry(s.wrapACLRegistry())
}
// Database wrapper for processing.Database interface
type processingDBWrapper struct {
db database.Database
}
func (s *Server) wrapDB() processing.Database {
return &processingDBWrapper{db: s.DB}
}
func (w *processingDBWrapper) SaveEvent(ctx context.Context, ev *event.E) (exists bool, err error) {
return w.db.SaveEvent(ctx, ev)
}
func (w *processingDBWrapper) CheckForDeleted(ev *event.E, adminOwners [][]byte) error {
return w.db.CheckForDeleted(ev, adminOwners)
}
// RateLimiter wrapper for processing.RateLimiter interface
type processingRateLimiterWrapper struct {
rl *ratelimit.Limiter
}
func (s *Server) wrapRateLimiter() processing.RateLimiter {
return &processingRateLimiterWrapper{rl: s.rateLimiter}
}
func (w *processingRateLimiterWrapper) IsEnabled() bool {
return w.rl.IsEnabled()
}
func (w *processingRateLimiterWrapper) Wait(ctx context.Context, opType int) error {
w.rl.Wait(ctx, opType)
return nil
}
// SyncManager wrapper for processing.SyncManager interface
type processingSyncManagerWrapper struct {
sm *dsync.Manager
}
func (s *Server) wrapSyncManager() processing.SyncManager {
return &processingSyncManagerWrapper{sm: s.syncManager}
}
func (w *processingSyncManagerWrapper) UpdateSerial() {
w.sm.UpdateSerial()
}
// RelayGroupManager wrapper for processing.RelayGroupManager interface
type processingRelayGroupManagerWrapper struct {
rgm *dsync.RelayGroupManager
}
func (s *Server) wrapRelayGroupManager() processing.RelayGroupManager {
return &processingRelayGroupManagerWrapper{rgm: s.relayGroupMgr}
}
func (w *processingRelayGroupManagerWrapper) ValidateRelayGroupEvent(ev *event.E) error {
return w.rgm.ValidateRelayGroupEvent(ev)
}
func (w *processingRelayGroupManagerWrapper) HandleRelayGroupEvent(ev *event.E, syncMgr any) {
if sm, ok := syncMgr.(*dsync.Manager); ok {
w.rgm.HandleRelayGroupEvent(ev, sm)
}
}
// ClusterManager wrapper for processing.ClusterManager interface
type processingClusterManagerWrapper struct {
cm *dsync.ClusterManager
}
func (s *Server) wrapClusterManager() processing.ClusterManager {
return &processingClusterManagerWrapper{cm: s.clusterManager}
}
func (w *processingClusterManagerWrapper) HandleMembershipEvent(ev *event.E) error {
return w.cm.HandleMembershipEvent(ev)
}
// ACLRegistry wrapper for processing.ACLRegistry interface
type processingACLRegistryWrapper struct{}
func (s *Server) wrapACLRegistry() processing.ACLRegistry {
return &processingACLRegistryWrapper{}
}
func (w *processingACLRegistryWrapper) Configure(cfg ...any) error {
return acl.Registry.Configure(cfg...)
}
func (w *processingACLRegistryWrapper) Active() string {
return acl.Registry.Active.Load()
}
// =============================================================================
// Authorization Service Wrappers
// =============================================================================
// ACLRegistry wrapper for authorization.ACLRegistry interface
type authACLRegistryWrapper struct{}
func (s *Server) wrapAuthACLRegistry() authorization.ACLRegistry {
return &authACLRegistryWrapper{}
}
func (w *authACLRegistryWrapper) GetAccessLevel(pub []byte, address string) string {
return acl.Registry.GetAccessLevel(pub, address)
}
func (w *authACLRegistryWrapper) CheckPolicy(ev *event.E) (bool, error) {
return acl.Registry.CheckPolicy(ev)
}
func (w *authACLRegistryWrapper) Active() string {
return acl.Registry.Active.Load()
}
// PolicyManager wrapper for authorization.PolicyManager interface
type authPolicyManagerWrapper struct {
pm *policy.P
}
func (s *Server) wrapAuthPolicyManager() authorization.PolicyManager {
if s.policyManager == nil {
return nil
}
return &authPolicyManagerWrapper{pm: s.policyManager}
}
func (w *authPolicyManagerWrapper) IsEnabled() bool {
return w.pm.IsEnabled()
}
func (w *authPolicyManagerWrapper) CheckPolicy(action string, ev *event.E, pubkey []byte, remote string) (bool, error) {
return w.pm.CheckPolicy(action, ev, pubkey, remote)
}
// SyncManager wrapper for authorization.SyncManager interface
type authSyncManagerWrapper struct {
sm *dsync.Manager
}
func (s *Server) wrapAuthSyncManager() authorization.SyncManager {
if s.syncManager == nil {
return nil
}
return &authSyncManagerWrapper{sm: s.syncManager}
}
func (w *authSyncManagerWrapper) GetPeers() []string {
return w.sm.GetPeers()
}
func (w *authSyncManagerWrapper) IsAuthorizedPeer(url, pubkey string) bool {
return w.sm.IsAuthorizedPeer(url, pubkey)
}
// =============================================================================
// Message Processing Pause/Resume for Policy and Follow List Updates
// =============================================================================
// PauseMessageProcessing acquires an exclusive lock to pause all message processing.
// This should be called before updating policy configuration or follow lists.
// Call ResumeMessageProcessing to release the lock after updates are complete.
func (s *Server) PauseMessageProcessing() {
s.messagePauseMutex.Lock()
}
// ResumeMessageProcessing releases the exclusive lock to resume message processing.
// This should be called after policy configuration or follow list updates are complete.
func (s *Server) ResumeMessageProcessing() {
s.messagePauseMutex.Unlock()
}
// AcquireMessageProcessingLock acquires a read lock for normal message processing.
// This allows concurrent message processing while blocking during policy updates.
// Call ReleaseMessageProcessingLock when message processing is complete.
func (s *Server) AcquireMessageProcessingLock() {
s.messagePauseMutex.RLock()
}
// ReleaseMessageProcessingLock releases the read lock after message processing.
func (s *Server) ReleaseMessageProcessingLock() {
s.messagePauseMutex.RUnlock()
}
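
Together these four methods form a pause gate over a sync.RWMutex: message handlers share the read lock, so normal traffic proceeds concurrently, while a policy or follow-list update takes the write lock, waits for in-flight handlers to drain, and excludes new ones until it finishes. A runnable sketch of the same pattern, with a hypothetical handler and updater standing in for the relay's actual call sites:

package main

import (
	"fmt"
	"sync"
	"time"
)

// pauser mirrors the Server methods above with a plain sync.RWMutex.
type pauser struct{ mu sync.RWMutex }

// handleMessage stands in for normal message processing; many of these
// run at once because they share the read lock.
func (p *pauser) handleMessage(id int) {
	p.mu.RLock()         // AcquireMessageProcessingLock
	defer p.mu.RUnlock() // ReleaseMessageProcessingLock
	time.Sleep(10 * time.Millisecond) // simulated work
	fmt.Println("handled message", id)
}

// updatePolicy stands in for a policy/follow-list update; the write
// lock waits for in-flight handlers and blocks new ones.
func (p *pauser) updatePolicy() {
	p.mu.Lock()         // PauseMessageProcessing
	defer p.mu.Unlock() // ResumeMessageProcessing
	fmt.Println("policy updated with no messages in flight")
}

func main() {
	var p pauser
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func(id int) { defer wg.Done(); p.handleMessage(id) }(i)
	}
	p.updatePolicy()
	wg.Wait()
}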

View File

@@ -16,7 +16,7 @@ import (
"github.com/adrg/xdg"
"lol.mleku.dev/chk"
"lol.mleku.dev/log"
"next.orly.dev/pkg/encoders/event"
"git.mleku.dev/mleku/nostr/encoders/event"
)
// SprocketResponse represents a response from the sprocket script

View File

@@ -1,449 +0,0 @@
package app
import (
"context"
"encoding/json"
"fmt"
"net"
"net/http/httptest"
"strings"
"sync"
"sync/atomic"
"testing"
"time"
"github.com/gorilla/websocket"
"next.orly.dev/app/config"
"next.orly.dev/pkg/database"
"next.orly.dev/pkg/encoders/event"
"next.orly.dev/pkg/encoders/tag"
"next.orly.dev/pkg/interfaces/signer/p8k"
"next.orly.dev/pkg/protocol/publish"
)
// createSignedTestEvent creates a properly signed test event for use in tests
func createSignedTestEvent(t *testing.T, kind uint16, content string, tags ...*tag.T) *event.E {
t.Helper()
// Create a signer
signer, err := p8k.New()
if err != nil {
t.Fatalf("Failed to create signer: %v", err)
}
defer signer.Zero()
// Generate a keypair
if err := signer.Generate(); err != nil {
t.Fatalf("Failed to generate keypair: %v", err)
}
// Create event
ev := &event.E{
Kind: kind,
Content: []byte(content),
CreatedAt: time.Now().Unix(),
Tags: &tag.S{},
}
// Add any provided tags
for _, tg := range tags {
*ev.Tags = append(*ev.Tags, tg)
}
// Sign the event (this sets Pubkey, ID, and Sig)
if err := ev.Sign(signer); err != nil {
t.Fatalf("Failed to sign event: %v", err)
}
return ev
}
// TestLongRunningSubscriptionStability verifies that subscriptions remain active
// for extended periods and correctly receive real-time events without dropping.
func TestLongRunningSubscriptionStability(t *testing.T) {
// Create test server
server, cleanup := setupTestServer(t)
defer cleanup()
// Start HTTP test server
httpServer := httptest.NewServer(server)
defer httpServer.Close()
// Convert HTTP URL to WebSocket URL
wsURL := strings.Replace(httpServer.URL, "http://", "ws://", 1)
// Connect WebSocket client
conn, _, err := websocket.DefaultDialer.Dial(wsURL, nil)
if err != nil {
t.Fatalf("Failed to connect WebSocket: %v", err)
}
defer conn.Close()
// Subscribe to kind 1 events
subID := "test-long-running"
reqMsg := fmt.Sprintf(`["REQ","%s",{"kinds":[1]}]`, subID)
if err := conn.WriteMessage(websocket.TextMessage, []byte(reqMsg)); err != nil {
t.Fatalf("Failed to send REQ: %v", err)
}
// Read until EOSE
gotEOSE := false
for !gotEOSE {
_, msg, err := conn.ReadMessage()
if err != nil {
t.Fatalf("Failed to read message: %v", err)
}
if strings.Contains(string(msg), `"EOSE"`) && strings.Contains(string(msg), subID) {
gotEOSE = true
t.Logf("Received EOSE for subscription %s", subID)
}
}
// Set up event counter
var receivedCount atomic.Int64
var mu sync.Mutex
receivedEvents := make(map[string]bool)
// Start goroutine to read events
ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
defer cancel()
readDone := make(chan struct{})
go func() {
defer close(readDone)
defer func() {
// Recover from any panic in read goroutine
if r := recover(); r != nil {
t.Logf("Read goroutine panic (recovered): %v", r)
}
}()
for {
// Check context first before attempting any read
select {
case <-ctx.Done():
return
default:
}
// Use a longer deadline and check context more frequently
conn.SetReadDeadline(time.Now().Add(2 * time.Second))
_, msg, err := conn.ReadMessage()
if err != nil {
// Immediately check if context is done - if so, just exit without continuing
if ctx.Err() != nil {
return
}
// Check for normal close
if websocket.IsCloseError(err, websocket.CloseNormalClosure) {
return
}
// Check if this is a timeout error - those are recoverable
if netErr, ok := err.(net.Error); ok && netErr.Timeout() {
// Double-check context before continuing
if ctx.Err() != nil {
return
}
continue
}
// Any other error means connection is broken, exit
t.Logf("Read error (non-timeout): %v", err)
return
}
// Parse message to check if it's an EVENT for our subscription
var envelope []interface{}
if err := json.Unmarshal(msg, &envelope); err != nil {
continue
}
if len(envelope) >= 3 && envelope[0] == "EVENT" && envelope[1] == subID {
// Extract event ID
eventMap, ok := envelope[2].(map[string]interface{})
if !ok {
continue
}
eventID, ok := eventMap["id"].(string)
if !ok {
continue
}
mu.Lock()
if !receivedEvents[eventID] {
receivedEvents[eventID] = true
receivedCount.Add(1)
t.Logf("Received event %s (total: %d)", eventID[:8], receivedCount.Load())
}
mu.Unlock()
}
}
}()
// Publish events at regular intervals over 30 seconds
const numEvents = 30
const publishInterval = 1 * time.Second
publishCtx, publishCancel := context.WithTimeout(context.Background(), 35*time.Second)
defer publishCancel()
for i := 0; i < numEvents; i++ {
select {
case <-publishCtx.Done():
t.Fatalf("Publish timeout exceeded")
default:
}
// Create and sign test event
ev := createSignedTestEvent(t, 1, fmt.Sprintf("Test event %d for long-running subscription", i))
// Save event to database
if _, err := server.D.SaveEvent(context.Background(), ev); err != nil {
t.Errorf("Failed to save event %d: %v", i, err)
continue
}
// Manually trigger publisher to deliver event to subscriptions
server.publishers.Deliver(ev)
t.Logf("Published event %d", i)
// Wait before next publish
if i < numEvents-1 {
time.Sleep(publishInterval)
}
}
// Wait a bit more for all events to be delivered
time.Sleep(3 * time.Second)
// Cancel context and wait for reader to finish
cancel()
<-readDone
// Check results
received := receivedCount.Load()
t.Logf("Test complete: published %d events, received %d events", numEvents, received)
// We should receive at least 90% of events (allowing for some timing edge cases)
minExpected := int64(float64(numEvents) * 0.9)
if received < minExpected {
t.Errorf("Subscription stability issue: expected at least %d events, got %d", minExpected, received)
}
// Close subscription
closeMsg := fmt.Sprintf(`["CLOSE","%s"]`, subID)
if err := conn.WriteMessage(websocket.TextMessage, []byte(closeMsg)); err != nil {
t.Errorf("Failed to send CLOSE: %v", err)
}
t.Logf("Long-running subscription test PASSED: %d/%d events delivered", received, numEvents)
}
// TestMultipleConcurrentSubscriptions verifies that multiple subscriptions
// can coexist on the same connection without interfering with each other.
func TestMultipleConcurrentSubscriptions(t *testing.T) {
// Create test server
server, cleanup := setupTestServer(t)
defer cleanup()
// Start HTTP test server
httpServer := httptest.NewServer(server)
defer httpServer.Close()
// Convert HTTP URL to WebSocket URL
wsURL := strings.Replace(httpServer.URL, "http://", "ws://", 1)
// Connect WebSocket client
conn, _, err := websocket.DefaultDialer.Dial(wsURL, nil)
if err != nil {
t.Fatalf("Failed to connect WebSocket: %v", err)
}
defer conn.Close()
// Create 3 subscriptions for different kinds
subscriptions := []struct {
id string
kind int
}{
{"sub1", 1},
{"sub2", 3},
{"sub3", 7},
}
// Subscribe to all
for _, sub := range subscriptions {
reqMsg := fmt.Sprintf(`["REQ","%s",{"kinds":[%d]}]`, sub.id, sub.kind)
if err := conn.WriteMessage(websocket.TextMessage, []byte(reqMsg)); err != nil {
t.Fatalf("Failed to send REQ for %s: %v", sub.id, err)
}
}
// Read until we get EOSE for all subscriptions
eoseCount := 0
for eoseCount < len(subscriptions) {
_, msg, err := conn.ReadMessage()
if err != nil {
t.Fatalf("Failed to read message: %v", err)
}
if strings.Contains(string(msg), `"EOSE"`) {
eoseCount++
t.Logf("Received EOSE %d/%d", eoseCount, len(subscriptions))
}
}
// Track received events per subscription
var mu sync.Mutex
receivedByKind := make(map[int]int)
// Start reader goroutine
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()
readDone := make(chan struct{})
go func() {
defer close(readDone)
defer func() {
// Recover from any panic in read goroutine
if r := recover(); r != nil {
t.Logf("Read goroutine panic (recovered): %v", r)
}
}()
for {
// Check context first before attempting any read
select {
case <-ctx.Done():
return
default:
}
conn.SetReadDeadline(time.Now().Add(2 * time.Second))
_, msg, err := conn.ReadMessage()
if err != nil {
// Immediately check if context is done - if so, just exit without continuing
if ctx.Err() != nil {
return
}
// Check for normal close
if websocket.IsCloseError(err, websocket.CloseNormalClosure) {
return
}
// Check if this is a timeout error - those are recoverable
if netErr, ok := err.(net.Error); ok && netErr.Timeout() {
// Double-check context before continuing
if ctx.Err() != nil {
return
}
continue
}
// Any other error means connection is broken, exit
t.Logf("Read error (non-timeout): %v", err)
return
}
// Parse message
var envelope []interface{}
if err := json.Unmarshal(msg, &envelope); err != nil {
continue
}
if len(envelope) >= 3 && envelope[0] == "EVENT" {
eventMap, ok := envelope[2].(map[string]interface{})
if !ok {
continue
}
kindFloat, ok := eventMap["kind"].(float64)
if !ok {
continue
}
kind := int(kindFloat)
mu.Lock()
receivedByKind[kind]++
t.Logf("Received event for kind %d (count: %d)", kind, receivedByKind[kind])
mu.Unlock()
}
}
}()
// Publish events for each kind
for _, sub := range subscriptions {
for i := 0; i < 5; i++ {
// Create and sign test event
ev := createSignedTestEvent(t, uint16(sub.kind), fmt.Sprintf("Test for kind %d event %d", sub.kind, i))
if _, err := server.D.SaveEvent(context.Background(), ev); err != nil {
t.Errorf("Failed to save event: %v", err)
}
// Manually trigger publisher to deliver event to subscriptions
server.publishers.Deliver(ev)
time.Sleep(100 * time.Millisecond)
}
}
// Wait for events to be delivered
time.Sleep(2 * time.Second)
// Cancel and cleanup
cancel()
<-readDone
// Verify each subscription received its events
mu.Lock()
defer mu.Unlock()
for _, sub := range subscriptions {
count := receivedByKind[sub.kind]
if count < 4 { // Allow for some timing issues, expect at least 4/5
t.Errorf("Subscription %s (kind %d) only received %d/5 events", sub.id, sub.kind, count)
}
}
t.Logf("Multiple concurrent subscriptions test PASSED")
}
// setupTestServer creates a test relay server for subscription testing
func setupTestServer(t *testing.T) (*Server, func()) {
// Setup test database
ctx, cancel := context.WithCancel(context.Background())
// Use a temporary directory for the test database
tmpDir := t.TempDir()
db, err := database.New(ctx, cancel, tmpDir, "test.db")
if err != nil {
t.Fatalf("Failed to create test database: %v", err)
}
// Setup basic config
cfg := &config.C{
AuthRequired: false,
Owners: []string{},
Admins: []string{},
ACLMode: "none",
}
// Setup server
server := &Server{
Config: cfg,
D: db,
Ctx: ctx,
publishers: publish.New(NewPublisher(ctx)),
Admins: [][]byte{},
Owners: [][]byte{},
challenges: make(map[string][]byte),
}
// Cleanup function
cleanup := func() {
db.Close()
cancel()
}
return server, cleanup
}

3
app/web/.gitignore vendored
View File

@@ -1,5 +1,8 @@
node_modules/
dist/
public/bundle.js
public/bundle.js.map
public/bundle.css
.vite/
.tanstack/
.idea/

View File

@@ -1,12 +1,17 @@
{
"lockfileVersion": 1,
"configVersion": 0,
"workspaces": {
"": {
"name": "svelte-app",
"dependencies": {
"applesauce-core": "^4.1.0",
"applesauce-signers": "^4.1.0",
"@noble/curves": "^1.4.0",
"@noble/hashes": "^1.4.0",
"applesauce-core": "^4.4.2",
"applesauce-signers": "^4.2.0",
"hash-wasm": "^4.12.0",
"nostr-tools": "^2.17.0",
"qrcode": "^1.5.3",
"sirv-cli": "^2.0.0",
},
"devDependencies": {
@@ -35,7 +40,7 @@
"@noble/ciphers": ["@noble/ciphers@0.5.3", "", {}, "sha512-B0+6IIHiqEs3BPMT0hcRmHvEj2QHOLu+uwt+tqDDeVd0oyVzh7BPrDcPjRnV1PV/5LaknXJJQvOuRGR0zQJz+w=="],
"@noble/curves": ["@noble/curves@1.2.0", "", { "dependencies": { "@noble/hashes": "1.3.2" } }, "sha512-oYclrNgRaM9SsBUBVbb8M6DTV7ZHRTKugureoYEncY5c65HOmRzvSiTE3y5CYaPYJA/GVkrhXEoF0M3Ya9PMnw=="],
"@noble/curves": ["@noble/curves@1.9.7", "", { "dependencies": { "@noble/hashes": "1.8.0" } }, "sha512-gbKGcRUYIjA3/zCCNaWDciTMFI0dCkvou3TL8Zmy5Nc7sJ47a0jtOeZoTaMxkuqRo9cRhjOdZJXegxYE5FN/xw=="],
"@noble/hashes": ["@noble/hashes@1.8.0", "", {}, "sha512-jCs9ldd7NwzpgXDIf6P3+NrHh9/sD6CQdxHyjQI+h/6rDNo88ypBxxz45UDuZHz9r3tNz7N/VInSVoVdtXEI4A=="],
@@ -77,11 +82,15 @@
"acorn": ["acorn@8.15.0", "", { "bin": { "acorn": "bin/acorn" } }, "sha512-NZyJarBfL7nWwIq+FDL6Zp/yHEhePMNnnJ0y3qfieCrmNvYct8uvtiV41UvlSe6apAfk0fY1FbWx+NwfmpvtTg=="],
"ansi-regex": ["ansi-regex@5.0.1", "", {}, "sha512-quJQXlTSUGL2LH9SUXo8VwsY4soanhgo6LNSm84E1LBcE8s3O0wpdiRzyR9z/ZZJMlMWv37qOOb9pdJlMUEKFQ=="],
"ansi-styles": ["ansi-styles@4.3.0", "", { "dependencies": { "color-convert": "^2.0.1" } }, "sha512-zbB9rCJAT1rbjiVDb2hqKFHNYLxgtk8NURxZ3IZwD3F6NtxbXZQCnnSi1Lkx+IDohdPlFp222wVALIheZJQSEg=="],
"anymatch": ["anymatch@3.1.3", "", { "dependencies": { "normalize-path": "^3.0.0", "picomatch": "^2.0.4" } }, "sha512-KMReFUr0B4t+D+OBkjR3KYqvocp2XaSzO55UcB6mgQMd3KbcE+mWTyvVV7D/zsdEbNnV6acZUutkiHQXvTr1Rw=="],
"applesauce-core": ["applesauce-core@4.1.0", "", { "dependencies": { "@noble/hashes": "^1.7.1", "@scure/base": "^1.2.4", "debug": "^4.4.0", "fast-deep-equal": "^3.1.3", "hash-sum": "^2.0.0", "light-bolt11-decoder": "^3.2.0", "nanoid": "^5.0.9", "nostr-tools": "~2.17", "rxjs": "^7.8.1" } }, "sha512-vFOHfqWW4DJfvPkMYLYNiy2ozO2IF+ZNwetGqaLuPjgE1Iwu4trZmG3GJUH+lO1Oq1N4e/OQ/EcotJoEBEiW7Q=="],
"applesauce-core": ["applesauce-core@4.4.2", "", { "dependencies": { "@noble/hashes": "^1.7.1", "@scure/base": "^1.2.4", "debug": "^4.4.0", "fast-deep-equal": "^3.1.3", "hash-sum": "^2.0.0", "light-bolt11-decoder": "^3.2.0", "nanoid": "^5.0.9", "nostr-tools": "~2.17", "rxjs": "^7.8.1" } }, "sha512-zuZB74Pp28UGM4e8DWbN1atR95xL7ODENvjkaGGnvAjIKvfdgMznU7m9gLxr/Hu+IHOmVbbd4YxwNmKBzCWhHQ=="],
"applesauce-signers": ["applesauce-signers@4.1.0", "", { "dependencies": { "@noble/hashes": "^1.7.1", "@noble/secp256k1": "^1.7.1", "@scure/base": "^1.2.4", "applesauce-core": "^4.1.0", "debug": "^4.4.0", "nanoid": "^5.0.9", "nostr-tools": "~2.17", "rxjs": "^7.8.2" } }, "sha512-S+nTkAt1CAGhalwI7warLTINsxxjBpS3NqbViz6LVy1ZrzEqaNirlalX+rbCjxjRrvIGhYV+rszkxDFhCYbPkg=="],
"applesauce-signers": ["applesauce-signers@4.2.0", "", { "dependencies": { "@noble/hashes": "^1.7.1", "@noble/secp256k1": "^1.7.1", "@scure/base": "^1.2.4", "applesauce-core": "^4.2.0", "debug": "^4.4.0", "nanoid": "^5.0.9", "nostr-tools": "~2.17", "rxjs": "^7.8.2" } }, "sha512-celexNd+aLt6/vhf72XXw2oAk8ohjna+aWEg/Z2liqPwP+kbVjnqq4Z1RXvt79QQbTIQbXYGWqervXWLE8HmHg=="],
"array-union": ["array-union@2.1.0", "", {}, "sha512-HGyxoOTYUyCM6stUe6EJgnd4EoewAI7zMdfqO+kGjnlZmBDz/cR5pf8r/cR4Wq60sL/p0IkcjUEEPwS3GFrIyw=="],
@@ -95,8 +104,16 @@
"buffer-from": ["buffer-from@1.1.2", "", {}, "sha512-E+XQCRwSbaaiChtv6k6Dwgc+bx+Bs6vuKJHHl5kox/BaKbhiXzqQOwK4cO22yElGp2OCmjwVhT3HmxgyPGnJfQ=="],
"camelcase": ["camelcase@5.3.1", "", {}, "sha512-L28STB170nwWS63UjtlEOE3dldQApaJXZkOI1uMFfzf3rRuPegHaHesyee+YxQ+W6SvRDQV6UrdOdRiR153wJg=="],
"chokidar": ["chokidar@3.6.0", "", { "dependencies": { "anymatch": "~3.1.2", "braces": "~3.0.2", "glob-parent": "~5.1.2", "is-binary-path": "~2.1.0", "is-glob": "~4.0.1", "normalize-path": "~3.0.0", "readdirp": "~3.6.0" }, "optionalDependencies": { "fsevents": "~2.3.2" } }, "sha512-7VT13fmjotKpGipCW9JEQAusEPE+Ei8nl6/g4FBAmIm0GOOLMua9NDDo/DWp0ZAxCr3cPq5ZpBqmPAQgDda2Pw=="],
"cliui": ["cliui@6.0.0", "", { "dependencies": { "string-width": "^4.2.0", "strip-ansi": "^6.0.0", "wrap-ansi": "^6.2.0" } }, "sha512-t6wbgtoCXvAzst7QgXxJYqPt0usEfbgQdftEPbLL/cvv6HPE5VgvqCuAIDR0NgU52ds6rFwqrgakNLrHEjCbrQ=="],
"color-convert": ["color-convert@2.0.1", "", { "dependencies": { "color-name": "~1.1.4" } }, "sha512-RRECPsj7iu/xb5oKYcsFHSppFNnsj/52OVTRKb4zP5onXwVF3zVmmToNcOfGC+CRDpfK/U584fMg38ZHCaElKQ=="],
"color-name": ["color-name@1.1.4", "", {}, "sha512-dOy+3AuW3a2wNbZHIuMZpTcgjGuLU/uBL/ubcZF9OXbDo8ff4O8yVp5Bf0efS8uEoYo5q4Fx7dY9OgQGXgAsQA=="],
"colorette": ["colorette@1.4.0", "", {}, "sha512-Y2oEozpomLn7Q3HFP7dpww7AtMJplbM9lGZP6RDfHqmbeRjiwRg4n6VM6j4KLmRke85uWEI7JqF17f3pqdRA0g=="],
"commander": ["commander@2.20.3", "", {}, "sha512-GpVkmM8vF2vQUkj2LvZmD35JxeJOLCwJ9cUkugyk2nuhbv3+mJvpLYYt+0+USMxE+oj+ey/lJEnhZw75x/OMcQ=="],
@@ -109,10 +126,16 @@
"debug": ["debug@4.4.3", "", { "dependencies": { "ms": "^2.1.3" } }, "sha512-RGwwWnwQvkVfavKVt22FGLw+xYSdzARwm0ru6DhTVA3umU5hZc28V3kO4stgYryrTlLpuvgI9GiijltAjNbcqA=="],
"decamelize": ["decamelize@1.2.0", "", {}, "sha512-z2S+W9X73hAUUki+N+9Za2lBlun89zigOyGrsax+KUQ6wKW4ZoWpEYBkGhQjwAjjDCkWxhY0VKEhk8wzY7F5cA=="],
"deepmerge": ["deepmerge@4.3.1", "", {}, "sha512-3sUqbMEc77XqpdNO7FRyRog+eW3ph+GYCbj+rK+uYyRMuwsVy0rMiVtPn+QJlKFvWP/1PYpapqYn0Me2knFn+A=="],
"dijkstrajs": ["dijkstrajs@1.0.3", "", {}, "sha512-qiSlmBq9+BCdCA/L46dw8Uy93mloxsPSbwnm5yrKn2vMPiy8KyAskTF6zuV/j5BMsmOGZDPs7KjU+mjb670kfA=="],
"dir-glob": ["dir-glob@3.0.1", "", { "dependencies": { "path-type": "^4.0.0" } }, "sha512-WkrWp9GR4KXfKGYzOLmTuGVi1UWFfws377n9cc55/tb6DuqyF6pcQ5AbiHEshaDpY9v6oaSr2XCDidGmMwdzIA=="],
"emoji-regex": ["emoji-regex@8.0.0", "", {}, "sha512-MSjYzcWNOA0ewAHpz0MxpYFvwg6yjy1NG3xteoqz644VCo/RPgnr1/GGt+ic3iJTzQ8Eu3TdM14SawnVUmGE6A=="],
"estree-walker": ["estree-walker@2.0.2", "", {}, "sha512-Rfkk/Mp/DL7JVje3u18FxFujQlTNR2q6QfMSMB7AvCBx91NGj/ba3kCfza0f6dVDbw7YlRf/nDrn7pQrCCyQ/w=="],
"fast-deep-equal": ["fast-deep-equal@3.1.3", "", {}, "sha512-f3qQ9oQy9j2AhBe/H9VC91wLmKBCCU/gDOnKNAYG5hswO7BLKj09Hc5HYNz9cGI++xlpDCIgDaitVs03ATR84Q=="],
@@ -123,6 +146,8 @@
"fill-range": ["fill-range@7.1.1", "", { "dependencies": { "to-regex-range": "^5.0.1" } }, "sha512-YsGpe3WHLK8ZYi4tWDg2Jy3ebRz2rXowDxnld4bkQB00cc/1Zw9AWnC0i9ztDJitivtQvaI9KaLyKrc+hBW0yg=="],
"find-up": ["find-up@4.1.0", "", { "dependencies": { "locate-path": "^5.0.0", "path-exists": "^4.0.0" } }, "sha512-PpOwAdQ/YlXQ2vj8a3h8IipDuYRi3wceVQQGYWxNINccq40Anw7BlsEXCMbt1Zt+OLA6Fq9suIpIWD0OsnISlw=="],
"fs-extra": ["fs-extra@8.1.0", "", { "dependencies": { "graceful-fs": "^4.2.0", "jsonfile": "^4.0.0", "universalify": "^0.1.0" } }, "sha512-yhlQgA6mnOJUKOsRUFsgJdQCvkKhcz8tlZG5HBQfReYZy46OwLcY+Zia0mtdHsOo9y/hP+CxMN0TU9QxoOtG4g=="],
"fs.realpath": ["fs.realpath@1.0.0", "", {}, "sha512-OO0pH2lK6a0hZnAdau5ItzHPI6pUlvI7jMVnxUQRtw4owF2wk8lOSabtGDCTP4Ggrg2MbGnWO9X8K1t4+fGMDw=="],
@@ -131,6 +156,8 @@
"function-bind": ["function-bind@1.1.2", "", {}, "sha512-7XHNxH7qX9xG5mIwxkhumTox/MIRNcOgDrxWsMt2pAr23WHp6MrRlN7FBSFpCpr+oVO0F744iUgR82nJMfG2SA=="],
"get-caller-file": ["get-caller-file@2.0.5", "", {}, "sha512-DyFP3BM/3YHTQOCUL/w0OZHR0lpKeGrxotcHWcqNEdnltqFwXVfhEBQ94eIo34AfQpo0rGki4cyIiftY06h2Fg=="],
"get-port": ["get-port@3.2.0", "", {}, "sha512-x5UJKlgeUiNT8nyo/AcnwLnZuZNcSjSw0kogRB+Whd1fjjFq4B1hySFxSFWWSn4mIBzg3sRNUDFYc4g5gjPoLg=="],
"glob": ["glob@8.1.0", "", { "dependencies": { "fs.realpath": "^1.0.0", "inflight": "^1.0.4", "inherits": "2", "minimatch": "^5.0.1", "once": "^1.3.0" } }, "sha512-r8hpEjiQEYlF2QU0df3dS+nxxSIreXQS1qRhMJM0Q5NDdR386C7jb7Hwwod8Fgiuex+k0GFjgft18yvxm5XoCQ=="],
@@ -143,6 +170,8 @@
"hash-sum": ["hash-sum@2.0.0", "", {}, "sha512-WdZTbAByD+pHfl/g9QSsBIIwy8IT+EsPiKDs0KNX+zSHhdDLFKdZu0BQHljvO+0QI/BasbMSUa8wYNCZTvhslg=="],
"hash-wasm": ["hash-wasm@4.12.0", "", {}, "sha512-+/2B2rYLb48I/evdOIhP+K/DD2ca2fgBjp6O+GBEnCDk2e4rpeXIK8GvIyRPjTezgmWn9gmKwkQjjx6BtqDHVQ=="],
"hasown": ["hasown@2.0.2", "", { "dependencies": { "function-bind": "^1.1.2" } }, "sha512-0hJU9SCPvmMzIBdZFqNPXWa6dqh7WdH0cII9y+CyS8rG3nL48Bclra9HmKhVVUHyPWNH5Y7xDwAB7bfgSjkUMQ=="],
"ignore": ["ignore@5.3.2", "", {}, "sha512-hsBTNUqQTDwkWtcdYI2i06Y/nUBEsNEDJKjWdigLvegy8kDuJAS8uRlpkkcQpyEXL0Z/pjDy5HBmMjRCJ2gq+g=="],
@@ -157,6 +186,8 @@
"is-extglob": ["is-extglob@2.1.1", "", {}, "sha512-SbKbANkN603Vi4jEZv49LeVJMn4yGwsbzZworEoyEiutsN3nJYdbO36zfhGJ6QEDpOZIFkDtnq5JRxmvl3jsoQ=="],
"is-fullwidth-code-point": ["is-fullwidth-code-point@3.0.0", "", {}, "sha512-zymm5+u+sCsSWyD9qNaejV3DFvhCKclKdizYaJUuHA83RLjb7nSuGnddCHGv0hk+KY7BMAlsWeK4Ueg6EV6XQg=="],
"is-glob": ["is-glob@4.0.3", "", { "dependencies": { "is-extglob": "^2.1.1" } }, "sha512-xelSayHH36ZgE7ZWhli7pW34hNbNl8Ojv5KVmkJD4hBdD3th8Tfk9vYasLM+mXWOZhFkgZfxhLSnrwRr4elSSg=="],
"is-module": ["is-module@1.0.0", "", {}, "sha512-51ypPSPCoTEIN9dy5Oy+h4pShgJmPCygKfyRCISBI+JoWT/2oJvK8QPxmwv7b/p239jXrm9M1mlQbyKJ5A152g=="],
@@ -179,6 +210,8 @@
"local-access": ["local-access@1.1.0", "", {}, "sha512-XfegD5pyTAfb+GY6chk283Ox5z8WexG56OvM06RWLpAc/UHozO8X6xAxEkIitZOtsSMM1Yr3DkHgW5W+onLhCw=="],
"locate-path": ["locate-path@5.0.0", "", { "dependencies": { "p-locate": "^4.1.0" } }, "sha512-t7hw9pI+WvuwNJXwk5zVHpyhIqzg2qTlklJOf0mVxGSbe3Fp2VieZcduNYjaLDoy6p9uGpQEGWG87WpMKlNq8g=="],
"magic-string": ["magic-string@0.27.0", "", { "dependencies": { "@jridgewell/sourcemap-codec": "^1.4.13" } }, "sha512-8UnnX2PeRAPZuN12svgR9j7M1uWMovg/CEnIwIG0LFkXSJJe4PdfUGiTGl8V9bsBHFUtfVINcSyYxd7q+kx9fA=="],
"merge2": ["merge2@1.4.1", "", {}, "sha512-8q7VEgMJW4J8tcfVPy8g09NcQwZdbwFEqhe/WZkoIzjn/3TGDwtOCYtXGxA3O8tPzpczCCDgv+P2P5y00ZJOOg=="],
@@ -205,6 +238,14 @@
"opts": ["opts@2.0.2", "", {}, "sha512-k41FwbcLnlgnFh69f4qdUfvDQ+5vaSDnVPFI/y5XuhKRq97EnVVneO9F1ESVCdiVu4fCS2L8usX3mU331hB7pg=="],
"p-limit": ["p-limit@2.3.0", "", { "dependencies": { "p-try": "^2.0.0" } }, "sha512-//88mFWSJx8lxCzwdAABTJL2MyWB12+eIY7MDL2SqLmAkeKU9qxRvWuSyTjm3FUmpBEMuFfckAIqEaVGUDxb6w=="],
"p-locate": ["p-locate@4.1.0", "", { "dependencies": { "p-limit": "^2.2.0" } }, "sha512-R79ZZ/0wAxKGu3oYMlz8jy/kbhsNrS7SKZ7PxEHBgJ5+F2mtFW2fK2cOtBh1cHYkQsbzFV7I+EoRKe6Yt0oK7A=="],
"p-try": ["p-try@2.2.0", "", {}, "sha512-R4nPAVTAU0B9D35/Gk3uJf/7XYbQcyohSKdvAxIRSNghFl4e71hVoGnBNQz9cWaXxO2I10KTC+3jMdvvoKw6dQ=="],
"path-exists": ["path-exists@4.0.0", "", {}, "sha512-ak9Qy5Q7jYb2Wwcey5Fpvg2KoAc/ZIhLSLOSBmRmygPsGwkVVt0fZa0qrtMz+m6tJTAHfZQ8FnmB4MG4LWy7/w=="],
"path-is-absolute": ["path-is-absolute@1.0.1", "", {}, "sha512-AVbw3UJ2e9bq64vSaS9Am0fje1Pa8pbGqTTsmXfaIiMpnr5DlDhfJOuLj9Sf95ZPVDAUerDfEk88MPmPe7UCQg=="],
"path-parse": ["path-parse@1.0.7", "", {}, "sha512-LDJzPVEEEPR+y48z93A0Ed0yXb8pAByGWo/k5YYdYgpY2/2EsOsksJrq7lOHxryrVOn1ejG6oAp8ahvOIQD8sw=="],
@@ -213,12 +254,20 @@
"picomatch": ["picomatch@4.0.3", "", {}, "sha512-5gTmgEY/sqK6gFXLIsQNH19lWb4ebPDLA4SdLP7dsWkIXHWlG66oPuVvXSGFPppYZz8ZDZq0dYYrbHfBCVUb1Q=="],
"pngjs": ["pngjs@5.0.0", "", {}, "sha512-40QW5YalBNfQo5yRYmiw7Yz6TKKVr3h6970B2YE+3fQpsWcrbj1PzJgxeJ19DRQjhMbKPIuMY8rFaXc8moolVw=="],
"qrcode": ["qrcode@1.5.4", "", { "dependencies": { "dijkstrajs": "^1.0.1", "pngjs": "^5.0.0", "yargs": "^15.3.1" }, "bin": { "qrcode": "bin/qrcode" } }, "sha512-1ca71Zgiu6ORjHqFBDpnSMTR2ReToX4l1Au1VFLyVeBTFavzQnv5JxMFr3ukHVKpSrSA2MCk0lNJSykjUfz7Zg=="],
"queue-microtask": ["queue-microtask@1.2.3", "", {}, "sha512-NuaNSa6flKT5JaSYQzJok04JzTL1CA6aGhv5rfLW3PgqA+M2ChpZQnAC8h8i4ZFkBS8X5RqkDBHA7r4hej3K9A=="],
"randombytes": ["randombytes@2.1.0", "", { "dependencies": { "safe-buffer": "^5.1.0" } }, "sha512-vYl3iOX+4CKUWuxGi9Ukhie6fsqXqS9FE2Zaic4tNFD2N2QQaXOMFbuKK4QmDHC0JO6B1Zp41J0LpT0oR68amQ=="],
"readdirp": ["readdirp@3.6.0", "", { "dependencies": { "picomatch": "^2.2.1" } }, "sha512-hOS089on8RduqdbhvQ5Z37A0ESjsqz6qnRcffsMU3495FuTdqSm+7bhJ29JvIOsBDEEnan5DPu9t3To9VRlMzA=="],
"require-directory": ["require-directory@2.1.1", "", {}, "sha512-fGxEI7+wsG9xrvdjsrlmL22OMTTiHRwAMroiEeMgq8gzoLC/PQr7RsRDSTLUg/bZAZtF+TVIkHc6/4RIKrui+Q=="],
"require-main-filename": ["require-main-filename@2.0.0", "", {}, "sha512-NKN5kMDylKuldxYLSUfrbo5Tuzh4hd+2E8NPPX02mZtn1VuREQToYe/ZdlJy+J3uCpfaiGF05e7B8W0iXbQHmg=="],
"resolve": ["resolve@1.22.10", "", { "dependencies": { "is-core-module": "^2.16.0", "path-parse": "^1.0.7", "supports-preserve-symlinks-flag": "^1.0.0" }, "bin": { "resolve": "bin/resolve" } }, "sha512-NPRy+/ncIMeDlTAsuqwKIiferiawhefFJtkNSW0qZJEqMEb+qBt/77B/jGeeek+F0uOeN05CDa6HXbbIgtVX4w=="],
"resolve.exports": ["resolve.exports@2.0.3", "", {}, "sha512-OcXjMsGdhL4XnbShKpAcSqPMzQoYkYyhbEaeSko47MjRP9NfEQMhZkXL1DoFlt9LWQn4YttrdnV6X2OiyzBi+A=="],
@@ -247,6 +296,8 @@
"serialize-javascript": ["serialize-javascript@6.0.2", "", { "dependencies": { "randombytes": "^2.1.0" } }, "sha512-Saa1xPByTTq2gdeFZYLLo+RFE35NHZkAbqZeWNd3BpzppeVisAqpDjcp8dyf6uIvEqJRd46jemmyA4iFIeVk8g=="],
"set-blocking": ["set-blocking@2.0.0", "", {}, "sha512-KiKBS8AnWGEyLzofFfmvKwpdPzqiy16LvQfK3yv/fVH7Bj13/wl3JSR1J+rfgRE9q7xUJK4qvgS8raSOeLUehw=="],
"sirv": ["sirv@2.0.4", "", { "dependencies": { "@polka/url": "^1.0.0-next.24", "mrmime": "^2.0.0", "totalist": "^3.0.0" } }, "sha512-94Bdh3cC2PKrbgSOUqTiGPWVZeSiXfKOVZNJniWoqrWrRkB1CJzBU3NEbiTsPcYy1lDsANA/THzS+9WBiy5nfQ=="],
"sirv-cli": ["sirv-cli@2.0.2", "", { "dependencies": { "console-clear": "^1.1.0", "get-port": "^3.2.0", "kleur": "^4.1.4", "local-access": "^1.0.1", "sade": "^1.6.0", "semiver": "^1.0.0", "sirv": "^2.0.0", "tinydate": "^1.0.0" }, "bin": { "sirv": "bin.js" } }, "sha512-OtSJDwxsF1NWHc7ps3Sa0s+dPtP15iQNJzfKVz+MxkEo3z72mCD+yu30ct79rPr0CaV1HXSOBp+MIY5uIhHZ1A=="],
@@ -259,6 +310,10 @@
"source-map-support": ["source-map-support@0.5.21", "", { "dependencies": { "buffer-from": "^1.0.0", "source-map": "^0.6.0" } }, "sha512-uBHU3L3czsIyYXKX88fdrGovxdSCoTGDRZ6SYXtSRxLZUzHg5P/66Ht6uoUlHu9EZod+inXhKo3qQgwXUT/y1w=="],
"string-width": ["string-width@4.2.3", "", { "dependencies": { "emoji-regex": "^8.0.0", "is-fullwidth-code-point": "^3.0.0", "strip-ansi": "^6.0.1" } }, "sha512-wKyQRQpjJ0sIp62ErSZdGsjMJWsap5oRNihHhu6G7JVO/9jIB6UyevL+tXuOqrng8j/cxKTWyWUwvSTriiZz/g=="],
"strip-ansi": ["strip-ansi@6.0.1", "", { "dependencies": { "ansi-regex": "^5.0.1" } }, "sha512-Y38VPSHcqkFrCpFnQ9vuSXmquuv5oXOKpGeT6aGrr3o3Gc9AlVa6JBfUSOCnbxGGZF+/0ooI7KrPuUSztUdU5A=="],
"supports-preserve-symlinks-flag": ["supports-preserve-symlinks-flag@1.0.0", "", {}, "sha512-ot0WnXS9fgdkgIcePe6RHNk1WA8+muPa6cSjeR3V8K27q9BB1rTE3R1p7Hv0z1ZyAc8s6Vvv8DIyWf681MAt0w=="],
"svelte": ["svelte@3.59.2", "", {}, "sha512-vzSyuGr3eEoAtT/A6bmajosJZIUWySzY2CzB3w2pgPvnkUjGqlDnsNnA0PMO+mMAhuyMul6C2uuZzY6ELSkzyA=="],
@@ -277,11 +332,19 @@
"universalify": ["universalify@0.1.2", "", {}, "sha512-rBJeI5CXAlmy1pV+617WB9J63U6XcazHHF2f2dbJix4XzpUF0RS3Zbj0FGIOCAva5P/d/GBOYaACQ1w+0azUkg=="],
"which-module": ["which-module@2.0.1", "", {}, "sha512-iBdZ57RDvnOR9AGBhML2vFZf7h8vmBjhoaZqODJBFWHVtKkDmKuHai3cx5PgVMrX5YDNp27AofYbAwctSS+vhQ=="],
"wrap-ansi": ["wrap-ansi@6.2.0", "", { "dependencies": { "ansi-styles": "^4.0.0", "string-width": "^4.1.0", "strip-ansi": "^6.0.0" } }, "sha512-r6lPcBGxZXlIcymEu7InxDMhdW0KDxpLgoFLcguasxCaJ/SOIZwINatK9KY/tf+ZrlywOKU0UDj3ATXUBfxJXA=="],
"wrappy": ["wrappy@1.0.2", "", {}, "sha512-l4Sp/DRseor9wL6EvV2+TuQn63dMkPjZ/sp9XkghTEbV9KlPS1xUsZ3u7/IQO4wxtcFB4bgpQPRcR3QCvezPcQ=="],
"ws": ["ws@7.5.10", "", { "peerDependencies": { "bufferutil": "^4.0.1", "utf-8-validate": "^5.0.2" }, "optionalPeers": ["bufferutil", "utf-8-validate"] }, "sha512-+dbF1tHwZpXcbOJdVOkzLDxZP1ailvSxM6ZweXTegylPny803bFhA+vqBYw4s31NSAk4S2Qz+AKXK9a4wkdjcQ=="],
"@noble/curves/@noble/hashes": ["@noble/hashes@1.3.2", "", {}, "sha512-MVC8EAQp7MvEcm30KWENFjgR+Mkmf+D189XJTkFIlwohU5hcBbn1ZkKq7KVTi2Hme3PMGF390DaL52beVrIihQ=="],
"y18n": ["y18n@4.0.3", "", {}, "sha512-JKhqTOwSrqNA1NY5lSztJ1GrBiUodLMmIZuLiDaMRJ+itFd+ABVE8XBjOvIWL+rSqNDC74LCSFmlb/U4UZ4hJQ=="],
"yargs": ["yargs@15.4.1", "", { "dependencies": { "cliui": "^6.0.0", "decamelize": "^1.2.0", "find-up": "^4.1.0", "get-caller-file": "^2.0.1", "require-directory": "^2.1.1", "require-main-filename": "^2.0.0", "set-blocking": "^2.0.0", "string-width": "^4.2.0", "which-module": "^2.0.0", "y18n": "^4.0.0", "yargs-parser": "^18.1.2" } }, "sha512-aePbxDmcYW++PaqBsJ+HYUFwCdv4LVvdnhBy78E57PIor8/OVvhMrADFFEDh8DHDFRv/O9i3lPhsENjO7QX0+A=="],
"yargs-parser": ["yargs-parser@18.1.3", "", { "dependencies": { "camelcase": "^5.0.0", "decamelize": "^1.2.0" } }, "sha512-o50j0JeToy/4K6OZcaQmW6lyXXKhq7csREXcDwk2omFPJEwUNOVtJKvmDr9EI1fAJZUyZcRF7kxGBWmRXudrCQ=="],
"@scure/bip32/@noble/curves": ["@noble/curves@1.1.0", "", { "dependencies": { "@noble/hashes": "1.3.1" } }, "sha512-091oBExgENk/kGj3AZmtBDMpxQPDtxQABR2B9lb1JbVTs6ytdzZNwvhxQ4MWasRNEzlbEH8jCWFCwhF/Obj5AA=="],
@@ -301,6 +364,8 @@
"micromatch/picomatch": ["picomatch@2.3.1", "", {}, "sha512-JU3teHTNjmE2VCGFzuY8EXzCDVwEqB2a8fsIvwaStHhAWJEeVd1o1QD80CU6+ZdEXXSLbSsuLwJjkCBWqRQUVA=="],
"nostr-tools/@noble/curves": ["@noble/curves@1.2.0", "", { "dependencies": { "@noble/hashes": "1.3.2" } }, "sha512-oYclrNgRaM9SsBUBVbb8M6DTV7ZHRTKugureoYEncY5c65HOmRzvSiTE3y5CYaPYJA/GVkrhXEoF0M3Ya9PMnw=="],
"nostr-tools/@noble/hashes": ["@noble/hashes@1.3.1", "", {}, "sha512-EbqwksQwz9xDRGfDST86whPBgM65E0OH/pCgqW0GBVzO22bNE+NuIbeTb714+IfSjU3aRk47EUvXIb5bTsenKA=="],
"nostr-tools/@scure/base": ["@scure/base@1.1.1", "", {}, "sha512-ZxOhsSyxYwLJj3pLZCefNitxsj093tb2vq90mp2txoYeBqbcjDjqFhyM8eUjq/uFm6zJ+mUuqxlS2FkuSY1MTA=="],
@@ -313,6 +378,8 @@
"globby/glob/minimatch": ["minimatch@3.1.2", "", { "dependencies": { "brace-expansion": "^1.1.7" } }, "sha512-J7p63hRiAjw1NDEww1W7i37+ByIrOWO5XQQAzZ3VOcL0PNybwpfmV/N05zFAzwQ9USyEcX6t3UO+K5aqBQOIHw=="],
"nostr-tools/@noble/curves/@noble/hashes": ["@noble/hashes@1.3.2", "", {}, "sha512-MVC8EAQp7MvEcm30KWENFjgR+Mkmf+D189XJTkFIlwohU5hcBbn1ZkKq7KVTi2Hme3PMGF390DaL52beVrIihQ=="],
"rollup-plugin-svelte/@rollup/pluginutils/picomatch": ["picomatch@2.3.1", "", {}, "sha512-JU3teHTNjmE2VCGFzuY8EXzCDVwEqB2a8fsIvwaStHhAWJEeVd1o1QD80CU6+ZdEXXSLbSsuLwJjkCBWqRQUVA=="],
"globby/glob/minimatch/brace-expansion": ["brace-expansion@1.1.12", "", { "dependencies": { "balanced-match": "^1.0.0", "concat-map": "0.0.1" } }, "sha512-9T9UjW3r0UW5c1Q7GTwllptXwhvYmEzFhzMfZ9H7FQWt+uZePjZPjBP/W1ZEyZ1twGWom5/56TF4lPcqjnDHcg=="],

90
app/web/dist/bundle.css vendored Normal file

File diff suppressed because one or more lines are too long

25
app/web/dist/bundle.js vendored Normal file

File diff suppressed because one or more lines are too long

1
app/web/dist/bundle.js.map vendored Normal file

File diff suppressed because one or more lines are too long

BIN
app/web/dist/favicon.png vendored Normal file

Binary file not shown (new file; 379 KiB).

69
app/web/dist/global.css vendored Normal file
View File

@@ -0,0 +1,69 @@
html,
body {
position: relative;
width: 100%;
height: 100%;
}
body {
color: #333;
margin: 0;
padding: 8px;
box-sizing: border-box;
font-family:
-apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, Oxygen-Sans, Ubuntu,
Cantarell, "Helvetica Neue", sans-serif;
}
a {
color: rgb(0, 100, 200);
text-decoration: none;
}
a:hover {
text-decoration: underline;
}
a:visited {
color: rgb(0, 80, 160);
}
label {
display: block;
}
input,
button,
select,
textarea {
font-family: inherit;
font-size: inherit;
-webkit-padding: 0.4em 0;
padding: 0.4em;
margin: 0 0 0.5em 0;
box-sizing: border-box;
border: 1px solid #ccc;
border-radius: 2px;
}
input:disabled {
color: #ccc;
}
button {
color: #333;
background-color: #f4f4f4;
outline: none;
}
button:disabled {
color: #999;
}
button:not(:disabled):active {
background-color: #ddd;
}
button:focus {
border-color: #666;
}

View File

@@ -3,10 +3,32 @@
<head>
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width,initial-scale=1" />
<meta name="color-scheme" content="light dark" />
<title>ORLY?</title>
<style>
:root {
color-scheme: light dark;
}
html, body {
background-color: #fff;
color: #000;
}
@media (prefers-color-scheme: dark) {
html, body {
background-color: #000;
color: #fff;
}
}
</style>
<link rel="icon" type="image/png" href="/favicon.png" />
<link rel="manifest" href="/manifest.json" />
<link rel="apple-touch-icon" href="/icon-192.png" />
<meta name="theme-color" content="#000000" />
<meta name="apple-mobile-web-app-capable" content="yes" />
<meta name="apple-mobile-web-app-status-bar-style" content="black" />
<link rel="stylesheet" href="/global.css" />
<link rel="stylesheet" href="/bundle.css" />
@@ -14,4 +36,9 @@
</head>
<body></body>
<script>
if ('serviceWorker' in navigator) {
navigator.serviceWorker.register('/sw.js');
}
</script>
</html>

BIN
app/web/dist/orly.png vendored Normal file

Binary file not shown (new file; 514 KiB).

View File

@@ -8,9 +8,13 @@
"name": "svelte-app",
"version": "1.0.0",
"dependencies": {
"applesauce-core": "^4.1.0",
"applesauce-signers": "^4.1.0",
"@noble/curves": "^1.4.0",
"@noble/hashes": "^1.4.0",
"applesauce-core": "^4.4.2",
"applesauce-signers": "^4.2.0",
"hash-wasm": "^4.12.0",
"nostr-tools": "^2.17.0",
"qrcode": "^1.5.3",
"sirv-cli": "^2.0.0"
},
"devDependencies": {
@@ -73,30 +77,27 @@
}
},
"node_modules/@noble/curves": {
"version": "1.2.0",
"version": "1.9.7",
"resolved": "https://registry.npmjs.org/@noble/curves/-/curves-1.9.7.tgz",
"integrity": "sha512-gbKGcRUYIjA3/zCCNaWDciTMFI0dCkvou3TL8Zmy5Nc7sJ47a0jtOeZoTaMxkuqRo9cRhjOdZJXegxYE5FN/xw==",
"license": "MIT",
"dependencies": {
"@noble/hashes": "1.3.2"
"@noble/hashes": "1.8.0"
},
"funding": {
"url": "https://paulmillr.com/funding/"
}
},
"node_modules/@noble/curves/node_modules/@noble/hashes": {
"version": "1.3.2",
"license": "MIT",
"engines": {
"node": ">= 16"
"node": "^14.21.3 || >=16"
},
"funding": {
"url": "https://paulmillr.com/funding/"
}
},
"node_modules/@noble/hashes": {
"version": "1.3.1",
"version": "1.8.0",
"resolved": "https://registry.npmjs.org/@noble/hashes/-/hashes-1.8.0.tgz",
"integrity": "sha512-jCs9ldd7NwzpgXDIf6P3+NrHh9/sD6CQdxHyjQI+h/6rDNo88ypBxxz45UDuZHz9r3tNz7N/VInSVoVdtXEI4A==",
"license": "MIT",
"engines": {
"node": ">= 16"
"node": "^14.21.3 || >=16"
},
"funding": {
"url": "https://paulmillr.com/funding/"
@@ -365,6 +366,30 @@
"node": ">=0.4.0"
}
},
"node_modules/ansi-regex": {
"version": "5.0.1",
"resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-5.0.1.tgz",
"integrity": "sha512-quJQXlTSUGL2LH9SUXo8VwsY4soanhgo6LNSm84E1LBcE8s3O0wpdiRzyR9z/ZZJMlMWv37qOOb9pdJlMUEKFQ==",
"license": "MIT",
"engines": {
"node": ">=8"
}
},
"node_modules/ansi-styles": {
"version": "4.3.0",
"resolved": "https://registry.npmjs.org/ansi-styles/-/ansi-styles-4.3.0.tgz",
"integrity": "sha512-zbB9rCJAT1rbjiVDb2hqKFHNYLxgtk8NURxZ3IZwD3F6NtxbXZQCnnSi1Lkx+IDohdPlFp222wVALIheZJQSEg==",
"license": "MIT",
"dependencies": {
"color-convert": "^2.0.1"
},
"engines": {
"node": ">=8"
},
"funding": {
"url": "https://github.com/chalk/ansi-styles?sponsor=1"
}
},
"node_modules/anymatch": {
"version": "3.1.3",
"dev": true,
@@ -389,9 +414,9 @@
}
},
"node_modules/applesauce-core": {
"version": "4.1.0",
"resolved": "https://registry.npmjs.org/applesauce-core/-/applesauce-core-4.1.0.tgz",
"integrity": "sha512-vFOHfqWW4DJfvPkMYLYNiy2ozO2IF+ZNwetGqaLuPjgE1Iwu4trZmG3GJUH+lO1Oq1N4e/OQ/EcotJoEBEiW7Q==",
"version": "4.4.2",
"resolved": "https://registry.npmjs.org/applesauce-core/-/applesauce-core-4.4.2.tgz",
"integrity": "sha512-zuZB74Pp28UGM4e8DWbN1atR95xL7ODENvjkaGGnvAjIKvfdgMznU7m9gLxr/Hu+IHOmVbbd4YxwNmKBzCWhHQ==",
"license": "MIT",
"dependencies": {
"@noble/hashes": "^1.7.1",
@@ -409,18 +434,6 @@
"url": "lightning:nostrudel@geyser.fund"
}
},
"node_modules/applesauce-core/node_modules/@noble/hashes": {
"version": "1.8.0",
"resolved": "https://registry.npmjs.org/@noble/hashes/-/hashes-1.8.0.tgz",
"integrity": "sha512-jCs9ldd7NwzpgXDIf6P3+NrHh9/sD6CQdxHyjQI+h/6rDNo88ypBxxz45UDuZHz9r3tNz7N/VInSVoVdtXEI4A==",
"license": "MIT",
"engines": {
"node": "^14.21.3 || >=16"
},
"funding": {
"url": "https://paulmillr.com/funding/"
}
},
"node_modules/applesauce-core/node_modules/@scure/base": {
"version": "1.2.6",
"resolved": "https://registry.npmjs.org/@scure/base/-/base-1.2.6.tgz",
@@ -431,15 +444,15 @@
}
},
"node_modules/applesauce-signers": {
"version": "4.1.0",
"resolved": "https://registry.npmjs.org/applesauce-signers/-/applesauce-signers-4.1.0.tgz",
"integrity": "sha512-S+nTkAt1CAGhalwI7warLTINsxxjBpS3NqbViz6LVy1ZrzEqaNirlalX+rbCjxjRrvIGhYV+rszkxDFhCYbPkg==",
"version": "4.2.0",
"resolved": "https://registry.npmjs.org/applesauce-signers/-/applesauce-signers-4.2.0.tgz",
"integrity": "sha512-celexNd+aLt6/vhf72XXw2oAk8ohjna+aWEg/Z2liqPwP+kbVjnqq4Z1RXvt79QQbTIQbXYGWqervXWLE8HmHg==",
"license": "MIT",
"dependencies": {
"@noble/hashes": "^1.7.1",
"@noble/secp256k1": "^1.7.1",
"@scure/base": "^1.2.4",
"applesauce-core": "^4.1.0",
"applesauce-core": "^4.2.0",
"debug": "^4.4.0",
"nanoid": "^5.0.9",
"nostr-tools": "~2.17",
@@ -450,18 +463,6 @@
"url": "lightning:nostrudel@geyser.fund"
}
},
"node_modules/applesauce-signers/node_modules/@noble/hashes": {
"version": "1.8.0",
"resolved": "https://registry.npmjs.org/@noble/hashes/-/hashes-1.8.0.tgz",
"integrity": "sha512-jCs9ldd7NwzpgXDIf6P3+NrHh9/sD6CQdxHyjQI+h/6rDNo88ypBxxz45UDuZHz9r3tNz7N/VInSVoVdtXEI4A==",
"license": "MIT",
"engines": {
"node": "^14.21.3 || >=16"
},
"funding": {
"url": "https://paulmillr.com/funding/"
}
},
"node_modules/applesauce-signers/node_modules/@noble/secp256k1": {
"version": "1.7.2",
"resolved": "https://registry.npmjs.org/@noble/secp256k1/-/secp256k1-1.7.2.tgz",
@@ -533,6 +534,15 @@
"dev": true,
"license": "MIT"
},
"node_modules/camelcase": {
"version": "5.3.1",
"resolved": "https://registry.npmjs.org/camelcase/-/camelcase-5.3.1.tgz",
"integrity": "sha512-L28STB170nwWS63UjtlEOE3dldQApaJXZkOI1uMFfzf3rRuPegHaHesyee+YxQ+W6SvRDQV6UrdOdRiR153wJg==",
"license": "MIT",
"engines": {
"node": ">=6"
}
},
"node_modules/chokidar": {
"version": "3.6.0",
"dev": true,
@@ -556,6 +566,35 @@
"fsevents": "~2.3.2"
}
},
"node_modules/cliui": {
"version": "6.0.0",
"resolved": "https://registry.npmjs.org/cliui/-/cliui-6.0.0.tgz",
"integrity": "sha512-t6wbgtoCXvAzst7QgXxJYqPt0usEfbgQdftEPbLL/cvv6HPE5VgvqCuAIDR0NgU52ds6rFwqrgakNLrHEjCbrQ==",
"license": "ISC",
"dependencies": {
"string-width": "^4.2.0",
"strip-ansi": "^6.0.0",
"wrap-ansi": "^6.2.0"
}
},
"node_modules/color-convert": {
"version": "2.0.1",
"resolved": "https://registry.npmjs.org/color-convert/-/color-convert-2.0.1.tgz",
"integrity": "sha512-RRECPsj7iu/xb5oKYcsFHSppFNnsj/52OVTRKb4zP5onXwVF3zVmmToNcOfGC+CRDpfK/U584fMg38ZHCaElKQ==",
"license": "MIT",
"dependencies": {
"color-name": "~1.1.4"
},
"engines": {
"node": ">=7.0.0"
}
},
"node_modules/color-name": {
"version": "1.1.4",
"resolved": "https://registry.npmjs.org/color-name/-/color-name-1.1.4.tgz",
"integrity": "sha512-dOy+3AuW3a2wNbZHIuMZpTcgjGuLU/uBL/ubcZF9OXbDo8ff4O8yVp5Bf0efS8uEoYo5q4Fx7dY9OgQGXgAsQA==",
"license": "MIT"
},
"node_modules/colorette": {
"version": "1.4.0",
"resolved": "https://registry.npmjs.org/colorette/-/colorette-1.4.0.tgz",
@@ -604,6 +643,15 @@
}
}
},
"node_modules/decamelize": {
"version": "1.2.0",
"resolved": "https://registry.npmjs.org/decamelize/-/decamelize-1.2.0.tgz",
"integrity": "sha512-z2S+W9X73hAUUki+N+9Za2lBlun89zigOyGrsax+KUQ6wKW4ZoWpEYBkGhQjwAjjDCkWxhY0VKEhk8wzY7F5cA==",
"license": "MIT",
"engines": {
"node": ">=0.10.0"
}
},
"node_modules/deepmerge": {
"version": "4.3.1",
"dev": true,
@@ -612,6 +660,12 @@
"node": ">=0.10.0"
}
},
"node_modules/dijkstrajs": {
"version": "1.0.3",
"resolved": "https://registry.npmjs.org/dijkstrajs/-/dijkstrajs-1.0.3.tgz",
"integrity": "sha512-qiSlmBq9+BCdCA/L46dw8Uy93mloxsPSbwnm5yrKn2vMPiy8KyAskTF6zuV/j5BMsmOGZDPs7KjU+mjb670kfA==",
"license": "MIT"
},
"node_modules/dir-glob": {
"version": "3.0.1",
"resolved": "https://registry.npmjs.org/dir-glob/-/dir-glob-3.0.1.tgz",
@@ -625,6 +679,12 @@
"node": ">=8"
}
},
"node_modules/emoji-regex": {
"version": "8.0.0",
"resolved": "https://registry.npmjs.org/emoji-regex/-/emoji-regex-8.0.0.tgz",
"integrity": "sha512-MSjYzcWNOA0ewAHpz0MxpYFvwg6yjy1NG3xteoqz644VCo/RPgnr1/GGt+ic3iJTzQ8Eu3TdM14SawnVUmGE6A==",
"license": "MIT"
},
"node_modules/estree-walker": {
"version": "2.0.2",
"dev": true,
@@ -674,6 +734,19 @@
"node": ">=8"
}
},
"node_modules/find-up": {
"version": "4.1.0",
"resolved": "https://registry.npmjs.org/find-up/-/find-up-4.1.0.tgz",
"integrity": "sha512-PpOwAdQ/YlXQ2vj8a3h8IipDuYRi3wceVQQGYWxNINccq40Anw7BlsEXCMbt1Zt+OLA6Fq9suIpIWD0OsnISlw==",
"license": "MIT",
"dependencies": {
"locate-path": "^5.0.0",
"path-exists": "^4.0.0"
},
"engines": {
"node": ">=8"
}
},
"node_modules/fs-extra": {
"version": "8.1.0",
"resolved": "https://registry.npmjs.org/fs-extra/-/fs-extra-8.1.0.tgz",
@@ -702,6 +775,15 @@
"url": "https://github.com/sponsors/ljharb"
}
},
"node_modules/get-caller-file": {
"version": "2.0.5",
"resolved": "https://registry.npmjs.org/get-caller-file/-/get-caller-file-2.0.5.tgz",
"integrity": "sha512-DyFP3BM/3YHTQOCUL/w0OZHR0lpKeGrxotcHWcqNEdnltqFwXVfhEBQ94eIo34AfQpo0rGki4cyIiftY06h2Fg==",
"license": "ISC",
"engines": {
"node": "6.* || 8.* || >= 10.*"
}
},
"node_modules/get-port": {
"version": "3.2.0",
"license": "MIT",
@@ -817,6 +899,12 @@
"integrity": "sha512-WdZTbAByD+pHfl/g9QSsBIIwy8IT+EsPiKDs0KNX+zSHhdDLFKdZu0BQHljvO+0QI/BasbMSUa8wYNCZTvhslg==",
"license": "MIT"
},
"node_modules/hash-wasm": {
"version": "4.12.0",
"resolved": "https://registry.npmjs.org/hash-wasm/-/hash-wasm-4.12.0.tgz",
"integrity": "sha512-+/2B2rYLb48I/evdOIhP+K/DD2ca2fgBjp6O+GBEnCDk2e4rpeXIK8GvIyRPjTezgmWn9gmKwkQjjx6BtqDHVQ==",
"license": "MIT"
},
"node_modules/hasown": {
"version": "2.0.2",
"dev": true,
@@ -885,6 +973,15 @@
"node": ">=0.10.0"
}
},
"node_modules/is-fullwidth-code-point": {
"version": "3.0.0",
"resolved": "https://registry.npmjs.org/is-fullwidth-code-point/-/is-fullwidth-code-point-3.0.0.tgz",
"integrity": "sha512-zymm5+u+sCsSWyD9qNaejV3DFvhCKclKdizYaJUuHA83RLjb7nSuGnddCHGv0hk+KY7BMAlsWeK4Ueg6EV6XQg==",
"license": "MIT",
"engines": {
"node": ">=8"
}
},
"node_modules/is-glob": {
"version": "4.0.3",
"dev": true,
@@ -982,6 +1079,18 @@
"node": ">=6"
}
},
"node_modules/locate-path": {
"version": "5.0.0",
"resolved": "https://registry.npmjs.org/locate-path/-/locate-path-5.0.0.tgz",
"integrity": "sha512-t7hw9pI+WvuwNJXwk5zVHpyhIqzg2qTlklJOf0mVxGSbe3Fp2VieZcduNYjaLDoy6p9uGpQEGWG87WpMKlNq8g==",
"license": "MIT",
"dependencies": {
"p-locate": "^4.1.0"
},
"engines": {
"node": ">=8"
}
},
"node_modules/magic-string": {
"version": "0.27.0",
"dev": true,
@@ -1110,6 +1219,42 @@
}
}
},
"node_modules/nostr-tools/node_modules/@noble/curves": {
"version": "1.2.0",
"resolved": "https://registry.npmjs.org/@noble/curves/-/curves-1.2.0.tgz",
"integrity": "sha512-oYclrNgRaM9SsBUBVbb8M6DTV7ZHRTKugureoYEncY5c65HOmRzvSiTE3y5CYaPYJA/GVkrhXEoF0M3Ya9PMnw==",
"license": "MIT",
"dependencies": {
"@noble/hashes": "1.3.2"
},
"funding": {
"url": "https://paulmillr.com/funding/"
}
},
"node_modules/nostr-tools/node_modules/@noble/curves/node_modules/@noble/hashes": {
"version": "1.3.2",
"resolved": "https://registry.npmjs.org/@noble/hashes/-/hashes-1.3.2.tgz",
"integrity": "sha512-MVC8EAQp7MvEcm30KWENFjgR+Mkmf+D189XJTkFIlwohU5hcBbn1ZkKq7KVTi2Hme3PMGF390DaL52beVrIihQ==",
"license": "MIT",
"engines": {
"node": ">= 16"
},
"funding": {
"url": "https://paulmillr.com/funding/"
}
},
"node_modules/nostr-tools/node_modules/@noble/hashes": {
"version": "1.3.1",
"resolved": "https://registry.npmjs.org/@noble/hashes/-/hashes-1.3.1.tgz",
"integrity": "sha512-EbqwksQwz9xDRGfDST86whPBgM65E0OH/pCgqW0GBVzO22bNE+NuIbeTb714+IfSjU3aRk47EUvXIb5bTsenKA==",
"license": "MIT",
"engines": {
"node": ">= 16"
},
"funding": {
"url": "https://paulmillr.com/funding/"
}
},
"node_modules/nostr-wasm": {
"version": "0.1.0",
"license": "MIT"
@@ -1127,6 +1272,51 @@
"dev": true,
"license": "BSD-2-Clause"
},
"node_modules/p-limit": {
"version": "2.3.0",
"resolved": "https://registry.npmjs.org/p-limit/-/p-limit-2.3.0.tgz",
"integrity": "sha512-//88mFWSJx8lxCzwdAABTJL2MyWB12+eIY7MDL2SqLmAkeKU9qxRvWuSyTjm3FUmpBEMuFfckAIqEaVGUDxb6w==",
"license": "MIT",
"dependencies": {
"p-try": "^2.0.0"
},
"engines": {
"node": ">=6"
},
"funding": {
"url": "https://github.com/sponsors/sindresorhus"
}
},
"node_modules/p-locate": {
"version": "4.1.0",
"resolved": "https://registry.npmjs.org/p-locate/-/p-locate-4.1.0.tgz",
"integrity": "sha512-R79ZZ/0wAxKGu3oYMlz8jy/kbhsNrS7SKZ7PxEHBgJ5+F2mtFW2fK2cOtBh1cHYkQsbzFV7I+EoRKe6Yt0oK7A==",
"license": "MIT",
"dependencies": {
"p-limit": "^2.2.0"
},
"engines": {
"node": ">=8"
}
},
"node_modules/p-try": {
"version": "2.2.0",
"resolved": "https://registry.npmjs.org/p-try/-/p-try-2.2.0.tgz",
"integrity": "sha512-R4nPAVTAU0B9D35/Gk3uJf/7XYbQcyohSKdvAxIRSNghFl4e71hVoGnBNQz9cWaXxO2I10KTC+3jMdvvoKw6dQ==",
"license": "MIT",
"engines": {
"node": ">=6"
}
},
"node_modules/path-exists": {
"version": "4.0.0",
"resolved": "https://registry.npmjs.org/path-exists/-/path-exists-4.0.0.tgz",
"integrity": "sha512-ak9Qy5Q7jYb2Wwcey5Fpvg2KoAc/ZIhLSLOSBmRmygPsGwkVVt0fZa0qrtMz+m6tJTAHfZQ8FnmB4MG4LWy7/w==",
"license": "MIT",
"engines": {
"node": ">=8"
}
},
"node_modules/path-is-absolute": {
"version": "1.0.1",
"resolved": "https://registry.npmjs.org/path-is-absolute/-/path-is-absolute-1.0.1.tgz",
@@ -1163,6 +1353,32 @@
"url": "https://github.com/sponsors/jonschlinkert"
}
},
"node_modules/pngjs": {
"version": "5.0.0",
"resolved": "https://registry.npmjs.org/pngjs/-/pngjs-5.0.0.tgz",
"integrity": "sha512-40QW5YalBNfQo5yRYmiw7Yz6TKKVr3h6970B2YE+3fQpsWcrbj1PzJgxeJ19DRQjhMbKPIuMY8rFaXc8moolVw==",
"license": "MIT",
"engines": {
"node": ">=10.13.0"
}
},
"node_modules/qrcode": {
"version": "1.5.4",
"resolved": "https://registry.npmjs.org/qrcode/-/qrcode-1.5.4.tgz",
"integrity": "sha512-1ca71Zgiu6ORjHqFBDpnSMTR2ReToX4l1Au1VFLyVeBTFavzQnv5JxMFr3ukHVKpSrSA2MCk0lNJSykjUfz7Zg==",
"license": "MIT",
"dependencies": {
"dijkstrajs": "^1.0.1",
"pngjs": "^5.0.0",
"yargs": "^15.3.1"
},
"bin": {
"qrcode": "bin/qrcode"
},
"engines": {
"node": ">=10.13.0"
}
},
"node_modules/queue-microtask": {
"version": "1.2.3",
"resolved": "https://registry.npmjs.org/queue-microtask/-/queue-microtask-1.2.3.tgz",
@@ -1214,6 +1430,21 @@
"url": "https://github.com/sponsors/jonschlinkert"
}
},
"node_modules/require-directory": {
"version": "2.1.1",
"resolved": "https://registry.npmjs.org/require-directory/-/require-directory-2.1.1.tgz",
"integrity": "sha512-fGxEI7+wsG9xrvdjsrlmL22OMTTiHRwAMroiEeMgq8gzoLC/PQr7RsRDSTLUg/bZAZtF+TVIkHc6/4RIKrui+Q==",
"license": "MIT",
"engines": {
"node": ">=0.10.0"
}
},
"node_modules/require-main-filename": {
"version": "2.0.0",
"resolved": "https://registry.npmjs.org/require-main-filename/-/require-main-filename-2.0.0.tgz",
"integrity": "sha512-NKN5kMDylKuldxYLSUfrbo5Tuzh4hd+2E8NPPX02mZtn1VuREQToYe/ZdlJy+J3uCpfaiGF05e7B8W0iXbQHmg==",
"license": "ISC"
},
"node_modules/resolve": {
"version": "1.22.10",
"dev": true,
@@ -1425,6 +1656,12 @@
"randombytes": "^2.1.0"
}
},
"node_modules/set-blocking": {
"version": "2.0.0",
"resolved": "https://registry.npmjs.org/set-blocking/-/set-blocking-2.0.0.tgz",
"integrity": "sha512-KiKBS8AnWGEyLzofFfmvKwpdPzqiy16LvQfK3yv/fVH7Bj13/wl3JSR1J+rfgRE9q7xUJK4qvgS8raSOeLUehw==",
"license": "ISC"
},
"node_modules/sirv": {
"version": "2.0.4",
"license": "MIT",
@@ -1489,6 +1726,32 @@
"source-map": "^0.6.0"
}
},
"node_modules/string-width": {
"version": "4.2.3",
"resolved": "https://registry.npmjs.org/string-width/-/string-width-4.2.3.tgz",
"integrity": "sha512-wKyQRQpjJ0sIp62ErSZdGsjMJWsap5oRNihHhu6G7JVO/9jIB6UyevL+tXuOqrng8j/cxKTWyWUwvSTriiZz/g==",
"license": "MIT",
"dependencies": {
"emoji-regex": "^8.0.0",
"is-fullwidth-code-point": "^3.0.0",
"strip-ansi": "^6.0.1"
},
"engines": {
"node": ">=8"
}
},
"node_modules/strip-ansi": {
"version": "6.0.1",
"resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-6.0.1.tgz",
"integrity": "sha512-Y38VPSHcqkFrCpFnQ9vuSXmquuv5oXOKpGeT6aGrr3o3Gc9AlVa6JBfUSOCnbxGGZF+/0ooI7KrPuUSztUdU5A==",
"license": "MIT",
"dependencies": {
"ansi-regex": "^5.0.1"
},
"engines": {
"node": ">=8"
}
},
"node_modules/supports-preserve-symlinks-flag": {
"version": "1.0.0",
"dev": true,
@@ -1573,6 +1836,26 @@
"node": ">= 4.0.0"
}
},
"node_modules/which-module": {
"version": "2.0.1",
"resolved": "https://registry.npmjs.org/which-module/-/which-module-2.0.1.tgz",
"integrity": "sha512-iBdZ57RDvnOR9AGBhML2vFZf7h8vmBjhoaZqODJBFWHVtKkDmKuHai3cx5PgVMrX5YDNp27AofYbAwctSS+vhQ==",
"license": "ISC"
},
"node_modules/wrap-ansi": {
"version": "6.2.0",
"resolved": "https://registry.npmjs.org/wrap-ansi/-/wrap-ansi-6.2.0.tgz",
"integrity": "sha512-r6lPcBGxZXlIcymEu7InxDMhdW0KDxpLgoFLcguasxCaJ/SOIZwINatK9KY/tf+ZrlywOKU0UDj3ATXUBfxJXA==",
"license": "MIT",
"dependencies": {
"ansi-styles": "^4.0.0",
"string-width": "^4.1.0",
"strip-ansi": "^6.0.0"
},
"engines": {
"node": ">=8"
}
},
"node_modules/wrappy": {
"version": "1.0.2",
"dev": true,
@@ -1597,6 +1880,47 @@
"optional": true
}
}
},
"node_modules/y18n": {
"version": "4.0.3",
"resolved": "https://registry.npmjs.org/y18n/-/y18n-4.0.3.tgz",
"integrity": "sha512-JKhqTOwSrqNA1NY5lSztJ1GrBiUodLMmIZuLiDaMRJ+itFd+ABVE8XBjOvIWL+rSqNDC74LCSFmlb/U4UZ4hJQ==",
"license": "ISC"
},
"node_modules/yargs": {
"version": "15.4.1",
"resolved": "https://registry.npmjs.org/yargs/-/yargs-15.4.1.tgz",
"integrity": "sha512-aePbxDmcYW++PaqBsJ+HYUFwCdv4LVvdnhBy78E57PIor8/OVvhMrADFFEDh8DHDFRv/O9i3lPhsENjO7QX0+A==",
"license": "MIT",
"dependencies": {
"cliui": "^6.0.0",
"decamelize": "^1.2.0",
"find-up": "^4.1.0",
"get-caller-file": "^2.0.1",
"require-directory": "^2.1.1",
"require-main-filename": "^2.0.0",
"set-blocking": "^2.0.0",
"string-width": "^4.2.0",
"which-module": "^2.0.0",
"y18n": "^4.0.0",
"yargs-parser": "^18.1.2"
},
"engines": {
"node": ">=8"
}
},
"node_modules/yargs-parser": {
"version": "18.1.3",
"resolved": "https://registry.npmjs.org/yargs-parser/-/yargs-parser-18.1.3.tgz",
"integrity": "sha512-o50j0JeToy/4K6OZcaQmW6lyXXKhq7csREXcDwk2omFPJEwUNOVtJKvmDr9EI1fAJZUyZcRF7kxGBWmRXudrCQ==",
"license": "ISC",
"dependencies": {
"camelcase": "^5.0.0",
"decamelize": "^1.2.0"
},
"engines": {
"node": ">=6"
}
}
}
}

View File

@@ -4,9 +4,11 @@
"private": true,
"type": "module",
"scripts": {
"fetch-kinds": "node scripts/fetch-kinds.js",
"prebuild": "npm run fetch-kinds",
"build": "rollup -c",
"dev": "rollup -c -w",
"start": "sirv public --no-clear"
"start": "sirv public --no-clear --single"
},
"devDependencies": {
"@rollup/plugin-commonjs": "^24.0.0",
@@ -20,9 +22,13 @@
"svelte": "^3.55.0"
},
"dependencies": {
"applesauce-core": "^4.1.0",
"applesauce-signers": "^4.1.0",
"@noble/curves": "^1.4.0",
"@noble/hashes": "^1.4.0",
"applesauce-core": "^4.4.2",
"applesauce-signers": "^4.2.0",
"hash-wasm": "^4.12.0",
"nostr-tools": "^2.17.0",
"qrcode": "^1.5.3",
"sirv-cli": "^2.0.0"
}
}

BIN
app/web/public/icon-192.png Normal file

Binary file not shown (new file; 36 KiB).

BIN
app/web/public/icon-512.png Normal file

Binary file not shown (new file; 224 KiB).

View File

@@ -3,10 +3,32 @@
<head>
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width,initial-scale=1" />
<meta name="color-scheme" content="light dark" />
<title>ORLY?</title>
<style>
:root {
color-scheme: light dark;
}
html, body {
background-color: #fff;
color: #000;
}
@media (prefers-color-scheme: dark) {
html, body {
background-color: #000;
color: #fff;
}
}
</style>
<link rel="icon" type="image/png" href="/favicon.png" />
<link rel="manifest" href="/manifest.json" />
<link rel="apple-touch-icon" href="/icon-192.png" />
<meta name="theme-color" content="#000000" />
<meta name="apple-mobile-web-app-capable" content="yes" />
<meta name="apple-mobile-web-app-status-bar-style" content="black" />
<link rel="stylesheet" href="/global.css" />
<link rel="stylesheet" href="/bundle.css" />
@@ -14,4 +36,9 @@
</head>
<body></body>
<script>
if ('serviceWorker' in navigator) {
navigator.serviceWorker.register('/sw.js');
}
</script>
</html>

View File

@@ -0,0 +1,39 @@
{
"id": "/",
"name": "ORLY Nostr Relay",
"short_name": "ORLY",
"description": "High-performance Nostr relay",
"display": "standalone",
"orientation": "any",
"start_url": "/",
"scope": "/",
"theme_color": "#000000",
"background_color": "#000000",
"categories": ["utilities", "social"],
"icons": [
{
"src": "/icon-192.png",
"sizes": "192x192",
"type": "image/png",
"purpose": "any"
},
{
"src": "/icon-512.png",
"sizes": "512x512",
"type": "image/png",
"purpose": "any"
},
{
"src": "/icon-192.png",
"sizes": "192x192",
"type": "image/png",
"purpose": "maskable"
},
{
"src": "/icon-512.png",
"sizes": "512x512",
"type": "image/png",
"purpose": "maskable"
}
]
}

95
app/web/public/sw.js Normal file
View File

@@ -0,0 +1,95 @@
const CACHE_VERSION = 'orly-v1';
const STATIC_ASSETS = [
'/',
'/index.html',
'/bundle.js',
'/bundle.css',
'/global.css',
'/favicon.png',
'/icon-192.png',
'/icon-512.png',
'/orly.png'
];
self.addEventListener('install', (event) => {
event.waitUntil(
caches.open(CACHE_VERSION).then((cache) => {
return cache.addAll(STATIC_ASSETS);
})
);
self.skipWaiting();
});
self.addEventListener('activate', (event) => {
event.waitUntil(
caches.keys().then((cacheNames) => {
return Promise.all(
cacheNames
.filter((name) => name !== CACHE_VERSION)
.map((name) => caches.delete(name))
);
})
);
self.clients.claim();
});
self.addEventListener('fetch', (event) => {
const url = new URL(event.request.url);
// Skip WebSocket requests
if (url.protocol === 'ws:' || url.protocol === 'wss:') {
return;
}
// Skip non-GET requests
if (event.request.method !== 'GET') {
return;
}
// API calls: network-first with cache fallback
if (url.pathname.startsWith('/api/')) {
event.respondWith(
fetch(event.request)
.then((response) => {
if (response.ok) {
const clone = response.clone();
caches.open(CACHE_VERSION).then((cache) => {
cache.put(event.request, clone);
});
}
return response;
})
.catch(() => {
return caches.match(event.request);
})
);
return;
}
// Static assets: cache-first, revalidating in the background (stale-while-revalidate); network only when uncached
event.respondWith(
caches.match(event.request).then((cached) => {
if (cached) {
// Update cache in background
fetch(event.request).then((response) => {
if (response.ok) {
caches.open(CACHE_VERSION).then((cache) => {
cache.put(event.request, response);
});
}
}).catch(() => {});
return cached;
}
return fetch(event.request).then((response) => {
if (response.ok) {
const clone = response.clone();
caches.open(CACHE_VERSION).then((cache) => {
cache.put(event.request, clone);
});
}
return response;
});
})
);
});

View File

@@ -9,6 +9,10 @@ import copy from "rollup-plugin-copy";
const production = !process.env.ROLLUP_WATCH;
// In dev mode, output to public/ so sirv can serve it
// In production, output to dist/ for embedding
const outputDir = production ? "dist" : "public";
function serve() {
let server;
@@ -36,7 +40,7 @@ export default {
sourcemap: true,
format: "iife",
name: "app",
file: "dist/bundle.js",
file: `${outputDir}/bundle.js`,
},
plugins: [
svelte({
@@ -73,14 +77,17 @@ export default {
// instead of npm run dev), minify
production && terser(),
// Copy static files from public to dist
copy({
// Copy static files from public to dist (only in production)
production && copy({
targets: [
{ src: 'public/index.html', dest: 'dist' },
{ src: 'public/global.css', dest: 'dist' },
{ src: 'public/favicon.png', dest: 'dist' },
{ src: 'public/orly.png', dest: 'dist' },
{ src: 'public/orly-favicon.png', dest: 'dist' }
{ src: 'public/manifest.json', dest: 'dist' },
{ src: 'public/sw.js', dest: 'dist' },
{ src: 'public/icon-192.png', dest: 'dist' },
{ src: 'public/icon-512.png', dest: 'dist' }
]
}),
],
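
The "for embedding" comment refers to baking the production dist/ output into the relay binary. A minimal sketch of that pattern using go:embed, with an index.html fallback mirroring the --single flag passed to sirv in package.json; the package name, file layout, and wiring into app/ are assumptions, not the project's actual code:

package web

import (
	"embed"
	"io/fs"
	"net/http"
	"strings"
)

// Assumes the rollup production build has populated ./dist before the Go build.
//go:embed dist
var distFS embed.FS

// Handler serves the embedded bundle; unknown paths fall back to
// index.html so client-side routes resolve (the counterpart of
// sirv's --single flag used in dev).
func Handler() http.Handler {
	sub, err := fs.Sub(distFS, "dist")
	if err != nil {
		panic(err)
	}
	files := http.FileServer(http.FS(sub))
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		name := strings.TrimPrefix(r.URL.Path, "/")
		if name == "" {
			name = "index.html"
		}
		if _, err := fs.Stat(sub, name); err != nil {
			r.URL.Path = "/" // SPA fallback to the app shell
		}
		files.ServeHTTP(w, r)
	})
}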

View File

@@ -0,0 +1,233 @@
#!/usr/bin/env node
/**
* Fetches kinds.json from the nostr library and generates eventKinds.js
* Run: node scripts/fetch-kinds.js
*/
import { fileURLToPath } from 'url';
import { dirname, join } from 'path';
import { writeFileSync, existsSync } from 'fs';
const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);
const KINDS_URL = 'https://git.mleku.dev/mleku/nostr/raw/branch/main/encoders/kind/kinds.json';
const OUTPUT_PATH = join(__dirname, '..', 'src', 'eventKinds.js');
async function fetchKinds() {
console.log(`Fetching kinds from ${KINDS_URL}...`);
try {
// fetch() has no timeout option; bound the request with AbortSignal.timeout
const response = await fetch(KINDS_URL, { signal: AbortSignal.timeout(10000) });
if (!response.ok) {
throw new Error(`HTTP ${response.status} ${response.statusText}`);
}
const data = await response.json();
console.log(`Fetched ${Object.keys(data.kinds).length} kinds (version: ${data.version})`);
return data;
} catch (error) {
// Check if we have an existing eventKinds.js we can use
if (existsSync(OUTPUT_PATH)) {
console.warn(`Warning: Could not fetch kinds.json (${error.message})`);
console.log(`Using existing ${OUTPUT_PATH}`);
return null; // Signal to skip generation
}
throw new Error(`Failed to fetch kinds.json and no existing file: ${error.message}`);
}
}
function generateEventKinds(data) {
  const kinds = [];
  for (const [kindNum, info] of Object.entries(data.kinds)) {
    const k = parseInt(kindNum, 10);
    // Determine classification
    let isReplaceable = false;
    let isAddressable = false;
    let isEphemeral = false;
    if (info.classification === 'replaceable' || k === 0 || k === 3 ||
        (k >= data.ranges.replaceable.start && k < data.ranges.replaceable.end)) {
      isReplaceable = true;
    } else if (info.classification === 'parameterized' ||
        (k >= data.ranges.parameterized.start && k <= data.ranges.parameterized.end)) {
      isAddressable = true;
    } else if (info.classification === 'ephemeral' ||
        (k >= data.ranges.ephemeral.start && k < data.ranges.ephemeral.end)) {
      isEphemeral = true;
    }
    const entry = {
      kind: k,
      name: info.name,
      description: info.description,
      nip: info.nip || null,
    };
    if (isReplaceable) entry.isReplaceable = true;
    if (isAddressable) entry.isAddressable = true;
    if (isEphemeral) entry.isEphemeral = true;
    if (info.deprecated) entry.deprecated = true;
    if (info.spec) entry.spec = info.spec;
    // Add basic template
    entry.template = {
      kind: k,
      content: "",
      tags: []
    };
    // Add d tag for addressable events
    if (isAddressable) {
      entry.template.tags = [["d", "identifier"]];
    }
    kinds.push(entry);
  }
  // Sort by kind number
  kinds.sort((a, b) => a.kind - b.kind);
  return kinds;
}
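Fed that data, generateEventKinds emits one entry per kind; an addressable kind, for example, comes out like this (name and NIP illustrative, while the flags and template follow directly from the code above):

{
  kind: 30023,
  name: "Long-form Content",
  description: "...",
  nip: "23",
  isAddressable: true,
  template: { kind: 30023, content: "", tags: [["d", "identifier"]] }
}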
function generateJS(kinds, data) {
  return `/**
 * Nostr Event Kinds Database
 * Auto-generated from ${KINDS_URL}
 * Version: ${data.version}
 * Source: ${data.source}
 *
 * DO NOT EDIT - This file is auto-generated by scripts/fetch-kinds.js
 */

export const eventKinds = ${JSON.stringify(kinds, null, 2)};

// Kind ranges for classification
export const kindRanges = ${JSON.stringify(data.ranges, null, 2)};

// Privileged kinds (require auth)
export const privilegedKinds = ${JSON.stringify(data.privileged)};

// Directory kinds (public discovery)
export const directoryKinds = ${JSON.stringify(data.directory)};

// Kind aliases
export const kindAliases = ${JSON.stringify(data.aliases, null, 2)};

// Helper function to get event kind by number
export function getEventKind(kindNumber) {
  return eventKinds.find(k => k.kind === kindNumber);
}

// Alias for compatibility
export function getKindInfo(kind) {
  return getEventKind(kind);
}

export function getKindName(kind) {
  const info = getEventKind(kind);
  return info ? info.name : \`Kind \${kind}\`;
}

// Helper function to search event kinds by name or description
export function searchEventKinds(query) {
  const lowerQuery = query.toLowerCase();
  return eventKinds.filter(k =>
    k.name.toLowerCase().includes(lowerQuery) ||
    k.description.toLowerCase().includes(lowerQuery) ||
    k.kind.toString().includes(query)
  );
}

// Helper function to get all event kinds grouped by category
export function getEventKindsByCategory() {
  return {
    regular: eventKinds.filter(k => k.kind < 10000 && !k.isReplaceable),
    replaceable: eventKinds.filter(k => k.isReplaceable),
    ephemeral: eventKinds.filter(k => k.isEphemeral),
    addressable: eventKinds.filter(k => k.isAddressable)
  };
}

// Helper function to create a template event with current timestamp
export function createTemplateEvent(kindNumber, userPubkey = null) {
  const kindInfo = getEventKind(kindNumber);
  if (!kindInfo) {
    return {
      kind: kindNumber,
      content: "",
      tags: [],
      created_at: Math.floor(Date.now() / 1000),
      pubkey: userPubkey || "<your_pubkey_here>"
    };
  }
  return {
    ...kindInfo.template,
    created_at: Math.floor(Date.now() / 1000),
    pubkey: userPubkey || "<your_pubkey_here>"
  };
}

export function isReplaceable(kind) {
  if (kind === 0 || kind === 3) return true;
  return kind >= ${data.ranges.replaceable.start} && kind < ${data.ranges.replaceable.end};
}

export function isEphemeral(kind) {
  return kind >= ${data.ranges.ephemeral.start} && kind < ${data.ranges.ephemeral.end};
}

export function isAddressable(kind) {
  return kind >= ${data.ranges.parameterized.start} && kind <= ${data.ranges.parameterized.end};
}

export function isPrivileged(kind) {
  return privilegedKinds.includes(kind);
}

// Export kind categories for filtering in UI
export const kindCategories = [
  { id: "all", name: "All Kinds", filter: () => true },
  { id: "regular", name: "Regular Events (0-9999)", filter: k => k.kind < 10000 && !k.isReplaceable },
  { id: "replaceable", name: "Replaceable (10000-19999)", filter: k => k.isReplaceable },
  { id: "ephemeral", name: "Ephemeral (20000-29999)", filter: k => k.isEphemeral },
  { id: "addressable", name: "Addressable (30000-39999)", filter: k => k.isAddressable },
  { id: "social", name: "Social", filter: k => [0, 1, 3, 6, 7].includes(k.kind) },
  { id: "messaging", name: "Messaging", filter: k => [4, 9, 10, 11, 12, 14, 15, 40, 41, 42].includes(k.kind) },
  { id: "lists", name: "Lists", filter: k => k.name.toLowerCase().includes("list") || k.name.toLowerCase().includes("set") },
  { id: "marketplace", name: "Marketplace", filter: k => [30017, 30018, 30019, 30020, 1021, 1022, 30402, 30403].includes(k.kind) },
  { id: "lightning", name: "Lightning/Zaps", filter: k => [9734, 9735, 9041, 9321, 7374, 7375, 7376].includes(k.kind) },
  { id: "media", name: "Media", filter: k => [20, 21, 22, 1063, 1222, 1244].includes(k.kind) },
  { id: "git", name: "Git/Code", filter: k => [818, 1337, 1617, 1618, 1619, 1621, 1622, 30617, 30618].includes(k.kind) },
  { id: "calendar", name: "Calendar", filter: k => [31922, 31923, 31924, 31925].includes(k.kind) },
  { id: "groups", name: "Groups", filter: k => (k.kind >= 9000 && k.kind <= 9030) || (k.kind >= 39000 && k.kind <= 39009) },
];
`;
}
async function main() {
  try {
    const data = await fetchKinds();
    // If fetchKinds returned null, we're using the existing file
    if (data === null) {
      console.log('Skipping generation, using existing eventKinds.js');
      return;
    }
    const kinds = generateEventKinds(data);
    const js = generateJS(kinds, data);
    writeFileSync(OUTPUT_PATH, js);
    console.log(`Generated ${OUTPUT_PATH} with ${kinds.length} kinds`);
  } catch (error) {
    console.error('Error:', error.message);
    process.exit(1);
  }
}

main();
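Once generated, the module is imported by the web app; a hypothetical usage sketch (import path assumed relative to app/web/src):

import { getKindName, isAddressable, createTemplateEvent } from './eventKinds.js';

console.log(getKindName(1));        // name from the database, or "Kind 1" fallback
console.log(isAddressable(30023));  // true if 30023 falls in the parameterized range
const draft = createTemplateEvent(30023);
// draft gets a fresh created_at, the [["d", "identifier"]] template tag, and
// the "<your_pubkey_here>" placeholder since no pubkey was passed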

Some files were not shown because too many files have changed in this diff.