= next.orly.dev
:toc:
:note-caption: note 👉
image:./docs/orly.png[orly.dev]
image:https://img.shields.io/badge/godoc-documentation-blue.svg[Documentation,link=https://pkg.go.dev/next.orly.dev]
image:https://img.shields.io/badge/donate-geyser_crowdfunding_project_page-orange.svg[Support this project,link=https://geyser.fund/project/orly]
zap me: ⚡️mlekudev@getalby.com
follow me on link:https://jumble.social/users/npub1fjqqy4a93z5zsjwsfxqhc2764kvykfdyttvldkkkdera8dr78vhsmmleku[nostr]
== about
ORLY is a nostr relay written from the ground up to be performant, low latency, and built with a number of features that make it well suited for:
- personal relays
- small community relays
- business deployments and RaaS (Relay as a Service), with a nostr-native NWC client for accepting payments through NWC-capable lightning nodes
- high availability clusters for reliability and/or providing a unified data set across multiple regions
ORLY uses the fast embedded link:https://github.com/hypermodeinc/badger[badger] key-value store with a storage layout designed for high performance querying and event storage.
On Linux platforms, it uses https://github.com/bitcoin/secp256k1[libsecp256k1]-based signing and signature verification (see link:pkg/crypto/p256k/README.md[here]).
== building
ORLY is a standard Go application that can be built using the Go toolchain.
=== prerequisites
- Go 1.25.0 or later
- Git
- For web UI: link:https://bun.sh/[Bun] JavaScript runtime
=== basic build
To build the relay binary only:
[source,bash]
----
git clone <repository-url>
cd next.orly.dev
go build -o orly
----
=== building with web UI
To build with the embedded web interface:
[source,bash]
----
# Build the Svelte web application
cd app/web
bun install
bun run build
# Build the Go binary from project root
cd ../../
go build -o orly
----
The recommended way to build and embed the web UI is using the provided script:
[source,bash]
----
./scripts/update-embedded-web.sh
----
This script will:
- Build the Svelte app in `app/web` to `app/web/dist` using Bun (preferred) or fall back to npm/yarn/pnpm
- Run `go install` from the repository root so the binary picks up the new embedded assets
- Automatically detect and use the best available JavaScript package manager
For manual builds, you can also use:
[source,bash]
----
#!/bin/bash
# build.sh
echo "Building Svelte app..."
cd app/web
bun install
bun run build
echo "Building Go binary..."
cd ../../
go build -o orly
echo "Build complete!"
----
Make it executable with `chmod +x build.sh` and run with `./build.sh`.
== web UI
ORLY includes a modern web-based user interface built with link:https://svelte.dev/[Svelte] that provides comprehensive relay management capabilities.
=== features
The web UI offers:
* **Authentication**: Secure login using Nostr key pairs with challenge-response authentication
* **Event Management**: View, export, and import Nostr events with advanced filtering and search
* **User Administration**: Manage user permissions and roles (admin/owner)
* **Sprocket Management**: Configure and manage external event processing scripts
* **Real-time Updates**: Live event streaming and status updates
* **Dark/Light Theme**: Toggle between themes with persistent preferences
* **Responsive Design**: Works on desktop and mobile devices
=== authentication
The web UI uses Nostr-native authentication:
1. **Challenge Generation**: Server generates a cryptographic challenge
2. **Signature Verification**: Client signs the challenge with their private key
3. **Session Management**: Authenticated sessions with role-based permissions
Supported authentication methods:
- Direct private key input
- Nostr extension integration
- Hardware wallet support
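The challenge-response flow can be sketched as follows. This is a minimal illustration assuming NIP-42-style authentication; `build_auth_event` is a hypothetical helper name, and the final schnorr-signing step is deliberately omitted:

```python
import hashlib
import json
import time

def build_auth_event(pubkey: str, challenge: str, relay_url: str) -> dict:
    """Build an unsigned NIP-42 auth event (kind 22242) answering a relay challenge."""
    event = {
        "pubkey": pubkey,
        "created_at": int(time.time()),
        "kind": 22242,
        "tags": [["relay", relay_url], ["challenge", challenge]],
        "content": "",
    }
    # Per NIP-01, the event id is the sha256 of the canonical serialization.
    serialized = json.dumps(
        [0, event["pubkey"], event["created_at"], event["kind"],
         event["tags"], event["content"]],
        separators=(",", ":"), ensure_ascii=False,
    )
    event["id"] = hashlib.sha256(serialized.encode()).hexdigest()
    # A real client would now schnorr-sign event["id"] with the private key
    # and set event["sig"]; that step is omitted in this sketch.
    return event
```

The server verifies the signature against the pubkey and checks that the challenge tag matches the one it issued.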
=== user roles
* **Guest**: Read-only access to public events
* **User**: Can publish events and manage their own content
* **Admin**: Full relay management except sprocket configuration
* **Owner**: Complete control including sprocket management and system configuration
=== event management
The interface provides comprehensive event management:
* **Event Browser**: Paginated view of all events with filtering by kind, author, and content
* **Export Functionality**: Export events in JSON format with configurable date ranges
* **Import Capability**: Bulk import events (admin/owner only)
* **Search**: Full-text search across event content and metadata
* **Event Details**: Expandable view showing full event JSON and metadata
=== sprocket integration
The web UI includes a dedicated sprocket management interface:
* **Status Monitoring**: Real-time status of sprocket scripts
* **Script Upload**: Upload and manage sprocket scripts
* **Version Control**: Track and manage multiple script versions
* **Configuration**: Configure sprocket parameters and settings
* **Logs**: View sprocket execution logs and errors
=== development mode
For development, the web UI supports hot-reloading:
[source,bash]
----
# Enable development proxy
export ORLY_WEB_DISABLE_EMBEDDED=true
export ORLY_WEB_DEV_PROXY_URL=localhost:5000
# Start relay
./orly
# In another terminal, start Svelte dev server
cd app/web
bun run dev
----
This allows for rapid development with automatic reloading of changes.
== sprocket event sifter interface
The sprocket system provides a powerful interface for external event processing scripts, allowing you to implement custom filtering, validation, and processing logic for Nostr events before they are stored in the relay.
=== overview
Sprocket scripts receive events via stdin and respond with JSONL (JSON Lines) format, enabling real-time event processing with three possible actions:
* **accept**: Continue with normal event processing
* **reject**: Return OK false to client with rejection message
* **shadowReject**: Return OK true to client but abort processing (useful for spam filtering)
=== how it works
1. **Event Reception**: Events are sent to the sprocket script as JSON objects via stdin
2. **Processing**: Script analyzes the event and applies custom logic
3. **Response**: Script responds with JSONL containing the decision and optional message
4. **Action**: Relay processes the response and either accepts, rejects, or shadow rejects the event
=== script protocol
==== input format
Events are sent as JSON objects, one per line:
[source,json]
----
{
  "id": "event_id_here",
  "kind": 1,
  "content": "Hello, world!",
  "pubkey": "author_pubkey",
  "tags": [["t", "hashtag"], ["p", "reply_pubkey"]],
  "created_at": 1640995200,
  "sig": "signature_here"
}
----
==== output format
Scripts must respond with JSONL format:
[source,json]
----
{"id": "event_id", "action": "accept", "msg": ""}
{"id": "event_id", "action": "reject", "msg": "reason for rejection"}
{"id": "event_id", "action": "shadowReject", "msg": ""}
----
=== configuration
Enable sprocket processing:
[source,bash]
----
export ORLY_SPROCKET_ENABLED=true
export ORLY_APP_NAME="ORLY"
----
The sprocket script should be placed at:
`~/.config/{ORLY_APP_NAME}/sprocket.sh`
For example, with default `ORLY_APP_NAME="ORLY"`:
`~/.config/ORLY/sprocket.sh`
Backup files are automatically created when updating sprocket scripts via the web UI, with timestamps like:
`~/.config/ORLY/sprocket.sh.20240101120000`
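The suffix follows a `YYYYMMDDHHMMSS` pattern; a backup name can be derived like this (illustrative only — the relay generates these names itself, and `backup_name` is a hypothetical helper):

```python
from datetime import datetime

def backup_name(script_path: str, now: datetime) -> str:
    """Append a YYYYMMDDHHMMSS timestamp to the script path, mirroring the web UI's backup naming."""
    return f"{script_path}.{now.strftime('%Y%m%d%H%M%S')}"
```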
=== manual sprocket updates
For manual sprocket script updates, you can use the stop/write/restart method:
1. **Stop the relay**:
+
[source,bash]
----
# Send SIGINT to gracefully stop
kill -INT <relay_pid>
----

2. **Write the new sprocket script**:
+
[source,bash]
----
# Create/update the sprocket script
cat > ~/.config/ORLY/sprocket.sh << 'EOF'
#!/bin/bash
while read -r line; do
  if [[ -n "$line" ]]; then
    event_id=$(echo "$line" | jq -r '.id')
    echo "{\"id\":\"$event_id\",\"action\":\"accept\",\"msg\":\"\"}"
  fi
done
EOF
# Make it executable
chmod +x ~/.config/ORLY/sprocket.sh
----

3. **Restart the relay**:
+
[source,bash]
----
./orly
----
The relay will automatically detect the new sprocket script and start it. If the script fails, sprocket will be disabled and all events rejected until the script is fixed.
=== failure handling
When sprocket is enabled but fails to start or crashes:
1. **Automatic Disable**: Sprocket is automatically disabled
2. **Event Rejection**: All incoming events are rejected with an error message
3. **Periodic Recovery**: Every 30 seconds, the system checks whether the sprocket script has become available
4. **Auto-Restart**: If the script is found, sprocket is automatically re-enabled and restarted
This ensures that:
- Relay continues running even when sprocket fails
- No events are processed without proper sprocket filtering
- Sprocket automatically recovers when the script is fixed
- Clear error messages inform users about the sprocket status
- Error messages include the exact file location for easy fixes
When sprocket fails, the error message will show:
`sprocket disabled due to failure - all events will be rejected (script location: ~/.config/ORLY/sprocket.sh)`
This makes it easy to locate and fix the sprocket script file.
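The recovery behaviour described above can be pictured as a simple watchdog loop. This is a sketch, not the relay's actual implementation; `watch_sprocket` and the `restart` callback are hypothetical names:

```python
import os
import time

def watch_sprocket(script_path: str, restart, interval: float = 30.0, checks: int = 1) -> bool:
    """Poll for the sprocket script every `interval` seconds; once it exists
    and is executable, invoke the restart callback. Returns True if sprocket
    was re-enabled within the given number of checks."""
    for _ in range(checks):
        if os.path.isfile(script_path) and os.access(script_path, os.X_OK):
            restart(script_path)  # hypothetical: relaunch the sifter process
            return True
        time.sleep(interval)
    return False
```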
=== example script
Here's a Python example that implements various filtering criteria:
[source,python]
----
#!/usr/bin/env python3
import json
import sys

def process_event(event_json):
    event_id = event_json.get('id', '')
    event_content = event_json.get('content', '')
    event_kind = event_json.get('kind', 0)

    # Reject spam content
    if 'spam' in event_content.lower():
        return {
            'id': event_id,
            'action': 'reject',
            'msg': 'Content contains spam'
        }

    # Shadow reject test events
    if event_kind == 9999:
        return {
            'id': event_id,
            'action': 'shadowReject',
            'msg': ''
        }

    # Accept all other events
    return {
        'id': event_id,
        'action': 'accept',
        'msg': ''
    }

# Main processing loop
for line in sys.stdin:
    if line.strip():
        try:
            event = json.loads(line)
            response = process_event(event)
            print(json.dumps(response))
            sys.stdout.flush()
        except json.JSONDecodeError:
            continue
----
=== bash example
A simple bash script example:
[source,bash]
----
#!/bin/bash
while read -r line; do
  if [[ -n "$line" ]]; then
    # Extract event ID
    event_id=$(echo "$line" | jq -r '.id')
    # Check for spam content
    if echo "$line" | jq -r '.content' | grep -qi "spam"; then
      echo "{\"id\":\"$event_id\",\"action\":\"reject\",\"msg\":\"Spam detected\"}"
    else
      echo "{\"id\":\"$event_id\",\"action\":\"accept\",\"msg\":\"\"}"
    fi
  fi
done
----
=== testing
Test your sprocket script directly:
[source,bash]
----
# Test with sample event
echo '{"id":"test","kind":1,"content":"spam test"}' | python3 sprocket.py
# Expected output:
# {"id": "test", "action": "reject", "msg": "Content contains spam"}
----
Run the comprehensive test suite:
[source,bash]
----
./test-sprocket-complete.sh
----
=== web UI management
The web UI provides a complete sprocket management interface:
* **Status Monitoring**: View real-time sprocket status and health
* **Script Upload**: Upload new sprocket scripts via the web interface
* **Version Management**: Track and manage multiple script versions
* **Configuration**: Configure sprocket parameters and settings
* **Logs**: View execution logs and error messages
* **Restart**: Restart sprocket scripts without relay restart
=== use cases
Common sprocket use cases include:
* **Spam Filtering**: Detect and reject spam content
* **Content Moderation**: Implement custom content policies
* **Rate Limiting**: Control event publishing rates
* **Event Validation**: Additional validation beyond Nostr protocol
* **Analytics**: Log and analyze event patterns
* **Integration**: Connect with external services and APIs
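As an illustration of the rate-limiting use case, here is a hedged sketch of a sifter decision function that rejects a pubkey's events beyond a per-window quota. The window size and limit are arbitrary choices for the example, not relay defaults, and `decide` would be wired into the same stdin/stdout loop as the Python example above:

```python
WINDOW_SECONDS = 60          # sliding window length (arbitrary example value)
MAX_EVENTS_PER_WINDOW = 30   # per-pubkey quota (arbitrary example value)

def decide(event: dict, history: dict) -> dict:
    """Reject events from a pubkey that exceeds the per-window quota.
    `history` maps pubkey -> list of accepted created_at timestamps."""
    pubkey = event.get("pubkey", "")
    now = event.get("created_at", 0)
    # Keep only timestamps inside the current window.
    history[pubkey] = [t for t in history[pubkey] if now - t < WINDOW_SECONDS]
    if len(history[pubkey]) >= MAX_EVENTS_PER_WINDOW:
        return {"id": event.get("id", ""), "action": "reject",
                "msg": "rate limit exceeded"}
    history[pubkey].append(now)
    return {"id": event.get("id", ""), "action": "accept", "msg": ""}
```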
=== performance considerations
* Sprocket scripts run synchronously and can impact relay performance
* Keep processing logic efficient and fast
* Use appropriate timeouts to prevent blocking
* Consider using shadow reject for non-critical filtering to maintain user experience
== secp256k1 dependency
ORLY uses the optimized `libsecp256k1` C library from Bitcoin Core for schnorr signatures, providing 4x faster signing and ECDH operations compared to pure Go implementations.
=== installation
For Ubuntu/Debian, you can use the provided installation script:
[source,bash]
----
./scripts/ubuntu_install_libsecp256k1.sh
----
Or install manually:
[source,bash]
----
# Install build dependencies
sudo apt -y install build-essential autoconf libtool
# Initialize and build secp256k1
cd pkg/crypto/p256k/secp256k1
git submodule init
git submodule update
./autogen.sh
./configure --enable-module-schnorrsig --enable-module-ecdh --prefix=/usr
make
sudo make install
----
=== fallback mode
If you need to build without the C library dependency, disable CGO:
[source,bash]
----
export CGO_ENABLED=0
go build -o orly
----
This uses the pure Go `btcec` fallback library, which is slower but doesn't require system dependencies.
== stress testing
The stress tester is a tool for performance testing relay implementations under various load conditions.
=== usage
[source,bash]
----
cd cmd/stresstest
go run . [options]
----
Or use the compiled binary:
[source,bash]
----
./cmd/stresstest/stresstest [options]
----
=== options
* `--address` - Relay address (default: localhost)
* `--port` - Relay port (default: 3334)
* `--workers` - Number of concurrent publisher workers (default: 8)
* `--duration` - How long to run the stress test (default: 60s)
* `--publish-timeout` - Timeout waiting for OK per publish (default: 15s)
* `--query-workers` - Number of concurrent query workers (default: 4)
* `--query-timeout` - Subscription timeout for queries (default: 3s)
* `--query-min-interval` - Minimum interval between queries per worker (default: 50ms)
* `--query-max-interval` - Maximum interval between queries per worker (default: 300ms)
* `--skip-cache` - Skip uploading example events before running
=== example
[source,bash]
----
# Run stress test against local relay for 2 minutes with 16 workers
go run cmd/stresstest/main.go --address localhost --port 3334 --workers 16 --duration 120s
# Test a remote relay with higher query load
go run cmd/stresstest/main.go --address relay.example.com --port 443 --query-workers 8 --duration 300s
----
The stress tester will show real-time statistics including events sent/received per second, query counts, and results.
== benchmarks
The benchmark suite provides comprehensive performance testing and comparison across multiple relay implementations.
=== quick start
1. **Setup external relays:**
+
[source,bash]
----
cd cmd/benchmark
./setup-external-relays.sh
----
2. **Run all benchmarks:**
+
[source,bash]
----
docker compose up --build
----
3. **View results:**
+
[source,bash]
----
# View aggregate report
cat reports/run_YYYYMMDD_HHMMSS/aggregate_report.txt
# List individual relay results
ls reports/run_YYYYMMDD_HHMMSS/
----
=== benchmark types
The suite includes three main benchmark patterns:
==== peak throughput test
Tests maximum event ingestion rate with concurrent workers pushing events as fast as possible. Measures events/second, latency distribution, and success rate.
==== burst pattern test
Simulates real-world traffic with alternating high-activity bursts and quiet periods to test relay behavior under varying loads.
==== mixed read/write test
Concurrent read and write operations to test query performance while events are being ingested. Measures combined throughput and latency.
=== tested relays
The benchmark suite compares:
* **next.orly.dev** (this repository) - BadgerDB-based relay
* **Khatru** - SQLite and Badger variants
* **Relayer** - Basic example implementation
* **Strfry** - C++ LMDB-based relay
* **nostr-rs-relay** - Rust-based relay with SQLite
=== metrics reported
* **Throughput**: Events processed per second
* **Latency**: Average, P95, and P99 response times
* **Success Rate**: Percentage of successful operations
* **Memory Usage**: Peak memory consumption during tests
* **Error Analysis**: Detailed error reporting and categorization
Results are timestamped and stored in the `reports/` directory for tracking performance improvements over time.
== follows ACL
The follows ACL (Access Control List) system provides a flexible way to control relay access based on social relationships in the Nostr network. It grants different access levels to users based on whether they are followed by designated admin users.
=== how it works
The follows ACL system operates by:
1. **Admin Configuration**: Designated admin users are specified in the relay configuration
2. **Follow List Discovery**: The system fetches follow lists (kind 3 events) from admin users
3. **Access Level Assignment**:
- **Admin access**: Users listed as admins get full administrative privileges
- **Write access**: Users followed by any admin can publish events to the relay
- **Read access**: All other users can only read events from the relay
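The access-level assignment above can be summarized in a few lines. This is a sketch under the assumption that admins and admin follows are plain pubkey sets; the real relay derives the follow set from kind 3 events, and `access_level` is a hypothetical name:

```python
def access_level(pubkey: str, admins: set, admin_follows: set) -> str:
    """Resolve a user's access level under the follows ACL."""
    if pubkey in admins:
        return "admin"   # full administrative privileges
    if pubkey in admin_follows:
        return "write"   # followed by any admin: may publish events
    return "read"        # everyone else: read-only
```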
=== configuration
Enable the follows ACL system by setting the ACL mode:
[source,bash]
----
export ORLY_ACL_MODE=follows
export ORLY_ADMINS=npub1abc...,npub1xyz...
----
Or in your environment configuration:
[source,env]
----
ORLY_ACL_MODE=follows
ORLY_ADMINS=npub1abc123...,npub1xyz456...
----
=== usage example
[source,bash]
----
# Set up a relay with follows ACL
export ORLY_ACL_MODE=follows
export ORLY_ADMINS=npub1fjqqy4a93z5zsjwsfxqhc2764kvykfdyttvldkkkdera8dr78vhsmmleku
# Start the relay
./orly
----
The relay will automatically:
- Load the follow lists of the specified admin users
- Grant write access to anyone followed by these admins
- Provide read-only access to everyone else
- Update follow lists dynamically as admins modify their follows
== relay sync spider
The relay sync spider is an intelligent synchronization system that discovers and syncs events from other Nostr relays based on social relationships. It works in conjunction with the follows ACL to create a distributed network of synchronized content.
=== how it works
The spider operates in two phases:
1. **Relay Discovery**:
- Finds relay lists (kind 10002 events) from followed users
- Builds a list of relays used by people in your social network
- Prioritizes relays mentioned by admin users
2. **Event Synchronization**:
- Queries discovered relays for events from followed users
- Performs one-time historical sync (default: 1 month back)
- Runs periodic syncs to stay current with new events
- Validates and stores events locally
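The relay-discovery phase boils down to collecting `r` tags from kind 10002 relay-list events (NIP-65). A sketch, with the input shape assumed to be plain event dicts:

```python
def discover_relays(relay_list_events: list) -> set:
    """Collect relay URLs from kind 10002 (NIP-65) relay-list events."""
    relays = set()
    for event in relay_list_events:
        if event.get("kind") != 10002:
            continue
        for tag in event.get("tags", []):
            # An "r" tag is ["r", "<relay-url>"] with an optional read/write marker.
            if len(tag) >= 2 and tag[0] == "r":
                relays.add(tag[1])
    return relays
```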
=== configuration
Enable the spider by setting the spider mode to "follows":
[source,bash]
----
export ORLY_SPIDER_MODE=follows
export ORLY_SPIDER_FREQUENCY=1h
----
Configuration options:
* `ORLY_SPIDER_MODE` - Spider mode: "none" (disabled) or "follows" (enabled)
* `ORLY_SPIDER_FREQUENCY` - How often to sync (default: 1h)
=== usage example
[source,bash]
----
# Enable both follows ACL and spider sync
export ORLY_ACL_MODE=follows
export ORLY_SPIDER_MODE=follows
export ORLY_SPIDER_FREQUENCY=30m
export ORLY_ADMINS=npub1fjqqy4a93z5zsjwsfxqhc2764kvykfdyttvldkkkdera8dr78vhsmmleku
# Start the relay
./orly
----
The spider will:
- Perform a one-time sync of the last month's events
- Discover relays from followed users' relay lists
- Sync events from those relays every 30 minutes
- Only sync events from users in the follow network
=== benefits
* **Decentralized Content**: Automatically aggregates content from your social network
* **Reduced Relay Dependency**: Less reliance on single large relays
* **Improved User Experience**: Users see content from their social circle without having to query many external relays
* **Network Resilience**: Content remains accessible even if origin relays go offline
=== technical notes
* The spider only runs when `ORLY_ACL_MODE=follows` to ensure proper authorization
* One-time sync is marked to prevent repeated historical syncs on restart
* Event validation ensures only properly signed events are stored
* Sync windows are configurable to balance freshness with resource usage