Merge remote-tracking branch 'upstream/main' into kwsantiago/1-public-relay-with-blacklist
173 cmd/benchmark/BENCHMARK_RESULTS.md Normal file
@@ -0,0 +1,173 @@
# Orly Relay Benchmark Results

## Test Environment

- **Date**: August 5, 2025
- **Relay**: Orly v0.4.14
- **Port**: 3334 (WebSocket)
- **System**: Linux 5.15.0-151-generic
- **Storage**: BadgerDB v4

## Benchmark Test Results

### Test 1: Basic Performance (1,000 events, 1KB each)

**Parameters:**

- Events: 1,000
- Event size: 1,024 bytes
- Concurrent publishers: 5
- Queries: 50
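Each test maps directly onto the flags of the benchmark tool introduced in this commit (see cmd/benchmark/README.md). Test 1, for example, corresponds to an invocation along these lines (relay URL inferred from the environment above):

```
./benchmark -relay ws://localhost:3334 -events 1000 -size 1024 -concurrency 5 -queries 50
```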
**Results:**

```
Publish Performance:
  Events Published: 1,000
  Total Data: 4.01 MB
  Duration: 1.769s
  Rate: 565.42 events/second
  Bandwidth: 2.26 MB/second

Query Performance:
  Queries Executed: 50
  Events Returned: 2,000
  Duration: 3.058s
  Rate: 16.35 queries/second
  Avg Events/Query: 40.00
```

### Test 2: Medium Load (10,000 events, 2KB each)

**Parameters:**

- Events: 10,000
- Event size: 2,048 bytes
- Concurrent publishers: 10
- Queries: 100

**Results:**

```
Publish Performance:
  Events Published: 10,000
  Total Data: 76.81 MB
  Duration: 598.301ms
  Rate: 16,714.00 events/second
  Bandwidth: 128.38 MB/second

Query Performance:
  Queries Executed: 100
  Events Returned: 4,000
  Duration: 8.923s
  Rate: 11.21 queries/second
  Avg Events/Query: 40.00
```

### Test 3: High Concurrency (50,000 events, 512 bytes each)

**Parameters:**

- Events: 50,000
- Event size: 512 bytes
- Concurrent publishers: 50
- Queries: 200

**Results:**

```
Publish Performance:
  Events Published: 50,000
  Total Data: 108.63 MB
  Duration: 2.368s
  Rate: 21,118.66 events/second
  Bandwidth: 45.88 MB/second

Query Performance:
  Queries Executed: 200
  Events Returned: 8,000
  Duration: 36.146s
  Rate: 5.53 queries/second
  Avg Events/Query: 40.00
```

### Test 4: Large Events (5,000 events, 10KB each)

**Parameters:**

- Events: 5,000
- Event size: 10,240 bytes
- Concurrent publishers: 10
- Queries: 50

**Results:**

```
Publish Performance:
  Events Published: 5,000
  Total Data: 185.26 MB
  Duration: 934.328ms
  Rate: 5,351.44 events/second
  Bandwidth: 198.28 MB/second

Query Performance:
  Queries Executed: 50
  Events Returned: 2,000
  Duration: 9.982s
  Rate: 5.01 queries/second
  Avg Events/Query: 40.00
```

### Test 5: Query-Only Performance (500 queries)

**Parameters:**

- Skip publishing phase
- Queries: 500
- Query limit: 100

**Results:**

```
Query Performance:
  Queries Executed: 500
  Events Returned: 20,000
  Duration: 1m14.384s
  Rate: 6.72 queries/second
  Avg Events/Query: 40.00
```

## Performance Summary

### Publishing Performance

| Metric | Best Result | Test Configuration |
|--------|-------------|--------------------|
| **Peak Event Rate** | 21,118.66 events/sec | 50 concurrent publishers, 512-byte events |
| **Peak Bandwidth** | 198.28 MB/sec | 10 concurrent publishers, 10KB events |
| **Optimal Balance** | 16,714.00 events/sec @ 128.38 MB/sec | 10 concurrent publishers, 2KB events |

### Query Performance

| Query Type | Avg Rate | Notes |
|------------|----------|-------|
| **Light Load** | 16.35 queries/sec | 50 queries after 1K events |
| **Medium Load** | 11.21 queries/sec | 100 queries after 10K events |
| **Heavy Load** | 5.53 queries/sec | 200 queries after 50K events |
| **Sustained** | 6.72 queries/sec | 500 continuous queries |

## Key Findings

1. **Optimal Concurrency**: The relay performs best with 10-50 concurrent publishers, achieving rates of 16,000-21,000 events/second.

2. **Event Size Impact**:
   - Smaller events (512B-2KB) achieve higher event rates
   - Larger events (10KB) achieve higher bandwidth utilization but lower event rates

3. **Query Performance**: Query throughput degrades as the database grows:
   - Fresh database: ~16 queries/second
   - After 50K events: ~6 queries/second

4. **Scalability**: The relay maintains consistent performance up to 50 concurrent connections and can sustain 21,000+ events/second under optimal conditions.

## Query Filter Distribution

The benchmark tested five query patterns in rotation, each accounting for 20% of queries:

1. Query by kind
2. Query by time range
3. Query by tag
4. Query by author
5. Complex queries with multiple conditions

All query types showed similar performance characteristics, indicating well-balanced indexing.
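The rotation itself is a simple modulo over the query index; a minimal self-contained sketch mirroring the dispatch in `cmd/benchmark/benchmark_simple.go` below (the pubkey constant is a placeholder):

```go
package main

import (
	"fmt"
	"time"
)

// somePubkeyHex stands in for any 64-character hex-encoded pubkey; the
// benchmark substitutes a fresh random value per query.
const somePubkeyHex = "89ab..." // placeholder

// pickFilter returns the filter for the i-th query, cycling through the five
// benchmark patterns: kind, time range, tag, author, complex.
func pickFilter(i, limit int) map[string]interface{} {
	now := time.Now().Unix()
	switch i % 5 {
	case 0: // query by kind
		return map[string]interface{}{"kinds": []int{1}, "limit": limit}
	case 1: // query by time range (last hour)
		return map[string]interface{}{"since": now - 3600, "until": now, "limit": limit}
	case 2: // query by tag
		return map[string]interface{}{"#p": []string{somePubkeyHex}, "limit": limit}
	case 3: // query by author
		return map[string]interface{}{"authors": []string{somePubkeyHex}, "limit": limit}
	default: // complex: kinds + author + time window
		return map[string]interface{}{
			"kinds":   []int{1, 6},
			"authors": []string{somePubkeyHex},
			"since":   now - 7200,
			"limit":   limit,
		}
	}
}

func main() {
	fmt.Println(pickFilter(0, 100)) // map[kinds:[1] limit:100]
}
```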
112 cmd/benchmark/README.md Normal file
@@ -0,0 +1,112 @@
# Orly Relay Benchmark Tool

A performance benchmarking tool for Nostr relays that tests both event ingestion speed and query performance.

## Quick Start (Simple Version)

The repository includes a simple standalone benchmark tool that doesn't require the full Orly dependencies:

```bash
# Build the simple benchmark
go build -o benchmark-simple ./benchmark_simple.go

# Run with default settings
./benchmark-simple

# Or use the convenience script
chmod +x run_benchmark.sh
./run_benchmark.sh --relay ws://localhost:7447 --events 10000
```

## Features

- **Event Publishing Benchmark**: Tests how fast a relay can accept and store events
- **Query Performance Benchmark**: Tests various filter types and query speeds
- **Concurrent Publishing**: Supports multiple concurrent publishers to stress test the relay
- **Detailed Metrics**: Reports events/second, bandwidth usage, and query performance

## Usage

```bash
# Build the tool
go build -o benchmark ./cmd/benchmark

# Run a full benchmark (publish and query)
./benchmark -relay ws://localhost:7447 -events 10000 -queries 100

# Benchmark only publishing
./benchmark -relay ws://localhost:7447 -events 50000 -concurrency 20 -skip-query

# Benchmark only querying
./benchmark -relay ws://localhost:7447 -queries 500 -skip-publish

# Use custom event sizes
./benchmark -relay ws://localhost:7447 -events 10000 -size 2048
```

## Options

- `-relay`: Relay URL to benchmark (default: ws://localhost:7447)
- `-events`: Number of events to publish (default: 10000)
- `-size`: Average size of event content in bytes (default: 1024)
- `-concurrency`: Number of concurrent publishers (default: 10)
- `-queries`: Number of queries to execute (default: 100)
- `-query-limit`: Limit for each query (default: 100)
- `-skip-publish`: Skip the publishing phase
- `-skip-query`: Skip the query phase
- `-v`: Enable verbose output

## Query Types Tested

The benchmark tests various query patterns:

1. Query by kind
2. Query by time range (last hour)
3. Query by tag (`p` tags)
4. Query by author
5. Complex queries with multiple conditions
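On the wire, each pattern becomes a NIP-01 `REQ` message of the form `["REQ", <subscription id>, <filter>]`, which is what `benchmark_simple.go` sends; illustrative examples, with placeholder timestamps and pubkeys:

```
["REQ", "bench-0", {"kinds": [1], "limit": 100}]
["REQ", "bench-1", {"since": 1722816000, "until": 1722819600, "limit": 100}]
["REQ", "bench-2", {"#p": ["<64-char hex pubkey>"], "limit": 100}]
["REQ", "bench-3", {"authors": ["<64-char hex pubkey>"], "limit": 100}]
["REQ", "bench-4", {"kinds": [1, 6], "authors": ["<64-char hex pubkey>"], "since": 1722812400, "limit": 100}]
```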
## Output

The tool provides detailed metrics including:

**Publish Performance:**

- Total events published
- Total data transferred
- Publishing rate (events/second)
- Bandwidth usage (MB/second)

**Query Performance:**

- Total queries executed
- Total events returned
- Query rate (queries/second)
- Average events per query

## Example Output

```
Publishing 10000 events to ws://localhost:7447...
  Published 1000 events...
  Published 2000 events...
  ...

Querying events from ws://localhost:7447...
  Executed 20 queries...
  Executed 40 queries...
  ...

=== Benchmark Results ===

Publish Performance:
  Events Published: 10000
  Total Data: 12.34 MB
  Duration: 5.2s
  Rate: 1923.08 events/second
  Bandwidth: 2.37 MB/second

Query Performance:
  Queries Executed: 100
  Events Returned: 4523
  Duration: 2.1s
  Rate: 47.62 queries/second
  Avg Events/Query: 45.23
```
304 cmd/benchmark/benchmark_simple.go Normal file
@@ -0,0 +1,304 @@
//go:build ignore
// +build ignore

package main

import (
	"context"
	"crypto/sha256"
	"encoding/hex"
	"encoding/json"
	"flag"
	"fmt"
	"log"
	"math/rand"
	"net/url"
	"sync"
	"sync/atomic"
	"time"

	"github.com/gobwas/ws"
	"github.com/gobwas/ws/wsutil"
)

// Event is a simple Nostr event structure for benchmarking.
type Event struct {
	ID        string     `json:"id"`
	Pubkey    string     `json:"pubkey"`
	CreatedAt int64      `json:"created_at"`
	Kind      int        `json:"kind"`
	Tags      [][]string `json:"tags"`
	Content   string     `json:"content"`
	Sig       string     `json:"sig"`
}

// generateTestEvent builds a test event with random content, pubkey and
// signature. The signature is random bytes, not a valid BIP-340 signature,
// so the target relay must not be verifying signatures.
func generateTestEvent(size int) *Event {
	content := make([]byte, size)
	rand.Read(content)

	// Generate random pubkey and sig
	pubkey := make([]byte, 32)
	sig := make([]byte, 64)
	rand.Read(pubkey)
	rand.Read(sig)

	ev := &Event{
		Pubkey:    hex.EncodeToString(pubkey),
		CreatedAt: time.Now().Unix(),
		Kind:      1,
		Tags:      [][]string{},
		Content:   string(content),
		Sig:       hex.EncodeToString(sig),
	}

	// Generate ID (simplified NIP-01 serialization)
	serialized, _ := json.Marshal([]interface{}{
		0,
		ev.Pubkey,
		ev.CreatedAt,
		ev.Kind,
		ev.Tags,
		ev.Content,
	})
	hash := sha256.Sum256(serialized)
	ev.ID = hex.EncodeToString(hash[:])

	return ev
}

func publishEvents(relayURL string, count int, size int, concurrency int) (int64, int64, time.Duration, error) {
	u, err := url.Parse(relayURL)
	if err != nil {
		return 0, 0, 0, err
	}

	var publishedEvents atomic.Int64
	var publishedBytes atomic.Int64
	var wg sync.WaitGroup

	eventsPerWorker := count / concurrency
	extraEvents := count % concurrency

	start := time.Now()

	for i := 0; i < concurrency; i++ {
		wg.Add(1)
		eventsToPublish := eventsPerWorker
		if i < extraEvents {
			eventsToPublish++
		}

		go func(workerID int, eventCount int) {
			defer wg.Done()

			// Connect to relay
			ctx := context.Background()
			conn, _, _, err := ws.Dial(ctx, u.String())
			if err != nil {
				log.Printf("Worker %d: connection error: %v", workerID, err)
				return
			}
			defer conn.Close()

			// Publish events
			for j := 0; j < eventCount; j++ {
				ev := generateTestEvent(size)

				// Create EVENT message
				msg, _ := json.Marshal([]interface{}{"EVENT", ev})

				err := wsutil.WriteClientMessage(conn, ws.OpText, msg)
				if err != nil {
					log.Printf("Worker %d: write error: %v", workerID, err)
					continue
				}

				publishedEvents.Add(1)
				publishedBytes.Add(int64(len(msg)))

				// Read response (OK or error)
				_, _, err = wsutil.ReadServerData(conn)
				if err != nil {
					log.Printf("Worker %d: read error: %v", workerID, err)
				}
			}
		}(i, eventsToPublish)
	}

	wg.Wait()
	duration := time.Since(start)

	return publishedEvents.Load(), publishedBytes.Load(), duration, nil
}

func queryEvents(relayURL string, queries int, limit int) (int64, int64, time.Duration, error) {
	u, err := url.Parse(relayURL)
	if err != nil {
		return 0, 0, 0, err
	}

	ctx := context.Background()
	conn, _, _, err := ws.Dial(ctx, u.String())
	if err != nil {
		return 0, 0, 0, err
	}
	defer conn.Close()

	var totalQueries int64
	var totalEvents int64

	start := time.Now()

	for i := 0; i < queries; i++ {
		// Generate various filter types
		var filter map[string]interface{}

		switch i % 5 {
		case 0:
			// Query by kind
			filter = map[string]interface{}{
				"kinds": []int{1},
				"limit": limit,
			}
		case 1:
			// Query by time range
			now := time.Now().Unix()
			filter = map[string]interface{}{
				"since": now - 3600,
				"until": now,
				"limit": limit,
			}
		case 2:
			// Query by tag
			filter = map[string]interface{}{
				"#p":    []string{hex.EncodeToString(randBytes(32))},
				"limit": limit,
			}
		case 3:
			// Query by author
			filter = map[string]interface{}{
				"authors": []string{hex.EncodeToString(randBytes(32))},
				"limit":   limit,
			}
		case 4:
			// Complex query
			now := time.Now().Unix()
			filter = map[string]interface{}{
				"kinds":   []int{1, 6},
				"authors": []string{hex.EncodeToString(randBytes(32))},
				"since":   now - 7200,
				"limit":   limit,
			}
		}

		// Send REQ
		subID := fmt.Sprintf("bench-%d", i)
		msg, _ := json.Marshal([]interface{}{"REQ", subID, filter})

		err := wsutil.WriteClientMessage(conn, ws.OpText, msg)
		if err != nil {
			log.Printf("Query %d: write error: %v", i, err)
			continue
		}

		// Read events until EOSE
		eventCount := 0
		for {
			data, err := wsutil.ReadServerText(conn)
			if err != nil {
				log.Printf("Query %d: read error: %v", i, err)
				break
			}

			var msg []interface{}
			if err := json.Unmarshal(data, &msg); err != nil {
				continue
			}

			if len(msg) < 2 {
				continue
			}

			msgType, ok := msg[0].(string)
			if !ok {
				continue
			}

			switch msgType {
			case "EVENT":
				eventCount++
			case "EOSE":
				goto done
			}
		}
	done:

		// Send CLOSE
		closeMsg, _ := json.Marshal([]interface{}{"CLOSE", subID})
		wsutil.WriteClientMessage(conn, ws.OpText, closeMsg)

		totalQueries++
		totalEvents += int64(eventCount)

		if totalQueries%20 == 0 {
			fmt.Printf("  Executed %d queries...\n", totalQueries)
		}
	}

	duration := time.Since(start)
	return totalQueries, totalEvents, duration, nil
}

func randBytes(n int) []byte {
	b := make([]byte, n)
	rand.Read(b)
	return b
}

func main() {
	var (
		relayURL    = flag.String("relay", "ws://localhost:7447", "Relay URL to benchmark")
		eventCount  = flag.Int("events", 10000, "Number of events to publish")
		eventSize   = flag.Int("size", 1024, "Average size of event content in bytes")
		concurrency = flag.Int("concurrency", 10, "Number of concurrent publishers")
		queryCount  = flag.Int("queries", 100, "Number of queries to execute")
		queryLimit  = flag.Int("query-limit", 100, "Limit for each query")
		skipPublish = flag.Bool("skip-publish", false, "Skip publishing phase")
		skipQuery   = flag.Bool("skip-query", false, "Skip query phase")
	)
	flag.Parse()

	fmt.Printf("=== Nostr Relay Benchmark ===\n\n")

	// Phase 1: Publish events
	if !*skipPublish {
		fmt.Printf("Publishing %d events to %s...\n", *eventCount, *relayURL)
		published, bytes, duration, err := publishEvents(*relayURL, *eventCount, *eventSize, *concurrency)
		if err != nil {
			log.Fatalf("Publishing failed: %v", err)
		}

		fmt.Printf("\nPublish Performance:\n")
		fmt.Printf("  Events Published: %d\n", published)
		fmt.Printf("  Total Data: %.2f MB\n", float64(bytes)/1024/1024)
		fmt.Printf("  Duration: %s\n", duration)
		fmt.Printf("  Rate: %.2f events/second\n", float64(published)/duration.Seconds())
		fmt.Printf("  Bandwidth: %.2f MB/second\n", float64(bytes)/duration.Seconds()/1024/1024)
	}

	// Phase 2: Query events
	if !*skipQuery {
		fmt.Printf("\nQuerying events from %s...\n", *relayURL)
		queries, events, duration, err := queryEvents(*relayURL, *queryCount, *queryLimit)
		if err != nil {
			log.Fatalf("Querying failed: %v", err)
		}

		fmt.Printf("\nQuery Performance:\n")
		fmt.Printf("  Queries Executed: %d\n", queries)
		fmt.Printf("  Events Returned: %d\n", events)
		fmt.Printf("  Duration: %s\n", duration)
		fmt.Printf("  Rate: %.2f queries/second\n", float64(queries)/duration.Seconds())
		fmt.Printf("  Avg Events/Query: %.2f\n", float64(events)/float64(queries))
	}
}
320 cmd/benchmark/main.go Normal file
@@ -0,0 +1,320 @@
package main

import (
	"flag"
	"fmt"
	"os"
	"sync"
	"sync/atomic"
	"time"

	"lukechampine.com/frand"

	"orly.dev/pkg/encoders/event"
	"orly.dev/pkg/encoders/filter"
	"orly.dev/pkg/encoders/kind"
	"orly.dev/pkg/encoders/kinds"
	"orly.dev/pkg/encoders/tag"
	"orly.dev/pkg/encoders/tags"
	"orly.dev/pkg/encoders/text"
	"orly.dev/pkg/encoders/timestamp"
	"orly.dev/pkg/protocol/ws"
	"orly.dev/pkg/utils/chk"
	"orly.dev/pkg/utils/context"
	"orly.dev/pkg/utils/log"
	"orly.dev/pkg/utils/lol"
)

// BenchmarkResults collects the metrics reported at the end of a run.
type BenchmarkResults struct {
	EventsPublished      int64
	EventsPublishedBytes int64
	PublishDuration      time.Duration
	PublishRate          float64
	PublishBandwidth     float64

	QueriesExecuted int64
	QueryDuration   time.Duration
	QueryRate       float64
	EventsReturned  int64
}

func main() {
	var (
		relayURL    = flag.String("relay", "ws://localhost:7447", "Relay URL to benchmark")
		eventCount  = flag.Int("events", 10000, "Number of events to publish")
		eventSize   = flag.Int("size", 1024, "Average size of event content in bytes")
		concurrency = flag.Int("concurrency", 10, "Number of concurrent publishers")
		queryCount  = flag.Int("queries", 100, "Number of queries to execute")
		queryLimit  = flag.Int("query-limit", 100, "Limit for each query")
		skipPublish = flag.Bool("skip-publish", false, "Skip publishing phase")
		skipQuery   = flag.Bool("skip-query", false, "Skip query phase")
		verbose     = flag.Bool("v", false, "Verbose output")
	)
	flag.Parse()

	if *verbose {
		lol.SetLogLevel("trace")
	}

	c := context.Bg()
	results := &BenchmarkResults{}

	// Phase 1: Publish events
	if !*skipPublish {
		fmt.Printf("Publishing %d events to %s...\n", *eventCount, *relayURL)
		if err := benchmarkPublish(c, *relayURL, *eventCount, *eventSize, *concurrency, results); chk.E(err) {
			fmt.Fprintf(os.Stderr, "Error during publish benchmark: %v\n", err)
			os.Exit(1)
		}
	}

	// Phase 2: Query events
	if !*skipQuery {
		fmt.Printf("\nQuerying events from %s...\n", *relayURL)
		if err := benchmarkQuery(c, *relayURL, *queryCount, *queryLimit, results); chk.E(err) {
			fmt.Fprintf(os.Stderr, "Error during query benchmark: %v\n", err)
			os.Exit(1)
		}
	}

	// Print results
	printResults(results)
}

func benchmarkPublish(c context.T, relayURL string, eventCount, eventSize, concurrency int, results *BenchmarkResults) error {
	// Generate signers for each concurrent publisher
	signers := make([]*testSigner, concurrency)
	for i := range signers {
		signers[i] = newTestSigner()
	}

	// Track published events
	var publishedEvents atomic.Int64
	var publishedBytes atomic.Int64
	var errors atomic.Int64

	// Create wait group for concurrent publishers
	var wg sync.WaitGroup
	eventsPerPublisher := eventCount / concurrency
	extraEvents := eventCount % concurrency

	startTime := time.Now()

	for i := 0; i < concurrency; i++ {
		wg.Add(1)
		go func(publisherID int) {
			defer wg.Done()

			// Connect to relay
			relay, err := ws.RelayConnect(c, relayURL)
			if err != nil {
				log.E.F("Publisher %d failed to connect: %v", publisherID, err)
				errors.Add(1)
				return
			}
			defer relay.Close()

			// Calculate events for this publisher
			eventsToPublish := eventsPerPublisher
			if publisherID < extraEvents {
				eventsToPublish++
			}

			signer := signers[publisherID]

			// Publish events
			for j := 0; j < eventsToPublish; j++ {
				ev := generateEvent(signer, eventSize)

				if err := relay.Publish(c, ev); err != nil {
					log.E.F("Publisher %d failed to publish event: %v", publisherID, err)
					errors.Add(1)
					continue
				}

				evBytes := ev.Marshal(nil)
				publishedEvents.Add(1)
				publishedBytes.Add(int64(len(evBytes)))

				if publishedEvents.Load()%1000 == 0 {
					fmt.Printf("  Published %d events...\n", publishedEvents.Load())
				}
			}
		}(i)
	}

	wg.Wait()
	duration := time.Since(startTime)

	results.EventsPublished = publishedEvents.Load()
	results.EventsPublishedBytes = publishedBytes.Load()
	results.PublishDuration = duration
	results.PublishRate = float64(results.EventsPublished) / duration.Seconds()
	results.PublishBandwidth = float64(results.EventsPublishedBytes) / duration.Seconds() / 1024 / 1024 // MB/s

	if errors.Load() > 0 {
		fmt.Printf("  Warning: %d errors occurred during publishing\n", errors.Load())
	}

	return nil
}

func benchmarkQuery(c context.T, relayURL string, queryCount, queryLimit int, results *BenchmarkResults) error {
	relay, err := ws.RelayConnect(c, relayURL)
	if err != nil {
		return fmt.Errorf("failed to connect to relay: %w", err)
	}
	defer relay.Close()

	var totalEvents atomic.Int64
	var totalQueries atomic.Int64

	startTime := time.Now()

	for i := 0; i < queryCount; i++ {
		// Generate various filter types
		var f *filter.F
		switch i % 5 {
		case 0:
			// Query by kind
			limit := uint(queryLimit)
			f = &filter.F{
				Kinds: kinds.New(kind.TextNote),
				Limit: &limit,
			}
		case 1:
			// Query by time range
			now := timestamp.Now()
			since := timestamp.New(now.I64() - 3600) // last hour
			limit := uint(queryLimit)
			f = &filter.F{
				Since: since,
				Until: now,
				Limit: &limit,
			}
		case 2:
			// Query by tag
			limit := uint(queryLimit)
			f = &filter.F{
				Tags:  tags.New(tag.New([]byte("p"), generateRandomPubkey())),
				Limit: &limit,
			}
		case 3:
			// Query by author
			limit := uint(queryLimit)
			f = &filter.F{
				Authors: tag.New(generateRandomPubkey()),
				Limit:   &limit,
			}
		case 4:
			// Complex query with multiple conditions
			now := timestamp.Now()
			since := timestamp.New(now.I64() - 7200)
			limit := uint(queryLimit)
			f = &filter.F{
				Kinds:   kinds.New(kind.TextNote, kind.Repost),
				Authors: tag.New(generateRandomPubkey()),
				Since:   since,
				Limit:   &limit,
			}
		}

		// Execute query
		events, err := relay.QuerySync(c, f, ws.WithLabel("benchmark"))
		if err != nil {
			log.E.F("Query %d failed: %v", i, err)
			continue
		}

		totalEvents.Add(int64(len(events)))
		totalQueries.Add(1)

		if totalQueries.Load()%20 == 0 {
			fmt.Printf("  Executed %d queries...\n", totalQueries.Load())
		}
	}

	duration := time.Since(startTime)

	results.QueriesExecuted = totalQueries.Load()
	results.QueryDuration = duration
	results.QueryRate = float64(results.QueriesExecuted) / duration.Seconds()
	results.EventsReturned = totalEvents.Load()

	return nil
}

// generateEvent builds a signed text note whose content size varies ±25%
// around contentSize.
func generateEvent(signer *testSigner, contentSize int) *event.E {
	// Generate content with some variation
	size := contentSize + frand.Intn(contentSize/2) - contentSize/4
	if size < 10 {
		size = 10
	}

	content := text.NostrEscape(nil, frand.Bytes(size))

	ev := &event.E{
		Pubkey:    signer.Pub(),
		Kind:      kind.TextNote,
		CreatedAt: timestamp.Now(),
		Content:   content,
		Tags:      generateRandomTags(),
	}

	if err := ev.Sign(signer); chk.E(err) {
		panic(fmt.Sprintf("failed to sign event: %v", err))
	}

	return ev
}

// generateRandomTags attaches up to four random p/e/t tags to an event.
func generateRandomTags() *tags.T {
	t := tags.New()

	// Add some random tags
	numTags := frand.Intn(5)
	for i := 0; i < numTags; i++ {
		switch frand.Intn(3) {
		case 0:
			// p tag
			t.AppendUnique(tag.New([]byte("p"), generateRandomPubkey()))
		case 1:
			// e tag
			t.AppendUnique(tag.New([]byte("e"), generateRandomEventID()))
		case 2:
			// t tag
			t.AppendUnique(tag.New([]byte("t"), []byte(fmt.Sprintf("topic%d", frand.Intn(100)))))
		}
	}

	return t
}

func generateRandomPubkey() []byte {
	return frand.Bytes(32)
}

func generateRandomEventID() []byte {
	return frand.Bytes(32)
}

func printResults(results *BenchmarkResults) {
	fmt.Println("\n=== Benchmark Results ===")

	if results.EventsPublished > 0 {
		fmt.Println("\nPublish Performance:")
		fmt.Printf("  Events Published: %d\n", results.EventsPublished)
		fmt.Printf("  Total Data: %.2f MB\n", float64(results.EventsPublishedBytes)/1024/1024)
		fmt.Printf("  Duration: %s\n", results.PublishDuration)
		fmt.Printf("  Rate: %.2f events/second\n", results.PublishRate)
		fmt.Printf("  Bandwidth: %.2f MB/second\n", results.PublishBandwidth)
	}

	if results.QueriesExecuted > 0 {
		fmt.Println("\nQuery Performance:")
		fmt.Printf("  Queries Executed: %d\n", results.QueriesExecuted)
		fmt.Printf("  Events Returned: %d\n", results.EventsReturned)
		fmt.Printf("  Duration: %s\n", results.QueryDuration)
		fmt.Printf("  Rate: %.2f queries/second\n", results.QueryRate)
		avgEventsPerQuery := float64(results.EventsReturned) / float64(results.QueriesExecuted)
		fmt.Printf("  Avg Events/Query: %.2f\n", avgEventsPerQuery)
	}
}
82 cmd/benchmark/run_benchmark.sh Executable file
@@ -0,0 +1,82 @@
#!/bin/bash

# Simple Nostr Relay Benchmark Script

# Default values
RELAY_URL="ws://localhost:7447"
EVENTS=10000
SIZE=1024
CONCURRENCY=10
QUERIES=100
QUERY_LIMIT=100

# Parse command line arguments
while [[ $# -gt 0 ]]; do
    case $1 in
        --relay)
            RELAY_URL="$2"
            shift 2
            ;;
        --events)
            EVENTS="$2"
            shift 2
            ;;
        --size)
            SIZE="$2"
            shift 2
            ;;
        --concurrency)
            CONCURRENCY="$2"
            shift 2
            ;;
        --queries)
            QUERIES="$2"
            shift 2
            ;;
        --query-limit)
            QUERY_LIMIT="$2"
            shift 2
            ;;
        --skip-publish)
            SKIP_PUBLISH="-skip-publish"
            shift
            ;;
        --skip-query)
            SKIP_QUERY="-skip-query"
            shift
            ;;
        *)
            echo "Unknown option: $1"
            echo "Usage: $0 [--relay URL] [--events N] [--size N] [--concurrency N] [--queries N] [--query-limit N] [--skip-publish] [--skip-query]"
            exit 1
            ;;
    esac
done

# Build the benchmark tool if it doesn't exist
if [ ! -f benchmark-simple ]; then
    echo "Building benchmark tool..."
    go build -o benchmark-simple ./benchmark_simple.go
    if [ $? -ne 0 ]; then
        echo "Failed to build benchmark tool"
        exit 1
    fi
fi

# Run the benchmark
echo "Running Nostr relay benchmark..."
echo "Relay: $RELAY_URL"
echo "Events: $EVENTS (size: $SIZE bytes)"
echo "Concurrency: $CONCURRENCY"
echo "Queries: $QUERIES (limit: $QUERY_LIMIT)"
echo ""

# SKIP_PUBLISH/SKIP_QUERY are intentionally unquoted so they expand to
# nothing when unset.
./benchmark-simple \
    -relay "$RELAY_URL" \
    -events "$EVENTS" \
    -size "$SIZE" \
    -concurrency "$CONCURRENCY" \
    -queries "$QUERIES" \
    -query-limit "$QUERY_LIMIT" \
    $SKIP_PUBLISH \
    $SKIP_QUERY
63 cmd/benchmark/test_signer.go Normal file
@@ -0,0 +1,63 @@
package main

import (
	"lukechampine.com/frand"

	"orly.dev/pkg/interfaces/signer"
)

// testSigner is a simple signer implementation for benchmarking. It produces
// random (invalid) signatures and always reports successful verification, so
// it is only suitable against a relay that does not verify signatures.
type testSigner struct {
	pub []byte
	sec []byte
}

func newTestSigner() *testSigner {
	return &testSigner{
		pub: frand.Bytes(32),
		sec: frand.Bytes(32),
	}
}

func (s *testSigner) Pub() []byte {
	return s.pub
}

func (s *testSigner) Sec() []byte {
	return s.sec
}

// Sign returns 64 random bytes rather than a real BIP-340 signature.
func (s *testSigner) Sign(msg []byte) ([]byte, error) {
	return frand.Bytes(64), nil
}

// Verify unconditionally accepts any signature.
func (s *testSigner) Verify(msg, sig []byte) (bool, error) {
	return true, nil
}

func (s *testSigner) InitSec(sec []byte) error {
	s.sec = sec
	s.pub = frand.Bytes(32)
	return nil
}

func (s *testSigner) InitPub(pub []byte) error {
	s.pub = pub
	return nil
}

// Zero wipes the secret key material.
func (s *testSigner) Zero() {
	for i := range s.sec {
		s.sec[i] = 0
	}
}

// ECDH returns random bytes; no real key agreement is performed.
func (s *testSigner) ECDH(pubkey []byte) ([]byte, error) {
	return frand.Bytes(32), nil
}

func (s *testSigner) Generate() error {
	return nil
}

// Compile-time check that testSigner satisfies the signer interface.
var _ signer.I = (*testSigner)(nil)
@@ -26,26 +26,27 @@ import (
 // and default values. It defines parameters for app behaviour, storage
 // locations, logging, and network settings used across the relay service.
 type C struct {
-	AppName        string        `env:"ORLY_APP_NAME" default:"orly"`
-	Config         string        `env:"ORLY_CONFIG_DIR" usage:"location for configuration file, which has the name '.env' to make it harder to delete, and is a standard environment KEY=value<newline>... style" default:"~/.config/orly"`
-	State          string        `env:"ORLY_STATE_DATA_DIR" usage:"storage location for state data affected by dynamic interactive interfaces" default:"~/.local/state/orly"`
-	DataDir        string        `env:"ORLY_DATA_DIR" usage:"storage location for the event store" default:"~/.local/cache/orly"`
-	Listen         string        `env:"ORLY_LISTEN" default:"0.0.0.0" usage:"network listen address"`
-	Port           int           `env:"ORLY_PORT" default:"3334" usage:"port to listen on"`
-	LogLevel       string        `env:"ORLY_LOG_LEVEL" default:"info" usage:"debug level: fatal error warn info debug trace"`
-	DbLogLevel     string        `env:"ORLY_DB_LOG_LEVEL" default:"info" usage:"debug level: fatal error warn info debug trace"`
-	Pprof          string        `env:"ORLY_PPROF" usage:"enable pprof on 127.0.0.1:6060" enum:"cpu,memory,allocation"`
-	AuthRequired   bool          `env:"ORLY_AUTH_REQUIRED" default:"false" usage:"require authentication for all requests"`
-	PublicReadable bool          `env:"ORLY_PUBLIC_READABLE" default:"true" usage:"allow public read access to regardless of whether the client is authed"`
-	SpiderSeeds    []string      `env:"ORLY_SPIDER_SEEDS" usage:"seeds to use for the spider (relays that are looked up initially to find owner relay lists) (comma separated)" default:"wss://profiles.nostr1.com/,wss://relay.nostr.band/,wss://relay.damus.io/,wss://nostr.wine/,wss://nostr.land/,wss://theforest.nostr1.com/"`
-	SpiderType     string        `env:"ORLY_SPIDER_TYPE" usage:"whether to spider, and what degree of spidering: none, directory, follows (follows means to the second degree of the follow graph)" default:"directory"`
-	SpiderTime     time.Duration `env:"ORLY_SPIDER_FREQUENCY" usage:"how often to run the spider, uses notation 0h0m0s" default:"1h"`
-	Owners         []string      `env:"ORLY_OWNERS" usage:"list of users whose follow lists designate whitelisted users who can publish events, and who can read if public readable is false (comma separated)"`
-	Private        bool          `env:"ORLY_PRIVATE" usage:"do not spider for user metadata because the relay is private and this would leak relay memberships" default:"false"`
-	Whitelist      []string      `env:"ORLY_WHITELIST" usage:"only allow connections from this list of IP addresses"`
-	Blacklist      []string      `env:"ORLY_BLACKLIST" usage:"list of pubkeys to block when auth is not required (comma separated)"`
-	RelaySecret    string        `env:"ORLY_SECRET_KEY" usage:"secret key for relay cluster replication authentication"`
-	PeerRelays     []string      `env:"ORLY_PEER_RELAYS" usage:"list of peer relays URLs that new events are pushed to in format <pubkey>|<url>"`
+	AppName            string        `env:"ORLY_APP_NAME" default:"orly"`
+	Config             string        `env:"ORLY_CONFIG_DIR" usage:"location for configuration file, which has the name '.env' to make it harder to delete, and is a standard environment KEY=value<newline>... style" default:"~/.config/orly"`
+	State              string        `env:"ORLY_STATE_DATA_DIR" usage:"storage location for state data affected by dynamic interactive interfaces" default:"~/.local/state/orly"`
+	DataDir            string        `env:"ORLY_DATA_DIR" usage:"storage location for the event store" default:"~/.local/cache/orly"`
+	Listen             string        `env:"ORLY_LISTEN" default:"0.0.0.0" usage:"network listen address"`
+	Port               int           `env:"ORLY_PORT" default:"3334" usage:"port to listen on"`
+	LogLevel           string        `env:"ORLY_LOG_LEVEL" default:"info" usage:"debug level: fatal error warn info debug trace"`
+	DbLogLevel         string        `env:"ORLY_DB_LOG_LEVEL" default:"info" usage:"debug level: fatal error warn info debug trace"`
+	Pprof              string        `env:"ORLY_PPROF" usage:"enable pprof on 127.0.0.1:6060" enum:"cpu,memory,allocation"`
+	AuthRequired       bool          `env:"ORLY_AUTH_REQUIRED" default:"false" usage:"require authentication for all requests"`
+	PublicReadable     bool          `env:"ORLY_PUBLIC_READABLE" default:"true" usage:"allow public read access to regardless of whether the client is authed"`
+	SpiderSeeds        []string      `env:"ORLY_SPIDER_SEEDS" usage:"seeds to use for the spider (relays that are looked up initially to find owner relay lists) (comma separated)" default:"wss://profiles.nostr1.com/,wss://relay.nostr.band/,wss://relay.damus.io/,wss://nostr.wine/,wss://nostr.land/,wss://theforest.nostr1.com/,wss://profiles.nostr1.com/"`
+	SpiderType         string        `env:"ORLY_SPIDER_TYPE" usage:"whether to spider, and what degree of spidering: none, directory, follows (follows means to the second degree of the follow graph)" default:"directory"`
+	SpiderTime         time.Duration `env:"ORLY_SPIDER_FREQUENCY" usage:"how often to run the spider, uses notation 0h0m0s" default:"1h"`
+	SpiderSecondDegree bool          `env:"ORLY_SPIDER_SECOND_DEGREE" default:"true" usage:"whether to enable spidering the second degree of follows for non-directory events if ORLY_SPIDER_TYPE is set to 'follows'"`
+	Owners             []string      `env:"ORLY_OWNERS" usage:"list of users whose follow lists designate whitelisted users who can publish events, and who can read if public readable is false (comma separated)"`
+	Private            bool          `env:"ORLY_PRIVATE" usage:"do not spider for user metadata because the relay is private and this would leak relay memberships" default:"false"`
+	Whitelist          []string      `env:"ORLY_WHITELIST" usage:"only allow connections from this list of IP addresses"`
+	Blacklist          []string      `env:"ORLY_BLACKLIST" usage:"list of pubkeys to block when auth is not required (comma separated)"`
+	RelaySecret        string        `env:"ORLY_SECRET_KEY" usage:"secret key for relay cluster replication authentication"`
+	PeerRelays         []string      `env:"ORLY_PEER_RELAYS" usage:"list of peer relays URLs that new events are pushed to in format <pubkey>|<url>"`
 }

 // New creates and initializes a new configuration object for the relay
@@ -122,7 +122,7 @@ func (s *Server) SpiderFetch(
 	l := &lim
 	var since *timestamp.T
 	if k == nil {
-		since = timestamp.FromTime(time.Now().Add(-1 * time.Hour))
+		since = timestamp.FromTime(time.Now().Add(-1 * s.C.SpiderTime * 3 / 2))
 	} else {
 		l = nil
 	}
@@ -103,13 +103,32 @@ func (s *Server) Spider(noFetch ...bool) (err error) {
 			if s.C.SpiderType == "directory" {
 				k = kinds.New(
 					kind.ProfileMetadata, kind.RelayListMetadata,
-					kind.DMRelaysList,
+					kind.DMRelaysList, kind.MuteList,
 				)
 			}
-			everyone := append(ownersFollowed, followedFollows...)
+			everyone := ownersFollowed
+			if s.C.SpiderSecondDegree &&
+				(s.C.SpiderType == "follows" ||
+					s.C.SpiderType == "directory") {
+				everyone = append(ownersFollowed, followedFollows...)
+			}
 			_, _ = s.SpiderFetch(
 				k, false, true, everyone...,
 			)
+			// get the directory events also for second degree if spider
+			// type is directory but second degree is disabled, so all
+			// directory data is available for all whitelisted users.
+			if !s.C.SpiderSecondDegree && s.C.SpiderType == "directory" {
+				k = kinds.New(
+					kind.ProfileMetadata, kind.RelayListMetadata,
+					kind.DMRelaysList, kind.MuteList,
+				)
+				everyone = append(ownersFollowed, followedFollows...)
+				_, _ = s.SpiderFetch(
+					k, false, true, everyone...,
+				)
+			}
 		}()
 	}
 }()
@@ -1 +1 @@
-v0.4.8
+v0.4.14
50 readme.adoc
@@ -12,12 +12,12 @@ and https://github.com/fiatjaf/relayer[fiatjaf/relayer] aimed at maximum perform

 == Features

-* a lot of bits and pieces accumulated from nearly 8 years of working with Go, logging and run control, XDG user data directories (windows, mac, linux, android) (todo: this is mostly built and designed but not currently available)
-* a cleaned up and unified fork of the btcd/dcred BIP-340 signatures, including the use of bitcoin core's BIP-340 implementation (more than 4x faster than btcd) (todo: ECDH from the C library tbd). (todo: HTTP API not in this repo yet but coming soon TM)
+* a lot of bits and pieces accumulated from nearly 8 years of working with Go, logging and run control, XDG user data directories (windows, mac, linux, android)
+* a cleaned up and unified fork of the btcd/dcred BIP-340 signatures, including the use of bitcoin core's BIP-340 implementation (more than 4x faster than btcd) (todo: ECDH from the C library tbd).
 * AVX/AVX2 optimized SHA256 and SIMD hex encoder
 * https://github.com/bitcoin/secp256k1[libsecp256k1]-enabled signature and signature verification (see link:p256k/README.md[here]).
-* efficient, mutable byte slice-based hash/pubkey/signature encoding in memory (zero allocation decode from wire, can tolerate whitespace, at a speed penalty)
-* custom badger-based event store with an optional garbage collector that uses fast binary encoder for storage of events.
+* efficient, mutable byte slice-based hash/pubkey/signature encoding in memory (zero allocation decode from wire of all but id/pubkey/signature, can tolerate whitespace, at a speed penalty)
+* custom badger-based event store that uses fast binary encoder for storage of events, and has a complete set of indexes so it doesn't need to decode events for any query until delivering them.
 * link:cmd/vainstr[vainstr] vanity npub generator that can mine a 5-letter suffix in around 15 minutes on a 6 core Ryzen 5 processor using the CGO bitcoin core signature library.
 * reverse proxy tool link:cmd/lerproxy[lerproxy] with support for Go vanity imports and https://github.com/nostr-protocol/nips/blob/master/05.md[nip-05] npub DNS verification and own TLS certificates
 * link:https://github.com/nostr-protocol/nips/blob/master/98.md[nip-98] implementation with new expiring variant for vanilla HTTP tools and browsers.
@@ -85,6 +85,46 @@ To see the current active configuration:
 orly env
 ----

+To see the help information:
+
+----
+orly help
+----
+
+Environment variables that configure orly:
+
+[cols="4"]
+|===
+| Environment variable | type | default | description
+| ORLY_APP_NAME | string | orly |
+| ORLY_CONFIG_DIR | string | ~/.config/orly | location for configuration file, which has the name '.env' to make it harder to delete, and is a standard environment KEY=value<newline>... style
+| ORLY_STATE_DATA_DIR | string | ~/.local/state/orly | storage location for state data affected by dynamic interactive interfaces
+| ORLY_DATA_DIR | string | ~/.local/cache/orly | storage location for the event store
+| ORLY_LISTEN | string | 0.0.0.0 | network listen address
+| ORLY_PORT | int | 3334 | port to listen on
+| ORLY_LOG_LEVEL | string | info | debug level: fatal error warn info debug trace
+| ORLY_DB_LOG_LEVEL | string | info | debug level: fatal error warn info debug trace
+| ORLY_PPROF | string | <empty> | enable pprof on 127.0.0.1:6060
+| ORLY_AUTH_REQUIRED | bool | false | require authentication for all requests
+| ORLY_PUBLIC_READABLE | bool | true | allow public read access to regardless of whether the client is authed
+| ORLY_SPIDER_SEEDS | []string | wss://profiles.nostr1.com/,
+wss://relay.nostr.band/,
+wss://relay.damus.io/,
+wss://nostr.wine/,
+wss://nostr.land/,
+wss://theforest.nostr1.com/,
+wss://profiles.nostr1.com
+| seeds to use for the spider (relays that are looked up initially to find owner relay lists) (comma separated)
+| ORLY_SPIDER_TYPE | string | directory | whether to spider, and what degree of spidering: none, directory, follows (follows means to the second degree of the follow graph)
+| ORLY_SPIDER_FREQUENCY | time.Duration | 1h | how often to run the spider, uses notation 0h0m0s
+| ORLY_SPIDER_SECOND_DEGREE | bool | true | whether to enable spidering the second degree of follows for non-directory events if ORLY_SPIDER_TYPE is set to 'follows'
+| ORLY_OWNERS | []string | [] | list of users whose follow lists designate whitelisted users who can publish events, and who can read if public readable is false (comma separated)
+| ORLY_PRIVATE | bool | false | do not spider for user metadata because the relay is private and this would leak relay memberships
+| ORLY_WHITELIST | []string | [] | only allow connections from this list of IP addresses
+| ORLY_SECRET_KEY | string | <empty> | secret key for relay cluster replication authentication
+| ORLY_PEER_RELAYS | []string | [] | list of peer relays URLs that new events are pushed to in format <pubkey>\|<url>
+|===
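As a sketch, a minimal persistent `.env` assembled purely from the defaults documented above might look like this (illustrative values only; any of the variables in the table can be set the same way):

----
ORLY_PORT=3334
ORLY_LOG_LEVEL=info
ORLY_SPIDER_TYPE=directory
ORLY_SPIDER_FREQUENCY=1h
----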

 === Create Persistent Configuration

 This output can be directed to the profile location to make the settings editable without manually setting them on the
@@ -122,8 +162,6 @@ messages and it uses and parses relay lists, and all that other stuff.
 [#_simplified_nostr]
 === Simplified Nostr

-NOTE: this is not currently implemented. coming soon TM
-
 Rather than write a text that will likely fall out of date very quickly, simply run `orly` and visit its listener
 address (eg link:http://localhost:3334/api[http://localhost:3334/api]) to see the full documentation.