Compare commits

...

16 Commits

Author SHA1 Message Date
4532def9f5 Remove large outdated stacktrace.txt log file.
- Deleted auto-generated `stacktrace.txt` file to reduce repository clutter and maintain relevance of retained files.
2025-09-20 12:07:55 +01:00
90f21fbcd1 Add detailed benchmark results for multiple relays.
- Included results for `relayer-basic`, `strfry`, and `nostr-rs-relay` relay benchmarks.
- Comprehensive performance metrics added for throughput, latency, query, and concurrent operations.
- Reports saved as plain text and AsciiDoc formats.
2025-09-20 12:06:57 +01:00
81a40c04e5 Refactor publishCacheEvents for concurrent publishing and optimize database access.
- Updated `publishCacheEvents` to utilize multiple concurrent connections for event publishing.
- Introduced worker-based architecture leveraging `runtime.NumCPU` for parallel uploads.
- Optimized database fetch logic in `FetchEventsBySerials` for improved maintainability and performance.
- Bumped version to `v0.4.8`.
2025-09-20 04:10:59 +01:00
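The 81a40c04e5 refactor above amounts to a fixed worker pool sized by runtime.NumCPU, each worker draining a shared job channel over its own connection. A minimal sketch of that pattern — the send callback and names are illustrative, not the repo's actual API:

package main

import (
	"fmt"
	"runtime"
	"sync"
)

// publishAll fans events out to one worker per logical CPU; in the real
// code each worker would hold its own relay connection.
func publishAll(events []string, send func(string)) {
	jobs := make(chan string)
	var wg sync.WaitGroup
	for i := 0; i < runtime.NumCPU(); i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for ev := range jobs {
				send(ev)
			}
		}()
	}
	for _, ev := range events {
		jobs <- ev
	}
	close(jobs)
	wg.Wait()
}

func main() {
	publishAll([]string{"a", "b", "c"}, func(ev string) { fmt.Println("sent", ev) })
}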
58a9e83038 Refactor publishCacheEvents and publisherWorker to use fire-and-forget publishing.
- Replaced `Publish` calls with direct event envelope writes, removing wait-for-OK behavior.
- Simplified `publishCacheEvents` logic, removed per-publish timeout contexts, and updated return values.
- Adjusted log messages to reflect "sent" instead of "published."
- Enhanced relay stability with delays between successful publishes.
- Removed unused `publishTimeout` parameter from `publisherWorker` and main logic.
2025-09-20 03:48:50 +01:00
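Fire-and-forget here means writing the EVENT envelope and moving on rather than blocking on the relay's OK response, with a short pause between successful writes. A rough sketch under those assumptions — write stands in for the envelope write, and the 5ms delay is illustrative:

import "time"

// sendAll writes each pre-encoded envelope without waiting for an OK,
// pausing briefly between successful writes to avoid overloading the relay.
func sendAll(write func([]byte) error, envelopes [][]byte) (sent int) {
	for _, raw := range envelopes {
		if err := write(raw); err != nil {
			continue // fire-and-forget: the real code logs "sent", not "published"
		}
		sent++
		time.Sleep(5 * time.Millisecond)
	}
	return
}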
22cde96f3f Remove bufpool references and unused imports, optimize memory operations.
- Removed `bufpool` usage throughout `tag`, `tags`, and `event` packages for memory efficiency.
- Replaced in-place buffer modifications with independent, deep-copied allocations to prevent unintended mutations.
- Added new `Clone` method for deep copying `event.E`.
- Ensured valid JSON emission for nil `Tags` in `event` marshaling.
- Introduced `cmd/stresstest` for relay stress-testing with detailed workload generation and query simulation.
2025-09-19 16:17:44 +01:00
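The bufpool removal trades shared, reusable buffers for independent allocations; the core idiom is a deep copy, so that freeing or reusing the source can no longer mutate data held elsewhere:

// deepCopy returns a slice backed by fresh memory; mutating or pooling
// src afterwards cannot affect the returned copy.
func deepCopy(src []byte) []byte {
	return append([]byte(nil), src...)
}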
49a172820a Remove unused dependencies and update lol.mleku.dev to v1.0.3. 2025-09-15 05:08:16 +01:00
9d2bf173fe Bump lol.mleku.dev to v1.0.3. 2025-09-15 05:05:52 +01:00
e521b788fb Delete outdated benchmark reports and results.
Removed old benchmark reports and detailed logs from the repository to clean up unnecessary files. These reports appear to be auto-generated and no longer relevant for ongoing development.
2025-09-15 05:00:19 +01:00
f5cce92bf8 Handle nil receiver S in ContainsAny method within tags.go. 2025-09-13 21:23:59 +01:00
2ccdc5e756 Bump version to v0.4.7. 2025-09-13 21:19:01 +01:00
173a34784f Remove redundant logging in acl/follows.go and get-indexes-from-filter.go, handle nil Tags in event.go. 2025-09-13 21:17:53 +01:00
a75e0994f9 Add debug logging for admins in ACL follows evaluation logic. 2025-09-13 21:08:29 +01:00
60e925d748 Added profiler tooling to enable automated generation of profile reports. 2025-09-13 21:05:30 +01:00
3d2f970f04 Added profiler tooling to enable automated generation of profile reports. 2025-09-13 20:49:25 +01:00
935eb1fb0b Added profiler tooling to enable automated generation of profile reports. 2025-09-13 13:06:52 +01:00
509aac3819 Remove unused ACL integration and related configuration logic, bump version to v0.4.6.
2025-09-13 11:33:01 +01:00
58 changed files with 3719 additions and 17229 deletions

.dockerignore (new file, +18)

@@ -0,0 +1,18 @@
# Exclude heavy or host-specific data from Docker build context
# Fixes: failed to solve: error from sender: open cmd/benchmark/data/postgres: permission denied
# Benchmark data and reports (mounted at runtime via volumes)
cmd/benchmark/data/
cmd/benchmark/reports/
# VCS and OS cruft
.git
.gitignore
**/.DS_Store
**/Thumbs.db
# Go build cache and binaries
**/bin/
**/dist/
**/build/
**/*.out

.gitignore (vendored, +1)

@@ -91,6 +91,7 @@ cmd/benchmark/data
!Dockerfile*
!strfry.conf
!config.toml
+!.dockerignore
# ...even if they are in subdirectories
!*/
/blocklist.json


@@ -23,19 +23,23 @@ import (
// and default values. It defines parameters for app behaviour, storage
// locations, logging, and network settings used across the relay service.
type C struct {
-AppName string `env:"ORLY_APP_NAME" usage:"set a name to display on information about the relay" default:"ORLY"`
-DataDir string `env:"ORLY_DATA_DIR" usage:"storage location for the event store" default:"~/.local/share/ORLY"`
-Listen string `env:"ORLY_LISTEN" default:"0.0.0.0" usage:"network listen address"`
-Port int `env:"ORLY_PORT" default:"3334" usage:"port to listen on"`
-HealthPort int `env:"ORLY_HEALTH_PORT" default:"0" usage:"optional health check HTTP port; 0 disables"`
-LogLevel string `env:"ORLY_LOG_LEVEL" default:"info" usage:"relay log level: fatal error warn info debug trace"`
-DBLogLevel string `env:"ORLY_DB_LOG_LEVEL" default:"info" usage:"database log level: fatal error warn info debug trace"`
-LogToStdout bool `env:"ORLY_LOG_TO_STDOUT" default:"false" usage:"log to stdout instead of stderr"`
-Pprof string `env:"ORLY_PPROF" usage:"enable pprof in modes: cpu,memory,allocation"`
-IPWhitelist []string `env:"ORLY_IP_WHITELIST" usage:"comma-separated list of IP addresses to allow access from, matches on prefixes to allow private subnets, eg 10.0.0 = 10.0.0.0/8"`
-Admins []string `env:"ORLY_ADMINS" usage:"comma-separated list of admin npubs"`
-Owners []string `env:"ORLY_OWNERS" usage:"comma-separated list of owner npubs, who have full control of the relay for wipe and restart and other functions"`
-ACLMode string `env:"ORLY_ACL_MODE" usage:"ACL mode: follows,none" default:"none"`
+AppName string `env:"ORLY_APP_NAME" usage:"set a name to display on information about the relay" default:"ORLY"`
+DataDir string `env:"ORLY_DATA_DIR" usage:"storage location for the event store" default:"~/.local/share/ORLY"`
+Listen string `env:"ORLY_LISTEN" default:"0.0.0.0" usage:"network listen address"`
+Port int `env:"ORLY_PORT" default:"3334" usage:"port to listen on"`
+HealthPort int `env:"ORLY_HEALTH_PORT" default:"0" usage:"optional health check HTTP port; 0 disables"`
+EnableShutdown bool `env:"ORLY_ENABLE_SHUTDOWN" default:"false" usage:"if true, expose /shutdown on the health port to gracefully stop the process (for profiling)"`
+LogLevel string `env:"ORLY_LOG_LEVEL" default:"info" usage:"relay log level: fatal error warn info debug trace"`
+DBLogLevel string `env:"ORLY_DB_LOG_LEVEL" default:"info" usage:"database log level: fatal error warn info debug trace"`
+LogToStdout bool `env:"ORLY_LOG_TO_STDOUT" default:"false" usage:"log to stdout instead of stderr"`
+Pprof string `env:"ORLY_PPROF" usage:"enable pprof in modes: cpu,memory,allocation,heap,block,goroutine,threadcreate,mutex"`
+PprofPath string `env:"ORLY_PPROF_PATH" usage:"optional directory to write pprof profiles into (inside container); default is temporary dir"`
+PprofHTTP bool `env:"ORLY_PPROF_HTTP" default:"false" usage:"if true, expose net/http/pprof on port 6060"`
+OpenPprofWeb bool `env:"ORLY_OPEN_PPROF_WEB" default:"false" usage:"if true, automatically open the pprof web viewer when profiling is enabled"`
+IPWhitelist []string `env:"ORLY_IP_WHITELIST" usage:"comma-separated list of IP addresses to allow access from, matches on prefixes to allow private subnets, eg 10.0.0 = 10.0.0.0/8"`
+Admins []string `env:"ORLY_ADMINS" usage:"comma-separated list of admin npubs"`
+Owners []string `env:"ORLY_OWNERS" usage:"comma-separated list of owner npubs, who have full control of the relay for wipe and restart and other functions"`
+ACLMode string `env:"ORLY_ACL_MODE" usage:"ACL mode: follows,none" default:"none"`
}
// New creates and initializes a new configuration object for the relay
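The struct above pairs each field with env, default, and usage tags. A hypothetical sketch of how such tags can be resolved with reflection (string fields only; the relay's real loader may differ):

package main

import (
	"fmt"
	"os"
	"reflect"
)

type cfg struct {
	AppName string `env:"ORLY_APP_NAME" default:"ORLY"`
	Listen  string `env:"ORLY_LISTEN" default:"0.0.0.0"`
}

// fill sets each string field from its env var, falling back to the
// default tag when the variable is unset.
func fill(c *cfg) {
	v := reflect.ValueOf(c).Elem()
	for i := 0; i < v.NumField(); i++ {
		f := v.Type().Field(i)
		val := os.Getenv(f.Tag.Get("env"))
		if val == "" {
			val = f.Tag.Get("default")
		}
		v.Field(i).SetString(val)
	}
}

func main() {
	var c cfg
	fill(&c)
	fmt.Printf("%+v\n", c) // {AppName:ORLY Listen:0.0.0.0} unless overridden
}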


@@ -151,7 +151,9 @@ func (l *Listener) HandleEvent(msg []byte) (err error) {
return
}
// Deliver the event to subscribers immediately after sending OK response
-go l.publishers.Deliver(env.E)
+// Clone the event to prevent corruption when the original is freed
+clonedEvent := env.E.Clone()
+go l.publishers.Deliver(clonedEvent)
log.D.F("saved event %0x", env.E.ID)
var isNewFromAdmin bool
for _, admin := range l.Admins {
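Deliver runs in a goroutine, so without the clone it could read an event whose buffers have already been freed back to a pool. A minimal sketch of the deep Clone this relies on — the field names are assumptions, not the actual event.E definition:

type E struct {
	ID, Pubkey, Content []byte
	CreatedAt           int64
	Kind                uint16
}

// Clone copies every byte slice into fresh memory so the original can
// be freed or reused without corrupting the copy.
func (e *E) Clone() *E {
	return &E{
		ID:        append([]byte(nil), e.ID...),
		Pubkey:    append([]byte(nil), e.Pubkey...),
		Content:   append([]byte(nil), e.Content...),
		CreatedAt: e.CreatedAt,
		Kind:      e.Kind,
	}
}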


@@ -19,7 +19,7 @@ const (
DefaultWriteWait = 10 * time.Second
DefaultPongWait = 60 * time.Second
DefaultPingWait = DefaultPongWait / 2
-DefaultReadTimeout = 3 * time.Second // Read timeout to detect stalled connections
+DefaultReadTimeout = 7 * time.Second // Read timeout to detect stalled connections
DefaultWriteTimeout = 3 * time.Second
DefaultMaxMessageSize = 1 * units.Mb
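Raising DefaultReadTimeout from 3s to 7s gives slow peers more headroom before a stalled connection is dropped. The usual mechanics, as a sketch against the standard library (the relay's websocket layer will differ in detail):

import (
	"net"
	"time"
)

// readWithTimeout refreshes the read deadline before each read, so a
// peer that sends nothing for longer than timeout fails the read with
// a timeout error instead of blocking forever.
func readWithTimeout(conn net.Conn, buf []byte, timeout time.Duration) (int, error) {
	if err := conn.SetReadDeadline(time.Now().Add(timeout)); err != nil {
		return 0, err
	}
	return conn.Read(buf)
}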


@@ -8,7 +8,6 @@ import (
"lol.mleku.dev/chk"
"lol.mleku.dev/log"
"next.orly.dev/app/config"
acl "next.orly.dev/pkg/acl"
database "next.orly.dev/pkg/database"
"next.orly.dev/pkg/encoders/bech32encoding"
"next.orly.dev/pkg/protocol/publish"
@@ -46,10 +45,6 @@ func Run(
publishers: publish.New(NewPublisher(ctx)),
Admins: adminKeys,
}
-// provide publisher to ACL so background sync can dispatch events
-if err := acl.Registry.Configure(cfg, db, ctx, l.publishers); chk.E(err) {
-// if configuration fails, proceed but log; ACL might be 'none'
-}
addr := fmt.Sprintf("%s:%d", cfg.Listen, cfg.Port)
log.I.F("starting listener on http://%s", addr)
go func() {


@@ -101,17 +101,17 @@ func (p *P) Receive(msg typer.T) {
if m.Cancel {
if m.Id == "" {
p.removeSubscriber(m.Conn)
log.D.F("removed listener %s", m.remote)
// log.D.F("removed listener %s", m.remote)
} else {
p.removeSubscriberId(m.Conn, m.Id)
-log.D.C(
-func() string {
-return fmt.Sprintf(
-"removed subscription %s for %s", m.Id,
-m.remote,
-)
-},
-)
+// log.D.C(
+// func() string {
+// return fmt.Sprintf(
+// "removed subscription %s for %s", m.Id,
+// m.remote,
+// )
+// },
+// )
}
return
}
@@ -123,27 +123,27 @@ func (p *P) Receive(msg typer.T) {
S: m.Filters, remote: m.remote, AuthedPubkey: m.AuthedPubkey,
}
p.Map[m.Conn] = subs
-log.D.C(
-func() string {
-return fmt.Sprintf(
-"created new subscription for %s, %s",
-m.remote,
-m.Filters.Marshal(nil),
-)
-},
-)
+// log.D.C(
+// func() string {
+// return fmt.Sprintf(
+// "created new subscription for %s, %s",
+// m.remote,
+// m.Filters.Marshal(nil),
+// )
+// },
+// )
} else {
subs[m.Id] = Subscription{
S: m.Filters, remote: m.remote, AuthedPubkey: m.AuthedPubkey,
}
-log.D.C(
-func() string {
-return fmt.Sprintf(
-"added subscription %s for %s", m.Id,
-m.remote,
-)
-},
-)
+// log.D.C(
+// func() string {
+// return fmt.Sprintf(
+// "added subscription %s for %s", m.Id,
+// m.remote,
+// )
+// },
+// )
}
}
}


@@ -34,13 +34,18 @@ COPY cmd/benchmark/benchmark-runner.sh /app/benchmark-runner
# Make scripts executable
RUN chmod +x /app/benchmark-runner
-# Create reports directory
-RUN mkdir -p /reports
+# Create runtime user and reports directory owned by uid 1000
+RUN adduser -u 1000 -D appuser && \
+mkdir -p /reports && \
+chown -R 1000:1000 /app /reports
# Environment variables
ENV BENCHMARK_EVENTS=10000
ENV BENCHMARK_WORKERS=8
ENV BENCHMARK_DURATION=60s
+# Drop privileges: run as uid 1000
+USER 1000:1000
# Run the benchmark runner
CMD ["/app/benchmark-runner"]


@@ -6,7 +6,7 @@ WORKDIR /build
COPY . .
# Build the basic-badger example
-RUN cd examples/basic-badger && \
+RUN echo ${pwd};cd examples/basic-badger && \
go mod tidy && \
CGO_ENABLED=0 go build -o khatru-badger .


@@ -46,7 +46,13 @@ RUN go mod download
COPY . .
# Build the relay
-RUN CGO_ENABLED=1 GOOS=linux go build -o relay .
+RUN CGO_ENABLED=1 GOOS=linux go build -gcflags "all=-N -l" -o relay .
+# Create non-root user (uid 1000) for runtime in builder stage (used by analyzer)
+RUN useradd -u 1000 -m -s /bin/bash appuser && \
+chown -R 1000:1000 /build
+# Switch to uid 1000 for any subsequent runtime use of this stage
+USER 1000:1000
# Final stage
FROM ubuntu:22.04
@@ -60,8 +66,10 @@ WORKDIR /app
# Copy binary from builder
COPY --from=builder /build/relay /app/relay
-# Create data directory
-RUN mkdir -p /data
+# Create runtime user and writable directories
+RUN useradd -u 1000 -m -s /bin/bash appuser && \
+mkdir -p /data /profiles /app && \
+chown -R 1000:1000 /data /profiles /app
# Expose port
EXPOSE 8080
@@ -70,11 +78,14 @@ EXPOSE 8080
ENV ORLY_DATA_DIR=/data
ENV ORLY_LISTEN=0.0.0.0
ENV ORLY_PORT=8080
-ENV ORLY_LOG_LEVEL=info
+ENV ORLY_LOG_LEVEL=off
# Health check
HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
CMD bash -lc "code=\$(curl -s -o /dev/null -w '%{http_code}' http://127.0.0.1:8080 || echo 000); echo \$code | grep -E '^(101|200|400|404|426)$' >/dev/null || exit 1"
+# Drop privileges: run as uid 1000
+USER 1000:1000
# Run the relay
CMD ["/app/relay"]


@@ -11,7 +11,7 @@ services:
- ORLY_DATA_DIR=/data
- ORLY_LISTEN=0.0.0.0
- ORLY_PORT=8080
-- ORLY_LOG_LEVEL=info
+- ORLY_LOG_LEVEL=off
volumes:
- ./data/next-orly:/data
ports:


@@ -2,7 +2,6 @@ package main
import (
"context"
"crypto/rand"
"flag"
"fmt"
"log"
@@ -63,6 +62,7 @@ type Benchmark struct {
}
func main() {
+// lol.SetLogLevel("trace")
config := parseFlags()
if config.RelayURL != "" {
@@ -96,7 +96,7 @@ func parseFlags() *BenchmarkConfig {
&config.DataDir, "datadir", "/tmp/benchmark_db", "Database directory",
)
flag.IntVar(
&config.NumEvents, "events", 100000, "Number of events to generate",
&config.NumEvents, "events", 10000, "Number of events to generate",
)
flag.IntVar(
&config.ConcurrentWorkers, "workers", runtime.NumCPU(),
@@ -133,8 +133,16 @@ func runNetworkLoad(cfg *BenchmarkConfig) {
"Network mode: relay=%s workers=%d rate=%d ev/s per worker duration=%s\n",
cfg.RelayURL, cfg.NetWorkers, cfg.NetRate, cfg.TestDuration,
)
-ctx, cancel := context.WithTimeout(context.Background(), cfg.TestDuration)
+// Create a timeout context for benchmark control only, not for connections
+timeoutCtx, cancel := context.WithTimeout(
+context.Background(), cfg.TestDuration,
+)
defer cancel()
+// Use a separate background context for relay connections to avoid
+// cancelling the server when the benchmark timeout expires
+connCtx := context.Background()
var wg sync.WaitGroup
if cfg.NetWorkers <= 0 {
cfg.NetWorkers = 1
@@ -146,8 +154,8 @@ func runNetworkLoad(cfg *BenchmarkConfig) {
wg.Add(1)
go func(workerID int) {
defer wg.Done()
-// Connect to relay
-rl, err := ws.RelayConnect(ctx, cfg.RelayURL)
+// Connect to relay using non-cancellable context
+rl, err := ws.RelayConnect(connCtx, cfg.RelayURL)
if err != nil {
fmt.Printf(
"worker %d: failed to connect to %s: %v\n", workerID,
@@ -174,17 +182,28 @@ func runNetworkLoad(cfg *BenchmarkConfig) {
f.Authors = tag.NewWithCap(1)
f.Authors.T = append(f.Authors.T, keys.Pub())
f.Since = timestamp.FromUnix(since)
-sub, err := rl.Subscribe(ctx, filter.NewS(f))
+sub, err := rl.Subscribe(connCtx, filter.NewS(f))
if err != nil {
fmt.Printf("worker %d: subscribe error: %v\n", workerID, err)
fmt.Printf(
"worker %d: subscribe error: %v\n", workerID, err,
)
return
}
defer sub.Unsub()
recv := 0
for {
select {
case <-ctx.Done():
fmt.Printf("worker %d: subscriber exiting after %d events\n", workerID, recv)
case <-timeoutCtx.Done():
fmt.Printf(
"worker %d: subscriber exiting after %d events (benchmark timeout: %v)\n",
workerID, recv, timeoutCtx.Err(),
)
return
case <-rl.Context().Done():
fmt.Printf(
"worker %d: relay connection closed; cause=%v lastErr=%v\n",
workerID, rl.ConnectionCause(), rl.LastError(),
)
return
case <-sub.EndOfStoredEvents:
// continue streaming live events
@@ -194,7 +213,10 @@ func runNetworkLoad(cfg *BenchmarkConfig) {
}
recv++
if recv%100 == 0 {
fmt.Printf("worker %d: received %d matching events\n", workerID, recv)
fmt.Printf(
"worker %d: received %d matching events\n",
workerID, recv,
)
}
ev.Free()
}
@@ -207,7 +229,7 @@ func runNetworkLoad(cfg *BenchmarkConfig) {
count := 0
for {
select {
case <-ctx.Done():
case <-timeoutCtx.Done():
fmt.Printf(
"worker %d: stopping after %d publishes\n", workerID,
count,
@@ -233,12 +255,16 @@ func runNetworkLoad(cfg *BenchmarkConfig) {
select {
case err := <-ch:
if err != nil {
fmt.Printf("worker %d: write error: %v\n", workerID, err)
fmt.Printf(
"worker %d: write error: %v\n", workerID, err,
)
}
default:
}
if count%100 == 0 {
fmt.Printf("worker %d: sent %d events\n", workerID, count)
fmt.Printf(
"worker %d: sent %d events\n", workerID, count,
)
}
ev.Free()
count++
@@ -284,15 +310,25 @@ func (b *Benchmark) Close() {
func (b *Benchmark) RunSuite() {
for round := 1; round <= 2; round++ {
fmt.Printf("\n=== Starting test round %d/2 ===\n", round)
fmt.Printf("RunPeakThroughputTest..\n")
b.RunPeakThroughputTest()
time.Sleep(10 * time.Second)
fmt.Printf("RunBurstPatternTest..\n")
b.RunBurstPatternTest()
time.Sleep(10 * time.Second)
fmt.Printf("RunMixedReadWriteTest..\n")
b.RunMixedReadWriteTest()
time.Sleep(10 * time.Second)
fmt.Printf("RunQueryTest..\n")
b.RunQueryTest()
time.Sleep(10 * time.Second)
fmt.Printf("RunConcurrentQueryStoreTest..\n")
b.RunConcurrentQueryStoreTest()
if round < 2 {
fmt.Println("\nPausing 10s before next round...")
time.Sleep(10 * time.Second)
}
fmt.Println("\n=== Test round completed ===\n")
}
}
@@ -595,21 +631,343 @@ func (b *Benchmark) RunMixedReadWriteTest() {
fmt.Printf("Combined ops/sec: %.2f\n", result.EventsPerSecond)
}
// RunQueryTest specifically benchmarks the QueryEvents function performance
func (b *Benchmark) RunQueryTest() {
fmt.Println("\n=== Query Test ===")
start := time.Now()
var totalQueries int64
var queryLatencies []time.Duration
var errors []error
var mu sync.Mutex
// Pre-populate with events for querying
numSeedEvents := 10000
seedEvents := b.generateEvents(numSeedEvents)
ctx := context.Background()
fmt.Printf(
"Pre-populating database with %d events for query tests...\n",
numSeedEvents,
)
for _, ev := range seedEvents {
b.db.SaveEvent(ctx, ev)
}
// Create different types of filters for querying
filters := []*filter.F{
func() *filter.F { // Kind filter
f := filter.New()
f.Kinds = kind.NewS(kind.TextNote)
limit := uint(100)
f.Limit = &limit
return f
}(),
func() *filter.F { // Tag filter
f := filter.New()
f.Tags = tag.NewS(
tag.NewFromBytesSlice(
[]byte("t"), []byte("benchmark"),
),
)
limit := uint(100)
f.Limit = &limit
return f
}(),
func() *filter.F { // Mixed filter
f := filter.New()
f.Kinds = kind.NewS(kind.TextNote)
f.Tags = tag.NewS(
tag.NewFromBytesSlice(
[]byte("t"), []byte("benchmark"),
),
)
limit := uint(50)
f.Limit = &limit
return f
}(),
}
var wg sync.WaitGroup
// Start query workers
for i := 0; i < b.config.ConcurrentWorkers; i++ {
wg.Add(1)
go func(workerID int) {
defer wg.Done()
filterIndex := workerID % len(filters)
queryCount := 0
for time.Since(start) < b.config.TestDuration {
// Rotate through different filters
f := filters[filterIndex]
filterIndex = (filterIndex + 1) % len(filters)
// Execute query
queryStart := time.Now()
events, err := b.db.QueryEvents(ctx, f)
queryLatency := time.Since(queryStart)
mu.Lock()
if err != nil {
errors = append(errors, err)
} else {
totalQueries++
queryLatencies = append(queryLatencies, queryLatency)
// Free event memory
for _, ev := range events {
ev.Free()
}
}
mu.Unlock()
queryCount++
if queryCount%10 == 0 {
time.Sleep(10 * time.Millisecond) // Small delay every 10 queries
}
}
}(i)
}
wg.Wait()
duration := time.Since(start)
// Calculate metrics
result := &BenchmarkResult{
TestName: "Query Performance",
Duration: duration,
TotalEvents: int(totalQueries),
EventsPerSecond: float64(totalQueries) / duration.Seconds(),
ConcurrentWorkers: b.config.ConcurrentWorkers,
MemoryUsed: getMemUsage(),
}
if len(queryLatencies) > 0 {
result.AvgLatency = calculateAvgLatency(queryLatencies)
result.P90Latency = calculatePercentileLatency(queryLatencies, 0.90)
result.P95Latency = calculatePercentileLatency(queryLatencies, 0.95)
result.P99Latency = calculatePercentileLatency(queryLatencies, 0.99)
result.Bottom10Avg = calculateBottom10Avg(queryLatencies)
}
result.SuccessRate = 100.0 // No specific target count for queries
for _, err := range errors {
result.Errors = append(result.Errors, err.Error())
}
b.mu.Lock()
b.results = append(b.results, result)
b.mu.Unlock()
fmt.Printf(
"Query test completed: %d queries in %v\n", totalQueries, duration,
)
fmt.Printf("Queries/sec: %.2f\n", result.EventsPerSecond)
fmt.Printf("Avg query latency: %v\n", result.AvgLatency)
fmt.Printf("P95 query latency: %v\n", result.P95Latency)
fmt.Printf("P99 query latency: %v\n", result.P99Latency)
}
// RunConcurrentQueryStoreTest benchmarks the performance of concurrent query and store operations
func (b *Benchmark) RunConcurrentQueryStoreTest() {
fmt.Println("\n=== Concurrent Query/Store Test ===")
start := time.Now()
var totalQueries, totalWrites int64
var queryLatencies, writeLatencies []time.Duration
var errors []error
var mu sync.Mutex
// Pre-populate with some events
numSeedEvents := 5000
seedEvents := b.generateEvents(numSeedEvents)
ctx := context.Background()
fmt.Printf(
"Pre-populating database with %d events for concurrent query/store test...\n",
numSeedEvents,
)
for _, ev := range seedEvents {
b.db.SaveEvent(ctx, ev)
}
// Generate events for writing during the test
writeEvents := b.generateEvents(b.config.NumEvents)
// Create filters for querying
filters := []*filter.F{
func() *filter.F { // Recent events filter
f := filter.New()
f.Since = timestamp.FromUnix(time.Now().Add(-10 * time.Minute).Unix())
limit := uint(100)
f.Limit = &limit
return f
}(),
func() *filter.F { // Kind and tag filter
f := filter.New()
f.Kinds = kind.NewS(kind.TextNote)
f.Tags = tag.NewS(
tag.NewFromBytesSlice(
[]byte("t"), []byte("benchmark"),
),
)
limit := uint(50)
f.Limit = &limit
return f
}(),
}
var wg sync.WaitGroup
// Half of the workers will be readers, half will be writers
numReaders := b.config.ConcurrentWorkers / 2
numWriters := b.config.ConcurrentWorkers - numReaders
// Start query workers (readers)
for i := 0; i < numReaders; i++ {
wg.Add(1)
go func(workerID int) {
defer wg.Done()
filterIndex := workerID % len(filters)
queryCount := 0
for time.Since(start) < b.config.TestDuration {
// Select a filter
f := filters[filterIndex]
filterIndex = (filterIndex + 1) % len(filters)
// Execute query
queryStart := time.Now()
events, err := b.db.QueryEvents(ctx, f)
queryLatency := time.Since(queryStart)
mu.Lock()
if err != nil {
errors = append(errors, err)
} else {
totalQueries++
queryLatencies = append(queryLatencies, queryLatency)
// Free event memory
for _, ev := range events {
ev.Free()
}
}
mu.Unlock()
queryCount++
if queryCount%5 == 0 {
time.Sleep(5 * time.Millisecond) // Small delay
}
}
}(i)
}
// Start write workers
for i := 0; i < numWriters; i++ {
wg.Add(1)
go func(workerID int) {
defer wg.Done()
eventIndex := workerID
writeCount := 0
for time.Since(start) < b.config.TestDuration && eventIndex < len(writeEvents) {
// Write operation
writeStart := time.Now()
_, _, err := b.db.SaveEvent(ctx, writeEvents[eventIndex])
writeLatency := time.Since(writeStart)
mu.Lock()
if err != nil {
errors = append(errors, err)
} else {
totalWrites++
writeLatencies = append(writeLatencies, writeLatency)
}
mu.Unlock()
eventIndex += numWriters
writeCount++
if writeCount%10 == 0 {
time.Sleep(10 * time.Millisecond) // Small delay every 10 writes
}
}
}(i)
}
wg.Wait()
duration := time.Since(start)
// Calculate metrics
totalOps := totalQueries + totalWrites
result := &BenchmarkResult{
TestName: "Concurrent Query/Store",
Duration: duration,
TotalEvents: int(totalOps),
EventsPerSecond: float64(totalOps) / duration.Seconds(),
ConcurrentWorkers: b.config.ConcurrentWorkers,
MemoryUsed: getMemUsage(),
}
// Calculate combined latencies for overall metrics
allLatencies := append(queryLatencies, writeLatencies...)
if len(allLatencies) > 0 {
result.AvgLatency = calculateAvgLatency(allLatencies)
result.P90Latency = calculatePercentileLatency(allLatencies, 0.90)
result.P95Latency = calculatePercentileLatency(allLatencies, 0.95)
result.P99Latency = calculatePercentileLatency(allLatencies, 0.99)
result.Bottom10Avg = calculateBottom10Avg(allLatencies)
}
result.SuccessRate = 100.0 // No specific target
for _, err := range errors {
result.Errors = append(result.Errors, err.Error())
}
b.mu.Lock()
b.results = append(b.results, result)
b.mu.Unlock()
// Calculate separate metrics for queries and writes
var queryAvg, writeAvg time.Duration
if len(queryLatencies) > 0 {
queryAvg = calculateAvgLatency(queryLatencies)
}
if len(writeLatencies) > 0 {
writeAvg = calculateAvgLatency(writeLatencies)
}
fmt.Printf(
"Concurrent test completed: %d operations (%d queries, %d writes) in %v\n",
totalOps, totalQueries, totalWrites, duration,
)
fmt.Printf("Operations/sec: %.2f\n", result.EventsPerSecond)
fmt.Printf("Avg latency: %v\n", result.AvgLatency)
fmt.Printf("Avg query latency: %v\n", queryAvg)
fmt.Printf("Avg write latency: %v\n", writeAvg)
fmt.Printf("P95 latency: %v\n", result.P95Latency)
fmt.Printf("P99 latency: %v\n", result.P99Latency)
}
func (b *Benchmark) generateEvents(count int) []*event.E {
events := make([]*event.E, count)
now := timestamp.Now()
// Generate a keypair for signing all events
var keys p256k.Signer
if err := keys.Generate(); err != nil {
log.Fatalf("Failed to generate keys for benchmark events: %v", err)
}
for i := 0; i < count; i++ {
ev := event.New()
// Generate random 32-byte ID
ev.ID = make([]byte, 32)
rand.Read(ev.ID)
// Generate random 32-byte pubkey
ev.Pubkey = make([]byte, 32)
rand.Read(ev.Pubkey)
ev.CreatedAt = now.I64()
ev.Kind = kind.TextNote.K
ev.Content = []byte(fmt.Sprintf(
@@ -624,6 +982,11 @@ func (b *Benchmark) generateEvents(count int) []*event.E {
),
)
+// Properly sign the event instead of generating fake signatures
+if err := ev.Sign(&keys); err != nil {
+log.Fatalf("Failed to sign event %d: %v", i, err)
+}
events[i] = ev
}
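The benchmark hunks above separate two lifetimes: a deadline context that ends the measurement window, and a background context for the relay connections so the expiring deadline cannot cancel them mid-operation. A self-contained sketch of that split, with a stand-in connection type:

package main

import (
	"context"
	"fmt"
	"time"
)

type conn struct{} // stand-in for a relay client connection

func dial(ctx context.Context, url string) *conn { return &conn{} }
func (c *conn) send(s string)                    { fmt.Println("sent", s) }

func main() {
	// The deadline governs only how long the workers run.
	timeoutCtx, cancel := context.WithTimeout(context.Background(), 50*time.Millisecond)
	defer cancel()
	// Connections get a separate background context: hitting the
	// benchmark deadline must not tear them down mid-write.
	c := dial(context.Background(), "ws://relay.example")
	for i := 0; ; i++ {
		select {
		case <-timeoutCtx.Done():
			fmt.Println("benchmark window elapsed")
			return
		default:
			c.send(fmt.Sprintf("event-%d", i))
			time.Sleep(10 * time.Millisecond)
		}
	}
}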


@@ -1,104 +0,0 @@
Starting Nostr Relay Benchmark
Data Directory: /tmp/benchmark_khatru-badger_8
Events: 10000, Workers: 8, Duration: 1m0s
20250912195906053114 INF /tmp/benchmark_khatru-badger_8: All 0 tables opened in 0s
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/levels.go:161 /build/pkg/database/logger.go:57
20250912195906053741 INF /tmp/benchmark_khatru-badger_8: Discard stats nextEmptySlot: 0
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/discard.go:55 /build/pkg/database/logger.go:57
20250912195906053768 INF /tmp/benchmark_khatru-badger_8: Set nextTxnTs to 0
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/db.go:358 /build/pkg/database/logger.go:57
20250912195906054020 INF (*types.Uint32)(0xc00570406c)({
value: (uint32) 1
})
/build/pkg/database/migrations.go:65
20250912195906054071 INF migrating to version 1... /build/pkg/database/migrations.go:79
=== Peak Throughput Test ===
Events saved: 10000/10000 (100.0%)
Duration: 611.579176ms
Events/sec: 16351.11
Avg latency: 474.016µs
P95 latency: 479.03µs
P99 latency: 594.73µs
=== Burst Pattern Test ===
Burst completed: 1000 events in 160.976517ms
Burst completed: 1000 events in 153.010415ms
Burst completed: 1000 events in 146.10015ms
Burst completed: 1000 events in 148.403729ms
Burst completed: 1000 events in 141.681801ms
Burst completed: 1000 events in 154.663067ms
Burst completed: 1000 events in 135.960988ms
Burst completed: 1000 events in 136.240589ms
Burst completed: 1000 events in 141.75454ms
Burst completed: 1000 events in 152.485379ms
Burst test completed: 10000 events in 6.496690038s
Events/sec: 1539.25
=== Mixed Read/Write Test ===
Pre-populating database for read tests...
Mixed test completed: 5000 writes, 5000 reads in 37.695370694s
Combined ops/sec: 265.28
================================================================================
BENCHMARK REPORT
================================================================================
Test: Peak Throughput
Duration: 611.579176ms
Total Events: 10000
Events/sec: 16351.11
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 154 MB
Avg Latency: 474.016µs
P95 Latency: 479.03µs
P99 Latency: 594.73µs
----------------------------------------
Test: Burst Pattern
Duration: 6.496690038s
Total Events: 10000
Events/sec: 1539.25
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 207 MB
Avg Latency: 226.602µs
P95 Latency: 239.525µs
P99 Latency: 168.561µs
----------------------------------------
Test: Mixed Read/Write
Duration: 37.695370694s
Total Events: 10000
Events/sec: 265.28
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 132 MB
Avg Latency: 9.930935ms
P95 Latency: 17.75358ms
P99 Latency: 24.256293ms
----------------------------------------
Report saved to: /tmp/benchmark_khatru-badger_8/benchmark_report.txt
20250912195950858706 INF /tmp/benchmark_khatru-badger_8: Lifetime L0 stalled for: 0s
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/db.go:536 /build/pkg/database/logger.go:57
20250912195951643646 INF /tmp/benchmark_khatru-badger_8:
Level 0 [ ]: NumTables: 00. Size: 0 B of 0 B. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 64 MiB
Level 1 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 2 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 3 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 4 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 5 [B]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 6 [ ]: NumTables: 01. Size: 21 MiB of 21 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 4.0 MiB
Level Done
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/db.go:615 /build/pkg/database/logger.go:57
20250912195951645255 INF /tmp/benchmark_khatru-badger_8: database closed /build/pkg/database/database.go:134
RELAY_NAME: khatru-badger
RELAY_URL: ws://khatru-badger:3334
TEST_TIMESTAMP: 2025-09-12T19:59:51+00:00
BENCHMARK_CONFIG:
Events: 10000
Workers: 8
Duration: 60s


@@ -1,104 +0,0 @@
Starting Nostr Relay Benchmark
Data Directory: /tmp/benchmark_khatru-sqlite_8
Events: 10000, Workers: 8, Duration: 1m0s
20250912195817361580 INF /tmp/benchmark_khatru-sqlite_8: All 0 tables opened in 0s
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/levels.go:161 /build/pkg/database/logger.go:57
20250912195817362030 INF /tmp/benchmark_khatru-sqlite_8: Discard stats nextEmptySlot: 0
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/discard.go:55 /build/pkg/database/logger.go:57
20250912195817362064 INF /tmp/benchmark_khatru-sqlite_8: Set nextTxnTs to 0
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/db.go:358 /build/pkg/database/logger.go:57
20250912195817362711 INF (*types.Uint32)(0xc00000005c)({
value: (uint32) 1
})
/build/pkg/database/migrations.go:65
20250912195817362777 INF migrating to version 1... /build/pkg/database/migrations.go:79
=== Peak Throughput Test ===
Events saved: 10000/10000 (100.0%)
Duration: 699.706889ms
Events/sec: 14291.70
Avg latency: 545.724µs
P95 latency: 473.43µs
P99 latency: 478.349µs
=== Burst Pattern Test ===
Burst completed: 1000 events in 138.253122ms
Burst completed: 1000 events in 153.709429ms
Burst completed: 1000 events in 158.711026ms
Burst completed: 1000 events in 152.54677ms
Burst completed: 1000 events in 144.735244ms
Burst completed: 1000 events in 153.236893ms
Burst completed: 1000 events in 150.180515ms
Burst completed: 1000 events in 154.733588ms
Burst completed: 1000 events in 151.252182ms
Burst completed: 1000 events in 150.610613ms
Burst test completed: 10000 events in 6.534724469s
Events/sec: 1530.29
=== Mixed Read/Write Test ===
Pre-populating database for read tests...
Mixed test completed: 5000 writes, 5000 reads in 35.563312501s
Combined ops/sec: 281.19
================================================================================
BENCHMARK REPORT
================================================================================
Test: Peak Throughput
Duration: 699.706889ms
Total Events: 10000
Events/sec: 14291.70
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 154 MB
Avg Latency: 545.724µs
P95 Latency: 473.43µs
P99 Latency: 478.349µs
----------------------------------------
Test: Burst Pattern
Duration: 6.534724469s
Total Events: 10000
Events/sec: 1530.29
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 208 MB
Avg Latency: 205.962µs
P95 Latency: 165.525µs
P99 Latency: 253.411µs
----------------------------------------
Test: Mixed Read/Write
Duration: 35.563312501s
Total Events: 10000
Events/sec: 281.19
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 146 MB
Avg Latency: 9.092604ms
P95 Latency: 19.302571ms
P99 Latency: 16.944829ms
----------------------------------------
Report saved to: /tmp/benchmark_khatru-sqlite_8/benchmark_report.txt
20250912195900161526 INF /tmp/benchmark_khatru-sqlite_8: Lifetime L0 stalled for: 0s
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/db.go:536 /build/pkg/database/logger.go:57
20250912195900909573 INF /tmp/benchmark_khatru-sqlite_8:
Level 0 [ ]: NumTables: 00. Size: 0 B of 0 B. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 64 MiB
Level 1 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 2 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 3 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 4 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 5 [B]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 6 [ ]: NumTables: 01. Size: 21 MiB of 21 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 4.0 MiB
Level Done
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/db.go:615 /build/pkg/database/logger.go:57
20250912195900911092 INF /tmp/benchmark_khatru-sqlite_8: database closed /build/pkg/database/database.go:134
RELAY_NAME: khatru-sqlite
RELAY_URL: ws://khatru-sqlite:3334
TEST_TIMESTAMP: 2025-09-12T19:59:01+00:00
BENCHMARK_CONFIG:
Events: 10000
Workers: 8
Duration: 60s


@@ -1,104 +0,0 @@
Starting Nostr Relay Benchmark
Data Directory: /tmp/benchmark_next-orly_8
Events: 10000, Workers: 8, Duration: 1m0s
20250912195729240522 INF /tmp/benchmark_next-orly_8: All 0 tables opened in 0s
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/levels.go:161 /build/pkg/database/logger.go:57
20250912195729241087 INF /tmp/benchmark_next-orly_8: Discard stats nextEmptySlot: 0
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/discard.go:55 /build/pkg/database/logger.go:57
20250912195729241168 INF /tmp/benchmark_next-orly_8: Set nextTxnTs to 0
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/db.go:358 /build/pkg/database/logger.go:57
20250912195729241759 INF (*types.Uint32)(0xc0001de49c)({
value: (uint32) 1
})
/build/pkg/database/migrations.go:65
20250912195729241847 INF migrating to version 1... /build/pkg/database/migrations.go:79
=== Peak Throughput Test ===
Events saved: 10000/10000 (100.0%)
Duration: 558.618706ms
Events/sec: 17901.30
Avg latency: 433.058µs
P95 latency: 456.738µs
P99 latency: 337.231µs
=== Burst Pattern Test ===
Burst completed: 1000 events in 172.949275ms
Burst completed: 1000 events in 175.209401ms
Burst completed: 1000 events in 156.532197ms
Burst completed: 1000 events in 157.913421ms
Burst completed: 1000 events in 151.37659ms
Burst completed: 1000 events in 161.938783ms
Burst completed: 1000 events in 168.47761ms
Burst completed: 1000 events in 159.951768ms
Burst completed: 1000 events in 170.308111ms
Burst completed: 1000 events in 146.767432ms
Burst test completed: 10000 events in 6.646634323s
Events/sec: 1504.52
=== Mixed Read/Write Test ===
Pre-populating database for read tests...
Mixed test completed: 5000 writes, 5000 reads in 35.548232107s
Combined ops/sec: 281.31
================================================================================
BENCHMARK REPORT
================================================================================
Test: Peak Throughput
Duration: 558.618706ms
Total Events: 10000
Events/sec: 17901.30
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 154 MB
Avg Latency: 433.058µs
P95 Latency: 456.738µs
P99 Latency: 337.231µs
----------------------------------------
Test: Burst Pattern
Duration: 6.646634323s
Total Events: 10000
Events/sec: 1504.52
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 207 MB
Avg Latency: 182.813µs
P95 Latency: 152.86µs
P99 Latency: 204.198µs
----------------------------------------
Test: Mixed Read/Write
Duration: 35.548232107s
Total Events: 10000
Events/sec: 281.31
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 215 MB
Avg Latency: 9.086952ms
P95 Latency: 18.156339ms
P99 Latency: 24.346573ms
----------------------------------------
Report saved to: /tmp/benchmark_next-orly_8/benchmark_report.txt
20250912195811996353 INF /tmp/benchmark_next-orly_8: Lifetime L0 stalled for: 0s
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/db.go:536 /build/pkg/database/logger.go:57
20250912195812308400 INF /tmp/benchmark_next-orly_8:
Level 0 [ ]: NumTables: 00. Size: 0 B of 0 B. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 64 MiB
Level 1 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 2 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 3 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 4 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 5 [B]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 6 [ ]: NumTables: 01. Size: 21 MiB of 21 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 4.0 MiB
Level Done
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/db.go:615 /build/pkg/database/logger.go:57
20250912195812310341 INF /tmp/benchmark_next-orly_8: database closed /build/pkg/database/database.go:134
RELAY_NAME: next-orly
RELAY_URL: ws://next-orly:8080
TEST_TIMESTAMP: 2025-09-12T19:58:12+00:00
BENCHMARK_CONFIG:
Events: 10000
Workers: 8
Duration: 60s


@@ -1,104 +0,0 @@
Starting Nostr Relay Benchmark
Data Directory: /tmp/benchmark_nostr-rs-relay_8
Events: 10000, Workers: 8, Duration: 1m0s
20250912200137539643 INF /tmp/benchmark_nostr-rs-relay_8: All 0 tables opened in 0s
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/levels.go:161 /build/pkg/database/logger.go:57
20250912200137540391 INF /tmp/benchmark_nostr-rs-relay_8: Discard stats nextEmptySlot: 0
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/discard.go:55 /build/pkg/database/logger.go:57
20250912200137540449 INF /tmp/benchmark_nostr-rs-relay_8: Set nextTxnTs to 0
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/db.go:358 /build/pkg/database/logger.go:57
20250912200137540903 INF (*types.Uint32)(0xc0001c24cc)({
value: (uint32) 1
})
/build/pkg/database/migrations.go:65
20250912200137540961 INF migrating to version 1... /build/pkg/database/migrations.go:79
=== Peak Throughput Test ===
Events saved: 10000/10000 (100.0%)
Duration: 657.896815ms
Events/sec: 15199.95
Avg latency: 508.699µs
P95 latency: 1.011413ms
P99 latency: 710.782µs
=== Burst Pattern Test ===
Burst completed: 1000 events in 149.389787ms
Burst completed: 1000 events in 138.154354ms
Burst completed: 1000 events in 139.952633ms
Burst completed: 1000 events in 148.684306ms
Burst completed: 1000 events in 154.779586ms
Burst completed: 1000 events in 163.72717ms
Burst completed: 1000 events in 142.665132ms
Burst completed: 1000 events in 151.637082ms
Burst completed: 1000 events in 143.018896ms
Burst completed: 1000 events in 157.963802ms
Burst test completed: 10000 events in 6.519459944s
Events/sec: 1533.87
=== Mixed Read/Write Test ===
Pre-populating database for read tests...
Mixed test completed: 5000 writes, 5000 reads in 36.26569002s
Combined ops/sec: 275.74
================================================================================
BENCHMARK REPORT
================================================================================
Test: Peak Throughput
Duration: 657.896815ms
Total Events: 10000
Events/sec: 15199.95
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 153 MB
Avg Latency: 508.699µs
P95 Latency: 1.011413ms
P99 Latency: 710.782µs
----------------------------------------
Test: Burst Pattern
Duration: 6.519459944s
Total Events: 10000
Events/sec: 1533.87
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 206 MB
Avg Latency: 217.187µs
P95 Latency: 130.018µs
P99 Latency: 261.728µs
----------------------------------------
Test: Mixed Read/Write
Duration: 36.26569002s
Total Events: 10000
Events/sec: 275.74
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 225 MB
Avg Latency: 9.38757ms
P95 Latency: 19.250416ms
P99 Latency: 20.049957ms
----------------------------------------
Report saved to: /tmp/benchmark_nostr-rs-relay_8/benchmark_report.txt
20250912200220985006 INF /tmp/benchmark_nostr-rs-relay_8: Lifetime L0 stalled for: 0s
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/db.go:536 /build/pkg/database/logger.go:57
20250912200221295381 INF /tmp/benchmark_nostr-rs-relay_8:
Level 0 [ ]: NumTables: 00. Size: 0 B of 0 B. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 64 MiB
Level 1 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 2 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 3 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 4 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 5 [B]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 6 [ ]: NumTables: 01. Size: 21 MiB of 21 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 4.0 MiB
Level Done
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/db.go:615 /build/pkg/database/logger.go:57
20250912200221297677 INF /tmp/benchmark_nostr-rs-relay_8: database closed /build/pkg/database/database.go:134
RELAY_NAME: nostr-rs-relay
RELAY_URL: ws://nostr-rs-relay:8080
TEST_TIMESTAMP: 2025-09-12T20:02:21+00:00
BENCHMARK_CONFIG:
Events: 10000
Workers: 8
Duration: 60s


@@ -1,104 +0,0 @@
Starting Nostr Relay Benchmark
Data Directory: /tmp/benchmark_relayer-basic_8
Events: 10000, Workers: 8, Duration: 1m0s
20250912195956808180 INF /tmp/benchmark_relayer-basic_8: All 0 tables opened in 0s
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/levels.go:161 /build/pkg/database/logger.go:57
20250912195956808720 INF /tmp/benchmark_relayer-basic_8: Discard stats nextEmptySlot: 0
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/discard.go:55 /build/pkg/database/logger.go:57
20250912195956808755 INF /tmp/benchmark_relayer-basic_8: Set nextTxnTs to 0
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/db.go:358 /build/pkg/database/logger.go:57
20250912195956809102 INF (*types.Uint32)(0xc0001bc04c)({
value: (uint32) 1
})
/build/pkg/database/migrations.go:65
20250912195956809190 INF migrating to version 1... /build/pkg/database/migrations.go:79
=== Peak Throughput Test ===
Events saved: 10000/10000 (100.0%)
Duration: 605.231707ms
Events/sec: 16522.60
Avg latency: 466.066µs
P95 latency: 514.849µs
P99 latency: 451.358µs
=== Burst Pattern Test ===
Burst completed: 1000 events in 149.715312ms
Burst completed: 1000 events in 146.385191ms
Burst completed: 1000 events in 147.010481ms
Burst completed: 1000 events in 151.671062ms
Burst completed: 1000 events in 143.215087ms
Burst completed: 1000 events in 137.331431ms
Burst completed: 1000 events in 155.735079ms
Burst completed: 1000 events in 161.246126ms
Burst completed: 1000 events in 140.174417ms
Burst completed: 1000 events in 144.819799ms
Burst test completed: 10000 events in 6.503155987s
Events/sec: 1537.71
=== Mixed Read/Write Test ===
Pre-populating database for read tests...
Mixed test completed: 5000 writes, 5000 reads in 37.45410417s
Combined ops/sec: 266.99
================================================================================
BENCHMARK REPORT
================================================================================
Test: Peak Throughput
Duration: 605.231707ms
Total Events: 10000
Events/sec: 16522.60
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 152 MB
Avg Latency: 466.066µs
P95 Latency: 514.849µs
P99 Latency: 451.358µs
----------------------------------------
Test: Burst Pattern
Duration: 6.503155987s
Total Events: 10000
Events/sec: 1537.71
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 203 MB
Avg Latency: 215.609µs
P95 Latency: 141.91µs
P99 Latency: 204.819µs
----------------------------------------
Test: Mixed Read/Write
Duration: 37.45410417s
Total Events: 10000
Events/sec: 266.99
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 148 MB
Avg Latency: 9.851217ms
P95 Latency: 23.101412ms
P99 Latency: 17.889412ms
----------------------------------------
Report saved to: /tmp/benchmark_relayer-basic_8/benchmark_report.txt
20250912200041372670 INF /tmp/benchmark_relayer-basic_8: Lifetime L0 stalled for: 0s
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/db.go:536 /build/pkg/database/logger.go:57
20250912200041686782 INF /tmp/benchmark_relayer-basic_8:
Level 0 [ ]: NumTables: 00. Size: 0 B of 0 B. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 64 MiB
Level 1 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 2 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 3 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 4 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 5 [B]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 6 [ ]: NumTables: 01. Size: 21 MiB of 21 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 4.0 MiB
Level Done
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/db.go:615 /build/pkg/database/logger.go:57
20250912200041689009 INF /tmp/benchmark_relayer-basic_8: database closed /build/pkg/database/database.go:134
RELAY_NAME: relayer-basic
RELAY_URL: ws://relayer-basic:7447
TEST_TIMESTAMP: 2025-09-12T20:00:41+00:00
BENCHMARK_CONFIG:
Events: 10000
Workers: 8
Duration: 60s


@@ -1,35 +0,0 @@
= NOSTR Relay Benchmark Results
Generated from: aggregate_report.txt
[cols="1,^1,^1,^1,^1,^1,^1",options="header"]
|===
| Metric | next-orly | khatru-sqlite | khatru-badger | relayer-basic | strfry | nostr-rs-relay
| Store Events/sec
| 17901.30 | 14291.70 | 16351.11 | 16522.60 | 15346.12 | 15199.95
| Store Avg Latency #1
| 433.058µs | 545.724µs | 474.016µs | 466.066µs | 506.51µs | 508.699µs
| Store P95 Latency #1
| 456.738µs | 473.43µs | 479.03µs | 514.849µs | 590.442µs | 1.011413ms
| Query Events/sec #2
| 1504.52 | 1530.29 | 1539.25 | 1537.71 | 1534.88 | 1533.87
| Query Avg Latency #2
| 182.813µs | 205.962µs | 226.602µs | 215.609µs | 216.564µs | 217.187µs
| Query P95 Latency #2
| 152.86µs | 165.525µs | 239.525µs | 141.91µs | 267.91µs | 130.018µs
| Concurrent Store/Query Events/sec #3
| 17901.30 | 14291.70 | 16351.11 | 16522.60 | 15346.12 | 15199.95
| Concurrent Store/Query Avg Latency #3
| 9.086952ms | 9.092604ms | 9.930935ms | 9.851217ms | 9.938991ms | 9.38757ms
| Concurrent Store/Query P95 Latency #3
| 18.156339ms | 19.302571ms | 17.75358ms | 23.101412ms | 19.784708ms | 19.250416ms
|===


@@ -1,104 +0,0 @@
Starting Nostr Relay Benchmark
Data Directory: /tmp/benchmark_strfry_8
Events: 10000, Workers: 8, Duration: 1m0s
20250912200046745432 INF /tmp/benchmark_strfry_8: All 0 tables opened in 0s
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/levels.go:161 /build/pkg/database/logger.go:57
20250912200046746116 INF /tmp/benchmark_strfry_8: Discard stats nextEmptySlot: 0
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/discard.go:55 /build/pkg/database/logger.go:57
20250912200046746193 INF /tmp/benchmark_strfry_8: Set nextTxnTs to 0
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/db.go:358 /build/pkg/database/logger.go:57
20250912200046746576 INF (*types.Uint32)(0xc0002a9c4c)({
value: (uint32) 1
})
/build/pkg/database/migrations.go:65
20250912200046746636 INF migrating to version 1... /build/pkg/database/migrations.go:79
=== Peak Throughput Test ===
Events saved: 10000/10000 (100.0%)
Duration: 651.630667ms
Events/sec: 15346.12
Avg latency: 506.51µs
P95 latency: 590.442µs
P99 latency: 278.399µs
=== Burst Pattern Test ===
Burst completed: 1000 events in 148.701372ms
Burst completed: 1000 events in 161.333951ms
Burst completed: 1000 events in 146.993646ms
Burst completed: 1000 events in 155.768019ms
Burst completed: 1000 events in 143.83944ms
Burst completed: 1000 events in 156.208347ms
Burst completed: 1000 events in 150.769887ms
Burst completed: 1000 events in 140.217044ms
Burst completed: 1000 events in 150.831164ms
Burst completed: 1000 events in 135.759058ms
Burst test completed: 10000 events in 6.515183689s
Events/sec: 1534.88
=== Mixed Read/Write Test ===
Pre-populating database for read tests...
Mixed test completed: 5000 writes, 5000 reads in 37.667054484s
Combined ops/sec: 265.48
================================================================================
BENCHMARK REPORT
================================================================================
Test: Peak Throughput
Duration: 651.630667ms
Total Events: 10000
Events/sec: 15346.12
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 152 MB
Avg Latency: 506.51µs
P95 Latency: 590.442µs
P99 Latency: 278.399µs
----------------------------------------
Test: Burst Pattern
Duration: 6.515183689s
Total Events: 10000
Events/sec: 1534.88
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 203 MB
Avg Latency: 216.564µs
P95 Latency: 267.91µs
P99 Latency: 310.46µs
----------------------------------------
Test: Mixed Read/Write
Duration: 37.667054484s
Total Events: 10000
Events/sec: 265.48
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 136 MB
Avg Latency: 9.938991ms
P95 Latency: 19.784708ms
P99 Latency: 18.788985ms
----------------------------------------
Report saved to: /tmp/benchmark_strfry_8/benchmark_report.txt
20250912200131581470 INF /tmp/benchmark_strfry_8: Lifetime L0 stalled for: 0s
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/db.go:536 /build/pkg/database/logger.go:57
20250912200132372653 INF /tmp/benchmark_strfry_8:
Level 0 [ ]: NumTables: 00. Size: 0 B of 0 B. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 64 MiB
Level 1 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 2 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 3 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 4 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 5 [B]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 6 [ ]: NumTables: 01. Size: 21 MiB of 21 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 4.0 MiB
Level Done
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/db.go:615 /build/pkg/database/logger.go:57
20250912200132384548 INF /tmp/benchmark_strfry_8: database closed /build/pkg/database/database.go:134
RELAY_NAME: strfry
RELAY_URL: ws://strfry:8080
TEST_TIMESTAMP: 2025-09-12T20:01:32+00:00
BENCHMARK_CONFIG:
Events: 10000
Workers: 8
Duration: 60s


@@ -1,140 +0,0 @@
================================================================
NOSTR RELAY BENCHMARK AGGREGATE REPORT
================================================================
Generated: 2025-09-12T22:43:29+00:00
Benchmark Configuration:
Events per test: 10000
Concurrent workers: 8
Test duration: 60s
Relays tested: 6
================================================================
SUMMARY BY RELAY
================================================================
Relay: next-orly
----------------------------------------
Status: COMPLETED
Events/sec: 18056.94
Events/sec: 1492.32
Events/sec: 16750.82
Success Rate: 100.0%
Success Rate: 100.0%
Success Rate: 100.0%
Avg Latency: 428.869µs
Bottom 10% Avg Latency: 643.51µs
Avg Latency: 178.04µs
P95 Latency: 607.997µs
P95 Latency: 243.954µs
P95 Latency: 21.665387ms
Relay: khatru-sqlite
----------------------------------------
Status: COMPLETED
Events/sec: 17635.76
Events/sec: 1510.39
Events/sec: 16509.10
Success Rate: 100.0%
Success Rate: 100.0%
Success Rate: 100.0%
Avg Latency: 437.941µs
Bottom 10% Avg Latency: 659.71µs
Avg Latency: 203.563µs
P95 Latency: 621.964µs
P95 Latency: 330.729µs
P95 Latency: 21.838576ms
Relay: khatru-badger
----------------------------------------
Status: COMPLETED
Events/sec: 17312.60
Events/sec: 1508.54
Events/sec: 15933.99
Success Rate: 100.0%
Success Rate: 100.0%
Success Rate: 100.0%
Avg Latency: 448.778µs
Bottom 10% Avg Latency: 664.268µs
Avg Latency: 196.38µs
P95 Latency: 633.085µs
P95 Latency: 293.579µs
P95 Latency: 22.727378ms
Relay: relayer-basic
----------------------------------------
Status: COMPLETED
Events/sec: 15155.00
Events/sec: 1545.44
Events/sec: 14255.58
Success Rate: 100.0%
Success Rate: 100.0%
Success Rate: 100.0%
Avg Latency: 513.243µs
Bottom 10% Avg Latency: 864.746µs
Avg Latency: 273.645µs
P95 Latency: 792.685µs
P95 Latency: 498.989µs
P95 Latency: 22.924497ms
Relay: strfry
----------------------------------------
Status: COMPLETED
Events/sec: 15245.05
Events/sec: 1533.59
Events/sec: 15507.07
Success Rate: 100.0%
Success Rate: 100.0%
Success Rate: 100.0%
Avg Latency: 510.383µs
Bottom 10% Avg Latency: 831.211µs
Avg Latency: 223.359µs
P95 Latency: 769.085µs
P95 Latency: 378.145µs
P95 Latency: 22.152884ms
Relay: nostr-rs-relay
----------------------------------------
Status: COMPLETED
Events/sec: 16312.24
Events/sec: 1502.05
Events/sec: 14131.23
Success Rate: 100.0%
Success Rate: 100.0%
Success Rate: 100.0%
Avg Latency: 476.418µs
Bottom 10% Avg Latency: 722.179µs
Avg Latency: 182.765µs
P95 Latency: 686.836µs
P95 Latency: 257.082µs
P95 Latency: 20.680962ms
================================================================
DETAILED RESULTS
================================================================
Individual relay reports are available in:
- /reports/run_20250912_222649/khatru-badger_results.txt
- /reports/run_20250912_222649/khatru-sqlite_results.txt
- /reports/run_20250912_222649/next-orly_results.txt
- /reports/run_20250912_222649/nostr-rs-relay_results.txt
- /reports/run_20250912_222649/relayer-basic_results.txt
- /reports/run_20250912_222649/strfry_results.txt
================================================================
BENCHMARK COMPARISON TABLE
================================================================
Relay Status Peak Tput/s Avg Latency Success Rate
---- ------ ----------- ----------- ------------
next-orly OK 18056.94 428.869µs 100.0%
khatru-sqlite OK 17635.76 437.941µs 100.0%
khatru-badger OK 17312.60 448.778µs 100.0%
relayer-basic OK 15155.00 513.243µs 100.0%
strfry OK 15245.05 510.383µs 100.0%
nostr-rs-relay OK 16312.24 476.418µs 100.0%
================================================================
End of Report
================================================================


@@ -1,190 +0,0 @@
Starting Nostr Relay Benchmark
Data Directory: /tmp/benchmark_khatru-badger_8
Events: 10000, Workers: 8, Duration: 1m0s
20250912223222496620 INF /tmp/benchmark_khatru-badger_8: All 0 tables opened in 0s
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/levels.go:161 /build/pkg/database/logger.go:57
20250912223222497154 INF /tmp/benchmark_khatru-badger_8: Discard stats nextEmptySlot: 0
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/discard.go:55 /build/pkg/database/logger.go:57
20250912223222497184 INF /tmp/benchmark_khatru-badger_8: Set nextTxnTs to 0
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/db.go:358 /build/pkg/database/logger.go:57
20250912223222497402 INF (*types.Uint32)(0xc0000100fc)({
value: (uint32) 1
})
/build/pkg/database/migrations.go:65
20250912223222497454 INF migrating to version 1... /build/pkg/database/migrations.go:79
=== Starting test round 1/2 ===
=== Peak Throughput Test ===
Events saved: 10000/10000 (100.0%)
Duration: 577.614152ms
Events/sec: 17312.60
Avg latency: 448.778µs
P90 latency: 584.783µs
P95 latency: 633.085µs
P99 latency: 749.537µs
Bottom 10% Avg latency: 664.268µs
=== Burst Pattern Test ===
Burst completed: 1000 events in 161.62554ms
Burst completed: 1000 events in 154.666063ms
Burst completed: 1000 events in 149.999903ms
Burst completed: 1000 events in 169.141205ms
Burst completed: 1000 events in 153.987041ms
Burst completed: 1000 events in 141.227756ms
Burst completed: 1000 events in 168.989116ms
Burst completed: 1000 events in 161.032171ms
Burst completed: 1000 events in 182.128996ms
Burst completed: 1000 events in 161.86147ms
Burst test completed: 10000 events in 6.628942674s
Events/sec: 1508.54
=== Mixed Read/Write Test ===
Pre-populating database for read tests...
Mixed test completed: 5000 writes, 5000 reads in 36.466065909s
Combined ops/sec: 274.23
Pausing 10s before next round...
=== Starting test round 2/2 ===
=== Peak Throughput Test ===
Events saved: 10000/10000 (100.0%)
Duration: 627.589155ms
Events/sec: 15933.99
Avg latency: 489.881µs
P90 latency: 628.857µs
P95 latency: 679.363µs
P99 latency: 828.307µs
Bottom 10% Avg latency: 716.862µs
=== Burst Pattern Test ===
Burst completed: 1000 events in 150.262543ms
Burst completed: 1000 events in 148.027109ms
Burst completed: 1000 events in 139.184066ms
Burst completed: 1000 events in 147.196277ms
Burst completed: 1000 events in 141.143557ms
Burst completed: 1000 events in 138.727197ms
Burst completed: 1000 events in 143.014207ms
Burst completed: 1000 events in 143.355055ms
Burst completed: 1000 events in 162.573956ms
Burst completed: 1000 events in 142.875393ms
Burst test completed: 10000 events in 6.475822519s
Events/sec: 1544.21
=== Mixed Read/Write Test ===
Pre-populating database for read tests...
Mixed test completed: 5000 writes, 4742 reads in 1m0.036644794s
Combined ops/sec: 162.27
================================================================================
BENCHMARK REPORT
================================================================================
Test: Peak Throughput
Duration: 577.614152ms
Total Events: 10000
Events/sec: 17312.60
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 152 MB
Avg Latency: 448.778µs
P90 Latency: 584.783µs
P95 Latency: 633.085µs
P99 Latency: 749.537µs
Bottom 10% Avg Latency: 664.268µs
----------------------------------------
Test: Burst Pattern
Duration: 6.628942674s
Total Events: 10000
Events/sec: 1508.54
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 204 MB
Avg Latency: 196.38µs
P90 Latency: 260.706µs
P95 Latency: 293.579µs
P99 Latency: 385.694µs
Bottom 10% Avg Latency: 317.532µs
----------------------------------------
Test: Mixed Read/Write
Duration: 36.466065909s
Total Events: 10000
Events/sec: 274.23
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 128 MB
Avg Latency: 9.448363ms
P90 Latency: 20.988228ms
P95 Latency: 22.727378ms
P99 Latency: 25.094784ms
Bottom 10% Avg Latency: 23.01277ms
----------------------------------------
Test: Peak Throughput
Duration: 627.589155ms
Total Events: 10000
Events/sec: 15933.99
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 124 MB
Avg Latency: 489.881µs
P90 Latency: 628.857µs
P95 Latency: 679.363µs
P99 Latency: 828.307µs
Bottom 10% Avg Latency: 716.862µs
----------------------------------------
Test: Burst Pattern
Duration: 6.475822519s
Total Events: 10000
Events/sec: 1544.21
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 170 MB
Avg Latency: 215.418µs
P90 Latency: 287.237µs
P95 Latency: 339.025µs
P99 Latency: 510.682µs
Bottom 10% Avg Latency: 378.172µs
----------------------------------------
Test: Mixed Read/Write
Duration: 1m0.036644794s
Total Events: 9742
Events/sec: 162.27
Success Rate: 97.4%
Concurrent Workers: 8
Memory Used: 181 MB
Avg Latency: 19.714686ms
P90 Latency: 44.573506ms
P95 Latency: 46.895555ms
P99 Latency: 50.425027ms
Bottom 10% Avg Latency: 47.384489ms
----------------------------------------
Report saved to: /tmp/benchmark_khatru-badger_8/benchmark_report.txt
AsciiDoc report saved to: /tmp/benchmark_khatru-badger_8/benchmark_report.adoc
20250912223503335481 INF /tmp/benchmark_khatru-badger_8: Lifetime L0 stalled for: 0s
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/db.go:536 /build/pkg/database/logger.go:57
20250912223504473151 INF /tmp/benchmark_khatru-badger_8:
Level 0 [ ]: NumTables: 00. Size: 0 B of 0 B. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 64 MiB
Level 1 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 2 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 3 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 4 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 5 [B]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 6 [ ]: NumTables: 02. Size: 41 MiB of 41 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 4.0 MiB
Level Done
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/db.go:615 /build/pkg/database/logger.go:57
20250912223504475627 INF /tmp/benchmark_khatru-badger_8: database closed /build/pkg/database/database.go:134
RELAY_NAME: khatru-badger
RELAY_URL: ws://khatru-badger:3334
TEST_TIMESTAMP: 2025-09-12T22:35:04+00:00
BENCHMARK_CONFIG:
Events: 10000
Workers: 8
Duration: 60s
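
The P90/P95/P99 lines throughout these reports are tail-latency percentiles over the per-event latency samples. The harness's exact percentile method is not shown in this diff; the following nearest-rank sketch is one standard reading of those figures.

package main

import (
	"fmt"
	"sort"
	"time"
)

// percentile returns the sample at the nearest-rank position for the
// given percentile, over a sorted copy of the input.
func percentile(samples []time.Duration, pct float64) time.Duration {
	if len(samples) == 0 {
		return 0
	}
	sorted := append([]time.Duration(nil), samples...)
	sort.Slice(sorted, func(i, j int) bool { return sorted[i] < sorted[j] })
	rank := int(pct/100*float64(len(sorted))+0.5) - 1
	if rank < 0 {
		rank = 0
	}
	if rank >= len(sorted) {
		rank = len(sorted) - 1
	}
	return sorted[rank]
}

func main() {
	// 100 synthetic samples of 1ms..100ms: P90=90ms, P99=99ms.
	samples := make([]time.Duration, 100)
	for i := range samples {
		samples[i] = time.Duration(i+1) * time.Millisecond
	}
	fmt.Println(percentile(samples, 90), percentile(samples, 99))
}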

View File

@@ -1,190 +0,0 @@
Starting Nostr Relay Benchmark
Data Directory: /tmp/benchmark_khatru-sqlite_8
Events: 10000, Workers: 8, Duration: 1m0s
20250912222936300616 INF /tmp/benchmark_khatru-sqlite_8: All 0 tables opened in 0s
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/levels.go:161 /build/pkg/database/logger.go:57
20250912222936301606 INF /tmp/benchmark_khatru-sqlite_8: Discard stats nextEmptySlot: 0
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/discard.go:55 /build/pkg/database/logger.go:57
20250912222936301647 INF /tmp/benchmark_khatru-sqlite_8: Set nextTxnTs to 0
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/db.go:358 /build/pkg/database/logger.go:57
20250912222936301987 INF (*types.Uint32)(0xc0001c23f0)({
value: (uint32) 1
})
/build/pkg/database/migrations.go:65
20250912222936302060 INF migrating to version 1... /build/pkg/database/migrations.go:79
=== Starting test round 1/2 ===
=== Peak Throughput Test ===
Events saved: 10000/10000 (100.0%)
Duration: 567.02963ms
Events/sec: 17635.76
Avg latency: 437.941µs
P90 latency: 574.133µs
P95 latency: 621.964µs
P99 latency: 768.473µs
Bottom 10% Avg latency: 659.71µs
=== Burst Pattern Test ===
Burst completed: 1000 events in 172.012448ms
Burst completed: 1000 events in 145.502701ms
Burst completed: 1000 events in 153.928098ms
Burst completed: 1000 events in 169.995269ms
Burst completed: 1000 events in 147.617375ms
Burst completed: 1000 events in 157.211387ms
Burst completed: 1000 events in 153.332744ms
Burst completed: 1000 events in 172.374938ms
Burst completed: 1000 events in 167.518935ms
Burst completed: 1000 events in 155.211871ms
Burst test completed: 10000 events in 6.620785215s
Events/sec: 1510.39
=== Mixed Read/Write Test ===
Pre-populating database for read tests...
Mixed test completed: 5000 writes, 5000 reads in 35.700582016s
Combined ops/sec: 280.11
Pausing 10s before next round...
=== Starting test round 2/2 ===
=== Peak Throughput Test ===
Events saved: 10000/10000 (100.0%)
Duration: 605.726547ms
Events/sec: 16509.10
Avg latency: 470.577µs
P90 latency: 609.791µs
P95 latency: 660.256µs
P99 latency: 788.641µs
Bottom 10% Avg latency: 687.847µs
=== Burst Pattern Test ===
Burst completed: 1000 events in 135.310723ms
Burst completed: 1000 events in 166.604305ms
Burst completed: 1000 events in 141.453184ms
Burst completed: 1000 events in 146.579351ms
Burst completed: 1000 events in 154.453638ms
Burst completed: 1000 events in 156.212516ms
Burst completed: 1000 events in 142.309354ms
Burst completed: 1000 events in 152.268188ms
Burst completed: 1000 events in 144.187829ms
Burst completed: 1000 events in 147.609002ms
Burst test completed: 10000 events in 6.508461808s
Events/sec: 1536.46
=== Mixed Read/Write Test ===
Pre-populating database for read tests...
Mixed test completed: 5000 writes, 4662 reads in 1m0.040595326s
Combined ops/sec: 160.92
================================================================================
BENCHMARK REPORT
================================================================================
Test: Peak Throughput
Duration: 567.02963ms
Total Events: 10000
Events/sec: 17635.76
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 154 MB
Avg Latency: 437.941µs
P90 Latency: 574.133µs
P95 Latency: 621.964µs
P99 Latency: 768.473µs
Bottom 10% Avg Latency: 659.71µs
----------------------------------------
Test: Burst Pattern
Duration: 6.620785215s
Total Events: 10000
Events/sec: 1510.39
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 208 MB
Avg Latency: 203.563µs
P90 Latency: 274.152µs
P95 Latency: 330.729µs
P99 Latency: 521.483µs
Bottom 10% Avg Latency: 378.237µs
----------------------------------------
Test: Mixed Read/Write
Duration: 35.700582016s
Total Events: 10000
Events/sec: 280.11
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 232 MB
Avg Latency: 9.150925ms
P90 Latency: 20.1434ms
P95 Latency: 21.838576ms
P99 Latency: 24.0106ms
Bottom 10% Avg Latency: 22.04901ms
----------------------------------------
Test: Peak Throughput
Duration: 605.726547ms
Total Events: 10000
Events/sec: 16509.10
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 139 MB
Avg Latency: 470.577µs
P90 Latency: 609.791µs
P95 Latency: 660.256µs
P99 Latency: 788.641µs
Bottom 10% Avg Latency: 687.847µs
----------------------------------------
Test: Burst Pattern
Duration: 6.508461808s
Total Events: 10000
Events/sec: 1536.46
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 182 MB
Avg Latency: 199.49µs
P90 Latency: 261.427µs
P95 Latency: 294.771µs
P99 Latency: 406.814µs
Bottom 10% Avg Latency: 332.083µs
----------------------------------------
Test: Mixed Read/Write
Duration: 1m0.040595326s
Total Events: 9662
Events/sec: 160.92
Success Rate: 96.6%
Concurrent Workers: 8
Memory Used: 204 MB
Avg Latency: 19.935937ms
P90 Latency: 44.802034ms
P95 Latency: 48.282589ms
P99 Latency: 52.169026ms
Bottom 10% Avg Latency: 48.641697ms
----------------------------------------
Report saved to: /tmp/benchmark_khatru-sqlite_8/benchmark_report.txt
AsciiDoc report saved to: /tmp/benchmark_khatru-sqlite_8/benchmark_report.adoc
20250912223216370778 INF /tmp/benchmark_khatru-sqlite_8: Lifetime L0 stalled for: 0s
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/db.go:536 /build/pkg/database/logger.go:57
20250912223217349356 INF /tmp/benchmark_khatru-sqlite_8:
Level 0 [ ]: NumTables: 00. Size: 0 B of 0 B. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 64 MiB
Level 1 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 2 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 3 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 4 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 5 [B]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 6 [ ]: NumTables: 02. Size: 41 MiB of 41 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 4.0 MiB
Level Done
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/db.go:615 /build/pkg/database/logger.go:57
20250912223217352393 INF /tmp/benchmark_khatru-sqlite_8: database closed /build/pkg/database/database.go:134
RELAY_NAME: khatru-sqlite
RELAY_URL: ws://khatru-sqlite:3334
TEST_TIMESTAMP: 2025-09-12T22:32:17+00:00
BENCHMARK_CONFIG:
Events: 10000
Workers: 8
Duration: 60s
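
"Bottom 10% Avg Latency" is not defined anywhere in the diff, but the figure consistently lands between the P90 and P99 lines, which fits the mean of the slowest tenth of the samples. A sketch under that interpretation, not the harness's own code.

package main

import (
	"fmt"
	"sort"
	"time"
)

// bottomTenPercentAvg averages the slowest 10% of latency samples
// (at least one sample), assuming "Bottom 10%" means the worst tail.
func bottomTenPercentAvg(samples []time.Duration) time.Duration {
	if len(samples) == 0 {
		return 0
	}
	sorted := append([]time.Duration(nil), samples...)
	sort.Slice(sorted, func(i, j int) bool { return sorted[i] < sorted[j] })
	n := len(sorted) / 10
	if n == 0 {
		n = 1
	}
	tail := sorted[len(sorted)-n:]
	var sum time.Duration
	for _, s := range tail {
		sum += s
	}
	return sum / time.Duration(len(tail))
}

func main() {
	samples := []time.Duration{1, 2, 3, 4, 5, 6, 7, 8, 9, 50}
	for i := range samples {
		samples[i] *= time.Millisecond
	}
	fmt.Println(bottomTenPercentAvg(samples)) // 50ms: mean of the slowest tenth
}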

View File

@@ -1,190 +0,0 @@
Starting Nostr Relay Benchmark
Data Directory: /tmp/benchmark_next-orly_8
Events: 10000, Workers: 8, Duration: 1m0s
20250912222650025765 INF /tmp/benchmark_next-orly_8: All 0 tables opened in 0s
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/levels.go:161 /build/pkg/database/logger.go:57
20250912222650026455 INF /tmp/benchmark_next-orly_8: Discard stats nextEmptySlot: 0
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/discard.go:55 /build/pkg/database/logger.go:57
20250912222650026497 INF /tmp/benchmark_next-orly_8: Set nextTxnTs to 0
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/db.go:358 /build/pkg/database/logger.go:57
20250912222650026747 INF (*types.Uint32)(0xc0001f63cc)({
value: (uint32) 1
})
/build/pkg/database/migrations.go:65
20250912222650026778 INF migrating to version 1... /build/pkg/database/migrations.go:79
=== Starting test round 1/2 ===
=== Peak Throughput Test ===
Events saved: 10000/10000 (100.0%)
Duration: 553.803776ms
Events/sec: 18056.94
Avg latency: 428.869µs
P90 latency: 558.663µs
P95 latency: 607.997µs
P99 latency: 749.787µs
Bottom 10% Avg latency: 643.51µs
=== Burst Pattern Test ===
Burst completed: 1000 events in 190.801687ms
Burst completed: 1000 events in 168.170564ms
Burst completed: 1000 events in 161.16591ms
Burst completed: 1000 events in 161.43364ms
Burst completed: 1000 events in 148.293941ms
Burst completed: 1000 events in 172.875177ms
Burst completed: 1000 events in 178.930553ms
Burst completed: 1000 events in 161.052715ms
Burst completed: 1000 events in 162.071335ms
Burst completed: 1000 events in 171.849756ms
Burst test completed: 10000 events in 6.70096222s
Events/sec: 1492.32
=== Mixed Read/Write Test ===
Pre-populating database for read tests...
Mixed test completed: 5000 writes, 5000 reads in 35.645619485s
Combined ops/sec: 280.54
Pausing 10s before next round...
=== Starting test round 2/2 ===
=== Peak Throughput Test ===
Events saved: 10000/10000 (100.0%)
Duration: 596.985601ms
Events/sec: 16750.82
Avg latency: 465.438µs
P90 latency: 594.151µs
P95 latency: 636.592µs
P99 latency: 757.953µs
Bottom 10% Avg latency: 672.673µs
=== Burst Pattern Test ===
Burst completed: 1000 events in 152.121077ms
Burst completed: 1000 events in 160.774367ms
Burst completed: 1000 events in 137.913676ms
Burst completed: 1000 events in 142.916647ms
Burst completed: 1000 events in 166.771131ms
Burst completed: 1000 events in 160.016244ms
Burst completed: 1000 events in 156.369302ms
Burst completed: 1000 events in 158.850666ms
Burst completed: 1000 events in 154.842287ms
Burst completed: 1000 events in 146.828122ms
Burst test completed: 10000 events in 6.557799732s
Events/sec: 1524.90
=== Mixed Read/Write Test ===
Pre-populating database for read tests...
Mixed test completed: 5000 writes, 4782 reads in 1m0.043775785s
Combined ops/sec: 162.91
================================================================================
BENCHMARK REPORT
================================================================================
Test: Peak Throughput
Duration: 553.803776ms
Total Events: 10000
Events/sec: 18056.94
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 153 MB
Avg Latency: 428.869µs
P90 Latency: 558.663µs
P95 Latency: 607.997µs
P99 Latency: 749.787µs
Bottom 10% Avg Latency: 643.51µs
----------------------------------------
Test: Burst Pattern
Duration: 6.70096222s
Total Events: 10000
Events/sec: 1492.32
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 204 MB
Avg Latency: 178.04µs
P90 Latency: 224.367µs
P95 Latency: 243.954µs
P99 Latency: 318.225µs
Bottom 10% Avg Latency: 264.418µs
----------------------------------------
Test: Mixed Read/Write
Duration: 35.645619485s
Total Events: 10000
Events/sec: 280.54
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 120 MB
Avg Latency: 9.118653ms
P90 Latency: 19.852346ms
P95 Latency: 21.665387ms
P99 Latency: 23.946919ms
Bottom 10% Avg Latency: 21.867062ms
----------------------------------------
Test: Peak Throughput
Duration: 596.985601ms
Total Events: 10000
Events/sec: 16750.82
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 121 MB
Avg Latency: 465.438µs
P90 Latency: 594.151µs
P95 Latency: 636.592µs
P99 Latency: 757.953µs
Bottom 10% Avg Latency: 672.673µs
----------------------------------------
Test: Burst Pattern
Duration: 6.557799732s
Total Events: 10000
Events/sec: 1524.90
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 167 MB
Avg Latency: 189.538µs
P90 Latency: 247.511µs
P95 Latency: 274.011µs
P99 Latency: 360.977µs
Bottom 10% Avg Latency: 296.967µs
----------------------------------------
Test: Mixed Read/Write
Duration: 1m0.043775785s
Total Events: 9782
Events/sec: 162.91
Success Rate: 97.8%
Concurrent Workers: 8
Memory Used: 193 MB
Avg Latency: 19.562536ms
P90 Latency: 43.431835ms
P95 Latency: 46.326204ms
P99 Latency: 50.533302ms
Bottom 10% Avg Latency: 46.979603ms
----------------------------------------
Report saved to: /tmp/benchmark_next-orly_8/benchmark_report.txt
AsciiDoc report saved to: /tmp/benchmark_next-orly_8/benchmark_report.adoc
20250912222930150767 INF /tmp/benchmark_next-orly_8: Lifetime L0 stalled for: 0s
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/db.go:536 /build/pkg/database/logger.go:57
20250912222931147258 INF /tmp/benchmark_next-orly_8:
Level 0 [ ]: NumTables: 00. Size: 0 B of 0 B. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 64 MiB
Level 1 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 2 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 3 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 4 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 5 [B]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 6 [ ]: NumTables: 02. Size: 41 MiB of 41 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 4.0 MiB
Level Done
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/db.go:615 /build/pkg/database/logger.go:57
20250912222931149928 INF /tmp/benchmark_next-orly_8: database closed /build/pkg/database/database.go:134
RELAY_NAME: next-orly
RELAY_URL: ws://next-orly:8080
TEST_TIMESTAMP: 2025-09-12T22:29:31+00:00
BENCHMARK_CONFIG:
Events: 10000
Workers: 8
Duration: 60s
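
The burst pattern is legible from the log shape: ten individually timed bursts of 1,000 events with a pacing pause between them (ten bursts of roughly 150 ms inside a ~6.5 s test imply a gap on the order of 500 ms). A sketch of that loop, where saveEvent and the exact gap are assumptions rather than the harness's code.

package main

import (
	"fmt"
	"time"
)

// saveEvent is a placeholder for the relay/store write under test.
func saveEvent(i int) { _ = i }

func main() {
	const bursts, perBurst = 10, 1000
	start := time.Now()
	for b := 0; b < bursts; b++ {
		bs := time.Now()
		for i := 0; i < perBurst; i++ {
			saveEvent(i)
		}
		fmt.Printf("Burst completed: %d events in %v\n", perBurst, time.Since(bs))
		if b < bursts-1 {
			time.Sleep(500 * time.Millisecond) // pacing gap, inferred from the timings
		}
	}
	elapsed := time.Since(start)
	fmt.Printf("Burst test completed: %d events in %v\n", bursts*perBurst, elapsed)
	fmt.Printf("Events/sec: %.2f\n", float64(bursts*perBurst)/elapsed.Seconds())
}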

View File

@@ -1,190 +0,0 @@
Starting Nostr Relay Benchmark
Data Directory: /tmp/benchmark_nostr-rs-relay_8
Events: 10000, Workers: 8, Duration: 1m0s
20250912224044213613 INF /tmp/benchmark_nostr-rs-relay_8: All 0 tables opened in 0s
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/levels.go:161 /build/pkg/database/logger.go:57
20250912224044214094 INF /tmp/benchmark_nostr-rs-relay_8: Discard stats nextEmptySlot: 0
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/discard.go:55 /build/pkg/database/logger.go:57
20250912224044214130 INF /tmp/benchmark_nostr-rs-relay_8: Set nextTxnTs to 0
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/db.go:358 /build/pkg/database/logger.go:57
20250912224044214381 INF (*types.Uint32)(0xc000233c3c)({
value: (uint32) 1
})
/build/pkg/database/migrations.go:65
20250912224044214413 INF migrating to version 1... /build/pkg/database/migrations.go:79
=== Starting test round 1/2 ===
=== Peak Throughput Test ===
Events saved: 10000/10000 (100.0%)
Duration: 613.036589ms
Events/sec: 16312.24
Avg latency: 476.418µs
P90 latency: 627.852µs
P95 latency: 686.836µs
P99 latency: 841.471µs
Bottom 10% Avg latency: 722.179µs
=== Burst Pattern Test ===
Burst completed: 1000 events in 156.218882ms
Burst completed: 1000 events in 170.25756ms
Burst completed: 1000 events in 164.944293ms
Burst completed: 1000 events in 162.767866ms
Burst completed: 1000 events in 148.744622ms
Burst completed: 1000 events in 163.556351ms
Burst completed: 1000 events in 172.007512ms
Burst completed: 1000 events in 159.806858ms
Burst completed: 1000 events in 168.086258ms
Burst completed: 1000 events in 164.931889ms
Burst test completed: 10000 events in 6.657581804s
Events/sec: 1502.05
=== Mixed Read/Write Test ===
Pre-populating database for read tests...
Mixed test completed: 5000 writes, 5000 reads in 34.850355805s
Combined ops/sec: 286.94
Pausing 10s before next round...
=== Starting test round 2/2 ===
=== Peak Throughput Test ===
Events saved: 10000/10000 (100.0%)
Duration: 707.652249ms
Events/sec: 14131.23
Avg latency: 551.706µs
P90 latency: 724.937µs
P95 latency: 790.563µs
P99 latency: 980.677µs
Bottom 10% Avg latency: 836.659µs
=== Burst Pattern Test ===
Burst completed: 1000 events in 164.62419ms
Burst completed: 1000 events in 155.938167ms
Burst completed: 1000 events in 132.903056ms
Burst completed: 1000 events in 142.377596ms
Burst completed: 1000 events in 155.024184ms
Burst completed: 1000 events in 147.095521ms
Burst completed: 1000 events in 150.027389ms
Burst completed: 1000 events in 152.873043ms
Burst completed: 1000 events in 150.635479ms
Burst completed: 1000 events in 146.45553ms
Burst test completed: 10000 events in 6.519122877s
Events/sec: 1533.95
=== Mixed Read/Write Test ===
Pre-populating database for read tests...
Mixed test completed: 5000 writes, 4806 reads in 1m0.03930731s
Combined ops/sec: 163.33
================================================================================
BENCHMARK REPORT
================================================================================
Test: Peak Throughput
Duration: 613.036589ms
Total Events: 10000
Events/sec: 16312.24
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 154 MB
Avg Latency: 476.418µs
P90 Latency: 627.852µs
P95 Latency: 686.836µs
P99 Latency: 841.471µs
Bottom 10% Avg Latency: 722.179µs
----------------------------------------
Test: Burst Pattern
Duration: 6.657581804s
Total Events: 10000
Events/sec: 1502.05
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 209 MB
Avg Latency: 182.765µs
P90 Latency: 234.409µs
P95 Latency: 257.082µs
P99 Latency: 330.764µs
Bottom 10% Avg Latency: 277.843µs
----------------------------------------
Test: Mixed Read/Write
Duration: 34.850355805s
Total Events: 10000
Events/sec: 286.94
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 221 MB
Avg Latency: 8.802188ms
P90 Latency: 19.075904ms
P95 Latency: 20.680962ms
P99 Latency: 22.78326ms
Bottom 10% Avg Latency: 20.897398ms
----------------------------------------
Test: Peak Throughput
Duration: 707.652249ms
Total Events: 10000
Events/sec: 14131.23
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 120 MB
Avg Latency: 551.706µs
P90 Latency: 724.937µs
P95 Latency: 790.563µs
P99 Latency: 980.677µs
Bottom 10% Avg Latency: 836.659µs
----------------------------------------
Test: Burst Pattern
Duration: 6.519122877s
Total Events: 10000
Events/sec: 1533.95
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 168 MB
Avg Latency: 204.873µs
P90 Latency: 271.569µs
P95 Latency: 329.28µs
P99 Latency: 558.829µs
Bottom 10% Avg Latency: 380.136µs
----------------------------------------
Test: Mixed Read/Write
Duration: 1m0.03930731s
Total Events: 9806
Events/sec: 163.33
Success Rate: 98.1%
Concurrent Workers: 8
Memory Used: 164 MB
Avg Latency: 19.506135ms
P90 Latency: 43.206775ms
P95 Latency: 45.944446ms
P99 Latency: 49.910436ms
Bottom 10% Avg Latency: 46.417943ms
----------------------------------------
Report saved to: /tmp/benchmark_nostr-rs-relay_8/benchmark_report.txt
AsciiDoc report saved to: /tmp/benchmark_nostr-rs-relay_8/benchmark_report.adoc
20250912224323628137 INF /tmp/benchmark_nostr-rs-relay_8: Lifetime L0 stalled for: 0s
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/db.go:536 /build/pkg/database/logger.go:57
20250912224324180883 INF /tmp/benchmark_nostr-rs-relay_8:
Level 0 [ ]: NumTables: 00. Size: 0 B of 0 B. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 64 MiB
Level 1 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 2 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 3 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 4 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 5 [B]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 6 [ ]: NumTables: 02. Size: 41 MiB of 41 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 4.0 MiB
Level Done
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/db.go:615 /build/pkg/database/logger.go:57
20250912224324184069 INF /tmp/benchmark_nostr-rs-relay_8: database closed /build/pkg/database/database.go:134
RELAY_NAME: nostr-rs-relay
RELAY_URL: ws://nostr-rs-relay:8080
TEST_TIMESTAMP: 2025-09-12T22:43:24+00:00
BENCHMARK_CONFIG:
Events: 10000
Workers: 8
Duration: 60s

View File

@@ -1,190 +0,0 @@
Starting Nostr Relay Benchmark
Data Directory: /tmp/benchmark_relayer-basic_8
Events: 10000, Workers: 8, Duration: 1m0s
20250912223509638362 INF /tmp/benchmark_relayer-basic_8: All 0 tables opened in 0s
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/levels.go:161 /build/pkg/database/logger.go:57
20250912223509638864 INF /tmp/benchmark_relayer-basic_8: Discard stats nextEmptySlot: 0
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/discard.go:55 /build/pkg/database/logger.go:57
20250912223509638903 INF /tmp/benchmark_relayer-basic_8: Set nextTxnTs to 0
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/db.go:358 /build/pkg/database/logger.go:57
20250912223509639558 INF (*types.Uint32)(0xc00570005c)({
value: (uint32) 1
})
/build/pkg/database/migrations.go:65
20250912223509639620 INF migrating to version 1... /build/pkg/database/migrations.go:79
=== Starting test round 1/2 ===
=== Peak Throughput Test ===
Events saved: 10000/10000 (100.0%)
Duration: 659.848301ms
Events/sec: 15155.00
Avg latency: 513.243µs
P90 latency: 706.89µs
P95 latency: 792.685µs
P99 latency: 1.089215ms
Bottom 10% Avg latency: 864.746µs
=== Burst Pattern Test ===
Burst completed: 1000 events in 142.551144ms
Burst completed: 1000 events in 137.426595ms
Burst completed: 1000 events in 139.51501ms
Burst completed: 1000 events in 143.683041ms
Burst completed: 1000 events in 136.500167ms
Burst completed: 1000 events in 139.573844ms
Burst completed: 1000 events in 145.873173ms
Burst completed: 1000 events in 144.256594ms
Burst completed: 1000 events in 157.89329ms
Burst completed: 1000 events in 153.882313ms
Burst test completed: 10000 events in 6.47066659s
Events/sec: 1545.44
=== Mixed Read/Write Test ===
Pre-populating database for read tests...
Mixed test completed: 5000 writes, 5000 reads in 37.483034098s
Combined ops/sec: 266.79
Pausing 10s before next round...
=== Starting test round 2/2 ===
=== Peak Throughput Test ===
Events saved: 10000/10000 (100.0%)
Duration: 701.479526ms
Events/sec: 14255.58
Avg latency: 544.692µs
P90 latency: 742.997µs
P95 latency: 845.975µs
P99 latency: 1.147624ms
Bottom 10% Avg latency: 913.45µs
=== Burst Pattern Test ===
Burst completed: 1000 events in 143.063212ms
Burst completed: 1000 events in 139.422008ms
Burst completed: 1000 events in 138.184516ms
Burst completed: 1000 events in 148.207616ms
Burst completed: 1000 events in 137.663883ms
Burst completed: 1000 events in 141.607643ms
Burst completed: 1000 events in 143.668551ms
Burst completed: 1000 events in 140.467359ms
Burst completed: 1000 events in 139.860509ms
Burst completed: 1000 events in 138.328306ms
Burst test completed: 10000 events in 6.43971118s
Events/sec: 1552.86
=== Mixed Read/Write Test ===
Pre-populating database for read tests...
Mixed test completed: 5000 writes, 4870 reads in 1m0.034216467s
Combined ops/sec: 164.41
================================================================================
BENCHMARK REPORT
================================================================================
Test: Peak Throughput
Duration: 659.848301ms
Total Events: 10000
Events/sec: 15155.00
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 153 MB
Avg Latency: 513.243µs
P90 Latency: 706.89µs
P95 Latency: 792.685µs
P99 Latency: 1.089215ms
Bottom 10% Avg Latency: 864.746µs
----------------------------------------
Test: Burst Pattern
Duration: 6.47066659s
Total Events: 10000
Events/sec: 1545.44
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 206 MB
Avg Latency: 273.645µs
P90 Latency: 407.483µs
P95 Latency: 498.989µs
P99 Latency: 772.406µs
Bottom 10% Avg Latency: 574.801µs
----------------------------------------
Test: Mixed Read/Write
Duration: 37.483034098s
Total Events: 10000
Events/sec: 266.79
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 163 MB
Avg Latency: 9.873363ms
P90 Latency: 21.643466ms
P95 Latency: 22.924497ms
P99 Latency: 24.961324ms
Bottom 10% Avg Latency: 23.201171ms
----------------------------------------
Test: Peak Throughput
Duration: 701.479526ms
Total Events: 10000
Events/sec: 14255.58
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 153 MB
Avg Latency: 544.692µs
P90 Latency: 742.997µs
P95 Latency: 845.975µs
P99 Latency: 1.147624ms
Bottom 10% Avg Latency: 913.45µs
----------------------------------------
Test: Burst Pattern
Duration: 6.43971118s
Total Events: 10000
Events/sec: 1552.86
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 204 MB
Avg Latency: 266.006µs
P90 Latency: 402.683µs
P95 Latency: 491.253µs
P99 Latency: 715.735µs
Bottom 10% Avg Latency: 553.762µs
----------------------------------------
Test: Mixed Read/Write
Duration: 1m0.034216467s
Total Events: 9870
Events/sec: 164.41
Success Rate: 98.7%
Concurrent Workers: 8
Memory Used: 184 MB
Avg Latency: 19.308183ms
P90 Latency: 42.766459ms
P95 Latency: 45.372157ms
P99 Latency: 49.993951ms
Bottom 10% Avg Latency: 46.189525ms
----------------------------------------
Report saved to: /tmp/benchmark_relayer-basic_8/benchmark_report.txt
AsciiDoc report saved to: /tmp/benchmark_relayer-basic_8/benchmark_report.adoc
20250912223751453794 INF /tmp/benchmark_relayer-basic_8: Lifetime L0 stalled for: 0s
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/db.go:536 /build/pkg/database/logger.go:57
20250912223752488197 INF /tmp/benchmark_relayer-basic_8:
Level 0 [ ]: NumTables: 00. Size: 0 B of 0 B. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 64 MiB
Level 1 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 2 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 3 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 4 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 5 [B]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 6 [ ]: NumTables: 02. Size: 41 MiB of 41 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 4.0 MiB
Level Done
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/db.go:615 /build/pkg/database/logger.go:57
20250912223752491495 INF /tmp/benchmark_relayer-basic_8: database closed /build/pkg/database/database.go:134
RELAY_NAME: relayer-basic
RELAY_URL: ws://relayer-basic:7447
TEST_TIMESTAMP: 2025-09-12T22:37:52+00:00
BENCHMARK_CONFIG:
Events: 10000
Workers: 8
Duration: 60s
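
"Combined ops/sec" in the mixed test is total completed operations (writes plus reads) over wall-clock time, and the sub-100% success rates in round two line up with reads cut off by the 60 s budget: 9,870 of 10,000 operations is exactly the 98.7% reported above. A check of both relayer-basic figures.

package main

import (
	"fmt"
	"time"
)

// combinedOpsPerSec reproduces the mixed test's summary arithmetic:
// all completed operations over the test's wall-clock duration.
func combinedOpsPerSec(writes, reads int, wall time.Duration) float64 {
	return float64(writes+reads) / wall.Seconds()
}

func main() {
	// Round 1: 5000 writes + 5000 reads in 37.483034098s ≈ 266.79.
	fmt.Printf("Combined ops/sec: %.2f\n",
		combinedOpsPerSec(5000, 5000, 37483034098*time.Nanosecond))
	// Round 2 hits the 60s budget with reads unfinished: 9870 ops ≈ 164.41.
	fmt.Printf("Combined ops/sec: %.2f\n",
		combinedOpsPerSec(5000, 4870, 60034216467*time.Nanosecond))
}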

View File

@@ -1,190 +0,0 @@
Starting Nostr Relay Benchmark
Data Directory: /tmp/benchmark_strfry_8
Events: 10000, Workers: 8, Duration: 1m0s
20250912223757656112 INF /tmp/benchmark_strfry_8: All 0 tables opened in 0s
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/levels.go:161 /build/pkg/database/logger.go:57
20250912223757657685 INF /tmp/benchmark_strfry_8: Discard stats nextEmptySlot: 0
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/discard.go:55 /build/pkg/database/logger.go:57
20250912223757657767 INF /tmp/benchmark_strfry_8: Set nextTxnTs to 0
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/db.go:358 /build/pkg/database/logger.go:57
20250912223757658314 INF (*types.Uint32)(0xc0055c63ac)({
value: (uint32) 1
})
/build/pkg/database/migrations.go:65
20250912223757658385 INF migrating to version 1... /build/pkg/database/migrations.go:79
=== Starting test round 1/2 ===
=== Peak Throughput Test ===
Events saved: 10000/10000 (100.0%)
Duration: 655.950723ms
Events/sec: 15245.05
Avg latency: 510.383µs
P90 latency: 690.815µs
P95 latency: 769.085µs
P99 latency: 1.000349ms
Bottom 10% Avg latency: 831.211µs
=== Burst Pattern Test ===
Burst completed: 1000 events in 168.844089ms
Burst completed: 1000 events in 138.644286ms
Burst completed: 1000 events in 167.717113ms
Burst completed: 1000 events in 141.566337ms
Burst completed: 1000 events in 141.186447ms
Burst completed: 1000 events in 145.845582ms
Burst completed: 1000 events in 142.834263ms
Burst completed: 1000 events in 144.707595ms
Burst completed: 1000 events in 144.096361ms
Burst completed: 1000 events in 158.524931ms
Burst test completed: 10000 events in 6.520630606s
Events/sec: 1533.59
=== Mixed Read/Write Test ===
Pre-populating database for read tests...
Mixed test completed: 5000 writes, 5000 reads in 36.04854491s
Combined ops/sec: 277.40
Pausing 10s before next round...
=== Starting test round 2/2 ===
=== Peak Throughput Test ===
Events saved: 10000/10000 (100.0%)
Duration: 644.867085ms
Events/sec: 15507.07
Avg latency: 501.972µs
P90 latency: 650.197µs
P95 latency: 709.37µs
P99 latency: 914.673µs
Bottom 10% Avg latency: 754.969µs
=== Burst Pattern Test ===
Burst completed: 1000 events in 133.763626ms
Burst completed: 1000 events in 135.289448ms
Burst completed: 1000 events in 136.874215ms
Burst completed: 1000 events in 135.118277ms
Burst completed: 1000 events in 139.247778ms
Burst completed: 1000 events in 142.262475ms
Burst completed: 1000 events in 141.21783ms
Burst completed: 1000 events in 143.089554ms
Burst completed: 1000 events in 148.027057ms
Burst completed: 1000 events in 150.006497ms
Burst test completed: 10000 events in 6.429121967s
Events/sec: 1555.42
=== Mixed Read/Write Test ===
Pre-populating database for read tests...
Mixed test completed: 5000 writes, 4857 reads in 1m0.047885362s
Combined ops/sec: 164.15
================================================================================
BENCHMARK REPORT
================================================================================
Test: Peak Throughput
Duration: 655.950723ms
Total Events: 10000
Events/sec: 15245.05
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 154 MB
Avg Latency: 510.383µs
P90 Latency: 690.815µs
P95 Latency: 769.085µs
P99 Latency: 1.000349ms
Bottom 10% Avg Latency: 831.211µs
----------------------------------------
Test: Burst Pattern
Duration: 6.520630606s
Total Events: 10000
Events/sec: 1533.59
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 208 MB
Avg Latency: 223.359µs
P90 Latency: 321.256µs
P95 Latency: 378.145µs
P99 Latency: 530.597µs
Bottom 10% Avg Latency: 412.953µs
----------------------------------------
Test: Mixed Read/Write
Duration: 36.04854491s
Total Events: 10000
Events/sec: 277.40
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 222 MB
Avg Latency: 9.309397ms
P90 Latency: 20.403594ms
P95 Latency: 22.152884ms
P99 Latency: 24.513304ms
Bottom 10% Avg Latency: 22.447709ms
----------------------------------------
Test: Peak Throughput
Duration: 644.867085ms
Total Events: 10000
Events/sec: 15507.07
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 125 MB
Avg Latency: 501.972µs
P90 Latency: 650.197µs
P95 Latency: 709.37µs
P99 Latency: 914.673µs
Bottom 10% Avg Latency: 754.969µs
----------------------------------------
Test: Burst Pattern
Duration: 6.429121967s
Total Events: 10000
Events/sec: 1555.42
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 170 MB
Avg Latency: 239.454µs
P90 Latency: 335.133µs
P95 Latency: 408.012µs
P99 Latency: 593.458µs
Bottom 10% Avg Latency: 446.804µs
----------------------------------------
Test: Mixed Read/Write
Duration: 1m0.047885362s
Total Events: 9857
Events/sec: 164.15
Success Rate: 98.6%
Concurrent Workers: 8
Memory Used: 189 MB
Avg Latency: 19.373297ms
P90 Latency: 42.953055ms
P95 Latency: 45.636867ms
P99 Latency: 49.71977ms
Bottom 10% Avg Latency: 46.144029ms
----------------------------------------
Report saved to: /tmp/benchmark_strfry_8/benchmark_report.txt
AsciiDoc report saved to: /tmp/benchmark_strfry_8/benchmark_report.adoc
20250912224038033173 INF /tmp/benchmark_strfry_8: Lifetime L0 stalled for: 0s
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/db.go:536 /build/pkg/database/logger.go:57
20250912224039055498 INF /tmp/benchmark_strfry_8:
Level 0 [ ]: NumTables: 00. Size: 0 B of 0 B. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 64 MiB
Level 1 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 2 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 3 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 4 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 5 [B]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 6 [ ]: NumTables: 02. Size: 41 MiB of 41 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 4.0 MiB
Level Done
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/db.go:615 /build/pkg/database/logger.go:57
20250912224039058214 INF /tmp/benchmark_strfry_8: database closed /build/pkg/database/database.go:134
RELAY_NAME: strfry
RELAY_URL: ws://strfry:8080
TEST_TIMESTAMP: 2025-09-12T22:40:39+00:00
BENCHMARK_CONFIG:
Events: 10000
Workers: 8
Duration: 60s

View File

@@ -1,7 +1,7 @@
================================================================
NOSTR RELAY BENCHMARK AGGREGATE REPORT
================================================================
Generated: 2025-09-12T20:02:26+00:00
Generated: 2025-09-20T11:04:39+00:00
Benchmark Configuration:
Events per test: 10000
Concurrent workers: 8
@@ -16,98 +16,98 @@ SUMMARY BY RELAY
Relay: next-orly
----------------------------------------
Status: COMPLETED
Events/sec: 17901.30
Events/sec: 1504.52
Events/sec: 17901.30
Events/sec: 1035.42
Events/sec: 659.20
Events/sec: 1094.56
Success Rate: 100.0%
Success Rate: 100.0%
Success Rate: 100.0%
Avg Latency: 433.058µs
Avg Latency: 182.813µs
Avg Latency: 9.086952ms
P95 Latency: 456.738µs
P95 Latency: 152.86µs
P95 Latency: 18.156339ms
Avg Latency: 470.069µs
Bottom 10% Avg Latency: 750.491µs
Avg Latency: 190.573µs
P95 Latency: 693.101µs
P95 Latency: 289.761µs
P95 Latency: 22.450848ms
Relay: khatru-sqlite
----------------------------------------
Status: COMPLETED
Events/sec: 14291.70
Events/sec: 1530.29
Events/sec: 14291.70
Events/sec: 1105.61
Events/sec: 624.87
Events/sec: 1070.10
Success Rate: 100.0%
Success Rate: 100.0%
Success Rate: 100.0%
Avg Latency: 545.724µs
Avg Latency: 205.962µs
Avg Latency: 9.092604ms
P95 Latency: 473.43µs
P95 Latency: 165.525µs
P95 Latency: 19.302571ms
Avg Latency: 458.035µs
Bottom 10% Avg Latency: 702.193µs
Avg Latency: 193.997µs
P95 Latency: 660.608µs
P95 Latency: 302.666µs
P95 Latency: 23.653412ms
Relay: khatru-badger
----------------------------------------
Status: COMPLETED
Events/sec: 16351.11
Events/sec: 1539.25
Events/sec: 16351.11
Events/sec: 1040.11
Events/sec: 663.14
Events/sec: 1065.58
Success Rate: 100.0%
Success Rate: 100.0%
Success Rate: 100.0%
Avg Latency: 474.016µs
Avg Latency: 226.602µs
Avg Latency: 9.930935ms
P95 Latency: 479.03µs
P95 Latency: 239.525µs
P95 Latency: 17.75358ms
Avg Latency: 454.784µs
Bottom 10% Avg Latency: 706.219µs
Avg Latency: 193.914µs
P95 Latency: 654.637µs
P95 Latency: 296.525µs
P95 Latency: 21.642655ms
Relay: relayer-basic
----------------------------------------
Status: COMPLETED
Events/sec: 16522.60
Events/sec: 1537.71
Events/sec: 16522.60
Events/sec: 1104.88
Events/sec: 642.17
Events/sec: 1079.27
Success Rate: 100.0%
Success Rate: 100.0%
Success Rate: 100.0%
Avg Latency: 466.066µs
Avg Latency: 215.609µs
Avg Latency: 9.851217ms
P95 Latency: 514.849µs
P95 Latency: 141.91µs
P95 Latency: 23.101412ms
Avg Latency: 433.89µs
Bottom 10% Avg Latency: 653.813µs
Avg Latency: 186.306µs
P95 Latency: 617.868µs
P95 Latency: 279.192µs
P95 Latency: 21.247322ms
Relay: strfry
----------------------------------------
Status: COMPLETED
Events/sec: 15346.12
Events/sec: 1534.88
Events/sec: 15346.12
Events/sec: 1090.49
Events/sec: 652.03
Events/sec: 1098.57
Success Rate: 100.0%
Success Rate: 100.0%
Success Rate: 100.0%
Avg Latency: 506.51µs
Avg Latency: 216.564µs
Avg Latency: 9.938991ms
P95 Latency: 590.442µs
P95 Latency: 267.91µs
P95 Latency: 19.784708ms
Avg Latency: 448.058µs
Bottom 10% Avg Latency: 729.464µs
Avg Latency: 189.06µs
P95 Latency: 667.141µs
P95 Latency: 290.433µs
P95 Latency: 20.822884ms
Relay: nostr-rs-relay
----------------------------------------
Status: COMPLETED
Events/sec: 15199.95
Events/sec: 1533.87
Events/sec: 15199.95
Events/sec: 1123.91
Events/sec: 647.62
Events/sec: 1033.64
Success Rate: 100.0%
Success Rate: 100.0%
Success Rate: 100.0%
Avg Latency: 508.699µs
Avg Latency: 217.187µs
Avg Latency: 9.38757ms
P95 Latency: 1.011413ms
P95 Latency: 130.018µs
P95 Latency: 19.250416ms
Avg Latency: 416.753µs
Bottom 10% Avg Latency: 638.318µs
Avg Latency: 185.217µs
P95 Latency: 597.338µs
P95 Latency: 273.191µs
P95 Latency: 22.416221ms
================================================================
@@ -115,12 +115,12 @@ DETAILED RESULTS
================================================================
Individual relay reports are available in:
- /reports/run_20250912_195729/khatru-badger_results.txt
- /reports/run_20250912_195729/khatru-sqlite_results.txt
- /reports/run_20250912_195729/next-orly_results.txt
- /reports/run_20250912_195729/nostr-rs-relay_results.txt
- /reports/run_20250912_195729/relayer-basic_results.txt
- /reports/run_20250912_195729/strfry_results.txt
- /reports/run_20250920_101521/khatru-badger_results.txt
- /reports/run_20250920_101521/khatru-sqlite_results.txt
- /reports/run_20250920_101521/next-orly_results.txt
- /reports/run_20250920_101521/nostr-rs-relay_results.txt
- /reports/run_20250920_101521/relayer-basic_results.txt
- /reports/run_20250920_101521/strfry_results.txt
================================================================
BENCHMARK COMPARISON TABLE
@@ -128,12 +128,12 @@ BENCHMARK COMPARISON TABLE
Relay Status Peak Tput/s Avg Latency Success Rate
---- ------ ----------- ----------- ------------
next-orly OK 17901.30 433.058µs 100.0%
khatru-sqlite OK 14291.70 545.724µs 100.0%
khatru-badger OK 16351.11 474.016µs 100.0%
relayer-basic OK 16522.60 466.066µs 100.0%
strfry OK 15346.12 506.51µs 100.0%
nostr-rs-relay OK 15199.95 508.699µs 100.0%
next-orly OK 1035.42 470.069µs 100.0%
khatru-sqlite OK 1105.61 458.035µs 100.0%
khatru-badger OK 1040.11 454.784µs 100.0%
relayer-basic OK 1104.88 433.89µs 100.0%
strfry OK 1090.49 448.058µs 100.0%
nostr-rs-relay OK 1123.91 416.753µs 100.0%
================================================================
End of Report
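
The per-relay summary blocks above repeat the Events/sec, Success Rate, and latency lines from each results file without labeling which sub-test or round each belongs to. The report generator itself is not part of this diff; the following is a plausible sketch of that scraping step, with the path taken from the report above and the parsing purely illustrative.

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	relays := []string{
		"next-orly", "khatru-sqlite", "khatru-badger",
		"relayer-basic", "strfry", "nostr-rs-relay",
	}
	for _, relay := range relays {
		path := fmt.Sprintf("/reports/run_20250920_101521/%s_results.txt", relay)
		f, err := os.Open(path)
		if err != nil {
			fmt.Printf("Relay: %s\nStatus: FAILED (%v)\n", relay, err)
			continue
		}
		fmt.Printf("Relay: %s\nStatus: COMPLETED\n", relay)
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			// Echo the summary metrics verbatim under the relay heading.
			line := strings.TrimSpace(sc.Text())
			if strings.HasPrefix(line, "Events/sec:") ||
				strings.HasPrefix(line, "Success Rate:") {
				fmt.Println(line)
			}
		}
		f.Close()
	}
}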

View File

@@ -0,0 +1,298 @@
Starting Nostr Relay Benchmark
Data Directory: /tmp/benchmark_khatru-badger_8
Events: 10000, Workers: 8, Duration: 1m0s
1758364309339505 /tmp/benchmark_khatru-badger_8: All 0 tables opened in 0s
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/levels.go:161 /build/pkg/database/logger.go:57
1758364309340007 /tmp/benchmark_khatru-badger_8: Discard stats nextEmptySlot: 0
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/discard.go:55 /build/pkg/database/logger.go:57
1758364309340039 /tmp/benchmark_khatru-badger_8: Set nextTxnTs to 0
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/db.go:358 /build/pkg/database/logger.go:57
1758364309340327 (*types.Uint32)(0xc000147840)({
value: (uint32) 1
})
/build/pkg/database/migrations.go:65
1758364309340465 migrating to version 1... /build/pkg/database/migrations.go:79
=== Starting test round 1/2 ===
RunPeakThroughputTest..
=== Peak Throughput Test ===
Events saved: 10000/10000 (100.0%)
Duration: 9.614321551s
Events/sec: 1040.11
Avg latency: 454.784µs
P90 latency: 596.266µs
P95 latency: 654.637µs
P99 latency: 844.569µs
Bottom 10% Avg latency: 706.219µs
RunBurstPatternTest..
=== Burst Pattern Test ===
Burst completed: 1000 events in 136.444875ms
Burst completed: 1000 events in 141.806497ms
Burst completed: 1000 events in 168.991278ms
Burst completed: 1000 events in 167.713425ms
Burst completed: 1000 events in 162.89698ms
Burst completed: 1000 events in 157.775164ms
Burst completed: 1000 events in 166.476709ms
Burst completed: 1000 events in 161.742632ms
Burst completed: 1000 events in 162.138977ms
Burst completed: 1000 events in 156.657194ms
Burst test completed: 10000 events in 15.07982611s
Events/sec: 663.14
RunMixedReadWriteTest..
=== Mixed Read/Write Test ===
Pre-populating database for read tests...
Mixed test completed: 5000 writes, 5000 reads in 44.903267299s
Combined ops/sec: 222.70
RunQueryTest..
=== Query Test ===
Pre-populating database with 10000 events for query tests...
Query test completed: 3166 queries in 1m0.104195004s
Queries/sec: 52.68
Avg query latency: 125.847553ms
P95 query latency: 148.109766ms
P99 query latency: 212.054697ms
RunConcurrentQueryStoreTest..
=== Concurrent Query/Store Test ===
Pre-populating database with 5000 events for concurrent query/store test...
Concurrent test completed: 11366 operations (1366 queries, 10000 writes) in 1m0.127232573s
Operations/sec: 189.03
Avg latency: 16.671438ms
Avg query latency: 134.993072ms
Avg write latency: 508.703µs
P95 latency: 133.755996ms
P99 latency: 152.790563ms
Pausing 10s before next round...
=== Test round completed ===
=== Starting test round 2/2 ===
RunPeakThroughputTest..
=== Peak Throughput Test ===
Events saved: 10000/10000 (100.0%)
Duration: 9.384548186s
Events/sec: 1065.58
Avg latency: 566.375µs
P90 latency: 738.377µs
P95 latency: 839.679µs
P99 latency: 1.131084ms
Bottom 10% Avg latency: 1.312791ms
RunBurstPatternTest..
=== Burst Pattern Test ===
Burst completed: 1000 events in 166.832259ms
Burst completed: 1000 events in 175.061575ms
Burst completed: 1000 events in 168.897493ms
Burst completed: 1000 events in 167.584171ms
Burst completed: 1000 events in 178.212526ms
Burst completed: 1000 events in 202.208945ms
Burst completed: 1000 events in 154.130024ms
Burst completed: 1000 events in 168.817721ms
Burst completed: 1000 events in 153.032223ms
Burst completed: 1000 events in 154.799008ms
Burst test completed: 10000 events in 15.449161726s
Events/sec: 647.28
RunMixedReadWriteTest..
=== Mixed Read/Write Test ===
Pre-populating database for read tests...
Mixed test completed: 5000 writes, 4582 reads in 1m0.037041762s
Combined ops/sec: 159.60
RunQueryTest..
=== Query Test ===
Pre-populating database with 10000 events for query tests...
Query test completed: 959 queries in 1m0.42440735s
Queries/sec: 15.87
Avg query latency: 418.846875ms
P95 query latency: 473.089327ms
P99 query latency: 650.467474ms
RunConcurrentQueryStoreTest..
=== Concurrent Query/Store Test ===
Pre-populating database with 5000 events for concurrent query/store test...
Concurrent test completed: 10484 operations (484 queries, 10000 writes) in 1m0.283590079s
Operations/sec: 173.91
Avg latency: 17.921964ms
Avg query latency: 381.041592ms
Avg write latency: 346.974µs
P95 latency: 1.269749ms
P99 latency: 399.015222ms
=== Test round completed ===
================================================================================
BENCHMARK REPORT
================================================================================
Test: Peak Throughput
Duration: 9.614321551s
Total Events: 10000
Events/sec: 1040.11
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 118 MB
Avg Latency: 454.784µs
P90 Latency: 596.266µs
P95 Latency: 654.637µs
P99 Latency: 844.569µs
Bottom 10% Avg Latency: 706.219µs
----------------------------------------
Test: Burst Pattern
Duration: 15.07982611s
Total Events: 10000
Events/sec: 663.14
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 162 MB
Avg Latency: 193.914µs
P90 Latency: 255.617µs
P95 Latency: 296.525µs
P99 Latency: 451.81µs
Bottom 10% Avg Latency: 343.222µs
----------------------------------------
Test: Mixed Read/Write
Duration: 44.903267299s
Total Events: 10000
Events/sec: 222.70
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 121 MB
Avg Latency: 9.145633ms
P90 Latency: 19.946513ms
P95 Latency: 21.642655ms
P99 Latency: 23.951572ms
Bottom 10% Avg Latency: 21.861602ms
----------------------------------------
Test: Query Performance
Duration: 1m0.104195004s
Total Events: 3166
Events/sec: 52.68
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 188 MB
Avg Latency: 125.847553ms
P90 Latency: 140.664966ms
P95 Latency: 148.109766ms
P99 Latency: 212.054697ms
Bottom 10% Avg Latency: 164.089129ms
----------------------------------------
Test: Concurrent Query/Store
Duration: 1m0.127232573s
Total Events: 11366
Events/sec: 189.03
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 112 MB
Avg Latency: 16.671438ms
P90 Latency: 122.627849ms
P95 Latency: 133.755996ms
P99 Latency: 152.790563ms
Bottom 10% Avg Latency: 138.087104ms
----------------------------------------
Test: Peak Throughput
Duration: 9.384548186s
Total Events: 10000
Events/sec: 1065.58
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 1441 MB
Avg Latency: 566.375µs
P90 Latency: 738.377µs
P95 Latency: 839.679µs
P99 Latency: 1.131084ms
Bottom 10% Avg Latency: 1.312791ms
----------------------------------------
Test: Burst Pattern
Duration: 15.449161726s
Total Events: 10000
Events/sec: 647.28
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 165 MB
Avg Latency: 186.353µs
P90 Latency: 243.413µs
P95 Latency: 283.06µs
P99 Latency: 440.76µs
Bottom 10% Avg Latency: 324.151µs
----------------------------------------
Test: Mixed Read/Write
Duration: 1m0.037041762s
Total Events: 9582
Events/sec: 159.60
Success Rate: 95.8%
Concurrent Workers: 8
Memory Used: 138 MB
Avg Latency: 16.358228ms
P90 Latency: 37.654373ms
P95 Latency: 40.578604ms
P99 Latency: 46.331181ms
Bottom 10% Avg Latency: 41.76124ms
----------------------------------------
Test: Query Performance
Duration: 1m0.42440735s
Total Events: 959
Events/sec: 15.87
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 110 MB
Avg Latency: 418.846875ms
P90 Latency: 448.809017ms
P95 Latency: 473.089327ms
P99 Latency: 650.467474ms
Bottom 10% Avg Latency: 518.112626ms
----------------------------------------
Test: Concurrent Query/Store
Duration: 1m0.283590079s
Total Events: 10484
Events/sec: 173.91
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 205 MB
Avg Latency: 17.921964ms
P90 Latency: 582.319µs
P95 Latency: 1.269749ms
P99 Latency: 399.015222ms
Bottom 10% Avg Latency: 176.257001ms
----------------------------------------
Report saved to: /tmp/benchmark_khatru-badger_8/benchmark_report.txt
AsciiDoc report saved to: /tmp/benchmark_khatru-badger_8/benchmark_report.adoc
1758364794792663 /tmp/benchmark_khatru-badger_8: Lifetime L0 stalled for: 0s
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/db.go:536 /build/pkg/database/logger.go:57
1758364796617126 /tmp/benchmark_khatru-badger_8:
Level 0 [ ]: NumTables: 00. Size: 0 B of 0 B. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 64 MiB
Level 1 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 2 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 3 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 4 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 5 [B]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 6 [ ]: NumTables: 04. Size: 87 MiB of 87 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 4.0 MiB
Level Done
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/db.go:615 /build/pkg/database/logger.go:57
1758364796621659 /tmp/benchmark_khatru-badger_8: database closed /build/pkg/database/database.go:134
RELAY_NAME: khatru-badger
RELAY_URL: ws://khatru-badger:3334
TEST_TIMESTAMP: 2025-09-20T10:39:56+00:00
BENCHMARK_CONFIG:
Events: 10000
Workers: 8
Duration: 60s
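
The new Query and Concurrent Query/Store tests use the same over-wall-clock arithmetic. Worth noting is the round-two collapse above (52.68 to 15.87 queries/sec, with P95 query latency rising from ~148 ms to ~473 ms): data accumulates across the tests in a run, and the closing Badger stats show Level 6 at 87 MiB versus 41 MiB after the September 12 run, so later queries plausibly scan a substantially larger store. The arithmetic itself, checked against the khatru-badger figures.

package main

import (
	"fmt"
	"time"
)

// queriesPerSec mirrors the Query Test summary: completed queries
// over the test's wall-clock duration.
func queriesPerSec(queries int, wall time.Duration) float64 {
	return float64(queries) / wall.Seconds()
}

func main() {
	// Round 1: 3166 queries in 1m0.104195004s ≈ 52.68/s.
	fmt.Printf("Queries/sec: %.2f\n",
		queriesPerSec(3166, time.Minute+104195004*time.Nanosecond))
	// Round 2: 959 queries in 1m0.42440735s ≈ 15.87/s.
	fmt.Printf("Queries/sec: %.2f\n",
		queriesPerSec(959, time.Minute+424407350*time.Nanosecond))
}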

View File

@@ -0,0 +1,298 @@
Starting Nostr Relay Benchmark
Data Directory: /tmp/benchmark_khatru-sqlite_8
Events: 10000, Workers: 8, Duration: 1m0s
1758363814412229 /tmp/benchmark_khatru-sqlite_8: All 0 tables opened in 0s
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/levels.go:161 /build/pkg/database/logger.go:57
1758363814412803 /tmp/benchmark_khatru-sqlite_8: Discard stats nextEmptySlot: 0
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/discard.go:55 /build/pkg/database/logger.go:57
1758363814412840 /tmp/benchmark_khatru-sqlite_8: Set nextTxnTs to 0
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/db.go:358 /build/pkg/database/logger.go:57
1758363814413123 (*types.Uint32)(0xc0001ea00c)({
value: (uint32) 1
})
/build/pkg/database/migrations.go:65
1758363814413200 migrating to version 1... /build/pkg/database/migrations.go:79
=== Starting test round 1/2 ===
RunPeakThroughputTest..
=== Peak Throughput Test ===
Events saved: 10000/10000 (100.0%)
Duration: 9.044789549s
Events/sec: 1105.61
Avg latency: 458.035µs
P90 latency: 601.736µs
P95 latency: 660.608µs
P99 latency: 844.108µs
Bottom 10% Avg latency: 702.193µs
RunBurstPatternTest..
=== Burst Pattern Test ===
Burst completed: 1000 events in 146.610877ms
Burst completed: 1000 events in 179.229665ms
Burst completed: 1000 events in 157.096919ms
Burst completed: 1000 events in 164.796374ms
Burst completed: 1000 events in 188.464354ms
Burst completed: 1000 events in 196.529596ms
Burst completed: 1000 events in 169.425581ms
Burst completed: 1000 events in 147.99354ms
Burst completed: 1000 events in 157.996252ms
Burst completed: 1000 events in 167.299262ms
Burst test completed: 10000 events in 16.003207139s
Events/sec: 624.87
RunMixedReadWriteTest..
=== Mixed Read/Write Test ===
Pre-populating database for read tests...
Mixed test completed: 5000 writes, 5000 reads in 46.924555793s
Combined ops/sec: 213.11
RunQueryTest..
=== Query Test ===
Pre-populating database with 10000 events for query tests...
Query test completed: 3052 queries in 1m0.102264s
Queries/sec: 50.78
Avg query latency: 128.464192ms
P95 query latency: 148.086431ms
P99 query latency: 219.275394ms
RunConcurrentQueryStoreTest..
=== Concurrent Query/Store Test ===
Pre-populating database with 5000 events for concurrent query/store test...
Concurrent test completed: 11296 operations (1296 queries, 10000 writes) in 1m0.108871986s
Operations/sec: 187.93
Avg latency: 16.71621ms
Avg query latency: 142.320434ms
Avg write latency: 437.903µs
P95 latency: 141.357185ms
P99 latency: 163.50992ms
Pausing 10s before next round...
=== Test round completed ===
=== Starting test round 2/2 ===
RunPeakThroughputTest..
=== Peak Throughput Test ===
Events saved: 10000/10000 (100.0%)
Duration: 9.344884331s
Events/sec: 1070.10
Avg latency: 578.453µs
P90 latency: 742.585µs
P95 latency: 849.679µs
P99 latency: 1.122058ms
Bottom 10% Avg latency: 1.362355ms
RunBurstPatternTest..
=== Burst Pattern Test ===
Burst completed: 1000 events in 185.472655ms
Burst completed: 1000 events in 194.135516ms
Burst completed: 1000 events in 176.056931ms
Burst completed: 1000 events in 161.500315ms
Burst completed: 1000 events in 157.673837ms
Burst completed: 1000 events in 167.130208ms
Burst completed: 1000 events in 182.164655ms
Burst completed: 1000 events in 156.589581ms
Burst completed: 1000 events in 154.419949ms
Burst completed: 1000 events in 158.445927ms
Burst test completed: 10000 events in 15.587711126s
Events/sec: 641.53
RunMixedReadWriteTest..
=== Mixed Read/Write Test ===
Pre-populating database for read tests...
Mixed test completed: 5000 writes, 4405 reads in 1m0.043842569s
Combined ops/sec: 156.64
RunQueryTest..
=== Query Test ===
Pre-populating database with 10000 events for query tests...
Query test completed: 915 queries in 1m0.3452177s
Queries/sec: 15.16
Avg query latency: 435.125142ms
P95 query latency: 520.311963ms
P99 query latency: 618.85899ms
RunConcurrentQueryStoreTest..
=== Concurrent Query/Store Test ===
Pre-populating database with 5000 events for concurrent query/store test...
Concurrent test completed: 10489 operations (489 queries, 10000 writes) in 1m0.27235761s
Operations/sec: 174.03
Avg latency: 18.043774ms
Avg query latency: 379.681531ms
Avg write latency: 359.688µs
P95 latency: 1.316628ms
P99 latency: 400.223248ms
=== Test round completed ===
================================================================================
BENCHMARK REPORT
================================================================================
Test: Peak Throughput
Duration: 9.044789549s
Total Events: 10000
Events/sec: 1105.61
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 144 MB
Avg Latency: 458.035µs
P90 Latency: 601.736µs
P95 Latency: 660.608µs
P99 Latency: 844.108µs
Bottom 10% Avg Latency: 702.193µs
----------------------------------------
Test: Burst Pattern
Duration: 16.003207139s
Total Events: 10000
Events/sec: 624.87
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 89 MB
Avg Latency: 193.997µs
P90 Latency: 261.969µs
P95 Latency: 302.666µs
P99 Latency: 431.933µs
Bottom 10% Avg Latency: 334.383µs
----------------------------------------
Test: Mixed Read/Write
Duration: 46.924555793s
Total Events: 10000
Events/sec: 213.11
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 96 MB
Avg Latency: 9.781737ms
P90 Latency: 21.91971ms
P95 Latency: 23.653412ms
P99 Latency: 27.511972ms
Bottom 10% Avg Latency: 24.396695ms
----------------------------------------
Test: Query Performance
Duration: 1m0.102264s
Total Events: 3052
Events/sec: 50.78
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 209 MB
Avg Latency: 128.464192ms
P90 Latency: 142.195039ms
P95 Latency: 148.086431ms
P99 Latency: 219.275394ms
Bottom 10% Avg Latency: 162.874217ms
----------------------------------------
Test: Concurrent Query/Store
Duration: 1m0.108871986s
Total Events: 11296
Events/sec: 187.93
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 159 MB
Avg Latency: 16.71621ms
P90 Latency: 127.287246ms
P95 Latency: 141.357185ms
P99 Latency: 163.50992ms
Bottom 10% Avg Latency: 145.199189ms
----------------------------------------
Test: Peak Throughput
Duration: 9.344884331s
Total Events: 10000
Events/sec: 1070.10
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 1441 MB
Avg Latency: 578.453µs
P90 Latency: 742.585µs
P95 Latency: 849.679µs
P99 Latency: 1.122058ms
Bottom 10% Avg Latency: 1.362355ms
----------------------------------------
Test: Burst Pattern
Duration: 15.587711126s
Total Events: 10000
Events/sec: 641.53
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 141 MB
Avg Latency: 190.235µs
P90 Latency: 254.795µs
P95 Latency: 290.563µs
P99 Latency: 437.323µs
Bottom 10% Avg Latency: 328.752µs
----------------------------------------
Test: Mixed Read/Write
Duration: 1m0.043842569s
Total Events: 9405
Events/sec: 156.64
Success Rate: 94.0%
Concurrent Workers: 8
Memory Used: 105 MB
Avg Latency: 16.852438ms
P90 Latency: 39.677855ms
P95 Latency: 42.553634ms
P99 Latency: 48.262077ms
Bottom 10% Avg Latency: 43.994063ms
----------------------------------------
Test: Query Performance
Duration: 1m0.3452177s
Total Events: 915
Events/sec: 15.16
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 157 MB
Avg Latency: 435.125142ms
P90 Latency: 482.304439ms
P95 Latency: 520.311963ms
P99 Latency: 618.85899ms
Bottom 10% Avg Latency: 545.670939ms
----------------------------------------
Test: Concurrent Query/Store
Duration: 1m0.27235761s
Total Events: 10489
Events/sec: 174.03
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 132 MB
Avg Latency: 18.043774ms
P90 Latency: 583.962µs
P95 Latency: 1.316628ms
P99 Latency: 400.223248ms
Bottom 10% Avg Latency: 177.440946ms
----------------------------------------
Report saved to: /tmp/benchmark_khatru-sqlite_8/benchmark_report.txt
AsciiDoc report saved to: /tmp/benchmark_khatru-sqlite_8/benchmark_report.adoc
1758364302230610/tmp/benchmark_khatru-sqlite_8: Lifetime L0 stalled for: 0s
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/db.go:536 /build/pkg/database/logger.go:57
1758364304057942/tmp/benchmark_khatru-sqlite_8:
Level 0 [ ]: NumTables: 00. Size: 0 B of 0 B. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 64 MiB
Level 1 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 2 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 3 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 4 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 5 [B]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 6 [ ]: NumTables: 04. Size: 87 MiB of 87 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 4.0 MiB
Level Done
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/db.go:615 /build/pkg/database/logger.go:57
1758364304063521/tmp/benchmark_khatru-sqlite_8: database closed /build/pkg/database/database.go:134
RELAY_NAME: khatru-sqlite
RELAY_URL: ws://khatru-sqlite:3334
TEST_TIMESTAMP: 2025-09-20T10:31:44+00:00
BENCHMARK_CONFIG:
Events: 10000
Workers: 8
Duration: 60s


@@ -0,0 +1,298 @@
Starting Nostr Relay Benchmark
Data Directory: /tmp/benchmark_next-orly_8
Events: 10000, Workers: 8, Duration: 1m0s
1758363321263384/tmp/benchmark_next-orly_8: All 0 tables opened in 0s
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/levels.go:161 /build/pkg/database/logger.go:57
1758363321263864/tmp/benchmark_next-orly_8: Discard stats nextEmptySlot: 0
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/discard.go:55 /build/pkg/database/logger.go:57
1758363321263887/tmp/benchmark_next-orly_8: Set nextTxnTs to 0
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/db.go:358 /build/pkg/database/logger.go:57
1758363321264128(*types.Uint32)(0xc0001f7ffc)({
value: (uint32) 1
})
/build/pkg/database/migrations.go:65
1758363321264177migrating to version 1... /build/pkg/database/migrations.go:79
=== Starting test round 1/2 ===
RunPeakThroughputTest..
=== Peak Throughput Test ===
Events saved: 10000/10000 (100.0%)
Duration: 9.657904043s
Events/sec: 1035.42
Avg latency: 470.069µs
P90 latency: 628.167µs
P95 latency: 693.101µs
P99 latency: 922.357µs
Bottom 10% Avg latency: 750.491µs
RunBurstPatternTest..
=== Burst Pattern Test ===
Burst completed: 1000 events in 175.034134ms
Burst completed: 1000 events in 150.401771ms
Burst completed: 1000 events in 168.992305ms
Burst completed: 1000 events in 179.447581ms
Burst completed: 1000 events in 165.602457ms
Burst completed: 1000 events in 178.649561ms
Burst completed: 1000 events in 195.002303ms
Burst completed: 1000 events in 168.970954ms
Burst completed: 1000 events in 150.818413ms
Burst completed: 1000 events in 185.285662ms
Burst test completed: 10000 events in 15.169978801s
Events/sec: 659.20
RunMixedReadWriteTest..
=== Mixed Read/Write Test ===
Pre-populating database for read tests...
Mixed test completed: 5000 writes, 5000 reads in 45.597478865s
Combined ops/sec: 219.31
RunQueryTest..
=== Query Test ===
Pre-populating database with 10000 events for query tests...
Query test completed: 3151 queries in 1m0.067849757s
Queries/sec: 52.46
Avg query latency: 126.38548ms
P95 query latency: 149.976367ms
P99 query latency: 205.807461ms
RunConcurrentQueryStoreTest..
=== Concurrent Query/Store Test ===
Pre-populating database with 5000 events for concurrent query/store test...
Concurrent test completed: 11325 operations (1325 queries, 10000 writes) in 1m0.081967157s
Operations/sec: 188.49
Avg latency: 16.694154ms
Avg query latency: 139.524748ms
Avg write latency: 419.1µs
P95 latency: 138.688202ms
P99 latency: 158.824742ms
Pausing 10s before next round...
=== Test round completed ===
=== Starting test round 2/2 ===
RunPeakThroughputTest..
=== Peak Throughput Test ===
Events saved: 10000/10000 (100.0%)
Duration: 9.136097148s
Events/sec: 1094.56
Avg latency: 510.7µs
P90 latency: 636.763µs
P95 latency: 705.564µs
P99 latency: 922.777µs
Bottom 10% Avg latency: 1.094965ms
RunBurstPatternTest..
=== Burst Pattern Test ===
Burst completed: 1000 events in 176.337148ms
Burst completed: 1000 events in 177.351251ms
Burst completed: 1000 events in 181.515292ms
Burst completed: 1000 events in 164.043866ms
Burst completed: 1000 events in 152.697196ms
Burst completed: 1000 events in 144.231922ms
Burst completed: 1000 events in 162.606659ms
Burst completed: 1000 events in 137.485182ms
Burst completed: 1000 events in 163.19487ms
Burst completed: 1000 events in 147.900339ms
Burst test completed: 10000 events in 15.514130113s
Events/sec: 644.57
RunMixedReadWriteTest..
=== Mixed Read/Write Test ===
Pre-populating database for read tests...
Mixed test completed: 5000 writes, 4489 reads in 1m0.036174989s
Combined ops/sec: 158.05
RunQueryTest..
=== Query Test ===
Pre-populating database with 10000 events for query tests...
Query test completed: 900 queries in 1m0.304636826s
Queries/sec: 14.92
Avg query latency: 444.57989ms
P95 query latency: 547.598358ms
P99 query latency: 660.926147ms
RunConcurrentQueryStoreTest..
=== Concurrent Query/Store Test ===
Pre-populating database with 5000 events for concurrent query/store test...
Concurrent test completed: 10462 operations (462 queries, 10000 writes) in 1m0.362856212s
Operations/sec: 173.32
Avg latency: 17.808607ms
Avg query latency: 395.594177ms
Avg write latency: 354.914µs
P95 latency: 1.221657ms
P99 latency: 411.642669ms
=== Test round completed ===
================================================================================
BENCHMARK REPORT
================================================================================
Test: Peak Throughput
Duration: 9.657904043s
Total Events: 10000
Events/sec: 1035.42
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 144 MB
Avg Latency: 470.069µs
P90 Latency: 628.167µs
P95 Latency: 693.101µs
P99 Latency: 922.357µs
Bottom 10% Avg Latency: 750.491µs
----------------------------------------
Test: Burst Pattern
Duration: 15.169978801s
Total Events: 10000
Events/sec: 659.20
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 135 MB
Avg Latency: 190.573µs
P90 Latency: 252.701µs
P95 Latency: 289.761µs
P99 Latency: 408.147µs
Bottom 10% Avg Latency: 316.797µs
----------------------------------------
Test: Mixed Read/Write
Duration: 45.597478865s
Total Events: 10000
Events/sec: 219.31
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 119 MB
Avg Latency: 9.381158ms
P90 Latency: 20.487026ms
P95 Latency: 22.450848ms
P99 Latency: 24.696325ms
Bottom 10% Avg Latency: 22.632933ms
----------------------------------------
Test: Query Performance
Duration: 1m0.067849757s
Total Events: 3151
Events/sec: 52.46
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 145 MB
Avg Latency: 126.38548ms
P90 Latency: 142.39268ms
P95 Latency: 149.976367ms
P99 Latency: 205.807461ms
Bottom 10% Avg Latency: 162.636454ms
----------------------------------------
Test: Concurrent Query/Store
Duration: 1m0.081967157s
Total Events: 11325
Events/sec: 188.49
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 194 MB
Avg Latency: 16.694154ms
P90 Latency: 125.314618ms
P95 Latency: 138.688202ms
P99 Latency: 158.824742ms
Bottom 10% Avg Latency: 142.699977ms
----------------------------------------
Test: Peak Throughput
Duration: 9.136097148s
Total Events: 10000
Events/sec: 1094.56
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 1441 MB
Avg Latency: 510.7µs
P90 Latency: 636.763µs
P95 Latency: 705.564µs
P99 Latency: 922.777µs
Bottom 10% Avg Latency: 1.094965ms
----------------------------------------
Test: Burst Pattern
Duration: 15.514130113s
Total Events: 10000
Events/sec: 644.57
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 138 MB
Avg Latency: 230.062µs
P90 Latency: 316.624µs
P95 Latency: 389.882µs
P99 Latency: 859.548µs
Bottom 10% Avg Latency: 529.836µs
----------------------------------------
Test: Mixed Read/Write
Duration: 1m0.036174989s
Total Events: 9489
Events/sec: 158.05
Success Rate: 94.9%
Concurrent Workers: 8
Memory Used: 182 MB
Avg Latency: 16.56372ms
P90 Latency: 38.24931ms
P95 Latency: 41.187306ms
P99 Latency: 46.02529ms
Bottom 10% Avg Latency: 42.131189ms
----------------------------------------
Test: Query Performance
Duration: 1m0.304636826s
Total Events: 900
Events/sec: 14.92
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 141 MB
Avg Latency: 444.57989ms
P90 Latency: 490.730651ms
P95 Latency: 547.598358ms
P99 Latency: 660.926147ms
Bottom 10% Avg Latency: 563.628707ms
----------------------------------------
Test: Concurrent Query/Store
Duration: 1m0.362856212s
Total Events: 10462
Events/sec: 173.32
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 152 MB
Avg Latency: 17.808607ms
P90 Latency: 631.703µs
P95 Latency: 1.221657ms
P99 Latency: 411.642669ms
Bottom 10% Avg Latency: 175.052418ms
----------------------------------------
Report saved to: /tmp/benchmark_next-orly_8/benchmark_report.txt
AsciiDoc report saved to: /tmp/benchmark_next-orly_8/benchmark_report.adoc
1758363807245770/tmp/benchmark_next-orly_8: Lifetime L0 stalled for: 0s
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/db.go:536 /build/pkg/database/logger.go:57
1758363809118416/tmp/benchmark_next-orly_8:
Level 0 [ ]: NumTables: 00. Size: 0 B of 0 B. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 64 MiB
Level 1 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 2 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 3 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 4 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 5 [B]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 6 [ ]: NumTables: 04. Size: 87 MiB of 87 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 4.0 MiB
Level Done
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/db.go:615 /build/pkg/database/logger.go:57
1758363809123697/tmp/benchmark_next-orly_8: database closed /build/pkg/database/database.go:134
RELAY_NAME: next-orly
RELAY_URL: ws://next-orly:8080
TEST_TIMESTAMP: 2025-09-20T10:23:29+00:00
BENCHMARK_CONFIG:
Events: 10000
Workers: 8
Duration: 60s


@@ -0,0 +1,298 @@
Starting Nostr Relay Benchmark
Data Directory: /tmp/benchmark_nostr-rs-relay_8
Events: 10000, Workers: 8, Duration: 1m0s
1758365785928076/tmp/benchmark_nostr-rs-relay_8: All 0 tables opened in 0s
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/levels.go:161 /build/pkg/database/logger.go:57
1758365785929028/tmp/benchmark_nostr-rs-relay_8: Discard stats nextEmptySlot: 0
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/discard.go:55 /build/pkg/database/logger.go:57
1758365785929097/tmp/benchmark_nostr-rs-relay_8: Set nextTxnTs to 0
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/db.go:358 /build/pkg/database/logger.go:57
1758365785929509(*types.Uint32)(0xc0001c820c)({
value: (uint32) 1
})
/build/pkg/database/migrations.go:65
1758365785929573migrating to version 1... /build/pkg/database/migrations.go:79
=== Starting test round 1/2 ===
RunPeakThroughputTest..
=== Peak Throughput Test ===
Events saved: 10000/10000 (100.0%)
Duration: 8.897492256s
Events/sec: 1123.91
Avg latency: 416.753µs
P90 latency: 546.351µs
P95 latency: 597.338µs
P99 latency: 760.549µs
Bottom 10% Avg latency: 638.318µs
RunBurstPatternTest..
=== Burst Pattern Test ===
Burst completed: 1000 events in 158.263016ms
Burst completed: 1000 events in 181.558983ms
Burst completed: 1000 events in 155.219861ms
Burst completed: 1000 events in 183.834156ms
Burst completed: 1000 events in 192.398437ms
Burst completed: 1000 events in 176.450074ms
Burst completed: 1000 events in 175.050138ms
Burst completed: 1000 events in 178.883047ms
Burst completed: 1000 events in 180.74321ms
Burst completed: 1000 events in 169.39146ms
Burst test completed: 10000 events in 15.441062872s
Events/sec: 647.62
RunMixedReadWriteTest..
=== Mixed Read/Write Test ===
Pre-populating database for read tests...
Mixed test completed: 5000 writes, 5000 reads in 45.847091984s
Combined ops/sec: 218.12
RunQueryTest..
=== Query Test ===
Pre-populating database with 10000 events for query tests...
Query test completed: 3229 queries in 1m0.085047549s
Queries/sec: 53.74
Avg query latency: 123.209617ms
P95 query latency: 141.745618ms
P99 query latency: 154.527843ms
RunConcurrentQueryStoreTest..
=== Concurrent Query/Store Test ===
Pre-populating database with 5000 events for concurrent query/store test...
Concurrent test completed: 11298 operations (1298 queries, 10000 writes) in 1m0.096751583s
Operations/sec: 188.00
Avg latency: 16.447175ms
Avg query latency: 139.791065ms
Avg write latency: 437.138µs
P95 latency: 137.879538ms
P99 latency: 162.020385ms
Pausing 10s before next round...
=== Test round completed ===
=== Starting test round 2/2 ===
RunPeakThroughputTest..
=== Peak Throughput Test ===
Events saved: 10000/10000 (100.0%)
Duration: 9.674593819s
Events/sec: 1033.64
Avg latency: 541.545µs
P90 latency: 693.862µs
P95 latency: 775.757µs
P99 latency: 1.05005ms
Bottom 10% Avg latency: 1.219386ms
RunBurstPatternTest..
=== Burst Pattern Test ===
Burst completed: 1000 events in 168.056064ms
Burst completed: 1000 events in 159.819647ms
Burst completed: 1000 events in 147.500264ms
Burst completed: 1000 events in 159.150392ms
Burst completed: 1000 events in 149.954829ms
Burst completed: 1000 events in 138.082938ms
Burst completed: 1000 events in 157.234213ms
Burst completed: 1000 events in 158.468955ms
Burst completed: 1000 events in 144.346047ms
Burst completed: 1000 events in 154.930576ms
Burst test completed: 10000 events in 15.646785427s
Events/sec: 639.11
RunMixedReadWriteTest..
=== Mixed Read/Write Test ===
Pre-populating database for read tests...
Mixed test completed: 5000 writes, 4415 reads in 1m0.02899167s
Combined ops/sec: 156.84
RunQueryTest..
=== Query Test ===
Pre-populating database with 10000 events for query tests...
Query test completed: 890 queries in 1m0.279192867s
Queries/sec: 14.76
Avg query latency: 448.809547ms
P95 query latency: 607.28509ms
P99 query latency: 786.387053ms
RunConcurrentQueryStoreTest..
=== Concurrent Query/Store Test ===
Pre-populating database with 5000 events for concurrent query/store test...
Concurrent test completed: 10469 operations (469 queries, 10000 writes) in 1m0.190785048s
Operations/sec: 173.93
Avg latency: 17.73903ms
Avg query latency: 388.59336ms
Avg write latency: 345.962µs
P95 latency: 1.158136ms
P99 latency: 407.947907ms
=== Test round completed ===
================================================================================
BENCHMARK REPORT
================================================================================
Test: Peak Throughput
Duration: 8.897492256s
Total Events: 10000
Events/sec: 1123.91
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 132 MB
Avg Latency: 416.753µs
P90 Latency: 546.351µs
P95 Latency: 597.338µs
P99 Latency: 760.549µs
Bottom 10% Avg Latency: 638.318µs
----------------------------------------
Test: Burst Pattern
Duration: 15.441062872s
Total Events: 10000
Events/sec: 647.62
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 104 MB
Avg Latency: 185.217µs
P90 Latency: 241.64µs
P95 Latency: 273.191µs
P99 Latency: 412.897µs
Bottom 10% Avg Latency: 306.752µs
----------------------------------------
Test: Mixed Read/Write
Duration: 45.847091984s
Total Events: 10000
Events/sec: 218.12
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 96 MB
Avg Latency: 9.446215ms
P90 Latency: 20.522135ms
P95 Latency: 22.416221ms
P99 Latency: 24.696283ms
Bottom 10% Avg Latency: 22.59535ms
----------------------------------------
Test: Query Performance
Duration: 1m0.085047549s
Total Events: 3229
Events/sec: 53.74
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 175 MB
Avg Latency: 123.209617ms
P90 Latency: 137.629898ms
P95 Latency: 141.745618ms
P99 Latency: 154.527843ms
Bottom 10% Avg Latency: 145.245967ms
----------------------------------------
Test: Concurrent Query/Store
Duration: 1m0.096751583s
Total Events: 11298
Events/sec: 188.00
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 181 MB
Avg Latency: 16.447175ms
P90 Latency: 123.920421ms
P95 Latency: 137.879538ms
P99 Latency: 162.020385ms
Bottom 10% Avg Latency: 142.654147ms
----------------------------------------
Test: Peak Throughput
Duration: 9.674593819s
Total Events: 10000
Events/sec: 1033.64
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 1441 MB
Avg Latency: 541.545µs
P90 Latency: 693.862µs
P95 Latency: 775.757µs
P99 Latency: 1.05005ms
Bottom 10% Avg Latency: 1.219386ms
----------------------------------------
Test: Burst Pattern
Duration: 15.646785427s
Total Events: 10000
Events/sec: 639.11
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 146 MB
Avg Latency: 331.896µs
P90 Latency: 520.511µs
P95 Latency: 864.486µs
P99 Latency: 2.251087ms
Bottom 10% Avg Latency: 1.16922ms
----------------------------------------
Test: Mixed Read/Write
Duration: 1m0.02899167s
Total Events: 9415
Events/sec: 156.84
Success Rate: 94.2%
Concurrent Workers: 8
Memory Used: 147 MB
Avg Latency: 16.723365ms
P90 Latency: 39.058801ms
P95 Latency: 41.904891ms
P99 Latency: 47.156263ms
Bottom 10% Avg Latency: 42.800456ms
----------------------------------------
Test: Query Performance
Duration: 1m0.279192867s
Total Events: 890
Events/sec: 14.76
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 156 MB
Avg Latency: 448.809547ms
P90 Latency: 524.488485ms
P95 Latency: 607.28509ms
P99 Latency: 786.387053ms
Bottom 10% Avg Latency: 634.016595ms
----------------------------------------
Test: Concurrent Query/Store
Duration: 1m0.190785048s
Total Events: 10469
Events/sec: 173.93
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 226 MB
Avg Latency: 17.73903ms
P90 Latency: 561.359µs
P95 Latency: 1.158136ms
P99 Latency: 407.947907ms
Bottom 10% Avg Latency: 174.508065ms
----------------------------------------
Report saved to: /tmp/benchmark_nostr-rs-relay_8/benchmark_report.txt
AsciiDoc report saved to: /tmp/benchmark_nostr-rs-relay_8/benchmark_report.adoc
1758366272164052/tmp/benchmark_nostr-rs-relay_8: Lifetime L0 stalled for: 0s
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/db.go:536 /build/pkg/database/logger.go:57
1758366274030399/tmp/benchmark_nostr-rs-relay_8:
Level 0 [ ]: NumTables: 00. Size: 0 B of 0 B. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 64 MiB
Level 1 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 2 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 3 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 4 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 5 [B]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 6 [ ]: NumTables: 04. Size: 87 MiB of 87 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 4.0 MiB
Level Done
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/db.go:615 /build/pkg/database/logger.go:57
1758366274036413/tmp/benchmark_nostr-rs-relay_8: database closed /build/pkg/database/database.go:134
RELAY_NAME: nostr-rs-relay
RELAY_URL: ws://nostr-rs-relay:8080
TEST_TIMESTAMP: 2025-09-20T11:04:34+00:00
BENCHMARK_CONFIG:
Events: 10000
Workers: 8
Duration: 60s


@@ -0,0 +1,298 @@
Starting Nostr Relay Benchmark
Data Directory: /tmp/benchmark_relayer-basic_8
Events: 10000, Workers: 8, Duration: 1m0s
1758364801895559/tmp/benchmark_relayer-basic_8: All 0 tables opened in 0s
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/levels.go:161 /build/pkg/database/logger.go:57
1758364801896041/tmp/benchmark_relayer-basic_8: Discard stats nextEmptySlot: 0
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/discard.go:55 /build/pkg/database/logger.go:57
1758364801896078/tmp/benchmark_relayer-basic_8: Set nextTxnTs to 0
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/db.go:358 /build/pkg/database/logger.go:57
1758364801896347(*types.Uint32)(0xc0001a801c)({
value: (uint32) 1
})
/build/pkg/database/migrations.go:65
1758364801896400migrating to version 1... /build/pkg/database/migrations.go:79
=== Starting test round 1/2 ===
RunPeakThroughputTest..
=== Peak Throughput Test ===
Events saved: 10000/10000 (100.0%)
Duration: 9.050770003s
Events/sec: 1104.88
Avg latency: 433.89µs
P90 latency: 567.261µs
P95 latency: 617.868µs
P99 latency: 783.593µs
Bottom 10% Avg latency: 653.813µs
RunBurstPatternTest..
=== Burst Pattern Test ===
Burst completed: 1000 events in 183.738134ms
Burst completed: 1000 events in 155.035832ms
Burst completed: 1000 events in 160.066514ms
Burst completed: 1000 events in 183.724238ms
Burst completed: 1000 events in 178.910929ms
Burst completed: 1000 events in 168.905441ms
Burst completed: 1000 events in 172.584809ms
Burst completed: 1000 events in 177.214508ms
Burst completed: 1000 events in 169.921566ms
Burst completed: 1000 events in 162.042488ms
Burst test completed: 10000 events in 15.572250139s
Events/sec: 642.17
RunMixedReadWriteTest..
=== Mixed Read/Write Test ===
Pre-populating database for read tests...
Mixed test completed: 5000 writes, 5000 reads in 44.509677166s
Combined ops/sec: 224.67
RunQueryTest..
=== Query Test ===
Pre-populating database with 10000 events for query tests...
Query test completed: 3253 queries in 1m0.095238426s
Queries/sec: 54.13
Avg query latency: 122.100718ms
P95 query latency: 140.360749ms
P99 query latency: 148.353154ms
RunConcurrentQueryStoreTest..
=== Concurrent Query/Store Test ===
Pre-populating database with 5000 events for concurrent query/store test...
Concurrent test completed: 11408 operations (1408 queries, 10000 writes) in 1m0.117581615s
Operations/sec: 189.76
Avg latency: 16.525268ms
Avg query latency: 130.972853ms
Avg write latency: 411.048µs
P95 latency: 132.130964ms
P99 latency: 146.285305ms
Pausing 10s before next round...
=== Test round completed ===
=== Starting test round 2/2 ===
RunPeakThroughputTest..
=== Peak Throughput Test ===
Events saved: 10000/10000 (100.0%)
Duration: 9.265496879s
Events/sec: 1079.27
Avg latency: 529.266µs
P90 latency: 658.033µs
P95 latency: 732.024µs
P99 latency: 953.285µs
Bottom 10% Avg latency: 1.168714ms
RunBurstPatternTest..
=== Burst Pattern Test ===
Burst completed: 1000 events in 172.300479ms
Burst completed: 1000 events in 149.247397ms
Burst completed: 1000 events in 170.000198ms
Burst completed: 1000 events in 133.786958ms
Burst completed: 1000 events in 172.157036ms
Burst completed: 1000 events in 153.284738ms
Burst completed: 1000 events in 166.711903ms
Burst completed: 1000 events in 170.635427ms
Burst completed: 1000 events in 153.381031ms
Burst completed: 1000 events in 162.125949ms
Burst test completed: 10000 events in 16.674963543s
Events/sec: 599.70
RunMixedReadWriteTest..
=== Mixed Read/Write Test ===
Pre-populating database for read tests...
Mixed test completed: 5000 writes, 4665 reads in 1m0.035358264s
Combined ops/sec: 160.99
RunQueryTest..
=== Query Test ===
Pre-populating database with 10000 events for query tests...
Query test completed: 944 queries in 1m0.383519958s
Queries/sec: 15.63
Avg query latency: 421.75292ms
P95 query latency: 491.340259ms
P99 query latency: 664.614262ms
RunConcurrentQueryStoreTest..
=== Concurrent Query/Store Test ===
Pre-populating database with 5000 events for concurrent query/store test...
Concurrent test completed: 10479 operations (479 queries, 10000 writes) in 1m0.291926697s
Operations/sec: 173.80
Avg latency: 18.049265ms
Avg query latency: 385.864458ms
Avg write latency: 430.918µs
P95 latency: 3.05038ms
P99 latency: 404.540502ms
=== Test round completed ===
================================================================================
BENCHMARK REPORT
================================================================================
Test: Peak Throughput
Duration: 9.050770003s
Total Events: 10000
Events/sec: 1104.88
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 153 MB
Avg Latency: 433.89µs
P90 Latency: 567.261µs
P95 Latency: 617.868µs
P99 Latency: 783.593µs
Bottom 10% Avg Latency: 653.813µs
----------------------------------------
Test: Burst Pattern
Duration: 15.572250139s
Total Events: 10000
Events/sec: 642.17
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 134 MB
Avg Latency: 186.306µs
P90 Latency: 243.995µs
P95 Latency: 279.192µs
P99 Latency: 392.859µs
Bottom 10% Avg Latency: 303.766µs
----------------------------------------
Test: Mixed Read/Write
Duration: 44.509677166s
Total Events: 10000
Events/sec: 224.67
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 163 MB
Avg Latency: 8.892738ms
P90 Latency: 19.406836ms
P95 Latency: 21.247322ms
P99 Latency: 23.452072ms
Bottom 10% Avg Latency: 21.397913ms
----------------------------------------
Test: Query Performance
Duration: 1m0.095238426s
Total Events: 3253
Events/sec: 54.13
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 126 MB
Avg Latency: 122.100718ms
P90 Latency: 136.523661ms
P95 Latency: 140.360749ms
P99 Latency: 148.353154ms
Bottom 10% Avg Latency: 142.067372ms
----------------------------------------
Test: Concurrent Query/Store
Duration: 1m0.117581615s
Total Events: 11408
Events/sec: 189.76
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 149 MB
Avg Latency: 16.525268ms
P90 Latency: 121.696848ms
P95 Latency: 132.130964ms
P99 Latency: 146.285305ms
Bottom 10% Avg Latency: 134.054744ms
----------------------------------------
Test: Peak Throughput
Duration: 9.265496879s
Total Events: 10000
Events/sec: 1079.27
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 1441 MB
Avg Latency: 529.266µs
P90 Latency: 658.033µs
P95 Latency: 732.024µs
P99 Latency: 953.285µs
Bottom 10% Avg Latency: 1.168714ms
----------------------------------------
Test: Burst Pattern
Duration: 16.674963543s
Total Events: 10000
Events/sec: 599.70
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 142 MB
Avg Latency: 264.288µs
P90 Latency: 350.187µs
P95 Latency: 519.139µs
P99 Latency: 1.961326ms
Bottom 10% Avg Latency: 877.366µs
----------------------------------------
Test: Mixed Read/Write
Duration: 1m0.035358264s
Total Events: 9665
Events/sec: 160.99
Success Rate: 96.7%
Concurrent Workers: 8
Memory Used: 151 MB
Avg Latency: 16.019245ms
P90 Latency: 36.340362ms
P95 Latency: 39.113864ms
P99 Latency: 44.271098ms
Bottom 10% Avg Latency: 40.108462ms
----------------------------------------
Test: Query Performance
Duration: 1m0.383519958s
Total Events: 944
Events/sec: 15.63
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 280 MB
Avg Latency: 421.75292ms
P90 Latency: 460.902551ms
P95 Latency: 491.340259ms
P99 Latency: 664.614262ms
Bottom 10% Avg Latency: 538.014725ms
----------------------------------------
Test: Concurrent Query/Store
Duration: 1m0.291926697s
Total Events: 10479
Events/sec: 173.80
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 122 MB
Avg Latency: 18.049265ms
P90 Latency: 843.867µs
P95 Latency: 3.05038ms
P99 Latency: 404.540502ms
Bottom 10% Avg Latency: 177.245211ms
----------------------------------------
Report saved to: /tmp/benchmark_relayer-basic_8/benchmark_report.txt
AsciiDoc report saved to: /tmp/benchmark_relayer-basic_8/benchmark_report.adoc
1758365287933287/tmp/benchmark_relayer-basic_8: Lifetime L0 stalled for: 0s
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/db.go:536 /build/pkg/database/logger.go:57
1758365289807797/tmp/benchmark_relayer-basic_8:
Level 0 [ ]: NumTables: 00. Size: 0 B of 0 B. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 64 MiB
Level 1 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 2 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 3 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 4 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 5 [B]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 6 [ ]: NumTables: 04. Size: 87 MiB of 87 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 4.0 MiB
Level Done
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/db.go:615 /build/pkg/database/logger.go:57
1758365289812921/tmp/benchmark_relayer-basic_8: database closed /build/pkg/database/database.go:134
RELAY_NAME: relayer-basic
RELAY_URL: ws://relayer-basic:7447
TEST_TIMESTAMP: 2025-09-20T10:48:10+00:00
BENCHMARK_CONFIG:
Events: 10000
Workers: 8
Duration: 60s


@@ -0,0 +1,298 @@
Starting Nostr Relay Benchmark
Data Directory: /tmp/benchmark_strfry_8
Events: 10000, Workers: 8, Duration: 1m0s
1758365295110579/tmp/benchmark_strfry_8: All 0 tables opened in 0s
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/levels.go:161 /build/pkg/database/logger.go:57
1758365295111085/tmp/benchmark_strfry_8: Discard stats nextEmptySlot: 0
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/discard.go:55 /build/pkg/database/logger.go:57
1758365295111113/tmp/benchmark_strfry_8: Set nextTxnTs to 0
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/db.go:358 /build/pkg/database/logger.go:57
1758365295111319(*types.Uint32)(0xc000141a3c)({
value: (uint32) 1
})
/build/pkg/database/migrations.go:65
1758365295111354migrating to version 1... /build/pkg/database/migrations.go:79
=== Starting test round 1/2 ===
RunPeakThroughputTest..
=== Peak Throughput Test ===
Events saved: 10000/10000 (100.0%)
Duration: 9.170212358s
Events/sec: 1090.49
Avg latency: 448.058µs
P90 latency: 597.558µs
P95 latency: 667.141µs
P99 latency: 920.784µs
Bottom 10% Avg latency: 729.464µs
RunBurstPatternTest..
=== Burst Pattern Test ===
Burst completed: 1000 events in 172.138862ms
Burst completed: 1000 events in 168.99322ms
Burst completed: 1000 events in 162.213786ms
Burst completed: 1000 events in 161.027417ms
Burst completed: 1000 events in 183.148824ms
Burst completed: 1000 events in 178.152837ms
Burst completed: 1000 events in 158.65623ms
Burst completed: 1000 events in 186.7166ms
Burst completed: 1000 events in 177.202878ms
Burst completed: 1000 events in 182.780071ms
Burst test completed: 10000 events in 15.336760896s
Events/sec: 652.03
RunMixedReadWriteTest..
=== Mixed Read/Write Test ===
Pre-populating database for read tests...
Mixed test completed: 5000 writes, 5000 reads in 44.257468151s
Combined ops/sec: 225.95
RunQueryTest..
=== Query Test ===
Pre-populating database with 10000 events for query tests...
Query test completed: 3002 queries in 1m0.091429487s
Queries/sec: 49.96
Avg query latency: 131.632043ms
P95 query latency: 175.810416ms
P99 query latency: 228.52716ms
RunConcurrentQueryStoreTest..
=== Concurrent Query/Store Test ===
Pre-populating database with 5000 events for concurrent query/store test...
Concurrent test completed: 11308 operations (1308 queries, 10000 writes) in 1m0.111257202s
Operations/sec: 188.12
Avg latency: 16.193707ms
Avg query latency: 137.019852ms
Avg write latency: 389.647µs
P95 latency: 136.70132ms
P99 latency: 156.996779ms
Pausing 10s before next round...
=== Test round completed ===
=== Starting test round 2/2 ===
RunPeakThroughputTest..
=== Peak Throughput Test ===
Events saved: 10000/10000 (100.0%)
Duration: 9.102738s
Events/sec: 1098.57
Avg latency: 493.093µs
P90 latency: 605.684µs
P95 latency: 659.477µs
P99 latency: 826.344µs
Bottom 10% Avg latency: 1.097884ms
RunBurstPatternTest..
=== Burst Pattern Test ===
Burst completed: 1000 events in 178.755916ms
Burst completed: 1000 events in 170.810722ms
Burst completed: 1000 events in 166.730701ms
Burst completed: 1000 events in 172.177576ms
Burst completed: 1000 events in 164.907178ms
Burst completed: 1000 events in 153.267727ms
Burst completed: 1000 events in 157.855743ms
Burst completed: 1000 events in 159.632496ms
Burst completed: 1000 events in 160.802526ms
Burst completed: 1000 events in 178.513954ms
Burst test completed: 10000 events in 15.535933443s
Events/sec: 643.67
RunMixedReadWriteTest..
=== Mixed Read/Write Test ===
Pre-populating database for read tests...
Mixed test completed: 5000 writes, 4550 reads in 1m0.032080518s
Combined ops/sec: 159.08
RunQueryTest..
=== Query Test ===
Pre-populating database with 10000 events for query tests...
Query test completed: 913 queries in 1m0.248877091s
Queries/sec: 15.15
Avg query latency: 436.472206ms
P95 query latency: 493.12732ms
P99 query latency: 623.201275ms
RunConcurrentQueryStoreTest..
=== Concurrent Query/Store Test ===
Pre-populating database with 5000 events for concurrent query/store test...
Concurrent test completed: 10470 operations (470 queries, 10000 writes) in 1m0.293280495s
Operations/sec: 173.65
Avg latency: 18.084009ms
Avg query latency: 395.171481ms
Avg write latency: 360.898µs
P95 latency: 1.338148ms
P99 latency: 413.21015ms
=== Test round completed ===
================================================================================
BENCHMARK REPORT
================================================================================
Test: Peak Throughput
Duration: 9.170212358s
Total Events: 10000
Events/sec: 1090.49
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 108 MB
Avg Latency: 448.058µs
P90 Latency: 597.558µs
P95 Latency: 667.141µs
P99 Latency: 920.784µs
Bottom 10% Avg Latency: 729.464µs
----------------------------------------
Test: Burst Pattern
Duration: 15.336760896s
Total Events: 10000
Events/sec: 652.03
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 123 MB
Avg Latency: 189.06µs
P90 Latency: 248.714µs
P95 Latency: 290.433µs
P99 Latency: 416.924µs
Bottom 10% Avg Latency: 324.174µs
----------------------------------------
Test: Mixed Read/Write
Duration: 44.257468151s
Total Events: 10000
Events/sec: 225.95
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 158 MB
Avg Latency: 8.745534ms
P90 Latency: 18.980294ms
P95 Latency: 20.822884ms
P99 Latency: 23.124918ms
Bottom 10% Avg Latency: 21.006886ms
----------------------------------------
Test: Query Performance
Duration: 1m0.091429487s
Total Events: 3002
Events/sec: 49.96
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 191 MB
Avg Latency: 131.632043ms
P90 Latency: 152.618309ms
P95 Latency: 175.810416ms
P99 Latency: 228.52716ms
Bottom 10% Avg Latency: 186.230874ms
----------------------------------------
Test: Concurrent Query/Store
Duration: 1m0.111257202s
Total Events: 11308
Events/sec: 188.12
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 146 MB
Avg Latency: 16.193707ms
P90 Latency: 122.204256ms
P95 Latency: 136.70132ms
P99 Latency: 156.996779ms
Bottom 10% Avg Latency: 140.031139ms
----------------------------------------
Test: Peak Throughput
Duration: 9.102738s
Total Events: 10000
Events/sec: 1098.57
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 1441 MB
Avg Latency: 493.093µs
P90 Latency: 605.684µs
P95 Latency: 659.477µs
P99 Latency: 826.344µs
Bottom 10% Avg Latency: 1.097884ms
----------------------------------------
Test: Burst Pattern
Duration: 15.535933443s
Total Events: 10000
Events/sec: 643.67
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 130 MB
Avg Latency: 186.177µs
P90 Latency: 243.915µs
P95 Latency: 276.146µs
P99 Latency: 418.787µs
Bottom 10% Avg Latency: 309.015µs
----------------------------------------
Test: Mixed Read/Write
Duration: 1m0.032080518s
Total Events: 9550
Events/sec: 159.08
Success Rate: 95.5%
Concurrent Workers: 8
Memory Used: 115 MB
Avg Latency: 16.401942ms
P90 Latency: 37.575878ms
P95 Latency: 40.323279ms
P99 Latency: 45.453669ms
Bottom 10% Avg Latency: 41.331235ms
----------------------------------------
Test: Query Performance
Duration: 1m0.248877091s
Total Events: 913
Events/sec: 15.15
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 211 MB
Avg Latency: 436.472206ms
P90 Latency: 474.430346ms
P95 Latency: 493.12732ms
P99 Latency: 623.201275ms
Bottom 10% Avg Latency: 523.084076ms
----------------------------------------
Test: Concurrent Query/Store
Duration: 1m0.293280495s
Total Events: 10470
Events/sec: 173.65
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 171 MB
Avg Latency: 18.084009ms
P90 Latency: 624.339µs
P95 Latency: 1.338148ms
P99 Latency: 413.21015ms
Bottom 10% Avg Latency: 177.8924ms
----------------------------------------
Report saved to: /tmp/benchmark_strfry_8/benchmark_report.txt
AsciiDoc report saved to: /tmp/benchmark_strfry_8/benchmark_report.adoc
1758365779337138/tmp/benchmark_strfry_8: Lifetime L0 stalled for: 0s
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/db.go:536 /build/pkg/database/logger.go:57
1758365780726692/tmp/benchmark_strfry_8:
Level 0 [ ]: NumTables: 00. Size: 0 B of 0 B. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 64 MiB
Level 1 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 2 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 3 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 4 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 5 [B]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 6 [ ]: NumTables: 04. Size: 87 MiB of 87 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 4.0 MiB
Level Done
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/db.go:615 /build/pkg/database/logger.go:57
1758365780732292/tmp/benchmark_strfry_8: database closed /build/pkg/database/database.go:134
RELAY_NAME: strfry
RELAY_URL: ws://strfry:8080
TEST_TIMESTAMP: 2025-09-20T10:56:20+00:00
BENCHMARK_CONFIG:
Events: 10000
Workers: 8
Duration: 60s

cmd/stresstest/main.go

@@ -0,0 +1,634 @@
package main
import (
"bufio"
"bytes"
"context"
"flag"
"fmt"
"math/rand"
"os"
"os/signal"
"runtime"
"strings"
"sync"
"sync/atomic"
"time"
"lol.mleku.dev/log"
"next.orly.dev/pkg/crypto/p256k"
"next.orly.dev/pkg/encoders/envelopes/eventenvelope"
"next.orly.dev/pkg/encoders/event"
"next.orly.dev/pkg/encoders/event/examples"
"next.orly.dev/pkg/encoders/filter"
"next.orly.dev/pkg/encoders/hex"
"next.orly.dev/pkg/encoders/kind"
"next.orly.dev/pkg/encoders/tag"
"next.orly.dev/pkg/encoders/timestamp"
"next.orly.dev/pkg/protocol/ws"
)
// randomHex returns a hex-encoded string of n random bytes (2n hex chars)
func randomHex(n int) string {
b := make([]byte, n)
_, _ = rand.Read(b)
return hex.Enc(b)
}
func makeEvent(rng *rand.Rand, signer *p256k.Signer) (*event.E, error) {
ev := &event.E{
CreatedAt: time.Now().Unix(),
Kind: kind.TextNote.K,
Tags: tag.NewS(),
Content: []byte(fmt.Sprintf("stresstest %d", rng.Int63())),
}
// Random number of p-tags up to 100
nPTags := rng.Intn(101) // 0..100 inclusive
for i := 0; i < nPTags; i++ {
// random 32-byte pubkey in hex (64 chars)
phex := randomHex(32)
ev.Tags.Append(tag.NewFromAny("p", phex))
}
// Sign and verify to ensure pubkey, id and signature are coherent
if err := ev.Sign(signer); err != nil {
return nil, err
}
if ok, err := ev.Verify(); err != nil {
return nil, fmt.Errorf("event verification error: %w", err)
} else if !ok {
return nil, fmt.Errorf("event signature verification failed")
}
return ev, nil
}
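// RelayConn guards a ws.Client with an RWMutex so workers can share one
// handle while Reconnect swaps in a fresh connection after failures.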
type RelayConn struct {
mu sync.RWMutex
client *ws.Client
url string
}
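// CacheIndex holds the events parsed from examples.Cache plus flat slices of
// their ids, authors, timestamps, and single-letter tag values, from which
// buildRandomFilter samples query criteria.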
type CacheIndex struct {
events []*event.E
ids [][]byte
authors [][]byte
times []int64
tags map[byte][][]byte // single-letter tag -> list of values
}
func (rc *RelayConn) Get() *ws.Client {
rc.mu.RLock()
defer rc.mu.RUnlock()
return rc.client
}
func (rc *RelayConn) Reconnect(ctx context.Context) error {
rc.mu.Lock()
defer rc.mu.Unlock()
if rc.client != nil {
_ = rc.client.Close()
}
c, err := ws.RelayConnect(ctx, rc.url)
if err != nil {
return err
}
rc.client = c
return nil
}
// loadCacheAndIndex parses examples.Cache (JSONL of events) and builds an index
func loadCacheAndIndex() (*CacheIndex, error) {
scanner := bufio.NewScanner(bytes.NewReader(examples.Cache))
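// note: bufio.Scanner's default 64 KiB token limit aborts the scan on longer
// lines, and scanner.Err() is not checked below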
idx := &CacheIndex{tags: make(map[byte][][]byte)}
for scanner.Scan() {
line := scanner.Bytes()
if len(bytes.TrimSpace(line)) == 0 {
continue
}
ev := event.New()
if _, err := ev.Unmarshal(line); err != nil {
// skip malformed lines
continue
}
idx.events = append(idx.events, ev)
// collect fields
if len(ev.ID) > 0 {
idx.ids = append(idx.ids, append([]byte(nil), ev.ID...))
}
if len(ev.Pubkey) > 0 {
idx.authors = append(idx.authors, append([]byte(nil), ev.Pubkey...))
}
idx.times = append(idx.times, ev.CreatedAt)
if ev.Tags != nil {
for _, tg := range *ev.Tags {
if tg == nil || tg.Len() < 2 {
continue
}
k := tg.Key()
if len(k) != 1 {
continue // only single-letter keys per requirement
}
key := k[0]
for _, v := range tg.T[1:] {
idx.tags[key] = append(
idx.tags[key], append([]byte(nil), v...),
)
}
}
}
}
return idx, nil
}
// publishCacheEvents uploads all cache events to the relay using multiple concurrent connections
func publishCacheEvents(
ctx context.Context, relayURL string, idx *CacheIndex,
) (sentCount int) {
numWorkers := runtime.NumCPU()
log.I.F("using %d concurrent connections for cache upload", numWorkers)
// Channel to distribute events to workers
eventChan := make(chan *event.E, len(idx.events))
var totalSent atomic.Int64
// Fill the event channel
for _, ev := range idx.events {
eventChan <- ev
}
close(eventChan)
// Start worker goroutines
var wg sync.WaitGroup
for i := 0; i < numWorkers; i++ {
wg.Add(1)
go func(workerID int) {
defer wg.Done()
// Create separate connection for this worker
client, err := ws.RelayConnect(ctx, relayURL)
if err != nil {
log.E.F("worker %d: failed to connect: %v", workerID, err)
return
}
defer client.Close()
rc := &RelayConn{client: client, url: relayURL}
workerSent := 0
// Process events from the channel
for ev := range eventChan {
select {
case <-ctx.Done():
return
default:
}
// Get client connection
wsClient := rc.Get()
if wsClient == nil {
if err := rc.Reconnect(ctx); err != nil {
log.E.F("worker %d: reconnect failed: %v", workerID, err)
continue
}
wsClient = rc.Get()
}
// Send event without waiting for OK response (fire-and-forget)
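// note: a successful write only means the frame was flushed; OK responses
// are never read, so events the relay rejects still count as sent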
envelope := eventenvelope.NewSubmissionWith(ev)
envBytes := envelope.Marshal(nil)
if err := <-wsClient.Write(envBytes); err != nil {
log.E.F("worker %d: write error: %v", workerID, err)
errStr := err.Error()
if strings.Contains(errStr, "connection closed") {
_ = rc.Reconnect(ctx)
}
time.Sleep(50 * time.Millisecond)
continue
}
workerSent++
totalSent.Add(1)
log.T.F("worker %d: sent event %d (total: %d)", workerID, workerSent, totalSent.Load())
// Small delay to prevent overwhelming the relay
select {
case <-time.After(10 * time.Millisecond):
case <-ctx.Done():
return
}
}
log.I.F("worker %d: completed, sent %d events", workerID, workerSent)
}(i)
}
// Wait for all workers to complete
wg.Wait()
return int(totalSent.Load())
}
// buildRandomFilter builds a filter combining random subsets of id, author, timestamp, and a single-letter tag value.
func buildRandomFilter(idx *CacheIndex, rng *rand.Rand, mask int) *filter.F {
// pick a random base event as anchor for fields
i := rng.Intn(len(idx.events))
ev := idx.events[i]
f := filter.New()
// clear defaults we don't set
f.Kinds = kind.NewS() // we don't constrain kinds
// include fields based on mask bits: 1=id, 2=author, 4=timestamp, 8=tag
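// e.g. mask == 5 (binary 0101) yields an id+timestamp filter; mask == 15 combines all four criteria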
if mask&1 != 0 {
f.Ids.T = append(f.Ids.T, append([]byte(nil), ev.ID...))
}
if mask&2 != 0 {
f.Authors.T = append(f.Authors.T, append([]byte(nil), ev.Pubkey...))
}
if mask&4 != 0 {
// use a tight window around the event timestamp (exact match)
f.Since = timestamp.FromUnix(ev.CreatedAt)
f.Until = timestamp.FromUnix(ev.CreatedAt)
}
if mask&8 != 0 {
// choose a random single-letter tag from this event if present; fallback to global index
var key byte
var val []byte
chosen := false
if ev.Tags != nil {
for _, tg := range *ev.Tags {
if tg == nil || tg.Len() < 2 {
continue
}
k := tg.Key()
if len(k) == 1 {
key = k[0]
vv := tg.T[1:]
val = vv[rng.Intn(len(vv))]
chosen = true
break
}
}
}
if !chosen && len(idx.tags) > 0 {
// pick a random entry from global tags map
keys := make([]byte, 0, len(idx.tags))
for k := range idx.tags {
keys = append(keys, k)
}
key = keys[rng.Intn(len(keys))]
vals := idx.tags[key]
val = vals[rng.Intn(len(vals))]
}
if key != 0 && len(val) > 0 {
f.Tags.Append(tag.NewFromBytesSlice([]byte{key}, val))
}
}
return f
}
func publisherWorker(
ctx context.Context, rc *RelayConn, id int, stats *uint64,
) {
// Unique RNG per worker
src := rand.NewSource(time.Now().UnixNano() ^ int64(id<<16))
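// folding the worker id into the seed keeps workers started in the same nanosecond from producing identical streams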
rng := rand.New(src)
// Generate and reuse signing key per worker
signer := &p256k.Signer{}
if err := signer.Generate(); err != nil {
log.E.F("worker %d: signer generate error: %v", id, err)
return
}
for {
select {
case <-ctx.Done():
return
default:
}
ev, err := makeEvent(rng, signer)
if err != nil {
log.E.F("worker %d: makeEvent error: %v", id, err)
return
}
// Send event without waiting for OK response (fire-and-forget)
client := rc.Get()
if client == nil {
_ = rc.Reconnect(ctx)
continue
}
// Create EVENT envelope and send directly without waiting for OK
envelope := eventenvelope.NewSubmissionWith(ev)
envBytes := envelope.Marshal(nil)
if err := <-client.Write(envBytes); err != nil {
log.E.F("worker %d: write error: %v", id, err)
errStr := err.Error()
if strings.Contains(errStr, "connection closed") {
for attempt := 0; attempt < 5; attempt++ {
if ctx.Err() != nil {
return
}
if err := rc.Reconnect(ctx); err == nil {
log.I.F("worker %d: reconnected to %s", id, rc.url)
break
}
select {
case <-time.After(200 * time.Millisecond):
case <-ctx.Done():
return
}
}
}
// back off briefly on error to avoid tight loop if relay misbehaves
select {
case <-time.After(100 * time.Millisecond):
case <-ctx.Done():
return
}
continue
}
atomic.AddUint64(stats, 1)
// Randomly fluctuate pacing: small random sleep 0..50ms plus occasional longer jitter
sleep := time.Duration(rng.Intn(50)) * time.Millisecond
if rng.Intn(10) == 0 { // 10% chance add extra 100..400ms
sleep += time.Duration(100+rng.Intn(300)) * time.Millisecond
}
select {
case <-time.After(sleep):
case <-ctx.Done():
return
}
}
}
func queryWorker(
ctx context.Context, rc *RelayConn, idx *CacheIndex, id int,
queries, results *uint64, subTimeout time.Duration,
minInterval, maxInterval time.Duration,
) {
rng := rand.New(rand.NewSource(time.Now().UnixNano() ^ int64(id<<24)))
mask := 1
for {
select {
case <-ctx.Done():
return
default:
}
if len(idx.events) == 0 {
time.Sleep(200 * time.Millisecond)
continue
}
f := buildRandomFilter(idx, rng, mask)
mask++
if mask > 15 { // all combinations of 4 criteria (excluding 0)
mask = 1
}
client := rc.Get()
if client == nil {
_ = rc.Reconnect(ctx)
continue
}
ff := filter.S{f}
sCtx, cancel := context.WithTimeout(ctx, subTimeout)
sub, err := client.Subscribe(
sCtx, &ff, ws.WithLabel("stresstest-query"),
)
if err != nil {
cancel()
// reconnect on connection issues
errStr := err.Error()
if strings.Contains(errStr, "connection closed") {
_ = rc.Reconnect(ctx)
}
continue
}
atomic.AddUint64(queries, 1)
// read until EOSE or timeout
innerDone := false
for !innerDone {
select {
case <-sCtx.Done():
innerDone = true
case <-sub.EndOfStoredEvents:
innerDone = true
case ev, ok := <-sub.Events:
if !ok {
innerDone = true
break
}
if ev != nil {
atomic.AddUint64(results, 1)
}
}
}
sub.Unsub()
cancel()
// wait a random interval between queries
interval := minInterval
if maxInterval > minInterval {
delta := rng.Int63n(int64(maxInterval - minInterval))
interval += time.Duration(delta)
}
select {
case <-time.After(interval):
case <-ctx.Done():
return
}
}
}
func startReader(ctx context.Context, rl *ws.Client, received *uint64) error {
// Broad filter: subscribe to all text notes so we catch our own writes (no Since bound is set)
f := filter.New()
f.Kinds = kind.NewS(kind.TextNote)
// We don't set authors to ensure we read all text notes coming in
ff := filter.S{f}
sub, err := rl.Subscribe(ctx, &ff, ws.WithLabel("stresstest-reader"))
if err != nil {
return err
}
go func() {
for {
select {
case <-ctx.Done():
return
case ev, ok := <-sub.Events:
if !ok {
return
}
if ev != nil {
atomic.AddUint64(received, 1)
}
}
}
}()
return nil
}
func main() {
var (
address string
port int
workers int
duration time.Duration
publishTimeout time.Duration
queryWorkers int
queryTimeout time.Duration
queryMinInt time.Duration
queryMaxInt time.Duration
skipCache bool
)
flag.StringVar(
&address, "address", "localhost", "relay address (host or IP)",
)
flag.IntVar(&port, "port", 3334, "relay port")
flag.IntVar(
&workers, "workers", 8, "number of concurrent publisher workers",
)
flag.DurationVar(
&duration, "duration", 60*time.Second,
"how long to run the stress test",
)
flag.DurationVar(
&publishTimeout, "publish-timeout", 15*time.Second,
"timeout waiting for OK per publish",
)
flag.IntVar(
&queryWorkers, "query-workers", 4, "number of concurrent query workers",
)
flag.DurationVar(
&queryTimeout, "query-timeout", 3*time.Second,
"subscription timeout for queries",
)
flag.DurationVar(
&queryMinInt, "query-min-interval", 50*time.Millisecond,
"minimum interval between queries per worker",
)
flag.DurationVar(
&queryMaxInt, "query-max-interval", 300*time.Millisecond,
"maximum interval between queries per worker",
)
flag.BoolVar(
&skipCache, "skip-cache", false,
"skip uploading examples.Cache before running",
)
flag.Parse()
relayURL := fmt.Sprintf("ws://%s:%d", address, port)
log.I.F("stresstest: connecting to %s", relayURL)
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
// Handle Ctrl+C
sigc := make(chan os.Signal, 1)
signal.Notify(sigc, os.Interrupt)
go func() {
select {
case <-sigc:
log.I.Ln("interrupt received, shutting down...")
cancel()
case <-ctx.Done():
}
}()
rl, err := ws.RelayConnect(ctx, relayURL)
if err != nil {
log.E.F("failed to connect to relay %s: %v", relayURL, err)
os.Exit(1)
}
defer rl.Close()
rc := &RelayConn{client: rl, url: relayURL}
// Load and publish cache events first (unless skipped)
idx, err := loadCacheAndIndex()
if err != nil {
log.E.F("failed to load examples.Cache: %v", err)
}
cacheSent := 0
if !skipCache && idx != nil && len(idx.events) > 0 {
log.I.F("sending %d events from examples.Cache...", len(idx.events))
cacheSent = publishCacheEvents(ctx, relayURL, idx)
log.I.F("sent %d/%d cache events", cacheSent, len(idx.events))
}
var pubOK uint64
var recvCount uint64
var qCount uint64
var qResults uint64
if err := startReader(ctx, rl, &recvCount); err != nil {
log.E.F("reader subscribe error: %v", err)
// continue anyway, we can still write
}
wg := sync.WaitGroup{}
// Start publisher workers
wg.Add(workers)
for i := 0; i < workers; i++ {
i := i
go func() {
defer wg.Done()
publisherWorker(ctx, rc, i, &pubOK)
}()
}
// Start query workers
if idx != nil && len(idx.events) > 0 && queryWorkers > 0 {
wg.Add(queryWorkers)
for i := 0; i < queryWorkers; i++ {
i := i
go func() {
defer wg.Done()
queryWorker(
ctx, rc, idx, i, &qCount, &qResults, queryTimeout,
queryMinInt, queryMaxInt,
)
}()
}
}
// Timer for duration and periodic stats
ticker := time.NewTicker(2 * time.Second)
defer ticker.Stop()
end := time.NewTimer(duration)
start := time.Now()
loop:
for {
select {
case <-ticker.C:
elapsed := time.Since(start).Seconds()
p := atomic.LoadUint64(&pubOK)
r := atomic.LoadUint64(&recvCount)
qc := atomic.LoadUint64(&qCount)
qr := atomic.LoadUint64(&qResults)
log.I.F(
"elapsed=%.1fs sent=%d (%.0f/s) received=%d cache_sent=%d queries=%d results=%d",
elapsed, p, float64(p)/elapsed, r, cacheSent, qc, qr,
)
case <-end.C:
break loop
case <-ctx.Done():
break loop
}
}
cancel()
wg.Wait()
p := atomic.LoadUint64(&pubOK)
r := atomic.LoadUint64(&recvCount)
qc := atomic.LoadUint64(&qCount)
qr := atomic.LoadUint64(&qResults)
log.I.F(
"stresstest complete: cache_sent=%d sent=%d received=%d queries=%d results=%d duration=%s",
cacheSent, p, r, qc, qr,
time.Since(start).Truncate(time.Millisecond),
)
}
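
A plausible invocation, using only the flags declared in main above with their default values (shown purely as an illustration):

go run ./cmd/stresstest -address localhost -port 3334 -workers 8 -duration 60s -query-workers 4 -query-timeout 3s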

go.mod

@@ -19,7 +19,7 @@ require (
golang.org/x/lint v0.0.0-20241112194109-818c5a804067
golang.org/x/net v0.43.0
honnef.co/go/tools v0.6.1
lol.mleku.dev v1.0.2
lol.mleku.dev v1.0.3
lukechampine.com/frand v1.5.1
)
@@ -28,15 +28,12 @@ require (
github.com/cespare/xxhash/v2 v2.3.0 // indirect
github.com/dgraph-io/ristretto/v2 v2.2.0 // indirect
github.com/dustin/go-humanize v1.0.1 // indirect
github.com/fatih/color v1.18.0 // indirect
github.com/felixge/fgprof v0.9.3 // indirect
github.com/go-logr/logr v1.4.3 // indirect
github.com/go-logr/stdr v1.2.2 // indirect
github.com/google/flatbuffers v25.2.10+incompatible // indirect
github.com/google/pprof v0.0.0-20211214055906-6f57359322fd // indirect
github.com/klauspost/compress v1.18.0 // indirect
github.com/mattn/go-colorable v0.1.14 // indirect
github.com/mattn/go-isatty v0.0.20 // indirect
github.com/pmezard/go-difflib v1.0.0 // indirect
github.com/templexxx/cpu v0.0.1 // indirect
go.opentelemetry.io/auto/sdk v1.1.0 // indirect

go.sum

@@ -20,8 +20,6 @@ github.com/dgryski/go-farm v0.0.0-20240924180020-3414d57e47da h1:aIftn67I1fkbMa5
github.com/dgryski/go-farm v0.0.0-20240924180020-3414d57e47da/go.mod h1:SqUrOPUnsFjfmXRMNPybcSiG0BgUW2AuFH8PAnS2iTw=
github.com/dustin/go-humanize v1.0.1 h1:GzkhY7T5VNhEkwH0PVJgjz+fX1rhBrR7pRT3mDkpeCY=
github.com/dustin/go-humanize v1.0.1/go.mod h1:Mu1zIs6XwVuF/gI1OepvI0qD18qycQx+mFykh5fBlto=
github.com/fatih/color v1.18.0 h1:S8gINlzdQ840/4pfAwic/ZE0djQEH3wM94VfqLTZcOM=
github.com/fatih/color v1.18.0/go.mod h1:4FelSpRwEGDpQ12mAdzqdOukCy4u8WUtOY6lkT/6HfU=
github.com/felixge/fgprof v0.9.3 h1:VvyZxILNuCiUCSXtPtYmmtGvb65nqXh2QFWc0Wpf2/g=
github.com/felixge/fgprof v0.9.3/go.mod h1:RdbpDgzqYVh/T9fPELJyV7EYJuHB55UTEULNun8eiPw=
github.com/go-logr/logr v1.2.2/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
@@ -46,10 +44,6 @@ github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/mattn/go-colorable v0.1.14 h1:9A9LHSqF/7dyVVX6g0U9cwm9pG3kP9gSzcuIPHPsaIE=
github.com/mattn/go-colorable v0.1.14/go.mod h1:6LmQG8QLFO4G5z1gPvYEzlUgJ2wF+stgPZH1UqBm1s8=
github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY=
github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
github.com/pkg/profile v1.7.0 h1:hnbDkaNWPCLMO9wGLdBFTIZvzDrDfBM2072E1S9gJkA=
github.com/pkg/profile v1.7.0/go.mod h1:8Uer0jas47ZQMJ7VD+OHknK4YDY07LPUC6dEvqDjvNo=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
@@ -101,7 +95,6 @@ golang.org/x/sync v0.16.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20211007075335-d3039528d8ac/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.35.0 h1:vz1N37gP5bs89s7He8XuIYXpyY0+QlsKmzipCbUtyxI=
golang.org/x/sys v0.35.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
@@ -121,7 +114,7 @@ gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
honnef.co/go/tools v0.6.1 h1:R094WgE8K4JirYjBaOpz/AvTyUu/3wbmAoskKN/pxTI=
honnef.co/go/tools v0.6.1/go.mod h1:3puzxxljPCe8RGJX7BIy1plGbxEOZni5mR2aXe3/uk4=
lol.mleku.dev v1.0.2 h1:bSV1hHnkmt1hq+9nSvRwN6wgcI7itbM3XRZ4dMB438c=
lol.mleku.dev v1.0.2/go.mod h1:DQ0WnmkntA9dPLCXgvtIgYt5G0HSqx3wSTLolHgWeLA=
lol.mleku.dev v1.0.3 h1:IrqLd/wFRghu6MX7mgyKh//3VQiId2AM4RdCbFqSLnY=
lol.mleku.dev v1.0.3/go.mod h1:DQ0WnmkntA9dPLCXgvtIgYt5G0HSqx3wSTLolHgWeLA=
lukechampine.com/frand v1.5.1 h1:fg0eRtdmGFIxhP5zQJzM1lFDbD6CUfu/f+7WgAZd5/w=
lukechampine.com/frand v1.5.1/go.mod h1:4VstaWc2plN4Mjr10chUD46RAVGWhpkZ5Nja8+Azp0Q=

179
main.go
View File

@@ -4,7 +4,9 @@ import (
"context"
"fmt"
"net/http"
pp "net/http/pprof"
"os"
"os/exec"
"os/signal"
"runtime"
"time"
@@ -19,6 +21,31 @@ import (
"next.orly.dev/pkg/version"
)
// openBrowser attempts to open the specified URL in the default browser.
// It supports multiple platforms including Linux, macOS, and Windows.
func openBrowser(url string) {
var err error
switch runtime.GOOS {
case "linux":
err = exec.Command("xdg-open", url).Start()
case "windows":
err = exec.Command(
"rundll32", "url.dll,FileProtocolHandler", url,
).Start()
case "darwin":
err = exec.Command("open", url).Start()
default:
log.W.F("unsupported platform for opening browser: %s", runtime.GOOS)
return
}
if err != nil {
log.E.F("failed to open browser: %v", err)
} else {
log.I.F("opened browser to %s", url)
}
}
func main() {
runtime.GOMAXPROCS(runtime.NumCPU() * 4)
var err error
@@ -26,16 +53,97 @@ func main() {
if cfg, err = config.New(); chk.T(err) {
}
log.I.F("starting %s %s", cfg.AppName, version.V)
// If OpenPprofWeb is true and profiling is enabled, we need to ensure HTTP profiling is also enabled
if cfg.OpenPprofWeb && cfg.Pprof != "" && !cfg.PprofHTTP {
log.I.F("enabling HTTP pprof server to support web viewer")
cfg.PprofHTTP = true
}
switch cfg.Pprof {
case "cpu":
prof := profile.Start(profile.CPUProfile)
defer prof.Stop()
if cfg.PprofPath != "" {
prof := profile.Start(
profile.CPUProfile, profile.ProfilePath(cfg.PprofPath),
)
defer prof.Stop()
} else {
prof := profile.Start(profile.CPUProfile)
defer prof.Stop()
}
case "memory":
prof := profile.Start(profile.MemProfile)
defer prof.Stop()
if cfg.PprofPath != "" {
prof := profile.Start(
profile.MemProfile, profile.MemProfileRate(32),
profile.ProfilePath(cfg.PprofPath),
)
defer prof.Stop()
} else {
prof := profile.Start(profile.MemProfile)
defer prof.Stop()
}
case "allocation":
prof := profile.Start(profile.MemProfileAllocs)
defer prof.Stop()
if cfg.PprofPath != "" {
prof := profile.Start(
profile.MemProfileAllocs, profile.MemProfileRate(32),
profile.ProfilePath(cfg.PprofPath),
)
defer prof.Stop()
} else {
prof := profile.Start(profile.MemProfileAllocs)
defer prof.Stop()
}
case "heap":
if cfg.PprofPath != "" {
prof := profile.Start(
profile.MemProfileHeap, profile.ProfilePath(cfg.PprofPath),
)
defer prof.Stop()
} else {
prof := profile.Start(profile.MemProfileHeap)
defer prof.Stop()
}
case "mutex":
if cfg.PprofPath != "" {
prof := profile.Start(
profile.MutexProfile, profile.ProfilePath(cfg.PprofPath),
)
defer prof.Stop()
} else {
prof := profile.Start(profile.MutexProfile)
defer prof.Stop()
}
case "threadcreate":
if cfg.PprofPath != "" {
prof := profile.Start(
profile.ThreadcreationProfile,
profile.ProfilePath(cfg.PprofPath),
)
defer prof.Stop()
} else {
prof := profile.Start(profile.ThreadcreationProfile)
defer prof.Stop()
}
case "goroutine":
if cfg.PprofPath != "" {
prof := profile.Start(
profile.GoroutineProfile, profile.ProfilePath(cfg.PprofPath),
)
defer prof.Stop()
} else {
prof := profile.Start(profile.GoroutineProfile)
defer prof.Stop()
}
case "block":
if cfg.PprofPath != "" {
prof := profile.Start(
profile.BlockProfile, profile.ProfilePath(cfg.PprofPath),
)
defer prof.Stop()
} else {
prof := profile.Start(profile.BlockProfile)
defer prof.Stop()
}
}
ctx, cancel := context.WithCancel(context.Background())
var db *database.D
@@ -50,6 +158,47 @@ func main() {
}
acl.Registry.Syncer()
// Start HTTP pprof server if enabled
if cfg.PprofHTTP {
pprofAddr := fmt.Sprintf("%s:%d", cfg.Listen, 6060)
pprofMux := http.NewServeMux()
pprofMux.HandleFunc("/debug/pprof/", pp.Index)
pprofMux.HandleFunc("/debug/pprof/cmdline", pp.Cmdline)
pprofMux.HandleFunc("/debug/pprof/profile", pp.Profile)
pprofMux.HandleFunc("/debug/pprof/symbol", pp.Symbol)
pprofMux.HandleFunc("/debug/pprof/trace", pp.Trace)
for _, p := range []string{
"allocs", "block", "goroutine", "heap", "mutex", "threadcreate",
} {
pprofMux.Handle("/debug/pprof/"+p, pp.Handler(p))
}
ppSrv := &http.Server{Addr: pprofAddr, Handler: pprofMux}
go func() {
log.I.F("pprof server listening on %s", pprofAddr)
if err := ppSrv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
log.E.F("pprof server error: %v", err)
}
}()
go func() {
<-ctx.Done()
shutdownCtx, cancelShutdown := context.WithTimeout(
context.Background(), 2*time.Second,
)
defer cancelShutdown()
_ = ppSrv.Shutdown(shutdownCtx)
}()
// Open the pprof web viewer if enabled
if cfg.OpenPprofWeb && cfg.Pprof != "" {
pprofURL := "http://localhost:6060/debug/pprof/"
go func() {
// Wait a moment for the server to start
time.Sleep(500 * time.Millisecond)
openBrowser(pprofURL)
}()
}
}
// Start health check HTTP server if configured
var healthSrv *http.Server
if cfg.HealthPort > 0 {
@@ -61,6 +210,20 @@ func main() {
log.I.F("health check ok")
},
)
// Optional shutdown endpoint to gracefully stop the process so profiling defers run
if cfg.EnableShutdown {
mux.HandleFunc(
"/shutdown", func(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(http.StatusOK)
_, _ = w.Write([]byte("shutting down"))
log.I.F("shutdown requested via /shutdown; sending SIGINT to self")
go func() {
p, _ := os.FindProcess(os.Getpid())
_ = p.Signal(os.Interrupt)
}()
},
)
}
healthSrv = &http.Server{
Addr: fmt.Sprintf(
"%s:%d", cfg.Listen, cfg.HealthPort,
@@ -91,12 +254,10 @@ func main() {
fmt.Printf("\r")
cancel()
chk.E(db.Close())
return
case <-quit:
cancel()
chk.E(db.Close())
return
}
}
log.I.F("exiting")
}
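Once the HTTP pprof server above is running, any of the registered endpoints can be scraped with a plain HTTP client; a minimal sketch, assuming the hardcoded localhost:6060 address used above:
package main
import (
	"io"
	"net/http"
	"os"
)
func main() {
	// Fetch a human-readable goroutine dump from the pprof mux set up in main.
	resp, err := http.Get("http://localhost:6060/debug/pprof/goroutine?debug=1")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	io.Copy(os.Stdout, resp.Body)
}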

View File

@@ -208,6 +208,7 @@ func (f *Follows) startSubscriptions(ctx context.Context) {
return
}
urls := f.adminRelays()
log.I.S(urls)
if len(urls) == 0 {
log.W.F("follows syncer: no admin relays found in DB (kind 10002)")
return

View File

@@ -2,6 +2,7 @@ package database
import (
"bytes"
"fmt"
"github.com/dgraph-io/badger/v4"
"lol.mleku.dev/chk"
@@ -25,8 +26,23 @@ func (d *D) FetchEventBySerial(ser *types.Uint40) (ev *event.E, err error) {
if v, err = item.ValueCopy(nil); chk.E(err) {
return
}
// Check if we have valid data before attempting to unmarshal
if len(v) < 32+32+1+2+1+1+64 { // ID + Pubkey + min varint fields + Sig
err = fmt.Errorf(
"incomplete event data: got %d bytes, expected at least %d",
len(v), 32+32+1+2+1+1+64,
)
return
}
ev = new(event.E)
if err = ev.UnmarshalBinary(bytes.NewBuffer(v)); chk.E(err) {
if err = ev.UnmarshalBinary(bytes.NewBuffer(v)); err != nil {
// Add more context to EOF errors for debugging
if err.Error() == "EOF" {
err = fmt.Errorf(
"EOF while unmarshaling event (serial=%v, data_len=%d): %w",
ser, len(v), err,
)
}
return
}
return
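For reference, the size floor checked above decomposes as 32 (ID) + 32 (pubkey) + 64 (signature) = 128 fixed bytes plus 1+2+1+1 = 5 bytes of minimally encoded varint fields, i.e. 133 bytes. Which varint belongs to which field is not spelled out in the code, so the named breakdown below is an assumption:
const (
	idLen       = 32            // event ID hash
	pubkeyLen   = 32            // author public key
	sigLen      = 64            // signature
	varintsMin  = 1 + 2 + 1 + 1 // minimal encodings of the remaining varint fields (assumed split)
	minEventLen = idLen + pubkeyLen + sigLen + varintsMin // = 133 bytes
)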

View File

@@ -0,0 +1,68 @@
package database
import (
"bytes"
"github.com/dgraph-io/badger/v4"
"lol.mleku.dev/chk"
"next.orly.dev/pkg/database/indexes"
"next.orly.dev/pkg/database/indexes/types"
"next.orly.dev/pkg/encoders/event"
)
// FetchEventsBySerials fetches multiple events by their serials in a single database transaction.
// Returns a map of serial uint64 value to event, only including successfully fetched events.
func (d *D) FetchEventsBySerials(serials []*types.Uint40) (events map[uint64]*event.E, err error) {
events = make(map[uint64]*event.E)
if len(serials) == 0 {
return events, nil
}
if err = d.View(
func(txn *badger.Txn) (err error) {
for _, ser := range serials {
buf := new(bytes.Buffer)
if err = indexes.EventEnc(ser).MarshalWrite(buf); chk.E(err) {
// Skip this serial on error but continue with others
continue
}
var item *badger.Item
if item, err = txn.Get(buf.Bytes()); err != nil {
// Skip this serial if not found but continue with others
err = nil
continue
}
var v []byte
if v, err = item.ValueCopy(nil); chk.E(err) {
// Skip this serial on error but continue with others
err = nil
continue
}
// Check if we have valid data before attempting to unmarshal
if len(v) < 32+32+1+2+1+1+64 { // ID + Pubkey + min varint fields + Sig
// Skip this serial - incomplete data
continue
}
ev := new(event.E)
if err = ev.UnmarshalBinary(bytes.NewBuffer(v)); err != nil {
// Skip this serial on unmarshal error but continue with others
err = nil
continue
}
// Successfully unmarshaled event, add to results
events[ser.Get()] = ev
}
return nil
},
); err != nil {
return
}
return events, nil
}
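A usage sketch for the batch fetch; the caller and its names are hypothetical:
// fetchAll resolves a set of serials to events in one transaction. Records
// that are missing or fail to decode are skipped by FetchEventsBySerials,
// so the returned map may be smaller than the input slice.
func fetchAll(d *D, serials []*types.Uint40) {
	events, err := d.FetchEventsBySerials(serials)
	if err != nil {
		return
	}
	for ser, ev := range events {
		log.I.F("serial=%d id=%x kind=%d", ser, ev.ID, ev.Kind)
	}
}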

View File

@@ -362,7 +362,6 @@ func GetIndexesFromFilter(f *filter.F) (idxs []Range, err error) {
if f.Authors != nil && f.Authors.Len() > 0 {
for _, author := range f.Authors.T {
var p *types2.PubHash
log.I.S(author)
if p, err = CreatePubHashFromData(author); chk.E(err) {
return
}

View File

@@ -8,6 +8,7 @@ import (
"lol.mleku.dev/errorf"
"lol.mleku.dev/log"
"next.orly.dev/pkg/database/indexes/types"
"next.orly.dev/pkg/encoders/event"
"next.orly.dev/pkg/encoders/filter"
"next.orly.dev/pkg/encoders/hex"
"next.orly.dev/pkg/encoders/tag"
@@ -64,6 +65,99 @@ func (d *D) GetSerialById(id []byte) (ser *types.Uint40, err error) {
return
}
// GetSerialsByIds takes a tag.T containing multiple IDs and returns a map of IDs to their
// corresponding serial numbers. It directly queries the IdPrefix index for matching IDs,
// which is more efficient than using GetIndexesFromFilter.
func (d *D) GetSerialsByIds(ids *tag.T) (
serials map[string]*types.Uint40, err error,
) {
return d.GetSerialsByIdsWithFilter(ids, nil)
}
// GetSerialsByIdsWithFilter takes a tag.T containing multiple IDs and returns a
// map of IDs to their corresponding serial numbers, applying a filter function
// to each event. The function directly creates ID index prefixes for efficient querying.
func (d *D) GetSerialsByIdsWithFilter(
ids *tag.T, fn func(ev *event.E, ser *types.Uint40) bool,
) (serials map[string]*types.Uint40, err error) {
log.T.F("GetSerialsByIdsWithFilter: input ids count=%d", ids.Len())
// Initialize the result map
serials = make(map[string]*types.Uint40)
// Return early if no IDs are provided
if ids.Len() == 0 {
return
}
// Process all IDs in a single transaction
if err = d.View(
func(txn *badger.Txn) (err error) {
it := txn.NewIterator(badger.DefaultIteratorOptions)
defer it.Close()
// Process each ID sequentially
for _, id := range ids.T {
// idHex := hex.Enc(id)
// Get the index prefix for this ID
var idxs []Range
if idxs, err = GetIndexesFromFilter(&filter.F{Ids: tag.NewFromBytesSlice(id)}); chk.E(err) {
// Skip this ID if we can't create its index
continue
}
// Skip if no index was created
if len(idxs) == 0 {
continue
}
// Seek to the start of this ID's range in the database
it.Seek(idxs[0].Start)
if it.ValidForPrefix(idxs[0].Start) {
// Found an entry for this ID
item := it.Item()
key := item.Key()
// Extract the serial number from the key
ser := new(types.Uint40)
buf := bytes.NewBuffer(key[len(key)-5:])
if err = ser.UnmarshalRead(buf); chk.E(err) {
continue
}
// If a filter function is provided, fetch the event and apply the filter
if fn != nil {
var ev *event.E
if ev, err = d.FetchEventBySerial(ser); err != nil {
// Skip this event if we can't fetch it
continue
}
// Apply the filter
if !fn(ev, ser) {
// Skip this event if it doesn't pass the filter
continue
}
}
// Store the serial in the result map, keyed by the raw ID bytes
serials[string(id)] = ser
}
}
return
},
); chk.E(err) {
return
}
log.T.F(
"GetSerialsByIdsWithFilter: found %d serials out of %d requested ids",
len(serials), ids.Len(),
)
return
}
// func (d *D) GetSerialBytesById(id []byte) (ser []byte, err error) {
// var idxs []Range
// if idxs, err = GetIndexesFromFilter(&filter.F{Ids: tag.New(id)}); chk.E(err) {
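A sketch of the filtered batch lookup; the helper below is hypothetical and relies only on signatures shown in this diff (ContainsAny tolerates nil tags per the change in tags.go further down):
// serialsMentioning resolves IDs to serials, keeping only events that carry
// a "p" tag matching the given pubkey.
func serialsMentioning(d *D, ids *tag.T, pubkey []byte) (map[string]*types.Uint40, error) {
	return d.GetSerialsByIdsWithFilter(
		ids, func(ev *event.E, ser *types.Uint40) bool {
			return ev.Tags.ContainsAny([]byte("p"), [][]byte{pubkey})
		},
	)
}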

View File

@@ -48,9 +48,11 @@ func TestGetSerialById(t *testing.T) {
// Unmarshal the event
if _, err = ev.Unmarshal(b); chk.E(err) {
ev.Free()
t.Fatal(err)
}
ev.Free()
events = append(events, ev)
// Save the event to the database

View File

@@ -55,8 +55,10 @@ func TestGetSerialsByRange(t *testing.T) {
// Unmarshal the event
if _, err = ev.Unmarshal(b); chk.E(err) {
ev.Free()
t.Fatal(err)
}
ev.Free()
events = append(events, ev)

View File

@@ -5,7 +5,6 @@ import (
"context"
"sort"
"strconv"
"strings"
"time"
"lol.mleku.dev/chk"
@@ -43,73 +42,63 @@ func (d *D) QueryEvents(c context.Context, f *filter.F) (
var expDeletes types.Uint40s
var expEvs event.S
if f.Ids != nil && f.Ids.Len() > 0 {
// for _, id := range f.Ids.T {
// log.T.F("QueryEvents: looking for ID=%s", hex.Enc(id))
// }
// log.T.F("QueryEvents: ids path, count=%d", f.Ids.Len())
for _, idx := range f.Ids.T {
// log.T.F("QueryEvents: lookup id=%s", hex.Enc(idx))
// we know there is only Ids in this, so run the ID query and fetch.
var ser *types.Uint40
var idErr error
if ser, idErr = d.GetSerialById(idx); idErr != nil {
// Check if this is a "not found" error which is expected for IDs we don't have
if strings.Contains(idErr.Error(), "id not found in database") {
// log.T.F(
// "QueryEvents: ID not found in database: %s",
// hex.Enc(idx),
// )
} else {
// Log unexpected errors but continue processing other IDs
// log.E.F(
// "QueryEvents: error looking up id=%s err=%v",
// hex.Enc(idx), idErr,
// )
}
// Get all serials for the requested IDs in a single batch operation
log.T.F("QueryEvents: ids path, count=%d", f.Ids.Len())
// Use GetSerialsByIds to batch process all IDs at once
serials, idErr := d.GetSerialsByIds(f.Ids)
if idErr != nil {
log.E.F("QueryEvents: error looking up ids: %v", idErr)
// Continue with whatever IDs we found
}
// Convert serials map to slice for batch fetch
var serialsSlice []*types.Uint40
serialToIdHex := make(map[uint64]string) // Map serial value back to the original ID key
for idHex, ser := range serials {
serialsSlice = append(serialsSlice, ser)
idHexToSerial[ser.Get()] = idHex
}
// Fetch all events in a single batch operation
var fetchedEvents map[uint64]*event.E
if fetchedEvents, err = d.FetchEventsBySerials(serialsSlice); err != nil {
log.E.F("QueryEvents: batch fetch failed: %v", err)
return
}
// Process each successfully fetched event and apply filters
for serialValue, ev := range fetchedEvents {
idHex := serialToIdHex[serialValue]
// Convert serial value back to Uint40 for expiration handling
ser := new(types.Uint40)
if err = ser.Set(serialValue); err != nil {
log.T.F("QueryEvents: error converting serial %d: %v", serialValue, err)
continue
}
// Check if the serial is nil, which indicates the ID wasn't found
if ser == nil {
// log.T.F("QueryEvents: Serial is nil for ID: %s", hex.Enc(idx))
continue
}
// fetch the events
var ev *event.E
if ev, err = d.FetchEventBySerial(ser); err != nil {
// log.T.F(
// "QueryEvents: fetch by serial failed for id=%s ser=%v err=%v",
// hex.Enc(idx), ser, err,
// )
continue
}
// log.T.F(
// "QueryEvents: found id=%s kind=%d created_at=%d",
// hex.Enc(ev.ID), ev.Kind, ev.CreatedAt,
// )
// check for an expiration tag and delete after returning the result
if CheckExpiration(ev) {
log.T.F(
"QueryEvents: id=%s filtered out due to expiration",
hex.Enc(ev.ID),
"QueryEvents: id=%s filtered out due to expiration", idHex,
)
expDeletes = append(expDeletes, ser)
expEvs = append(expEvs, ev)
continue
}
// skip events that have been deleted by a proper deletion event
if derr := d.CheckForDeleted(ev, nil); derr != nil {
// log.T.F(
// "QueryEvents: id=%s filtered out due to deletion: %v",
// hex.Enc(ev.ID), derr,
// )
// log.T.F("QueryEvents: id=%s filtered out due to deletion: %v", idHex, derr)
continue
}
// log.T.F(
// "QueryEvents: id=%s SUCCESSFULLY FOUND, adding to results",
// hex.Enc(ev.ID),
// )
// Add the event to the results
evs = append(evs, ev)
// log.T.F("QueryEvents: id=%s SUCCESSFULLY FOUND, adding to results", idHex)
}
// sort the events by timestamp
sort.Slice(
evs, func(i, j int) bool {
@@ -159,16 +148,33 @@ func (d *D) QueryEvents(c context.Context, f *filter.F) (
// Add deletion events to the list of events to process
idPkTs = append(idPkTs, deletionIdPkTs...)
}
// First pass: collect all deletion events
// Prepare serials for batch fetch
var allSerials []*types.Uint40
serialToIdPk := make(map[uint64]*store.IdPkTs)
for _, idpk := range idPkTs {
var ev *event.E
ser := new(types.Uint40)
if err = ser.Set(idpk.Ser); chk.E(err) {
if err = ser.Set(idpk.Ser); err != nil {
continue
}
if ev, err = d.FetchEventBySerial(ser); err != nil {
allSerials = append(allSerials, ser)
serialToIdPk[ser.Get()] = idpk
}
// Fetch all events in batch
var allEvents map[uint64]*event.E
if allEvents, err = d.FetchEventsBySerials(allSerials); err != nil {
log.E.F("QueryEvents: batch fetch failed in non-IDs path: %v", err)
return
}
// First pass: collect all deletion events
for serialValue, ev := range allEvents {
// Convert serial value back to Uint40 for expiration handling
ser := new(types.Uint40)
if err = ser.Set(serialValue); err != nil {
continue
}
// check for an expiration tag and delete after returning the result
if CheckExpiration(ev) {
expDeletes = append(expDeletes, ser)
@@ -292,15 +298,7 @@ func (d *D) QueryEvents(c context.Context, f *filter.F) (
}
}
// Second pass: process all events, filtering out deleted ones
for _, idpk := range idPkTs {
var ev *event.E
ser := new(types.Uint40)
if err = ser.Set(idpk.Ser); chk.E(err) {
continue
}
if ev, err = d.FetchEventBySerial(ser); err != nil {
continue
}
for _, ev := range allEvents {
// Add logging for tag filter debugging
if f.Tags != nil && f.Tags.Len() > 0 {
// var eventTags []string
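Both batched paths are exercised through the public query entry point; a minimal caller sketch (return types assumed from the usage in the hunks above):
// eventsByIds queries the IDs fast path introduced above.
func eventsByIds(ctx context.Context, d *D, ids ...[]byte) (event.S, error) {
	return d.QueryEvents(ctx, &filter.F{Ids: tag.NewFromBytesSlice(ids...)})
}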

View File

@@ -56,8 +56,10 @@ func setupTestDB(t *testing.T) (
// Unmarshal the event
if _, err = ev.Unmarshal(b); chk.E(err) {
ev.Free()
t.Fatal(err)
}
ev.Free()
events = append(events, ev)

View File

@@ -173,10 +173,10 @@ func (d *D) CheckForDeleted(ev *event.E, admins [][]byte) (err error) {
}
}
if ev.CreatedAt < maxTs {
err = fmt.Errorf(
"blocked: was deleted by address %s: event is older than the delete: event: %d delete: %d",
at, ev.CreatedAt, maxTs,
)
// err = fmt.Errorf(
// "blocked: was deleted by address %s: event is older than the delete: event: %d delete: %d",
// at, ev.CreatedAt, maxTs,
// )
return
}
return
@@ -205,20 +205,20 @@ func (d *D) CheckForDeleted(ev *event.E, admins [][]byte) (err error) {
if len(s) > 0 {
// For e-tag deletions (delete by ID), any deletion event means the event cannot be resubmitted
// regardless of timestamp, since it's a specific deletion of this exact event
err = errorf.E(
"blocked: was deleted by ID and cannot be resubmitted",
// ev.ID,
)
// err = errorf.E(
// "blocked: was deleted by ID and cannot be resubmitted",
// // ev.ID,
// )
return
}
}
if len(sers) > 0 {
// For e-tag deletions (delete by ID), any deletion event means the event cannot be resubmitted
// regardless of timestamp, since it's a specific deletion of this exact event
err = errorf.E(
"blocked: was deleted by ID and cannot be resubmitted",
// ev.ID,
)
// err = errorf.E(
// "blocked: was deleted by ID and cannot be resubmitted",
// // ev.ID,
// )
return
}

View File

@@ -17,11 +17,12 @@ func (d *D) QueryForSerials(c context.Context, f *filter.F) (
var founds []*types.Uint40
var idPkTs []*store.IdPkTs
if f.Ids != nil && f.Ids.Len() > 0 {
for _, id := range f.Ids.T {
var ser *types.Uint40
if ser, err = d.GetSerialById(id); chk.E(err) {
return
}
// Use batch lookup to minimize transactions when resolving IDs to serials
var serialMap map[string]*types.Uint40
if serialMap, err = d.GetSerialsByIds(f.Ids); chk.E(err) {
return
}
for _, ser := range serialMap {
founds = append(founds, ser)
}
var tmp []*store.IdPkTs

View File

@@ -11,7 +11,6 @@ import (
"next.orly.dev/pkg/encoders/event"
"next.orly.dev/pkg/encoders/text"
"next.orly.dev/pkg/interfaces/codec"
"next.orly.dev/pkg/utils/bufpool"
"next.orly.dev/pkg/utils/constraints"
"next.orly.dev/pkg/utils/units"
)
@@ -75,8 +74,8 @@ func (en *Submission) Unmarshal(b []byte) (r []byte, err error) {
if r, err = en.E.Unmarshal(r); chk.T(err) {
return
}
buf := bufpool.Get()
r = en.E.Marshal(buf)
// after parsing the event object, r points just after the event JSON
// now skip to the end of the envelope (consume comma/closing bracket etc.)
if r, err = envelopes.SkipToTheEnd(r); chk.E(err) {
return
}

View File

@@ -15,13 +15,10 @@ import (
"next.orly.dev/pkg/encoders/tag"
"next.orly.dev/pkg/encoders/text"
"next.orly.dev/pkg/utils"
"next.orly.dev/pkg/utils/bufpool"
)
// E is the primary datatype of nostr. This is the form of the structure that
// defines its JSON string-based format. Always use New() and Free() to create
// and free event.E to take advantage of the bufpool which greatly improves
// memory allocation behaviour when encoding and decoding nostr events.
// defines its JSON string-based format.
//
// WARNING: DO NOT use json.Marshal with this type because it will not properly
// encode <, >, and & characters due to legacy bullcrap in the encoding/json
@@ -57,10 +54,6 @@ type E struct {
// Sig is the signature on the ID hash that validates as coming from the
// Pubkey in binary format.
Sig []byte
// b is the decode buffer for the event.E. this is where the UnmarshalJSON
// will source the memory to store all of the fields except for the tags.
b bufpool.B
}
var (
@@ -73,31 +66,75 @@ var (
jSig = []byte("sig")
)
// New returns a new event.E. The returned event.E should be freed with Free()
// to return the unmarshalling buffer to the bufpool.
// New returns a new event.E.
func New() *E {
return &E{
b: bufpool.Get(),
}
return &E{}
}
// Free returns the event.E to the pool, as well as nilling all of the fields.
// This should hint to the GC that the event.E can be freed, and the memory
// reused. The decode buffer will be returned to the pool for reuse.
// Free nils all of the fields to hint to the GC that the event.E can be freed.
func (ev *E) Free() {
bufpool.Put(ev.b)
ev.ID = nil
ev.Pubkey = nil
ev.Tags = nil
ev.Content = nil
ev.Sig = nil
ev.b = nil
}
// Clone creates a deep copy of the event with independent memory allocations.
// The clone does not use bufpool, ensuring it has a separate lifetime from
// the original event. This prevents corruption when the original is freed
// while the clone is still in use (e.g., in asynchronous delivery).
func (ev *E) Clone() *E {
clone := &E{
CreatedAt: ev.CreatedAt,
Kind: ev.Kind,
}
// Deep copy all byte slices with independent memory
if ev.ID != nil {
clone.ID = make([]byte, len(ev.ID))
copy(clone.ID, ev.ID)
}
if ev.Pubkey != nil {
clone.Pubkey = make([]byte, len(ev.Pubkey))
copy(clone.Pubkey, ev.Pubkey)
}
if ev.Content != nil {
clone.Content = make([]byte, len(ev.Content))
copy(clone.Content, ev.Content)
}
if ev.Sig != nil {
clone.Sig = make([]byte, len(ev.Sig))
copy(clone.Sig, ev.Sig)
}
// Deep copy tags
if ev.Tags != nil {
clone.Tags = tag.NewS()
for _, tg := range *ev.Tags {
if tg != nil {
// Create new tag with deep-copied elements
newTag := tag.NewWithCap(len(tg.T))
for _, element := range tg.T {
newElement := make([]byte, len(element))
copy(newElement, element)
newTag.T = append(newTag.T, newElement)
}
clone.Tags.Append(newTag)
}
}
}
return clone
}
// EstimateSize returns a size for the event that allows for worst case scenario
// expansion of the escaped content and tags.
func (ev *E) EstimateSize() (size int) {
size = len(ev.ID)*2 + len(ev.Pubkey)*2 + len(ev.Sig)*2 + len(ev.Content)*2
if ev.Tags == nil {
return
}
for _, v := range *ev.Tags {
for _, w := range (*v).T {
size += len(w) * 2
@@ -132,6 +169,9 @@ func (ev *E) Marshal(dst []byte) (b []byte) {
b = append(b, `":`...)
if ev.Tags != nil {
b = ev.Tags.Marshal(b)
} else {
// Emit empty array for nil tags to keep JSON valid
b = append(b, '[', ']')
}
b = append(b, `,"`...)
b = append(b, jContent...)
@@ -148,27 +188,21 @@ func (ev *E) Marshal(dst []byte) (b []byte) {
// MarshalJSON marshals an event.E into a JSON byte string.
//
// Call bufpool.PutBytes(b) to return the buffer to the bufpool after use.
//
// WARNING: if json.Marshal is called in the hopes of invoking this function on
// an event, if it has <, > or * in the content or tags they are escaped into
// unicode escapes and break the event ID. Call this function directly in order
// to bypass this issue.
func (ev *E) MarshalJSON() (b []byte, err error) {
b = bufpool.Get()
b = ev.Marshal(b[:0])
b = ev.Marshal(nil)
return
}
func (ev *E) Serialize() (b []byte) {
b = bufpool.Get()
b = ev.Marshal(b[:0])
b = ev.Marshal(nil)
return
}
// Unmarshal unmarshals a JSON string into an event.E.
//
// Call ev.Free() when the event is no longer needed.
func (ev *E) Unmarshal(b []byte) (rem []byte, err error) {
key := make([]byte, 0, 9)
for ; len(b) > 0; b = b[1:] {
@@ -181,7 +215,6 @@ func (ev *E) Unmarshal(b []byte) (rem []byte, err error) {
goto BetweenKeys
}
}
log.I.F("start")
goto eof
BetweenKeys:
for ; len(b) > 0; b = b[1:] {
@@ -194,7 +227,6 @@ BetweenKeys:
goto InKey
}
}
log.I.F("BetweenKeys")
goto eof
InKey:
for ; len(b) > 0; b = b[1:] {
@@ -204,7 +236,6 @@ InKey:
}
key = append(key, b[0])
}
log.I.F("InKey")
goto eof
InKV:
for ; len(b) > 0; b = b[1:] {
@@ -217,7 +248,6 @@ InKV:
goto InVal
}
}
log.I.F("InKV")
goto eof
InVal:
// Skip whitespace before value
@@ -341,8 +371,8 @@ BetweenKV:
goto InKey
}
}
log.I.F("between kv")
goto eof
// If we reach here, the buffer ended unexpectedly. Treat as end-of-object
goto AfterClose
AfterClose:
rem = b
return
@@ -361,6 +391,7 @@ eof:
//
// Call ev.Free() when the event is no longer needed.
func (ev *E) UnmarshalJSON(b []byte) (err error) {
log.I.F("UnmarshalJSON: '%s'", b)
_, err = ev.Unmarshal(b)
return
}
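The Clone/Free pairing is the intended pattern for asynchronous delivery; a short sketch (deliver is hypothetical):
// dispatch hands an independent deep copy to a goroutine so the original
// event can be freed immediately without corrupting the in-flight copy.
func dispatch(ev *event.E, deliver func(*event.E)) {
	c := ev.Clone()
	go deliver(c)
	ev.Free()
}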

View File

@@ -9,7 +9,6 @@ import (
"lol.mleku.dev/errorf"
"next.orly.dev/pkg/encoders/text"
utils "next.orly.dev/pkg/utils"
"next.orly.dev/pkg/utils/bufpool"
)
// The tag position meanings, so they are clear when reading.
@@ -21,18 +20,17 @@ const (
type T struct {
T [][]byte
b bufpool.B
}
func New() *T { return &T{b: bufpool.Get()} }
func New() *T { return &T{} }
func NewFromBytesSlice(t ...[]byte) (tt *T) {
tt = &T{T: t, b: bufpool.Get()}
tt = &T{T: t}
return
}
func NewFromAny(t ...any) (tt *T) {
tt = &T{b: bufpool.Get()}
tt = &T{}
for _, v := range t {
switch vv := v.(type) {
case []byte:
@@ -47,11 +45,10 @@ func NewFromAny(t ...any) (tt *T) {
}
func NewWithCap(c int) *T {
return &T{T: make([][]byte, 0, c), b: bufpool.Get()}
return &T{T: make([][]byte, 0, c)}
}
func (t *T) Free() {
bufpool.Put(t.b)
t.T = nil
}
@@ -99,18 +96,12 @@ func (t *T) Marshal(dst []byte) (b []byte) {
// in an event as you will have a bad time. Use the json.Marshal function in the
// pkg/encoders/json package instead, this has a fork of the json library that
// disables html escaping for json.Marshal.
//
// Call bufpool.PutBytes(b) to return the buffer to the bufpool after use.
func (t *T) MarshalJSON() (b []byte, err error) {
b = bufpool.Get()
b = t.Marshal(b)
b = t.Marshal(nil)
return
}
// Unmarshal decodes a standard minified JSON array of strings to a tags.T.
//
// Call bufpool.PutBytes(b) to return the buffer to the bufpool after use if it
// was originally created using bufpool.Get().
func (t *T) Unmarshal(b []byte) (r []byte, err error) {
var inQuotes, openedBracket bool
var quoteStart int
@@ -127,7 +118,11 @@ func (t *T) Unmarshal(b []byte) (r []byte, err error) {
i++
} else if b[i] == '"' {
inQuotes = false
t.T = append(t.T, text.NostrUnescape(b[quoteStart:i]))
// Copy the quoted substring before unescaping so we don't mutate the
// original JSON buffer in-place (which would corrupt subsequent parsing).
copyBuf := make([]byte, i-quoteStart)
copy(copyBuf, b[quoteStart:i])
t.T = append(t.T, text.NostrUnescape(copyBuf))
}
}
if !openedBracket || inQuotes {
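The copy in Unmarshal above guards against NostrUnescape's in-place rewriting; a sketch of the hazard it closes (buffer and indices are illustrative):
func aliasHazard() {
	raw := []byte(`["a\nb","tail"]`)
	quoted := raw[2:6] // the escaped a\nb span inside raw
	// Unescaping quoted in place would shift bytes within raw and corrupt
	// the `","tail"]` remainder still to be parsed; copy first instead.
	safe := make([]byte, len(quoted))
	copy(safe, quoted)
	_ = text.NostrUnescape(safe) // mutates only the copy
}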

View File

@@ -5,7 +5,6 @@ import (
"lol.mleku.dev/chk"
"next.orly.dev/pkg/utils"
"next.orly.dev/pkg/utils/bufpool"
)
// S is a list of tag.T - which are lists of string elements with ordering and
@@ -47,6 +46,9 @@ func (s *S) Append(t ...*T) {
// ContainsAny returns true if any of the values given in `values` matches any
// of the tag elements.
func (s *S) ContainsAny(tagName []byte, values [][]byte) bool {
if s == nil {
return false
}
if len(tagName) < 1 {
return false
}
@@ -67,10 +69,7 @@ func (s *S) ContainsAny(tagName []byte, values [][]byte) bool {
}
// MarshalJSON encodes a tags.T appended to a provided byte slice in JSON form.
//
// Call bufpool.PutBytes(b) to return the buffer to the bufpool after use.
func (s *S) MarshalJSON() (b []byte, err error) {
b = bufpool.Get()
b = append(b, '[')
for i, ss := range *s {
b = ss.Marshal(b)
@@ -97,8 +96,6 @@ func (s *S) Marshal(dst []byte) (b []byte) {
// UnmarshalJSON a tags.T from a provided byte slice and return what remains
// after the end of the array.
//
// Call bufpool.PutBytes(b) to return the buffer to the bufpool after use.
func (s *S) UnmarshalJSON(b []byte) (err error) {
_, err = s.Unmarshal(b)
return

View File

@@ -94,7 +94,10 @@ func UnmarshalQuoted(b []byte) (content, rem []byte, err error) {
if !escaping {
rem = rem[1:]
content = content[:contentLen]
content = NostrUnescape(content)
// Create a copy of the content to avoid corrupting the original input buffer
contentCopy := make([]byte, len(content))
copy(contentCopy, content)
content = NostrUnescape(contentCopy)
return
}
contentLen++

View File

@@ -140,6 +140,12 @@ func (r *Client) Context() context.Context { return r.connectionContext }
// IsConnected returns true if the connection to this relay seems to be active.
func (r *Client) IsConnected() bool { return r.connectionContext.Err() == nil }
// ConnectionCause returns the cancel cause for the relay connection context.
func (r *Client) ConnectionCause() error { return context.Cause(r.connectionContext) }
// LastError returns the last connection error observed by the reader loop.
func (r *Client) LastError() error { return r.ConnectionError }
// Connect tries to establish a websocket connection to r.URL.
// If the context expires before the connection is complete, an error is returned.
// Once successfully connected, context expiration has no effect: call r.Close
@@ -218,6 +224,11 @@ func (r *Client) ConnectWithTLS(
for {
select {
case <-r.connectionContext.Done():
log.T.F(
"WS.Client: connection context done for %s: cause=%v lastErr=%v",
r.URL, context.Cause(r.connectionContext),
r.ConnectionError,
)
ticker.Stop()
r.Connection = nil
@@ -241,13 +252,17 @@ func (r *Client) ConnectWithTLS(
"{%s} error writing ping: %v; closing websocket", r.URL,
err,
)
r.Close() // this should trigger a context cancelation
r.CloseWithReason(
fmt.Errorf(
"ping failed: %w", err,
),
) // this should trigger a context cancelation
return
}
case wr := <-r.writeQueue:
// all write requests will go through this to prevent races
log.D.F("{%s} sending %v\n", r.URL, string(wr.msg))
// log.D.F("{%s} sending %v\n", r.URL, string(wr.msg))
if err = r.Connection.WriteMessage(
r.connectionContext, wr.msg,
); err != nil {
@@ -269,7 +284,11 @@ func (r *Client) ConnectWithTLS(
r.connectionContext, buf,
); err != nil {
r.ConnectionError = err
r.Close()
log.T.F(
"WS.Client: reader loop error on %s: %v; closing connection",
r.URL, err,
)
r.CloseWithReason(fmt.Errorf("reader loop error: %w", err))
return
}
message := buf.Bytes()
@@ -358,11 +377,11 @@ func (r *Client) ConnectWithTLS(
if okCallback, exist := r.okCallbacks.Load(string(env.EventID)); exist {
okCallback(env.OK, env.ReasonString())
} else {
log.I.F(
"{%s} got an unexpected OK message for event %0x",
r.URL,
env.EventID,
)
// log.I.F(
// "{%s} got an unexpected OK message for event %0x",
// r.URL,
// env.EventID,
// )
}
}
}
@@ -479,14 +498,27 @@ func (r *Client) Subscribe(
sub := r.PrepareSubscription(ctx, ff, opts...)
if r.Connection == nil {
log.T.F(
"WS.Subscribe: not connected to %s; aborting sub id=%s", r.URL,
sub.GetID(),
)
return nil, fmt.Errorf("not connected to %s", r.URL)
}
log.T.F(
"WS.Subscribe: firing subscription id=%s to %s with %d filters",
sub.GetID(), r.URL, len(*ff),
)
if err := sub.Fire(); err != nil {
log.T.F(
"WS.Subscribe: Fire failed id=%s to %s: %v", sub.GetID(), r.URL,
err,
)
return nil, fmt.Errorf(
"couldn't subscribe to %v at %s: %w", ff, r.URL, err,
)
}
log.T.F("WS.Subscribe: Fire succeeded id=%s to %s", sub.GetID(), r.URL)
return sub, nil
}
@@ -598,9 +630,10 @@ func (r *Client) QuerySync(
}
// Close closes the relay connection.
func (r *Client) Close() error {
return r.close(errors.New("Close() called"))
}
func (r *Client) Close() error { return r.CloseWithReason(errors.New("Close() called")) }
// CloseWithReason closes the relay connection with a specific reason that will be stored as the context cancel cause.
func (r *Client) CloseWithReason(reason error) error { return r.close(reason) }
func (r *Client) close(reason error) error {
r.closeMutex.Lock()
@@ -609,6 +642,10 @@ func (r *Client) close(reason error) error {
if r.connectionContextCancel == nil {
return fmt.Errorf("relay already closed")
}
log.T.F(
"WS.Client: closing connection to %s: reason=%v lastErr=%v", r.URL,
reason, r.ConnectionError,
)
r.connectionContextCancel(reason)
r.connectionContextCancel = nil
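The cause plumbing added here lets callers distinguish ping failures, reader errors, and explicit closes; a hedged sketch (assuming the client package is imported as ws):
// explainDisconnect blocks until the connection context ends, then reports
// the cancel cause recorded by CloseWithReason and the last reader error.
func explainDisconnect(r *ws.Client) {
	<-r.Context().Done()
	log.I.F(
		"relay %s closed: cause=%v lastErr=%v",
		r.URL, r.ConnectionCause(), r.LastError(),
	)
}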

View File

@@ -8,6 +8,7 @@ import (
"sync/atomic"
"lol.mleku.dev/chk"
"lol.mleku.dev/log"
"next.orly.dev/pkg/encoders/envelopes/closeenvelope"
"next.orly.dev/pkg/encoders/envelopes/reqenvelope"
"next.orly.dev/pkg/encoders/event"
@@ -88,8 +89,14 @@ var (
)
func (sub *Subscription) start() {
// Wait for the context to be done instead of blocking immediately
// This allows the subscription to receive events before terminating
sub.live.Store(true)
// debug: log start of subscription goroutine
log.T.F("WS.Subscription.start: started id=%s", sub.GetID())
<-sub.Context.Done()
// the subscription ends once the context is canceled (if not already)
log.T.F("WS.Subscription.start: context done for id=%s", sub.GetID())
sub.unsub(errors.New("context done on start()")) // this will set sub.live to false
// do this so we don't have the possibility of closing the Events channel and then trying to send to it
sub.mu.Lock()
@@ -180,10 +187,18 @@ func (sub *Subscription) Fire() (err error) {
var reqb []byte
reqb = reqenvelope.NewFrom(sub.id, sub.Filters).Marshal(nil)
sub.live.Store(true)
log.T.F(
"WS.Subscription.Fire: sending REQ id=%s filters=%d bytes=%d",
sub.GetID(), len(*sub.Filters), len(reqb),
)
if err = <-sub.Client.Write(reqb); err != nil {
err = fmt.Errorf("failed to write: %w", err)
log.T.F(
"WS.Subscription.Fire: write failed id=%s: %v", sub.GetID(), err,
)
sub.cancel(err)
return
}
log.T.F("WS.Subscription.Fire: write ok id=%s", sub.GetID())
return
}
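Fire is normally invoked through Subscribe; a consumption sketch (the filters type and the Events channel element are assumed from context):
// drain opens a subscription and logs events until the context ends and
// the Events channel is closed by the teardown in start().
func drain(ctx context.Context, r *ws.Client, ff *filters.T) error {
	sub, err := r.Subscribe(ctx, ff)
	if err != nil {
		return err
	}
	for ev := range sub.Events {
		log.I.F("event id=%x", ev.ID)
	}
	return nil
}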

View File

@@ -1 +1 @@
v0.4.5
v0.4.9

169
scripts/benchmark.sh Executable file
View File

@@ -0,0 +1,169 @@
#!/bin/bash
set -euo pipefail
# scripts/benchmark.sh - Run full benchmark suite on a relay at a configurable address
#
# Usage:
# ./scripts/benchmark.sh [relay_address] [relay_port]
#
# Example:
# ./scripts/benchmark.sh localhost 3334
# ./scripts/benchmark.sh nostr.example.com 8080
#
# If relay_address and relay_port are not provided, defaults to localhost:3334
SCRIPT_DIR="$(cd -- "$(dirname -- "${BASH_SOURCE[0]}")" && pwd)"
REPO_ROOT="$(cd -- "${SCRIPT_DIR}/.." && pwd)"
cd "$REPO_ROOT"
# Default values
RELAY_ADDRESS="${1:-localhost}"
RELAY_PORT="${2:-3334}"
RELAY_URL="ws://${RELAY_ADDRESS}:${RELAY_PORT}"
BENCHMARK_EVENTS="${BENCHMARK_EVENTS:-10000}"
BENCHMARK_WORKERS="${BENCHMARK_WORKERS:-8}"
BENCHMARK_DURATION="${BENCHMARK_DURATION:-60s}"
REPORTS_DIR="${REPORTS_DIR:-$REPO_ROOT/cmd/benchmark/reports}"
TIMESTAMP=$(date +"%Y%m%d_%H%M%S")
RUN_DIR="${REPORTS_DIR}/run_${TIMESTAMP}"
# Ensure the benchmark binary is built
BENCHMARK_BIN="${REPO_ROOT}/cmd/benchmark/benchmark"
if [[ ! -x "$BENCHMARK_BIN" ]]; then
echo "Building benchmark binary..."
go build -o "$BENCHMARK_BIN" "$REPO_ROOT/cmd/benchmark"
fi
# Create output directory
mkdir -p "${RUN_DIR}"
echo "=================================================="
echo "Nostr Relay Benchmark"
echo "=================================================="
echo "Timestamp: $(date)"
echo "Target Relay: ${RELAY_URL}"
echo "Events per test: ${BENCHMARK_EVENTS}"
echo "Concurrent workers: ${BENCHMARK_WORKERS}"
echo "Test duration: ${BENCHMARK_DURATION}"
echo "Output directory: ${RUN_DIR}"
echo "=================================================="
# Function to wait for relay to be ready
wait_for_relay() {
local url="$1"
local max_attempts=30
local attempt=0
echo "Waiting for relay to be ready at ${url}..."
while [ $attempt -lt $max_attempts ]; do
# Try to get HTTP status code with curl
local status=$(curl -s -o /dev/null -w "%{http_code}" --connect-timeout 5 --max-time 5 "http://${RELAY_ADDRESS}:${RELAY_PORT}" || echo 000)
case "$status" in
101|200|400|404|426)
echo "Relay is ready! (HTTP ${status})"
return 0
;;
esac
attempt=$((attempt + 1))
echo " Attempt ${attempt}/${max_attempts}: Relay not ready yet (HTTP ${status})..."
sleep 2
done
echo "ERROR: Relay failed to become ready after ${max_attempts} attempts"
return 1
}
# Function to run benchmark against the relay
run_benchmark() {
local output_file="${RUN_DIR}/benchmark_results.txt"
echo ""
echo "=================================================="
echo "Testing relay at ${RELAY_URL}"
echo "=================================================="
# Wait for relay to be ready
if ! wait_for_relay "${RELAY_ADDRESS}:${RELAY_PORT}"; then
echo "ERROR: Relay is not responding, aborting..."
echo "RELAY_URL: ${RELAY_URL}" > "${output_file}"
echo "STATUS: FAILED - Relay not responding" >> "${output_file}"
echo "ERROR: Connection failed" >> "${output_file}"
return 1
fi
# Run the benchmark
echo "Running benchmark against ${RELAY_URL}..."
# Create temporary directory for benchmark data
TEMP_DATA_DIR="/tmp/benchmark_${TIMESTAMP}"
mkdir -p "${TEMP_DATA_DIR}"
# Run benchmark and capture both stdout and stderr
if "${BENCHMARK_BIN}" \
-relay-url="${RELAY_URL}" \
-datadir="${TEMP_DATA_DIR}" \
-events="${BENCHMARK_EVENTS}" \
-workers="${BENCHMARK_WORKERS}" \
-duration="${BENCHMARK_DURATION}" \
> "${output_file}" 2>&1; then
echo "✓ Benchmark completed successfully"
# Add relay identification to the report
echo "" >> "${output_file}"
echo "RELAY_URL: ${RELAY_URL}" >> "${output_file}"
echo "TEST_TIMESTAMP: $(date -Iseconds)" >> "${output_file}"
echo "BENCHMARK_CONFIG:" >> "${output_file}"
echo " Events: ${BENCHMARK_EVENTS}" >> "${output_file}"
echo " Workers: ${BENCHMARK_WORKERS}" >> "${output_file}"
echo " Duration: ${BENCHMARK_DURATION}" >> "${output_file}" else
echo "✗ Benchmark failed"
echo "" >> "${output_file}"
echo "RELAY_URL: ${RELAY_URL}" >> "${output_file}"
echo "STATUS: FAILED" >> "${output_file}"
echo "TEST_TIMESTAMP: $(date -Iseconds)" >> "${output_file}"
fi
# Clean up temporary data
rm -rf "${TEMP_DATA_DIR}"
return 0
}
# Main execution
echo "Starting relay benchmark..."
run_benchmark
# Display results
if [ -f "${RUN_DIR}/benchmark_results.txt" ]; then
echo ""
echo "=================================================="
echo "Benchmark Results Summary"
echo "=================================================="
# Extract key metrics from the benchmark report
if grep -q "STATUS: FAILED" "${RUN_DIR}/benchmark_results.txt"; then
echo "Status: FAILED"
grep "ERROR:" "${RUN_DIR}/benchmark_results.txt" | head -1 || echo "Error: Unknown failure"
else
echo "Status: COMPLETED"
# Extract performance metrics
grep "Events/sec:" "${RUN_DIR}/benchmark_results.txt" | head -3 || true
grep "Success Rate:" "${RUN_DIR}/benchmark_results.txt" | head -3 || true
grep "Avg Latency:" "${RUN_DIR}/benchmark_results.txt" | head -3 || true
grep "P95 Latency:" "${RUN_DIR}/benchmark_results.txt" | head -3 || true
grep "Memory:" "${RUN_DIR}/benchmark_results.txt" | head -3 || true
fi
echo ""
echo "Full results available in: ${RUN_DIR}/benchmark_results.txt"
fi
echo ""
echo "=================================================="
echo "Benchmark Completed!"
echo "=================================================="
echo "Results directory: ${RUN_DIR}"
echo "Benchmark finished at: $(date)"

File diff suppressed because it is too large