Add serve mode, fix binary tags, document CLI tools, improve Docker

- Add 'serve' subcommand for ephemeral RAM-based relay at /dev/shm with
  open ACL mode for testing and benchmarking
- Fix e-tag and p-tag decoding to use ValueHex()/ValueBinary() methods
  instead of Value() which returns raw bytes for binary-optimized storage
- Document all command-line tools in readme.adoc (relay-tester, benchmark,
  stresstest, blossomtest, aggregator, convert, FIND, policytest, etc.)
- Switch Docker images from Alpine to Debian for proper libsecp256k1
  Schnorr signature and ECDH support required by Nostr
- Upgrade Docker Go version from 1.21 to 1.25
- Add ramdisk mode (--ramdisk) to benchmark script for eliminating disk
  I/O bottlenecks in performance measurements
- Add docker-compose.ramdisk.yml for tmpfs-based benchmark volumes
- Add test coverage for privileged policy with binary-encoded p-tags
- Fix blossom test to expect 200 OK for anonymous uploads when auth is
  not required (RequireAuth=false with ACL mode 'none')
- Update follows ACL to handle both binary and hex p-tag formats
- Grant owner access to all users in serve mode via None ACL
- Add benchmark reports from multi-relay comparison run
- Update CLAUDE.md with binary tag handling documentation
- Bump version to v0.30.2
Commit: fad39ec201 (parent: f1ddad3318)
Date: 2025-11-26 09:52:29 +00:00
42 changed files with 2720 additions and 234 deletions


@@ -133,7 +133,11 @@
"Bash(ssh relay1:*)",
"Bash(done)",
"Bash(go run:*)",
"Bash(go doc:*)"
"Bash(go doc:*)",
"Bash(/tmp/orly-test help:*)",
"Bash(go version:*)",
"Bash(ss:*)",
"Bash(CGO_ENABLED=0 go clean:*)"
],
"deny": [],
"ask": []


@@ -4,6 +4,7 @@ test-build
*.exe
*.dll
*.so
+!libsecp256k1.so
*.dylib
# Test files


@@ -65,7 +65,7 @@ The workflow uses standard Gitea Actions environment variables:
- **Solution**: Verify `GITEA_TOKEN` secret is set correctly with appropriate permissions
**Issue**: Go version not found
-- **Solution**: The workflow downloads Go 1.25.0 directly from go.dev, ensure the runner has internet access
+- **Solution**: The workflow downloads Go 1.25.3 directly from go.dev, ensure the runner has internet access
### Customization


@@ -35,11 +35,11 @@ jobs:
- name: Set up Go
run: |
echo "Setting up Go 1.25.0..."
echo "Setting up Go 1.25.3..."
cd /tmp
wget -q https://go.dev/dl/go1.25.0.linux-amd64.tar.gz
wget -q https://go.dev/dl/go1.25.3.linux-amd64.tar.gz
sudo rm -rf /usr/local/go
sudo tar -C /usr/local -xzf go1.25.0.linux-amd64.tar.gz
sudo tar -C /usr/local -xzf go1.25.3.linux-amd64.tar.gz
export PATH=/usr/local/go/bin:$PATH
go version
@@ -76,9 +76,7 @@ jobs:
export PATH=/usr/local/go/bin:$PATH
cd ${GITHUB_WORKSPACE}
echo "Running tests..."
-# Download libsecp256k1.so from nostr repository
-echo "Downloading libsecp256k1.so from nostr repository..."
-wget -q https://git.mleku.dev/mleku/nostr/raw/branch/main/crypto/p8k/libsecp256k1.so -O libsecp256k1.so
+# libsecp256k1.so is included in the repository
chmod +x libsecp256k1.so
# Set LD_LIBRARY_PATH so tests can find the library
export LD_LIBRARY_PATH=${GITHUB_WORKSPACE}:${LD_LIBRARY_PATH}
@@ -96,9 +94,8 @@ jobs:
# Create directory for binaries
mkdir -p release-binaries
-# Download the pre-compiled libsecp256k1.so for Linux AMD64 from nostr repository
-echo "Downloading libsecp256k1.so from nostr repository..."
-wget -q https://git.mleku.dev/mleku/nostr/raw/branch/main/crypto/p8k/libsecp256k1.so -O release-binaries/libsecp256k1-linux-amd64.so
+# Copy libsecp256k1.so from repository to release binaries
+cp libsecp256k1.so release-binaries/libsecp256k1-linux-amd64.so
chmod +x release-binaries/libsecp256k1-linux-amd64.so
# Build for Linux AMD64 (pure Go + purego dynamic loading)


@@ -59,10 +59,8 @@ cd app/web && bun run dev
# Or manually with purego setup
CGO_ENABLED=0 go test ./...
-# Note: libsecp256k1.so is automatically downloaded by test.sh if needed
-# It can also be manually downloaded from the nostr repository:
-# wget https://git.mleku.dev/mleku/nostr/raw/branch/main/crypto/p8k/libsecp256k1.so
-# export LD_LIBRARY_PATH="${LD_LIBRARY_PATH:+$LD_LIBRARY_PATH:}$(pwd)"
+# Note: libsecp256k1.so is included in the repository root
+# Set LD_LIBRARY_PATH to use it: export LD_LIBRARY_PATH="${LD_LIBRARY_PATH:+$LD_LIBRARY_PATH:}$(pwd)"
```
### Run Specific Package Tests
@@ -210,7 +208,7 @@ export ORLY_DB_INDEX_CACHE_MB=256 # Index cache size
- Schnorr signature operations (NIP-01)
- ECDH for encrypted DMs (NIP-04, NIP-44)
- Public key recovery from signatures
-- `libsecp256k1.so` - Downloaded from nostr repository at runtime/build time
+- `libsecp256k1.so` - Included in repository root for runtime loading
- Key derivation and conversion utilities
- SIMD-accelerated SHA256 using minio/sha256-simd
- SIMD-accelerated hex encoding using templexxx/xhex
@@ -259,8 +257,7 @@ export ORLY_DB_INDEX_CACHE_MB=256 # Index cache size
- All builds use `CGO_ENABLED=0`
- The p8k crypto library (from `git.mleku.dev/mleku/nostr`) uses `github.com/ebitengine/purego` to dynamically load `libsecp256k1.so` at runtime
- This avoids CGO complexity while maintaining C library performance
-- `libsecp256k1.so` is automatically downloaded by build/test scripts from the nostr repository
-- Manual download: `wget https://git.mleku.dev/mleku/nostr/raw/branch/main/crypto/p8k/libsecp256k1.so`
+- `libsecp256k1.so` is included in the repository root
- Library must be in `LD_LIBRARY_PATH` or same directory as binary for runtime loading
**Database Backend Selection:**
@@ -310,6 +307,23 @@ export ORLY_DB_INDEX_CACHE_MB=256 # Index cache size
- External packages (e.g., `app/`) should ONLY use public API methods, never access internal fields
- **DO NOT** change unexported fields to exported when fixing bugs - this breaks the domain boundary
**Binary-Optimized Tag Storage (IMPORTANT):**
- The nostr library (`git.mleku.dev/mleku/nostr/encoders/tag`) uses binary optimization for `e` and `p` tags
- When events are unmarshaled from JSON, 64-character hex values in e/p tags are converted to 33-byte binary format (32 bytes hash + null terminator)
- **DO NOT** use `tag.Value()` directly for e/p tags - it returns raw bytes which may be binary, not hex
- **ALWAYS** use these methods instead:
- `tag.ValueHex()` - Returns hex string regardless of storage format (handles both binary and hex)
- `tag.ValueBinary()` - Returns 32-byte binary if stored in binary format, nil otherwise
- Example pattern for comparing pubkeys:
```go
// CORRECT: Use ValueHex() for hex decoding
pt, err := hex.Dec(string(pTag.ValueHex()))
// WRONG: Value() may return binary bytes, not hex
pt, err := hex.Dec(string(pTag.Value())) // Will fail for binary-encoded tags!
```
- This optimization saves memory and enables faster comparisons in the database layer
## Development Workflow
### Making Changes to Web UI
@@ -370,7 +384,7 @@ export ORLY_PPROF_PATH=/tmp/profiles
```
This script:
-1. Installs Go 1.25.0 if needed
+1. Installs Go 1.25.3 if needed
2. Builds relay with embedded web UI
3. Installs to `~/.local/bin/orly`
4. Creates systemd service


@@ -1,10 +1,11 @@
# Multi-stage Dockerfile for ORLY relay
# Stage 1: Build stage
-FROM golang:1.21-alpine AS builder
+# Use Debian-based Go image to match runtime stage (avoids musl/glibc linker mismatch)
+FROM golang:1.25-bookworm AS builder
# Install build dependencies
-RUN apk add --no-cache git make
+RUN apt-get update && apt-get install -y --no-install-recommends git make && rm -rf /var/lib/apt/lists/*
# Set working directory
WORKDIR /build
@@ -20,28 +21,26 @@ COPY . .
RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o orly -ldflags="-w -s" .
# Stage 2: Runtime stage
-FROM alpine:latest
+# Use Debian slim instead of Alpine because Debian's libsecp256k1 includes
+# Schnorr signatures (secp256k1_schnorrsig_*) and ECDH which Nostr requires.
+# Alpine's libsecp256k1 is built without these modules.
+FROM debian:bookworm-slim
# Install runtime dependencies
-RUN apk add --no-cache ca-certificates curl wget
+RUN apt-get update && \
+    apt-get install -y --no-install-recommends ca-certificates curl libsecp256k1-1 && \
+    rm -rf /var/lib/apt/lists/*
# Create app user
-RUN addgroup -g 1000 orly && \
-    adduser -D -u 1000 -G orly orly
+RUN groupadd -g 1000 orly && \
+    useradd -m -u 1000 -g orly orly
# Set working directory
WORKDIR /app
-# Copy binary from builder
+# Copy binary (libsecp256k1.so.1 is already installed via apt)
COPY --from=builder /build/orly /app/orly
-# Download libsecp256k1.so from nostr repository (optional for performance)
-RUN wget -q https://git.mleku.dev/mleku/nostr/raw/branch/main/crypto/p8k/libsecp256k1.so \
-    -O /app/libsecp256k1.so || echo "Warning: libsecp256k1.so download failed (optional)"
# Set library path
ENV LD_LIBRARY_PATH=/app
# Create data directory
RUN mkdir -p /data && chown -R orly:orly /data /app


@@ -1,9 +1,10 @@
# Dockerfile for relay-tester
-FROM golang:1.21-alpine AS builder
+# Use Debian-based Go image to match runtime stage (avoids musl/glibc linker mismatch)
+FROM golang:1.25-bookworm AS builder
# Install build dependencies
-RUN apk add --no-cache git
+RUN apt-get update && apt-get install -y --no-install-recommends git && rm -rf /var/lib/apt/lists/*
# Set working directory
WORKDIR /build
@@ -19,12 +20,19 @@ COPY . .
RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o relay-tester ./cmd/relay-tester
# Runtime stage
-FROM alpine:latest
+# Use Debian slim instead of Alpine because Debian's libsecp256k1 includes
+# Schnorr signatures (secp256k1_schnorrsig_*) and ECDH which Nostr requires.
+# Alpine's libsecp256k1 is built without these modules.
+FROM debian:bookworm-slim
-RUN apk add --no-cache ca-certificates
+# Install runtime dependencies
+RUN apt-get update && \
+    apt-get install -y --no-install-recommends ca-certificates libsecp256k1-1 && \
+    rm -rf /var/lib/apt/lists/*
WORKDIR /app
# Copy binary (libsecp256k1.so.1 is already installed via apt)
COPY --from=builder /build/relay-tester /app/relay-tester
# Default relay URL (can be overridden)


@@ -88,6 +88,10 @@ type C struct {
// Cluster replication configuration
ClusterPropagatePrivilegedEvents bool `env:"ORLY_CLUSTER_PROPAGATE_PRIVILEGED_EVENTS" default:"true" usage:"propagate privileged events (DMs, gift wraps, etc.) to relay peers for replication"`
// ServeMode is set programmatically by the 'serve' subcommand to grant full owner
// access to all users (no env tag - internal use only)
ServeMode bool
}
// New creates and initializes a new configuration object for the relay
@@ -193,6 +197,21 @@ func IdentityRequested() (requested bool) {
return
}
// ServeRequested checks if the first command line argument is "serve" and returns
// whether the relay should start in ephemeral serve mode with RAM-based storage.
//
// Return Values
// - requested: true if the 'serve' subcommand was provided, false otherwise.
func ServeRequested() (requested bool) {
if len(os.Args) > 1 {
switch strings.ToLower(os.Args[1]) {
case "serve":
requested = true
}
}
return
}
// KV is a key/value pair.
type KV struct{ Key, Value string }
@@ -324,10 +343,14 @@ func PrintHelp(cfg *C, printer io.Writer) {
)
_, _ = fmt.Fprintf(
printer,
-`Usage: %s [env|help]
+`Usage: %s [env|help|identity|serve]
- env: print environment variables configuring %s
- help: print this help text
- identity: print the relay identity secret and public key
- serve: start ephemeral relay with RAM-based storage at /dev/shm/orlyserve
listening on 0.0.0.0:10547 with 'none' ACL mode (open relay)
useful for testing and benchmarking
`,
cfg.AppName, cfg.AppName,


@@ -142,19 +142,26 @@ func (l *Listener) HandleDelete(env *eventenvelope.Submission) (err error) {
// if e tags are found, delete them if the author is signer, or one of
// the owners is signer
if utils.FastEqual(t.Key(), []byte("e")) {
-				val := t.Value()
-				if len(val) == 0 {
-					log.W.F("HandleDelete: empty e-tag value")
-					continue
-				}
-				log.I.F("HandleDelete: processing e-tag with value: %s", string(val))
+				// First try binary format (optimized storage for e-tags)
				var dst []byte
-				if b, e := hex.Dec(string(val)); chk.E(e) {
-					log.E.F("HandleDelete: failed to decode hex event ID %s: %v", string(val), e)
-					continue
+				if binVal := t.ValueBinary(); binVal != nil {
+					dst = binVal
+					log.I.F("HandleDelete: processing binary e-tag event ID: %0x", dst)
				} else {
-					dst = b
-					log.I.F("HandleDelete: decoded event ID: %0x", dst)
+					// Fall back to hex decoding for non-binary values
+					val := t.Value()
+					if len(val) == 0 {
+						log.W.F("HandleDelete: empty e-tag value")
+						continue
+					}
+					log.I.F("HandleDelete: processing e-tag with value: %s", string(val))
+					if b, e := hex.Dec(string(val)); chk.E(e) {
+						log.E.F("HandleDelete: failed to decode hex event ID %s: %v", string(val), e)
+						continue
+					} else {
+						dst = b
+						log.I.F("HandleDelete: decoded event ID: %0x", dst)
+					}
				}
f := &filter.F{
Ids: tag.NewFromBytesSlice(dst),
@@ -164,7 +171,7 @@ func (l *Listener) HandleDelete(env *eventenvelope.Submission) (err error) {
log.E.F("HandleDelete: failed to get serials from filter: %v", err)
continue
}
-log.I.F("HandleDelete: found %d serials for event ID %s", len(sers), string(val))
+log.I.F("HandleDelete: found %d serials for event ID %0x", len(sers), dst)
// if found, delete them
if len(sers) > 0 {
// there should be only one event per serial, so we can just


@@ -54,9 +54,17 @@ func testPrivilegedEventFiltering(events event.S, authedPubkey []byte, aclMode s
// Check p tags
pTags := ev.Tags.GetAll([]byte("p"))
for _, pTag := range pTags {
-			var pt []byte
-			var err error
-			if pt, err = hex.Dec(string(pTag.Value())); err != nil {
+			// First try binary format (optimized storage)
+			if pt := pTag.ValueBinary(); pt != nil {
+				if bytes.Equal(pt, authedPubkey) {
+					authorized = true
+					break
+				}
+				continue
+			}
+			// Fall back to hex decoding for non-binary values
+			pt, err := hex.Dec(string(pTag.Value()))
+			if err != nil {
continue
}
if bytes.Equal(pt, authedPubkey) {


@@ -1,22 +1,11 @@
# Dockerfile for benchmark runner
-FROM golang:1.25-alpine AS builder
-# Install build dependencies including libsecp256k1 build requirements
-RUN apk add --no-cache git ca-certificates gcc musl-dev autoconf automake libtool make
+# Uses pure Go build with purego for dynamic libsecp256k1 loading
+# Use Debian-based Go image to match runtime stage (avoids musl/glibc linker mismatch)
+FROM golang:1.25-bookworm AS builder
-# Build libsecp256k1 EARLY - this layer will be cached unless secp256k1 version changes
-# Using specific version tag and parallel builds for faster compilation
-RUN cd /tmp && \
-    git clone https://github.com/bitcoin-core/secp256k1.git && \
-    cd secp256k1 && \
-    git checkout v0.6.0 && \
-    git submodule init && \
-    git submodule update && \
-    ./autogen.sh && \
-    ./configure --enable-module-recovery --enable-module-ecdh --enable-module-schnorrsig --enable-module-extrakeys && \
-    make -j$(nproc) && \
-    make install && \
-    cd /tmp && rm -rf secp256k1
+# Install build dependencies (no secp256k1 build needed)
+RUN apt-get update && apt-get install -y --no-install-recommends git ca-certificates && rm -rf /var/lib/apt/lists/*
# Set working directory
WORKDIR /build
@@ -28,27 +17,25 @@ RUN go mod download
# Copy source code
COPY . .
-# Build the benchmark tool with CGO enabled
-RUN CGO_ENABLED=1 GOOS=linux go build -a -o benchmark ./cmd/benchmark
-# Copy libsecp256k1.so if available
-RUN if [ -f pkg/crypto/p8k/libsecp256k1.so ]; then \
-    cp pkg/crypto/p8k/libsecp256k1.so /build/; \
-    fi
+# Build the benchmark tool with CGO disabled (uses purego for crypto)
+RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o benchmark ./cmd/benchmark
# Final stage
-FROM alpine:latest
+# Use Debian slim instead of Alpine because Debian's libsecp256k1 includes
+# Schnorr signatures (secp256k1_schnorrsig_*) and ECDH which Nostr requires.
+# Alpine's libsecp256k1 is built without these modules.
+FROM debian:bookworm-slim
-# Install runtime dependencies including libsecp256k1
-RUN apk --no-cache add ca-certificates curl wget libsecp256k1
+# Install runtime dependencies
+RUN apt-get update && \
+    apt-get install -y --no-install-recommends ca-certificates curl libsecp256k1-1 && \
+    rm -rf /var/lib/apt/lists/*
WORKDIR /app
-# Copy benchmark binary
+# Copy benchmark binary (libsecp256k1.so.1 is already installed via apt)
COPY --from=builder /build/benchmark /app/benchmark
-# libsecp256k1 is already installed system-wide via apk
# Copy benchmark runner script
COPY cmd/benchmark/benchmark-runner.sh /app/benchmark-runner
@@ -56,13 +43,10 @@ COPY cmd/benchmark/benchmark-runner.sh /app/benchmark-runner
RUN chmod +x /app/benchmark-runner
# Create runtime user and reports directory owned by uid 1000
-RUN adduser -u 1000 -D appuser && \
+RUN useradd -m -u 1000 appuser && \
    mkdir -p /reports && \
    chown -R 1000:1000 /app /reports
-# Set library path
-ENV LD_LIBRARY_PATH=/app:/usr/local/lib:/usr/lib
# Environment variables
ENV BENCHMARK_EVENTS=50000
ENV BENCHMARK_WORKERS=24
@@ -72,4 +56,4 @@ ENV BENCHMARK_DURATION=60s
USER 1000:1000
# Run the benchmark runner
CMD ["/app/benchmark-runner"]


@@ -1,75 +1,51 @@
-# Dockerfile for next.orly.dev relay
-FROM ubuntu:22.04 as builder
-# Set environment variables
-ARG GOLANG_VERSION=1.22.5
+# Dockerfile for next.orly.dev relay (benchmark version)
+# Uses pure Go build with purego for dynamic libsecp256k1 loading
+# Stage 1: Build stage
+# Use Debian-based Go image to match runtime stage (avoids musl/glibc linker mismatch)
+FROM golang:1.25-bookworm AS builder
-# Update package list and install ALL dependencies in one layer
-RUN apt-get update && \
-    apt-get install -y wget ca-certificates build-essential autoconf libtool git && \
-    rm -rf /var/lib/apt/lists/*
-# Download and install Go binary
-RUN wget https://go.dev/dl/go${GOLANG_VERSION}.linux-amd64.tar.gz && \
-    rm -rf /usr/local/go && \
-    tar -C /usr/local -xzf go${GOLANG_VERSION}.linux-amd64.tar.gz && \
-    rm go${GOLANG_VERSION}.linux-amd64.tar.gz
-# Set PATH environment variable
-ENV PATH="/usr/local/go/bin:${PATH}"
-# Verify installation
-RUN go version
-# Build secp256k1 EARLY - this layer will be cached unless secp256k1 version changes
-RUN cd /tmp && \
-    rm -rf secp256k1 && \
-    git clone https://github.com/bitcoin-core/secp256k1.git && \
-    cd secp256k1 && \
-    git checkout v0.6.0 && \
-    git submodule init && \
-    git submodule update && \
-    ./autogen.sh && \
-    ./configure --enable-module-schnorrsig --enable-module-ecdh --prefix=/usr && \
-    make -j$(nproc) && \
-    make install && \
-    cd /tmp && rm -rf secp256k1
+# Install build dependencies
+RUN apt-get update && apt-get install -y --no-install-recommends git make && rm -rf /var/lib/apt/lists/*
# Set working directory
WORKDIR /build
-# Copy go modules AFTER secp256k1 build - this allows module cache to be reused
+# Copy go mod files first for better layer caching
COPY go.mod go.sum ./
RUN go mod download
-# Copy source code LAST - this is the most frequently changing layer
+# Copy source code
COPY . .
-# Build the relay (libsecp256k1 installed via make install to /usr/lib)
-RUN CGO_ENABLED=1 GOOS=linux go build -gcflags "all=-N -l" -o relay .
+# Build the relay with CGO disabled (uses purego for crypto)
+# Include debug symbols for profiling
+RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -gcflags "all=-N -l" -o relay .
# Create non-root user (uid 1000) for runtime in builder stage (used by analyzer)
-RUN useradd -u 1000 -m -s /bin/bash appuser && \
+RUN useradd -m -u 1000 appuser && \
chown -R 1000:1000 /build
# Switch to uid 1000 for any subsequent runtime use of this stage
USER 1000:1000
# Final stage
-FROM ubuntu:22.04
+# Use Debian slim instead of Alpine because Debian's libsecp256k1 includes
+# Schnorr signatures (secp256k1_schnorrsig_*) and ECDH which Nostr requires.
+# Alpine's libsecp256k1 is built without these modules.
+FROM debian:bookworm-slim
# Install runtime dependencies
-RUN apt-get update && apt-get install -y ca-certificates curl libsecp256k1-0 libsecp256k1-dev && rm -rf /var/lib/apt/lists/* && \
-    ln -sf /usr/lib/x86_64-linux-gnu/libsecp256k1.so.0 /usr/lib/x86_64-linux-gnu/libsecp256k1.so.5
+RUN apt-get update && \
+    apt-get install -y --no-install-recommends ca-certificates curl libsecp256k1-1 && \
+    rm -rf /var/lib/apt/lists/*
WORKDIR /app
-# Copy binary from builder
+# Copy binary (libsecp256k1.so.1 is already installed via apt)
COPY --from=builder /build/relay /app/relay
-# libsecp256k1 is already installed system-wide in the final stage via apt-get install libsecp256k1-0
# Create runtime user and writable directories
-RUN useradd -u 1000 -m -s /bin/bash appuser && \
+RUN useradd -m -u 1000 appuser && \
mkdir -p /data /profiles /app && \
chown -R 1000:1000 /data /profiles /app
@@ -96,4 +72,4 @@ HEALTHCHECK --interval=30s --timeout=10s --start-period=40s --retries=3 \
USER 1000:1000
# Run the relay
CMD ["/app/relay"]


@@ -0,0 +1,77 @@
# Docker Compose override file for ramdisk-based benchmarks
# Uses /dev/shm (tmpfs) for all database storage to eliminate disk I/O bottlenecks
# and measure raw relay performance.
#
# Usage: docker compose -f docker-compose.yml -f docker-compose.ramdisk.yml up
# Or via run-benchmark.sh --ramdisk
version: "3.8"
services:
# Next.orly.dev relay with Badger
next-orly-badger:
volumes:
- /dev/shm/benchmark/next-orly-badger:/data
# Next.orly.dev relay with DGraph
next-orly-dgraph:
volumes:
- /dev/shm/benchmark/next-orly-dgraph:/data
# DGraph Zero - cluster coordinator
dgraph-zero:
volumes:
- /dev/shm/benchmark/dgraph-zero:/data
# DGraph Alpha - data node
dgraph-alpha:
volumes:
- /dev/shm/benchmark/dgraph-alpha:/data
# Next.orly.dev relay with Neo4j
next-orly-neo4j:
volumes:
- /dev/shm/benchmark/next-orly-neo4j:/data
# Neo4j database
neo4j:
volumes:
- /dev/shm/benchmark/neo4j:/data
- /dev/shm/benchmark/neo4j-logs:/logs
# Khatru with SQLite
khatru-sqlite:
volumes:
- /dev/shm/benchmark/khatru-sqlite:/data
# Khatru with Badger
khatru-badger:
volumes:
- /dev/shm/benchmark/khatru-badger:/data
# Relayer basic example
relayer-basic:
volumes:
- /dev/shm/benchmark/relayer-basic:/data
# Strfry
strfry:
volumes:
- /dev/shm/benchmark/strfry:/data
- ./configs/strfry.conf:/etc/strfry.conf
# Nostr-rs-relay
nostr-rs-relay:
volumes:
- /dev/shm/benchmark/nostr-rs-relay:/data
- ./configs/config.toml:/app/config.toml
# Rely-SQLite relay
rely-sqlite:
volumes:
- /dev/shm/benchmark/rely-sqlite:/data
# PostgreSQL for relayer-basic
postgres:
volumes:
- /dev/shm/benchmark/postgres:/var/lib/postgresql/data


@@ -0,0 +1,194 @@
================================================================
NOSTR RELAY BENCHMARK AGGREGATE REPORT
================================================================
Generated: 2025-11-26T08:04:35+00:00
Benchmark Configuration:
Events per test: 50000
Concurrent workers: 24
Test duration: 60s
Relays tested: 9
================================================================
SUMMARY BY RELAY
================================================================
Relay: rely-sqlite
----------------------------------------
Status: COMPLETED
Events/sec: 16298.40
Events/sec: 6150.97
Events/sec: 16298.40
Success Rate: 100.0%
Success Rate: 100.0%
Success Rate: 100.0%
Avg Latency: 1.360569ms
Bottom 10% Avg Latency: 746.704µs
Avg Latency: 1.411735ms
P95 Latency: 2.160818ms
P95 Latency: 2.29313ms
P95 Latency: 916.446µs
Relay: next-orly-badger
----------------------------------------
Status: COMPLETED
Events/sec: 16698.91
Events/sec: 6011.59
Events/sec: 16698.91
Success Rate: 100.0%
Success Rate: 100.0%
Success Rate: 100.0%
Avg Latency: 1.331911ms
Bottom 10% Avg Latency: 766.682µs
Avg Latency: 1.496861ms
P95 Latency: 2.019719ms
P95 Latency: 2.715024ms
P95 Latency: 914.112µs
Relay: next-orly-dgraph
----------------------------------------
Status: COMPLETED
Events/sec: 14573.58
Events/sec: 6072.22
Events/sec: 14573.58
Success Rate: 100.0%
Success Rate: 100.0%
Success Rate: 100.0%
Avg Latency: 1.571025ms
Bottom 10% Avg Latency: 802.953µs
Avg Latency: 1.454825ms
P95 Latency: 2.610305ms
P95 Latency: 2.541414ms
P95 Latency: 902.751µs
Relay: next-orly-neo4j
----------------------------------------
Status: COMPLETED
Events/sec: 16594.60
Events/sec: 6139.73
Events/sec: 16594.60
Success Rate: 100.0%
Success Rate: 100.0%
Success Rate: 100.0%
Avg Latency: 1.341265ms
Bottom 10% Avg Latency: 760.268µs
Avg Latency: 1.417529ms
P95 Latency: 2.068012ms
P95 Latency: 2.279114ms
P95 Latency: 893.313µs
Relay: khatru-sqlite
----------------------------------------
Status: COMPLETED
Events/sec: 16775.48
Events/sec: 6077.32
Events/sec: 16775.48
Success Rate: 100.0%
Success Rate: 100.0%
Success Rate: 100.0%
Avg Latency: 1.316097ms
Bottom 10% Avg Latency: 743.925µs
Avg Latency: 1.448816ms
P95 Latency: 2.019999ms
P95 Latency: 2.415349ms
P95 Latency: 915.807µs
Relay: khatru-badger
----------------------------------------
Status: COMPLETED
Events/sec: 14573.64
Events/sec: 6123.62
Events/sec: 14573.64
Success Rate: 100.0%
Success Rate: 100.0%
Success Rate: 100.0%
Avg Latency: 1.582659ms
Bottom 10% Avg Latency: 849.196µs
Avg Latency: 1.42045ms
P95 Latency: 2.584156ms
P95 Latency: 2.297743ms
P95 Latency: 911.2µs
Relay: relayer-basic
----------------------------------------
Status: COMPLETED
Events/sec: 16103.85
Events/sec: 6038.31
Events/sec: 16103.85
Success Rate: 100.0%
Success Rate: 100.0%
Success Rate: 100.0%
Avg Latency: 1.401051ms
Bottom 10% Avg Latency: 788.805µs
Avg Latency: 1.501362ms
P95 Latency: 2.187347ms
P95 Latency: 2.477719ms
P95 Latency: 920.8µs
Relay: strfry
----------------------------------------
Status: COMPLETED
Events/sec: 16207.30
Events/sec: 6075.12
Events/sec: 16207.30
Success Rate: 100.0%
Success Rate: 100.0%
Success Rate: 100.0%
Avg Latency: 1.381579ms
Bottom 10% Avg Latency: 760.474µs
Avg Latency: 1.45496ms
P95 Latency: 2.15555ms
P95 Latency: 2.414222ms
P95 Latency: 907.647µs
Relay: nostr-rs-relay
----------------------------------------
Status: COMPLETED
Events/sec: 15751.45
Events/sec: 6163.36
Events/sec: 15751.45
Success Rate: 100.0%
Success Rate: 100.0%
Success Rate: 100.0%
Avg Latency: 1.442411ms
Bottom 10% Avg Latency: 812.222µs
Avg Latency: 1.414472ms
P95 Latency: 2.22848ms
P95 Latency: 2.267184ms
P95 Latency: 921.434µs
================================================================
DETAILED RESULTS
================================================================
Individual relay reports are available in:
- /reports/run_20251126_073410/khatru-badger_results.txt
- /reports/run_20251126_073410/khatru-sqlite_results.txt
- /reports/run_20251126_073410/next-orly-badger_results.txt
- /reports/run_20251126_073410/next-orly-dgraph_results.txt
- /reports/run_20251126_073410/next-orly-neo4j_results.txt
- /reports/run_20251126_073410/nostr-rs-relay_results.txt
- /reports/run_20251126_073410/relayer-basic_results.txt
- /reports/run_20251126_073410/rely-sqlite_results.txt
- /reports/run_20251126_073410/strfry_results.txt
================================================================
BENCHMARK COMPARISON TABLE
================================================================
Relay Status Peak Tput/s Avg Latency Success Rate
---- ------ ----------- ----------- ------------
rely-sqlite OK 16298.40 1.360569ms 100.0%
next-orly-badger OK 16698.91 1.331911ms 100.0%
next-orly-dgraph OK 14573.58 1.571025ms 100.0%
next-orly-neo4j OK 16594.60 1.341265ms 100.0%
khatru-sqlite OK 16775.48 1.316097ms 100.0%
khatru-badger OK 14573.64 1.582659ms 100.0%
relayer-basic OK 16103.85 1.401051ms 100.0%
strfry OK 16207.30 1.381579ms 100.0%
nostr-rs-relay OK 15751.45 1.442411ms 100.0%
================================================================
End of Report
================================================================


@@ -0,0 +1,197 @@
Starting Nostr Relay Benchmark (Badger Backend)
Data Directory: /tmp/benchmark_khatru-badger_8
Events: 50000, Workers: 24, Duration: 1m0s
1764143463950443 migrating to version 1... /build/pkg/database/migrations.go:66
1764143463950524 migrating to version 2... /build/pkg/database/migrations.go:73
1764143463950554 migrating to version 3... /build/pkg/database/migrations.go:80
1764143463950562 cleaning up ephemeral events (kinds 20000-29999)... /build/pkg/database/migrations.go:294
1764143463950601 cleaned up 0 ephemeral events from database /build/pkg/database/migrations.go:339
1764143463950677 migrating to version 4... /build/pkg/database/migrations.go:87
1764143463950693 converting events to optimized inline storage (Reiser4 optimization)... /build/pkg/database/migrations.go:347
1764143463950707 found 0 events to convert (0 regular, 0 replaceable, 0 addressable) /build/pkg/database/migrations.go:436
1764143463950715 migration complete: converted 0 events to optimized inline storage, deleted 0 old keys /build/pkg/database/migrations.go:545
1764143463950741 migrating to version 5... /build/pkg/database/migrations.go:94
1764143463950748 re-encoding events with optimized tag binary format... /build/pkg/database/migrations.go:552
1764143463950772 found 0 events with e/p tags to re-encode /build/pkg/database/migrations.go:639
1764143463950779 no events need re-encoding /build/pkg/database/migrations.go:642
╔════════════════════════════════════════════════════════╗
║ BADGER BACKEND BENCHMARK SUITE ║
╚════════════════════════════════════════════════════════╝
=== Starting Badger benchmark ===
RunPeakThroughputTest (Badger)..
=== Peak Throughput Test ===
2025/11/26 07:51:03 INFO: Successfully loaded embedded libsecp256k1 v5.0.0 from /tmp/orly-libsecp256k1/libsecp256k1.so
Events saved: 50000/50000 (100.0%), errors: 0
Duration: 3.430851381s
Events/sec: 14573.64
Avg latency: 1.582659ms
P90 latency: 2.208413ms
P95 latency: 2.584156ms
P99 latency: 3.989364ms
Bottom 10% Avg latency: 849.196µs
Wiping database between tests...
RunBurstPatternTest (Badger)..
=== Burst Pattern Test ===
Burst completed: 5000 events in 327.135579ms
Burst completed: 5000 events in 347.321999ms
Burst completed: 5000 events in 293.638919ms
Burst completed: 5000 events in 315.213974ms
Burst completed: 5000 events in 293.822691ms
Burst completed: 5000 events in 393.17551ms
Burst completed: 5000 events in 317.689223ms
Burst completed: 5000 events in 283.629668ms
Burst completed: 5000 events in 306.891378ms
Burst completed: 5000 events in 281.684719ms
Burst test completed: 50000 events in 8.165107452s, errors: 0
Events/sec: 6123.62
Wiping database between tests...
RunMixedReadWriteTest (Badger)..
=== Mixed Read/Write Test ===
Generating 1000 unique synthetic events (minimum 300 bytes each)...
Generated 1000 events:
Average content size: 312 bytes
All events are unique (incremental timestamps)
All events are properly signed
Pre-populating database for read tests...
Generating 50000 unique synthetic events (minimum 300 bytes each)...
Generated 50000 events:
Average content size: 314 bytes
All events are unique (incremental timestamps)
All events are properly signed
Mixed test completed: 25000 writes, 25000 reads in 24.414376807s
Combined ops/sec: 2047.97
Wiping database between tests...
RunQueryTest (Badger)..
=== Query Test ===
Generating 10000 unique synthetic events (minimum 300 bytes each)...
Generated 10000 events:
Average content size: 313 bytes
All events are unique (incremental timestamps)
All events are properly signed
Pre-populating database with 10000 events for query tests...
Query test completed: 367781 queries in 1m0.004424256s
Queries/sec: 6129.23
Avg query latency: 1.861418ms
P95 query latency: 7.652288ms
P99 query latency: 11.670769ms
Wiping database between tests...
RunConcurrentQueryStoreTest (Badger)..
=== Concurrent Query/Store Test ===
Generating 5000 unique synthetic events (minimum 300 bytes each)...
Generated 5000 events:
Average content size: 313 bytes
All events are unique (incremental timestamps)
All events are properly signed
Pre-populating database with 5000 events for concurrent query/store test...
Generating 50000 unique synthetic events (minimum 300 bytes each)...
Generated 50000 events:
Average content size: 314 bytes
All events are unique (incremental timestamps)
All events are properly signed
Concurrent test completed: 307708 operations (257708 queries, 50000 writes) in 1m0.003628582s
Operations/sec: 5128.16
Avg latency: 1.520953ms
Avg query latency: 1.503959ms
Avg write latency: 1.608546ms
P95 latency: 3.958904ms
P99 latency: 6.227011ms
=== Badger benchmark completed ===
================================================================================
BENCHMARK REPORT
================================================================================
Test: Peak Throughput
Duration: 3.430851381s
Total Events: 50000
Events/sec: 14573.64
Success Rate: 100.0%
Concurrent Workers: 24
Memory Used: 102 MB
Avg Latency: 1.582659ms
P90 Latency: 2.208413ms
P95 Latency: 2.584156ms
P99 Latency: 3.989364ms
Bottom 10% Avg Latency: 849.196µs
----------------------------------------
Test: Burst Pattern
Duration: 8.165107452s
Total Events: 50000
Events/sec: 6123.62
Success Rate: 100.0%
Concurrent Workers: 24
Memory Used: 211 MB
Avg Latency: 1.42045ms
P90 Latency: 1.976894ms
P95 Latency: 2.297743ms
P99 Latency: 3.397761ms
Bottom 10% Avg Latency: 671.897µs
----------------------------------------
Test: Mixed Read/Write
Duration: 24.414376807s
Total Events: 50000
Events/sec: 2047.97
Success Rate: 100.0%
Concurrent Workers: 24
Memory Used: 134 MB
Avg Latency: 390.225µs
P90 Latency: 811.651µs
P95 Latency: 911.2µs
P99 Latency: 1.140536ms
Bottom 10% Avg Latency: 1.056491ms
----------------------------------------
Test: Query Performance
Duration: 1m0.004424256s
Total Events: 367781
Events/sec: 6129.23
Success Rate: 100.0%
Concurrent Workers: 24
Memory Used: 113 MB
Avg Latency: 1.861418ms
P90 Latency: 5.800639ms
P95 Latency: 7.652288ms
P99 Latency: 11.670769ms
Bottom 10% Avg Latency: 8.426888ms
----------------------------------------
Test: Concurrent Query/Store
Duration: 1m0.003628582s
Total Events: 307708
Events/sec: 5128.16
Success Rate: 100.0%
Concurrent Workers: 24
Memory Used: 100 MB
Avg Latency: 1.520953ms
P90 Latency: 3.075583ms
P95 Latency: 3.958904ms
P99 Latency: 6.227011ms
Bottom 10% Avg Latency: 4.506519ms
----------------------------------------
Report saved to: /tmp/benchmark_khatru-badger_8/benchmark_report.txt
AsciiDoc report saved to: /tmp/benchmark_khatru-badger_8/benchmark_report.adoc
RELAY_NAME: khatru-badger
RELAY_URL: ws://khatru-badger:3334
TEST_TIMESTAMP: 2025-11-26T07:54:21+00:00
BENCHMARK_CONFIG:
Events: 50000
Workers: 24
Duration: 60s


@@ -0,0 +1,197 @@
Starting Nostr Relay Benchmark (Badger Backend)
Data Directory: /tmp/benchmark_khatru-sqlite_8
Events: 50000, Workers: 24, Duration: 1m0s
1764143261406084 migrating to version 1... /build/pkg/database/migrations.go:66
1764143261406169 migrating to version 2... /build/pkg/database/migrations.go:73
1764143261406201 migrating to version 3... /build/pkg/database/migrations.go:80
1764143261406210 cleaning up ephemeral events (kinds 20000-29999)... /build/pkg/database/migrations.go:294
1764143261406219 cleaned up 0 ephemeral events from database /build/pkg/database/migrations.go:339
1764143261406234 migrating to version 4... /build/pkg/database/migrations.go:87
1764143261406240 converting events to optimized inline storage (Reiser4 optimization)... /build/pkg/database/migrations.go:347
1764143261406256 found 0 events to convert (0 regular, 0 replaceable, 0 addressable) /build/pkg/database/migrations.go:436
1764143261406263 migration complete: converted 0 events to optimized inline storage, deleted 0 old keys /build/pkg/database/migrations.go:545
1764143261406285 migrating to version 5... /build/pkg/database/migrations.go:94
1764143261406291 re-encoding events with optimized tag binary format... /build/pkg/database/migrations.go:552
1764143261406310 found 0 events with e/p tags to re-encode /build/pkg/database/migrations.go:639
1764143261406315 no events need re-encoding /build/pkg/database/migrations.go:642
╔════════════════════════════════════════════════════════╗
║ BADGER BACKEND BENCHMARK SUITE ║
╚════════════════════════════════════════════════════════╝
=== Starting Badger benchmark ===
RunPeakThroughputTest (Badger)..
=== Peak Throughput Test ===
2025/11/26 07:47:41 INFO: Successfully loaded embedded libsecp256k1 v5.0.0 from /tmp/orly-libsecp256k1/libsecp256k1.so
Events saved: 50000/50000 (100.0%), errors: 0
Duration: 2.980541518s
Events/sec: 16775.48
Avg latency: 1.316097ms
P90 latency: 1.75215ms
P95 latency: 2.019999ms
P99 latency: 2.884086ms
Bottom 10% Avg latency: 743.925µs
Wiping database between tests...
RunBurstPatternTest (Badger)..
=== Burst Pattern Test ===
Burst completed: 5000 events in 294.559368ms
Burst completed: 5000 events in 338.351868ms
Burst completed: 5000 events in 289.64343ms
Burst completed: 5000 events in 418.427743ms
Burst completed: 5000 events in 337.294837ms
Burst completed: 5000 events in 359.624702ms
Burst completed: 5000 events in 307.791949ms
Burst completed: 5000 events in 284.861295ms
Burst completed: 5000 events in 314.638569ms
Burst completed: 5000 events in 274.271908ms
Burst test completed: 50000 events in 8.227316527s, errors: 0
Events/sec: 6077.32
Wiping database between tests...
RunMixedReadWriteTest (Badger)..
=== Mixed Read/Write Test ===
Generating 1000 unique synthetic events (minimum 300 bytes each)...
Generated 1000 events:
Average content size: 312 bytes
All events are unique (incremental timestamps)
All events are properly signed
Pre-populating database for read tests...
Generating 50000 unique synthetic events (minimum 300 bytes each)...
Generated 50000 events:
Average content size: 314 bytes
All events are unique (incremental timestamps)
All events are properly signed
Mixed test completed: 25000 writes, 25000 reads in 24.361629597s
Combined ops/sec: 2052.41
Wiping database between tests...
RunQueryTest (Badger)..
=== Query Test ===
Generating 10000 unique synthetic events (minimum 300 bytes each)...
Generated 10000 events:
Average content size: 313 bytes
All events are unique (incremental timestamps)
All events are properly signed
Pre-populating database with 10000 events for query tests...
Query test completed: 369485 queries in 1m0.007598809s
Queries/sec: 6157.30
Avg query latency: 1.851496ms
P95 query latency: 7.629059ms
P99 query latency: 11.579084ms
Wiping database between tests...
RunConcurrentQueryStoreTest (Badger)..
=== Concurrent Query/Store Test ===
Generating 5000 unique synthetic events (minimum 300 bytes each)...
Generated 5000 events:
Average content size: 313 bytes
All events are unique (incremental timestamps)
All events are properly signed
Pre-populating database with 5000 events for concurrent query/store test...
Generating 50000 unique synthetic events (minimum 300 bytes each)...
Generated 50000 events:
Average content size: 314 bytes
All events are unique (incremental timestamps)
All events are properly signed
Concurrent test completed: 307591 operations (257591 queries, 50000 writes) in 1m0.003842232s
Operations/sec: 5126.19
Avg latency: 1.567905ms
Avg query latency: 1.520146ms
Avg write latency: 1.813947ms
P95 latency: 4.080054ms
P99 latency: 7.252873ms
=== Badger benchmark completed ===
================================================================================
BENCHMARK REPORT
================================================================================
Test: Peak Throughput
Duration: 2.980541518s
Total Events: 50000
Events/sec: 16775.48
Success Rate: 100.0%
Concurrent Workers: 24
Memory Used: 205 MB
Avg Latency: 1.316097ms
P90 Latency: 1.75215ms
P95 Latency: 2.019999ms
P99 Latency: 2.884086ms
Bottom 10% Avg Latency: 743.925µs
----------------------------------------
Test: Burst Pattern
Duration: 8.227316527s
Total Events: 50000
Events/sec: 6077.32
Success Rate: 100.0%
Concurrent Workers: 24
Memory Used: 206 MB
Avg Latency: 1.448816ms
P90 Latency: 2.065115ms
P95 Latency: 2.415349ms
P99 Latency: 3.441514ms
Bottom 10% Avg Latency: 642.527µs
----------------------------------------
Test: Mixed Read/Write
Duration: 24.361629597s
Total Events: 50000
Events/sec: 2052.41
Success Rate: 100.0%
Concurrent Workers: 24
Memory Used: 170 MB
Avg Latency: 395.815µs
P90 Latency: 821.619µs
P95 Latency: 915.807µs
P99 Latency: 1.137015ms
Bottom 10% Avg Latency: 1.044106ms
----------------------------------------
Test: Query Performance
Duration: 1m0.007598809s
Total Events: 369485
Events/sec: 6157.30
Success Rate: 100.0%
Concurrent Workers: 24
Memory Used: 97 MB
Avg Latency: 1.851496ms
P90 Latency: 5.786274ms
P95 Latency: 7.629059ms
P99 Latency: 11.579084ms
Bottom 10% Avg Latency: 8.382865ms
----------------------------------------
Test: Concurrent Query/Store
Duration: 1m0.003842232s
Total Events: 307591
Events/sec: 5126.19
Success Rate: 100.0%
Concurrent Workers: 24
Memory Used: 143 MB
Avg Latency: 1.567905ms
P90 Latency: 3.141841ms
P95 Latency: 4.080054ms
P99 Latency: 7.252873ms
Bottom 10% Avg Latency: 4.875018ms
----------------------------------------
Report saved to: /tmp/benchmark_khatru-sqlite_8/benchmark_report.txt
AsciiDoc report saved to: /tmp/benchmark_khatru-sqlite_8/benchmark_report.adoc
RELAY_NAME: khatru-sqlite
RELAY_URL: ws://khatru-sqlite:3334
TEST_TIMESTAMP: 2025-11-26T07:50:58+00:00
BENCHMARK_CONFIG:
Events: 50000
Workers: 24
Duration: 60s


@@ -0,0 +1,197 @@
Starting Nostr Relay Benchmark (Badger Backend)
Data Directory: /tmp/benchmark_next-orly-badger_8
Events: 50000, Workers: 24, Duration: 1m0s
1764142653240629 migrating to version 1... /build/pkg/database/migrations.go:66
1764142653240705 migrating to version 2... /build/pkg/database/migrations.go:73
1764142653240726 migrating to version 3... /build/pkg/database/migrations.go:80
1764142653240732 cleaning up ephemeral events (kinds 20000-29999)... /build/pkg/database/migrations.go:294
1764142653240742 cleaned up 0 ephemeral events from database /build/pkg/database/migrations.go:339
1764142653240754 migrating to version 4... /build/pkg/database/migrations.go:87
1764142653240759 converting events to optimized inline storage (Reiser4 optimization)... /build/pkg/database/migrations.go:347
1764142653240772 found 0 events to convert (0 regular, 0 replaceable, 0 addressable) /build/pkg/database/migrations.go:436
1764142653240777 migration complete: converted 0 events to optimized inline storage, deleted 0 old keys /build/pkg/database/migrations.go:545
1764142653240794 migrating to version 5... /build/pkg/database/migrations.go:94
1764142653240799 re-encoding events with optimized tag binary format... /build/pkg/database/migrations.go:552
1764142653240815 found 0 events with e/p tags to re-encode /build/pkg/database/migrations.go:639
1764142653240820 no events need re-encoding /build/pkg/database/migrations.go:642
╔════════════════════════════════════════════════════════╗
║ BADGER BACKEND BENCHMARK SUITE ║
╚════════════════════════════════════════════════════════╝
=== Starting Badger benchmark ===
RunPeakThroughputTest (Badger)..
=== Peak Throughput Test ===
2025/11/26 07:37:33 INFO: Successfully loaded embedded libsecp256k1 v5.0.0 from /tmp/orly-libsecp256k1/libsecp256k1.so
Events saved: 50000/50000 (100.0%), errors: 0
Duration: 2.994207496s
Events/sec: 16698.91
Avg latency: 1.331911ms
P90 latency: 1.752681ms
P95 latency: 2.019719ms
P99 latency: 2.937258ms
Bottom 10% Avg latency: 766.682µs
Wiping database between tests...
RunBurstPatternTest (Badger)..
=== Burst Pattern Test ===
Burst completed: 5000 events in 296.493381ms
Burst completed: 5000 events in 346.037614ms
Burst completed: 5000 events in 295.42219ms
Burst completed: 5000 events in 310.553567ms
Burst completed: 5000 events in 290.939907ms
Burst completed: 5000 events in 586.599699ms
Burst completed: 5000 events in 331.078074ms
Burst completed: 5000 events in 266.026786ms
Burst completed: 5000 events in 305.143046ms
Burst completed: 5000 events in 283.61665ms
Burst test completed: 50000 events in 8.317273769s, errors: 0
Events/sec: 6011.59
Wiping database between tests...
RunMixedReadWriteTest (Badger)..
=== Mixed Read/Write Test ===
Generating 1000 unique synthetic events (minimum 300 bytes each)...
Generated 1000 events:
Average content size: 312 bytes
All events are unique (incremental timestamps)
All events are properly signed
Pre-populating database for read tests...
Generating 50000 unique synthetic events (minimum 300 bytes each)...
Generated 50000 events:
Average content size: 314 bytes
All events are unique (incremental timestamps)
All events are properly signed
Mixed test completed: 25000 writes, 25000 reads in 24.376567267s
Combined ops/sec: 2051.15
Wiping database between tests...
RunQueryTest (Badger)..
=== Query Test ===
Generating 10000 unique synthetic events (minimum 300 bytes each)...
Generated 10000 events:
Average content size: 313 bytes
All events are unique (incremental timestamps)
All events are properly signed
Pre-populating database with 10000 events for query tests...
Query test completed: 379823 queries in 1m0.005132427s
Queries/sec: 6329.84
Avg query latency: 1.793906ms
P95 query latency: 7.34021ms
P99 query latency: 11.188253ms
Wiping database between tests...
RunConcurrentQueryStoreTest (Badger)..
=== Concurrent Query/Store Test ===
Generating 5000 unique synthetic events (minimum 300 bytes each)...
Generated 5000 events:
Average content size: 313 bytes
All events are unique (incremental timestamps)
All events are properly signed
Pre-populating database with 5000 events for concurrent query/store test...
Generating 50000 unique synthetic events (minimum 300 bytes each)...
Generated 50000 events:
Average content size: 314 bytes
All events are unique (incremental timestamps)
All events are properly signed
Concurrent test completed: 311181 operations (261181 queries, 50000 writes) in 1m0.003287869s
Operations/sec: 5186.07
Avg latency: 1.534716ms
Avg query latency: 1.48944ms
Avg write latency: 1.771222ms
P95 latency: 3.923748ms
P99 latency: 6.879882ms
=== Badger benchmark completed ===
================================================================================
BENCHMARK REPORT
================================================================================
Test: Peak Throughput
Duration: 2.994207496s
Total Events: 50000
Events/sec: 16698.91
Success Rate: 100.0%
Concurrent Workers: 24
Memory Used: 91 MB
Avg Latency: 1.331911ms
P90 Latency: 1.752681ms
P95 Latency: 2.019719ms
P99 Latency: 2.937258ms
Bottom 10% Avg Latency: 766.682µs
----------------------------------------
Test: Burst Pattern
Duration: 8.317273769s
Total Events: 50000
Events/sec: 6011.59
Success Rate: 100.0%
Concurrent Workers: 24
Memory Used: 204 MB
Avg Latency: 1.496861ms
P90 Latency: 2.150147ms
P95 Latency: 2.715024ms
P99 Latency: 5.496937ms
Bottom 10% Avg Latency: 684.458µs
----------------------------------------
Test: Mixed Read/Write
Duration: 24.376567267s
Total Events: 50000
Events/sec: 2051.15
Success Rate: 100.0%
Concurrent Workers: 24
Memory Used: 194 MB
Avg Latency: 396.054µs
P90 Latency: 819.913µs
P95 Latency: 914.112µs
P99 Latency: 1.134723ms
Bottom 10% Avg Latency: 1.077234ms
----------------------------------------
Test: Query Performance
Duration: 1m0.005132427s
Total Events: 379823
Events/sec: 6329.84
Success Rate: 100.0%
Concurrent Workers: 24
Memory Used: 96 MB
Avg Latency: 1.793906ms
P90 Latency: 5.558514ms
P95 Latency: 7.34021ms
P99 Latency: 11.188253ms
Bottom 10% Avg Latency: 8.06994ms
----------------------------------------
Test: Concurrent Query/Store
Duration: 1m0.003287869s
Total Events: 311181
Events/sec: 5186.07
Success Rate: 100.0%
Concurrent Workers: 24
Memory Used: 141 MB
Avg Latency: 1.534716ms
P90 Latency: 3.051195ms
P95 Latency: 3.923748ms
P99 Latency: 6.879882ms
Bottom 10% Avg Latency: 4.67505ms
----------------------------------------
Report saved to: /tmp/benchmark_next-orly-badger_8/benchmark_report.txt
AsciiDoc report saved to: /tmp/benchmark_next-orly-badger_8/benchmark_report.adoc
RELAY_NAME: next-orly-badger
RELAY_URL: ws://next-orly-badger:8080
TEST_TIMESTAMP: 2025-11-26T07:40:50+00:00
BENCHMARK_CONFIG:
Events: 50000
Workers: 24
Duration: 60s


@@ -0,0 +1,197 @@
Starting Nostr Relay Benchmark (Badger Backend)
Data Directory: /tmp/benchmark_next-orly-dgraph_8
Events: 50000, Workers: 24, Duration: 1m0s
1764142855890301 migrating to version 1... /build/pkg/database/migrations.go:66
1764142855890401 migrating to version 2... /build/pkg/database/migrations.go:73
1764142855890440 migrating to version 3... /build/pkg/database/migrations.go:80
1764142855890449 cleaning up ephemeral events (kinds 20000-29999)... /build/pkg/database/migrations.go:294
1764142855890460 cleaned up 0 ephemeral events from database /build/pkg/database/migrations.go:339
1764142855890476 migrating to version 4... /build/pkg/database/migrations.go:87
1764142855890481 converting events to optimized inline storage (Reiser4 optimization)... /build/pkg/database/migrations.go:347
1764142855890495 found 0 events to convert (0 regular, 0 replaceable, 0 addressable) /build/pkg/database/migrations.go:436
1764142855890504 migration complete: converted 0 events to optimized inline storage, deleted 0 old keys /build/pkg/database/migrations.go:545
1764142855890528 migrating to version 5... /build/pkg/database/migrations.go:94
1764142855890536 re-encoding events with optimized tag binary format... /build/pkg/database/migrations.go:552
1764142855890559 found 0 events with e/p tags to re-encode /build/pkg/database/migrations.go:639
1764142855890568 no events need re-encoding /build/pkg/database/migrations.go:642
╔════════════════════════════════════════════════════════╗
║ BADGER BACKEND BENCHMARK SUITE ║
╚════════════════════════════════════════════════════════╝
=== Starting Badger benchmark ===
RunPeakThroughputTest (Badger)..
=== Peak Throughput Test ===
2025/11/26 07:40:55 INFO: Successfully loaded embedded libsecp256k1 v5.0.0 from /tmp/orly-libsecp256k1/libsecp256k1.so
Events saved: 50000/50000 (100.0%), errors: 0
Duration: 3.430865656s
Events/sec: 14573.58
Avg latency: 1.571025ms
P90 latency: 2.249507ms
P95 latency: 2.610305ms
P99 latency: 3.786808ms
Bottom 10% Avg latency: 802.953µs
Wiping database between tests...
RunBurstPatternTest (Badger)..
=== Burst Pattern Test ===
Burst completed: 5000 events in 413.260391ms
Burst completed: 5000 events in 416.696811ms
Burst completed: 5000 events in 281.278288ms
Burst completed: 5000 events in 305.471838ms
Burst completed: 5000 events in 284.063576ms
Burst completed: 5000 events in 366.197285ms
Burst completed: 5000 events in 310.188337ms
Burst completed: 5000 events in 270.424131ms
Burst completed: 5000 events in 313.061864ms
Burst completed: 5000 events in 268.841724ms
Burst test completed: 50000 events in 8.234222191s, errors: 0
Events/sec: 6072.22
Wiping database between tests...
RunMixedReadWriteTest (Badger)..
=== Mixed Read/Write Test ===
Generating 1000 unique synthetic events (minimum 300 bytes each)...
Generated 1000 events:
Average content size: 312 bytes
All events are unique (incremental timestamps)
All events are properly signed
Pre-populating database for read tests...
Generating 50000 unique synthetic events (minimum 300 bytes each)...
Generated 50000 events:
Average content size: 314 bytes
All events are unique (incremental timestamps)
All events are properly signed
Mixed test completed: 25000 writes, 25000 reads in 24.374242444s
Combined ops/sec: 2051.35
Wiping database between tests...
RunQueryTest (Badger)..
=== Query Test ===
Generating 10000 unique synthetic events (minimum 300 bytes each)...
Generated 10000 events:
Average content size: 313 bytes
All events are unique (incremental timestamps)
All events are properly signed
Pre-populating database with 10000 events for query tests...
Query test completed: 363398 queries in 1m0.008386122s
Queries/sec: 6055.79
Avg query latency: 1.896628ms
P95 query latency: 7.915977ms
P99 query latency: 12.369055ms
Wiping database between tests...
RunConcurrentQueryStoreTest (Badger)..
=== Concurrent Query/Store Test ===
Generating 5000 unique synthetic events (minimum 300 bytes each)...
Generated 5000 events:
Average content size: 313 bytes
All events are unique (incremental timestamps)
All events are properly signed
Pre-populating database with 5000 events for concurrent query/store test...
Generating 50000 unique synthetic events (minimum 300 bytes each)...
Generated 50000 events:
Average content size: 314 bytes
All events are unique (incremental timestamps)
All events are properly signed
Concurrent test completed: 310491 operations (260491 queries, 50000 writes) in 1m0.002972174s
Operations/sec: 5174.59
Avg latency: 1.519446ms
Avg query latency: 1.48579ms
Avg write latency: 1.694789ms
P95 latency: 3.910804ms
P99 latency: 6.189507ms
=== Badger benchmark completed ===
================================================================================
BENCHMARK REPORT
================================================================================
Test: Peak Throughput
Duration: 3.430865656s
Total Events: 50000
Events/sec: 14573.58
Success Rate: 100.0%
Concurrent Workers: 24
Memory Used: 128 MB
Avg Latency: 1.571025ms
P90 Latency: 2.249507ms
P95 Latency: 2.610305ms
P99 Latency: 3.786808ms
Bottom 10% Avg Latency: 802.953µs
----------------------------------------
Test: Burst Pattern
Duration: 8.234222191s
Total Events: 50000
Events/sec: 6072.22
Success Rate: 100.0%
Concurrent Workers: 24
Memory Used: 195 MB
Avg Latency: 1.454825ms
P90 Latency: 2.128246ms
P95 Latency: 2.541414ms
P99 Latency: 3.875045ms
Bottom 10% Avg Latency: 688.084µs
----------------------------------------
Test: Mixed Read/Write
Duration: 24.374242444s
Total Events: 50000
Events/sec: 2051.35
Success Rate: 100.0%
Concurrent Workers: 24
Memory Used: 125 MB
Avg Latency: 390.403µs
P90 Latency: 807.74µs
P95 Latency: 902.751µs
P99 Latency: 1.111889ms
Bottom 10% Avg Latency: 1.037165ms
----------------------------------------
Test: Query Performance
Duration: 1m0.008386122s
Total Events: 363398
Events/sec: 6055.79
Success Rate: 100.0%
Concurrent Workers: 24
Memory Used: 149 MB
Avg Latency: 1.896628ms
P90 Latency: 5.916526ms
P95 Latency: 7.915977ms
P99 Latency: 12.369055ms
Bottom 10% Avg Latency: 8.802319ms
----------------------------------------
Test: Concurrent Query/Store
Duration: 1m0.002972174s
Total Events: 310491
Events/sec: 5174.59
Success Rate: 100.0%
Concurrent Workers: 24
Memory Used: 156 MB
Avg Latency: 1.519446ms
P90 Latency: 3.03826ms
P95 Latency: 3.910804ms
P99 Latency: 6.189507ms
Bottom 10% Avg Latency: 4.473046ms
----------------------------------------
Report saved to: /tmp/benchmark_next-orly-dgraph_8/benchmark_report.txt
AsciiDoc report saved to: /tmp/benchmark_next-orly-dgraph_8/benchmark_report.adoc
RELAY_NAME: next-orly-dgraph
RELAY_URL: ws://next-orly-dgraph:8080
TEST_TIMESTAMP: 2025-11-26T07:44:13+00:00
BENCHMARK_CONFIG:
Events: 50000
Workers: 24
Duration: 60s


@@ -0,0 +1,197 @@
Starting Nostr Relay Benchmark (Badger Backend)
Data Directory: /tmp/benchmark_next-orly-neo4j_8
Events: 50000, Workers: 24, Duration: 1m0s
1764143058917148 migrating to version 1... /build/pkg/database/migrations.go:66
1764143058917210 migrating to version 2... /build/pkg/database/migrations.go:73
1764143058917229 migrating to version 3... /build/pkg/database/migrations.go:80
1764143058917234 cleaning up ephemeral events (kinds 20000-29999)... /build/pkg/database/migrations.go:294
1764143058917243 cleaned up 0 ephemeral events from database /build/pkg/database/migrations.go:339
1764143058917256 migrating to version 4... /build/pkg/database/migrations.go:87
1764143058917261 converting events to optimized inline storage (Reiser4 optimization)... /build/pkg/database/migrations.go:347
1764143058917274 found 0 events to convert (0 regular, 0 replaceable, 0 addressable) /build/pkg/database/migrations.go:436
1764143058917281 migration complete: converted 0 events to optimized inline storage, deleted 0 old keys /build/pkg/database/migrations.go:545
1764143058917296 migrating to version 5... /build/pkg/database/migrations.go:94
1764143058917301 re-encoding events with optimized tag binary format... /build/pkg/database/migrations.go:552
1764143058917316 found 0 events with e/p tags to re-encode /build/pkg/database/migrations.go:639
1764143058917321 no events need re-encoding /build/pkg/database/migrations.go:642
╔════════════════════════════════════════════════════════╗
║ BADGER BACKEND BENCHMARK SUITE ║
╚════════════════════════════════════════════════════════╝
=== Starting Badger benchmark ===
RunPeakThroughputTest (Badger)..
=== Peak Throughput Test ===
2025/11/26 07:44:18 INFO: Successfully loaded embedded libsecp256k1 v5.0.0 from /tmp/orly-libsecp256k1/libsecp256k1.so
Events saved: 50000/50000 (100.0%), errors: 0
Duration: 3.013027595s
Events/sec: 16594.60
Avg latency: 1.341265ms
P90 latency: 1.798828ms
P95 latency: 2.068012ms
P99 latency: 2.883646ms
Bottom 10% Avg latency: 760.268µs
Wiping database between tests...
RunBurstPatternTest (Badger)..
=== Burst Pattern Test ===
Burst completed: 5000 events in 286.776937ms
Burst completed: 5000 events in 322.103436ms
Burst completed: 5000 events in 287.074253ms
Burst completed: 5000 events in 307.39847ms
Burst completed: 5000 events in 289.282402ms
Burst completed: 5000 events in 351.106806ms
Burst completed: 5000 events in 307.616957ms
Burst completed: 5000 events in 281.010206ms
Burst completed: 5000 events in 387.29128ms
Burst completed: 5000 events in 317.867754ms
Burst test completed: 50000 events in 8.143674752s, errors: 0
Events/sec: 6139.73
Wiping database between tests...
RunMixedReadWriteTest (Badger)..
=== Mixed Read/Write Test ===
Generating 1000 unique synthetic events (minimum 300 bytes each)...
Generated 1000 events:
Average content size: 312 bytes
All events are unique (incremental timestamps)
All events are properly signed
Pre-populating database for read tests...
Generating 50000 unique synthetic events (minimum 300 bytes each)...
Generated 50000 events:
Average content size: 314 bytes
All events are unique (incremental timestamps)
All events are properly signed
Mixed test completed: 25000 writes, 25000 reads in 24.392570025s
Combined ops/sec: 2049.80
Wiping database between tests...
RunQueryTest (Badger)..
=== Query Test ===
Generating 10000 unique synthetic events (minimum 300 bytes each)...
Generated 10000 events:
Average content size: 313 bytes
All events are unique (incremental timestamps)
All events are properly signed
Pre-populating database with 10000 events for query tests...
Query test completed: 381354 queries in 1m0.004315541s
Queries/sec: 6355.44
Avg query latency: 1.774601ms
P95 query latency: 7.270517ms
P99 query latency: 11.058437ms
Wiping database between tests...
RunConcurrentQueryStoreTest (Badger)..
=== Concurrent Query/Store Test ===
Generating 5000 unique synthetic events (minimum 300 bytes each)...
Generated 5000 events:
Average content size: 313 bytes
All events are unique (incremental timestamps)
All events are properly signed
Pre-populating database with 5000 events for concurrent query/store test...
Generating 50000 unique synthetic events (minimum 300 bytes each)...
Generated 50000 events:
Average content size: 314 bytes
All events are unique (incremental timestamps)
All events are properly signed
Concurrent test completed: 311298 operations (261298 queries, 50000 writes) in 1m0.002804902s
Operations/sec: 5188.06
Avg latency: 1.525543ms
Avg query latency: 1.487415ms
Avg write latency: 1.724798ms
P95 latency: 3.973942ms
P99 latency: 6.346957ms
=== Badger benchmark completed ===
================================================================================
BENCHMARK REPORT
================================================================================
Test: Peak Throughput
Duration: 3.013027595s
Total Events: 50000
Events/sec: 16594.60
Success Rate: 100.0%
Concurrent Workers: 24
Memory Used: 135 MB
Avg Latency: 1.341265ms
P90 Latency: 1.798828ms
P95 Latency: 2.068012ms
P99 Latency: 2.883646ms
Bottom 10% Avg Latency: 760.268µs
----------------------------------------
Test: Burst Pattern
Duration: 8.143674752s
Total Events: 50000
Events/sec: 6139.73
Success Rate: 100.0%
Concurrent Workers: 24
Memory Used: 210 MB
Avg Latency: 1.417529ms
P90 Latency: 1.96735ms
P95 Latency: 2.279114ms
P99 Latency: 3.319737ms
Bottom 10% Avg Latency: 689.835µs
----------------------------------------
Test: Mixed Read/Write
Duration: 24.392570025s
Total Events: 50000
Events/sec: 2049.80
Success Rate: 100.0%
Concurrent Workers: 24
Memory Used: 194 MB
Avg Latency: 389.458µs
P90 Latency: 807.449µs
P95 Latency: 893.313µs
P99 Latency: 1.078376ms
Bottom 10% Avg Latency: 1.008354ms
----------------------------------------
Test: Query Performance
Duration: 1m0.004315541s
Total Events: 381354
Events/sec: 6355.44
Success Rate: 100.0%
Concurrent Workers: 24
Memory Used: 149 MB
Avg Latency: 1.774601ms
P90 Latency: 5.479193ms
P95 Latency: 7.270517ms
P99 Latency: 11.058437ms
Bottom 10% Avg Latency: 7.987ms
----------------------------------------
Test: Concurrent Query/Store
Duration: 1m0.002804902s
Total Events: 311298
Events/sec: 5188.06
Success Rate: 100.0%
Concurrent Workers: 24
Memory Used: 91 MB
Avg Latency: 1.525543ms
P90 Latency: 3.063464ms
P95 Latency: 3.973942ms
P99 Latency: 6.346957ms
Bottom 10% Avg Latency: 4.524119ms
----------------------------------------
Report saved to: /tmp/benchmark_next-orly-neo4j_8/benchmark_report.txt
AsciiDoc report saved to: /tmp/benchmark_next-orly-neo4j_8/benchmark_report.adoc
RELAY_NAME: next-orly-neo4j
RELAY_URL: ws://next-orly-neo4j:8080
TEST_TIMESTAMP: 2025-11-26T07:47:36+00:00
BENCHMARK_CONFIG:
Events: 50000
Workers: 24
Duration: 60s


@@ -0,0 +1,197 @@
Starting Nostr Relay Benchmark (Badger Backend)
Data Directory: /tmp/benchmark_nostr-rs-relay_8
Events: 50000, Workers: 24, Duration: 1m0s
1764144072428228 migrating to version 1... /build/pkg/database/migrations.go:66
1764144072428311 migrating to version 2... /build/pkg/database/migrations.go:73
1764144072428332 migrating to version 3... /build/pkg/database/migrations.go:80
1764144072428337 cleaning up ephemeral events (kinds 20000-29999)... /build/pkg/database/migrations.go:294
1764144072428348 cleaned up 0 ephemeral events from database /build/pkg/database/migrations.go:339
1764144072428362 migrating to version 4... /build/pkg/database/migrations.go:87
1764144072428367 converting events to optimized inline storage (Reiser4 optimization)... /build/pkg/database/migrations.go:347
1764144072428382 found 0 events to convert (0 regular, 0 replaceable, 0 addressable) /build/pkg/database/migrations.go:436
1764144072428388 migration complete: converted 0 events to optimized inline storage, deleted 0 old keys /build/pkg/database/migrations.go:545
1764144072428403 migrating to version 5... /build/pkg/database/migrations.go:94
1764144072428407 re-encoding events with optimized tag binary format... /build/pkg/database/migrations.go:552
1764144072428461 found 0 events with e/p tags to re-encode /build/pkg/database/migrations.go:639
1764144072428504 no events need re-encoding /build/pkg/database/migrations.go:642
╔════════════════════════════════════════════════════════╗
║ BADGER BACKEND BENCHMARK SUITE ║
╚════════════════════════════════════════════════════════╝
=== Starting Badger benchmark ===
RunPeakThroughputTest (Badger)..
=== Peak Throughput Test ===
2025/11/26 08:01:12 INFO: Successfully loaded embedded libsecp256k1 v5.0.0 from /tmp/orly-libsecp256k1/libsecp256k1.so
Events saved: 50000/50000 (100.0%), errors: 0
Duration: 3.174311581s
Events/sec: 15751.45
Avg latency: 1.442411ms
P90 latency: 1.94422ms
P95 latency: 2.22848ms
P99 latency: 3.230197ms
Bottom 10% Avg latency: 812.222µs
Wiping database between tests...
RunBurstPatternTest (Badger)..
=== Burst Pattern Test ===
Burst completed: 5000 events in 307.983371ms
Burst completed: 5000 events in 362.020748ms
Burst completed: 5000 events in 287.762195ms
Burst completed: 5000 events in 312.062236ms
Burst completed: 5000 events in 293.876571ms
Burst completed: 5000 events in 374.103253ms
Burst completed: 5000 events in 310.909244ms
Burst completed: 5000 events in 283.004205ms
Burst completed: 5000 events in 298.739839ms
Burst completed: 5000 events in 276.165042ms
Burst test completed: 50000 events in 8.112460039s, errors: 0
Events/sec: 6163.36
Wiping database between tests...
RunMixedReadWriteTest (Badger)..
=== Mixed Read/Write Test ===
Generating 1000 unique synthetic events (minimum 300 bytes each)...
Generated 1000 events:
Average content size: 312 bytes
All events are unique (incremental timestamps)
All events are properly signed
Pre-populating database for read tests...
Generating 50000 unique synthetic events (minimum 300 bytes each)...
Generated 50000 events:
Average content size: 314 bytes
All events are unique (incremental timestamps)
All events are properly signed
Mixed test completed: 25000 writes, 25000 reads in 24.41340672s
Combined ops/sec: 2048.06
Wiping database between tests...
RunQueryTest (Badger)..
=== Query Test ===
Generating 10000 unique synthetic events (minimum 300 bytes each)...
Generated 10000 events:
Average content size: 313 bytes
All events are unique (incremental timestamps)
All events are properly signed
Pre-populating database with 10000 events for query tests...
Query test completed: 370248 queries in 1m0.004253098s
Queries/sec: 6170.36
Avg query latency: 1.845097ms
P95 query latency: 7.60818ms
P99 query latency: 11.65437ms
Wiping database between tests...
RunConcurrentQueryStoreTest (Badger)..
=== Concurrent Query/Store Test ===
Generating 5000 unique synthetic events (minimum 300 bytes each)...
Generated 5000 events:
Average content size: 313 bytes
All events are unique (incremental timestamps)
All events are properly signed
Pre-populating database with 5000 events for concurrent query/store test...
Generating 50000 unique synthetic events (minimum 300 bytes each)...
Generated 50000 events:
Average content size: 314 bytes
All events are unique (incremental timestamps)
All events are properly signed
Concurrent test completed: 309475 operations (259475 queries, 50000 writes) in 1m0.004403417s
Operations/sec: 5157.54
Avg latency: 1.523601ms
Avg query latency: 1.501844ms
Avg write latency: 1.63651ms
P95 latency: 3.938186ms
P99 latency: 6.342582ms
=== Badger benchmark completed ===
================================================================================
BENCHMARK REPORT
================================================================================
Test: Peak Throughput
Duration: 3.174311581s
Total Events: 50000
Events/sec: 15751.45
Success Rate: 100.0%
Concurrent Workers: 24
Memory Used: 205 MB
Avg Latency: 1.442411ms
P90 Latency: 1.94422ms
P95 Latency: 2.22848ms
P99 Latency: 3.230197ms
Bottom 10% Avg Latency: 812.222µs
----------------------------------------
Test: Burst Pattern
Duration: 8.112460039s
Total Events: 50000
Events/sec: 6163.36
Success Rate: 100.0%
Concurrent Workers: 24
Memory Used: 254 MB
Avg Latency: 1.414472ms
P90 Latency: 1.957275ms
P95 Latency: 2.267184ms
P99 Latency: 3.19513ms
Bottom 10% Avg Latency: 750.181µs
----------------------------------------
Test: Mixed Read/Write
Duration: 24.41340672s
Total Events: 50000
Events/sec: 2048.06
Success Rate: 100.0%
Concurrent Workers: 24
Memory Used: 129 MB
Avg Latency: 400.791µs
P90 Latency: 826.182µs
P95 Latency: 921.434µs
P99 Latency: 1.143516ms
Bottom 10% Avg Latency: 1.063808ms
----------------------------------------
Test: Query Performance
Duration: 1m0.004253098s
Total Events: 370248
Events/sec: 6170.36
Success Rate: 100.0%
Concurrent Workers: 24
Memory Used: 156 MB
Avg Latency: 1.845097ms
P90 Latency: 5.757979ms
P95 Latency: 7.60818ms
P99 Latency: 11.65437ms
Bottom 10% Avg Latency: 8.384135ms
----------------------------------------
Test: Concurrent Query/Store
Duration: 1m0.004403417s
Total Events: 309475
Events/sec: 5157.54
Success Rate: 100.0%
Concurrent Workers: 24
Memory Used: 142 MB
Avg Latency: 1.523601ms
P90 Latency: 3.071867ms
P95 Latency: 3.938186ms
P99 Latency: 6.342582ms
Bottom 10% Avg Latency: 4.516506ms
----------------------------------------
Report saved to: /tmp/benchmark_nostr-rs-relay_8/benchmark_report.txt
AsciiDoc report saved to: /tmp/benchmark_nostr-rs-relay_8/benchmark_report.adoc
RELAY_NAME: nostr-rs-relay
RELAY_URL: ws://nostr-rs-relay:8080
TEST_TIMESTAMP: 2025-11-26T08:04:30+00:00
BENCHMARK_CONFIG:
Events: 50000
Workers: 24
Duration: 60s

@@ -0,0 +1,197 @@
Starting Nostr Relay Benchmark (Badger Backend)
Data Directory: /tmp/benchmark_relayer-basic_8
Events: 50000, Workers: 24, Duration: 1m0s
1764143666952973 migrating to version 1... /build/pkg/database/migrations.go:66
1764143666953030 migrating to version 2... /build/pkg/database/migrations.go:73
1764143666953049 migrating to version 3... /build/pkg/database/migrations.go:80
1764143666953055 cleaning up ephemeral events (kinds 20000-29999)... /build/pkg/database/migrations.go:294
1764143666953065 cleaned up 0 ephemeral events from database /build/pkg/database/migrations.go:339
1764143666953078 migrating to version 4... /build/pkg/database/migrations.go:87
1764143666953083 converting events to optimized inline storage (Reiser4 optimization)... /build/pkg/database/migrations.go:347
1764143666953094 found 0 events to convert (0 regular, 0 replaceable, 0 addressable) /build/pkg/database/migrations.go:436
1764143666953100 migration complete: converted 0 events to optimized inline storage, deleted 0 old keys /build/pkg/database/migrations.go:545
1764143666953114 migrating to version 5... /build/pkg/database/migrations.go:94
1764143666953119 re-encoding events with optimized tag binary format... /build/pkg/database/migrations.go:552
1764143666953134 found 0 events with e/p tags to re-encode /build/pkg/database/migrations.go:639
1764143666953141 no events need re-encoding /build/pkg/database/migrations.go:642
╔════════════════════════════════════════════════════════╗
║ BADGER BACKEND BENCHMARK SUITE ║
╚════════════════════════════════════════════════════════╝
=== Starting Badger benchmark ===
RunPeakThroughputTest (Badger)..
=== Peak Throughput Test ===
2025/11/26 07:54:26 INFO: Successfully loaded embedded libsecp256k1 v5.0.0 from /tmp/orly-libsecp256k1/libsecp256k1.so
Events saved: 50000/50000 (100.0%), errors: 0
Duration: 3.104848253s
Events/sec: 16103.85
Avg latency: 1.401051ms
P90 latency: 1.888349ms
P95 latency: 2.187347ms
P99 latency: 3.155266ms
Bottom 10% Avg latency: 788.805µs
Wiping database between tests...
RunBurstPatternTest (Badger)..
=== Burst Pattern Test ===
Burst completed: 5000 events in 309.873989ms
Burst completed: 5000 events in 341.685521ms
Burst completed: 5000 events in 289.850715ms
Burst completed: 5000 events in 315.600908ms
Burst completed: 5000 events in 288.702527ms
Burst completed: 5000 events in 374.124316ms
Burst completed: 5000 events in 312.291426ms
Burst completed: 5000 events in 289.316359ms
Burst completed: 5000 events in 420.327167ms
Burst completed: 5000 events in 332.309838ms
Burst test completed: 50000 events in 8.280469107s, errors: 0
Events/sec: 6038.31
Wiping database between tests...
RunMixedReadWriteTest (Badger)..
=== Mixed Read/Write Test ===
Generating 1000 unique synthetic events (minimum 300 bytes each)...
Generated 1000 events:
Average content size: 312 bytes
All events are unique (incremental timestamps)
All events are properly signed
Pre-populating database for read tests...
Generating 50000 unique synthetic events (minimum 300 bytes each)...
Generated 50000 events:
Average content size: 314 bytes
All events are unique (incremental timestamps)
All events are properly signed
Mixed test completed: 25000 writes, 25000 reads in 24.499295481s
Combined ops/sec: 2040.88
Wiping database between tests...
RunQueryTest (Badger)..
=== Query Test ===
Generating 10000 unique synthetic events (minimum 300 bytes each)...
Generated 10000 events:
Average content size: 313 bytes
All events are unique (incremental timestamps)
All events are properly signed
Pre-populating database with 10000 events for query tests...
Query test completed: 375154 queries in 1m0.004300893s
Queries/sec: 6252.12
Avg query latency: 1.804479ms
P95 query latency: 7.361776ms
P99 query latency: 11.303739ms
Wiping database between tests...
RunConcurrentQueryStoreTest (Badger)..
=== Concurrent Query/Store Test ===
Generating 5000 unique synthetic events (minimum 300 bytes each)...
Generated 5000 events:
Average content size: 313 bytes
All events are unique (incremental timestamps)
All events are properly signed
Pre-populating database with 5000 events for concurrent query/store test...
Generating 50000 unique synthetic events (minimum 300 bytes each)...
Generated 50000 events:
Average content size: 314 bytes
All events are unique (incremental timestamps)
All events are properly signed
Concurrent test completed: 306374 operations (256374 queries, 50000 writes) in 1m0.003786148s
Operations/sec: 5105.91
Avg latency: 1.576576ms
Avg query latency: 1.528734ms
Avg write latency: 1.821884ms
P95 latency: 4.109035ms
P99 latency: 6.61579ms
=== Badger benchmark completed ===
================================================================================
BENCHMARK REPORT
================================================================================
Test: Peak Throughput
Duration: 3.104848253s
Total Events: 50000
Events/sec: 16103.85
Success Rate: 100.0%
Concurrent Workers: 24
Memory Used: 126 MB
Avg Latency: 1.401051ms
P90 Latency: 1.888349ms
P95 Latency: 2.187347ms
P99 Latency: 3.155266ms
Bottom 10% Avg Latency: 788.805µs
----------------------------------------
Test: Burst Pattern
Duration: 8.280469107s
Total Events: 50000
Events/sec: 6038.31
Success Rate: 100.0%
Concurrent Workers: 24
Memory Used: 253 MB
Avg Latency: 1.501362ms
P90 Latency: 2.126101ms
P95 Latency: 2.477719ms
P99 Latency: 3.656509ms
Bottom 10% Avg Latency: 737.519µs
----------------------------------------
Test: Mixed Read/Write
Duration: 24.499295481s
Total Events: 50000
Events/sec: 2040.88
Success Rate: 100.0%
Concurrent Workers: 24
Memory Used: 146 MB
Avg Latency: 400.179µs
P90 Latency: 824.427µs
P95 Latency: 920.8µs
P99 Latency: 1.163662ms
Bottom 10% Avg Latency: 1.084633ms
----------------------------------------
Test: Query Performance
Duration: 1m0.004300893s
Total Events: 375154
Events/sec: 6252.12
Success Rate: 100.0%
Concurrent Workers: 24
Memory Used: 144 MB
Avg Latency: 1.804479ms
P90 Latency: 5.607171ms
P95 Latency: 7.361776ms
P99 Latency: 11.303739ms
Bottom 10% Avg Latency: 8.12332ms
----------------------------------------
Test: Concurrent Query/Store
Duration: 1m0.003786148s
Total Events: 306374
Events/sec: 5105.91
Success Rate: 100.0%
Concurrent Workers: 24
Memory Used: 115 MB
Avg Latency: 1.576576ms
P90 Latency: 3.182483ms
P95 Latency: 4.109035ms
P99 Latency: 6.61579ms
Bottom 10% Avg Latency: 4.720777ms
----------------------------------------
Report saved to: /tmp/benchmark_relayer-basic_8/benchmark_report.txt
AsciiDoc report saved to: /tmp/benchmark_relayer-basic_8/benchmark_report.adoc
RELAY_NAME: relayer-basic
RELAY_URL: ws://relayer-basic:7447
TEST_TIMESTAMP: 2025-11-26T07:57:44+00:00
BENCHMARK_CONFIG:
Events: 50000
Workers: 24
Duration: 60s

@@ -0,0 +1,198 @@
Starting Nostr Relay Benchmark (Badger Backend)
Data Directory: /tmp/benchmark_rely-sqlite_8
Events: 50000, Workers: 24, Duration: 1m0s
1764142450497543 migrating to version 1... /build/pkg/database/migrations.go:66
1764142450497609 migrating to version 2... /build/pkg/database/migrations.go:73
1764142450497631 migrating to version 3... /build/pkg/database/migrations.go:80
1764142450497636 cleaning up ephemeral events (kinds 20000-29999)... /build/pkg/database/migrations.go:294
1764142450497646 cleaned up 0 ephemeral events from database /build/pkg/database/migrations.go:339
1764142450497688 migrating to version 4... /build/pkg/database/migrations.go:87
1764142450497694 converting events to optimized inline storage (Reiser4 optimization)... /build/pkg/database/migrations.go:347
1764142450497706 found 0 events to convert (0 regular, 0 replaceable, 0 addressable) /build/pkg/database/migrations.go:436
1764142450497711 migration complete: converted 0 events to optimized inline storage, deleted 0 old keys /build/pkg/database/migrations.go:545
1764142450497773 migrating to version 5... /build/pkg/database/migrations.go:94
1764142450497779 re-encoding events with optimized tag binary format... /build/pkg/database/migrations.go:552
1764142450497793 found 0 events with e/p tags to re-encode /build/pkg/database/migrations.go:639
1764142450497798 no events need re-encoding /build/pkg/database/migrations.go:642
╔════════════════════════════════════════════════════════╗
║ BADGER BACKEND BENCHMARK SUITE ║
╚════════════════════════════════════════════════════════╝
=== Starting Badger benchmark ===
RunPeakThroughputTest (Badger)..
=== Peak Throughput Test ===
2025/11/26 07:34:10 INFO: Extracted embedded libsecp256k1 to /tmp/orly-libsecp256k1/libsecp256k1.so
2025/11/26 07:34:10 INFO: Successfully loaded embedded libsecp256k1 v5.0.0 from /tmp/orly-libsecp256k1/libsecp256k1.so
Events saved: 50000/50000 (100.0%), errors: 0
Duration: 3.067785126s
Events/sec: 16298.40
Avg latency: 1.360569ms
P90 latency: 1.819407ms
P95 latency: 2.160818ms
P99 latency: 3.606363ms
Bottom 10% Avg latency: 746.704µs
Wiping database between tests...
RunBurstPatternTest (Badger)..
=== Burst Pattern Test ===
Burst completed: 5000 events in 312.311304ms
Burst completed: 5000 events in 359.334028ms
Burst completed: 5000 events in 307.257652ms
Burst completed: 5000 events in 318.240243ms
Burst completed: 5000 events in 295.405906ms
Burst completed: 5000 events in 369.690986ms
Burst completed: 5000 events in 308.42646ms
Burst completed: 5000 events in 267.313308ms
Burst completed: 5000 events in 301.834829ms
Burst completed: 5000 events in 282.800373ms
Burst test completed: 50000 events in 8.128805288s, errors: 0
Events/sec: 6150.97
Wiping database between tests...
RunMixedReadWriteTest (Badger)..
=== Mixed Read/Write Test ===
Generating 1000 unique synthetic events (minimum 300 bytes each)...
Generated 1000 events:
Average content size: 312 bytes
All events are unique (incremental timestamps)
All events are properly signed
Pre-populating database for read tests...
Generating 50000 unique synthetic events (minimum 300 bytes each)...
Generated 50000 events:
Average content size: 314 bytes
All events are unique (incremental timestamps)
All events are properly signed
Mixed test completed: 25000 writes, 25000 reads in 24.426575006s
Combined ops/sec: 2046.95
Wiping database between tests...
RunQueryTest (Badger)..
=== Query Test ===
Generating 10000 unique synthetic events (minimum 300 bytes each)...
Generated 10000 events:
Average content size: 313 bytes
All events are unique (incremental timestamps)
All events are properly signed
Pre-populating database with 10000 events for query tests...
Query test completed: 369377 queries in 1m0.005034278s
Queries/sec: 6155.77
Avg query latency: 1.850212ms
P95 query latency: 7.621476ms
P99 query latency: 11.610958ms
Wiping database between tests...
RunConcurrentQueryStoreTest (Badger)..
=== Concurrent Query/Store Test ===
Generating 5000 unique synthetic events (minimum 300 bytes each)...
Generated 5000 events:
Average content size: 313 bytes
All events are unique (incremental timestamps)
All events are properly signed
Pre-populating database with 5000 events for concurrent query/store test...
Generating 50000 unique synthetic events (minimum 300 bytes each)...
Generated 50000 events:
Average content size: 314 bytes
All events are unique (incremental timestamps)
All events are properly signed
Concurrent test completed: 310678 operations (260678 queries, 50000 writes) in 1m0.003278222s
Operations/sec: 5177.68
Avg latency: 1.513088ms
Avg query latency: 1.495086ms
Avg write latency: 1.606937ms
P95 latency: 3.92433ms
P99 latency: 6.216487ms
=== Badger benchmark completed ===
================================================================================
BENCHMARK REPORT
================================================================================
Test: Peak Throughput
Duration: 3.067785126s
Total Events: 50000
Events/sec: 16298.40
Success Rate: 100.0%
Concurrent Workers: 24
Memory Used: 89 MB
Avg Latency: 1.360569ms
P90 Latency: 1.819407ms
P95 Latency: 2.160818ms
P99 Latency: 3.606363ms
Bottom 10% Avg Latency: 746.704µs
----------------------------------------
Test: Burst Pattern
Duration: 8.128805288s
Total Events: 50000
Events/sec: 6150.97
Success Rate: 100.0%
Concurrent Workers: 24
Memory Used: 203 MB
Avg Latency: 1.411735ms
P90 Latency: 1.9936ms
P95 Latency: 2.29313ms
P99 Latency: 3.168238ms
Bottom 10% Avg Latency: 711.036µs
----------------------------------------
Test: Mixed Read/Write
Duration: 24.426575006s
Total Events: 50000
Events/sec: 2046.95
Success Rate: 100.0%
Concurrent Workers: 24
Memory Used: 127 MB
Avg Latency: 401.18µs
P90 Latency: 826.125µs
P95 Latency: 916.446µs
P99 Latency: 1.122669ms
Bottom 10% Avg Latency: 1.080638ms
----------------------------------------
Test: Query Performance
Duration: 1m0.005034278s
Total Events: 369377
Events/sec: 6155.77
Success Rate: 100.0%
Concurrent Workers: 24
Memory Used: 106 MB
Avg Latency: 1.850212ms
P90 Latency: 5.767292ms
P95 Latency: 7.621476ms
P99 Latency: 11.610958ms
Bottom 10% Avg Latency: 8.365982ms
----------------------------------------
Test: Concurrent Query/Store
Duration: 1m0.003278222s
Total Events: 310678
Events/sec: 5177.68
Success Rate: 100.0%
Concurrent Workers: 24
Memory Used: 133 MB
Avg Latency: 1.513088ms
P90 Latency: 3.049471ms
P95 Latency: 3.92433ms
P99 Latency: 6.216487ms
Bottom 10% Avg Latency: 4.456235ms
----------------------------------------
Report saved to: /tmp/benchmark_rely-sqlite_8/benchmark_report.txt
AsciiDoc report saved to: /tmp/benchmark_rely-sqlite_8/benchmark_report.adoc
RELAY_NAME: rely-sqlite
RELAY_URL: ws://rely-sqlite:3334
TEST_TIMESTAMP: 2025-11-26T07:37:28+00:00
BENCHMARK_CONFIG:
Events: 50000
Workers: 24
Duration: 60s

@@ -0,0 +1,197 @@
Starting Nostr Relay Benchmark (Badger Backend)
Data Directory: /tmp/benchmark_strfry_8
Events: 50000, Workers: 24, Duration: 1m0s
1764143869786425 migrating to version 1... /build/pkg/database/migrations.go:66
1764143869786498 migrating to version 2... /build/pkg/database/migrations.go:73
1764143869786524 migrating to version 3... /build/pkg/database/migrations.go:80
1764143869786530 cleaning up ephemeral events (kinds 20000-29999)... /build/pkg/database/migrations.go:294
1764143869786539 cleaned up 0 ephemeral events from database /build/pkg/database/migrations.go:339
1764143869786552 migrating to version 4... /build/pkg/database/migrations.go:87
1764143869786556 converting events to optimized inline storage (Reiser4 optimization)... /build/pkg/database/migrations.go:347
1764143869786565 found 0 events to convert (0 regular, 0 replaceable, 0 addressable) /build/pkg/database/migrations.go:436
1764143869786570 migration complete: converted 0 events to optimized inline storage, deleted 0 old keys /build/pkg/database/migrations.go:545
1764143869786584 migrating to version 5... /build/pkg/database/migrations.go:94
1764143869786589 re-encoding events with optimized tag binary format... /build/pkg/database/migrations.go:552
1764143869786604 found 0 events with e/p tags to re-encode /build/pkg/database/migrations.go:639
1764143869786609 no events need re-encoding /build/pkg/database/migrations.go:642
╔════════════════════════════════════════════════════════╗
║ BADGER BACKEND BENCHMARK SUITE ║
╚════════════════════════════════════════════════════════╝
=== Starting Badger benchmark ===
RunPeakThroughputTest (Badger)..
=== Peak Throughput Test ===
2025/11/26 07:57:49 INFO: Successfully loaded embedded libsecp256k1 v5.0.0 from /tmp/orly-libsecp256k1/libsecp256k1.so
Events saved: 50000/50000 (100.0%), errors: 0
Duration: 3.085029825s
Events/sec: 16207.30
Avg latency: 1.381579ms
P90 latency: 1.865718ms
P95 latency: 2.15555ms
P99 latency: 3.097841ms
Bottom 10% Avg latency: 760.474µs
Wiping database between tests...
RunBurstPatternTest (Badger)..
=== Burst Pattern Test ===
Burst completed: 5000 events in 307.173651ms
Burst completed: 5000 events in 334.907841ms
Burst completed: 5000 events in 290.888159ms
Burst completed: 5000 events in 403.807089ms
Burst completed: 5000 events in 327.956144ms
Burst completed: 5000 events in 364.629959ms
Burst completed: 5000 events in 328.780115ms
Burst completed: 5000 events in 290.361314ms
Burst completed: 5000 events in 304.825415ms
Burst completed: 5000 events in 270.287065ms
Burst test completed: 50000 events in 8.230287366s, errors: 0
Events/sec: 6075.12
Wiping database between tests...
RunMixedReadWriteTest (Badger)..
=== Mixed Read/Write Test ===
Generating 1000 unique synthetic events (minimum 300 bytes each)...
Generated 1000 events:
Average content size: 312 bytes
All events are unique (incremental timestamps)
All events are properly signed
Pre-populating database for read tests...
Generating 50000 unique synthetic events (minimum 300 bytes each)...
Generated 50000 events:
Average content size: 314 bytes
All events are unique (incremental timestamps)
All events are properly signed
Mixed test completed: 25000 writes, 25000 reads in 24.348961585s
Combined ops/sec: 2053.48
Wiping database between tests...
RunQueryTest (Badger)..
=== Query Test ===
Generating 10000 unique synthetic events (minimum 300 bytes each)...
Generated 10000 events:
Average content size: 313 bytes
All events are unique (incremental timestamps)
All events are properly signed
Pre-populating database with 10000 events for query tests...
Query test completed: 376537 queries in 1m0.004019885s
Queries/sec: 6275.20
Avg query latency: 1.80891ms
P95 query latency: 7.432319ms
P99 query latency: 11.306037ms
Wiping database between tests...
RunConcurrentQueryStoreTest (Badger)..
=== Concurrent Query/Store Test ===
Generating 5000 unique synthetic events (minimum 300 bytes each)...
Generated 5000 events:
Average content size: 313 bytes
All events are unique (incremental timestamps)
All events are properly signed
Pre-populating database with 5000 events for concurrent query/store test...
Generating 50000 unique synthetic events (minimum 300 bytes each)...
Generated 50000 events:
Average content size: 314 bytes
All events are unique (incremental timestamps)
All events are properly signed
Concurrent test completed: 310473 operations (260473 queries, 50000 writes) in 1m0.003152564s
Operations/sec: 5174.28
Avg latency: 1.532065ms
Avg query latency: 1.496816ms
Avg write latency: 1.715689ms
P95 latency: 3.943934ms
P99 latency: 6.631879ms
=== Badger benchmark completed ===
================================================================================
BENCHMARK REPORT
================================================================================
Test: Peak Throughput
Duration: 3.085029825s
Total Events: 50000
Events/sec: 16207.30
Success Rate: 100.0%
Concurrent Workers: 24
Memory Used: 129 MB
Avg Latency: 1.381579ms
P90 Latency: 1.865718ms
P95 Latency: 2.15555ms
P99 Latency: 3.097841ms
Bottom 10% Avg Latency: 760.474µs
----------------------------------------
Test: Burst Pattern
Duration: 8.230287366s
Total Events: 50000
Events/sec: 6075.12
Success Rate: 100.0%
Concurrent Workers: 24
Memory Used: 254 MB
Avg Latency: 1.45496ms
P90 Latency: 2.073563ms
P95 Latency: 2.414222ms
P99 Latency: 3.497151ms
Bottom 10% Avg Latency: 681.141µs
----------------------------------------
Test: Mixed Read/Write
Duration: 24.348961585s
Total Events: 50000
Events/sec: 2053.48
Success Rate: 100.0%
Concurrent Workers: 24
Memory Used: 175 MB
Avg Latency: 394.928µs
P90 Latency: 814.769µs
P95 Latency: 907.647µs
P99 Latency: 1.116704ms
Bottom 10% Avg Latency: 1.044591ms
----------------------------------------
Test: Query Performance
Duration: 1m0.004019885s
Total Events: 376537
Events/sec: 6275.20
Success Rate: 100.0%
Concurrent Workers: 24
Memory Used: 138 MB
Avg Latency: 1.80891ms
P90 Latency: 5.616736ms
P95 Latency: 7.432319ms
P99 Latency: 11.306037ms
Bottom 10% Avg Latency: 8.164604ms
----------------------------------------
Test: Concurrent Query/Store
Duration: 1m0.003152564s
Total Events: 310473
Events/sec: 5174.28
Success Rate: 100.0%
Concurrent Workers: 24
Memory Used: 147 MB
Avg Latency: 1.532065ms
P90 Latency: 3.05393ms
P95 Latency: 3.943934ms
P99 Latency: 6.631879ms
Bottom 10% Avg Latency: 4.619007ms
----------------------------------------
Report saved to: /tmp/benchmark_strfry_8/benchmark_report.txt
AsciiDoc report saved to: /tmp/benchmark_strfry_8/benchmark_report.adoc
RELAY_NAME: strfry
RELAY_URL: ws://strfry:8080
TEST_TIMESTAMP: 2025-11-26T08:01:07+00:00
BENCHMARK_CONFIG:
Events: 50000
Workers: 24
Duration: 60s

@@ -1,9 +1,44 @@
#!/bin/bash
# Wrapper script to run the benchmark suite and automatically shut down when complete
#
# Usage:
# ./run-benchmark.sh # Use disk-based storage (default)
# ./run-benchmark.sh --ramdisk # Use /dev/shm ramdisk for maximum performance
set -e
# Parse command line arguments
USE_RAMDISK=false
for arg in "$@"; do
case $arg in
--ramdisk)
USE_RAMDISK=true
shift
;;
--help|-h)
echo "Usage: $0 [OPTIONS]"
echo ""
echo "Options:"
echo " --ramdisk Use /dev/shm ramdisk storage instead of disk"
echo " This eliminates disk I/O bottlenecks for accurate"
echo " relay performance measurement."
echo " --help, -h Show this help message"
echo ""
echo "Requirements for --ramdisk:"
echo " - /dev/shm must be available (tmpfs mount)"
echo " - At least 8GB available in /dev/shm recommended"
echo " - Increase size with: sudo mount -o remount,size=16G /dev/shm"
exit 0
;;
*)
echo "Unknown option: $arg"
echo "Use --help for usage information"
exit 1
;;
esac
done
# Determine docker-compose command
if docker compose version &> /dev/null 2>&1; then
DOCKER_COMPOSE="docker compose"
@@ -11,43 +46,107 @@ else
DOCKER_COMPOSE="docker-compose"
fi
# Clean old data directories (may be owned by root from Docker)
if [ -d "data" ]; then
echo "Cleaning old data directories..."
if ! rm -rf data/ 2>/dev/null; then
# If normal rm fails (permission denied), provide clear instructions
echo ""
echo "ERROR: Cannot remove data directories due to permission issues."
echo "This happens because Docker creates files as root."
echo ""
echo "Please run one of the following to clean up:"
echo " sudo rm -rf data/"
echo " sudo chown -R \$(id -u):\$(id -g) data/ && rm -rf data/"
echo ""
echo "Then run this script again."
# Set data directory and compose files based on mode
if [ "$USE_RAMDISK" = true ]; then
DATA_BASE="/dev/shm/benchmark"
COMPOSE_FILES="-f docker-compose.yml -f docker-compose.ramdisk.yml"
echo "======================================================"
echo " RAMDISK BENCHMARK MODE"
echo "======================================================"
# Check /dev/shm availability
if [ ! -d "/dev/shm" ]; then
echo "ERROR: /dev/shm is not available on this system."
echo "This benchmark requires a tmpfs-mounted /dev/shm for RAM-based storage."
exit 1
fi
# Check available space in /dev/shm (need at least 8GB for benchmarks)
SHM_AVAILABLE_KB=$(df /dev/shm | tail -1 | awk '{print $4}')
SHM_AVAILABLE_GB=$((SHM_AVAILABLE_KB / 1024 / 1024))
echo " Storage location: ${DATA_BASE}"
echo " Available RAM: ${SHM_AVAILABLE_GB}GB"
echo " This eliminates disk I/O bottlenecks for accurate"
echo " relay performance measurement."
echo "======================================================"
echo ""
if [ "$SHM_AVAILABLE_KB" -lt 8388608 ]; then
echo "WARNING: Less than 8GB available in /dev/shm (${SHM_AVAILABLE_GB}GB available)"
echo "Benchmarks may fail if databases grow too large."
echo "Consider increasing tmpfs size: sudo mount -o remount,size=16G /dev/shm"
echo ""
read -p "Continue anyway? [y/N] " -n 1 -r
echo
if [[ ! $REPLY =~ ^[Yy]$ ]]; then
exit 1
fi
fi
else
DATA_BASE="./data"
COMPOSE_FILES="-f docker-compose.yml"
echo "======================================================"
echo " DISK-BASED BENCHMARK MODE (default)"
echo "======================================================"
echo " Storage location: ${DATA_BASE}"
echo " Tip: Use --ramdisk for faster benchmarks without"
echo " disk I/O bottlenecks."
echo "======================================================"
echo ""
fi
# Clean old data directories (may be owned by root from Docker)
if [ -d "${DATA_BASE}" ]; then
echo "Cleaning old data directories at ${DATA_BASE}..."
if ! rm -rf "${DATA_BASE}" 2>/dev/null; then
# If normal rm fails (permission denied), try with sudo for ramdisk
if [ "$USE_RAMDISK" = true ]; then
echo "Need elevated permissions to clean ramdisk..."
if ! sudo rm -rf "${DATA_BASE}" 2>/dev/null; then
echo ""
echo "ERROR: Cannot remove data directories."
echo "Please run: sudo rm -rf ${DATA_BASE}"
echo "Then run this script again."
exit 1
fi
else
# Provide clear instructions for disk-based mode
echo ""
echo "ERROR: Cannot remove data directories due to permission issues."
echo "This happens because Docker creates files as root."
echo ""
echo "Please run one of the following to clean up:"
echo " sudo rm -rf ${DATA_BASE}/"
echo " sudo chown -R \$(id -u):\$(id -g) ${DATA_BASE}/ && rm -rf ${DATA_BASE}/"
echo ""
echo "Then run this script again."
exit 1
fi
fi
fi
# Stop any running containers from previous runs
echo "Stopping any running containers..."
$DOCKER_COMPOSE down 2>/dev/null || true
$DOCKER_COMPOSE $COMPOSE_FILES down 2>/dev/null || true
# Create fresh data directories with correct permissions
echo "Preparing data directories..."
echo "Preparing data directories at ${DATA_BASE}..."
# Clean Neo4j data to prevent "already running" errors
if [ -d "data/neo4j" ]; then
echo "Cleaning Neo4j data directory..."
rm -rf data/neo4j/*
if [ "$USE_RAMDISK" = true ]; then
# Create ramdisk directories
mkdir -p "${DATA_BASE}"/{next-orly-badger,next-orly-dgraph,next-orly-neo4j,dgraph-zero,dgraph-alpha,neo4j,neo4j-logs,khatru-sqlite,khatru-badger,relayer-basic,strfry,nostr-rs-relay,rely-sqlite,postgres}
chmod 777 "${DATA_BASE}"/{next-orly-badger,next-orly-dgraph,next-orly-neo4j,dgraph-zero,dgraph-alpha,neo4j,neo4j-logs,khatru-sqlite,khatru-badger,relayer-basic,strfry,nostr-rs-relay,rely-sqlite,postgres}
else
# Create disk directories (relative path)
mkdir -p data/{next-orly-badger,next-orly-dgraph,next-orly-neo4j,dgraph-zero,dgraph-alpha,neo4j,neo4j-logs,khatru-sqlite,khatru-badger,relayer-basic,strfry,nostr-rs-relay,rely-sqlite,postgres}
chmod 777 data/{next-orly-badger,next-orly-dgraph,next-orly-neo4j,dgraph-zero,dgraph-alpha,neo4j,neo4j-logs,khatru-sqlite,khatru-badger,relayer-basic,strfry,nostr-rs-relay,rely-sqlite,postgres}
fi
mkdir -p data/{next-orly-badger,next-orly-dgraph,next-orly-neo4j,dgraph-zero,dgraph-alpha,neo4j,neo4j-logs,khatru-sqlite,khatru-badger,relayer-basic,strfry,nostr-rs-relay,rely-sqlite,postgres}
chmod 777 data/{next-orly-badger,next-orly-dgraph,next-orly-neo4j,dgraph-zero,dgraph-alpha,neo4j,neo4j-logs,khatru-sqlite,khatru-badger,relayer-basic,strfry,nostr-rs-relay,rely-sqlite,postgres}
echo "Building fresh Docker images..."
# Force rebuild to pick up latest code changes
$DOCKER_COMPOSE build --no-cache benchmark-runner next-orly-badger next-orly-dgraph next-orly-neo4j rely-sqlite
$DOCKER_COMPOSE $COMPOSE_FILES build --no-cache benchmark-runner next-orly-badger next-orly-dgraph next-orly-neo4j rely-sqlite
echo ""
echo "Starting benchmark suite..."
@@ -55,7 +154,22 @@ echo "This will automatically shut down all containers when the benchmark comple
echo ""
# Run docker compose with flags to exit when benchmark-runner completes
$DOCKER_COMPOSE up --exit-code-from benchmark-runner --abort-on-container-exit
$DOCKER_COMPOSE $COMPOSE_FILES up --exit-code-from benchmark-runner --abort-on-container-exit
# Cleanup function
cleanup() {
echo ""
echo "Cleaning up..."
$DOCKER_COMPOSE $COMPOSE_FILES down 2>/dev/null || true
if [ "$USE_RAMDISK" = true ]; then
echo "Cleaning ramdisk data at ${DATA_BASE}..."
rm -rf "${DATA_BASE}" 2>/dev/null || sudo rm -rf "${DATA_BASE}" 2>/dev/null || true
fi
}
# Register cleanup on script exit
trap cleanup EXIT
echo ""
echo "Benchmark suite has completed and all containers have been stopped."

@@ -36,12 +36,12 @@ var (
// BlossomDescriptor represents a blob descriptor returned by the server
type BlossomDescriptor struct {
URL string `json:"url"`
SHA256 string `json:"sha256"`
Size int64 `json:"size"`
Type string `json:"type,omitempty"`
Uploaded int64 `json:"uploaded"`
PublicKey string `json:"public_key,omitempty"`
URL string `json:"url"`
SHA256 string `json:"sha256"`
Size int64 `json:"size"`
Type string `json:"type,omitempty"`
Uploaded int64 `json:"uploaded"`
PublicKey string `json:"public_key,omitempty"`
Tags [][]string `json:"tags,omitempty"`
}
@@ -49,7 +49,7 @@ func main() {
flag.Parse()
fmt.Println("🌸 Blossom Test Tool")
fmt.Println("===================\n")
fmt.Println("===================")
// Get or generate keypair (only if auth is enabled)
var sec, pub []byte


@@ -1,54 +1,50 @@
# Dockerfile for Stella's Nostr Relay (next.orly.dev)
# Owner: npub1v30tsz9vw6ylpz63g0a702nj3xa26t3m7p5us8f2y2sd8v6cnsvq465zjx
#
# Build from repository root:
# docker build -f contrib/stella/Dockerfile -t stella-relay .
FROM golang:alpine AS builder
# Use Debian-based Go image to match runtime stage (avoids musl/glibc linker mismatch)
FROM golang:1.25-bookworm AS builder
# Install build dependencies
RUN apk add --no-cache \
git \
build-base \
autoconf \
automake \
libtool \
pkgconfig
# Install secp256k1 library from Alpine packages
RUN apk add --no-cache libsecp256k1-dev
RUN apt-get update && apt-get install -y --no-install-recommends git make && rm -rf /var/lib/apt/lists/*
# Set working directory
WORKDIR /build
# Copy go modules first (for better caching)
COPY ../../go.mod go.sum ./
COPY go.mod go.sum ./
RUN go mod download
# Copy source code
COPY ../.. .
COPY . .
# Build the relay with optimizations from v0.4.8
RUN CGO_ENABLED=1 GOOS=linux go build -ldflags "-w -s" -o relay .
# Build the relay with CGO disabled (uses purego for crypto)
RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -ldflags "-w -s" -o relay .
# Create non-root user for security
RUN adduser -D -u 1000 stella && \
RUN useradd -m -u 1000 stella && \
chown -R 1000:1000 /build
# Final stage - minimal runtime image
FROM alpine:latest
# Use Debian slim instead of Alpine because Debian's libsecp256k1 includes
# Schnorr signatures (secp256k1_schnorrsig_*) and ECDH which Nostr requires.
# Alpine's libsecp256k1 is built without these modules.
FROM debian:bookworm-slim
# Install only runtime dependencies
RUN apk add --no-cache \
ca-certificates \
curl \
libsecp256k1 \
libsecp256k1-dev
# Install runtime dependencies
RUN apt-get update && \
apt-get install -y --no-install-recommends ca-certificates curl libsecp256k1-1 && \
rm -rf /var/lib/apt/lists/*
WORKDIR /app
# Copy binary from builder
# Copy binary (libsecp256k1.so.1 is already installed via apt)
COPY --from=builder /build/relay /app/relay
# Create runtime user and directories
RUN adduser -D -u 1000 stella && \
RUN useradd -m -u 1000 stella && \
mkdir -p /data /profiles /app && \
chown -R 1000:1000 /data /profiles /app


@@ -283,15 +283,13 @@ Dockerfiles simplified:
FROM golang:1.25-alpine AS builder
WORKDIR /build
COPY . .
RUN go build -ldflags "-s -w" -o orly .
RUN CGO_ENABLED=0 go build -ldflags "-s -w" -o orly .
# Runtime can optionally include library
# Runtime includes libsecp256k1.so from repository
FROM alpine:latest
RUN apk add --no-cache wget ca-certificates
RUN apk add --no-cache ca-certificates
COPY --from=builder /build/orly /app/orly
# Download libsecp256k1.so from nostr repository (optional for performance)
RUN wget -q https://git.mleku.dev/mleku/nostr/raw/branch/main/crypto/p8k/libsecp256k1.so \
-O /app/libsecp256k1.so || echo "Warning: libsecp256k1.so download failed (optional)"
COPY --from=builder /build/libsecp256k1.so /app/libsecp256k1.so
ENV LD_LIBRARY_PATH=/app
CMD ["/app/orly"]
```

main.go

@@ -62,6 +62,34 @@ func main() {
os.Exit(0)
}
// Handle 'serve' subcommand: start ephemeral relay with RAM-based storage
if config.ServeRequested() {
const serveDataDir = "/dev/shm/orlyserve"
log.I.F("serve mode: configuring ephemeral relay at %s", serveDataDir)
// Delete existing directory completely
if err = os.RemoveAll(serveDataDir); err != nil && !os.IsNotExist(err) {
log.E.F("failed to remove existing serve directory: %v", err)
os.Exit(1)
}
// Create fresh directory
if err = os.MkdirAll(serveDataDir, 0755); chk.E(err) {
log.E.F("failed to create serve directory: %v", err)
os.Exit(1)
}
// Override configuration for serve mode
cfg.DataDir = serveDataDir
cfg.Listen = "0.0.0.0"
cfg.Port = 10547
cfg.ACLMode = "none"
cfg.ServeMode = true // Grant full owner access to all users
log.I.F("serve mode: listening on %s:%d with ACL mode '%s' (full owner access)",
cfg.Listen, cfg.Port, cfg.ACLMode)
}
// Ensure profiling is stopped on interrupts (SIGINT/SIGTERM) as well as on normal exit
var profileStopOnce sync.Once
profileStop := func() {}


@@ -123,14 +123,13 @@ func (f *Follows) Configure(cfg ...any) (err error) {
}
// log.I.F("admin follow list:\n%s", ev.Serialize())
for _, v := range ev.Tags.GetAll([]byte("p")) {
// log.I.F("adding follow: %s", v.Value())
var a []byte
if b, e := hex.DecodeString(string(v.Value())); chk.E(e) {
// log.I.F("adding follow: %s", v.ValueHex())
// ValueHex() automatically handles both binary and hex storage formats
if b, e := hex.DecodeString(string(v.ValueHex())); chk.E(e) {
continue
} else {
a = b
f.follows = append(f.follows, b)
}
f.follows = append(f.follows, a)
}
}
}
@@ -923,8 +922,15 @@ func (f *Follows) extractFollowedPubkeys(event *event.E) {
// Extract all 'p' tags (followed pubkeys) from the kind 3 event
for _, tag := range event.Tags.GetAll([]byte("p")) {
if len(tag.Value()) == 32 { // Valid pubkey length
f.AddFollow(tag.Value())
// First try binary format (optimized storage: 33 bytes = 32 hash + null)
if pubkey := tag.ValueBinary(); pubkey != nil {
f.AddFollow(pubkey)
continue
}
// Fall back to hex decoding for non-binary values
// ValueHex() handles both formats, but we already checked binary above
if pubkey, err := hex.DecodeString(string(tag.Value())); err == nil && len(pubkey) == 32 {
f.AddFollow(pubkey)
}
}
}


@@ -52,6 +52,11 @@ func (n *None) Configure(cfg ...any) (err error) {
}
func (n *None) GetAccessLevel(pub []byte, address string) (level string) {
// In serve mode, grant full owner access to everyone
if n.cfg != nil && n.cfg.ServeMode {
return "owner"
}
// Check owners first
for _, v := range n.owners {
if utils.FastEqual(v, pub) {


@@ -523,11 +523,11 @@ func TestServerErrorHandling(t *testing.T) {
statusCode: http.StatusNotFound,
},
{
name: "Missing auth header",
name: "Anonymous upload allowed",
method: "PUT",
path: "/upload",
body: []byte("test"),
statusCode: http.StatusUnauthorized,
statusCode: http.StatusOK, // RequireAuth=false and ACL=none allows anonymous uploads
},
{
name: "Invalid JSON in mirror",


@@ -313,7 +313,7 @@ func New(policyJSON []byte) (p *P, err error) {
// 2. Mentioned in a p-tag of the event
//
// Both ev.Pubkey and userPubkey must be binary ([]byte), not hex-encoded.
// P-tags are assumed to contain hex-encoded pubkeys that will be decoded.
// P-tags may be stored in either binary-optimized format (33 bytes) or hex format.
//
// This is the single source of truth for "parties_involved" / "privileged" checks.
func IsPartyInvolved(ev *event.E, userPubkey []byte) bool {
@@ -330,8 +330,8 @@ func IsPartyInvolved(ev *event.E, userPubkey []byte) bool {
// Check if user is in p tags
pTags := ev.Tags.GetAll([]byte("p"))
for _, pTag := range pTags {
// pTag.Value() returns hex-encoded string; decode to bytes for comparison
pt, err := hex.Dec(string(pTag.Value()))
// ValueHex() handles both binary and hex storage formats automatically
pt, err := hex.Dec(string(pTag.ValueHex()))
if err != nil {
// Skip malformed tags
continue


@@ -4,6 +4,7 @@ import (
"encoding/json"
"testing"
"git.mleku.dev/mleku/nostr/encoders/event"
"git.mleku.dev/mleku/nostr/encoders/hex"
)
@@ -241,3 +242,97 @@ func TestNoReadAllowNoPrivileged(t *testing.T) {
}
})
}
// TestPrivilegedWithBinaryEncodedPTags tests that privileged access works correctly
// when p-tags are stored in binary-optimized format (as happens after JSON unmarshaling).
// This is the real-world scenario where events come from network JSON.
func TestPrivilegedWithBinaryEncodedPTags(t *testing.T) {
_, alicePubkey := generateTestKeypair(t)
_, bobPubkey := generateTestKeypair(t)
_, charliePubkey := generateTestKeypair(t)
// Create policy with privileged flag
policyJSON := map[string]interface{}{
"rules": map[string]interface{}{
"4": map[string]interface{}{
"description": "DM - privileged only",
"privileged": true,
},
},
}
policyBytes, err := json.Marshal(policyJSON)
if err != nil {
t.Fatalf("Failed to marshal policy: %v", err)
}
policy, err := New(policyBytes)
if err != nil {
t.Fatalf("Failed to create policy: %v", err)
}
// Create event JSON with p-tag (simulating real network event)
// When this JSON is unmarshaled, the p-tag value will be converted to binary format
eventJSON := `{
"id": "0000000000000000000000000000000000000000000000000000000000000001",
"pubkey": "` + hex.Enc(alicePubkey) + `",
"created_at": 1234567890,
"kind": 4,
"tags": [["p", "` + hex.Enc(bobPubkey) + `"]],
"content": "Secret message",
"sig": "00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000"
}`
var ev event.E
if err := json.Unmarshal([]byte(eventJSON), &ev); err != nil {
t.Fatalf("Failed to unmarshal event: %v", err)
}
// Verify the p-tag is stored in binary format
pTags := ev.Tags.GetAll([]byte("p"))
if len(pTags) == 0 {
t.Fatal("Event should have p-tag")
}
pTag := pTags[0]
binValue := pTag.ValueBinary()
t.Logf("P-tag Value() length: %d", len(pTag.Value()))
t.Logf("P-tag ValueBinary(): %v (len=%d)", binValue != nil, len(binValue))
if binValue == nil {
t.Log("Warning: P-tag is NOT in binary format (test may not exercise the binary code path)")
} else {
t.Log("P-tag IS in binary format - testing binary-encoded path")
}
// Test: Bob (in p-tag) should be able to read even with binary-encoded tag
t.Run("bob_binary_ptag_can_read", func(t *testing.T) {
allowed, err := policy.CheckPolicy("read", &ev, bobPubkey, "127.0.0.1")
if err != nil {
t.Fatalf("Unexpected error: %v", err)
}
if !allowed {
t.Error("BUG! Recipient (in binary-encoded p-tag) should be able to read privileged event")
}
})
// Test: Alice (author) should be able to read
t.Run("alice_author_can_read", func(t *testing.T) {
allowed, err := policy.CheckPolicy("read", &ev, alicePubkey, "127.0.0.1")
if err != nil {
t.Fatalf("Unexpected error: %v", err)
}
if !allowed {
t.Error("Author should be able to read their own privileged event")
}
})
// Test: Charlie (third party) should NOT be able to read
t.Run("charlie_denied", func(t *testing.T) {
allowed, err := policy.CheckPolicy("read", &ev, charliePubkey, "127.0.0.1")
if err != nil {
t.Fatalf("Unexpected error: %v", err)
}
if allowed {
t.Error("Third party should NOT be able to read privileged event")
}
})
}


@@ -200,7 +200,7 @@ func ParseTrustAct(ev *event.E) (ta *TrustAct, err error) {
ta = &TrustAct{
Event: ev,
TargetPubkey: string(pTag.Value()),
TargetPubkey: string(pTag.ValueHex()), // ValueHex() handles binary/hex storage
TrustLevel: trustLevel,
RelayURL: string(relayTag.Value()),
Expiry: expiry,


@@ -1 +1 @@
v0.30.1
v0.30.2


@@ -37,7 +37,7 @@ ORLY is a standard Go application that can be built using the Go toolchain.
=== prerequisites
- Go 1.25.0 or later
- Go 1.25.3 or later
- Git
- For web UI: link:https://bun.sh/[Bun] JavaScript runtime
@@ -179,7 +179,7 @@ cd next.orly.dev
The script will:
1. **Install Go 1.25.0** if not present (in `~/.local/go`)
1. **Install Go 1.25.3** if not present (in `~/.local/go`)
2. **Configure environment** by creating `~/.goenv` and updating `~/.bashrc`
3. **Build the relay** with embedded web UI using `update-embedded-web.sh`
4. **Set capabilities** for port 443 binding (requires sudo)
@@ -342,6 +342,165 @@ For detailed testing instructions, multi-relay testing scenarios, and advanced u
The benchmark suite provides comprehensive performance testing and comparison across multiple relay implementations, including throughput, latency, and memory usage metrics.
== command-line tools
ORLY includes several command-line utilities in the `cmd/` directory for testing, debugging, and administration.
=== relay-tester
Nostr protocol compliance testing tool. Validates that a relay correctly implements the Nostr protocol specification.
[source,bash]
----
# Run all protocol compliance tests
go run ./cmd/relay-tester -url ws://localhost:3334
# List available tests
go run ./cmd/relay-tester -list
# Run specific test
go run ./cmd/relay-tester -url ws://localhost:3334 -test "Basic Event"
# Output results as JSON
go run ./cmd/relay-tester -url ws://localhost:3334 -json
----
=== benchmark
Comprehensive relay performance benchmarking tool. Tests event storage, queries, and subscription performance with detailed latency metrics (P90, P95, P99).
[source,bash]
----
# Run benchmarks against local database
go run ./cmd/benchmark -data-dir /tmp/bench-db -events 10000 -workers 4
# Run benchmarks against a running relay
go run ./cmd/benchmark -relay ws://localhost:3334 -events 5000
# Use different database backends
go run ./cmd/benchmark -dgraph -events 10000
go run ./cmd/benchmark -neo4j -events 10000
----
The `cmd/benchmark/` directory also includes Docker Compose configurations for comparative benchmarks across multiple relay implementations (strfry, nostr-rs-relay, khatru, etc.).
=== stresstest
Load testing tool for evaluating relay performance under sustained high-traffic conditions. Generates events with random content and tags to simulate realistic workloads.
[source,bash]
----
# Run stress test with 10 concurrent workers
go run ./cmd/stresstest -url ws://localhost:3334 -workers 10 -duration 60s
# Generate events with random p-tags (up to 100 per event)
go run ./cmd/stresstest -url ws://localhost:3334 -workers 5
----
=== blossomtest
Tests the relay's implementation of the Blossom blob storage protocol (BUD-01/BUD-02), validating upload, download, and authentication flows.
[source,bash]
----
# Test with generated key
go run ./cmd/blossomtest -url http://localhost:3334 -size 1024
# Test with specific nsec
go run ./cmd/blossomtest -url http://localhost:3334 -nsec nsec1...
# Test anonymous uploads (no authentication)
go run ./cmd/blossomtest -url http://localhost:3334 -no-auth
----
=== aggregator
Event aggregation utility that fetches events from multiple relays using Bloom filters for deduplication. Useful for syncing events across relays with memory-efficient duplicate detection.
[source,bash]
----
go run ./cmd/aggregator -relays wss://relay1.com,wss://relay2.com -output events.jsonl
----
=== convert
Key format conversion utility. Converts between hex and bech32 (npub/nsec) formats for Nostr keys.
[source,bash]
----
# Convert npub to hex
go run ./cmd/convert npub1abc...
# Convert hex to npub
go run ./cmd/convert 0123456789abcdef...
# Convert secret key (nsec or hex) - outputs both nsec and derived npub
go run ./cmd/convert --secret nsec1xyz...
----
=== FIND
Free Internet Name Daemon - CLI tool for the distributed naming system. Manages name registration, transfers, and certificate issuance.
[source,bash]
----
# Validate a name format
go run ./cmd/FIND verify-name example.nostr
# Generate a new key pair
go run ./cmd/FIND generate-key
# Create a registration proposal
go run ./cmd/FIND register myname.nostr
# Transfer a name to a new owner
go run ./cmd/FIND transfer myname.nostr npub1newowner...
----
=== policytest
Tests the policy system for event write control. Validates that policy rules correctly allow or reject events based on kind, pubkey, and other criteria.
[source,bash]
----
go run ./cmd/policytest -url ws://localhost:3334 -type event -kind 4678
go run ./cmd/policytest -url ws://localhost:3334 -type req -kind 1
go run ./cmd/policytest -url ws://localhost:3334 -type publish-and-query -count 5
----
=== policyfiltertest
Tests policy-based filtering with authorized and unauthorized pubkeys. Validates access control rules for specific users.
[source,bash]
----
go run ./cmd/policyfiltertest -url ws://localhost:3334 \
-allowed-pubkey <hex> -allowed-sec <hex> \
-unauthorized-pubkey <hex> -unauthorized-sec <hex>
----
=== subscription-test
Tests WebSocket subscription stability over extended periods. Monitors for dropped subscriptions and connection issues.
[source,bash]
----
# Run subscription stability test for 60 seconds
go run ./cmd/subscription-test -url ws://localhost:3334 -duration 60 -kind 1
# With verbose output
go run ./cmd/subscription-test -url ws://localhost:3334 -duration 120 -v
----
=== subscription-test-simple
Simplified subscription stability test that verifies subscriptions remain active without dropping over the test duration.
[source,bash]
----
go run ./cmd/subscription-test-simple -url ws://localhost:3334 -duration 120
----
== access control
=== follows ACL
@@ -378,3 +537,26 @@ export ORLY_CLUSTER_PROPAGATE_PRIVILEGED_EVENTS=false
**Important:** When disabled, privileged events will not be replicated to peer relays. This provides better privacy but means these events will only be available on the originating relay. Users should be aware that accessing their privileged events may require connecting directly to the relay where they were originally published.
== developer notes
=== binary-optimized tag storage
The nostr library (`git.mleku.dev/mleku/nostr/encoders/tag`) uses binary optimization for `e` and `p` tags to reduce memory usage and improve comparison performance.
When events are unmarshaled from JSON, 64-character hex values in e/p tags are converted to a 33-byte binary format (a 32-byte hash followed by a null terminator).
**Important:** When working with e/p tag values in code:
* **DO NOT** use `tag.Value()` directly - it returns raw bytes which may be binary, not hex
* **ALWAYS** use `tag.ValueHex()` to get a hex string regardless of storage format
* **Use** `tag.ValueBinary()` to get raw 32-byte binary (returns nil if not binary-encoded)
[source,go]
----
// CORRECT: Use ValueHex() for hex decoding
pt, err := hex.Dec(string(pTag.ValueHex()))
// WRONG: Value() may return binary bytes, not hex
pt, err := hex.Dec(string(pTag.Value())) // Will fail for binary-encoded tags!
----


@@ -83,7 +83,7 @@ docker-compose -f docker-compose-test.yml down -v
Multi-stage build for ORLY:
**Stage 1: Builder**
- Based on golang:1.21-alpine
- Based on golang:1.25-alpine
- Downloads dependencies
- Builds static binary with `CGO_ENABLED=0`
- Copies libsecp256k1.so for crypto operations
@@ -365,7 +365,7 @@ start_period: 60s # Default is 20-30s
# Pre-pull images
docker pull dgraph/standalone:latest
docker pull golang:1.21-alpine
docker pull golang:1.25-alpine
```
**High memory usage**


@@ -193,7 +193,7 @@ echo "=== All deployment script tests passed! ==="
echo ""
echo "The deployment script appears to be working correctly."
echo "In a real deployment, it would:"
echo " 1. Install Go 1.23.1 to ~/.local/go"
echo " 1. Install Go 1.25.3 to ~/.local/go"
echo " 2. Set up Go environment in ~/.goenv"
echo " 3. Install build dependencies via ubuntu_install_libsecp256k1.sh"
echo " 4. Build the relay with embedded web UI"


@@ -33,11 +33,11 @@ if [[ ! -x "$BENCHMARK_BIN" ]]; then
echo "Building benchmark binary (pure Go + purego)..."
cd "$REPO_ROOT/cmd/benchmark"
CGO_ENABLED=0 go build -o "$BENCHMARK_BIN" .
# Download libsecp256k1.so from nostr repository (runtime optional)
wget -q https://git.mleku.dev/mleku/nostr/raw/branch/main/crypto/p8k/libsecp256k1.so \
-O "$(dirname "$BENCHMARK_BIN")/libsecp256k1.so" 2>/dev/null || \
echo "Warning: Failed to download libsecp256k1.so (optional for performance)"
chmod +x "$(dirname "$BENCHMARK_BIN")/libsecp256k1.so" 2>/dev/null || true
# Copy libsecp256k1.so from repo root (runtime optional)
if [[ -f "$REPO_ROOT/libsecp256k1.so" ]]; then
cp "$REPO_ROOT/libsecp256k1.so" "$(dirname "$BENCHMARK_BIN")/"
chmod +x "$(dirname "$BENCHMARK_BIN")/libsecp256k1.so" 2>/dev/null || true
fi
cd "$REPO_ROOT"
fi


@@ -197,13 +197,12 @@ build_application() {
log_info "Building binary in current directory (pure Go + purego)..."
CGO_ENABLED=0 go build -o "$BINARY_NAME"
# Download libsecp256k1.so from nostr repository (optional, for runtime performance)
log_info "Downloading libsecp256k1.so from nostr repository..."
if wget -q https://git.mleku.dev/mleku/nostr/raw/branch/main/crypto/p8k/libsecp256k1.so -O libsecp256k1.so; then
# Verify libsecp256k1.so exists in repo (used by purego for runtime crypto)
if [[ -f "./libsecp256k1.so" ]]; then
chmod +x libsecp256k1.so
log_success "Downloaded libsecp256k1.so successfully (runtime optional)"
log_success "Found libsecp256k1.so in repository"
else
log_warning "Failed to download libsecp256k1.so - relay will still work but may have slower crypto"
log_warning "libsecp256k1.so not found in repo - relay will still work but may have slower crypto"
fi
if [[ -f "./$BINARY_NAME" ]]; then


@@ -3,9 +3,8 @@
# libsecp256k1 is loaded dynamically at runtime if available
export CGO_ENABLED=0
# Download libsecp256k1.so from nostr repository if not present
if [ ! -f "libsecp256k1.so" ]; then
wget -q https://git.mleku.dev/mleku/nostr/raw/branch/main/crypto/p8k/libsecp256k1.so -O libsecp256k1.so 2>/dev/null || true
# Verify libsecp256k1.so exists in repo (should be at repo root)
if [ -f "libsecp256k1.so" ]; then
chmod +x libsecp256k1.so 2>/dev/null || true
fi


@@ -3,12 +3,10 @@
# libsecp256k1 is loaded dynamically at runtime if available
export CGO_ENABLED=0
# Download libsecp256k1.so from nostr repository if not present
# Verify libsecp256k1.so exists in repo (should be at repo root)
if [ ! -f "libsecp256k1.so" ]; then
echo "Downloading libsecp256k1.so from nostr repository..."
wget -q https://git.mleku.dev/mleku/nostr/raw/branch/main/crypto/p8k/libsecp256k1.so -O libsecp256k1.so || {
echo "Warning: Failed to download libsecp256k1.so - tests may fail"
}
echo "Warning: libsecp256k1.so not found in repo - tests may use fallback crypto"
else
chmod +x libsecp256k1.so 2>/dev/null || true
fi