Compare commits

...

49 Commits

Author SHA1 Message Date
199f922208 Refactor deletion checks and error handling; bump version to v0.6.4.
Some checks failed
Go / build (push) Has been cancelled
2025-09-21 18:15:27 +01:00
405e223aa6 implement delete events 2025-09-21 18:06:11 +01:00
fc3a89a309 Remove unused JavaScript file index-tha189jf.js from dist directory.
- Cleaned up the `app/web/dist/` directory by deleting an unreferenced and outdated build artifact.
- Maintained a lean and organized repository structure.
2025-09-21 17:17:31 +01:00
ba8166da07 Remove unused JavaScript file index-wnwvj11w.js from dist directory.
- Cleaned up the `app/web/dist/` directory by deleting an unreferenced and outdated build artifact.
- Maintained a lean and organized repository structure.
2025-09-21 17:17:15 +01:00
3e3af08644 Remove unused JavaScript file index-wnwvj11w.js from dist directory.
Some checks failed
Go / build (push) Has been cancelled
- Cleaned up the `app/web/dist/` directory by deleting an unreferenced and outdated build artifact.
- Maintained a lean and organized repository structure.
2025-09-21 16:39:45 +01:00
fbdf565bf7 Remove unused JavaScript file index-sskmjaqz.js from dist directory.
- Cleaned up the `app/web/dist/` directory by deleting an unreferenced and outdated build artifact.
- Maintained a lean and organized repository structure.
2025-09-21 16:33:23 +01:00
14b6960070 Add admin-only "All Events Log" feature with WebSocket integration.
Some checks failed
Go / build (push) Has been cancelled
- Implemented an "All Events Log" section accessible only to admin users.
- Added WebSocket-based data fetching to retrieve all events from the relay.
- Included profile caching and metadata fetching for event authors.
- Updated UI components to display events with expandable raw JSON details.
- Adjusted CSS for avatar sizes and improved layout.
- Refactored logout logic to reset all event states.
2025-09-21 16:31:06 +01:00
f9896e52ea use websockets for events log 2025-09-21 16:12:10 +01:00
ad7ca69964 Bump version to v0.6.1 for patch release.
Some checks failed
Go / build (push) Has been cancelled
2025-09-21 14:39:23 +01:00
facf03783f Remove outdated CSS and JavaScript files from dist directory.
- Deleted `index-zhtd763e.css` and `index-zqddcpy5.js` to streamline the build artifacts.
- Simplified repository by removing unused generated files to maintain a clean structure.
2025-09-21 14:36:49 +01:00
a5b6943320 Bump version to v0.6.0 for upcoming release.
Some checks failed
Go / build (push) Has been cancelled
2025-09-21 11:56:32 +01:00
1fe0a395be Add minimal local build outputs for streamlined dist integration.
- Introduced `index-zhtd763e.css` with a tailored CSS rule set for performance optimization.
- Added `index-zqddcpy5.js` containing essential JavaScript for React app functionality and improved compatibility.
2025-09-21 11:51:20 +01:00
92b3716a61 Remove dist directory and streamline build artifacts.
- Deleted `index.css`, `index.js`, and `index.html` from `app/web/dist/`.
- Cleared unused build artifacts to maintain a lean repository structure.
2025-09-21 11:46:59 +01:00
5c05d741d9 Replace remote Tailwind CSS with a local minimal subset; refine .gitignore and dist structure.
- Added a minimal `tailwind.min.css` with utilities tailored to app needs (`app/web/dist/`).
- Updated `.gitignore` to include specific `dist/` paths while maintaining clean build artifacts.
- Added local `dist` files (`index.css`, `index.js`) for better control over UI styling and build outputs.
2025-09-21 11:34:08 +01:00
9a1bbbafce Refine login view styling and update authentication text.
- Updated `App.jsx` to improve layout with centered flexbox and dynamic width.
- Adjusted login text for better clarity: "Authenticate" replaces "Connect".
2025-09-21 11:28:35 +01:00
2fd3828010 Refine login view styling and update authentication text.
- Updated `App.jsx` to improve layout with centered flexbox and dynamic width.
- Adjusted login text for better clarity: "Authenticate" replaces "Connect".
2025-09-21 10:38:25 +01:00
24b742bd20 Enable dev mode for React app with proxy support; refine build, styles, and UI.
- Adjusted `package.json` scripts for Bun dev server and build flow.
- Added `dev.html` for standalone web development with hot-reload enabled.
- Introduced `WebDisableEmbedded` and `WebDevProxyURL` configurations to support proxying non-API paths.
- Refactored server logic to handle reverse proxy for development mode.
- Updated `App.jsx` structure, styles, and layout for responsiveness and dynamic padding.
- Improved login interface with logo support and cleaner design.
- Enhanced development flow documentation in `README.md`.
2025-09-21 10:29:17 +01:00
6f71b95734 Handle EOF case in text encoder helper loop.
- Added check for `len(rem) == 0` to return `io.EOF` when no remaining input is available.
2025-09-21 03:00:29 +01:00
82665444f4 Add /api/auth/logout endpoint and improve auth flow.
- Implemented `handleAuthLogout` to support user logout by clearing session cookies.
- Improved `/api/auth/status` with authentication cookie validation for persistent login state.
- Enhanced `App.jsx` to prevent UI flash during auth status checks and streamline logout flow.
- Refined user profile handling and permission fetch logic for better reliability.
2025-09-20 20:30:14 +01:00
effeae4495 Replace remote Tailwind CSS with a minimal local build; refine build script and UI styling.
- Added `tailwind.min.css` tailored to current app requirements to reduce external dependencies.
- Updated `index.html` to use the local Tailwind CSS file.
- Improved `package.json` `build` script to ensure `dist` directory creation and inclusion of all `public/` assets.
- Refined CSS and layout in `App.jsx` for better consistency and responsiveness.
2025-09-20 20:24:04 +01:00
6b38291bf9 Add CORS headers and update UI for enhanced user profile handling.
- Added CORS support in server responses for cross-origin requests (`Access-Control-Allow-Origin`, etc.).
- Improved header panel behavior with a sticky position and refined CSS styling.
- Integrated profile data fetching (Kind 0 metadata) for user personalization.
- Enhanced login functionality to support dynamic profile display based on fetched metadata.
- Updated `index.html` to include Tailwind CSS for better design consistency.
2025-09-20 19:54:27 +01:00
0b69ea6d80 Embed React app and add new user authentication interface.
- Integrated a React-based web frontend into the Go server using the `embed` package, serving it from `/`.
- Added build and development scripts utilizing Bun for the React app (`package.json`, `README.md`).
- Enhanced auth interface to support better user experience and permissions (`App.jsx`, CSS updates).
- Refactored `/api/auth/login` to serve React UI, removing hardcoded HTML template.
- Implemented `/api/permissions/` with ACL support for user access management.
2025-09-20 19:03:25 +01:00
9c85dca598 Add graceful termination logging on signal triggers.
- Added explicit `log.I.F("exiting")` calls on signal handling for better visibility during shutdown.
- Ensured immediate return after logging to prevent further processing.
2025-09-20 17:59:06 +01:00
0d8c518896 Add user authentication interface with Nostr relay support.
- Implemented basic UI for login with NIP-07 extensions or private keys.
- Added `/api/auth/` endpoints for challenge generation, login handling, and status checking.
- Introduced challenge storage with thread-safe management.
- Enhanced `Server` structure to support authentication and user interface workflows.
- Improved HTML/CSS for a responsive and user-friendly experience.
2025-09-20 14:17:41 +01:00
20fbce9263 Add spider functionality for relay crawling, marker management, and new SpiderMode config.
- Introduced the `spider` package for relay crawling, including periodic tasks and one-time sync capabilities.
- Added `SetMarker`, `GetMarker`, `HasMarker`, and `DeleteMarker` methods in the database for marker management.
- Updated configuration with `SpiderMode` and `SpiderFrequency` options to enable and customize spider behavior.
- Integrated `spider` initialization into the main application flow.
- Improved tag handling, NIP-70 compliance, and protected tag validation in event processing.
- Removed unnecessary logging and replaced `errorf` with `fmt.Errorf` for better error handling.
- Incremented version to `v0.5.0`.
2025-09-20 13:46:22 +01:00
4532def9f5 Remove large outdated stacktrace.txt log file.
Some checks failed
Go / build (push) Has been cancelled
- Deleted auto-generated `stacktrace.txt` file to reduce repository clutter and maintain relevance of retained files.
2025-09-20 12:07:55 +01:00
90f21fbcd1 Add detailed benchmark results for multiple relays.
- Included results for `relayer-basic`, `strfry`, and `nostr-rs-relay` relay benchmarks.
- Comprehensive performance metrics added for throughput, latency, query, and concurrent operations.
- Reports saved as plain text and AsciiDoc formats.
2025-09-20 12:06:57 +01:00
81a40c04e5 Refactor publishCacheEvents for concurrent publishing and optimize database access.
- Updated `publishCacheEvents` to utilize multiple concurrent connections for event publishing.
- Introduced worker-based architecture leveraging `runtime.NumCPU` for parallel uploads.
- Optimized database fetch logic in `FetchEventsBySerials` for improved maintainability and performance.
- Bumped version to `v0.4.8`.
2025-09-20 04:10:59 +01:00
58a9e83038 Refactor publishCacheEvents and publisherWorker to use fire-and-forget publishing.
- Replaced `Publish` calls with direct event envelope writes, removing wait-for-OK behavior.
- Simplified `publishCacheEvents` logic, removed per-publish timeout contexts, and updated return values.
- Adjusted log messages to reflect "sent" instead of "published."
- Enhanced relay stability with delays between successful publishes.
- Removed unused `publishTimeout` parameter from `publisherWorker` and main logic.
2025-09-20 03:48:50 +01:00
22cde96f3f Remove bufpool references and unused imports, optimize memory operations.
- Removed `bufpool` usage throughout `tag`, `tags`, and `event` packages for memory efficiency.
- Replaced in-place buffer modifications with independent, deep-copied allocations to prevent unintended mutations.
- Added new `Clone` method for deep copying `event.E`.
- Ensured valid JSON emission for nil `Tags` in `event` marshaling.
- Introduced `cmd/stresstest` for relay stress-testing with detailed workload generation and query simulation.
2025-09-19 16:17:44 +01:00
49a172820a Remove unused dependencies and update lol.mleku.dev to v1.0.3. 2025-09-15 05:08:16 +01:00
9d2bf173fe Bump lol.mleku.dev to v1.0.3. 2025-09-15 05:05:52 +01:00
e521b788fb Delete outdated benchmark reports and results.
Removed old benchmark reports and detailed logs from the repository to clean up unnecessary files. These reports appear to be auto-generated and no longer relevant for ongoing development.
2025-09-15 05:00:19 +01:00
f5cce92bf8 Handle nil receiver S in ContainsAny method within tags.go. 2025-09-13 21:23:59 +01:00
2ccdc5e756 Bump version to v0.4.7. 2025-09-13 21:19:01 +01:00
173a34784f Remove redundant logging in acl/follows.go and get-indexes-from-filter.go, handle nil Tags in event.go. 2025-09-13 21:17:53 +01:00
a75e0994f9 Add debug logging for admins in ACL follows evaluation logic 2025-09-13 21:08:29 +01:00
60e925d748 added profiler tooling to enable automated generation of profile reports 2025-09-13 21:05:30 +01:00
3d2f970f04 added profiler tooling to enable automated generation of profile reports 2025-09-13 20:49:25 +01:00
935eb1fb0b added profiler tooling to enable automated generation of profile reports 2025-09-13 13:06:52 +01:00
509aac3819 Remove unused ACL integration and related configuration logic, bump version to v0.4.6.
Some checks failed
Go / build (push) Has been cancelled
2025-09-13 11:33:01 +01:00
a9893a0918 Bump version to v0.4.5.
Some checks failed
Go / build (push) Has been cancelled
2025-09-13 09:08:02 +01:00
8290e1ae0e Refactor error handling in publisher.go, comment redundant logging in acl/follows.go, and improve error handling for connection rejections (403). 2025-09-13 09:07:33 +01:00
fc546ddc0b Replace errorf with errors and fmt.Errorf, remove redundant logging across database operations, minimize unused imports, and improve concurrent event delivery logic. Added CPU utilization optimization in the main runtime configuration. 2025-09-13 00:47:53 +01:00
c45276ef08 Optimize deletion timestamp lookup by replacing sorting logic with linear scan to improve performance. Add profiling support with cmd/benchmark/profile.sh, introduce network load testing in benchmarks, and update benchmark reports with additional latency metrics (P90, bottom 10%). 2025-09-12 23:47:53 +01:00
fefa4d202e completed basic benchmark 2025-09-12 21:30:27 +01:00
bf062a4a46 Update default ACL mode to none in config. 2025-09-12 18:44:22 +01:00
246591b60b fix issue with memory allocation when marshaling events 2025-09-12 16:59:39 +01:00
098595717f Integrate ACL with publishers for background event dispatch, ensure proper buffer adjustments in event encoding, and enhance follows sync with event delivery logic. 2025-09-12 16:36:22 +01:00
79 changed files with 8555 additions and 15567 deletions

View File

@@ -92,4 +92,6 @@ A good typical example:
use the source of the relay-tester to help guide what expectations the test has,
and use context7 for information about the nostr protocol, and use additional
log statements to help locate the cause of bugs
log statements to help locate the cause of bugs
always use Go v1.25.1 for everything involving Go

18
.dockerignore Normal file
View File

@@ -0,0 +1,18 @@
# Exclude heavy or host-specific data from Docker build context
# Fixes: failed to solve: error from sender: open cmd/benchmark/data/postgres: permission denied
# Benchmark data and reports (mounted at runtime via volumes)
cmd/benchmark/data/
cmd/benchmark/reports/
# VCS and OS cruft
.git
.gitignore
**/.DS_Store
**/Thumbs.db
# Go build cache and binaries
**/bin/
**/dist/
**/build/
**/*.out

10
.gitignore vendored
View File

@@ -30,7 +30,7 @@ node_modules/**
/go.work.sum
/secp256k1/
cmd/benchmark/external
cmd/benchmark/data
# But not these files...
!/.gitignore
!*.go
@@ -91,6 +91,14 @@ cmd/benchmark/external
!Dockerfile*
!strfry.conf
!config.toml
!.dockerignore
!*.jsx
!*.tsx
!app/web/dist
!/app/web/dist
!/app/web/dist/*
!/app/web/dist/**
!bun.lock
# ...even if they are in subdirectories
!*/
/blocklist.json

7
.idea/jsLibraryMappings.xml generated Normal file
View File

@@ -0,0 +1,7 @@
<?xml version="1.0" encoding="UTF-8"?>
<project version="4">
<component name="JavaScriptLibraryMappings">
<file url="file://$PROJECT_DIR$/../github.com/jumble" libraries="{jumble/node_modules}" />
<file url="file://$PROJECT_DIR$/../github.com/mleku/jumble" libraries="{jumble/node_modules}" />
</component>
</project>

View File

@@ -23,18 +23,29 @@ import (
// and default values. It defines parameters for app behaviour, storage
// locations, logging, and network settings used across the relay service.
type C struct {
AppName string `env:"ORLY_APP_NAME" usage:"set a name to display on information about the relay" default:"ORLY"`
DataDir string `env:"ORLY_DATA_DIR" usage:"storage location for the event store" default:"~/.local/share/ORLY"`
Listen string `env:"ORLY_LISTEN" default:"0.0.0.0" usage:"network listen address"`
Port int `env:"ORLY_PORT" default:"3334" usage:"port to listen on"`
LogLevel string `env:"ORLY_LOG_LEVEL" default:"info" usage:"relay log level: fatal error warn info debug trace"`
DBLogLevel string `env:"ORLY_DB_LOG_LEVEL" default:"info" usage:"database log level: fatal error warn info debug trace"`
LogToStdout bool `env:"ORLY_LOG_TO_STDOUT" default:"false" usage:"log to stdout instead of stderr"`
Pprof string `env:"ORLY_PPROF" usage:"enable pprof in modes: cpu,memory,allocation"`
IPWhitelist []string `env:"ORLY_IP_WHITELIST" usage:"comma-separated list of IP addresses to allow access from, matches on prefixes to allow private subnets, eg 10.0.0 = 10.0.0.0/8"`
Admins []string `env:"ORLY_ADMINS" usage:"comma-separated list of admin npubs"`
Owners []string `env:"ORLY_OWNERS" usage:"comma-separated list of owner npubs, who have full control of the relay for wipe and restart and other functions"`
ACLMode string `env:"ORLY_ACL_MODE" usage:"ACL mode: follows,none" default:"follows"`
AppName string `env:"ORLY_APP_NAME" usage:"set a name to display on information about the relay" default:"ORLY"`
DataDir string `env:"ORLY_DATA_DIR" usage:"storage location for the event store" default:"~/.local/share/ORLY"`
Listen string `env:"ORLY_LISTEN" default:"0.0.0.0" usage:"network listen address"`
Port int `env:"ORLY_PORT" default:"3334" usage:"port to listen on"`
HealthPort int `env:"ORLY_HEALTH_PORT" default:"0" usage:"optional health check HTTP port; 0 disables"`
EnableShutdown bool `env:"ORLY_ENABLE_SHUTDOWN" default:"false" usage:"if true, expose /shutdown on the health port to gracefully stop the process (for profiling)"`
LogLevel string `env:"ORLY_LOG_LEVEL" default:"info" usage:"relay log level: fatal error warn info debug trace"`
DBLogLevel string `env:"ORLY_DB_LOG_LEVEL" default:"info" usage:"database log level: fatal error warn info debug trace"`
LogToStdout bool `env:"ORLY_LOG_TO_STDOUT" default:"false" usage:"log to stdout instead of stderr"`
Pprof string `env:"ORLY_PPROF" usage:"enable pprof in modes: cpu,memory,allocation,heap,block,goroutine,threadcreate,mutex"`
PprofPath string `env:"ORLY_PPROF_PATH" usage:"optional directory to write pprof profiles into (inside container); default is temporary dir"`
PprofHTTP bool `env:"ORLY_PPROF_HTTP" default:"false" usage:"if true, expose net/http/pprof on port 6060"`
OpenPprofWeb bool `env:"ORLY_OPEN_PPROF_WEB" default:"false" usage:"if true, automatically open the pprof web viewer when profiling is enabled"`
IPWhitelist []string `env:"ORLY_IP_WHITELIST" usage:"comma-separated list of IP addresses to allow access from, matches on prefixes to allow private subnets, eg 10.0.0 = 10.0.0.0/8"`
Admins []string `env:"ORLY_ADMINS" usage:"comma-separated list of admin npubs"`
Owners []string `env:"ORLY_OWNERS" usage:"comma-separated list of owner npubs, who have full control of the relay for wipe and restart and other functions"`
ACLMode string `env:"ORLY_ACL_MODE" usage:"ACL mode: follows,none" default:"none"`
SpiderMode string `env:"ORLY_SPIDER_MODE" usage:"spider mode: none,follow" default:"none"`
SpiderFrequency time.Duration `env:"ORLY_SPIDER_FREQUENCY" usage:"spider frequency in seconds" default:"1h"`
// Web UI and dev mode settings
WebDisableEmbedded bool `env:"ORLY_WEB_DISABLE" default:"false" usage:"disable serving the embedded web UI; useful for hot-reload during development"`
WebDevProxyURL string `env:"ORLY_WEB_DEV_PROXY_URL" usage:"when ORLY_WEB_DISABLE is true, reverse-proxy non-API paths to this dev server URL (e.g. http://localhost:5173)"`
}
// New creates and initializes a new configuration object for the relay

View File

@@ -145,12 +145,10 @@ func (l *Listener) HandleDelete(env *eventenvelope.Submission) (err error) {
if ev, err = l.FetchEventBySerial(s); chk.E(err) {
continue
}
// check that the author is the same as the signer of the
// delete, for the e tag case the author is the signer of
// the event.
if !utils.FastEqual(env.E.Pubkey, ev.Pubkey) {
// allow deletion if the signer is the author OR an admin/owner
if !(ownerDelete || utils.FastEqual(env.E.Pubkey, ev.Pubkey)) {
log.W.F(
"HandleDelete: attempted deletion of event %s by different user - delete pubkey=%s, event pubkey=%s",
"HandleDelete: attempted deletion of event %s by unauthorized user - delete pubkey=%s, event pubkey=%s",
hex.Enc(ev.ID), hex.Enc(env.E.Pubkey),
hex.Enc(ev.Pubkey),
)

View File

@@ -8,13 +8,13 @@ import (
"lol.mleku.dev/chk"
"lol.mleku.dev/log"
acl "next.orly.dev/pkg/acl"
"next.orly.dev/pkg/acl"
"next.orly.dev/pkg/encoders/envelopes/authenvelope"
"next.orly.dev/pkg/encoders/envelopes/eventenvelope"
"next.orly.dev/pkg/encoders/envelopes/okenvelope"
"next.orly.dev/pkg/encoders/kind"
"next.orly.dev/pkg/encoders/reason"
utils "next.orly.dev/pkg/utils"
"next.orly.dev/pkg/utils"
)
func (l *Listener) HandleEvent(msg []byte) (err error) {
@@ -103,6 +103,20 @@ func (l *Listener) HandleEvent(msg []byte) (err error) {
// user has write access or better, continue
// log.D.F("user has %s access", accessLevel)
}
// check for protected tag (NIP-70)
protectedTag := env.E.Tags.GetFirst([]byte("-"))
if protectedTag != nil && acl.Registry.Active.Load() != "none" {
// check that the pubkey of the event matches the authed pubkey
if !utils.FastEqual(l.authedPubkey.Load(), env.E.Pubkey) {
if err = Ok.Blocked(
l, env,
"protected tag may only be published by user authed to the same pubkey",
); chk.E(err) {
return
}
return
}
}
// if the event is a delete, process the delete
if env.E.Kind == kind.EventDeletion.K {
if err = l.HandleDelete(env); err != nil {
@@ -151,7 +165,9 @@ func (l *Listener) HandleEvent(msg []byte) (err error) {
return
}
// Deliver the event to subscribers immediately after sending OK response
l.publishers.Deliver(env.E)
// Clone the event to prevent corruption when the original is freed
clonedEvent := env.E.Clone()
go l.publishers.Deliver(clonedEvent)
log.D.F("saved event %0x", env.E.ID)
var isNewFromAdmin bool
for _, admin := range l.Admins {

View File

@@ -1,9 +1,9 @@
package app
import (
"fmt"
"lol.mleku.dev/chk"
"lol.mleku.dev/errorf"
"lol.mleku.dev/log"
"next.orly.dev/pkg/encoders/envelopes"
"next.orly.dev/pkg/encoders/envelopes/authenvelope"
"next.orly.dev/pkg/encoders/envelopes/closeenvelope"
@@ -13,7 +13,7 @@ import (
)
func (l *Listener) HandleMessage(msg []byte, remote string) {
log.D.F("%s received message:\n%s", remote, msg)
// log.D.F("%s received message:\n%s", remote, msg)
var err error
var t string
var rem []byte
@@ -32,7 +32,7 @@ func (l *Listener) HandleMessage(msg []byte, remote string) {
// log.D.F("authenvelope: %s %s", remote, rem)
err = l.HandleAuth(rem)
default:
err = errorf.E("unknown envelope type %s\n%s", t, rem)
err = fmt.Errorf("unknown envelope type %s\n%s", t, rem)
}
}
if err != nil {
@@ -43,7 +43,7 @@ func (l *Listener) HandleMessage(msg []byte, remote string) {
// )
// },
// )
if err = noticeenvelope.NewFrom(err.Error()).Write(l); chk.E(err) {
if err = noticeenvelope.NewFrom(err.Error()).Write(l); err != nil {
return
}
}

View File

@@ -9,7 +9,7 @@ import (
"github.com/dgraph-io/badger/v4"
"lol.mleku.dev/chk"
"lol.mleku.dev/log"
acl "next.orly.dev/pkg/acl"
"next.orly.dev/pkg/acl"
"next.orly.dev/pkg/encoders/envelopes/authenvelope"
"next.orly.dev/pkg/encoders/envelopes/closedenvelope"
"next.orly.dev/pkg/encoders/envelopes/eoseenvelope"
@@ -22,21 +22,21 @@ import (
"next.orly.dev/pkg/encoders/kind"
"next.orly.dev/pkg/encoders/reason"
"next.orly.dev/pkg/encoders/tag"
utils "next.orly.dev/pkg/utils"
"next.orly.dev/pkg/utils"
"next.orly.dev/pkg/utils/normalize"
"next.orly.dev/pkg/utils/pointers"
)
func (l *Listener) HandleReq(msg []byte) (err error) {
log.T.F("HandleReq: START processing from %s\n%s\n", l.remote, msg)
var rem []byte
// log.T.F("HandleReq: START processing from %s\n%s\n", l.remote, msg)
// var rem []byte
env := reqenvelope.New()
if rem, err = env.Unmarshal(msg); chk.E(err) {
if _, err = env.Unmarshal(msg); chk.E(err) {
return normalize.Error.Errorf(err.Error())
}
if len(rem) > 0 {
log.I.F("REQ extra bytes: '%s'", rem)
}
// if len(rem) > 0 {
// log.I.F("REQ extra bytes: '%s'", rem)
// }
// send a challenge to the client to auth if an ACL is active
if acl.Registry.Active.Load() != "none" {
if err = authenvelope.NewChallengeWith(l.challenge.Load()).
@@ -57,59 +57,59 @@ func (l *Listener) HandleReq(msg []byte) (err error) {
return
default:
// user has read access or better, continue
log.D.F("user has %s access", accessLevel)
// log.D.F("user has %s access", accessLevel)
}
var events event.S
for _, f := range *env.Filters {
idsLen := 0
kindsLen := 0
authorsLen := 0
tagsLen := 0
if f != nil {
if f.Ids != nil {
idsLen = f.Ids.Len()
}
if f.Kinds != nil {
kindsLen = f.Kinds.Len()
}
if f.Authors != nil {
authorsLen = f.Authors.Len()
}
if f.Tags != nil {
tagsLen = f.Tags.Len()
}
}
log.T.F(
"REQ %s: filter summary ids=%d kinds=%d authors=%d tags=%d",
env.Subscription, idsLen, kindsLen, authorsLen, tagsLen,
)
// idsLen := 0
// kindsLen := 0
// authorsLen := 0
// tagsLen := 0
// if f != nil {
// if f.Ids != nil {
// idsLen = f.Ids.Len()
// }
// if f.Kinds != nil {
// kindsLen = f.Kinds.Len()
// }
// if f.Authors != nil {
// authorsLen = f.Authors.Len()
// }
// if f.Tags != nil {
// tagsLen = f.Tags.Len()
// }
// }
// log.T.F(
// "REQ %s: filter summary ids=%d kinds=%d authors=%d tags=%d",
// env.Subscription, idsLen, kindsLen, authorsLen, tagsLen,
// )
if f != nil && f.Authors != nil && f.Authors.Len() > 0 {
var authors []string
for _, a := range f.Authors.T {
authors = append(authors, hex.Enc(a))
}
log.T.F("REQ %s: authors=%v", env.Subscription, authors)
// log.T.F("REQ %s: authors=%v", env.Subscription, authors)
}
if f != nil && f.Kinds != nil && f.Kinds.Len() > 0 {
log.T.F("REQ %s: kinds=%v", env.Subscription, f.Kinds.ToUint16())
}
if f != nil && f.Ids != nil && f.Ids.Len() > 0 {
var ids []string
for _, id := range f.Ids.T {
ids = append(ids, hex.Enc(id))
}
var lim any
if pointers.Present(f.Limit) {
lim = *f.Limit
} else {
lim = nil
}
log.T.F(
"REQ %s: ids filter count=%d ids=%v limit=%v", env.Subscription,
f.Ids.Len(), ids, lim,
)
}
if pointers.Present(f.Limit) {
// if f != nil && f.Kinds != nil && f.Kinds.Len() > 0 {
// log.T.F("REQ %s: kinds=%v", env.Subscription, f.Kinds.ToUint16())
// }
// if f != nil && f.Ids != nil && f.Ids.Len() > 0 {
// var ids []string
// for _, id := range f.Ids.T {
// ids = append(ids, hex.Enc(id))
// }
// // var lim any
// // if pointers.Present(f.Limit) {
// // lim = *f.Limit
// // } else {
// // lim = nil
// // }
// // log.T.F(
// // "REQ %s: ids filter count=%d ids=%v limit=%v", env.Subscription,
// // f.Ids.Len(), ids, lim,
// // )
// }
if f != nil && pointers.Present(f.Limit) {
if *f.Limit == 0 {
continue
}
@@ -119,15 +119,15 @@ func (l *Listener) HandleReq(msg []byte) (err error) {
context.Background(), 30*time.Second,
)
defer cancel()
log.T.F(
"HandleReq: About to QueryEvents for %s, main context done: %v",
l.remote, l.ctx.Err() != nil,
)
// log.T.F(
// "HandleReq: About to QueryEvents for %s, main context done: %v",
// l.remote, l.ctx.Err() != nil,
// )
if events, err = l.QueryEvents(queryCtx, f); chk.E(err) {
if errors.Is(err, badger.ErrDBClosed) {
return
}
log.T.F("HandleReq: QueryEvents error for %s: %v", l.remote, err)
// log.T.F("HandleReq: QueryEvents error for %s: %v", l.remote, err)
err = nil
}
defer func() {
@@ -135,23 +135,23 @@ func (l *Listener) HandleReq(msg []byte) (err error) {
ev.Free()
}
}()
log.T.F(
"HandleReq: QueryEvents completed for %s, found %d events",
l.remote, len(events),
)
// log.T.F(
// "HandleReq: QueryEvents completed for %s, found %d events",
// l.remote, len(events),
// )
}
var tmp event.S
privCheck:
for _, ev := range events {
if kind.IsPrivileged(ev.Kind) &&
accessLevel != "admin" { // admins can see all events
log.T.C(
func() string {
return fmt.Sprintf(
"checking privileged event %0x", ev.ID,
)
},
)
// log.T.C(
// func() string {
// return fmt.Sprintf(
// "checking privileged event %0x", ev.ID,
// )
// },
// )
pk := l.authedPubkey.Load()
if pk == nil {
continue
@@ -175,26 +175,26 @@ privCheck:
continue
}
if utils.FastEqual(pt, pk) {
log.T.C(
func() string {
return fmt.Sprintf(
"privileged event %s is for logged in pubkey %0x",
ev.ID, pk,
)
},
)
// log.T.C(
// func() string {
// return fmt.Sprintf(
// "privileged event %s is for logged in pubkey %0x",
// ev.ID, pk,
// )
// },
// )
tmp = append(tmp, ev)
continue privCheck
}
}
log.T.C(
func() string {
return fmt.Sprintf(
"privileged event %s does not contain the logged in pubkey %0x",
ev.ID, pk,
)
},
)
// log.T.C(
// func() string {
// return fmt.Sprintf(
// "privileged event %s does not contain the logged in pubkey %0x",
// ev.ID, pk,
// )
// },
// )
} else {
tmp = append(tmp, ev)
}
@@ -202,19 +202,19 @@ privCheck:
events = tmp
seen := make(map[string]struct{})
for _, ev := range events {
log.D.C(
func() string {
return fmt.Sprintf(
"REQ %s: sending EVENT id=%s kind=%d", env.Subscription,
hex.Enc(ev.ID), ev.Kind,
)
},
)
log.T.C(
func() string {
return fmt.Sprintf("event:\n%s\n", ev.Serialize())
},
)
// log.D.C(
// func() string {
// return fmt.Sprintf(
// "REQ %s: sending EVENT id=%s kind=%d", env.Subscription,
// hex.Enc(ev.ID), ev.Kind,
// )
// },
// )
// log.T.C(
// func() string {
// return fmt.Sprintf("event:\n%s\n", ev.Serialize())
// },
// )
var res *eventenvelope.Result
if res, err = eventenvelope.NewResultWith(
env.Subscription, ev,
@@ -229,7 +229,7 @@ privCheck:
}
// write the EOSE to signal to the client that all events found have been
// sent.
log.T.F("sending EOSE to %s", l.remote)
// log.T.F("sending EOSE to %s", l.remote)
if err = eoseenvelope.NewFrom(env.Subscription).
Write(l); chk.E(err) {
return
@@ -237,10 +237,10 @@ privCheck:
// if the query was for just Ids, we know there can't be any more results,
// so cancel the subscription.
cancel := true
log.T.F(
"REQ %s: computing cancel/subscription; events_sent=%d",
env.Subscription, len(events),
)
// log.T.F(
// "REQ %s: computing cancel/subscription; events_sent=%d",
// env.Subscription, len(events),
// )
var subbedFilters filter.S
for _, f := range *env.Filters {
if f.Ids.Len() < 1 {
@@ -255,10 +255,10 @@ privCheck:
}
notFounds = append(notFounds, id)
}
log.T.F(
"REQ %s: ids outstanding=%d of %d", env.Subscription,
len(notFounds), f.Ids.Len(),
)
// log.T.F(
// "REQ %s: ids outstanding=%d of %d", env.Subscription,
// len(notFounds), f.Ids.Len(),
// )
// if all were found, don't add to subbedFilters
if len(notFounds) == 0 {
continue
@@ -295,6 +295,6 @@ privCheck:
return
}
}
log.T.F("HandleReq: COMPLETED processing from %s", l.remote)
// log.T.F("HandleReq: COMPLETED processing from %s", l.remote)
return
}

View File

@@ -19,7 +19,7 @@ const (
DefaultWriteWait = 10 * time.Second
DefaultPongWait = 60 * time.Second
DefaultPingWait = DefaultPongWait / 2
DefaultReadTimeout = 3 * time.Second // Read timeout to detect stalled connections
DefaultReadTimeout = 7 * time.Second // Read timeout to detect stalled connections
DefaultWriteTimeout = 3 * time.Second
DefaultMaxMessageSize = 1 * units.Mb
@@ -97,7 +97,7 @@ whitelist:
}
var typ websocket.MessageType
var msg []byte
log.T.F("waiting for message from %s", remote)
// log.T.F("waiting for message from %s", remote)
// Create a read context with timeout to prevent indefinite blocking
readCtx, readCancel := context.WithTimeout(ctx, DefaultReadTimeout)
@@ -152,7 +152,7 @@ whitelist:
writeCancel()
continue
}
log.T.F("received message from %s: %s", remote, string(msg))
// log.T.F("received message from %s: %s", remote, string(msg))
go listener.HandleMessage(msg, remote)
}
}

View File

@@ -45,6 +45,8 @@ func Run(
publishers: publish.New(NewPublisher(ctx)),
Admins: adminKeys,
}
// Initialize the user interface
l.UserInterface()
addr := fmt.Sprintf("%s:%d", cfg.Listen, cfg.Port)
log.I.F("starting listener on http://%s", addr)
go func() {

View File

@@ -15,7 +15,7 @@ import (
"next.orly.dev/pkg/encoders/kind"
"next.orly.dev/pkg/interfaces/publisher"
"next.orly.dev/pkg/interfaces/typer"
utils "next.orly.dev/pkg/utils"
"next.orly.dev/pkg/utils"
)
const Type = "socketapi"
@@ -101,17 +101,17 @@ func (p *P) Receive(msg typer.T) {
if m.Cancel {
if m.Id == "" {
p.removeSubscriber(m.Conn)
log.D.F("removed listener %s", m.remote)
// log.D.F("removed listener %s", m.remote)
} else {
p.removeSubscriberId(m.Conn, m.Id)
log.D.C(
func() string {
return fmt.Sprintf(
"removed subscription %s for %s", m.Id,
m.remote,
)
},
)
// log.D.C(
// func() string {
// return fmt.Sprintf(
// "removed subscription %s for %s", m.Id,
// m.remote,
// )
// },
// )
}
return
}
@@ -123,27 +123,27 @@ func (p *P) Receive(msg typer.T) {
S: m.Filters, remote: m.remote, AuthedPubkey: m.AuthedPubkey,
}
p.Map[m.Conn] = subs
log.D.C(
func() string {
return fmt.Sprintf(
"created new subscription for %s, %s",
m.remote,
m.Filters.Marshal(nil),
)
},
)
// log.D.C(
// func() string {
// return fmt.Sprintf(
// "created new subscription for %s, %s",
// m.remote,
// m.Filters.Marshal(nil),
// )
// },
// )
} else {
subs[m.Id] = Subscription{
S: m.Filters, remote: m.remote, AuthedPubkey: m.AuthedPubkey,
}
log.D.C(
func() string {
return fmt.Sprintf(
"added subscription %s for %s", m.Id,
m.remote,
)
},
)
// log.D.C(
// func() string {
// return fmt.Sprintf(
// "added subscription %s for %s", m.Id,
// m.remote,
// )
// },
// )
}
}
}
@@ -229,7 +229,7 @@ func (p *P) Deliver(ev *event.E) {
if err = d.w.Write(
writeCtx, websocket.MessageText, res.Marshal(nil),
); chk.E(err) {
); err != nil {
// On error, remove the subscriber connection safely
p.removeSubscriber(d.w)
_ = d.w.CloseNow()

View File

@@ -2,13 +2,26 @@ package app
import (
"context"
"encoding/json"
"io"
"log"
"net/http"
"net/http/httputil"
"net/url"
"strconv"
"strings"
"sync"
"time"
"lol.mleku.dev/chk"
"next.orly.dev/app/config"
"next.orly.dev/pkg/acl"
"next.orly.dev/pkg/database"
"next.orly.dev/pkg/encoders/event"
"next.orly.dev/pkg/encoders/filter"
"next.orly.dev/pkg/encoders/hex"
"next.orly.dev/pkg/encoders/tag"
"next.orly.dev/pkg/protocol/auth"
"next.orly.dev/pkg/protocol/publish"
)
@@ -20,26 +33,53 @@ type Server struct {
publishers *publish.S
Admins [][]byte
*database.D
// optional reverse proxy for dev web server
devProxy *httputil.ReverseProxy
// Challenge storage for HTTP UI authentication
challengeMutex sync.RWMutex
challenges map[string][]byte
}
func (s *Server) ServeHTTP(w http.ResponseWriter, r *http.Request) {
// log.T.C(
// func() string {
// return fmt.Sprintf("path %v header %v", r.URL, r.Header)
// },
// )
if r.Header.Get("Upgrade") == "websocket" {
s.HandleWebsocket(w, r)
} else if r.Header.Get("Accept") == "application/nostr+json" {
s.HandleRelayInfo(w, r)
} else {
if s.mux == nil {
http.Error(w, "Upgrade required", http.StatusUpgradeRequired)
} else {
s.mux.ServeHTTP(w, r)
}
// Set CORS headers for all responses
w.Header().Set("Access-Control-Allow-Origin", "*")
w.Header().Set("Access-Control-Allow-Methods", "GET, POST, OPTIONS")
w.Header().Set(
"Access-Control-Allow-Headers", "Content-Type, Authorization",
)
// Handle preflight OPTIONS requests
if r.Method == "OPTIONS" {
w.WriteHeader(http.StatusOK)
return
}
// If this is a websocket request, only intercept the relay root path.
// This allows other websocket paths (e.g., Vite HMR) to be handled by the dev proxy when enabled.
if r.Header.Get("Upgrade") == "websocket" {
if s.mux != nil && s.Config != nil && s.Config.WebDisableEmbedded && s.Config.WebDevProxyURL != "" && r.URL.Path != "/" {
// forward to mux (which will proxy to dev server)
s.mux.ServeHTTP(w, r)
return
}
s.HandleWebsocket(w, r)
return
}
if r.Header.Get("Accept") == "application/nostr+json" {
s.HandleRelayInfo(w, r)
return
}
if s.mux == nil {
http.Error(w, "Upgrade required", http.StatusUpgradeRequired)
return
}
s.mux.ServeHTTP(w, r)
}
func (s *Server) ServiceURL(req *http.Request) (st string) {
host := req.Header.Get("X-Forwarded-Host")
if host == "" {
@@ -70,3 +110,493 @@ func (s *Server) ServiceURL(req *http.Request) (st string) {
}
return proto + "://" + host
}
// UserInterface sets up a basic Nostr NDK interface that allows users to log into the relay user interface
func (s *Server) UserInterface() {
if s.mux == nil {
s.mux = http.NewServeMux()
}
// If dev proxy is configured, initialize it
if s.Config != nil && s.Config.WebDisableEmbedded && s.Config.WebDevProxyURL != "" {
proxyURL := s.Config.WebDevProxyURL
// Add default scheme if missing to avoid: proxy error: unsupported protocol scheme ""
if !strings.Contains(proxyURL, "://") {
proxyURL = "http://" + proxyURL
}
if target, err := url.Parse(proxyURL); !chk.E(err) {
if target.Scheme == "" || target.Host == "" {
// invalid URL, disable proxy
log.Printf(
"invalid ORLY_WEB_DEV_PROXY_URL: %q — disabling dev proxy\n",
s.Config.WebDevProxyURL,
)
} else {
s.devProxy = httputil.NewSingleHostReverseProxy(target)
// Ensure Host header points to upstream for dev servers that care
origDirector := s.devProxy.Director
s.devProxy.Director = func(req *http.Request) {
origDirector(req)
req.Host = target.Host
}
}
}
}
// Initialize challenge storage if not already done
if s.challenges == nil {
s.challengeMutex.Lock()
s.challenges = make(map[string][]byte)
s.challengeMutex.Unlock()
}
// Serve the main login interface (and static assets) or proxy in dev mode
s.mux.HandleFunc("/", s.handleLoginInterface)
// API endpoints for authentication
s.mux.HandleFunc("/api/auth/challenge", s.handleAuthChallenge)
s.mux.HandleFunc("/api/auth/login", s.handleAuthLogin)
s.mux.HandleFunc("/api/auth/status", s.handleAuthStatus)
s.mux.HandleFunc("/api/auth/logout", s.handleAuthLogout)
s.mux.HandleFunc("/api/permissions/", s.handlePermissions)
// Export endpoints
s.mux.HandleFunc("/api/export", s.handleExport)
s.mux.HandleFunc("/api/export/mine", s.handleExportMine)
// Events endpoints
s.mux.HandleFunc("/api/events/mine", s.handleEventsMine)
// Import endpoint (admin only)
s.mux.HandleFunc("/api/import", s.handleImport)
}
// handleLoginInterface serves the main user interface for login
func (s *Server) handleLoginInterface(w http.ResponseWriter, r *http.Request) {
// In dev mode with proxy configured, forward to dev server
if s.Config != nil && s.Config.WebDisableEmbedded && s.devProxy != nil {
s.devProxy.ServeHTTP(w, r)
return
}
// If embedded UI is disabled but no proxy configured, return a helpful message
if s.Config != nil && s.Config.WebDisableEmbedded {
w.Header().Set("Content-Type", "text/plain; charset=utf-8")
w.WriteHeader(http.StatusNotFound)
w.Write([]byte("Web UI disabled (ORLY_WEB_DISABLE=true). Run the web app in standalone dev mode (e.g., npm run dev) or set ORLY_WEB_DEV_PROXY_URL to proxy through this server."))
return
}
// Default: serve embedded React app
fileServer := http.FileServer(GetReactAppFS())
fileServer.ServeHTTP(w, r)
}
// handleAuthChallenge generates and returns an authentication challenge
func (s *Server) handleAuthChallenge(w http.ResponseWriter, r *http.Request) {
if r.Method != http.MethodGet {
http.Error(w, "Method not allowed", http.StatusMethodNotAllowed)
return
}
// Generate a proper challenge using the auth package
challenge := auth.GenerateChallenge()
challengeHex := hex.Enc(challenge)
// Store the challenge using the hex value as the key for easy lookup
s.challengeMutex.Lock()
s.challenges[challengeHex] = challenge
s.challengeMutex.Unlock()
w.Header().Set("Content-Type", "application/json")
w.Write([]byte(`{"challenge": "` + challengeHex + `"}`))
}
// handleAuthLogin processes authentication requests
func (s *Server) handleAuthLogin(w http.ResponseWriter, r *http.Request) {
if r.Method != http.MethodPost {
http.Error(w, "Method not allowed", http.StatusMethodNotAllowed)
return
}
w.Header().Set("Content-Type", "application/json")
// Read the request body
body, err := io.ReadAll(r.Body)
if chk.E(err) {
w.Write([]byte(`{"success": false, "error": "Failed to read request body"}`))
return
}
// Parse the signed event
var evt event.E
if err = json.Unmarshal(body, &evt); chk.E(err) {
w.Write([]byte(`{"success": false, "error": "Invalid event format"}`))
return
}
// Extract the challenge from the event to look up the stored challenge
challengeTag := evt.Tags.GetFirst([]byte("challenge"))
if challengeTag == nil {
w.Write([]byte(`{"success": false, "error": "Challenge tag missing from event"}`))
return
}
challengeHex := string(challengeTag.Value())
// Retrieve the stored challenge
s.challengeMutex.RLock()
_, exists := s.challenges[challengeHex]
s.challengeMutex.RUnlock()
if !exists {
w.Write([]byte(`{"success": false, "error": "Invalid or expired challenge"}`))
return
}
// Clean up the used challenge
s.challengeMutex.Lock()
delete(s.challenges, challengeHex)
s.challengeMutex.Unlock()
relayURL := s.ServiceURL(r)
// Validate the authentication event with the correct challenge
// The challenge in the event tag is hex-encoded, so we need to pass the hex string as bytes
ok, err := auth.Validate(&evt, []byte(challengeHex), relayURL)
if chk.E(err) || !ok {
errorMsg := "Authentication validation failed"
if err != nil {
errorMsg = err.Error()
}
w.Write([]byte(`{"success": false, "error": "` + errorMsg + `"}`))
return
}
// Authentication successful: set a simple session cookie with the pubkey
cookie := &http.Cookie{
Name: "orly_auth",
Value: hex.Enc(evt.Pubkey),
Path: "/",
HttpOnly: true,
SameSite: http.SameSiteLaxMode,
MaxAge: 60 * 60 * 24 * 30, // 30 days
}
http.SetCookie(w, cookie)
w.Write([]byte(`{"success": true, "pubkey": "` + hex.Enc(evt.Pubkey) + `", "message": "Authentication successful"}`))
}
// handleAuthStatus returns the current authentication status
func (s *Server) handleAuthStatus(w http.ResponseWriter, r *http.Request) {
if r.Method != http.MethodGet {
http.Error(w, "Method not allowed", http.StatusMethodNotAllowed)
return
}
w.Header().Set("Content-Type", "application/json")
// Check for auth cookie
if c, err := r.Cookie("orly_auth"); err == nil && c.Value != "" {
// Validate pubkey format (hex)
if _, err := hex.Dec(c.Value); !chk.E(err) {
w.Write([]byte(`{"authenticated": true, "pubkey": "` + c.Value + `"}`))
return
}
}
w.Write([]byte(`{"authenticated": false}`))
}
// handleAuthLogout clears the auth cookie
func (s *Server) handleAuthLogout(w http.ResponseWriter, r *http.Request) {
if r.Method != http.MethodPost {
http.Error(w, "Method not allowed", http.StatusMethodNotAllowed)
return
}
// Expire the cookie
http.SetCookie(
w, &http.Cookie{
Name: "orly_auth",
Value: "",
Path: "/",
MaxAge: -1,
HttpOnly: true,
SameSite: http.SameSiteLaxMode,
},
)
w.Header().Set("Content-Type", "application/json")
w.Write([]byte(`{"success": true}`))
}
// handlePermissions returns the permission level for a given pubkey
func (s *Server) handlePermissions(w http.ResponseWriter, r *http.Request) {
if r.Method != http.MethodGet {
http.Error(w, "Method not allowed", http.StatusMethodNotAllowed)
return
}
// Extract pubkey from URL path
pubkeyHex := strings.TrimPrefix(r.URL.Path, "/api/permissions/")
if pubkeyHex == "" || pubkeyHex == "/" {
http.Error(w, "Invalid pubkey", http.StatusBadRequest)
return
}
// Convert hex to binary pubkey
pubkey, err := hex.Dec(pubkeyHex)
if chk.E(err) {
http.Error(w, "Invalid pubkey format", http.StatusBadRequest)
return
}
// Get access level using acl registry
permission := acl.Registry.GetAccessLevel(pubkey, r.RemoteAddr)
// Set content type and write JSON response
w.Header().Set("Content-Type", "application/json")
// Format response as proper JSON
response := struct {
Permission string `json:"permission"`
}{
Permission: permission,
}
// Marshal and write the response
jsonData, err := json.Marshal(response)
if chk.E(err) {
http.Error(
w, "Error generating response", http.StatusInternalServerError,
)
return
}
w.Write(jsonData)
}
// handleExport streams all events as JSONL (NDJSON). Admins only.
func (s *Server) handleExport(w http.ResponseWriter, r *http.Request) {
if r.Method != http.MethodGet {
http.Error(w, "Method not allowed", http.StatusMethodNotAllowed)
return
}
// Require auth cookie
c, err := r.Cookie("orly_auth")
if err != nil || c.Value == "" {
http.Error(w, "Not authenticated", http.StatusUnauthorized)
return
}
requesterPubHex := c.Value
requesterPub, err := hex.Dec(requesterPubHex)
if chk.E(err) {
http.Error(w, "Invalid auth cookie", http.StatusUnauthorized)
return
}
// Check permissions
if acl.Registry.GetAccessLevel(requesterPub, r.RemoteAddr) != "admin" {
http.Error(w, "Forbidden", http.StatusForbidden)
return
}
// Optional filtering by pubkey(s)
var pks [][]byte
q := r.URL.Query()
for _, pkHex := range q["pubkey"] {
if pkHex == "" {
continue
}
if pk, err := hex.Dec(pkHex); !chk.E(err) {
pks = append(pks, pk)
}
}
w.Header().Set("Content-Type", "application/x-ndjson")
filename := "events-" + time.Now().UTC().Format("20060102-150405Z") + ".jsonl"
w.Header().Set("Content-Disposition", "attachment; filename=\""+filename+"\"")
// Stream export
s.D.Export(s.Ctx, w, pks...)
}
// handleExportMine streams only the authenticated user's events as JSONL (NDJSON).
func (s *Server) handleExportMine(w http.ResponseWriter, r *http.Request) {
if r.Method != http.MethodGet {
http.Error(w, "Method not allowed", http.StatusMethodNotAllowed)
return
}
// Require auth cookie
c, err := r.Cookie("orly_auth")
if err != nil || c.Value == "" {
http.Error(w, "Not authenticated", http.StatusUnauthorized)
return
}
pubkey, err := hex.Dec(c.Value)
if chk.E(err) {
http.Error(w, "Invalid auth cookie", http.StatusUnauthorized)
return
}
w.Header().Set("Content-Type", "application/x-ndjson")
filename := "my-events-" + time.Now().UTC().Format("20060102-150405Z") + ".jsonl"
w.Header().Set("Content-Disposition", "attachment; filename=\""+filename+"\"")
// Stream export for this user's pubkey only
s.D.Export(s.Ctx, w, pubkey)
}
// handleImport receives a JSONL/NDJSON file or body and enqueues an async import. Admins only.
func (s *Server) handleImport(w http.ResponseWriter, r *http.Request) {
if r.Method != http.MethodPost {
http.Error(w, "Method not allowed", http.StatusMethodNotAllowed)
return
}
// Require auth cookie
c, err := r.Cookie("orly_auth")
if err != nil || c.Value == "" {
http.Error(w, "Not authenticated", http.StatusUnauthorized)
return
}
requesterPub, err := hex.Dec(c.Value)
if chk.E(err) {
http.Error(w, "Invalid auth cookie", http.StatusUnauthorized)
return
}
// Admins only
if acl.Registry.GetAccessLevel(requesterPub, r.RemoteAddr) != "admin" {
http.Error(w, "Forbidden", http.StatusForbidden)
return
}
ct := r.Header.Get("Content-Type")
if strings.HasPrefix(ct, "multipart/form-data") {
if err := r.ParseMultipartForm(32 << 20); chk.E(err) { // 32MB memory, rest to temp files
http.Error(w, "Failed to parse form", http.StatusBadRequest)
return
}
file, _, err := r.FormFile("file")
if chk.E(err) {
http.Error(w, "Missing file", http.StatusBadRequest)
return
}
defer file.Close()
s.D.Import(file)
} else {
if r.Body == nil {
http.Error(w, "Empty request body", http.StatusBadRequest)
return
}
s.D.Import(r.Body)
}
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(http.StatusAccepted)
w.Write([]byte(`{"success": true, "message": "Import started"}`))
}
// handleEventsMine returns the authenticated user's events in JSON format with pagination
func (s *Server) handleEventsMine(w http.ResponseWriter, r *http.Request) {
if r.Method != http.MethodGet {
http.Error(w, "Method not allowed", http.StatusMethodNotAllowed)
return
}
// Require auth cookie
c, err := r.Cookie("orly_auth")
if err != nil || c.Value == "" {
http.Error(w, "Not authenticated", http.StatusUnauthorized)
return
}
pubkey, err := hex.Dec(c.Value)
if chk.E(err) {
http.Error(w, "Invalid auth cookie", http.StatusUnauthorized)
return
}
// Parse pagination parameters
query := r.URL.Query()
limit := 50 // default limit
if l := query.Get("limit"); l != "" {
if parsed, err := strconv.Atoi(l); err == nil && parsed > 0 && parsed <= 100 {
limit = parsed
}
}
offset := 0
if o := query.Get("offset"); o != "" {
if parsed, err := strconv.Atoi(o); err == nil && parsed >= 0 {
offset = parsed
}
}
// Use QueryEvents with filter for this user's events
f := &filter.F{
Authors: tag.NewFromBytesSlice(pubkey),
}
log.Printf("DEBUG: Querying events for pubkey: %s", hex.Enc(pubkey))
events, err := s.D.QueryEvents(s.Ctx, f)
if chk.E(err) {
log.Printf("DEBUG: QueryEvents failed: %v", err)
http.Error(w, "Failed to query events", http.StatusInternalServerError)
return
}
log.Printf("DEBUG: QueryEvents returned %d events", len(events))
// If no events found, let's also check if there are any events at all in the database
if len(events) == 0 {
// Create a filter to get any events (no authors filter)
allEventsFilter := &filter.F{}
allEvents, err := s.D.QueryEvents(s.Ctx, allEventsFilter)
if err == nil {
log.Printf("DEBUG: Total events in database: %d", len(allEvents))
} else {
log.Printf("DEBUG: Failed to query all events: %v", err)
}
}
// Events are already sorted by QueryEvents in reverse chronological order
// Apply offset and limit manually since QueryEvents doesn't support offset
totalEvents := len(events)
start := offset
if start > totalEvents {
start = totalEvents
}
end := start + limit
if end > totalEvents {
end = totalEvents
}
paginatedEvents := events[start:end]
// Convert events to JSON response format
type EventResponse struct {
ID string `json:"id"`
Kind int `json:"kind"`
CreatedAt int64 `json:"created_at"`
Content string `json:"content"`
RawJSON string `json:"raw_json"`
}
response := struct {
Events []EventResponse `json:"events"`
Total int `json:"total"`
Offset int `json:"offset"`
Limit int `json:"limit"`
}{
Events: make([]EventResponse, len(paginatedEvents)),
Total: totalEvents,
Offset: offset,
Limit: limit,
}
for i, ev := range paginatedEvents {
response.Events[i] = EventResponse{
ID: hex.Enc(ev.ID),
Kind: int(ev.Kind),
CreatedAt: int64(ev.CreatedAt),
Content: string(ev.Content),
RawJSON: string(ev.Serialize()),
}
}
w.Header().Set("Content-Type", "application/json")
json.NewEncoder(w).Encode(response)
}

19
app/web.go Normal file
View File

@@ -0,0 +1,19 @@
package app
import (
"embed"
"io/fs"
"net/http"
)
//go:embed web/dist
var reactAppFS embed.FS
// GetReactAppFS returns a http.FileSystem from the embedded React app
func GetReactAppFS() http.FileSystem {
webDist, err := fs.Sub(reactAppFS, "web/dist")
if err != nil {
panic("Failed to load embedded web app: " + err.Error())
}
return http.FS(webDist)
}

30
app/web/.gitignore vendored Normal file
View File

@@ -0,0 +1,30 @@
# Dependencies
node_modules
.pnp
.pnp.js
# Bun
.bunfig.toml
bun.lockb
# Build directories
build
# Cache and logs
.cache
.temp
.log
*.log
# Environment variables
.env
.env.local
.env.development.local
.env.test.local
.env.production.local
# Editor directories and files
.idea
.vscode
*.swp
*.swo

89
app/web/README.md Normal file
View File

@@ -0,0 +1,89 @@
# Orly Web Application
This is a React web application that uses Bun for building and bundling, and is automatically embedded into the Go binary when built.
## Prerequisites
- [Bun](https://bun.sh/) - JavaScript runtime and toolkit
- Go 1.16+ (for embedding functionality)
## Development
There are two ways to develop the web app:
1) Standalone (recommended for hot reload)
- Start the Go relay with the embedded web UI disabled so the React app can run on its own dev server with HMR.
- Configure the relay via environment variables:
```bash
# In another shell at repo root
export ORLY_WEB_DISABLE=true
# Optional: if you want same-origin URLs, you can set a proxy target and access the relay on the same port
# export ORLY_WEB_DEV_PROXY_URL=http://localhost:5173
# Start the relay as usual
go run .
```
- Then start the React dev server:
```bash
cd app/web
bun install
bun dev
```
When ORLY_WEB_DISABLE=true is set, the Go server still serves the API and websocket endpoints and sends permissive CORS headers, so the dev server can access them cross-origin. If ORLY_WEB_DEV_PROXY_URL is set, the Go server will reverse-proxy non-/api paths to the dev server so you can use the same origin.
2) Embedded (no hot reload)
- Build the web app and run the Go server with defaults:
```bash
cd app/web
bun install
bun run build
cd ../../
go run .
```
## Building
The React application needs to be built before compiling the Go binary to ensure that the embedded files are available:
```bash
# Build the React application
cd app/web
bun install
bun run build
# Build the Go binary from project root
cd ../../
go build
```
## How it works
1. The React application is built to the `app/web/dist` directory
2. The Go embed directive in `app/web.go` embeds these files into the binary
3. When the server runs, it serves the embedded React app at the root path
## Build Automation
You can create a shell script to automate the build process:
```bash
#!/bin/bash
# build.sh
echo "Building React app..."
cd app/web
bun install
bun run build
echo "Building Go binary..."
cd ../../
go build
echo "Build complete!"
```
Make it executable with `chmod +x build.sh` and run with `./build.sh`.

36
app/web/bun.lock Normal file
View File

@@ -0,0 +1,36 @@
{
"lockfileVersion": 1,
"workspaces": {
"": {
"name": "orly-web",
"dependencies": {
"react": "^18.2.0",
"react-dom": "^18.2.0",
},
"devDependencies": {
"bun-types": "latest",
},
},
},
"packages": {
"@types/node": ["@types/node@24.5.2", "", { "dependencies": { "undici-types": "~7.12.0" } }, "sha512-FYxk1I7wPv3K2XBaoyH2cTnocQEu8AOZ60hPbsyukMPLv5/5qr7V1i8PLHdl6Zf87I+xZXFvPCXYjiTFq+YSDQ=="],
"@types/react": ["@types/react@19.1.13", "", { "dependencies": { "csstype": "^3.0.2" } }, "sha512-hHkbU/eoO3EG5/MZkuFSKmYqPbSVk5byPFa3e7y/8TybHiLMACgI8seVYlicwk7H5K/rI2px9xrQp/C+AUDTiQ=="],
"bun-types": ["bun-types@1.2.22", "", { "dependencies": { "@types/node": "*" }, "peerDependencies": { "@types/react": "^19" } }, "sha512-hwaAu8tct/Zn6Zft4U9BsZcXkYomzpHJX28ofvx7k0Zz2HNz54n1n+tDgxoWFGB4PcFvJXJQloPhaV2eP3Q6EA=="],
"csstype": ["csstype@3.1.3", "", {}, "sha512-M1uQkMl8rQK/szD0LNhtqxIPLpimGm8sOBwU7lLnCpSbTyY3yeU1Vc7l4KT5zT4s/yOxHH5O7tIuuLOCnLADRw=="],
"js-tokens": ["js-tokens@4.0.0", "", {}, "sha512-RdJUflcE3cUzKiMqQgsCu06FPu9UdIJO0beYbPhHN4k6apgJtifcoCtT9bcxOpYBtpD2kCM6Sbzg4CausW/PKQ=="],
"loose-envify": ["loose-envify@1.4.0", "", { "dependencies": { "js-tokens": "^3.0.0 || ^4.0.0" }, "bin": { "loose-envify": "cli.js" } }, "sha512-lyuxPGr/Wfhrlem2CL/UcnUc1zcqKAImBDzukY7Y5F/yQiNdko6+fRLevlw1HgMySw7f611UIY408EtxRSoK3Q=="],
"react": ["react@18.3.1", "", { "dependencies": { "loose-envify": "^1.1.0" } }, "sha512-wS+hAgJShR0KhEvPJArfuPVN1+Hz1t0Y6n5jLrGQbkb4urgPE/0Rve+1kMB1v/oWgHgm4WIcV+i7F2pTVj+2iQ=="],
"react-dom": ["react-dom@18.3.1", "", { "dependencies": { "loose-envify": "^1.1.0", "scheduler": "^0.23.2" }, "peerDependencies": { "react": "^18.3.1" } }, "sha512-5m4nQKp+rZRb09LNH59GM4BxTh9251/ylbKIbpe7TpGxfJ+9kv6BLkLBXIjjspbgbnIBNqlI23tRnTWT0snUIw=="],
"scheduler": ["scheduler@0.23.2", "", { "dependencies": { "loose-envify": "^1.1.0" } }, "sha512-UOShsPwz7NrMUqhR6t0hWjFduvOzbtv7toDH1/hIrfRNIDBnnBWd0CwJTGvTpngVlmwGCdP9/Zl/tVrDqcuYzQ=="],
"undici-types": ["undici-types@7.12.0", "", {}, "sha512-goOacqME2GYyOZZfb5Lgtu+1IDmAlAEu5xnD3+xTzS10hT0vzpf0SPjkXwAw9Jm+4n/mQGDP3LO8CPbYROeBfQ=="],
}
}

1
app/web/dist/index-q4cwd1fy.css vendored Normal file

File diff suppressed because one or more lines are too long

160
app/web/dist/index-w8zpqk4w.js vendored Normal file

File diff suppressed because one or more lines are too long

30
app/web/dist/index.html vendored Normal file
View File

@@ -0,0 +1,30 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Nostr Relay</title>
<link rel="stylesheet" crossorigin href="./index-q4cwd1fy.css"><script type="module" crossorigin src="./index-w8zpqk4w.js"></script></head>
<body>
<script>
// Apply system theme preference immediately to avoid flash of wrong theme
function applyTheme(isDark) {
document.body.classList.remove('bg-white', 'bg-gray-900');
document.body.classList.add(isDark ? 'bg-gray-900' : 'bg-white');
}
// Set initial theme
applyTheme(window.matchMedia && window.matchMedia('(prefers-color-scheme: dark)').matches);
// Listen for theme changes
if (window.matchMedia) {
window.matchMedia('(prefers-color-scheme: dark)').addEventListener('change', e => {
applyTheme(e.matches);
});
}
</script>
<div id="root"></div>
</body>
</html>

112
app/web/dist/tailwind.min.css vendored Normal file
View File

@@ -0,0 +1,112 @@
/*
Local Tailwind CSS (minimal subset for this UI)
Note: This file includes just the utilities used by the app to keep size small.
You can replace this with a full Tailwind build if desired.
*/
/* Preflight-like resets (very minimal) */
*,::before,::after{box-sizing:border-box;border-width:0;border-style:solid;border-color:#e5e7eb}
html,body,#root{height:100%}
html{line-height:1.5;-webkit-text-size-adjust:100%;tab-size:4;font-family:ui-sans-serif,system-ui,-apple-system,Segoe UI,Roboto,Helvetica,Arial,Noto Sans,\"Apple Color Emoji\",\"Segoe UI Emoji\"}
body{margin:0}
button,input{font:inherit;color:inherit}
img{display:block;max-width:100%;height:auto}
/* Layout */
.sticky{position:sticky}.relative{position:relative}.absolute{position:absolute}
.top-0{top:0}.left-0{left:0}.inset-0{top:0;right:0;bottom:0;left:0}
.z-50{z-index:50}.z-10{z-index:10}
.block{display:block}.flex{display:flex}
.items-center{align-items:center}.justify-start{justify-content:flex-start}.justify-center{justify-content:center}.justify-end{justify-content:flex-end}
.flex-grow{flex-grow:1}.shrink-0{flex-shrink:0}
.overflow-hidden{overflow:hidden}
/* Sizing */
.w-full{width:100%}.w-auto{width:auto}.w-16{width:4rem}
.h-full{height:100%}.h-16{height:4rem}
.aspect-square{aspect-ratio:1/1}
.max-w-3xl{max-width:48rem}
/* Spacing */
.p-0{padding:0}.p-2{padding:.5rem}.p-3{padding:.75rem}.p-6{padding:1.5rem}
.px-2{padding-left:.5rem;padding-right:.5rem}
.mr-0{margin-right:0}.mr-2{margin-right:.5rem}
.mt-2{margin-top:.5rem}.mt-5{margin-top:1.25rem}
.mb-1{margin-bottom:.25rem}.mb-2{margin-bottom:.5rem}.mb-4{margin-bottom:1rem}.mb-5{margin-bottom:1.25rem}
.mx-auto{margin-left:auto;margin-right:auto}
/* Borders & Radius */
.rounded{border-radius:.25rem}.rounded-full{border-radius:9999px}
.border-0{border-width:0}.border-2{border-width:2px}
.border-white{border-color:#fff}
.border{border-width:1px}.border-gray-300{border-color:#d1d5db}.border-gray-600{border-color:#4b5563}
.border-red-500{border-color:#ef4444}.border-red-700{border-color:#b91c1c}
/* Colors / Backgrounds */
.bg-white{background-color:#fff}
.bg-gray-100{background-color:#f3f4f6}
.bg-gray-200{background-color:#e5e7eb}
.bg-gray-300{background-color:#d1d5db}
.bg-gray-600{background-color:#4b5563}
.bg-gray-700{background-color:#374151}
.bg-gray-800{background-color:#1f2937}
.bg-gray-900{background-color:#111827}
.bg-blue-500{background-color:#3b82f6}
.bg-blue-600{background-color:#2563eb}.hover\:bg-blue-700:hover{background-color:#1d4ed8}
.hover\:bg-blue-600:hover{background-color:#2563eb}
.bg-red-600{background-color:#dc2626}.hover\:bg-red-700:hover{background-color:#b91c1c}
.bg-cyan-100{background-color:#cffafe}
.bg-green-100{background-color:#d1fae5}
.bg-red-100{background-color:#fee2e2}
.bg-red-50{background-color:#fef2f2}
.bg-green-900{background-color:#064e3b}
.bg-red-900{background-color:#7f1d1d}
.bg-cyan-900{background-color:#164e63}
.bg-cover{background-size:cover}.bg-center{background-position:center}
.bg-transparent{background-color:transparent}
/* Text */
.text-left{text-align:left}
.text-white{color:#fff}
.text-gray-300{color:#d1d5db}
.text-gray-500{color:#6b7280}.hover\:text-gray-800:hover{color:#1f2937}
.hover\:text-gray-100:hover{color:#f3f4f6}
.text-gray-700{color:#374151}
.text-gray-800{color:#1f2937}
.text-gray-900{color:#111827}
.text-gray-100{color:#f3f4f6}
.text-green-800{color:#065f46}
.text-green-100{color:#dcfce7}
.text-red-800{color:#991b1b}
.text-red-200{color:#fecaca}
.text-red-100{color:#fee2e2}
.text-cyan-800{color:#155e75}
.text-cyan-100{color:#cffafe}
.text-base{font-size:1rem;line-height:1.5rem}
.text-lg{font-size:1.125rem;line-height:1.75rem}
.text-2xl{font-size:1.5rem;line-height:2rem}
.font-bold{font-weight:700}
/* Opacity */
.opacity-70{opacity:.7}
/* Effects */
.shadow{--tw-shadow:0 1px 3px 0 rgba(0,0,0,0.1),0 1px 2px -1px rgba(0,0,0,0.1);box-shadow:var(--tw-shadow)}
/* Cursor */
.cursor-pointer{cursor:pointer}
/* Box model */
.box-border{box-sizing:border-box}
/* Utilities */
.hover\:bg-transparent:hover{background-color:transparent}
.hover\:bg-gray-200:hover{background-color:#e5e7eb}
.hover\:bg-gray-600:hover{background-color:#4b5563}
.focus\:ring-2:focus{--tw-ring-offset-shadow:var(--tw-ring-inset) 0 0 0 var(--tw-ring-offset-width) var(--tw-ring-offset-color);--tw-ring-shadow:var(--tw-ring-inset) 0 0 0 calc(2px + var(--tw-ring-offset-width)) var(--tw-ring-color);box-shadow:var(--tw-ring-offset-shadow),var(--tw-ring-shadow),var(--tw-shadow, 0 0 #0000)}
.focus\:ring-blue-200:focus{--tw-ring-color:rgba(191, 219, 254, var(--tw-ring-opacity))}
.focus\:ring-blue-500:focus{--tw-ring-color:rgba(59, 130, 246, var(--tw-ring-opacity))}
.disabled\:opacity-50:disabled{opacity:.5}
.disabled\:cursor-not-allowed:disabled{cursor:not-allowed}
/* Avatar image height in the header is inherited from the container */

18
app/web/package.json Normal file

@@ -0,0 +1,18 @@
{
"name": "orly-web",
"version": "0.1.0",
"private": true,
"type": "module",
"scripts": {
"dev": "bun --hot --port 5173 public/dev.html",
"build": "rm -rf dist && bun build ./public/index.html --outdir ./dist --minify --splitting && cp -r public/tailwind.min.css dist/",
"preview": "bun x serve dist"
},
"dependencies": {
"react": "^18.2.0",
"react-dom": "^18.2.0"
},
"devDependencies": {
"bun-types": "latest"
}
}

13
app/web/public/dev.html Normal file

@@ -0,0 +1,13 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Nostr Relay (Dev)</title>
<link rel="stylesheet" href="tailwind.min.css" />
</head>
<body class="bg-white">
<div id="root"></div>
<script type="module" src="/src/index.jsx"></script>
</body>
</html>

30
app/web/public/index.html Normal file

@@ -0,0 +1,30 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Nostr Relay</title>
<link rel="stylesheet" href="tailwind.min.css" />
</head>
<body>
<script>
// Apply system theme preference immediately to avoid flash of wrong theme
function applyTheme(isDark) {
document.body.classList.remove('bg-white', 'bg-gray-900');
document.body.classList.add(isDark ? 'bg-gray-900' : 'bg-white');
}
// Set initial theme
applyTheme(window.matchMedia && window.matchMedia('(prefers-color-scheme: dark)').matches);
// Listen for theme changes
if (window.matchMedia) {
window.matchMedia('(prefers-color-scheme: dark)').addEventListener('change', e => {
applyTheme(e.matches);
});
}
</script>
<div id="root"></div>
<script type="module" src="/src/index.jsx"></script>
</body>
</html>

112
app/web/public/tailwind.min.css vendored Normal file

@@ -0,0 +1,112 @@
/*
Local Tailwind CSS (minimal subset for this UI)
Note: This file includes just the utilities used by the app to keep size small.
You can replace this with a full Tailwind build if desired.
*/
/* Preflight-like resets (very minimal) */
*,::before,::after{box-sizing:border-box;border-width:0;border-style:solid;border-color:#e5e7eb}
html,body,#root{height:100%}
html{line-height:1.5;-webkit-text-size-adjust:100%;tab-size:4;font-family:ui-sans-serif,system-ui,-apple-system,Segoe UI,Roboto,Helvetica,Arial,Noto Sans,"Apple Color Emoji","Segoe UI Emoji"}
body{margin:0}
button,input{font:inherit;color:inherit}
img{display:block;max-width:100%;height:auto}
/* Layout */
.sticky{position:sticky}.relative{position:relative}.absolute{position:absolute}
.top-0{top:0}.left-0{left:0}.inset-0{top:0;right:0;bottom:0;left:0}
.z-50{z-index:50}.z-10{z-index:10}
.block{display:block}.flex{display:flex}
.items-center{align-items:center}.justify-start{justify-content:flex-start}.justify-center{justify-content:center}.justify-end{justify-content:flex-end}
.flex-grow{flex-grow:1}.shrink-0{flex-shrink:0}
.overflow-hidden{overflow:hidden}
/* Sizing */
.w-full{width:100%}.w-auto{width:auto}.w-16{width:4rem}
.h-full{height:100%}.h-16{height:4rem}
.aspect-square{aspect-ratio:1/1}
.max-w-3xl{max-width:48rem}
/* Spacing */
.p-0{padding:0}.p-2{padding:.5rem}.p-3{padding:.75rem}.p-6{padding:1.5rem}
.px-2{padding-left:.5rem;padding-right:.5rem}
.mr-0{margin-right:0}.mr-2{margin-right:.5rem}
.mt-2{margin-top:.5rem}.mt-5{margin-top:1.25rem}
.mb-1{margin-bottom:.25rem}.mb-2{margin-bottom:.5rem}.mb-4{margin-bottom:1rem}.mb-5{margin-bottom:1.25rem}
.mx-auto{margin-left:auto;margin-right:auto}
/* Borders & Radius */
.rounded{border-radius:.25rem}.rounded-full{border-radius:9999px}
.border-0{border-width:0}.border-2{border-width:2px}
.border-white{border-color:#fff}
.border{border-width:1px}.border-gray-300{border-color:#d1d5db}.border-gray-600{border-color:#4b5563}
.border-red-500{border-color:#ef4444}.border-red-700{border-color:#b91c1c}
/* Colors / Backgrounds */
.bg-white{background-color:#fff}
.bg-gray-100{background-color:#f3f4f6}
.bg-gray-200{background-color:#e5e7eb}
.bg-gray-300{background-color:#d1d5db}
.bg-gray-600{background-color:#4b5563}
.bg-gray-700{background-color:#374151}
.bg-gray-800{background-color:#1f2937}
.bg-gray-900{background-color:#111827}
.bg-blue-500{background-color:#3b82f6}
.bg-blue-600{background-color:#2563eb}.hover\:bg-blue-700:hover{background-color:#1d4ed8}
.hover\:bg-blue-600:hover{background-color:#2563eb}
.bg-red-600{background-color:#dc2626}.hover\:bg-red-700:hover{background-color:#b91c1c}
.bg-cyan-100{background-color:#cffafe}
.bg-green-100{background-color:#d1fae5}
.bg-red-100{background-color:#fee2e2}
.bg-red-50{background-color:#fef2f2}
.bg-green-900{background-color:#064e3b}
.bg-red-900{background-color:#7f1d1d}
.bg-cyan-900{background-color:#164e63}
.bg-cover{background-size:cover}.bg-center{background-position:center}
.bg-transparent{background-color:transparent}
/* Text */
.text-left{text-align:left}
.text-white{color:#fff}
.text-gray-300{color:#d1d5db}
.text-gray-500{color:#6b7280}.hover\:text-gray-800:hover{color:#1f2937}
.hover\:text-gray-100:hover{color:#f3f4f6}
.text-gray-700{color:#374151}
.text-gray-800{color:#1f2937}
.text-gray-900{color:#111827}
.text-gray-100{color:#f3f4f6}
.text-green-800{color:#065f46}
.text-green-100{color:#dcfce7}
.text-red-800{color:#991b1b}
.text-red-200{color:#fecaca}
.text-red-100{color:#fee2e2}
.text-cyan-800{color:#155e75}
.text-cyan-100{color:#cffafe}
.text-base{font-size:1rem;line-height:1.5rem}
.text-lg{font-size:1.125rem;line-height:1.75rem}
.text-2xl{font-size:1.5rem;line-height:2rem}
.font-bold{font-weight:700}
/* Opacity */
.opacity-70{opacity:.7}
/* Effects */
.shadow{--tw-shadow:0 1px 3px 0 rgba(0,0,0,0.1),0 1px 2px -1px rgba(0,0,0,0.1);box-shadow:var(--tw-shadow)}
/* Cursor */
.cursor-pointer{cursor:pointer}
/* Box model */
.box-border{box-sizing:border-box}
/* Utilities */
.hover\:bg-transparent:hover{background-color:transparent}
.hover\:bg-gray-200:hover{background-color:#e5e7eb}
.hover\:bg-gray-600:hover{background-color:#4b5563}
.focus\:ring-2:focus{--tw-ring-offset-shadow:var(--tw-ring-inset) 0 0 0 var(--tw-ring-offset-width) var(--tw-ring-offset-color);--tw-ring-shadow:var(--tw-ring-inset) 0 0 0 calc(2px + var(--tw-ring-offset-width)) var(--tw-ring-color);box-shadow:var(--tw-ring-offset-shadow),var(--tw-ring-shadow),var(--tw-shadow, 0 0 #0000)}
.focus\:ring-blue-200:focus{--tw-ring-color:rgba(191, 219, 254, var(--tw-ring-opacity))}
.focus\:ring-blue-500:focus{--tw-ring-color:rgba(59, 130, 246, var(--tw-ring-opacity))}
.disabled\:opacity-50:disabled{opacity:.5}
.disabled\:cursor-not-allowed:disabled{cursor:not-allowed}
/* Avatar image height in the header is inherited from the container */

1960
app/web/src/App.jsx Normal file

File diff suppressed because it is too large

11
app/web/src/index.jsx Normal file

@@ -0,0 +1,11 @@
import React from 'react';
import { createRoot } from 'react-dom/client';
import App from './App';
import './styles.css';
const root = createRoot(document.getElementById('root'));
root.render(
<React.StrictMode>
<App />
</React.StrictMode>
);

191
app/web/src/styles.css Normal file

@@ -0,0 +1,191 @@
body {
font-family: Arial, sans-serif;
margin: 0;
padding: 0;
}
.container {
background: #f9f9f9;
padding: 30px;
border-radius: 8px;
margin-top: 20px; /* Reduced space since header is now sticky */
}
.form-group {
margin-bottom: 20px;
}
label {
display: block;
margin-bottom: 5px;
font-weight: bold;
}
input, textarea {
width: 100%;
padding: 10px;
border: 1px solid #ddd;
border-radius: 4px;
}
button {
background: #007cba;
color: white;
padding: 12px 20px;
border: none;
border-radius: 4px;
cursor: pointer;
}
button:hover {
background: #005a87;
}
.danger-button {
background: #dc3545;
}
.danger-button:hover {
background: #c82333;
}
.status {
margin-top: 20px;
margin-bottom: 20px;
padding: 10px;
border-radius: 4px;
}
.success {
background: #d4edda;
color: #155724;
}
.error {
background: #f8d7da;
color: #721c24;
}
.info {
background: #d1ecf1;
color: #0c5460;
}
.header-panel {
position: sticky;
top: 0;
left: 0;
width: 100%;
background-color: #f8f9fa;
box-shadow: 0 2px 4px rgba(0,0,0,0.1);
z-index: 1000;
height: 60px;
display: flex;
align-items: center;
background-size: cover;
background-position: center;
overflow: hidden;
}
.header-content {
display: flex;
align-items: center;
height: 100%;
padding: 0 0 0 12px;
width: 100%;
margin: 0 auto;
box-sizing: border-box;
}
.header-left {
display: flex;
align-items: center;
justify-content: flex-start;
height: 100%;
}
.header-center {
display: flex;
flex-grow: 1;
align-items: center;
justify-content: flex-start;
position: relative;
overflow: hidden;
}
.header-right {
display: flex;
align-items: center;
justify-content: flex-end;
height: 100%;
}
.header-logo {
height: 100%;
aspect-ratio: 1 / 1;
width: auto;
border-radius: 0;
object-fit: cover;
flex-shrink: 0;
}
.user-avatar {
width: 2em;
height: 2em;
border-radius: 50%;
object-fit: cover;
border: 2px solid white;
margin-right: 10px;
box-shadow: 0 1px 3px rgba(0,0,0,0.2);
}
.user-profile {
display: flex;
align-items: center;
position: relative;
z-index: 1;
}
.user-info {
font-weight: bold;
font-size: 1.2em;
text-align: left;
}
.user-name {
font-weight: bold;
font-size: 1em;
display: block;
}
.profile-banner {
position: absolute;
width: 100%;
height: 100%;
top: 0;
left: 0;
z-index: -1;
opacity: 0.7;
}
.logout-button {
background: transparent;
color: #6c757d;
border: none;
font-size: 20px;
cursor: pointer;
padding: 0;
display: flex;
align-items: center;
justify-content: center;
width: 48px;
height: 100%;
margin-left: 10px;
margin-right: 0;
flex-shrink: 0;
}
.logout-button:hover {
background: transparent;
color: #343a40;
}

View File

@@ -34,13 +34,18 @@ COPY cmd/benchmark/benchmark-runner.sh /app/benchmark-runner
# Make scripts executable
RUN chmod +x /app/benchmark-runner
# Create reports directory
RUN mkdir -p /reports
# Create runtime user and reports directory owned by uid 1000
RUN adduser -u 1000 -D appuser && \
mkdir -p /reports && \
chown -R 1000:1000 /app /reports
# Environment variables
ENV BENCHMARK_EVENTS=10000
ENV BENCHMARK_WORKERS=8
ENV BENCHMARK_DURATION=60s
# Drop privileges: run as uid 1000
USER 1000:1000
# Run the benchmark runner
CMD ["/app/benchmark-runner"]

View File

@@ -6,7 +6,7 @@ WORKDIR /build
COPY . .
# Build the basic-badger example
RUN cd examples/basic-badger && \
RUN echo ${pwd};cd examples/basic-badger && \
go mod tidy && \
CGO_ENABLED=0 go build -o khatru-badger .
@@ -15,9 +15,8 @@ RUN apk --no-cache add ca-certificates wget
WORKDIR /app
COPY --from=builder /build/examples/basic-badger/khatru-badger /app/
RUN mkdir -p /data
EXPOSE 8080
EXPOSE 3334
ENV DATABASE_PATH=/data/badger
ENV PORT=8080
HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
CMD wget --quiet --tries=1 --spider http://localhost:8080 || exit 1
CMD wget --quiet --tries=1 --spider http://localhost:3334 || exit 1
CMD ["/app/khatru-badger"]

View File

@@ -5,7 +5,7 @@ RUN apk add --no-cache git ca-certificates sqlite-dev gcc musl-dev
WORKDIR /build
COPY . .
# Build the basic-sqlite example
# Build the basic-sqlite3 example
RUN cd examples/basic-sqlite3 && \
go mod tidy && \
CGO_ENABLED=1 go build -o khatru-sqlite .
@@ -15,9 +15,8 @@ RUN apk --no-cache add ca-certificates sqlite wget
WORKDIR /app
COPY --from=builder /build/examples/basic-sqlite3/khatru-sqlite /app/
RUN mkdir -p /data
EXPOSE 8080
EXPOSE 3334
ENV DATABASE_PATH=/data/khatru.db
ENV PORT=8080
HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
CMD wget --quiet --tries=1 --spider http://localhost:8080 || exit 1
CMD wget --quiet --tries=1 --spider http://localhost:3334 || exit 1
CMD ["/app/khatru-sqlite"]

View File

@@ -2,7 +2,7 @@
FROM ubuntu:22.04 as builder
# Set environment variables
ARG GOLANG_VERSION=1.25.1
ARG GOLANG_VERSION=1.22.5
# Update package list and install dependencies
RUN apt-get update && \
@@ -46,7 +46,13 @@ RUN go mod download
COPY . .
# Build the relay
RUN CGO_ENABLED=1 GOOS=linux go build -o relay .
RUN CGO_ENABLED=1 GOOS=linux go build -gcflags "all=-N -l" -o relay .
# Create non-root user (uid 1000) for runtime in builder stage (used by analyzer)
RUN useradd -u 1000 -m -s /bin/bash appuser && \
chown -R 1000:1000 /build
# Switch to uid 1000 for any subsequent runtime use of this stage
USER 1000:1000
# Final stage
FROM ubuntu:22.04
@@ -60,21 +66,26 @@ WORKDIR /app
# Copy binary from builder
COPY --from=builder /build/relay /app/relay
# Create data directory
RUN mkdir -p /data
# Create runtime user and writable directories
RUN useradd -u 1000 -m -s /bin/bash appuser && \
mkdir -p /data /profiles /app && \
chown -R 1000:1000 /data /profiles /app
# Expose port
EXPOSE 8080
# Set environment variables
ENV DATA_DIR=/data
ENV LISTEN=0.0.0.0
ENV PORT=8080
ENV LOG_LEVEL=info
ENV ORLY_DATA_DIR=/data
ENV ORLY_LISTEN=0.0.0.0
ENV ORLY_PORT=8080
ENV ORLY_LOG_LEVEL=off
# Health check
HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
CMD curl -f http://localhost:8080 || exit 1
CMD bash -lc "code=\$(curl -s -o /dev/null -w '%{http_code}' http://127.0.0.1:8080 || echo 000); echo \$code | grep -E '^(101|200|400|404|426)$' >/dev/null || exit 1"
# Drop privileges: run as uid 1000
USER 1000:1000
# Run the relay
CMD ["/app/relay"]

View File

@@ -1,16 +1,6 @@
FROM ubuntu:22.04 AS builder
FROM rust:1.81-alpine AS builder
RUN apt-get update && apt-get install -y \
curl \
build-essential \
libsqlite3-dev \
pkg-config \
protobuf-compiler \
&& rm -rf /var/lib/apt/lists/*
# Install Rust
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
ENV PATH="/root/.cargo/bin:${PATH}"
RUN apk add --no-cache musl-dev sqlite-dev build-base bash perl protobuf
WORKDIR /build
COPY . .
@@ -18,8 +8,8 @@ COPY . .
# Build the relay
RUN cargo build --release
FROM ubuntu:22.04
RUN apt-get update && apt-get install -y ca-certificates sqlite3 wget && rm -rf /var/lib/apt/lists/*
FROM alpine:latest
RUN apk --no-cache add ca-certificates sqlite wget
WORKDIR /app
COPY --from=builder /build/target/release/nostr-rs-relay /app/
RUN mkdir -p /data

View File

@@ -15,9 +15,9 @@ RUN apk --no-cache add ca-certificates sqlite wget
WORKDIR /app
COPY --from=builder /build/examples/basic/relayer-basic /app/
RUN mkdir -p /data
EXPOSE 8080
EXPOSE 7447
ENV DATABASE_PATH=/data/relayer.db
ENV PORT=8080
# PORT env is not used by relayer-basic; it always binds to 7447 in code.
HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
CMD wget --quiet --tries=1 --spider http://localhost:8080 || exit 1
CMD wget --quiet --tries=1 --spider http://localhost:7447 || exit 1
CMD ["/app/relayer-basic"]

View File

@@ -3,14 +3,21 @@ FROM ubuntu:22.04 AS builder
ENV DEBIAN_FRONTEND=noninteractive
# Install build dependencies
RUN apt-get update && apt-get install -y git g++ make libssl-dev zlib1g-dev liblmdb-dev libflatbuffers-dev libsecp256k1-dev libzstd-dev \
RUN apt-get update && apt-get install -y \
git \
build-essential \
liblmdb-dev \
libsecp256k1-dev \
pkg-config \
libtool \
autoconf \
automake \
&& rm -rf /var/lib/apt/lists/*
WORKDIR /build
COPY . .
# Initialize git submodules
RUN git submodule update --init --recursive
# Fetch strfry source with submodules to ensure golpe is present
RUN git clone --recurse-submodules https://github.com/hoytech/strfry .
# Build strfry
RUN make setup-golpe && \
@@ -21,34 +28,17 @@ RUN apt-get update && apt-get install -y \
liblmdb0 \
libsecp256k1-0 \
curl \
bash \
&& rm -rf /var/lib/apt/lists/*
WORKDIR /app
COPY --from=builder /build/strfry /app/
COPY --from=builder /build/strfry.conf /app/
# Create the data directory placeholder (may be masked by volume at runtime)
RUN mkdir -p /data && \
chmod 755 /data
# Update strfry.conf to bind to all interfaces and use port 8080
RUN sed -i 's/bind = "127.0.0.1"/bind = "0.0.0.0"/' /app/strfry.conf && \
sed -i 's/port = 7777/port = 8080/' /app/strfry.conf
# Entrypoint ensures the LMDB directory exists inside the mounted volume before starting
ENV STRFRY_DB_PATH=/data/strfry.lmdb
RUN echo '#!/usr/bin/env bash' > /entrypoint.sh && \
echo 'set -euo pipefail' >> /entrypoint.sh && \
echo 'DB_PATH="${STRFRY_DB_PATH:-/data/strfry.lmdb}"' >> /entrypoint.sh && \
echo 'mkdir -p "$DB_PATH"' >> /entrypoint.sh && \
echo 'chown -R root:root "$(dirname "$DB_PATH")"' >> /entrypoint.sh && \
echo 'exec /app/strfry relay' >> /entrypoint.sh && \
chmod +x /entrypoint.sh
RUN mkdir -p /data
EXPOSE 8080
ENV STRFRY_DB_PATH=/data/strfry.lmdb
ENV STRFRY_RELAY_PORT=8080
HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
CMD curl -f http://localhost:8080 || exit 1
ENTRYPOINT ["/entrypoint.sh"]
CMD ["/app/strfry", "relay"]

View File

@@ -9,7 +9,7 @@ set -e
BENCHMARK_EVENTS="${BENCHMARK_EVENTS:-10000}"
BENCHMARK_WORKERS="${BENCHMARK_WORKERS:-8}"
BENCHMARK_DURATION="${BENCHMARK_DURATION:-60s}"
BENCHMARK_TARGETS="${BENCHMARK_TARGETS:-next-orly:8001,khatru-sqlite:8002,khatru-badger:8003,relayer-basic:8004,strfry:8005,nostr-rs-relay:8006}"
BENCHMARK_TARGETS="${BENCHMARK_TARGETS:-next-orly:8080,khatru-sqlite:3334,khatru-badger:3334,relayer-basic:7447,strfry:8080,nostr-rs-relay:8080}"
OUTPUT_DIR="${OUTPUT_DIR:-/reports}"
# Create output directory
@@ -40,14 +40,24 @@ wait_for_relay() {
echo "Waiting for ${name} to be ready at ${url}..."
while [ $attempt -lt $max_attempts ]; do
if wget --quiet --tries=1 --spider --timeout=5 "http://${url}" 2>/dev/null || \
curl -f --connect-timeout 5 --max-time 5 "http://${url}" >/dev/null 2>&1; then
echo "${name} is ready!"
return 0
# Try wget first to obtain an HTTP status code
local status=""
status=$(wget --quiet --server-response --tries=1 --timeout=5 "http://${url}" 2>&1 | awk '/^ HTTP\//{print $2; exit}')
# Fallback to curl to obtain an HTTP status code
if [ -z "$status" ]; then
status=$(curl -s -o /dev/null -w "%{http_code}" --connect-timeout 5 --max-time 5 "http://${url}" || echo 000)
fi
case "$status" in
101|200|400|404|426)
echo "${name} is ready! (HTTP ${status})"
return 0
;;
esac
attempt=$((attempt + 1))
echo " Attempt ${attempt}/${max_attempts}: ${name} not ready yet..."
echo " Attempt ${attempt}/${max_attempts}: ${name} not ready yet (HTTP ${status:-none})..."
sleep 2
done
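The same permissive readiness check, sketched in Go for reference — the accepted status set mirrors the script above, and a WebSocket-only endpoint commonly answers a plain GET with 426 Upgrade Required (the target URL in main is illustrative):

package main

import (
	"fmt"
	"net/http"
	"time"
)

// relayReady reports whether an HTTP GET against the relay returns one of
// the statuses the benchmark treats as "up".
func relayReady(url string) bool {
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return false
	}
	defer resp.Body.Close()
	switch resp.StatusCode {
	case 101, 200, 400, 404, 426:
		return true
	}
	return false
}

func main() {
	// Illustrative target; the script derives host:port from BENCHMARK_TARGETS.
	fmt.Println(relayReady("http://127.0.0.1:8080"))
}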

View File

@@ -8,10 +8,10 @@ services:
dockerfile: cmd/benchmark/Dockerfile.next-orly
container_name: benchmark-next-orly
environment:
- DATA_DIR=/data
- LISTEN=0.0.0.0
- PORT=8080
- LOG_LEVEL=info
- ORLY_DATA_DIR=/data
- ORLY_LISTEN=0.0.0.0
- ORLY_PORT=8080
- ORLY_LOG_LEVEL=off
volumes:
- ./data/next-orly:/data
ports:
@@ -19,7 +19,7 @@ services:
networks:
- benchmark-net
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:8080"]
test: ["CMD-SHELL", "code=$(curl -s -o /dev/null -w '%{http_code}' http://localhost:8080 || echo 000); echo $$code | grep -E '^(101|200|400|404|426)$' >/dev/null"]
interval: 30s
timeout: 10s
retries: 3
@@ -34,15 +34,14 @@ services:
environment:
- DATABASE_TYPE=sqlite
- DATABASE_PATH=/data/khatru.db
- PORT=8080
volumes:
- ./data/khatru-sqlite:/data
ports:
- "8002:8080"
- "8002:3334"
networks:
- benchmark-net
healthcheck:
test: ["CMD", "wget", "--quiet", "--tries=1", "--spider", "http://localhost:8080"]
test: ["CMD-SHELL", "wget --quiet --server-response --tries=1 http://localhost:3334 2>&1 | grep -E 'HTTP/[0-9.]+ (101|200|400|404)' >/dev/null"]
interval: 30s
timeout: 10s
retries: 3
@@ -57,15 +56,14 @@ services:
environment:
- DATABASE_TYPE=badger
- DATABASE_PATH=/data/badger
- PORT=8080
volumes:
- ./data/khatru-badger:/data
ports:
- "8003:8080"
- "8003:3334"
networks:
- benchmark-net
healthcheck:
test: ["CMD", "wget", "--quiet", "--tries=1", "--spider", "http://localhost:8080"]
test: ["CMD-SHELL", "wget --quiet --server-response --tries=1 http://localhost:3334 2>&1 | grep -E 'HTTP/[0-9.]+ (101|200|400|404)' >/dev/null"]
interval: 30s
timeout: 10s
retries: 3
@@ -78,16 +76,18 @@ services:
dockerfile: ../../Dockerfile.relayer-basic
container_name: benchmark-relayer-basic
environment:
- PORT=8080
- DATABASE_PATH=/data/relayer.db
- POSTGRESQL_DATABASE=postgres://relayer:relayerpass@postgres:5432/relayerdb?sslmode=disable
volumes:
- ./data/relayer-basic:/data
ports:
- "8004:8080"
- "8004:7447"
networks:
- benchmark-net
depends_on:
postgres:
condition: service_healthy
healthcheck:
test: ["CMD", "wget", "--quiet", "--tries=1", "--spider", "http://localhost:8080"]
test: ["CMD-SHELL", "wget --quiet --server-response --tries=1 http://localhost:7447 2>&1 | grep -E 'HTTP/[0-9.]+ (101|200|400|404)' >/dev/null"]
interval: 30s
timeout: 10s
retries: 3
@@ -95,9 +95,7 @@ services:
# Strfry
strfry:
build:
context: ./external/strfry
dockerfile: ../../Dockerfile.strfry
image: ghcr.io/hoytech/strfry:latest
container_name: benchmark-strfry
environment:
- STRFRY_DB_PATH=/data/strfry.lmdb
@@ -110,7 +108,7 @@ services:
networks:
- benchmark-net
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:8080"]
test: ["CMD-SHELL", "wget --quiet --server-response --tries=1 http://127.0.0.1:8080 2>&1 | grep -E 'HTTP/[0-9.]+ (101|200|400|404|426)' >/dev/null"]
interval: 30s
timeout: 10s
retries: 3
@@ -158,7 +156,7 @@ services:
nostr-rs-relay:
condition: service_healthy
environment:
- BENCHMARK_TARGETS=next-orly:8001,khatru-sqlite:8002,khatru-badger:8003,relayer-basic:8004,strfry:8005,nostr-rs-relay:8006
- BENCHMARK_TARGETS=next-orly:8080,khatru-sqlite:3334,khatru-badger:3334,relayer-basic:7447,strfry:8080,nostr-rs-relay:8080
- BENCHMARK_EVENTS=10000
- BENCHMARK_WORKERS=8
- BENCHMARK_DURATION=60s
@@ -174,6 +172,25 @@ services:
/app/benchmark-runner --output-dir=/reports
"
# PostgreSQL for relayer-basic
postgres:
image: postgres:16-alpine
container_name: benchmark-postgres
environment:
- POSTGRES_DB=relayerdb
- POSTGRES_USER=relayer
- POSTGRES_PASSWORD=relayerpass
volumes:
- ./data/postgres:/var/lib/postgresql/data
networks:
- benchmark-net
healthcheck:
test: ["CMD-SHELL", "pg_isready -U relayer -d relayerdb"]
interval: 10s
timeout: 5s
retries: 5
start_period: 20s
networks:
benchmark-net:
driver: bridge

View File

@@ -2,23 +2,26 @@ package main
import (
"context"
"crypto/rand"
"flag"
"fmt"
"log"
"os"
"path/filepath"
"runtime"
"sort"
"strings"
"sync"
"time"
"next.orly.dev/pkg/crypto/p256k"
"next.orly.dev/pkg/database"
"next.orly.dev/pkg/encoders/envelopes/eventenvelope"
"next.orly.dev/pkg/encoders/event"
"next.orly.dev/pkg/encoders/filter"
"next.orly.dev/pkg/encoders/kind"
"next.orly.dev/pkg/encoders/tag"
"next.orly.dev/pkg/encoders/timestamp"
"next.orly.dev/pkg/protocol/ws"
)
type BenchmarkConfig struct {
@@ -28,6 +31,11 @@ type BenchmarkConfig struct {
TestDuration time.Duration
BurstPattern bool
ReportInterval time.Duration
// Network load options
RelayURL string
NetWorkers int
NetRate int // events/sec per worker
}
type BenchmarkResult struct {
@@ -36,8 +44,10 @@ type BenchmarkResult struct {
TotalEvents int
EventsPerSecond float64
AvgLatency time.Duration
P90Latency time.Duration
P95Latency time.Duration
P99Latency time.Duration
Bottom10Avg time.Duration
SuccessRate float64
ConcurrentWorkers int
MemoryUsed uint64
@@ -52,8 +62,15 @@ type Benchmark struct {
}
func main() {
// lol.SetLogLevel("trace")
config := parseFlags()
if config.RelayURL != "" {
// Network mode: connect to relay and generate traffic
runNetworkLoad(config)
return
}
fmt.Printf("Starting Nostr Relay Benchmark\n")
fmt.Printf("Data Directory: %s\n", config.DataDir)
fmt.Printf(
@@ -64,13 +81,12 @@ func main() {
benchmark := NewBenchmark(config)
defer benchmark.Close()
// Run benchmark tests
benchmark.RunPeakThroughputTest()
benchmark.RunBurstPatternTest()
benchmark.RunMixedReadWriteTest()
// Run benchmark suite twice with pauses
benchmark.RunSuite()
// Generate report
// Generate reports
benchmark.GenerateReport()
benchmark.GenerateAsciidocReport()
}
func parseFlags() *BenchmarkConfig {
@@ -97,10 +113,168 @@ func parseFlags() *BenchmarkConfig {
"Report interval",
)
// Network mode flags
flag.StringVar(
&config.RelayURL, "relay-url", "",
"Relay WebSocket URL (enables network mode if set)",
)
flag.IntVar(
&config.NetWorkers, "net-workers", runtime.NumCPU(),
"Network workers (connections)",
)
flag.IntVar(&config.NetRate, "net-rate", 20, "Events per second per worker")
flag.Parse()
return config
}
func runNetworkLoad(cfg *BenchmarkConfig) {
fmt.Printf(
"Network mode: relay=%s workers=%d rate=%d ev/s per worker duration=%s\n",
cfg.RelayURL, cfg.NetWorkers, cfg.NetRate, cfg.TestDuration,
)
// Create a timeout context for benchmark control only, not for connections
timeoutCtx, cancel := context.WithTimeout(
context.Background(), cfg.TestDuration,
)
defer cancel()
// Use a separate background context for relay connections to avoid
// cancelling the server when the benchmark timeout expires
connCtx := context.Background()
var wg sync.WaitGroup
if cfg.NetWorkers <= 0 {
cfg.NetWorkers = 1
}
if cfg.NetRate <= 0 {
cfg.NetRate = 1
}
for i := 0; i < cfg.NetWorkers; i++ {
wg.Add(1)
go func(workerID int) {
defer wg.Done()
// Connect to relay using non-cancellable context
rl, err := ws.RelayConnect(connCtx, cfg.RelayURL)
if err != nil {
fmt.Printf(
"worker %d: failed to connect to %s: %v\n", workerID,
cfg.RelayURL, err,
)
return
}
defer rl.Close()
fmt.Printf("worker %d: connected to %s\n", workerID, cfg.RelayURL)
// Signer for this worker
var keys p256k.Signer
if err := keys.Generate(); err != nil {
fmt.Printf("worker %d: keygen failed: %v\n", workerID, err)
return
}
// Start a concurrent subscriber that listens for events published by this worker
// Build a filter that matches this worker's pubkey and kind=1, with since set to now
since := time.Now().Unix()
go func() {
f := filter.New()
f.Kinds = kind.NewS(kind.TextNote)
f.Authors = tag.NewWithCap(1)
f.Authors.T = append(f.Authors.T, keys.Pub())
f.Since = timestamp.FromUnix(since)
sub, err := rl.Subscribe(connCtx, filter.NewS(f))
if err != nil {
fmt.Printf(
"worker %d: subscribe error: %v\n", workerID, err,
)
return
}
defer sub.Unsub()
recv := 0
for {
select {
case <-timeoutCtx.Done():
fmt.Printf(
"worker %d: subscriber exiting after %d events (benchmark timeout: %v)\n",
workerID, recv, timeoutCtx.Err(),
)
return
case <-rl.Context().Done():
fmt.Printf(
"worker %d: relay connection closed; cause=%v lastErr=%v\n",
workerID, rl.ConnectionCause(), rl.LastError(),
)
return
case <-sub.EndOfStoredEvents:
// continue streaming live events
case ev := <-sub.Events:
if ev == nil {
continue
}
recv++
if recv%100 == 0 {
fmt.Printf(
"worker %d: received %d matching events\n",
workerID, recv,
)
}
ev.Free()
}
}
}()
interval := time.Second / time.Duration(cfg.NetRate)
ticker := time.NewTicker(interval)
defer ticker.Stop()
count := 0
for {
select {
case <-timeoutCtx.Done():
fmt.Printf(
"worker %d: stopping after %d publishes\n", workerID,
count,
)
return
case <-ticker.C:
// Build and sign a simple text note event
ev := event.New()
ev.Kind = uint16(1)
ev.CreatedAt = time.Now().Unix()
ev.Tags = tag.NewS()
ev.Content = []byte(fmt.Sprintf(
"bench worker=%d n=%d", workerID, count,
))
if err := ev.Sign(&keys); err != nil {
fmt.Printf("worker %d: sign error: %v\n", workerID, err)
ev.Free()
continue
}
// Async publish: don't wait for OK; this greatly increases throughput
ch := rl.Write(eventenvelope.NewSubmissionWith(ev).Marshal(nil))
// Non-blocking error check
select {
case err := <-ch:
if err != nil {
fmt.Printf(
"worker %d: write error: %v\n", workerID, err,
)
}
default:
}
if count%100 == 0 {
fmt.Printf(
"worker %d: sent %d events\n", workerID, count,
)
}
ev.Free()
count++
}
}
}(i)
}
wg.Wait()
}
func NewBenchmark(config *BenchmarkConfig) *Benchmark {
// Clean up existing data directory
os.RemoveAll(config.DataDir)
@@ -113,11 +287,16 @@ func NewBenchmark(config *BenchmarkConfig) *Benchmark {
log.Fatalf("Failed to create database: %v", err)
}
return &Benchmark{
b := &Benchmark{
config: config,
db: db,
results: make([]*BenchmarkResult, 0),
}
// Trigger compaction/GC before starting tests
b.compactDatabase()
return b
}
func (b *Benchmark) Close() {
@@ -126,6 +305,42 @@ func (b *Benchmark) Close() {
}
}
// RunSuite runs the five tests with a 10s pause between them and repeats the
// set twice with a 10s pause between rounds.
func (b *Benchmark) RunSuite() {
for round := 1; round <= 2; round++ {
fmt.Printf("\n=== Starting test round %d/2 ===\n", round)
fmt.Printf("RunPeakThroughputTest..\n")
b.RunPeakThroughputTest()
time.Sleep(10 * time.Second)
fmt.Printf("RunBurstPatternTest..\n")
b.RunBurstPatternTest()
time.Sleep(10 * time.Second)
fmt.Printf("RunMixedReadWriteTest..\n")
b.RunMixedReadWriteTest()
time.Sleep(10 * time.Second)
fmt.Printf("RunQueryTest..\n")
b.RunQueryTest()
time.Sleep(10 * time.Second)
fmt.Printf("RunConcurrentQueryStoreTest..\n")
b.RunConcurrentQueryStoreTest()
if round < 2 {
fmt.Println("\nPausing 10s before next round...")
time.Sleep(10 * time.Second)
}
fmt.Println("\n=== Test round completed ===\n")
}
}
// compactDatabase triggers a Badger value log GC before starting tests.
func (b *Benchmark) compactDatabase() {
if b.db == nil || b.db.DB == nil {
return
}
// Attempt value log GC. Ignore errors; this is best-effort.
_ = b.db.DB.RunValueLogGC(0.5)
}
func (b *Benchmark) RunPeakThroughputTest() {
fmt.Println("\n=== Peak Throughput Test ===")
@@ -185,8 +400,10 @@ func (b *Benchmark) RunPeakThroughputTest() {
if len(latencies) > 0 {
result.AvgLatency = calculateAvgLatency(latencies)
result.P90Latency = calculatePercentileLatency(latencies, 0.90)
result.P95Latency = calculatePercentileLatency(latencies, 0.95)
result.P99Latency = calculatePercentileLatency(latencies, 0.99)
result.Bottom10Avg = calculateBottom10Avg(latencies)
}
result.SuccessRate = float64(totalEvents) / float64(b.config.NumEvents) * 100
@@ -206,8 +423,10 @@ func (b *Benchmark) RunPeakThroughputTest() {
fmt.Printf("Duration: %v\n", duration)
fmt.Printf("Events/sec: %.2f\n", result.EventsPerSecond)
fmt.Printf("Avg latency: %v\n", result.AvgLatency)
fmt.Printf("P90 latency: %v\n", result.P90Latency)
fmt.Printf("P95 latency: %v\n", result.P95Latency)
fmt.Printf("P99 latency: %v\n", result.P99Latency)
fmt.Printf("Bottom 10%% Avg latency: %v\n", result.Bottom10Avg)
}
func (b *Benchmark) RunBurstPatternTest() {
@@ -282,8 +501,10 @@ func (b *Benchmark) RunBurstPatternTest() {
if len(latencies) > 0 {
result.AvgLatency = calculateAvgLatency(latencies)
result.P90Latency = calculatePercentileLatency(latencies, 0.90)
result.P95Latency = calculatePercentileLatency(latencies, 0.95)
result.P99Latency = calculatePercentileLatency(latencies, 0.99)
result.Bottom10Avg = calculateBottom10Avg(latencies)
}
result.SuccessRate = float64(totalEvents) / float64(eventIndex) * 100
@@ -387,8 +608,10 @@ func (b *Benchmark) RunMixedReadWriteTest() {
allLatencies := append(writeLatencies, readLatencies...)
if len(allLatencies) > 0 {
result.AvgLatency = calculateAvgLatency(allLatencies)
result.P90Latency = calculatePercentileLatency(allLatencies, 0.90)
result.P95Latency = calculatePercentileLatency(allLatencies, 0.95)
result.P99Latency = calculatePercentileLatency(allLatencies, 0.99)
result.Bottom10Avg = calculateBottom10Avg(allLatencies)
}
result.SuccessRate = float64(totalWrites+totalReads) / float64(len(events)) * 100
@@ -408,21 +631,343 @@ func (b *Benchmark) RunMixedReadWriteTest() {
fmt.Printf("Combined ops/sec: %.2f\n", result.EventsPerSecond)
}
// RunQueryTest specifically benchmarks the QueryEvents function performance
func (b *Benchmark) RunQueryTest() {
fmt.Println("\n=== Query Test ===")
start := time.Now()
var totalQueries int64
var queryLatencies []time.Duration
var errors []error
var mu sync.Mutex
// Pre-populate with events for querying
numSeedEvents := 10000
seedEvents := b.generateEvents(numSeedEvents)
ctx := context.Background()
fmt.Printf(
"Pre-populating database with %d events for query tests...\n",
numSeedEvents,
)
for _, ev := range seedEvents {
b.db.SaveEvent(ctx, ev)
}
// Create different types of filters for querying
filters := []*filter.F{
func() *filter.F { // Kind filter
f := filter.New()
f.Kinds = kind.NewS(kind.TextNote)
limit := uint(100)
f.Limit = &limit
return f
}(),
func() *filter.F { // Tag filter
f := filter.New()
f.Tags = tag.NewS(
tag.NewFromBytesSlice(
[]byte("t"), []byte("benchmark"),
),
)
limit := uint(100)
f.Limit = &limit
return f
}(),
func() *filter.F { // Mixed filter
f := filter.New()
f.Kinds = kind.NewS(kind.TextNote)
f.Tags = tag.NewS(
tag.NewFromBytesSlice(
[]byte("t"), []byte("benchmark"),
),
)
limit := uint(50)
f.Limit = &limit
return f
}(),
}
var wg sync.WaitGroup
// Start query workers
for i := 0; i < b.config.ConcurrentWorkers; i++ {
wg.Add(1)
go func(workerID int) {
defer wg.Done()
filterIndex := workerID % len(filters)
queryCount := 0
for time.Since(start) < b.config.TestDuration {
// Rotate through different filters
f := filters[filterIndex]
filterIndex = (filterIndex + 1) % len(filters)
// Execute query
queryStart := time.Now()
events, err := b.db.QueryEvents(ctx, f)
queryLatency := time.Since(queryStart)
mu.Lock()
if err != nil {
errors = append(errors, err)
} else {
totalQueries++
queryLatencies = append(queryLatencies, queryLatency)
// Free event memory
for _, ev := range events {
ev.Free()
}
}
mu.Unlock()
queryCount++
if queryCount%10 == 0 {
time.Sleep(10 * time.Millisecond) // Small delay every 10 queries
}
}
}(i)
}
wg.Wait()
duration := time.Since(start)
// Calculate metrics
result := &BenchmarkResult{
TestName: "Query Performance",
Duration: duration,
TotalEvents: int(totalQueries),
EventsPerSecond: float64(totalQueries) / duration.Seconds(),
ConcurrentWorkers: b.config.ConcurrentWorkers,
MemoryUsed: getMemUsage(),
}
if len(queryLatencies) > 0 {
result.AvgLatency = calculateAvgLatency(queryLatencies)
result.P90Latency = calculatePercentileLatency(queryLatencies, 0.90)
result.P95Latency = calculatePercentileLatency(queryLatencies, 0.95)
result.P99Latency = calculatePercentileLatency(queryLatencies, 0.99)
result.Bottom10Avg = calculateBottom10Avg(queryLatencies)
}
result.SuccessRate = 100.0 // No specific target count for queries
for _, err := range errors {
result.Errors = append(result.Errors, err.Error())
}
b.mu.Lock()
b.results = append(b.results, result)
b.mu.Unlock()
fmt.Printf(
"Query test completed: %d queries in %v\n", totalQueries, duration,
)
fmt.Printf("Queries/sec: %.2f\n", result.EventsPerSecond)
fmt.Printf("Avg query latency: %v\n", result.AvgLatency)
fmt.Printf("P95 query latency: %v\n", result.P95Latency)
fmt.Printf("P99 query latency: %v\n", result.P99Latency)
}
// RunConcurrentQueryStoreTest benchmarks the performance of concurrent query and store operations
func (b *Benchmark) RunConcurrentQueryStoreTest() {
fmt.Println("\n=== Concurrent Query/Store Test ===")
start := time.Now()
var totalQueries, totalWrites int64
var queryLatencies, writeLatencies []time.Duration
var errors []error
var mu sync.Mutex
// Pre-populate with some events
numSeedEvents := 5000
seedEvents := b.generateEvents(numSeedEvents)
ctx := context.Background()
fmt.Printf(
"Pre-populating database with %d events for concurrent query/store test...\n",
numSeedEvents,
)
for _, ev := range seedEvents {
b.db.SaveEvent(ctx, ev)
}
// Generate events for writing during the test
writeEvents := b.generateEvents(b.config.NumEvents)
// Create filters for querying
filters := []*filter.F{
func() *filter.F { // Recent events filter
f := filter.New()
f.Since = timestamp.FromUnix(time.Now().Add(-10 * time.Minute).Unix())
limit := uint(100)
f.Limit = &limit
return f
}(),
func() *filter.F { // Kind and tag filter
f := filter.New()
f.Kinds = kind.NewS(kind.TextNote)
f.Tags = tag.NewS(
tag.NewFromBytesSlice(
[]byte("t"), []byte("benchmark"),
),
)
limit := uint(50)
f.Limit = &limit
return f
}(),
}
var wg sync.WaitGroup
// Half of the workers will be readers, half will be writers
numReaders := b.config.ConcurrentWorkers / 2
numWriters := b.config.ConcurrentWorkers - numReaders
// Start query workers (readers)
for i := 0; i < numReaders; i++ {
wg.Add(1)
go func(workerID int) {
defer wg.Done()
filterIndex := workerID % len(filters)
queryCount := 0
for time.Since(start) < b.config.TestDuration {
// Select a filter
f := filters[filterIndex]
filterIndex = (filterIndex + 1) % len(filters)
// Execute query
queryStart := time.Now()
events, err := b.db.QueryEvents(ctx, f)
queryLatency := time.Since(queryStart)
mu.Lock()
if err != nil {
errors = append(errors, err)
} else {
totalQueries++
queryLatencies = append(queryLatencies, queryLatency)
// Free event memory
for _, ev := range events {
ev.Free()
}
}
mu.Unlock()
queryCount++
if queryCount%5 == 0 {
time.Sleep(5 * time.Millisecond) // Small delay
}
}
}(i)
}
// Start write workers
for i := 0; i < numWriters; i++ {
wg.Add(1)
go func(workerID int) {
defer wg.Done()
eventIndex := workerID
writeCount := 0
for time.Since(start) < b.config.TestDuration && eventIndex < len(writeEvents) {
// Write operation
writeStart := time.Now()
_, _, err := b.db.SaveEvent(ctx, writeEvents[eventIndex])
writeLatency := time.Since(writeStart)
mu.Lock()
if err != nil {
errors = append(errors, err)
} else {
totalWrites++
writeLatencies = append(writeLatencies, writeLatency)
}
mu.Unlock()
eventIndex += numWriters
writeCount++
if writeCount%10 == 0 {
time.Sleep(10 * time.Millisecond) // Small delay every 10 writes
}
}
}(i)
}
wg.Wait()
duration := time.Since(start)
// Calculate metrics
totalOps := totalQueries + totalWrites
result := &BenchmarkResult{
TestName: "Concurrent Query/Store",
Duration: duration,
TotalEvents: int(totalOps),
EventsPerSecond: float64(totalOps) / duration.Seconds(),
ConcurrentWorkers: b.config.ConcurrentWorkers,
MemoryUsed: getMemUsage(),
}
// Calculate combined latencies for overall metrics
allLatencies := append(queryLatencies, writeLatencies...)
if len(allLatencies) > 0 {
result.AvgLatency = calculateAvgLatency(allLatencies)
result.P90Latency = calculatePercentileLatency(allLatencies, 0.90)
result.P95Latency = calculatePercentileLatency(allLatencies, 0.95)
result.P99Latency = calculatePercentileLatency(allLatencies, 0.99)
result.Bottom10Avg = calculateBottom10Avg(allLatencies)
}
result.SuccessRate = 100.0 // No specific target
for _, err := range errors {
result.Errors = append(result.Errors, err.Error())
}
b.mu.Lock()
b.results = append(b.results, result)
b.mu.Unlock()
// Calculate separate metrics for queries and writes
var queryAvg, writeAvg time.Duration
if len(queryLatencies) > 0 {
queryAvg = calculateAvgLatency(queryLatencies)
}
if len(writeLatencies) > 0 {
writeAvg = calculateAvgLatency(writeLatencies)
}
fmt.Printf(
"Concurrent test completed: %d operations (%d queries, %d writes) in %v\n",
totalOps, totalQueries, totalWrites, duration,
)
fmt.Printf("Operations/sec: %.2f\n", result.EventsPerSecond)
fmt.Printf("Avg latency: %v\n", result.AvgLatency)
fmt.Printf("Avg query latency: %v\n", queryAvg)
fmt.Printf("Avg write latency: %v\n", writeAvg)
fmt.Printf("P95 latency: %v\n", result.P95Latency)
fmt.Printf("P99 latency: %v\n", result.P99Latency)
}
func (b *Benchmark) generateEvents(count int) []*event.E {
events := make([]*event.E, count)
now := timestamp.Now()
// Generate a keypair for signing all events
var keys p256k.Signer
if err := keys.Generate(); err != nil {
log.Fatalf("Failed to generate keys for benchmark events: %v", err)
}
for i := 0; i < count; i++ {
ev := event.New()
// Generate random 32-byte ID
ev.ID = make([]byte, 32)
rand.Read(ev.ID)
// Generate random 32-byte pubkey
ev.Pubkey = make([]byte, 32)
rand.Read(ev.Pubkey)
ev.CreatedAt = now.I64()
ev.Kind = kind.TextNote.K
ev.Content = []byte(fmt.Sprintf(
@@ -437,6 +982,11 @@ func (b *Benchmark) generateEvents(count int) []*event.E {
),
)
// Properly sign the event instead of generating fake signatures
if err := ev.Sign(&keys); err != nil {
log.Fatalf("Failed to sign event %d: %v", i, err)
}
events[i] = ev
}
@@ -460,8 +1010,10 @@ func (b *Benchmark) GenerateReport() {
fmt.Printf("Concurrent Workers: %d\n", result.ConcurrentWorkers)
fmt.Printf("Memory Used: %d MB\n", result.MemoryUsed/(1024*1024))
fmt.Printf("Avg Latency: %v\n", result.AvgLatency)
fmt.Printf("P90 Latency: %v\n", result.P90Latency)
fmt.Printf("P95 Latency: %v\n", result.P95Latency)
fmt.Printf("P99 Latency: %v\n", result.P99Latency)
fmt.Printf("Bottom 10%% Avg Latency: %v\n", result.Bottom10Avg)
if len(result.Errors) > 0 {
fmt.Printf("Errors (%d):\n", len(result.Errors))
@@ -524,8 +1076,14 @@ func (b *Benchmark) saveReportToFile(path string) error {
),
)
file.WriteString(fmt.Sprintf("Avg Latency: %v\n", result.AvgLatency))
file.WriteString(fmt.Sprintf("P90 Latency: %v\n", result.P90Latency))
file.WriteString(fmt.Sprintf("P95 Latency: %v\n", result.P95Latency))
file.WriteString(fmt.Sprintf("P99 Latency: %v\n", result.P99Latency))
file.WriteString(
fmt.Sprintf(
"Bottom 10%% Avg Latency: %v\n", result.Bottom10Avg,
),
)
file.WriteString(
fmt.Sprintf(
"Memory: %d MB\n", result.MemoryUsed/(1024*1024),
@@ -537,6 +1095,41 @@ func (b *Benchmark) saveReportToFile(path string) error {
return nil
}
// GenerateAsciidocReport creates a simple AsciiDoc report alongside the text report.
func (b *Benchmark) GenerateAsciidocReport() error {
path := filepath.Join(b.config.DataDir, "benchmark_report.adoc")
file, err := os.Create(path)
if err != nil {
return err
}
defer file.Close()
file.WriteString("= NOSTR Relay Benchmark Results\n\n")
file.WriteString(
fmt.Sprintf(
"Generated: %s\n\n", time.Now().Format(time.RFC3339),
),
)
file.WriteString("[cols=\"1,^1,^1,^1,^1,^1\",options=\"header\"]\n")
file.WriteString("|===\n")
file.WriteString("| Test | Events/sec | Avg Latency | P90 | P95 | Bottom 10% Avg\n")
b.mu.RLock()
defer b.mu.RUnlock()
for _, r := range b.results {
file.WriteString(fmt.Sprintf("| %s\n", r.TestName))
file.WriteString(fmt.Sprintf("| %.2f\n", r.EventsPerSecond))
file.WriteString(fmt.Sprintf("| %v\n", r.AvgLatency))
file.WriteString(fmt.Sprintf("| %v\n", r.P90Latency))
file.WriteString(fmt.Sprintf("| %v\n", r.P95Latency))
file.WriteString(fmt.Sprintf("| %v\n", r.Bottom10Avg))
}
file.WriteString("|===\n")
fmt.Printf("AsciiDoc report saved to: %s\n", path)
return nil
}
// Helper functions
func calculateAvgLatency(latencies []time.Duration) time.Duration {
@@ -557,13 +1150,48 @@ func calculatePercentileLatency(
if len(latencies) == 0 {
return 0
}
// Simple percentile calculation - in production would sort first
index := int(float64(len(latencies)) * percentile)
if index >= len(latencies) {
index = len(latencies) - 1
// Sort a copy to avoid mutating the caller's slice
copySlice := make([]time.Duration, len(latencies))
copy(copySlice, latencies)
sort.Slice(
copySlice, func(i, j int) bool { return copySlice[i] < copySlice[j] },
)
index := int(float64(len(copySlice)-1) * percentile)
if index < 0 {
index = 0
}
return latencies[index]
if index >= len(copySlice) {
index = len(copySlice) - 1
}
return copySlice[index]
}
// calculateBottom10Avg returns the average latency of the slowest 10% of samples.
func calculateBottom10Avg(latencies []time.Duration) time.Duration {
if len(latencies) == 0 {
return 0
}
copySlice := make([]time.Duration, len(latencies))
copy(copySlice, latencies)
sort.Slice(
copySlice, func(i, j int) bool { return copySlice[i] < copySlice[j] },
)
start := int(float64(len(copySlice)) * 0.9)
if start < 0 {
start = 0
}
if start >= len(copySlice) {
start = len(copySlice) - 1
}
var total time.Duration
for i := start; i < len(copySlice); i++ {
total += copySlice[i]
}
count := len(copySlice) - start
if count <= 0 {
return 0
}
return total / time.Duration(count)
}
func getMemUsage() uint64 {

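A standalone sketch of the corrected percentile approach this diff introduces — sort a copy, then index at (n-1)·p — with a small worked sample (the values are illustrative):

package main

import (
	"fmt"
	"sort"
	"time"
)

// percentile returns the p-quantile (0 <= p <= 1) of a latency sample,
// sorting a copy so the caller's slice is left untouched, as
// calculatePercentileLatency now does.
func percentile(latencies []time.Duration, p float64) time.Duration {
	if len(latencies) == 0 {
		return 0
	}
	s := make([]time.Duration, len(latencies))
	copy(s, latencies)
	sort.Slice(s, func(i, j int) bool { return s[i] < s[j] })
	return s[int(float64(len(s)-1)*p)]
}

func main() {
	sample := []time.Duration{
		5 * time.Millisecond, 1 * time.Millisecond, 9 * time.Millisecond,
		2 * time.Millisecond, 7 * time.Millisecond,
	}
	// Sorted: 1, 2, 5, 7, 9. Index int(4*0.9) = 3, so P90 here is 7ms.
	fmt.Println(percentile(sample, 0.90))
}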
156
cmd/benchmark/profile.sh Executable file

@@ -0,0 +1,156 @@
#!/usr/bin/env bash
set -euo pipefail
# Runs the ORLY relay with CPU profiling enabled and opens the resulting
# pprof profile in a local web UI.
#
# Usage:
# ./profile.sh [duration_seconds]
#
# - Builds the relay.
# - Starts it with ORLY_PPROF=cpu and minimal logging.
# - Waits for the profile path printed at startup.
# - Runs for DURATION seconds (default 10), then stops the relay to flush the
# CPU profile to disk.
# - Launches `go tool pprof -http=:8000` for convenient browsing.
#
# Notes:
# - The profile file path is detected from the relay's stdout/stderr lines
# emitted by github.com/pkg/profile, typically like:
# profile: cpu profiling enabled, path: /tmp/profile123456/cpu.pprof
# - You can change DURATION by passing a number of seconds as the first arg
# or by setting DURATION env var.
SCRIPT_DIR="$(cd -- "$(dirname -- "${BASH_SOURCE[0]}")" && pwd)"
REPO_ROOT="$(cd -- "${SCRIPT_DIR}/../.." && pwd)"
cd "$REPO_ROOT"
DURATION="${1:-${DURATION:-10}}"
PPROF_HTTP_PORT="${PPROF_HTTP_PORT:-8000}"
# Load generation controls
LOAD_ENABLED="${LOAD_ENABLED:-1}" # set to 0 to disable load
# Use the benchmark main package in cmd/benchmark as the load generator
BENCHMARK_PKG_DIR="$REPO_ROOT/cmd/benchmark"
BENCHMARK_BIN="${BENCHMARK_BIN:-}" # if empty, we will build to $RUN_DIR/benchmark
BENCHMARK_EVENTS="${BENCHMARK_EVENTS:-}" # optional override for -events
BENCHMARK_DURATION="${BENCHMARK_DURATION:-}" # optional override for -duration (e.g. 30s); defaults to DURATION seconds
BIN="$REPO_ROOT/next.orly.dev"
LOG_DIR="${LOG_DIR:-$REPO_ROOT/cmd/benchmark/reports}"
mkdir -p "$LOG_DIR"
RUN_TS="$(date +%Y%m%d_%H%M%S)"
RUN_DIR="$LOG_DIR/profile_run_${RUN_TS}"
mkdir -p "$RUN_DIR"
LOG_FILE="$RUN_DIR/relay.log"
LOAD_LOG_FILE="$RUN_DIR/load.log"
echo "[profile.sh] Building relay binary ..."
go build -o "$BIN" .
# Ensure we clean up the child process on exit
RELAY_PID=""
LOAD_PID=""
cleanup() {
if [[ -n "$LOAD_PID" ]] && kill -0 "$LOAD_PID" 2>/dev/null; then
echo "[profile.sh] Stopping load generator (pid=$LOAD_PID) ..."
kill -INT "$LOAD_PID" 2>/dev/null || true
sleep 0.5
kill -TERM "$LOAD_PID" 2>/dev/null || true
fi
if [[ -n "$RELAY_PID" ]] && kill -0 "$RELAY_PID" 2>/dev/null; then
echo "[profile.sh] Stopping relay (pid=$RELAY_PID) ..."
kill -INT "$RELAY_PID" 2>/dev/null || true
# give it a moment to exit and flush profile
sleep 1
kill -TERM "$RELAY_PID" 2>/dev/null || true
fi
}
trap cleanup EXIT
# Start the relay with CPU profiling enabled. Capture both stdout and stderr.
echo "[profile.sh] Starting relay with CPU profiling enabled ..."
(
ORLY_LOG_LEVEL=off \
ORLY_LISTEN="${ORLY_LISTEN:-127.0.0.1}" \
ORLY_PORT="${ORLY_PORT:-3334}" \
ORLY_PPROF=cpu \
"$BIN"
) >"$LOG_FILE" 2>&1 &
RELAY_PID=$!
echo "[profile.sh] Relay started with pid $RELAY_PID; logging to $LOG_FILE"
# Wait until the profile path is printed. Timeout after reasonable period.
PPROF_FILE=""
START_TIME=$(date +%s)
TIMEOUT=30
echo "[profile.sh] Waiting for profile path to appear in relay output ..."
while :; do
if grep -Eo "/tmp/profile[^ ]+/cpu\.pprof" "$LOG_FILE" >/dev/null 2>&1; then
PPROF_FILE=$(grep -Eo "/tmp/profile[^ ]+/cpu\.pprof" "$LOG_FILE" | tail -n1)
break
fi
NOW=$(date +%s)
if (( NOW - START_TIME > TIMEOUT )); then
echo "[profile.sh] ERROR: Timed out waiting for profile path in $LOG_FILE" >&2
echo "Last 50 log lines:" >&2
tail -n 50 "$LOG_FILE" >&2
exit 1
fi
sleep 0.3
done
echo "[profile.sh] Detected profile file: $PPROF_FILE"
# Optionally start load generator to exercise the relay
if [[ "$LOAD_ENABLED" == "1" ]]; then
# Build benchmark binary if not provided
if [[ -z "$BENCHMARK_BIN" ]]; then
BENCHMARK_BIN="$RUN_DIR/benchmark"
echo "[profile.sh] Building benchmark load generator ($BENCHMARK_PKG_DIR) ..."
go build -o "$BENCHMARK_BIN" "$BENCHMARK_PKG_DIR"
fi
BENCH_DB_DIR="$RUN_DIR/benchdb"
mkdir -p "$BENCH_DB_DIR"
DURATION_ARG="${BENCHMARK_DURATION:-${DURATION}s}"
EXTRA_EVENTS=""
if [[ -n "$BENCHMARK_EVENTS" ]]; then
EXTRA_EVENTS="-events=$BENCHMARK_EVENTS"
fi
echo "[profile.sh] Starting benchmark load generator for duration $DURATION_ARG ..."
RELAY_URL="ws://${ORLY_LISTEN:-127.0.0.1}:${ORLY_PORT:-3334}"
echo "[profile.sh] Using relay URL: $RELAY_URL"
# Background the generator directly (wrapping it in a subshell would hide its PID from $!)
"$BENCHMARK_BIN" -relay-url="$RELAY_URL" -net-workers="${NET_WORKERS:-2}" -net-rate="${NET_RATE:-20}" -duration="$DURATION_ARG" $EXTRA_EVENTS \
>"$LOAD_LOG_FILE" 2>&1 &
LOAD_PID=$!
echo "[profile.sh] Load generator started (pid=$LOAD_PID); logging to $LOAD_LOG_FILE"
else
echo "[profile.sh] LOAD_ENABLED=0; not starting load generator."
fi
echo "[profile.sh] Letting the relay run for ${DURATION}s to collect CPU samples ..."
sleep "$DURATION"
# Stop the relay to flush the CPU profile
cleanup
# Disable trap so we don't double-kill
trap - EXIT
# Wait briefly to ensure the profile file is finalized
for i in {1..20}; do
if [[ -s "$PPROF_FILE" ]]; then
break
fi
sleep 0.2
done
if [[ ! -s "$PPROF_FILE" ]]; then
echo "[profile.sh] WARNING: Profile file exists but is empty or missing: $PPROF_FILE" >&2
fi
# Launch pprof HTTP UI
echo "[profile.sh] Launching pprof web UI (http://localhost:${PPROF_HTTP_PORT}) ..."
exec go tool pprof -http=":${PPROF_HTTP_PORT}" "$BIN" "$PPROF_FILE"
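The startup line the script greps for is printed by github.com/pkg/profile. A minimal sketch of gating CPU profiling on the ORLY_PPROF variable in a Go main — runRelay is a hypothetical stand-in for the relay's real entry point:

package main

import (
	"os"

	"github.com/pkg/profile"
)

func main() {
	// With ORLY_PPROF=cpu, pkg/profile prints the profile path at startup
	// and flushes cpu.pprof when Stop runs; its default shutdown hook also
	// stops on SIGINT, which is why profile.sh sends kill -INT to the relay.
	if os.Getenv("ORLY_PPROF") == "cpu" {
		defer profile.Start(profile.CPUProfile).Stop()
	}
	runRelay()
}

// runRelay is a placeholder; the real relay blocks here serving traffic.
func runRelay() { select {} }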

View File

@@ -0,0 +1,140 @@
================================================================
NOSTR RELAY BENCHMARK AGGREGATE REPORT
================================================================
Generated: 2025-09-20T11:04:39+00:00
Benchmark Configuration:
Events per test: 10000
Concurrent workers: 8
Test duration: 60s
Relays tested: 6
================================================================
SUMMARY BY RELAY
================================================================
Relay: next-orly
----------------------------------------
Status: COMPLETED
Events/sec: 1035.42
Events/sec: 659.20
Events/sec: 1094.56
Success Rate: 100.0%
Success Rate: 100.0%
Success Rate: 100.0%
Avg Latency: 470.069µs
Bottom 10% Avg Latency: 750.491µs
Avg Latency: 190.573µs
P95 Latency: 693.101µs
P95 Latency: 289.761µs
P95 Latency: 22.450848ms
Relay: khatru-sqlite
----------------------------------------
Status: COMPLETED
Events/sec: 1105.61
Events/sec: 624.87
Events/sec: 1070.10
Success Rate: 100.0%
Success Rate: 100.0%
Success Rate: 100.0%
Avg Latency: 458.035µs
Bottom 10% Avg Latency: 702.193µs
Avg Latency: 193.997µs
P95 Latency: 660.608µs
P95 Latency: 302.666µs
P95 Latency: 23.653412ms
Relay: khatru-badger
----------------------------------------
Status: COMPLETED
Events/sec: 1040.11
Events/sec: 663.14
Events/sec: 1065.58
Success Rate: 100.0%
Success Rate: 100.0%
Success Rate: 100.0%
Avg Latency: 454.784µs
Bottom 10% Avg Latency: 706.219µs
Avg Latency: 193.914µs
P95 Latency: 654.637µs
P95 Latency: 296.525µs
P95 Latency: 21.642655ms
Relay: relayer-basic
----------------------------------------
Status: COMPLETED
Events/sec: 1104.88
Events/sec: 642.17
Events/sec: 1079.27
Success Rate: 100.0%
Success Rate: 100.0%
Success Rate: 100.0%
Avg Latency: 433.89µs
Bottom 10% Avg Latency: 653.813µs
Avg Latency: 186.306µs
P95 Latency: 617.868µs
P95 Latency: 279.192µs
P95 Latency: 21.247322ms
Relay: strfry
----------------------------------------
Status: COMPLETED
Events/sec: 1090.49
Events/sec: 652.03
Events/sec: 1098.57
Success Rate: 100.0%
Success Rate: 100.0%
Success Rate: 100.0%
Avg Latency: 448.058µs
Bottom 10% Avg Latency: 729.464µs
Avg Latency: 189.06µs
P95 Latency: 667.141µs
P95 Latency: 290.433µs
P95 Latency: 20.822884ms
Relay: nostr-rs-relay
----------------------------------------
Status: COMPLETED
Events/sec: 1123.91
Events/sec: 647.62
Events/sec: 1033.64
Success Rate: 100.0%
Success Rate: 100.0%
Success Rate: 100.0%
Avg Latency: 416.753µs
Bottom 10% Avg Latency: 638.318µs
Avg Latency: 185.217µs
P95 Latency: 597.338µs
P95 Latency: 273.191µs
P95 Latency: 22.416221ms
================================================================
DETAILED RESULTS
================================================================
Individual relay reports are available in:
- /reports/run_20250920_101521/khatru-badger_results.txt
- /reports/run_20250920_101521/khatru-sqlite_results.txt
- /reports/run_20250920_101521/next-orly_results.txt
- /reports/run_20250920_101521/nostr-rs-relay_results.txt
- /reports/run_20250920_101521/relayer-basic_results.txt
- /reports/run_20250920_101521/strfry_results.txt
================================================================
BENCHMARK COMPARISON TABLE
================================================================
Relay Status Peak Tput/s Avg Latency Success Rate
---- ------ ----------- ----------- ------------
next-orly OK 1035.42 470.069µs 100.0%
khatru-sqlite OK 1105.61 458.035µs 100.0%
khatru-badger OK 1040.11 454.784µs 100.0%
relayer-basic OK 1104.88 433.89µs 100.0%
strfry OK 1090.49 448.058µs 100.0%
nostr-rs-relay OK 1123.91 416.753µs 100.0%
================================================================
End of Report
================================================================

View File

@@ -0,0 +1,298 @@
Starting Nostr Relay Benchmark
Data Directory: /tmp/benchmark_khatru-badger_8
Events: 10000, Workers: 8, Duration: 1m0s
1758364309339505/tmp/benchmark_khatru-badger_8: All 0 tables opened in 0s
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/levels.go:161 /build/pkg/database/logger.go:57
1758364309340007/tmp/benchmark_khatru-badger_8: Discard stats nextEmptySlot: 0
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/discard.go:55 /build/pkg/database/logger.go:57
1758364309340039/tmp/benchmark_khatru-badger_8: Set nextTxnTs to 0
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/db.go:358 /build/pkg/database/logger.go:57
1758364309340327(*types.Uint32)(0xc000147840)({
value: (uint32) 1
})
/build/pkg/database/migrations.go:65
1758364309340465migrating to version 1... /build/pkg/database/migrations.go:79
=== Starting test round 1/2 ===
RunPeakThroughputTest..
=== Peak Throughput Test ===
Events saved: 10000/10000 (100.0%)
Duration: 9.614321551s
Events/sec: 1040.11
Avg latency: 454.784µs
P90 latency: 596.266µs
P95 latency: 654.637µs
P99 latency: 844.569µs
Bottom 10% Avg latency: 706.219µs
RunBurstPatternTest..
=== Burst Pattern Test ===
Burst completed: 1000 events in 136.444875ms
Burst completed: 1000 events in 141.806497ms
Burst completed: 1000 events in 168.991278ms
Burst completed: 1000 events in 167.713425ms
Burst completed: 1000 events in 162.89698ms
Burst completed: 1000 events in 157.775164ms
Burst completed: 1000 events in 166.476709ms
Burst completed: 1000 events in 161.742632ms
Burst completed: 1000 events in 162.138977ms
Burst completed: 1000 events in 156.657194ms
Burst test completed: 10000 events in 15.07982611s
Events/sec: 663.14
RunMixedReadWriteTest..
=== Mixed Read/Write Test ===
Pre-populating database for read tests...
Mixed test completed: 5000 writes, 5000 reads in 44.903267299s
Combined ops/sec: 222.70
RunQueryTest..
=== Query Test ===
Pre-populating database with 10000 events for query tests...
Query test completed: 3166 queries in 1m0.104195004s
Queries/sec: 52.68
Avg query latency: 125.847553ms
P95 query latency: 148.109766ms
P99 query latency: 212.054697ms
RunConcurrentQueryStoreTest..
=== Concurrent Query/Store Test ===
Pre-populating database with 5000 events for concurrent query/store test...
Concurrent test completed: 11366 operations (1366 queries, 10000 writes) in 1m0.127232573s
Operations/sec: 189.03
Avg latency: 16.671438ms
Avg query latency: 134.993072ms
Avg write latency: 508.703µs
P95 latency: 133.755996ms
P99 latency: 152.790563ms
Pausing 10s before next round...
=== Test round completed ===
=== Starting test round 2/2 ===
RunPeakThroughputTest..
=== Peak Throughput Test ===
Events saved: 10000/10000 (100.0%)
Duration: 9.384548186s
Events/sec: 1065.58
Avg latency: 566.375µs
P90 latency: 738.377µs
P95 latency: 839.679µs
P99 latency: 1.131084ms
Bottom 10% Avg latency: 1.312791ms
RunBurstPatternTest..
=== Burst Pattern Test ===
Burst completed: 1000 events in 166.832259ms
Burst completed: 1000 events in 175.061575ms
Burst completed: 1000 events in 168.897493ms
Burst completed: 1000 events in 167.584171ms
Burst completed: 1000 events in 178.212526ms
Burst completed: 1000 events in 202.208945ms
Burst completed: 1000 events in 154.130024ms
Burst completed: 1000 events in 168.817721ms
Burst completed: 1000 events in 153.032223ms
Burst completed: 1000 events in 154.799008ms
Burst test completed: 10000 events in 15.449161726s
Events/sec: 647.28
RunMixedReadWriteTest..
=== Mixed Read/Write Test ===
Pre-populating database for read tests...
Mixed test completed: 5000 writes, 4582 reads in 1m0.037041762s
Combined ops/sec: 159.60
RunQueryTest..
=== Query Test ===
Pre-populating database with 10000 events for query tests...
Query test completed: 959 queries in 1m0.42440735s
Queries/sec: 15.87
Avg query latency: 418.846875ms
P95 query latency: 473.089327ms
P99 query latency: 650.467474ms
RunConcurrentQueryStoreTest..
=== Concurrent Query/Store Test ===
Pre-populating database with 5000 events for concurrent query/store test...
Concurrent test completed: 10484 operations (484 queries, 10000 writes) in 1m0.283590079s
Operations/sec: 173.91
Avg latency: 17.921964ms
Avg query latency: 381.041592ms
Avg write latency: 346.974µs
P95 latency: 1.269749ms
P99 latency: 399.015222ms
=== Test round completed ===
================================================================================
BENCHMARK REPORT
================================================================================
Test: Peak Throughput
Duration: 9.614321551s
Total Events: 10000
Events/sec: 1040.11
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 118 MB
Avg Latency: 454.784µs
P90 Latency: 596.266µs
P95 Latency: 654.637µs
P99 Latency: 844.569µs
Bottom 10% Avg Latency: 706.219µs
----------------------------------------
Test: Burst Pattern
Duration: 15.07982611s
Total Events: 10000
Events/sec: 663.14
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 162 MB
Avg Latency: 193.914µs
P90 Latency: 255.617µs
P95 Latency: 296.525µs
P99 Latency: 451.81µs
Bottom 10% Avg Latency: 343.222µs
----------------------------------------
Test: Mixed Read/Write
Duration: 44.903267299s
Total Events: 10000
Events/sec: 222.70
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 121 MB
Avg Latency: 9.145633ms
P90 Latency: 19.946513ms
P95 Latency: 21.642655ms
P99 Latency: 23.951572ms
Bottom 10% Avg Latency: 21.861602ms
----------------------------------------
Test: Query Performance
Duration: 1m0.104195004s
Total Events: 3166
Events/sec: 52.68
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 188 MB
Avg Latency: 125.847553ms
P90 Latency: 140.664966ms
P95 Latency: 148.109766ms
P99 Latency: 212.054697ms
Bottom 10% Avg Latency: 164.089129ms
----------------------------------------
Test: Concurrent Query/Store
Duration: 1m0.127232573s
Total Events: 11366
Events/sec: 189.03
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 112 MB
Avg Latency: 16.671438ms
P90 Latency: 122.627849ms
P95 Latency: 133.755996ms
P99 Latency: 152.790563ms
Bottom 10% Avg Latency: 138.087104ms
----------------------------------------
Test: Peak Throughput
Duration: 9.384548186s
Total Events: 10000
Events/sec: 1065.58
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 1441 MB
Avg Latency: 566.375µs
P90 Latency: 738.377µs
P95 Latency: 839.679µs
P99 Latency: 1.131084ms
Bottom 10% Avg Latency: 1.312791ms
----------------------------------------
Test: Burst Pattern
Duration: 15.449161726s
Total Events: 10000
Events/sec: 647.28
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 165 MB
Avg Latency: 186.353µs
P90 Latency: 243.413µs
P95 Latency: 283.06µs
P99 Latency: 440.76µs
Bottom 10% Avg Latency: 324.151µs
----------------------------------------
Test: Mixed Read/Write
Duration: 1m0.037041762s
Total Events: 9582
Events/sec: 159.60
Success Rate: 95.8%
Concurrent Workers: 8
Memory Used: 138 MB
Avg Latency: 16.358228ms
P90 Latency: 37.654373ms
P95 Latency: 40.578604ms
P99 Latency: 46.331181ms
Bottom 10% Avg Latency: 41.76124ms
----------------------------------------
Test: Query Performance
Duration: 1m0.42440735s
Total Events: 959
Events/sec: 15.87
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 110 MB
Avg Latency: 418.846875ms
P90 Latency: 448.809017ms
P95 Latency: 473.089327ms
P99 Latency: 650.467474ms
Bottom 10% Avg Latency: 518.112626ms
----------------------------------------
Test: Concurrent Query/Store
Duration: 1m0.283590079s
Total Events: 10484
Events/sec: 173.91
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 205 MB
Avg Latency: 17.921964ms
P90 Latency: 582.319µs
P95 Latency: 1.269749ms
P99 Latency: 399.015222ms
Bottom 10% Avg Latency: 176.257001ms
----------------------------------------
Report saved to: /tmp/benchmark_khatru-badger_8/benchmark_report.txt
AsciiDoc report saved to: /tmp/benchmark_khatru-badger_8/benchmark_report.adoc
1758364794792663/tmp/benchmark_khatru-badger_8: Lifetime L0 stalled for: 0s
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/db.go:536 /build/pkg/database/logger.go:57
1758364796617126/tmp/benchmark_khatru-badger_8:
Level 0 [ ]: NumTables: 00. Size: 0 B of 0 B. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 64 MiB
Level 1 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 2 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 3 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 4 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 5 [B]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 6 [ ]: NumTables: 04. Size: 87 MiB of 87 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 4.0 MiB
Level Done
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/db.go:615 /build/pkg/database/logger.go:57
1758364796621659/tmp/benchmark_khatru-badger_8: database closed /build/pkg/database/database.go:134
RELAY_NAME: khatru-badger
RELAY_URL: ws://khatru-badger:3334
TEST_TIMESTAMP: 2025-09-20T10:39:56+00:00
BENCHMARK_CONFIG:
Events: 10000
Workers: 8
Duration: 60s

View File

@@ -0,0 +1,298 @@
Starting Nostr Relay Benchmark
Data Directory: /tmp/benchmark_khatru-sqlite_8
Events: 10000, Workers: 8, Duration: 1m0s
1758363814412229/tmp/benchmark_khatru-sqlite_8: All 0 tables opened in 0s
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/levels.go:161 /build/pkg/database/logger.go:57
1758363814412803/tmp/benchmark_khatru-sqlite_8: Discard stats nextEmptySlot: 0
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/discard.go:55 /build/pkg/database/logger.go:57
1758363814412840/tmp/benchmark_khatru-sqlite_8: Set nextTxnTs to 0
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/db.go:358 /build/pkg/database/logger.go:57
1758363814413123(*types.Uint32)(0xc0001ea00c)({
value: (uint32) 1
})
/build/pkg/database/migrations.go:65
1758363814413200migrating to version 1... /build/pkg/database/migrations.go:79
=== Starting test round 1/2 ===
RunPeakThroughputTest..
=== Peak Throughput Test ===
Events saved: 10000/10000 (100.0%)
Duration: 9.044789549s
Events/sec: 1105.61
Avg latency: 458.035µs
P90 latency: 601.736µs
P95 latency: 660.608µs
P99 latency: 844.108µs
Bottom 10% Avg latency: 702.193µs
RunBurstPatternTest..
=== Burst Pattern Test ===
Burst completed: 1000 events in 146.610877ms
Burst completed: 1000 events in 179.229665ms
Burst completed: 1000 events in 157.096919ms
Burst completed: 1000 events in 164.796374ms
Burst completed: 1000 events in 188.464354ms
Burst completed: 1000 events in 196.529596ms
Burst completed: 1000 events in 169.425581ms
Burst completed: 1000 events in 147.99354ms
Burst completed: 1000 events in 157.996252ms
Burst completed: 1000 events in 167.299262ms
Burst test completed: 10000 events in 16.003207139s
Events/sec: 624.87
RunMixedReadWriteTest..
=== Mixed Read/Write Test ===
Pre-populating database for read tests...
Mixed test completed: 5000 writes, 5000 reads in 46.924555793s
Combined ops/sec: 213.11
RunQueryTest..
=== Query Test ===
Pre-populating database with 10000 events for query tests...
Query test completed: 3052 queries in 1m0.102264s
Queries/sec: 50.78
Avg query latency: 128.464192ms
P95 query latency: 148.086431ms
P99 query latency: 219.275394ms
RunConcurrentQueryStoreTest..
=== Concurrent Query/Store Test ===
Pre-populating database with 5000 events for concurrent query/store test...
Concurrent test completed: 11296 operations (1296 queries, 10000 writes) in 1m0.108871986s
Operations/sec: 187.93
Avg latency: 16.71621ms
Avg query latency: 142.320434ms
Avg write latency: 437.903µs
P95 latency: 141.357185ms
P99 latency: 163.50992ms
Pausing 10s before next round...
=== Test round completed ===
=== Starting test round 2/2 ===
RunPeakThroughputTest..
=== Peak Throughput Test ===
Events saved: 10000/10000 (100.0%)
Duration: 9.344884331s
Events/sec: 1070.10
Avg latency: 578.453µs
P90 latency: 742.585µs
P95 latency: 849.679µs
P99 latency: 1.122058ms
Bottom 10% Avg latency: 1.362355ms
RunBurstPatternTest..
=== Burst Pattern Test ===
Burst completed: 1000 events in 185.472655ms
Burst completed: 1000 events in 194.135516ms
Burst completed: 1000 events in 176.056931ms
Burst completed: 1000 events in 161.500315ms
Burst completed: 1000 events in 157.673837ms
Burst completed: 1000 events in 167.130208ms
Burst completed: 1000 events in 182.164655ms
Burst completed: 1000 events in 156.589581ms
Burst completed: 1000 events in 154.419949ms
Burst completed: 1000 events in 158.445927ms
Burst test completed: 10000 events in 15.587711126s
Events/sec: 641.53
RunMixedReadWriteTest..
=== Mixed Read/Write Test ===
Pre-populating database for read tests...
Mixed test completed: 5000 writes, 4405 reads in 1m0.043842569s
Combined ops/sec: 156.64
RunQueryTest..
=== Query Test ===
Pre-populating database with 10000 events for query tests...
Query test completed: 915 queries in 1m0.3452177s
Queries/sec: 15.16
Avg query latency: 435.125142ms
P95 query latency: 520.311963ms
P99 query latency: 618.85899ms
RunConcurrentQueryStoreTest..
=== Concurrent Query/Store Test ===
Pre-populating database with 5000 events for concurrent query/store test...
Concurrent test completed: 10489 operations (489 queries, 10000 writes) in 1m0.27235761s
Operations/sec: 174.03
Avg latency: 18.043774ms
Avg query latency: 379.681531ms
Avg write latency: 359.688µs
P95 latency: 1.316628ms
P99 latency: 400.223248ms
=== Test round completed ===
================================================================================
BENCHMARK REPORT
================================================================================
Test: Peak Throughput
Duration: 9.044789549s
Total Events: 10000
Events/sec: 1105.61
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 144 MB
Avg Latency: 458.035µs
P90 Latency: 601.736µs
P95 Latency: 660.608µs
P99 Latency: 844.108µs
Bottom 10% Avg Latency: 702.193µs
----------------------------------------
Test: Burst Pattern
Duration: 16.003207139s
Total Events: 10000
Events/sec: 624.87
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 89 MB
Avg Latency: 193.997µs
P90 Latency: 261.969µs
P95 Latency: 302.666µs
P99 Latency: 431.933µs
Bottom 10% Avg Latency: 334.383µs
----------------------------------------
Test: Mixed Read/Write
Duration: 46.924555793s
Total Events: 10000
Events/sec: 213.11
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 96 MB
Avg Latency: 9.781737ms
P90 Latency: 21.91971ms
P95 Latency: 23.653412ms
P99 Latency: 27.511972ms
Bottom 10% Avg Latency: 24.396695ms
----------------------------------------
Test: Query Performance
Duration: 1m0.102264s
Total Events: 3052
Events/sec: 50.78
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 209 MB
Avg Latency: 128.464192ms
P90 Latency: 142.195039ms
P95 Latency: 148.086431ms
P99 Latency: 219.275394ms
Bottom 10% Avg Latency: 162.874217ms
----------------------------------------
Test: Concurrent Query/Store
Duration: 1m0.108871986s
Total Events: 11296
Events/sec: 187.93
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 159 MB
Avg Latency: 16.71621ms
P90 Latency: 127.287246ms
P95 Latency: 141.357185ms
P99 Latency: 163.50992ms
Bottom 10% Avg Latency: 145.199189ms
----------------------------------------
Test: Peak Throughput
Duration: 9.344884331s
Total Events: 10000
Events/sec: 1070.10
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 1441 MB
Avg Latency: 578.453µs
P90 Latency: 742.585µs
P95 Latency: 849.679µs
P99 Latency: 1.122058ms
Bottom 10% Avg Latency: 1.362355ms
----------------------------------------
Test: Burst Pattern
Duration: 15.587711126s
Total Events: 10000
Events/sec: 641.53
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 141 MB
Avg Latency: 190.235µs
P90 Latency: 254.795µs
P95 Latency: 290.563µs
P99 Latency: 437.323µs
Bottom 10% Avg Latency: 328.752µs
----------------------------------------
Test: Mixed Read/Write
Duration: 1m0.043842569s
Total Events: 9405
Events/sec: 156.64
Success Rate: 94.0%
Concurrent Workers: 8
Memory Used: 105 MB
Avg Latency: 16.852438ms
P90 Latency: 39.677855ms
P95 Latency: 42.553634ms
P99 Latency: 48.262077ms
Bottom 10% Avg Latency: 43.994063ms
----------------------------------------
Test: Query Performance
Duration: 1m0.3452177s
Total Events: 915
Events/sec: 15.16
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 157 MB
Avg Latency: 435.125142ms
P90 Latency: 482.304439ms
P95 Latency: 520.311963ms
P99 Latency: 618.85899ms
Bottom 10% Avg Latency: 545.670939ms
----------------------------------------
Test: Concurrent Query/Store
Duration: 1m0.27235761s
Total Events: 10489
Events/sec: 174.03
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 132 MB
Avg Latency: 18.043774ms
P90 Latency: 583.962µs
P95 Latency: 1.316628ms
P99 Latency: 400.223248ms
Bottom 10% Avg Latency: 177.440946ms
----------------------------------------
Report saved to: /tmp/benchmark_khatru-sqlite_8/benchmark_report.txt
AsciiDoc report saved to: /tmp/benchmark_khatru-sqlite_8/benchmark_report.adoc
1758364302230610/tmp/benchmark_khatru-sqlite_8: Lifetime L0 stalled for: 0s
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/db.go:536 /build/pkg/database/logger.go:57
1758364304057942/tmp/benchmark_khatru-sqlite_8:
Level 0 [ ]: NumTables: 00. Size: 0 B of 0 B. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 64 MiB
Level 1 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 2 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 3 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 4 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 5 [B]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 6 [ ]: NumTables: 04. Size: 87 MiB of 87 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 4.0 MiB
Level Done
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/db.go:615 /build/pkg/database/logger.go:57
1758364304063521/tmp/benchmark_khatru-sqlite_8: database closed /build/pkg/database/database.go:134
RELAY_NAME: khatru-sqlite
RELAY_URL: ws://khatru-sqlite:3334
TEST_TIMESTAMP: 2025-09-20T10:31:44+00:00
BENCHMARK_CONFIG:
Events: 10000
Workers: 8
Duration: 60s

View File

@@ -0,0 +1,298 @@
Starting Nostr Relay Benchmark
Data Directory: /tmp/benchmark_next-orly_8
Events: 10000, Workers: 8, Duration: 1m0s
1758363321263384/tmp/benchmark_next-orly_8: All 0 tables opened in 0s
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/levels.go:161 /build/pkg/database/logger.go:57
1758363321263864/tmp/benchmark_next-orly_8: Discard stats nextEmptySlot: 0
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/discard.go:55 /build/pkg/database/logger.go:57
1758363321263887/tmp/benchmark_next-orly_8: Set nextTxnTs to 0
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/db.go:358 /build/pkg/database/logger.go:57
1758363321264128(*types.Uint32)(0xc0001f7ffc)({
value: (uint32) 1
})
/build/pkg/database/migrations.go:65
1758363321264177migrating to version 1... /build/pkg/database/migrations.go:79
=== Starting test round 1/2 ===
RunPeakThroughputTest..
=== Peak Throughput Test ===
Events saved: 10000/10000 (100.0%)
Duration: 9.657904043s
Events/sec: 1035.42
Avg latency: 470.069µs
P90 latency: 628.167µs
P95 latency: 693.101µs
P99 latency: 922.357µs
Bottom 10% Avg latency: 750.491µs
RunBurstPatternTest..
=== Burst Pattern Test ===
Burst completed: 1000 events in 175.034134ms
Burst completed: 1000 events in 150.401771ms
Burst completed: 1000 events in 168.992305ms
Burst completed: 1000 events in 179.447581ms
Burst completed: 1000 events in 165.602457ms
Burst completed: 1000 events in 178.649561ms
Burst completed: 1000 events in 195.002303ms
Burst completed: 1000 events in 168.970954ms
Burst completed: 1000 events in 150.818413ms
Burst completed: 1000 events in 185.285662ms
Burst test completed: 10000 events in 15.169978801s
Events/sec: 659.20
RunMixedReadWriteTest..
=== Mixed Read/Write Test ===
Pre-populating database for read tests...
Mixed test completed: 5000 writes, 5000 reads in 45.597478865s
Combined ops/sec: 219.31
RunQueryTest..
=== Query Test ===
Pre-populating database with 10000 events for query tests...
Query test completed: 3151 queries in 1m0.067849757s
Queries/sec: 52.46
Avg query latency: 126.38548ms
P95 query latency: 149.976367ms
P99 query latency: 205.807461ms
RunConcurrentQueryStoreTest..
=== Concurrent Query/Store Test ===
Pre-populating database with 5000 events for concurrent query/store test...
Concurrent test completed: 11325 operations (1325 queries, 10000 writes) in 1m0.081967157s
Operations/sec: 188.49
Avg latency: 16.694154ms
Avg query latency: 139.524748ms
Avg write latency: 419.1µs
P95 latency: 138.688202ms
P99 latency: 158.824742ms
Pausing 10s before next round...
=== Test round completed ===
=== Starting test round 2/2 ===
RunPeakThroughputTest..
=== Peak Throughput Test ===
Events saved: 10000/10000 (100.0%)
Duration: 9.136097148s
Events/sec: 1094.56
Avg latency: 510.7µs
P90 latency: 636.763µs
P95 latency: 705.564µs
P99 latency: 922.777µs
Bottom 10% Avg latency: 1.094965ms
RunBurstPatternTest..
=== Burst Pattern Test ===
Burst completed: 1000 events in 176.337148ms
Burst completed: 1000 events in 177.351251ms
Burst completed: 1000 events in 181.515292ms
Burst completed: 1000 events in 164.043866ms
Burst completed: 1000 events in 152.697196ms
Burst completed: 1000 events in 144.231922ms
Burst completed: 1000 events in 162.606659ms
Burst completed: 1000 events in 137.485182ms
Burst completed: 1000 events in 163.19487ms
Burst completed: 1000 events in 147.900339ms
Burst test completed: 10000 events in 15.514130113s
Events/sec: 644.57
RunMixedReadWriteTest..
=== Mixed Read/Write Test ===
Pre-populating database for read tests...
Mixed test completed: 5000 writes, 4489 reads in 1m0.036174989s
Combined ops/sec: 158.05
RunQueryTest..
=== Query Test ===
Pre-populating database with 10000 events for query tests...
Query test completed: 900 queries in 1m0.304636826s
Queries/sec: 14.92
Avg query latency: 444.57989ms
P95 query latency: 547.598358ms
P99 query latency: 660.926147ms
RunConcurrentQueryStoreTest..
=== Concurrent Query/Store Test ===
Pre-populating database with 5000 events for concurrent query/store test...
Concurrent test completed: 10462 operations (462 queries, 10000 writes) in 1m0.362856212s
Operations/sec: 173.32
Avg latency: 17.808607ms
Avg query latency: 395.594177ms
Avg write latency: 354.914µs
P95 latency: 1.221657ms
P99 latency: 411.642669ms
=== Test round completed ===
================================================================================
BENCHMARK REPORT
================================================================================
Test: Peak Throughput
Duration: 9.657904043s
Total Events: 10000
Events/sec: 1035.42
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 144 MB
Avg Latency: 470.069µs
P90 Latency: 628.167µs
P95 Latency: 693.101µs
P99 Latency: 922.357µs
Bottom 10% Avg Latency: 750.491µs
----------------------------------------
Test: Burst Pattern
Duration: 15.169978801s
Total Events: 10000
Events/sec: 659.20
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 135 MB
Avg Latency: 190.573µs
P90 Latency: 252.701µs
P95 Latency: 289.761µs
P99 Latency: 408.147µs
Bottom 10% Avg Latency: 316.797µs
----------------------------------------
Test: Mixed Read/Write
Duration: 45.597478865s
Total Events: 10000
Events/sec: 219.31
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 119 MB
Avg Latency: 9.381158ms
P90 Latency: 20.487026ms
P95 Latency: 22.450848ms
P99 Latency: 24.696325ms
Bottom 10% Avg Latency: 22.632933ms
----------------------------------------
Test: Query Performance
Duration: 1m0.067849757s
Total Events: 3151
Events/sec: 52.46
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 145 MB
Avg Latency: 126.38548ms
P90 Latency: 142.39268ms
P95 Latency: 149.976367ms
P99 Latency: 205.807461ms
Bottom 10% Avg Latency: 162.636454ms
----------------------------------------
Test: Concurrent Query/Store
Duration: 1m0.081967157s
Total Events: 11325
Events/sec: 188.49
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 194 MB
Avg Latency: 16.694154ms
P90 Latency: 125.314618ms
P95 Latency: 138.688202ms
P99 Latency: 158.824742ms
Bottom 10% Avg Latency: 142.699977ms
----------------------------------------
Test: Peak Throughput
Duration: 9.136097148s
Total Events: 10000
Events/sec: 1094.56
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 1441 MB
Avg Latency: 510.7µs
P90 Latency: 636.763µs
P95 Latency: 705.564µs
P99 Latency: 922.777µs
Bottom 10% Avg Latency: 1.094965ms
----------------------------------------
Test: Burst Pattern
Duration: 15.514130113s
Total Events: 10000
Events/sec: 644.57
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 138 MB
Avg Latency: 230.062µs
P90 Latency: 316.624µs
P95 Latency: 389.882µs
P99 Latency: 859.548µs
Bottom 10% Avg Latency: 529.836µs
----------------------------------------
Test: Mixed Read/Write
Duration: 1m0.036174989s
Total Events: 9489
Events/sec: 158.05
Success Rate: 94.9%
Concurrent Workers: 8
Memory Used: 182 MB
Avg Latency: 16.56372ms
P90 Latency: 38.24931ms
P95 Latency: 41.187306ms
P99 Latency: 46.02529ms
Bottom 10% Avg Latency: 42.131189ms
----------------------------------------
Test: Query Performance
Duration: 1m0.304636826s
Total Events: 900
Events/sec: 14.92
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 141 MB
Avg Latency: 444.57989ms
P90 Latency: 490.730651ms
P95 Latency: 547.598358ms
P99 Latency: 660.926147ms
Bottom 10% Avg Latency: 563.628707ms
----------------------------------------
Test: Concurrent Query/Store
Duration: 1m0.362856212s
Total Events: 10462
Events/sec: 173.32
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 152 MB
Avg Latency: 17.808607ms
P90 Latency: 631.703µs
P95 Latency: 1.221657ms
P99 Latency: 411.642669ms
Bottom 10% Avg Latency: 175.052418ms
----------------------------------------
Report saved to: /tmp/benchmark_next-orly_8/benchmark_report.txt
AsciiDoc report saved to: /tmp/benchmark_next-orly_8/benchmark_report.adoc
1758363807245770/tmp/benchmark_next-orly_8: Lifetime L0 stalled for: 0s
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/db.go:536 /build/pkg/database/logger.go:57
1758363809118416/tmp/benchmark_next-orly_8:
Level 0 [ ]: NumTables: 00. Size: 0 B of 0 B. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 64 MiB
Level 1 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 2 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 3 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 4 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 5 [B]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 6 [ ]: NumTables: 04. Size: 87 MiB of 87 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 4.0 MiB
Level Done
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/db.go:615 /build/pkg/database/logger.go:57
1758363809123697/tmp/benchmark_next-orly_8: database closed /build/pkg/database/database.go:134
RELAY_NAME: next-orly
RELAY_URL: ws://next-orly:8080
TEST_TIMESTAMP: 2025-09-20T10:23:29+00:00
BENCHMARK_CONFIG:
Events: 10000
Workers: 8
Duration: 60s

View File

@@ -0,0 +1,298 @@
Starting Nostr Relay Benchmark
Data Directory: /tmp/benchmark_nostr-rs-relay_8
Events: 10000, Workers: 8, Duration: 1m0s
1758365785928076/tmp/benchmark_nostr-rs-relay_8: All 0 tables opened in 0s
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/levels.go:161 /build/pkg/database/logger.go:57
1758365785929028/tmp/benchmark_nostr-rs-relay_8: Discard stats nextEmptySlot: 0
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/discard.go:55 /build/pkg/database/logger.go:57
1758365785929097/tmp/benchmark_nostr-rs-relay_8: Set nextTxnTs to 0
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/db.go:358 /build/pkg/database/logger.go:57
1758365785929509(*types.Uint32)(0xc0001c820c)({
value: (uint32) 1
})
/build/pkg/database/migrations.go:65
1758365785929573migrating to version 1... /build/pkg/database/migrations.go:79
=== Starting test round 1/2 ===
RunPeakThroughputTest..
=== Peak Throughput Test ===
Events saved: 10000/10000 (100.0%)
Duration: 8.897492256s
Events/sec: 1123.91
Avg latency: 416.753µs
P90 latency: 546.351µs
P95 latency: 597.338µs
P99 latency: 760.549µs
Bottom 10% Avg latency: 638.318µs
RunBurstPatternTest..
=== Burst Pattern Test ===
Burst completed: 1000 events in 158.263016ms
Burst completed: 1000 events in 181.558983ms
Burst completed: 1000 events in 155.219861ms
Burst completed: 1000 events in 183.834156ms
Burst completed: 1000 events in 192.398437ms
Burst completed: 1000 events in 176.450074ms
Burst completed: 1000 events in 175.050138ms
Burst completed: 1000 events in 178.883047ms
Burst completed: 1000 events in 180.74321ms
Burst completed: 1000 events in 169.39146ms
Burst test completed: 10000 events in 15.441062872s
Events/sec: 647.62
RunMixedReadWriteTest..
=== Mixed Read/Write Test ===
Pre-populating database for read tests...
Mixed test completed: 5000 writes, 5000 reads in 45.847091984s
Combined ops/sec: 218.12
RunQueryTest..
=== Query Test ===
Pre-populating database with 10000 events for query tests...
Query test completed: 3229 queries in 1m0.085047549s
Queries/sec: 53.74
Avg query latency: 123.209617ms
P95 query latency: 141.745618ms
P99 query latency: 154.527843ms
RunConcurrentQueryStoreTest..
=== Concurrent Query/Store Test ===
Pre-populating database with 5000 events for concurrent query/store test...
Concurrent test completed: 11298 operations (1298 queries, 10000 writes) in 1m0.096751583s
Operations/sec: 188.00
Avg latency: 16.447175ms
Avg query latency: 139.791065ms
Avg write latency: 437.138µs
P95 latency: 137.879538ms
P99 latency: 162.020385ms
Pausing 10s before next round...
=== Test round completed ===
=== Starting test round 2/2 ===
RunPeakThroughputTest..
=== Peak Throughput Test ===
Events saved: 10000/10000 (100.0%)
Duration: 9.674593819s
Events/sec: 1033.64
Avg latency: 541.545µs
P90 latency: 693.862µs
P95 latency: 775.757µs
P99 latency: 1.05005ms
Bottom 10% Avg latency: 1.219386ms
RunBurstPatternTest..
=== Burst Pattern Test ===
Burst completed: 1000 events in 168.056064ms
Burst completed: 1000 events in 159.819647ms
Burst completed: 1000 events in 147.500264ms
Burst completed: 1000 events in 159.150392ms
Burst completed: 1000 events in 149.954829ms
Burst completed: 1000 events in 138.082938ms
Burst completed: 1000 events in 157.234213ms
Burst completed: 1000 events in 158.468955ms
Burst completed: 1000 events in 144.346047ms
Burst completed: 1000 events in 154.930576ms
Burst test completed: 10000 events in 15.646785427s
Events/sec: 639.11
RunMixedReadWriteTest..
=== Mixed Read/Write Test ===
Pre-populating database for read tests...
Mixed test completed: 5000 writes, 4415 reads in 1m0.02899167s
Combined ops/sec: 156.84
RunQueryTest..
=== Query Test ===
Pre-populating database with 10000 events for query tests...
Query test completed: 890 queries in 1m0.279192867s
Queries/sec: 14.76
Avg query latency: 448.809547ms
P95 query latency: 607.28509ms
P99 query latency: 786.387053ms
RunConcurrentQueryStoreTest..
=== Concurrent Query/Store Test ===
Pre-populating database with 5000 events for concurrent query/store test...
Concurrent test completed: 10469 operations (469 queries, 10000 writes) in 1m0.190785048s
Operations/sec: 173.93
Avg latency: 17.73903ms
Avg query latency: 388.59336ms
Avg write latency: 345.962µs
P95 latency: 1.158136ms
P99 latency: 407.947907ms
=== Test round completed ===
================================================================================
BENCHMARK REPORT
================================================================================
Test: Peak Throughput
Duration: 8.897492256s
Total Events: 10000
Events/sec: 1123.91
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 132 MB
Avg Latency: 416.753µs
P90 Latency: 546.351µs
P95 Latency: 597.338µs
P99 Latency: 760.549µs
Bottom 10% Avg Latency: 638.318µs
----------------------------------------
Test: Burst Pattern
Duration: 15.441062872s
Total Events: 10000
Events/sec: 647.62
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 104 MB
Avg Latency: 185.217µs
P90 Latency: 241.64µs
P95 Latency: 273.191µs
P99 Latency: 412.897µs
Bottom 10% Avg Latency: 306.752µs
----------------------------------------
Test: Mixed Read/Write
Duration: 45.847091984s
Total Events: 10000
Events/sec: 218.12
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 96 MB
Avg Latency: 9.446215ms
P90 Latency: 20.522135ms
P95 Latency: 22.416221ms
P99 Latency: 24.696283ms
Bottom 10% Avg Latency: 22.59535ms
----------------------------------------
Test: Query Performance
Duration: 1m0.085047549s
Total Events: 3229
Events/sec: 53.74
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 175 MB
Avg Latency: 123.209617ms
P90 Latency: 137.629898ms
P95 Latency: 141.745618ms
P99 Latency: 154.527843ms
Bottom 10% Avg Latency: 145.245967ms
----------------------------------------
Test: Concurrent Query/Store
Duration: 1m0.096751583s
Total Events: 11298
Events/sec: 188.00
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 181 MB
Avg Latency: 16.447175ms
P90 Latency: 123.920421ms
P95 Latency: 137.879538ms
P99 Latency: 162.020385ms
Bottom 10% Avg Latency: 142.654147ms
----------------------------------------
Test: Peak Throughput
Duration: 9.674593819s
Total Events: 10000
Events/sec: 1033.64
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 1441 MB
Avg Latency: 541.545µs
P90 Latency: 693.862µs
P95 Latency: 775.757µs
P99 Latency: 1.05005ms
Bottom 10% Avg Latency: 1.219386ms
----------------------------------------
Test: Burst Pattern
Duration: 15.646785427s
Total Events: 10000
Events/sec: 639.11
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 146 MB
Avg Latency: 331.896µs
P90 Latency: 520.511µs
P95 Latency: 864.486µs
P99 Latency: 2.251087ms
Bottom 10% Avg Latency: 1.16922ms
----------------------------------------
Test: Mixed Read/Write
Duration: 1m0.02899167s
Total Events: 9415
Events/sec: 156.84
Success Rate: 94.2%
Concurrent Workers: 8
Memory Used: 147 MB
Avg Latency: 16.723365ms
P90 Latency: 39.058801ms
P95 Latency: 41.904891ms
P99 Latency: 47.156263ms
Bottom 10% Avg Latency: 42.800456ms
----------------------------------------
Test: Query Performance
Duration: 1m0.279192867s
Total Events: 890
Events/sec: 14.76
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 156 MB
Avg Latency: 448.809547ms
P90 Latency: 524.488485ms
P95 Latency: 607.28509ms
P99 Latency: 786.387053ms
Bottom 10% Avg Latency: 634.016595ms
----------------------------------------
Test: Concurrent Query/Store
Duration: 1m0.190785048s
Total Events: 10469
Events/sec: 173.93
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 226 MB
Avg Latency: 17.73903ms
P90 Latency: 561.359µs
P95 Latency: 1.158136ms
P99 Latency: 407.947907ms
Bottom 10% Avg Latency: 174.508065ms
----------------------------------------
Report saved to: /tmp/benchmark_nostr-rs-relay_8/benchmark_report.txt
AsciiDoc report saved to: /tmp/benchmark_nostr-rs-relay_8/benchmark_report.adoc
1758366272164052/tmp/benchmark_nostr-rs-relay_8: Lifetime L0 stalled for: 0s
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/db.go:536 /build/pkg/database/logger.go:57
1758366274030399/tmp/benchmark_nostr-rs-relay_8:
Level 0 [ ]: NumTables: 00. Size: 0 B of 0 B. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 64 MiB
Level 1 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 2 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 3 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 4 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 5 [B]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 6 [ ]: NumTables: 04. Size: 87 MiB of 87 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 4.0 MiB
Level Done
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/db.go:615 /build/pkg/database/logger.go:57
1758366274036413/tmp/benchmark_nostr-rs-relay_8: database closed /build/pkg/database/database.go:134
RELAY_NAME: nostr-rs-relay
RELAY_URL: ws://nostr-rs-relay:8080
TEST_TIMESTAMP: 2025-09-20T11:04:34+00:00
BENCHMARK_CONFIG:
Events: 10000
Workers: 8
Duration: 60s

View File

@@ -0,0 +1,298 @@
Starting Nostr Relay Benchmark
Data Directory: /tmp/benchmark_relayer-basic_8
Events: 10000, Workers: 8, Duration: 1m0s
1758364801895559/tmp/benchmark_relayer-basic_8: All 0 tables opened in 0s
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/levels.go:161 /build/pkg/database/logger.go:57
1758364801896041/tmp/benchmark_relayer-basic_8: Discard stats nextEmptySlot: 0
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/discard.go:55 /build/pkg/database/logger.go:57
1758364801896078/tmp/benchmark_relayer-basic_8: Set nextTxnTs to 0
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/db.go:358 /build/pkg/database/logger.go:57
1758364801896347(*types.Uint32)(0xc0001a801c)({
value: (uint32) 1
})
/build/pkg/database/migrations.go:65
1758364801896400migrating to version 1... /build/pkg/database/migrations.go:79
=== Starting test round 1/2 ===
RunPeakThroughputTest..
=== Peak Throughput Test ===
Events saved: 10000/10000 (100.0%)
Duration: 9.050770003s
Events/sec: 1104.88
Avg latency: 433.89µs
P90 latency: 567.261µs
P95 latency: 617.868µs
P99 latency: 783.593µs
Bottom 10% Avg latency: 653.813µs
RunBurstPatternTest..
=== Burst Pattern Test ===
Burst completed: 1000 events in 183.738134ms
Burst completed: 1000 events in 155.035832ms
Burst completed: 1000 events in 160.066514ms
Burst completed: 1000 events in 183.724238ms
Burst completed: 1000 events in 178.910929ms
Burst completed: 1000 events in 168.905441ms
Burst completed: 1000 events in 172.584809ms
Burst completed: 1000 events in 177.214508ms
Burst completed: 1000 events in 169.921566ms
Burst completed: 1000 events in 162.042488ms
Burst test completed: 10000 events in 15.572250139s
Events/sec: 642.17
RunMixedReadWriteTest..
=== Mixed Read/Write Test ===
Pre-populating database for read tests...
Mixed test completed: 5000 writes, 5000 reads in 44.509677166s
Combined ops/sec: 224.67
RunQueryTest..
=== Query Test ===
Pre-populating database with 10000 events for query tests...
Query test completed: 3253 queries in 1m0.095238426s
Queries/sec: 54.13
Avg query latency: 122.100718ms
P95 query latency: 140.360749ms
P99 query latency: 148.353154ms
RunConcurrentQueryStoreTest..
=== Concurrent Query/Store Test ===
Pre-populating database with 5000 events for concurrent query/store test...
Concurrent test completed: 11408 operations (1408 queries, 10000 writes) in 1m0.117581615s
Operations/sec: 189.76
Avg latency: 16.525268ms
Avg query latency: 130.972853ms
Avg write latency: 411.048µs
P95 latency: 132.130964ms
P99 latency: 146.285305ms
Pausing 10s before next round...
=== Test round completed ===
=== Starting test round 2/2 ===
RunPeakThroughputTest..
=== Peak Throughput Test ===
Events saved: 10000/10000 (100.0%)
Duration: 9.265496879s
Events/sec: 1079.27
Avg latency: 529.266µs
P90 latency: 658.033µs
P95 latency: 732.024µs
P99 latency: 953.285µs
Bottom 10% Avg latency: 1.168714ms
RunBurstPatternTest..
=== Burst Pattern Test ===
Burst completed: 1000 events in 172.300479ms
Burst completed: 1000 events in 149.247397ms
Burst completed: 1000 events in 170.000198ms
Burst completed: 1000 events in 133.786958ms
Burst completed: 1000 events in 172.157036ms
Burst completed: 1000 events in 153.284738ms
Burst completed: 1000 events in 166.711903ms
Burst completed: 1000 events in 170.635427ms
Burst completed: 1000 events in 153.381031ms
Burst completed: 1000 events in 162.125949ms
Burst test completed: 10000 events in 16.674963543s
Events/sec: 599.70
RunMixedReadWriteTest..
=== Mixed Read/Write Test ===
Pre-populating database for read tests...
Mixed test completed: 5000 writes, 4665 reads in 1m0.035358264s
Combined ops/sec: 160.99
RunQueryTest..
=== Query Test ===
Pre-populating database with 10000 events for query tests...
Query test completed: 944 queries in 1m0.383519958s
Queries/sec: 15.63
Avg query latency: 421.75292ms
P95 query latency: 491.340259ms
P99 query latency: 664.614262ms
RunConcurrentQueryStoreTest..
=== Concurrent Query/Store Test ===
Pre-populating database with 5000 events for concurrent query/store test...
Concurrent test completed: 10479 operations (479 queries, 10000 writes) in 1m0.291926697s
Operations/sec: 173.80
Avg latency: 18.049265ms
Avg query latency: 385.864458ms
Avg write latency: 430.918µs
P95 latency: 3.05038ms
P99 latency: 404.540502ms
=== Test round completed ===
================================================================================
BENCHMARK REPORT
================================================================================
Test: Peak Throughput
Duration: 9.050770003s
Total Events: 10000
Events/sec: 1104.88
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 153 MB
Avg Latency: 433.89µs
P90 Latency: 567.261µs
P95 Latency: 617.868µs
P99 Latency: 783.593µs
Bottom 10% Avg Latency: 653.813µs
----------------------------------------
Test: Burst Pattern
Duration: 15.572250139s
Total Events: 10000
Events/sec: 642.17
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 134 MB
Avg Latency: 186.306µs
P90 Latency: 243.995µs
P95 Latency: 279.192µs
P99 Latency: 392.859µs
Bottom 10% Avg Latency: 303.766µs
----------------------------------------
Test: Mixed Read/Write
Duration: 44.509677166s
Total Events: 10000
Events/sec: 224.67
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 163 MB
Avg Latency: 8.892738ms
P90 Latency: 19.406836ms
P95 Latency: 21.247322ms
P99 Latency: 23.452072ms
Bottom 10% Avg Latency: 21.397913ms
----------------------------------------
Test: Query Performance
Duration: 1m0.095238426s
Total Events: 3253
Events/sec: 54.13
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 126 MB
Avg Latency: 122.100718ms
P90 Latency: 136.523661ms
P95 Latency: 140.360749ms
P99 Latency: 148.353154ms
Bottom 10% Avg Latency: 142.067372ms
----------------------------------------
Test: Concurrent Query/Store
Duration: 1m0.117581615s
Total Events: 11408
Events/sec: 189.76
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 149 MB
Avg Latency: 16.525268ms
P90 Latency: 121.696848ms
P95 Latency: 132.130964ms
P99 Latency: 146.285305ms
Bottom 10% Avg Latency: 134.054744ms
----------------------------------------
Test: Peak Throughput
Duration: 9.265496879s
Total Events: 10000
Events/sec: 1079.27
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 1441 MB
Avg Latency: 529.266µs
P90 Latency: 658.033µs
P95 Latency: 732.024µs
P99 Latency: 953.285µs
Bottom 10% Avg Latency: 1.168714ms
----------------------------------------
Test: Burst Pattern
Duration: 16.674963543s
Total Events: 10000
Events/sec: 599.70
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 142 MB
Avg Latency: 264.288µs
P90 Latency: 350.187µs
P95 Latency: 519.139µs
P99 Latency: 1.961326ms
Bottom 10% Avg Latency: 877.366µs
----------------------------------------
Test: Mixed Read/Write
Duration: 1m0.035358264s
Total Events: 9665
Events/sec: 160.99
Success Rate: 96.7%
Concurrent Workers: 8
Memory Used: 151 MB
Avg Latency: 16.019245ms
P90 Latency: 36.340362ms
P95 Latency: 39.113864ms
P99 Latency: 44.271098ms
Bottom 10% Avg Latency: 40.108462ms
----------------------------------------
Test: Query Performance
Duration: 1m0.383519958s
Total Events: 944
Events/sec: 15.63
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 280 MB
Avg Latency: 421.75292ms
P90 Latency: 460.902551ms
P95 Latency: 491.340259ms
P99 Latency: 664.614262ms
Bottom 10% Avg Latency: 538.014725ms
----------------------------------------
Test: Concurrent Query/Store
Duration: 1m0.291926697s
Total Events: 10479
Events/sec: 173.80
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 122 MB
Avg Latency: 18.049265ms
P90 Latency: 843.867µs
P95 Latency: 3.05038ms
P99 Latency: 404.540502ms
Bottom 10% Avg Latency: 177.245211ms
----------------------------------------
Report saved to: /tmp/benchmark_relayer-basic_8/benchmark_report.txt
AsciiDoc report saved to: /tmp/benchmark_relayer-basic_8/benchmark_report.adoc
1758365287933287/tmp/benchmark_relayer-basic_8: Lifetime L0 stalled for: 0s
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/db.go:536 /build/pkg/database/logger.go:57
1758365289807797/tmp/benchmark_relayer-basic_8:
Level 0 [ ]: NumTables: 00. Size: 0 B of 0 B. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 64 MiB
Level 1 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 2 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 3 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 4 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 5 [B]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 6 [ ]: NumTables: 04. Size: 87 MiB of 87 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 4.0 MiB
Level Done
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/db.go:615 /build/pkg/database/logger.go:57
1758365289812921/tmp/benchmark_relayer-basic_8: database closed /build/pkg/database/database.go:134
RELAY_NAME: relayer-basic
RELAY_URL: ws://relayer-basic:7447
TEST_TIMESTAMP: 2025-09-20T10:48:10+00:00
BENCHMARK_CONFIG:
Events: 10000
Workers: 8
Duration: 60s

View File

@@ -0,0 +1,298 @@
Starting Nostr Relay Benchmark
Data Directory: /tmp/benchmark_strfry_8
Events: 10000, Workers: 8, Duration: 1m0s
1758365295110579/tmp/benchmark_strfry_8: All 0 tables opened in 0s
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/levels.go:161 /build/pkg/database/logger.go:57
1758365295111085/tmp/benchmark_strfry_8: Discard stats nextEmptySlot: 0
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/discard.go:55 /build/pkg/database/logger.go:57
1758365295111113/tmp/benchmark_strfry_8: Set nextTxnTs to 0
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/db.go:358 /build/pkg/database/logger.go:57
1758365295111319(*types.Uint32)(0xc000141a3c)({
value: (uint32) 1
})
/build/pkg/database/migrations.go:65
1758365295111354migrating to version 1... /build/pkg/database/migrations.go:79
=== Starting test round 1/2 ===
RunPeakThroughputTest..
=== Peak Throughput Test ===
Events saved: 10000/10000 (100.0%)
Duration: 9.170212358s
Events/sec: 1090.49
Avg latency: 448.058µs
P90 latency: 597.558µs
P95 latency: 667.141µs
P99 latency: 920.784µs
Bottom 10% Avg latency: 729.464µs
RunBurstPatternTest..
=== Burst Pattern Test ===
Burst completed: 1000 events in 172.138862ms
Burst completed: 1000 events in 168.99322ms
Burst completed: 1000 events in 162.213786ms
Burst completed: 1000 events in 161.027417ms
Burst completed: 1000 events in 183.148824ms
Burst completed: 1000 events in 178.152837ms
Burst completed: 1000 events in 158.65623ms
Burst completed: 1000 events in 186.7166ms
Burst completed: 1000 events in 177.202878ms
Burst completed: 1000 events in 182.780071ms
Burst test completed: 10000 events in 15.336760896s
Events/sec: 652.03
RunMixedReadWriteTest..
=== Mixed Read/Write Test ===
Pre-populating database for read tests...
Mixed test completed: 5000 writes, 5000 reads in 44.257468151s
Combined ops/sec: 225.95
RunQueryTest..
=== Query Test ===
Pre-populating database with 10000 events for query tests...
Query test completed: 3002 queries in 1m0.091429487s
Queries/sec: 49.96
Avg query latency: 131.632043ms
P95 query latency: 175.810416ms
P99 query latency: 228.52716ms
RunConcurrentQueryStoreTest..
=== Concurrent Query/Store Test ===
Pre-populating database with 5000 events for concurrent query/store test...
Concurrent test completed: 11308 operations (1308 queries, 10000 writes) in 1m0.111257202s
Operations/sec: 188.12
Avg latency: 16.193707ms
Avg query latency: 137.019852ms
Avg write latency: 389.647µs
P95 latency: 136.70132ms
P99 latency: 156.996779ms
Pausing 10s before next round...
=== Test round completed ===
=== Starting test round 2/2 ===
RunPeakThroughputTest..
=== Peak Throughput Test ===
Events saved: 10000/10000 (100.0%)
Duration: 9.102738s
Events/sec: 1098.57
Avg latency: 493.093µs
P90 latency: 605.684µs
P95 latency: 659.477µs
P99 latency: 826.344µs
Bottom 10% Avg latency: 1.097884ms
RunBurstPatternTest..
=== Burst Pattern Test ===
Burst completed: 1000 events in 178.755916ms
Burst completed: 1000 events in 170.810722ms
Burst completed: 1000 events in 166.730701ms
Burst completed: 1000 events in 172.177576ms
Burst completed: 1000 events in 164.907178ms
Burst completed: 1000 events in 153.267727ms
Burst completed: 1000 events in 157.855743ms
Burst completed: 1000 events in 159.632496ms
Burst completed: 1000 events in 160.802526ms
Burst completed: 1000 events in 178.513954ms
Burst test completed: 10000 events in 15.535933443s
Events/sec: 643.67
RunMixedReadWriteTest..
=== Mixed Read/Write Test ===
Pre-populating database for read tests...
Mixed test completed: 5000 writes, 4550 reads in 1m0.032080518s
Combined ops/sec: 159.08
RunQueryTest..
=== Query Test ===
Pre-populating database with 10000 events for query tests...
Query test completed: 913 queries in 1m0.248877091s
Queries/sec: 15.15
Avg query latency: 436.472206ms
P95 query latency: 493.12732ms
P99 query latency: 623.201275ms
RunConcurrentQueryStoreTest..
=== Concurrent Query/Store Test ===
Pre-populating database with 5000 events for concurrent query/store test...
Concurrent test completed: 10470 operations (470 queries, 10000 writes) in 1m0.293280495s
Operations/sec: 173.65
Avg latency: 18.084009ms
Avg query latency: 395.171481ms
Avg write latency: 360.898µs
P95 latency: 1.338148ms
P99 latency: 413.21015ms
=== Test round completed ===
================================================================================
BENCHMARK REPORT
================================================================================
Test: Peak Throughput
Duration: 9.170212358s
Total Events: 10000
Events/sec: 1090.49
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 108 MB
Avg Latency: 448.058µs
P90 Latency: 597.558µs
P95 Latency: 667.141µs
P99 Latency: 920.784µs
Bottom 10% Avg Latency: 729.464µs
----------------------------------------
Test: Burst Pattern
Duration: 15.336760896s
Total Events: 10000
Events/sec: 652.03
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 123 MB
Avg Latency: 189.06µs
P90 Latency: 248.714µs
P95 Latency: 290.433µs
P99 Latency: 416.924µs
Bottom 10% Avg Latency: 324.174µs
----------------------------------------
Test: Mixed Read/Write
Duration: 44.257468151s
Total Events: 10000
Events/sec: 225.95
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 158 MB
Avg Latency: 8.745534ms
P90 Latency: 18.980294ms
P95 Latency: 20.822884ms
P99 Latency: 23.124918ms
Bottom 10% Avg Latency: 21.006886ms
----------------------------------------
Test: Query Performance
Duration: 1m0.091429487s
Total Events: 3002
Events/sec: 49.96
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 191 MB
Avg Latency: 131.632043ms
P90 Latency: 152.618309ms
P95 Latency: 175.810416ms
P99 Latency: 228.52716ms
Bottom 10% Avg Latency: 186.230874ms
----------------------------------------
Test: Concurrent Query/Store
Duration: 1m0.111257202s
Total Events: 11308
Events/sec: 188.12
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 146 MB
Avg Latency: 16.193707ms
P90 Latency: 122.204256ms
P95 Latency: 136.70132ms
P99 Latency: 156.996779ms
Bottom 10% Avg Latency: 140.031139ms
----------------------------------------
Test: Peak Throughput
Duration: 9.102738s
Total Events: 10000
Events/sec: 1098.57
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 1441 MB
Avg Latency: 493.093µs
P90 Latency: 605.684µs
P95 Latency: 659.477µs
P99 Latency: 826.344µs
Bottom 10% Avg Latency: 1.097884ms
----------------------------------------
Test: Burst Pattern
Duration: 15.535933443s
Total Events: 10000
Events/sec: 643.67
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 130 MB
Avg Latency: 186.177µs
P90 Latency: 243.915µs
P95 Latency: 276.146µs
P99 Latency: 418.787µs
Bottom 10% Avg Latency: 309.015µs
----------------------------------------
Test: Mixed Read/Write
Duration: 1m0.032080518s
Total Events: 9550
Events/sec: 159.08
Success Rate: 95.5%
Concurrent Workers: 8
Memory Used: 115 MB
Avg Latency: 16.401942ms
P90 Latency: 37.575878ms
P95 Latency: 40.323279ms
P99 Latency: 45.453669ms
Bottom 10% Avg Latency: 41.331235ms
----------------------------------------
Test: Query Performance
Duration: 1m0.248877091s
Total Events: 913
Events/sec: 15.15
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 211 MB
Avg Latency: 436.472206ms
P90 Latency: 474.430346ms
P95 Latency: 493.12732ms
P99 Latency: 623.201275ms
Bottom 10% Avg Latency: 523.084076ms
----------------------------------------
Test: Concurrent Query/Store
Duration: 1m0.293280495s
Total Events: 10470
Events/sec: 173.65
Success Rate: 100.0%
Concurrent Workers: 8
Memory Used: 171 MB
Avg Latency: 18.084009ms
P90 Latency: 624.339µs
P95 Latency: 1.338148ms
P99 Latency: 413.21015ms
Bottom 10% Avg Latency: 177.8924ms
----------------------------------------
Report saved to: /tmp/benchmark_strfry_8/benchmark_report.txt
AsciiDoc report saved to: /tmp/benchmark_strfry_8/benchmark_report.adoc
1758365779337138/tmp/benchmark_strfry_8: Lifetime L0 stalled for: 0s
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/db.go:536 /build/pkg/database/logger.go:57
1758365780726692/tmp/benchmark_strfry_8:
Level 0 [ ]: NumTables: 00. Size: 0 B of 0 B. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 64 MiB
Level 1 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 2 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 3 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 4 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 5 [B]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 2.0 MiB
Level 6 [ ]: NumTables: 04. Size: 87 MiB of 87 MiB. Score: 0.00->0.00 StaleData: 0 B Target FileSize: 4.0 MiB
Level Done
/go/pkg/mod/github.com/dgraph-io/badger/v4@v4.8.0/db.go:615 /build/pkg/database/logger.go:57
1758365780732292/tmp/benchmark_strfry_8: database closed /build/pkg/database/database.go:134
RELAY_NAME: strfry
RELAY_URL: ws://strfry:8080
TEST_TIMESTAMP: 2025-09-20T10:56:20+00:00
BENCHMARK_CONFIG:
Events: 10000
Workers: 8
Duration: 60s

634
cmd/stresstest/main.go Normal file
View File

@@ -0,0 +1,634 @@
package main
import (
"bufio"
"bytes"
"context"
"flag"
"fmt"
"math/rand"
"os"
"os/signal"
"runtime"
"strings"
"sync"
"sync/atomic"
"time"
"lol.mleku.dev/log"
"next.orly.dev/pkg/crypto/p256k"
"next.orly.dev/pkg/encoders/envelopes/eventenvelope"
"next.orly.dev/pkg/encoders/event"
"next.orly.dev/pkg/encoders/event/examples"
"next.orly.dev/pkg/encoders/filter"
"next.orly.dev/pkg/encoders/hex"
"next.orly.dev/pkg/encoders/kind"
"next.orly.dev/pkg/encoders/tag"
"next.orly.dev/pkg/encoders/timestamp"
"next.orly.dev/pkg/protocol/ws"
)
// randomHex returns a hex-encoded string of n random bytes (2n hex chars)
func randomHex(n int) string {
b := make([]byte, n)
_, _ = rand.Read(b)
return hex.Enc(b)
}
func makeEvent(rng *rand.Rand, signer *p256k.Signer) (*event.E, error) {
ev := &event.E{
CreatedAt: time.Now().Unix(),
Kind: kind.TextNote.K,
Tags: tag.NewS(),
Content: []byte(fmt.Sprintf("stresstest %d", rng.Int63())),
}
// Random number of p-tags up to 100
nPTags := rng.Intn(101) // 0..100 inclusive
for i := 0; i < nPTags; i++ {
// random 32-byte pubkey in hex (64 chars)
phex := randomHex(32)
ev.Tags.Append(tag.NewFromAny("p", phex))
}
// Sign and verify to ensure pubkey, id and signature are coherent
if err := ev.Sign(signer); err != nil {
return nil, err
}
if ok, err := ev.Verify(); err != nil || !ok {
return nil, fmt.Errorf("event signature verification failed: %v", err)
}
return ev, nil
}
type RelayConn struct {
mu sync.RWMutex
client *ws.Client
url string
}
type CacheIndex struct {
events []*event.E
ids [][]byte
authors [][]byte
times []int64
tags map[byte][][]byte // single-letter tag -> list of values
}
func (rc *RelayConn) Get() *ws.Client {
rc.mu.RLock()
defer rc.mu.RUnlock()
return rc.client
}
func (rc *RelayConn) Reconnect(ctx context.Context) error {
rc.mu.Lock()
defer rc.mu.Unlock()
if rc.client != nil {
_ = rc.client.Close()
}
c, err := ws.RelayConnect(ctx, rc.url)
if err != nil {
return err
}
rc.client = c
return nil
}
// loadCacheAndIndex parses examples.Cache (JSONL of events) and builds an index
func loadCacheAndIndex() (*CacheIndex, error) {
scanner := bufio.NewScanner(bytes.NewReader(examples.Cache))
idx := &CacheIndex{tags: make(map[byte][][]byte)}
for scanner.Scan() {
line := scanner.Bytes()
if len(bytes.TrimSpace(line)) == 0 {
continue
}
ev := event.New()
rem, err := ev.Unmarshal(line)
_ = rem
if err != nil {
// skip malformed lines
continue
}
idx.events = append(idx.events, ev)
// collect fields
if len(ev.ID) > 0 {
idx.ids = append(idx.ids, append([]byte(nil), ev.ID...))
}
if len(ev.Pubkey) > 0 {
idx.authors = append(idx.authors, append([]byte(nil), ev.Pubkey...))
}
idx.times = append(idx.times, ev.CreatedAt)
if ev.Tags != nil {
for _, tg := range *ev.Tags {
if tg == nil || tg.Len() < 2 {
continue
}
k := tg.Key()
if len(k) != 1 {
continue // only single-letter keys per requirement
}
key := k[0]
for _, v := range tg.T[1:] {
idx.tags[key] = append(
idx.tags[key], append([]byte(nil), v...),
)
}
}
}
}
return idx, nil
}
// publishCacheEvents uploads all cache events to the relay using multiple concurrent connections
func publishCacheEvents(
ctx context.Context, relayURL string, idx *CacheIndex,
) (sentCount int) {
numWorkers := runtime.NumCPU()
log.I.F("using %d concurrent connections for cache upload", numWorkers)
// Channel to distribute events to workers
eventChan := make(chan *event.E, len(idx.events))
var totalSent atomic.Int64
// Fill the event channel
for _, ev := range idx.events {
eventChan <- ev
}
close(eventChan)
// Start worker goroutines
var wg sync.WaitGroup
for i := 0; i < numWorkers; i++ {
wg.Add(1)
go func(workerID int) {
defer wg.Done()
// Create separate connection for this worker
client, err := ws.RelayConnect(ctx, relayURL)
if err != nil {
log.E.F("worker %d: failed to connect: %v", workerID, err)
return
}
defer client.Close()
rc := &RelayConn{client: client, url: relayURL}
workerSent := 0
// Process events from the channel
for ev := range eventChan {
select {
case <-ctx.Done():
return
default:
}
// Get client connection
wsClient := rc.Get()
if wsClient == nil {
if err := rc.Reconnect(ctx); err != nil {
log.E.F("worker %d: reconnect failed: %v", workerID, err)
continue
}
wsClient = rc.Get()
}
// Send event without waiting for OK response (fire-and-forget)
envelope := eventenvelope.NewSubmissionWith(ev)
envBytes := envelope.Marshal(nil)
if err := <-wsClient.Write(envBytes); err != nil {
log.E.F("worker %d: write error: %v", workerID, err)
errStr := err.Error()
if strings.Contains(errStr, "connection closed") {
_ = rc.Reconnect(ctx)
}
time.Sleep(50 * time.Millisecond)
continue
}
workerSent++
totalSent.Add(1)
log.T.F("worker %d: sent event %d (total: %d)", workerID, workerSent, totalSent.Load())
// Small delay to prevent overwhelming the relay
select {
case <-time.After(10 * time.Millisecond):
case <-ctx.Done():
return
}
}
log.I.F("worker %d: completed, sent %d events", workerID, workerSent)
}(i)
}
// Wait for all workers to complete
wg.Wait()
return int(totalSent.Load())
}
// buildRandomFilter builds a filter combining random subsets of id, author, timestamp, and a single-letter tag value.
func buildRandomFilter(idx *CacheIndex, rng *rand.Rand, mask int) *filter.F {
// pick a random base event as anchor for fields
i := rng.Intn(len(idx.events))
ev := idx.events[i]
f := filter.New()
// clear defaults we don't set
f.Kinds = kind.NewS() // we don't constrain kinds
// include fields based on mask bits: 1=id, 2=author, 4=timestamp, 8=tag
if mask&1 != 0 {
f.Ids.T = append(f.Ids.T, append([]byte(nil), ev.ID...))
}
if mask&2 != 0 {
f.Authors.T = append(f.Authors.T, append([]byte(nil), ev.Pubkey...))
}
if mask&4 != 0 {
// use a tight window around the event timestamp (exact match)
f.Since = timestamp.FromUnix(ev.CreatedAt)
f.Until = timestamp.FromUnix(ev.CreatedAt)
}
if mask&8 != 0 {
// choose a random single-letter tag from this event if present; fallback to global index
var key byte
var val []byte
chosen := false
if ev.Tags != nil {
for _, tg := range *ev.Tags {
if tg == nil || tg.Len() < 2 {
continue
}
k := tg.Key()
if len(k) == 1 {
key = k[0]
vv := tg.T[1:]
val = vv[rng.Intn(len(vv))]
chosen = true
break
}
}
}
if !chosen && len(idx.tags) > 0 {
// pick a random entry from global tags map
keys := make([]byte, 0, len(idx.tags))
for k := range idx.tags {
keys = append(keys, k)
}
key = keys[rng.Intn(len(keys))]
vals := idx.tags[key]
val = vals[rng.Intn(len(vals))]
}
if key != 0 && len(val) > 0 {
f.Tags.Append(tag.NewFromBytesSlice([]byte{key}, val))
}
}
return f
}
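Cycling the mask from 1 through 15 exercises every non-empty combination of the four criteria. A standalone sketch of the decoding, for illustration only:
package main

import "fmt"

func main() {
	names := []string{"id", "author", "timestamp", "tag"}
	for mask := 1; mask <= 15; mask++ {
		var parts []string
		for bit := 0; bit < 4; bit++ {
			if mask&(1<<bit) != 0 {
				parts = append(parts, names[bit])
			}
		}
		// e.g. mask=5 -> [id timestamp]
		fmt.Printf("mask=%2d -> %v\n", mask, parts)
	}
}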
func publisherWorker(
ctx context.Context, rc *RelayConn, id int, stats *uint64,
) {
// Unique RNG per worker
src := rand.NewSource(time.Now().UnixNano() ^ int64(id<<16))
rng := rand.New(src)
// Generate and reuse signing key per worker
signer := &p256k.Signer{}
if err := signer.Generate(); err != nil {
log.E.F("worker %d: signer generate error: %v", id, err)
return
}
for {
select {
case <-ctx.Done():
return
default:
}
ev, err := makeEvent(rng, signer)
if err != nil {
log.E.F("worker %d: makeEvent error: %v", id, err)
return
}
// Send event without waiting for OK response (fire-and-forget)
client := rc.Get()
if client == nil {
_ = rc.Reconnect(ctx)
continue
}
// Create EVENT envelope and send directly without waiting for OK
envelope := eventenvelope.NewSubmissionWith(ev)
envBytes := envelope.Marshal(nil)
if err := <-client.Write(envBytes); err != nil {
log.E.F("worker %d: write error: %v", id, err)
errStr := err.Error()
if strings.Contains(errStr, "connection closed") {
for attempt := 0; attempt < 5; attempt++ {
if ctx.Err() != nil {
return
}
if err := rc.Reconnect(ctx); err == nil {
log.I.F("worker %d: reconnected to %s", id, rc.url)
break
}
select {
case <-time.After(200 * time.Millisecond):
case <-ctx.Done():
return
}
}
}
// back off briefly on error to avoid a tight loop if the relay misbehaves
select {
case <-time.After(100 * time.Millisecond):
case <-ctx.Done():
return
}
continue
}
atomic.AddUint64(stats, 1)
// Randomly fluctuate pacing: small random sleep 0..50ms plus occasional longer jitter
sleep := time.Duration(rng.Intn(50)) * time.Millisecond
if rng.Intn(10) == 0 { // 10% chance add extra 100..400ms
sleep += time.Duration(100+rng.Intn(300)) * time.Millisecond
}
select {
case <-time.After(sleep):
case <-ctx.Done():
return
}
}
}
func queryWorker(
ctx context.Context, rc *RelayConn, idx *CacheIndex, id int,
queries, results *uint64, subTimeout time.Duration,
minInterval, maxInterval time.Duration,
) {
rng := rand.New(rand.NewSource(time.Now().UnixNano() ^ int64(id<<24)))
mask := 1
for {
select {
case <-ctx.Done():
return
default:
}
if len(idx.events) == 0 {
time.Sleep(200 * time.Millisecond)
continue
}
f := buildRandomFilter(idx, rng, mask)
mask++
if mask > 15 { // all combinations of 4 criteria (excluding 0)
mask = 1
}
client := rc.Get()
if client == nil {
_ = rc.Reconnect(ctx)
continue
}
ff := filter.S{f}
sCtx, cancel := context.WithTimeout(ctx, subTimeout)
sub, err := client.Subscribe(
sCtx, &ff, ws.WithLabel("stresstest-query"),
)
if err != nil {
cancel()
// reconnect on connection issues
errStr := err.Error()
if strings.Contains(errStr, "connection closed") {
_ = rc.Reconnect(ctx)
}
continue
}
atomic.AddUint64(queries, 1)
// read until EOSE or timeout
innerDone := false
for !innerDone {
select {
case <-sCtx.Done():
innerDone = true
case <-sub.EndOfStoredEvents:
innerDone = true
case ev, ok := <-sub.Events:
if !ok {
innerDone = true
break
}
if ev != nil {
atomic.AddUint64(results, 1)
}
}
}
sub.Unsub()
cancel()
// wait a random interval between queries
interval := minInterval
if maxInterval > minInterval {
delta := rng.Int63n(int64(maxInterval - minInterval))
interval += time.Duration(delta)
}
select {
case <-time.After(interval):
case <-ctx.Done():
return
}
}
}
func startReader(ctx context.Context, rl *ws.Client, received *uint64) error {
// Broad filter: subscribe to all incoming text notes so we also catch our own writes
f := filter.New()
f.Kinds = kind.NewS(kind.TextNote)
// We don't set authors to ensure we read all text notes coming in
ff := filter.S{f}
sub, err := rl.Subscribe(ctx, &ff, ws.WithLabel("stresstest-reader"))
if err != nil {
return err
}
go func() {
for {
select {
case <-ctx.Done():
return
case ev, ok := <-sub.Events:
if !ok {
return
}
if ev != nil {
atomic.AddUint64(received, 1)
}
}
}
}()
return nil
}
func main() {
var (
address string
port int
workers int
duration time.Duration
publishTimeout time.Duration
queryWorkers int
queryTimeout time.Duration
queryMinInt time.Duration
queryMaxInt time.Duration
skipCache bool
)
flag.StringVar(
&address, "address", "localhost", "relay address (host or IP)",
)
flag.IntVar(&port, "port", 3334, "relay port")
flag.IntVar(
&workers, "workers", 8, "number of concurrent publisher workers",
)
flag.DurationVar(
&duration, "duration", 60*time.Second,
"how long to run the stress test",
)
flag.DurationVar(
&publishTimeout, "publish-timeout", 15*time.Second,
"timeout waiting for OK per publish",
)
flag.IntVar(
&queryWorkers, "query-workers", 4, "number of concurrent query workers",
)
flag.DurationVar(
&queryTimeout, "query-timeout", 3*time.Second,
"subscription timeout for queries",
)
flag.DurationVar(
&queryMinInt, "query-min-interval", 50*time.Millisecond,
"minimum interval between queries per worker",
)
flag.DurationVar(
&queryMaxInt, "query-max-interval", 300*time.Millisecond,
"maximum interval between queries per worker",
)
flag.BoolVar(
&skipCache, "skip-cache", false,
"skip uploading examples.Cache before running",
)
flag.Parse()
relayURL := fmt.Sprintf("ws://%s:%d", address, port)
log.I.F("stresstest: connecting to %s", relayURL)
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
// Handle Ctrl+C
sigc := make(chan os.Signal, 1)
signal.Notify(sigc, os.Interrupt)
go func() {
select {
case <-sigc:
log.I.Ln("interrupt received, shutting down...")
cancel()
case <-ctx.Done():
}
}()
rl, err := ws.RelayConnect(ctx, relayURL)
if err != nil {
log.E.F("failed to connect to relay %s: %v", relayURL, err)
os.Exit(1)
}
defer rl.Close()
rc := &RelayConn{client: rl, url: relayURL}
// Load and publish cache events first (unless skipped)
idx, err := loadCacheAndIndex()
if err != nil {
log.E.F("failed to load examples.Cache: %v", err)
}
cacheSent := 0
if !skipCache && idx != nil && len(idx.events) > 0 {
log.I.F("sending %d events from examples.Cache...", len(idx.events))
cacheSent = publishCacheEvents(ctx, relayURL, idx)
log.I.F("sent %d/%d cache events", cacheSent, len(idx.events))
}
var pubOK uint64
var recvCount uint64
var qCount uint64
var qResults uint64
if err := startReader(ctx, rl, &recvCount); err != nil {
log.E.F("reader subscribe error: %v", err)
// continue anyway, we can still write
}
wg := sync.WaitGroup{}
// Start publisher workers
wg.Add(workers)
for i := 0; i < workers; i++ {
i := i
go func() {
defer wg.Done()
publisherWorker(ctx, rc, i, &pubOK)
}()
}
// Start query workers
if idx != nil && len(idx.events) > 0 && queryWorkers > 0 {
wg.Add(queryWorkers)
for i := 0; i < queryWorkers; i++ {
i := i
go func() {
defer wg.Done()
queryWorker(
ctx, rc, idx, i, &qCount, &qResults, queryTimeout,
queryMinInt, queryMaxInt,
)
}()
}
}
// Timer for duration and periodic stats
ticker := time.NewTicker(2 * time.Second)
defer ticker.Stop()
end := time.NewTimer(duration)
start := time.Now()
loop:
for {
select {
case <-ticker.C:
elapsed := time.Since(start).Seconds()
p := atomic.LoadUint64(&pubOK)
r := atomic.LoadUint64(&recvCount)
qc := atomic.LoadUint64(&qCount)
qr := atomic.LoadUint64(&qResults)
log.I.F(
"elapsed=%.1fs sent=%d (%.0f/s) received=%d cache_sent=%d queries=%d results=%d",
elapsed, p, float64(p)/elapsed, r, cacheSent, qc, qr,
)
case <-end.C:
break loop
case <-ctx.Done():
break loop
}
}
cancel()
wg.Wait()
p := atomic.LoadUint64(&pubOK)
r := atomic.LoadUint64(&recvCount)
qc := atomic.LoadUint64(&qCount)
qr := atomic.LoadUint64(&qResults)
log.I.F(
"stresstest complete: cache_sent=%d sent=%d received=%d queries=%d results=%d duration=%s",
cacheSent, p, r, qc, qr,
time.Since(start).Truncate(time.Millisecond),
)
}
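Usage note: with the defaults above, a run against a local relay might look like go run ./cmd/stresstest -address localhost -port 3334 -workers 8 -duration 60s -query-workers 4 (flag names come straight from the file; the exact build invocation depends on the module layout).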

5
go.mod
View File

@@ -19,7 +19,7 @@ require (
golang.org/x/lint v0.0.0-20241112194109-818c5a804067
golang.org/x/net v0.43.0
honnef.co/go/tools v0.6.1
lol.mleku.dev v1.0.2
lol.mleku.dev v1.0.3
lukechampine.com/frand v1.5.1
)
@@ -28,15 +28,12 @@ require (
github.com/cespare/xxhash/v2 v2.3.0 // indirect
github.com/dgraph-io/ristretto/v2 v2.2.0 // indirect
github.com/dustin/go-humanize v1.0.1 // indirect
github.com/fatih/color v1.18.0 // indirect
github.com/felixge/fgprof v0.9.3 // indirect
github.com/go-logr/logr v1.4.3 // indirect
github.com/go-logr/stdr v1.2.2 // indirect
github.com/google/flatbuffers v25.2.10+incompatible // indirect
github.com/google/pprof v0.0.0-20211214055906-6f57359322fd // indirect
github.com/klauspost/compress v1.18.0 // indirect
github.com/mattn/go-colorable v0.1.14 // indirect
github.com/mattn/go-isatty v0.0.20 // indirect
github.com/pmezard/go-difflib v1.0.0 // indirect
github.com/templexxx/cpu v0.0.1 // indirect
go.opentelemetry.io/auto/sdk v1.1.0 // indirect

11
go.sum
View File

@@ -20,8 +20,6 @@ github.com/dgryski/go-farm v0.0.0-20240924180020-3414d57e47da h1:aIftn67I1fkbMa5
github.com/dgryski/go-farm v0.0.0-20240924180020-3414d57e47da/go.mod h1:SqUrOPUnsFjfmXRMNPybcSiG0BgUW2AuFH8PAnS2iTw=
github.com/dustin/go-humanize v1.0.1 h1:GzkhY7T5VNhEkwH0PVJgjz+fX1rhBrR7pRT3mDkpeCY=
github.com/dustin/go-humanize v1.0.1/go.mod h1:Mu1zIs6XwVuF/gI1OepvI0qD18qycQx+mFykh5fBlto=
github.com/fatih/color v1.18.0 h1:S8gINlzdQ840/4pfAwic/ZE0djQEH3wM94VfqLTZcOM=
github.com/fatih/color v1.18.0/go.mod h1:4FelSpRwEGDpQ12mAdzqdOukCy4u8WUtOY6lkT/6HfU=
github.com/felixge/fgprof v0.9.3 h1:VvyZxILNuCiUCSXtPtYmmtGvb65nqXh2QFWc0Wpf2/g=
github.com/felixge/fgprof v0.9.3/go.mod h1:RdbpDgzqYVh/T9fPELJyV7EYJuHB55UTEULNun8eiPw=
github.com/go-logr/logr v1.2.2/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
@@ -46,10 +44,6 @@ github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/mattn/go-colorable v0.1.14 h1:9A9LHSqF/7dyVVX6g0U9cwm9pG3kP9gSzcuIPHPsaIE=
github.com/mattn/go-colorable v0.1.14/go.mod h1:6LmQG8QLFO4G5z1gPvYEzlUgJ2wF+stgPZH1UqBm1s8=
github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY=
github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
github.com/pkg/profile v1.7.0 h1:hnbDkaNWPCLMO9wGLdBFTIZvzDrDfBM2072E1S9gJkA=
github.com/pkg/profile v1.7.0/go.mod h1:8Uer0jas47ZQMJ7VD+OHknK4YDY07LPUC6dEvqDjvNo=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
@@ -101,7 +95,6 @@ golang.org/x/sync v0.16.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20211007075335-d3039528d8ac/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.35.0 h1:vz1N37gP5bs89s7He8XuIYXpyY0+QlsKmzipCbUtyxI=
golang.org/x/sys v0.35.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
@@ -121,7 +114,7 @@ gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
honnef.co/go/tools v0.6.1 h1:R094WgE8K4JirYjBaOpz/AvTyUu/3wbmAoskKN/pxTI=
honnef.co/go/tools v0.6.1/go.mod h1:3puzxxljPCe8RGJX7BIy1plGbxEOZni5mR2aXe3/uk4=
lol.mleku.dev v1.0.2 h1:bSV1hHnkmt1hq+9nSvRwN6wgcI7itbM3XRZ4dMB438c=
lol.mleku.dev v1.0.2/go.mod h1:DQ0WnmkntA9dPLCXgvtIgYt5G0HSqx3wSTLolHgWeLA=
lol.mleku.dev v1.0.3 h1:IrqLd/wFRghu6MX7mgyKh//3VQiId2AM4RdCbFqSLnY=
lol.mleku.dev v1.0.3/go.mod h1:DQ0WnmkntA9dPLCXgvtIgYt5G0HSqx3wSTLolHgWeLA=
lukechampine.com/frand v1.5.1 h1:fg0eRtdmGFIxhP5zQJzM1lFDbD6CUfu/f+7WgAZd5/w=
lukechampine.com/frand v1.5.1/go.mod h1:4VstaWc2plN4Mjr10chUD46RAVGWhpkZ5Nja8+Azp0Q=

228
main.go
View File

@@ -3,25 +3,149 @@ package main
import (
"context"
"fmt"
"net/http"
pp "net/http/pprof"
"os"
"os/exec"
"os/signal"
"runtime"
"time"
"github.com/pkg/profile"
"lol.mleku.dev/chk"
"lol.mleku.dev/log"
"next.orly.dev/app"
"next.orly.dev/app/config"
acl "next.orly.dev/pkg/acl"
database "next.orly.dev/pkg/database"
"next.orly.dev/pkg/acl"
"next.orly.dev/pkg/database"
"next.orly.dev/pkg/spider"
"next.orly.dev/pkg/version"
)
// openBrowser attempts to open the specified URL in the default browser.
// It supports multiple platforms including Linux, macOS, and Windows.
func openBrowser(url string) {
var err error
switch runtime.GOOS {
case "linux":
err = exec.Command("xdg-open", url).Start()
case "windows":
err = exec.Command(
"rundll32", "url.dll,FileProtocolHandler", url,
).Start()
case "darwin":
err = exec.Command("open", url).Start()
default:
log.W.F("unsupported platform for opening browser: %s", runtime.GOOS)
return
}
if err != nil {
log.E.F("failed to open browser: %v", err)
} else {
log.I.F("opened browser to %s", url)
}
}
func main() {
runtime.GOMAXPROCS(runtime.NumCPU() * 4)
var err error
var cfg *config.C
if cfg, err = config.New(); chk.T(err) {
}
log.I.F("starting %s %s", cfg.AppName, version.V)
startProfiler(cfg.Pprof)
// If OpenPprofWeb is true and profiling is enabled, we need to ensure HTTP profiling is also enabled
if cfg.OpenPprofWeb && cfg.Pprof != "" && !cfg.PprofHTTP {
log.I.F("enabling HTTP pprof server to support web viewer")
cfg.PprofHTTP = true
}
switch cfg.Pprof {
case "cpu":
if cfg.PprofPath != "" {
prof := profile.Start(
profile.CPUProfile, profile.ProfilePath(cfg.PprofPath),
)
defer prof.Stop()
} else {
prof := profile.Start(profile.CPUProfile)
defer prof.Stop()
}
case "memory":
if cfg.PprofPath != "" {
prof := profile.Start(
profile.MemProfile, profile.MemProfileRate(32),
profile.ProfilePath(cfg.PprofPath),
)
defer prof.Stop()
} else {
prof := profile.Start(profile.MemProfile)
defer prof.Stop()
}
case "allocation":
if cfg.PprofPath != "" {
prof := profile.Start(
profile.MemProfileAllocs, profile.MemProfileRate(32),
profile.ProfilePath(cfg.PprofPath),
)
defer prof.Stop()
} else {
prof := profile.Start(profile.MemProfileAllocs)
defer prof.Stop()
}
case "heap":
if cfg.PprofPath != "" {
prof := profile.Start(
profile.MemProfileHeap, profile.ProfilePath(cfg.PprofPath),
)
defer prof.Stop()
} else {
prof := profile.Start(profile.MemProfileHeap)
defer prof.Stop()
}
case "mutex":
if cfg.PprofPath != "" {
prof := profile.Start(
profile.MutexProfile, profile.ProfilePath(cfg.PprofPath),
)
defer prof.Stop()
} else {
prof := profile.Start(profile.MutexProfile)
defer prof.Stop()
}
case "threadcreate":
if cfg.PprofPath != "" {
prof := profile.Start(
profile.ThreadcreationProfile,
profile.ProfilePath(cfg.PprofPath),
)
defer prof.Stop()
} else {
prof := profile.Start(profile.ThreadcreationProfile)
defer prof.Stop()
}
case "goroutine":
if cfg.PprofPath != "" {
prof := profile.Start(
profile.GoroutineProfile, profile.ProfilePath(cfg.PprofPath),
)
defer prof.Stop()
} else {
prof := profile.Start(profile.GoroutineProfile)
defer prof.Stop()
}
case "block":
if cfg.PprofPath != "" {
prof := profile.Start(
profile.BlockProfile, profile.ProfilePath(cfg.PprofPath),
)
defer prof.Stop()
} else {
prof := profile.Start(profile.BlockProfile)
defer prof.Stop()
}
}
ctx, cancel := context.WithCancel(context.Background())
var db *database.D
if db, err = database.New(
@@ -34,6 +158,100 @@ func main() {
os.Exit(1)
}
acl.Registry.Syncer()
// Initialize and start spider functionality if enabled
spiderCtx, spiderCancel := context.WithCancel(ctx)
spiderInstance := spider.New(db, cfg, spiderCtx, spiderCancel)
spiderInstance.Start()
defer spiderInstance.Stop()
// Start HTTP pprof server if enabled
if cfg.PprofHTTP {
pprofAddr := fmt.Sprintf("%s:%d", cfg.Listen, 6060)
pprofMux := http.NewServeMux()
pprofMux.HandleFunc("/debug/pprof/", pp.Index)
pprofMux.HandleFunc("/debug/pprof/cmdline", pp.Cmdline)
pprofMux.HandleFunc("/debug/pprof/profile", pp.Profile)
pprofMux.HandleFunc("/debug/pprof/symbol", pp.Symbol)
pprofMux.HandleFunc("/debug/pprof/trace", pp.Trace)
for _, p := range []string{
"allocs", "block", "goroutine", "heap", "mutex", "threadcreate",
} {
pprofMux.Handle("/debug/pprof/"+p, pp.Handler(p))
}
ppSrv := &http.Server{Addr: pprofAddr, Handler: pprofMux}
go func() {
log.I.F("pprof server listening on %s", pprofAddr)
if err := ppSrv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
log.E.F("pprof server error: %v", err)
}
}()
go func() {
<-ctx.Done()
shutdownCtx, cancelShutdown := context.WithTimeout(
context.Background(), 2*time.Second,
)
defer cancelShutdown()
_ = ppSrv.Shutdown(shutdownCtx)
}()
// Open the pprof web viewer if enabled
if cfg.OpenPprofWeb && cfg.Pprof != "" {
pprofURL := fmt.Sprintf("http://localhost:6060/debug/pprof/")
go func() {
// Wait a moment for the server to start
time.Sleep(500 * time.Millisecond)
openBrowser(pprofURL)
}()
}
}
// Start health check HTTP server if configured
var healthSrv *http.Server
if cfg.HealthPort > 0 {
mux := http.NewServeMux()
mux.HandleFunc(
"/healthz", func(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(http.StatusOK)
_, _ = w.Write([]byte("ok"))
log.I.F("health check ok")
},
)
// Optional shutdown endpoint to gracefully stop the process so profiling defers run
if cfg.EnableShutdown {
mux.HandleFunc(
"/shutdown", func(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(http.StatusOK)
_, _ = w.Write([]byte("shutting down"))
log.I.F("shutdown requested via /shutdown; sending SIGINT to self")
go func() {
p, _ := os.FindProcess(os.Getpid())
_ = p.Signal(os.Interrupt)
}()
},
)
}
healthSrv = &http.Server{
Addr: fmt.Sprintf(
"%s:%d", cfg.Listen, cfg.HealthPort,
), Handler: mux,
}
go func() {
log.I.F("health check server listening on %s", healthSrv.Addr)
if err := healthSrv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
log.E.F("health server error: %v", err)
}
}()
go func() {
<-ctx.Done()
shutdownCtx, cancelShutdown := context.WithTimeout(
context.Background(), 2*time.Second,
)
defer cancelShutdown()
_ = healthSrv.Shutdown(shutdownCtx)
}()
}
quit := app.Run(ctx, cfg, db)
sigs := make(chan os.Signal, 1)
signal.Notify(sigs, os.Interrupt)
@@ -43,12 +261,14 @@ func main() {
fmt.Printf("\r")
cancel()
chk.E(db.Close())
log.I.F("exiting")
return
case <-quit:
cancel()
chk.E(db.Close())
log.I.F("exiting")
return
}
}
log.I.F("exiting")
}
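Usage note: when HealthPort is set, a request to /healthz returns 200 "ok"; with EnableShutdown also set, a request to /shutdown makes the process send itself SIGINT so the profiling defers above get to run, e.g. curl http://localhost:<HealthPort>/shutdown (port placeholder, as configured).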

View File

@@ -12,7 +12,7 @@ import (
"lol.mleku.dev/errorf"
"lol.mleku.dev/log"
"next.orly.dev/app/config"
database "next.orly.dev/pkg/database"
"next.orly.dev/pkg/database"
"next.orly.dev/pkg/database/indexes/types"
"next.orly.dev/pkg/encoders/bech32encoding"
"next.orly.dev/pkg/encoders/envelopes"
@@ -24,7 +24,8 @@ import (
"next.orly.dev/pkg/encoders/hex"
"next.orly.dev/pkg/encoders/kind"
"next.orly.dev/pkg/encoders/tag"
utils "next.orly.dev/pkg/utils"
"next.orly.dev/pkg/protocol/publish"
"next.orly.dev/pkg/utils"
"next.orly.dev/pkg/utils/normalize"
"next.orly.dev/pkg/utils/values"
)
@@ -33,6 +34,7 @@ type Follows struct {
Ctx context.Context
cfg *config.C
*database.D
pubs *publish.S
followsMx sync.RWMutex
admins [][]byte
follows [][]byte
@@ -53,6 +55,9 @@ func (f *Follows) Configure(cfg ...any) (err error) {
case context.Context:
// log.D.F("setting ACL context: %s", c.Value("id"))
f.Ctx = c
case *publish.S:
// set publisher for dispatching new events
f.pubs = c
default:
err = errorf.E("invalid type: %T", reflect.TypeOf(ca))
}
@@ -74,7 +79,7 @@ func (f *Follows) Configure(cfg ...any) (err error) {
} else {
adm = a
}
log.I.F("admin: %0x", adm)
// log.I.F("admin: %0x", adm)
f.admins = append(f.admins, adm)
fl := &filter.F{
Authors: tag.NewFromAny(adm),
@@ -203,6 +208,7 @@ func (f *Follows) startSubscriptions(ctx context.Context) {
return
}
urls := f.adminRelays()
log.I.S(urls)
if len(urls) == 0 {
log.W.F("follows syncer: no admin relays found in DB (kind 10002)")
return
@@ -224,6 +230,15 @@ func (f *Follows) startSubscriptions(ctx context.Context) {
c, _, err := websocket.Dial(ctx, u, nil)
if err != nil {
log.W.F("follows syncer: dial %s failed: %v", u, err)
if strings.Contains(
err.Error(), "response status code 101 but got 403",
) {
// A 403 (Forbidden) means the relay is refusing our
// connection, usually because the IP or user is blocked,
// so stop retrying this relay.
return
}
timer := time.NewTimer(backoff)
select {
case <-ctx.Done():
@@ -290,11 +305,16 @@ func (f *Follows) startSubscriptions(ctx context.Context) {
)
}
// ignore duplicates and continue
} else {
// Only dispatch if the event was newly saved (no error)
if f.pubs != nil {
go f.pubs.Deliver(res.Event)
}
// log.I.F(
// "saved new event from follows syncer: %0x",
// res.Event.ID,
// )
}
log.I.F(
"saved new event from follows syncer: %0x",
res.Event.ID,
)
case eoseenvelope.L:
// ignore, continue subscription
default:
@@ -340,6 +360,16 @@ func (f *Follows) Syncer() {
f.updated <- struct{}{}
}
// GetFollowedPubkeys returns a copy of the followed pubkeys list
func (f *Follows) GetFollowedPubkeys() [][]byte {
f.followsMx.RLock()
defer f.followsMx.RUnlock()
followedPubkeys := make([][]byte, len(f.follows))
copy(followedPubkeys, f.follows)
return followedPubkeys
}
func init() {
log.T.F("registering follows ACL")
Registry.Register(new(Follows))

View File

@@ -2,6 +2,7 @@ package database
import (
"bytes"
"fmt"
"github.com/dgraph-io/badger/v4"
"lol.mleku.dev/chk"
@@ -25,8 +26,23 @@ func (d *D) FetchEventBySerial(ser *types.Uint40) (ev *event.E, err error) {
if v, err = item.ValueCopy(nil); chk.E(err) {
return
}
// Check if we have valid data before attempting to unmarshal
if len(v) < 32+32+1+2+1+1+64 { // ID + Pubkey + min varint fields + Sig
err = fmt.Errorf(
"incomplete event data: got %d bytes, expected at least %d",
len(v), 32+32+1+2+1+1+64,
)
return
}
ev = new(event.E)
if err = ev.UnmarshalBinary(bytes.NewBuffer(v)); chk.E(err) {
if err = ev.UnmarshalBinary(bytes.NewBuffer(v)); err != nil {
// Add more context to EOF errors for debugging
if err.Error() == "EOF" {
err = fmt.Errorf(
"EOF while unmarshaling event (serial=%v, data_len=%d): %w",
ser, len(v), err,
)
}
return
}
return

View File

@@ -0,0 +1,68 @@
package database
import (
"bytes"
"github.com/dgraph-io/badger/v4"
"lol.mleku.dev/chk"
"next.orly.dev/pkg/database/indexes"
"next.orly.dev/pkg/database/indexes/types"
"next.orly.dev/pkg/encoders/event"
)
// FetchEventsBySerials fetches multiple events by their serials in a single database transaction.
// Returns a map of serial uint64 value to event, only including successfully fetched events.
func (d *D) FetchEventsBySerials(serials []*types.Uint40) (events map[uint64]*event.E, err error) {
events = make(map[uint64]*event.E)
if len(serials) == 0 {
return events, nil
}
if err = d.View(
func(txn *badger.Txn) (err error) {
for _, ser := range serials {
buf := new(bytes.Buffer)
if err = indexes.EventEnc(ser).MarshalWrite(buf); chk.E(err) {
// Skip this serial on error but continue with others
continue
}
var item *badger.Item
if item, err = txn.Get(buf.Bytes()); err != nil {
// Skip this serial if not found but continue with others
err = nil
continue
}
var v []byte
if v, err = item.ValueCopy(nil); chk.E(err) {
// Skip this serial on error but continue with others
err = nil
continue
}
// Check if we have valid data before attempting to unmarshal
if len(v) < 32+32+1+2+1+1+64 { // ID + Pubkey + min varint fields + Sig
// Skip this serial - incomplete data
continue
}
ev := new(event.E)
if err = ev.UnmarshalBinary(bytes.NewBuffer(v)); err != nil {
// Skip this serial on unmarshal error but continue with others
err = nil
continue
}
// Successfully unmarshaled event, add to results
events[ser.Get()] = ev
}
return nil
},
); err != nil {
return
}
return events, nil
}
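A minimal caller sketch, assuming d is an open *D and the serials were resolved beforehand (error handling abbreviated):
// hypothetical caller: batch-fetch, then walk the results by serial value
events, err := d.FetchEventsBySerials(serials)
if err != nil {
	return err // only transaction-level failures reach here
}
for serVal, ev := range events {
	// serials absent from the map were skipped (not found or corrupt)
	log.T.F("serial=%d id=%0x", serVal, ev.ID)
}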

View File

@@ -362,7 +362,6 @@ func GetIndexesFromFilter(f *filter.F) (idxs []Range, err error) {
if f.Authors != nil && f.Authors.Len() > 0 {
for _, author := range f.Authors.T {
var p *types2.PubHash
log.I.S(author)
if p, err = CreatePubHashFromData(author); chk.E(err) {
return
}

View File

@@ -8,6 +8,7 @@ import (
"lol.mleku.dev/errorf"
"lol.mleku.dev/log"
"next.orly.dev/pkg/database/indexes/types"
"next.orly.dev/pkg/encoders/event"
"next.orly.dev/pkg/encoders/filter"
"next.orly.dev/pkg/encoders/hex"
"next.orly.dev/pkg/encoders/tag"
@@ -19,18 +20,16 @@ func (d *D) GetSerialById(id []byte) (ser *types.Uint40, err error) {
if idxs, err = GetIndexesFromFilter(&filter.F{Ids: tag.NewFromBytesSlice(id)}); chk.E(err) {
return
}
for i, idx := range idxs {
log.T.F(
"GetSerialById: searching range %d: start=%x, end=%x", i, idx.Start,
idx.End,
)
}
// for i, idx := range idxs {
// log.T.F(
// "GetSerialById: searching range %d: start=%x, end=%x", i, idx.Start,
// idx.End,
// )
// }
if len(idxs) == 0 {
err = errorf.E("no indexes found for id %0x", id)
return
}
idFound := false
if err = d.View(
func(txn *badger.Txn) (err error) {
@@ -49,16 +48,15 @@ func (d *D) GetSerialById(id []byte) (ser *types.Uint40, err error) {
idFound = true
} else {
// Item not found in database
log.T.F(
"GetSerialById: ID not found in database: %s", hex.Enc(id),
)
// log.T.F(
// "GetSerialById: ID not found in database: %s", hex.Enc(id),
// )
}
return
},
); chk.E(err) {
return
}
if !idFound {
err = errorf.T("id not found in database: %s", hex.Enc(id))
return
@@ -67,7 +65,99 @@ func (d *D) GetSerialById(id []byte) (ser *types.Uint40, err error) {
return
}
//
// GetSerialsByIds takes a tag.T containing multiple IDs and returns a map of
// IDs to their corresponding serial numbers. All lookups run inside a single
// read transaction, which is cheaper than resolving each ID separately.
func (d *D) GetSerialsByIds(ids *tag.T) (
serials map[string]*types.Uint40, err error,
) {
return d.GetSerialsByIdsWithFilter(ids, nil)
}
// GetSerialsByIdsWithFilter takes a tag.T containing multiple IDs and returns a
// map of IDs to their corresponding serial numbers, applying an optional filter
// function to each fetched event. All IDs are resolved in one read transaction.
func (d *D) GetSerialsByIdsWithFilter(
ids *tag.T, fn func(ev *event.E, ser *types.Uint40) bool,
) (serials map[string]*types.Uint40, err error) {
log.T.F("GetSerialsByIdsWithFilter: input ids count=%d", ids.Len())
// Initialize the result map
serials = make(map[string]*types.Uint40)
// Return early if no IDs are provided
if ids.Len() == 0 {
return
}
// Process all IDs in a single transaction
if err = d.View(
func(txn *badger.Txn) (err error) {
it := txn.NewIterator(badger.DefaultIteratorOptions)
defer it.Close()
// Process each ID sequentially
for _, id := range ids.T {
// idHex := hex.Enc(id)
// Get the index prefix for this ID
var idxs []Range
if idxs, err = GetIndexesFromFilter(&filter.F{Ids: tag.NewFromBytesSlice(id)}); chk.E(err) {
// Skip this ID if we can't create its index
continue
}
// Skip if no index was created
if len(idxs) == 0 {
continue
}
// Seek to the start of this ID's range in the database
it.Seek(idxs[0].Start)
if it.ValidForPrefix(idxs[0].Start) {
// Found an entry for this ID
item := it.Item()
key := item.Key()
// Extract the serial number from the key
ser := new(types.Uint40)
buf := bytes.NewBuffer(key[len(key)-5:])
if err = ser.UnmarshalRead(buf); chk.E(err) {
continue
}
// If a filter function is provided, fetch the event and apply the filter
if fn != nil {
var ev *event.E
if ev, err = d.FetchEventBySerial(ser); err != nil {
// Skip this event if we can't fetch it
continue
}
// Apply the filter
if !fn(ev, ser) {
// Skip this event if it doesn't pass the filter
continue
}
}
// Store the serial in the result map keyed by the raw ID bytes (as a string)
serials[string(id)] = ser
}
}
return
},
); chk.E(err) {
return
}
log.T.F(
"GetSerialsByIdsWithFilter: found %d serials out of %d requested ids",
len(serials), ids.Len(),
)
return
}
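A sketch of the filtered form, keeping only events that satisfy a predicate (the predicate here is illustrative):
serials, err := d.GetSerialsByIdsWithFilter(ids, func(ev *event.E, ser *types.Uint40) bool {
	return ev.Kind == kind.TextNote.K // hypothetical: keep text notes only
})
if err != nil {
	return err
}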
// func (d *D) GetSerialBytesById(id []byte) (ser []byte, err error) {
// var idxs []Range
// if idxs, err = GetIndexesFromFilter(&filter.F{Ids: tag.New(id)}); chk.E(err) {

View File

@@ -48,9 +48,11 @@ func TestGetSerialById(t *testing.T) {
// Unmarshal the event
if _, err = ev.Unmarshal(b); chk.E(err) {
ev.Free()
t.Fatal(err)
}
ev.Free()
events = append(events, ev)
// Save the event to the database

View File

@@ -55,8 +55,10 @@ func TestGetSerialsByRange(t *testing.T) {
// Unmarshal the event
if _, err = ev.Unmarshal(b); chk.E(err) {
ev.Free()
t.Fatal(err)
}
ev.Free()
events = append(events, ev)

62
pkg/database/markers.go Normal file
View File

@@ -0,0 +1,62 @@
package database
import (
"github.com/dgraph-io/badger/v4"
"lol.mleku.dev/chk"
)
const (
markerPrefix = "MARKER:"
)
// SetMarker stores an arbitrary marker in the database
func (d *D) SetMarker(key string, value []byte) (err error) {
markerKey := []byte(markerPrefix + key)
err = d.Update(func(txn *badger.Txn) error {
return txn.Set(markerKey, value)
})
return
}
// GetMarker retrieves an arbitrary marker from the database
func (d *D) GetMarker(key string) (value []byte, err error) {
markerKey := []byte(markerPrefix + key)
err = d.View(func(txn *badger.Txn) error {
item, err := txn.Get(markerKey)
if err != nil {
return err
}
value, err = item.ValueCopy(nil)
return err
})
return
}
// HasMarker checks if a marker exists in the database
func (d *D) HasMarker(key string) (exists bool) {
markerKey := []byte(markerPrefix + key)
err := d.View(func(txn *badger.Txn) error {
_, err := txn.Get(markerKey)
return err
})
exists = !chk.E(err)
return
}
// DeleteMarker removes a marker from the database
func (d *D) DeleteMarker(key string) (err error) {
markerKey := []byte(markerPrefix + key)
err = d.Update(func(txn *badger.Txn) error {
return txn.Delete(markerKey)
})
return
}
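Markers are plain namespaced key/value records, convenient for one-shot flags; a minimal usage sketch (the key name is hypothetical):
if !d.HasMarker("migrated-v2") {
	// ... perform the one-time work ...
	if err := d.SetMarker("migrated-v2", []byte("done")); err != nil {
		return err
	}
}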

View File

@@ -5,7 +5,6 @@ import (
"context"
"sort"
"strconv"
"strings"
"time"
"lol.mleku.dev/chk"
@@ -43,73 +42,66 @@ func (d *D) QueryEvents(c context.Context, f *filter.F) (
var expDeletes types.Uint40s
var expEvs event.S
if f.Ids != nil && f.Ids.Len() > 0 {
for _, id := range f.Ids.T {
log.T.F("QueryEvents: looking for ID=%s", hex.Enc(id))
}
// Get all serials for the requested IDs in a single batch operation
log.T.F("QueryEvents: ids path, count=%d", f.Ids.Len())
for _, idx := range f.Ids.T {
log.T.F("QueryEvents: lookup id=%s", hex.Enc(idx))
// we know there is only Ids in this, so run the ID query and fetch.
var ser *types.Uint40
var idErr error
if ser, idErr = d.GetSerialById(idx); idErr != nil {
// Check if this is a "not found" error which is expected for IDs we don't have
if strings.Contains(idErr.Error(), "id not found in database") {
log.T.F(
"QueryEvents: ID not found in database: %s",
hex.Enc(idx),
)
} else {
// Log unexpected errors but continue processing other IDs
log.E.F(
"QueryEvents: error looking up id=%s err=%v",
hex.Enc(idx), idErr,
)
}
continue
}
// Check if the serial is nil, which indicates the ID wasn't found
if ser == nil {
log.T.F("QueryEvents: Serial is nil for ID: %s", hex.Enc(idx))
continue
}
// fetch the events
var ev *event.E
if ev, err = d.FetchEventBySerial(ser); err != nil {
// Use GetSerialsByIds to batch process all IDs at once
serials, idErr := d.GetSerialsByIds(f.Ids)
if idErr != nil {
log.E.F("QueryEvents: error looking up ids: %v", idErr)
// Continue with whatever IDs we found
}
// Convert serials map to slice for batch fetch
var serialsSlice []*types.Uint40
idHexToSerial := make(map[uint64]string) // Map serial value back to the original ID key (raw bytes as string)
for idHex, ser := range serials {
serialsSlice = append(serialsSlice, ser)
idHexToSerial[ser.Get()] = idHex
}
// Fetch all events in a single batch operation
var fetchedEvents map[uint64]*event.E
if fetchedEvents, err = d.FetchEventsBySerials(serialsSlice); err != nil {
log.E.F("QueryEvents: batch fetch failed: %v", err)
return
}
// Process each successfully fetched event and apply filters
for serialValue, ev := range fetchedEvents {
idHex := idHexToSerial[serialValue]
// Convert serial value back to Uint40 for expiration handling
ser := new(types.Uint40)
if err = ser.Set(serialValue); err != nil {
log.T.F(
"QueryEvents: fetch by serial failed for id=%s ser=%v err=%v",
hex.Enc(idx), ser, err,
"QueryEvents: error converting serial %d: %v", serialValue,
err,
)
continue
}
log.T.F(
"QueryEvents: found id=%s kind=%d created_at=%d",
hex.Enc(ev.ID), ev.Kind, ev.CreatedAt,
)
// check for an expiration tag and delete after returning the result
if CheckExpiration(ev) {
log.T.F(
"QueryEvents: id=%s filtered out due to expiration",
hex.Enc(ev.ID),
"QueryEvents: id=%s filtered out due to expiration", idHex,
)
expDeletes = append(expDeletes, ser)
expEvs = append(expEvs, ev)
continue
}
// skip events that have been deleted by a proper deletion event
if derr := d.CheckForDeleted(ev, nil); derr != nil {
log.T.F(
"QueryEvents: id=%s filtered out due to deletion: %v",
hex.Enc(ev.ID), derr,
)
// log.T.F("QueryEvents: id=%s filtered out due to deletion: %v", idHex, derr)
continue
}
log.T.F(
"QueryEvents: id=%s SUCCESSFULLY FOUND, adding to results",
hex.Enc(ev.ID),
)
// Add the event to the results
evs = append(evs, ev)
// log.T.F("QueryEvents: id=%s SUCCESSFULLY FOUND, adding to results", idHex)
}
// sort the events by timestamp
sort.Slice(
evs, func(i, j int) bool {
@@ -159,16 +151,33 @@ func (d *D) QueryEvents(c context.Context, f *filter.F) (
// Add deletion events to the list of events to process
idPkTs = append(idPkTs, deletionIdPkTs...)
}
// First pass: collect all deletion events
// Prepare serials for batch fetch
var allSerials []*types.Uint40
serialToIdPk := make(map[uint64]*store.IdPkTs)
for _, idpk := range idPkTs {
var ev *event.E
ser := new(types.Uint40)
if err = ser.Set(idpk.Ser); chk.E(err) {
if err = ser.Set(idpk.Ser); err != nil {
continue
}
if ev, err = d.FetchEventBySerial(ser); err != nil {
allSerials = append(allSerials, ser)
serialToIdPk[ser.Get()] = idpk
}
// Fetch all events in batch
var allEvents map[uint64]*event.E
if allEvents, err = d.FetchEventsBySerials(allSerials); err != nil {
log.E.F("QueryEvents: batch fetch failed in non-IDs path: %v", err)
return
}
// First pass: collect all deletion events
for serialValue, ev := range allEvents {
// Convert serial value back to Uint40 for expiration handling
ser := new(types.Uint40)
if err = ser.Set(serialValue); err != nil {
continue
}
// check for an expiration tag and delete after returning the result
if CheckExpiration(ev) {
expDeletes = append(expDeletes, ser)
@@ -235,7 +244,7 @@ func (d *D) QueryEvents(c context.Context, f *filter.F) (
// For replaceable events, we need to check if there are any
// e-tags that reference events with the same kind and pubkey
for _, eTag := range eTags {
if eTag.Len() < 2 {
if eTag.Len() != 64 {
continue
}
// Get the event ID from the e-tag
@@ -292,29 +301,20 @@ func (d *D) QueryEvents(c context.Context, f *filter.F) (
}
}
// Second pass: process all events, filtering out deleted ones
for _, idpk := range idPkTs {
var ev *event.E
ser := new(types.Uint40)
if err = ser.Set(idpk.Ser); chk.E(err) {
continue
}
if ev, err = d.FetchEventBySerial(ser); err != nil {
continue
}
for _, ev := range allEvents {
// Add logging for tag filter debugging
if f.Tags != nil && f.Tags.Len() > 0 {
var eventTags []string
if ev.Tags != nil && ev.Tags.Len() > 0 {
for _, t := range *ev.Tags {
if t.Len() >= 2 {
eventTags = append(
eventTags,
string(t.Key())+"="+string(t.Value()),
)
}
}
}
// var eventTags []string
// if ev.Tags != nil && ev.Tags.Len() > 0 {
// for _, t := range *ev.Tags {
// if t.Len() >= 2 {
// eventTags = append(
// eventTags,
// string(t.Key())+"="+string(t.Value()),
// )
// }
// }
// }
// log.T.F(
// "QueryEvents: processing event ID=%s kind=%d tags=%v",
// hex.Enc(ev.ID), ev.Kind, eventTags,

View File

@@ -56,8 +56,10 @@ func setupTestDB(t *testing.T) (
// Unmarshal the event
if _, err = ev.Unmarshal(b); chk.E(err) {
ev.Free()
t.Fatal(err)
}
ev.Free()
events = append(events, ev)

View File

@@ -2,7 +2,6 @@ package database
import (
"fmt"
"sort"
"lol.mleku.dev/chk"
"lol.mleku.dev/errorf"
@@ -64,17 +63,17 @@ func (d *D) CheckForDeleted(ev *event.E, admins [][]byte) (err error) {
return
}
idPkTss = append(idPkTss, tmp...)
// sort by timestamp, so the first is the newest, which the event
// must be newer to not be deleted.
sort.Slice(
idPkTss, func(i, j int) bool {
return idPkTss[i].Ts > idPkTss[j].Ts
},
)
if ev.CreatedAt < idPkTss[0].Ts {
// find the newest deletion timestamp without sorting to reduce cost
maxTs := idPkTss[0].Ts
for i := 1; i < len(idPkTss); i++ {
if idPkTss[i].Ts > maxTs {
maxTs = idPkTss[i].Ts
}
}
if ev.CreatedAt < maxTs {
err = errorf.E(
"blocked: %0x was deleted by address %s because it is older than the delete: event: %d delete: %d",
ev.ID, at, ev.CreatedAt, idPkTss[0].Ts,
ev.ID, at, ev.CreatedAt, maxTs,
)
return
}
@@ -114,17 +113,19 @@ func (d *D) CheckForDeleted(ev *event.E, admins [][]byte) (err error) {
return
}
idPkTss = append(idPkTss, tmp...)
// sort by timestamp, so the first is the newest, which the event
// must be newer to not be deleted.
sort.Slice(
idPkTss, func(i, j int) bool {
return idPkTss[i].Ts > idPkTss[j].Ts
},
)
if ev.CreatedAt < idPkTss[0].Ts {
err = errorf.E(
// find the newest deletion without sorting to reduce cost
maxTs := idPkTss[0].Ts
maxId := idPkTss[0].Id
for i := 1; i < len(idPkTss); i++ {
if idPkTss[i].Ts > maxTs {
maxTs = idPkTss[i].Ts
maxId = idPkTss[i].Id
}
}
if ev.CreatedAt < maxTs {
err = fmt.Errorf(
"blocked: %0x was deleted: the event is older than the delete event %0x: event: %d delete: %d",
ev.ID, idPkTss[0].Id, ev.CreatedAt, idPkTss[0].Ts,
ev.ID, maxId, ev.CreatedAt, maxTs,
)
return
}
@@ -162,20 +163,25 @@ func (d *D) CheckForDeleted(ev *event.E, admins [][]byte) (err error) {
return
}
idPkTss = append(idPkTss, tmp...)
// sort by timestamp, so the first is the newest
sort.Slice(
idPkTss, func(i, j int) bool {
return idPkTss[i].Ts > idPkTss[j].Ts
},
)
if ev.CreatedAt < idPkTss[0].Ts {
// find the newest deletion without sorting to reduce cost
maxTs := idPkTss[0].Ts
// maxId := idPkTss[0].Id
for i := 1; i < len(idPkTss); i++ {
if idPkTss[i].Ts > maxTs {
maxTs = idPkTss[i].Ts
// maxId = idPkTss[i].Id
}
}
if ev.CreatedAt < maxTs {
err = errorf.E(
"blocked: %0x was deleted by address %s: event is older than the delete: event: %d delete: %d",
ev.ID, at, idPkTss[0].Id, ev.CreatedAt, idPkTss[0].Ts,
"blocked: %0x was deleted by address %s because it is older than the delete: event: %d delete: %d",
ev.ID, at, ev.CreatedAt, maxTs,
)
return
}
return
}
return
}
// otherwise we check for a delete by event id
var idxs []Range
@@ -196,15 +202,15 @@ func (d *D) CheckForDeleted(ev *event.E, admins [][]byte) (err error) {
if s, err = d.GetSerialsByRange(idx); chk.E(err) {
return
}
sers = append(sers, s...)
if len(s) > 0 {
// Any e-tag deletion found means the exact event was deleted and cannot be resubmitted
err = errorf.E("blocked: %0x has been deleted", ev.ID)
return
}
}
if len(sers) > 0 {
// For e-tag deletions (delete by ID), any deletion event means the event cannot be resubmitted
// regardless of timestamp, since it's a specific deletion of this exact event
err = errorf.E(
"blocked: %0x was deleted by ID and cannot be resubmitted",
ev.ID,
)
// Any e-tag deletion found means the exact event was deleted and cannot be resubmitted
err = errorf.E("blocked: %0x has been deleted", ev.ID)
return
}

View File

@@ -2,10 +2,10 @@ package database
import (
"context"
"errors"
"sort"
"lol.mleku.dev/chk"
"lol.mleku.dev/errorf"
"next.orly.dev/pkg/database/indexes/types"
"next.orly.dev/pkg/encoders/filter"
"next.orly.dev/pkg/interfaces/store"
@@ -20,7 +20,7 @@ func (d *D) QueryForIds(c context.Context, f *filter.F) (
) {
if f.Ids != nil && f.Ids.Len() > 0 {
// if there is Ids in the query, this is an error for this query
err = errorf.E("query for Ids is invalid for a filter with Ids")
err = errors.New("query for Ids is invalid for a filter with Ids")
return
}
var idxs []Range

View File

@@ -17,11 +17,12 @@ func (d *D) QueryForSerials(c context.Context, f *filter.F) (
var founds []*types.Uint40
var idPkTs []*store.IdPkTs
if f.Ids != nil && f.Ids.Len() > 0 {
for _, id := range f.Ids.T {
var ser *types.Uint40
if ser, err = d.GetSerialById(id); chk.E(err) {
return
}
// Use batch lookup to minimize transactions when resolving IDs to serials
var serialMap map[string]*types.Uint40
if serialMap, err = d.GetSerialsByIds(f.Ids); chk.E(err) {
return
}
for _, ser := range serialMap {
founds = append(founds, ser)
}
var tmp []*store.IdPkTs

View File

@@ -3,18 +3,16 @@ package database
import (
"bytes"
"context"
"errors"
"fmt"
"strings"
"github.com/dgraph-io/badger/v4"
"lol.mleku.dev/chk"
"lol.mleku.dev/errorf"
"lol.mleku.dev/log"
"next.orly.dev/pkg/database/indexes"
"next.orly.dev/pkg/database/indexes/types"
"next.orly.dev/pkg/encoders/event"
"next.orly.dev/pkg/encoders/filter"
"next.orly.dev/pkg/encoders/hex"
"next.orly.dev/pkg/encoders/kind"
"next.orly.dev/pkg/encoders/tag"
)
@@ -39,13 +37,13 @@ func (d *D) GetSerialsFromFilter(f *filter.F) (
// SaveEvent saves an event to the database, generating all the necessary indexes.
func (d *D) SaveEvent(c context.Context, ev *event.E) (kc, vc int, err error) {
if ev == nil {
err = errorf.E("nil event")
err = errors.New("nil event")
return
}
// check if the event already exists
var ser *types.Uint40
if ser, err = d.GetSerialById(ev.ID); err == nil && ser != nil {
err = errorf.E("blocked: event already exists: %0x", ev.ID)
err = errors.New("blocked: event already exists")
return
}
@@ -55,7 +53,7 @@ func (d *D) SaveEvent(c context.Context, ev *event.E) (kc, vc int, err error) {
err = nil
} else if err != nil {
// For any other error, return it
log.E.F("error checking if event exists: %s", err)
// log.E.F("error checking if event exists: %s", err)
return
}
@@ -65,7 +63,7 @@ func (d *D) SaveEvent(c context.Context, ev *event.E) (kc, vc int, err error) {
// "SaveEvent: rejecting resubmission of deleted event ID=%s: %v",
// hex.Enc(ev.ID), err,
// )
err = errorf.E("blocked: %s", err.Error())
err = fmt.Errorf("blocked: %s", err.Error())
return
}
// check for replacement
@@ -89,11 +87,11 @@ func (d *D) SaveEvent(c context.Context, ev *event.E) (kc, vc int, err error) {
}
// Only replace if the new event is newer or same timestamp
if ev.CreatedAt < oldEv.CreatedAt {
log.I.F(
"SaveEvent: rejecting older replaceable event ID=%s (created_at=%d) - existing event ID=%s (created_at=%d)",
hex.Enc(ev.ID), ev.CreatedAt, hex.Enc(oldEv.ID),
oldEv.CreatedAt,
)
// log.I.F(
// "SaveEvent: rejecting older replaceable event ID=%s (created_at=%d) - existing event ID=%s (created_at=%d)",
// hex.Enc(ev.ID), ev.CreatedAt, hex.Enc(oldEv.ID),
// oldEv.CreatedAt,
// )
shouldReplace = false
break
}
@@ -104,11 +102,11 @@ func (d *D) SaveEvent(c context.Context, ev *event.E) (kc, vc int, err error) {
if oldEv, err = d.FetchEventBySerial(s); chk.E(err) {
continue
}
log.I.F(
"SaveEvent: replacing older replaceable event ID=%s (created_at=%d) with newer event ID=%s (created_at=%d)",
hex.Enc(oldEv.ID), oldEv.CreatedAt, hex.Enc(ev.ID),
ev.CreatedAt,
)
// log.I.F(
// "SaveEvent: replacing older replaceable event ID=%s (created_at=%d) with newer event ID=%s (created_at=%d)",
// hex.Enc(oldEv.ID), oldEv.CreatedAt, hex.Enc(ev.ID),
// ev.CreatedAt,
// )
if err = d.DeleteEventBySerial(
c, s, oldEv,
); chk.E(err) {
@@ -117,7 +115,7 @@ func (d *D) SaveEvent(c context.Context, ev *event.E) (kc, vc int, err error) {
}
} else {
// Don't save the older event - return an error
err = errorf.E("blocked: event is older than existing replaceable event")
err = errors.New("blocked: event is older than existing replaceable event")
return
}
}
@@ -125,7 +123,7 @@ func (d *D) SaveEvent(c context.Context, ev *event.E) (kc, vc int, err error) {
// find the events and check timestamps before deleting
dTag := ev.Tags.GetFirst([]byte("d"))
if dTag == nil {
err = errorf.E("event is missing a d tag identifier")
err = errors.New("event is missing a d tag identifier")
return
}
f := &filter.F{
@@ -149,11 +147,11 @@ func (d *D) SaveEvent(c context.Context, ev *event.E) (kc, vc int, err error) {
}
// Only replace if the new event is newer or same timestamp
if ev.CreatedAt < oldEv.CreatedAt {
log.I.F(
"SaveEvent: rejecting older addressable event ID=%s (created_at=%d) - existing event ID=%s (created_at=%d)",
hex.Enc(ev.ID), ev.CreatedAt, hex.Enc(oldEv.ID),
oldEv.CreatedAt,
)
// log.I.F(
// "SaveEvent: rejecting older addressable event ID=%s (created_at=%d) - existing event ID=%s (created_at=%d)",
// hex.Enc(ev.ID), ev.CreatedAt, hex.Enc(oldEv.ID),
// oldEv.CreatedAt,
// )
shouldReplace = false
break
}
@@ -164,11 +162,11 @@ func (d *D) SaveEvent(c context.Context, ev *event.E) (kc, vc int, err error) {
if oldEv, err = d.FetchEventBySerial(s); chk.E(err) {
continue
}
log.I.F(
"SaveEvent: replacing older addressable event ID=%s (created_at=%d) with newer event ID=%s (created_at=%d)",
hex.Enc(oldEv.ID), oldEv.CreatedAt, hex.Enc(ev.ID),
ev.CreatedAt,
)
// log.I.F(
// "SaveEvent: replacing older addressable event ID=%s (created_at=%d) with newer event ID=%s (created_at=%d)",
// hex.Enc(oldEv.ID), oldEv.CreatedAt, hex.Enc(ev.ID),
// ev.CreatedAt,
// )
if err = d.DeleteEventBySerial(
c, s, oldEv,
); chk.E(err) {
@@ -177,7 +175,7 @@ func (d *D) SaveEvent(c context.Context, ev *event.E) (kc, vc int, err error) {
}
} else {
// Don't save the older event - return an error
err = errorf.E("blocked: event is older than existing addressable event")
err = errors.New("blocked: event is older than existing addressable event")
return
}
}
@@ -232,14 +230,14 @@ func (d *D) SaveEvent(c context.Context, ev *event.E) (kc, vc int, err error) {
return
},
)
log.T.F(
"total data written: %d bytes keys %d bytes values for event ID %s", kc,
vc, hex.Enc(ev.ID),
)
log.T.C(
func() string {
return fmt.Sprintf("event:\n%s\n", ev.Serialize())
},
)
// log.T.F(
// "total data written: %d bytes keys %d bytes values for event ID %s", kc,
// vc, hex.Enc(ev.ID),
// )
// log.T.C(
// func() string {
// return fmt.Sprintf("event:\n%s\n", ev.Serialize())
// },
// )
return
}

View File

@@ -48,7 +48,7 @@ func (d *D) IsSubscriptionActive(pubkey []byte) (bool, error) {
err := d.DB.Update(
func(txn *badger.Txn) error {
item, err := txn.Get([]byte(key))
if err == badger.ErrKeyNotFound {
if errors.Is(err, badger.ErrKeyNotFound) {
sub := &Subscription{TrialEnd: now.AddDate(0, 0, 30)}
data, err := json.Marshal(sub)
if err != nil {
@@ -90,7 +90,7 @@ func (d *D) ExtendSubscription(pubkey []byte, days int) error {
func(txn *badger.Txn) error {
var sub Subscription
item, err := txn.Get([]byte(key))
if err == badger.ErrKeyNotFound {
if errors.Is(err, badger.ErrKeyNotFound) {
sub.PaidUntil = now.AddDate(0, 0, days)
} else if err != nil {
return err

View File

@@ -11,7 +11,6 @@ import (
"next.orly.dev/pkg/encoders/event"
"next.orly.dev/pkg/encoders/text"
"next.orly.dev/pkg/interfaces/codec"
"next.orly.dev/pkg/utils/bufpool"
"next.orly.dev/pkg/utils/constraints"
"next.orly.dev/pkg/utils/units"
)
@@ -75,8 +74,8 @@ func (en *Submission) Unmarshal(b []byte) (r []byte, err error) {
if r, err = en.E.Unmarshal(r); chk.T(err) {
return
}
buf := bufpool.Get()
r = en.E.Marshal(buf)
// after parsing the event object, r points just after the event JSON
// now skip to the end of the envelope (consume comma/closing bracket etc.)
if r, err = envelopes.SkipToTheEnd(r); chk.E(err) {
return
}

View File

@@ -15,13 +15,10 @@ import (
"next.orly.dev/pkg/encoders/tag"
"next.orly.dev/pkg/encoders/text"
"next.orly.dev/pkg/utils"
"next.orly.dev/pkg/utils/bufpool"
)
// E is the primary datatype of nostr. This is the form of the structure that
// defines its JSON string-based format. Always use New() and Free() to create
// and free event.E to take advantage of the bufpool which greatly improves
// memory allocation behaviour when encoding and decoding nostr events.
// defines its JSON string-based format.
//
// WARNING: DO NOT use json.Marshal with this type because it will not properly
// encode <, >, and & characters due to legacy bullcrap in the encoding/json
@@ -57,10 +54,6 @@ type E struct {
// Sig is the signature on the ID hash that validates as coming from the
// Pubkey in binary format.
Sig []byte
// b is the decode buffer for the event.E. this is where the UnmarshalJSON
// will source the memory to store all of the fields except for the tags.
b bufpool.B
}
var (
@@ -73,31 +66,75 @@ var (
jSig = []byte("sig")
)
// New returns a new event.E. The returned event.E should be freed with Free()
// to return the unmarshalling buffer to the bufpool.
// New returns a new event.E.
func New() *E {
return &E{
b: bufpool.Get(),
}
return &E{}
}
// Free returns the event.E to the pool, as well as nilling all of the fields.
// This should hint to the GC that the event.E can be freed, and the memory
// reused. The decode buffer will be returned to the pool for reuse.
// Free nils all of the fields to hint to the GC that the event.E can be freed.
func (ev *E) Free() {
bufpool.Put(ev.b)
ev.ID = nil
ev.Pubkey = nil
ev.Tags = nil
ev.Content = nil
ev.Sig = nil
ev.b = nil
}
// Clone creates a deep copy of the event with independent memory allocations.
// The clone does not use bufpool, ensuring it has a separate lifetime from
// the original event. This prevents corruption when the original is freed
// while the clone is still in use (e.g., in asynchronous delivery).
func (ev *E) Clone() *E {
clone := &E{
CreatedAt: ev.CreatedAt,
Kind: ev.Kind,
}
// Deep copy all byte slices with independent memory
if ev.ID != nil {
clone.ID = make([]byte, len(ev.ID))
copy(clone.ID, ev.ID)
}
if ev.Pubkey != nil {
clone.Pubkey = make([]byte, len(ev.Pubkey))
copy(clone.Pubkey, ev.Pubkey)
}
if ev.Content != nil {
clone.Content = make([]byte, len(ev.Content))
copy(clone.Content, ev.Content)
}
if ev.Sig != nil {
clone.Sig = make([]byte, len(ev.Sig))
copy(clone.Sig, ev.Sig)
}
// Deep copy tags
if ev.Tags != nil {
clone.Tags = tag.NewS()
for _, tg := range *ev.Tags {
if tg != nil {
// Create new tag with deep-copied elements
newTag := tag.NewWithCap(len(tg.T))
for _, element := range tg.T {
newElement := make([]byte, len(element))
copy(newElement, element)
newTag.T = append(newTag.T, newElement)
}
clone.Tags.Append(newTag)
}
}
}
return clone
}
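A sketch of the intended pattern: hand a clone to an asynchronous consumer so the original can be freed immediately (deliver is hypothetical):
clone := ev.Clone() // independent allocations; unaffected by ev.Free()
go deliver(clone)
ev.Free()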
// EstimateSize returns a size for the event that allows for worst case scenario
// expansion of the escaped content and tags.
func (ev *E) EstimateSize() (size int) {
size = len(ev.ID)*2 + len(ev.Pubkey)*2 + len(ev.Sig)*2 + len(ev.Content)*2
if ev.Tags == nil {
return
}
for _, v := range *ev.Tags {
for _, w := range (*v).T {
size += len(w) * 2
@@ -112,23 +149,7 @@ func (ev *E) Marshal(dst []byte) (b []byte) {
b = append(b, '"')
b = append(b, jId...)
b = append(b, `":"`...)
// it can happen the slice has insufficient capacity to hold the content AND
// the signature at this point, because the signature encoder must have
// sufficient capacity pre-allocated as it does not append to the buffer.
// unlike every other encoding function up to this point. This also ensures
// that since the bufpool defaults to 1kb, most events won't have a
// re-allocation required, but if they do, it will be this next one, and it
// integrates properly with the buffer pool, reducing GC pressure and
// avoiding new heap allocations.
if cap(b) < len(b)+len(ev.Content)+7+256+2 {
b2 := make([]byte, len(b)+len(ev.Content)*2+512)
copy(b2, b)
b2 = b2[:len(b)]
// return the old buffer to the pool for reuse.
bufpool.PutBytes(b)
b = b2
}
b = b[:len(b)+2*sha256.Size]
b = append(b, make([]byte, 2*sha256.Size)...)
xhex.Encode(b[len(b)-2*sha256.Size:], ev.ID)
b = append(b, `","`...)
b = append(b, jPubkey...)
@@ -148,6 +169,9 @@ func (ev *E) Marshal(dst []byte) (b []byte) {
b = append(b, `":`...)
if ev.Tags != nil {
b = ev.Tags.Marshal(b)
} else {
// Emit empty array for nil tags to keep JSON valid
b = append(b, '[', ']')
}
b = append(b, `,"`...)
b = append(b, jContent...)
@@ -156,7 +180,7 @@ func (ev *E) Marshal(dst []byte) (b []byte) {
b = append(b, `","`...)
b = append(b, jSig...)
b = append(b, `":"`...)
b = b[:len(b)+2*schnorr.SignatureSize]
b = append(b, make([]byte, 2*schnorr.SignatureSize)...)
xhex.Encode(b[len(b)-2*schnorr.SignatureSize:], ev.Sig)
b = append(b, `"}`...)
return
@@ -164,27 +188,21 @@ func (ev *E) Marshal(dst []byte) (b []byte) {
// MarshalJSON marshals an event.E into a JSON byte string.
//
// Call bufpool.PutBytes(b) to return the buffer to the bufpool after use.
//
// WARNING: if json.Marshal is called in the hopes of invoking this function on
// an event, if it has <, > or & in the content or tags they are escaped into
// unicode escapes and break the event ID. Call this function directly in order
// to bypass this issue.
func (ev *E) MarshalJSON() (b []byte, err error) {
b = bufpool.Get()
b = ev.Marshal(b[:0])
b = ev.Marshal(nil)
return
}
func (ev *E) Serialize() (b []byte) {
b = bufpool.Get()
b = ev.Marshal(b[:0])
b = ev.Marshal(nil)
return
}
// Unmarshal unmarshals a JSON string into an event.E.
//
// Call ev.Free() to return the provided buffer to the bufpool afterwards.
func (ev *E) Unmarshal(b []byte) (rem []byte, err error) {
key := make([]byte, 0, 9)
for ; len(b) > 0; b = b[1:] {
@@ -197,7 +215,6 @@ func (ev *E) Unmarshal(b []byte) (rem []byte, err error) {
goto BetweenKeys
}
}
log.I.F("start")
goto eof
BetweenKeys:
for ; len(b) > 0; b = b[1:] {
@@ -210,7 +227,6 @@ BetweenKeys:
goto InKey
}
}
log.I.F("BetweenKeys")
goto eof
InKey:
for ; len(b) > 0; b = b[1:] {
@@ -220,7 +236,6 @@ InKey:
}
key = append(key, b[0])
}
log.I.F("InKey")
goto eof
InKV:
for ; len(b) > 0; b = b[1:] {
@@ -233,7 +248,6 @@ InKV:
goto InVal
}
}
log.I.F("InKV")
goto eof
InVal:
// Skip whitespace before value
@@ -357,8 +371,8 @@ BetweenKV:
goto InKey
}
}
log.I.F("between kv")
goto eof
// If we reach here, the buffer ended unexpectedly. Treat as end-of-object
goto AfterClose
AfterClose:
rem = b
return
@@ -377,6 +391,7 @@ eof:
//
// Call ev.Free() to return the provided buffer to the bufpool afterwards.
func (ev *E) UnmarshalJSON(b []byte) (err error) {
log.I.F("UnmarshalJSON: '%s'", b)
_, err = ev.Unmarshal(b)
return
}


@@ -9,7 +9,6 @@ import (
"lol.mleku.dev/errorf"
"next.orly.dev/pkg/encoders/text"
utils "next.orly.dev/pkg/utils"
"next.orly.dev/pkg/utils/bufpool"
)
// The tag position meanings, so they are clear when reading.
@@ -21,18 +20,17 @@ const (
type T struct {
T [][]byte
b bufpool.B
}
func New() *T { return &T{b: bufpool.Get()} }
func New() *T { return &T{} }
func NewFromBytesSlice(t ...[]byte) (tt *T) {
tt = &T{T: t, b: bufpool.Get()}
tt = &T{T: t}
return
}
func NewFromAny(t ...any) (tt *T) {
tt = &T{b: bufpool.Get()}
tt = &T{}
for _, v := range t {
switch vv := v.(type) {
case []byte:
@@ -47,11 +45,10 @@ func NewFromAny(t ...any) (tt *T) {
}
func NewWithCap(c int) *T {
return &T{T: make([][]byte, 0, c), b: bufpool.Get()}
return &T{T: make([][]byte, 0, c)}
}
func (t *T) Free() {
bufpool.Put(t.b)
t.T = nil
}
@@ -99,18 +96,12 @@ func (t *T) Marshal(dst []byte) (b []byte) {
// in an event as you will have a bad time. Use the json.Marshal function in the
// pkg/encoders/json package instead; it is a fork of the json library that
// disables html escaping for json.Marshal.
//
// Call bufpool.PutBytes(b) to return the buffer to the bufpool after use.
func (t *T) MarshalJSON() (b []byte, err error) {
b = bufpool.Get()
b = t.Marshal(b)
b = t.Marshal(nil)
return
}
// Unmarshal decodes a standard minified JSON array of strings to a tags.T.
//
// Call bufpool.PutBytes(b) to return the buffer to the bufpool after use if it
// was originally created using bufpool.Get().
func (t *T) Unmarshal(b []byte) (r []byte, err error) {
var inQuotes, openedBracket bool
var quoteStart int
@@ -127,7 +118,11 @@ func (t *T) Unmarshal(b []byte) (r []byte, err error) {
i++
} else if b[i] == '"' {
inQuotes = false
t.T = append(t.T, text.NostrUnescape(b[quoteStart:i]))
// Copy the quoted substring before unescaping so we don't mutate the
// original JSON buffer in-place (which would corrupt subsequent parsing).
copyBuf := make([]byte, i-quoteStart)
copy(copyBuf, b[quoteStart:i])
t.T = append(t.T, text.NostrUnescape(copyBuf))
}
}
if !openedBracket || inQuotes {


@@ -5,7 +5,6 @@ import (
"lol.mleku.dev/chk"
"next.orly.dev/pkg/utils"
"next.orly.dev/pkg/utils/bufpool"
)
// S is a list of tag.T - which are lists of string elements with ordering and
@@ -25,7 +24,8 @@ func NewSWithCap(c int) (s *S) {
func (s *S) Len() int {
if s == nil {
panic("tags cannot be used without initialization")
return 0
// panic("tags cannot be used without initialization")
}
return len(*s)
}
@@ -47,6 +47,9 @@ func (s *S) Append(t ...*T) {
// ContainsAny returns true if any of the values given in `values` matches any
// of the tag elements.
func (s *S) ContainsAny(tagName []byte, values [][]byte) bool {
if s == nil {
return false
}
if len(tagName) < 1 {
return false
}
@@ -67,10 +70,7 @@ func (s *S) ContainsAny(tagName []byte, values [][]byte) bool {
}
// MarshalJSON encodes a tags.T appended to a provided byte slice in JSON form.
//
// Call bufpool.PutBytes(b) to return the buffer to the bufpool after use.
func (s *S) MarshalJSON() (b []byte, err error) {
b = bufpool.Get()
b = append(b, '[')
for i, ss := range *s {
b = ss.Marshal(b)
@@ -97,8 +97,6 @@ func (s *S) Marshal(dst []byte) (b []byte) {
// UnmarshalJSON a tags.T from a provided byte slice and return what remains
// after the end of the array.
//
// Call bufpool.PutBytes(b) to return the buffer to the bufpool after use.
func (s *S) UnmarshalJSON(b []byte) (err error) {
_, err = s.Unmarshal(b)
return


@@ -66,6 +66,10 @@ func UnmarshalQuoted(b []byte) (content, rem []byte, err error) {
}
rem = b[:]
for ; len(rem) >= 0; rem = rem[1:] {
if len(rem) == 0 {
err = io.EOF
return
}
// advance to open quotes
if rem[0] == '"' {
rem = rem[1:]
@@ -94,7 +98,10 @@ func UnmarshalQuoted(b []byte) (content, rem []byte, err error) {
if !escaping {
rem = rem[1:]
content = content[:contentLen]
content = NostrUnescape(content)
// Create a copy of the content to avoid corrupting the original input buffer
contentCopy := make([]byte, len(content))
copy(contentCopy, content)
content = NostrUnescape(contentCopy)
return
}
contentLen++


@@ -140,6 +140,12 @@ func (r *Client) Context() context.Context { return r.connectionContext }
// IsConnected returns true if the connection to this relay seems to be active.
func (r *Client) IsConnected() bool { return r.connectionContext.Err() == nil }
// ConnectionCause returns the cancel cause for the relay connection context.
func (r *Client) ConnectionCause() error { return context.Cause(r.connectionContext) }
// LastError returns the last connection error observed by the reader loop.
func (r *Client) LastError() error { return r.ConnectionError }
// Connect tries to establish a websocket connection to r.URL.
// If the context expires before the connection is complete, an error is returned.
// Once successfully connected, context expiration has no effect: call r.Close
@@ -218,6 +224,11 @@ func (r *Client) ConnectWithTLS(
for {
select {
case <-r.connectionContext.Done():
log.T.F(
"WS.Client: connection context done for %s: cause=%v lastErr=%v",
r.URL, context.Cause(r.connectionContext),
r.ConnectionError,
)
ticker.Stop()
r.Connection = nil
@@ -241,13 +252,17 @@ func (r *Client) ConnectWithTLS(
"{%s} error writing ping: %v; closing websocket", r.URL,
err,
)
r.Close() // this should trigger a context cancelation
r.CloseWithReason(
fmt.Errorf(
"ping failed: %w", err,
),
) // this should trigger a context cancelation
return
}
case wr := <-r.writeQueue:
// all write requests will go through this to prevent races
log.D.F("{%s} sending %v\n", r.URL, string(wr.msg))
// log.D.F("{%s} sending %v\n", r.URL, string(wr.msg))
if err = r.Connection.WriteMessage(
r.connectionContext, wr.msg,
); err != nil {
@@ -269,7 +284,11 @@ func (r *Client) ConnectWithTLS(
r.connectionContext, buf,
); err != nil {
r.ConnectionError = err
r.Close()
log.T.F(
"WS.Client: reader loop error on %s: %v; closing connection",
r.URL, err,
)
r.CloseWithReason(fmt.Errorf("reader loop error: %w", err))
return
}
message := buf.Bytes()
@@ -358,11 +377,11 @@ func (r *Client) ConnectWithTLS(
if okCallback, exist := r.okCallbacks.Load(string(env.EventID)); exist {
okCallback(env.OK, env.ReasonString())
} else {
log.I.F(
"{%s} got an unexpected OK message for event %s",
r.URL,
env.EventID,
)
// log.I.F(
// "{%s} got an unexpected OK message for event %0x",
// r.URL,
// env.EventID,
// )
}
}
}
@@ -479,14 +498,27 @@ func (r *Client) Subscribe(
sub := r.PrepareSubscription(ctx, ff, opts...)
if r.Connection == nil {
log.T.F(
"WS.Subscribe: not connected to %s; aborting sub id=%s", r.URL,
sub.GetID(),
)
return nil, fmt.Errorf("not connected to %s", r.URL)
}
log.T.F(
"WS.Subscribe: firing subscription id=%s to %s with %d filters",
sub.GetID(), r.URL, len(*ff),
)
if err := sub.Fire(); err != nil {
log.T.F(
"WS.Subscribe: Fire failed id=%s to %s: %v", sub.GetID(), r.URL,
err,
)
return nil, fmt.Errorf(
"couldn't subscribe to %v at %s: %w", ff, r.URL, err,
)
}
log.T.F("WS.Subscribe: Fire succeeded id=%s to %s", sub.GetID(), r.URL)
return sub, nil
}
@@ -598,9 +630,10 @@ func (r *Client) QuerySync(
}
// Close closes the relay connection.
func (r *Client) Close() error {
return r.close(errors.New("Close() called"))
}
func (r *Client) Close() error { return r.CloseWithReason(errors.New("Close() called")) }
// CloseWithReason closes the relay connection with a specific reason that will be stored as the context cancel cause.
func (r *Client) CloseWithReason(reason error) error { return r.close(reason) }
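The reason passed to CloseWithReason is stored through Go's cancel-cause mechanism (Go 1.20+), and ConnectionCause recovers it with context.Cause. A standalone sketch of that mechanism, independent of this codebase:

package main

import (
    "context"
    "errors"
    "fmt"
)

func main() {
    ctx, cancel := context.WithCancelCause(context.Background())
    cancel(errors.New("ping failed: broken pipe")) // the stored reason
    fmt.Println(ctx.Err())          // context.Canceled
    fmt.Println(context.Cause(ctx)) // ping failed: broken pipe
}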
func (r *Client) close(reason error) error {
r.closeMutex.Lock()
@@ -609,6 +642,10 @@ func (r *Client) close(reason error) error {
if r.connectionContextCancel == nil {
return fmt.Errorf("relay already closed")
}
log.T.F(
"WS.Client: closing connection to %s: reason=%v lastErr=%v", r.URL,
reason, r.ConnectionError,
)
r.connectionContextCancel(reason)
r.connectionContextCancel = nil


@@ -8,6 +8,7 @@ import (
"sync/atomic"
"lol.mleku.dev/chk"
"lol.mleku.dev/log"
"next.orly.dev/pkg/encoders/envelopes/closeenvelope"
"next.orly.dev/pkg/encoders/envelopes/reqenvelope"
"next.orly.dev/pkg/encoders/event"
@@ -88,8 +89,14 @@ var (
)
func (sub *Subscription) start() {
// Wait for the context to be done instead of blocking immediately
// This allows the subscription to receive events before terminating
sub.live.Store(true)
// debug: log start of subscription goroutine
log.T.F("WS.Subscription.start: started id=%s", sub.GetID())
<-sub.Context.Done()
// the subscription ends once the context is canceled (if not already)
log.T.F("WS.Subscription.start: context done for id=%s", sub.GetID())
sub.unsub(errors.New("context done on start()")) // this will set sub.live to false
// take the lock so the Events channel cannot be closed while a send to it is still possible
sub.mu.Lock()
@@ -180,10 +187,18 @@ func (sub *Subscription) Fire() (err error) {
var reqb []byte
reqb = reqenvelope.NewFrom(sub.id, sub.Filters).Marshal(nil)
sub.live.Store(true)
log.T.F(
"WS.Subscription.Fire: sending REQ id=%s filters=%d bytes=%d",
sub.GetID(), len(*sub.Filters), len(reqb),
)
if err = <-sub.Client.Write(reqb); err != nil {
err = fmt.Errorf("failed to write: %w", err)
log.T.F(
"WS.Subscription.Fire: write failed id=%s: %v", sub.GetID(), err,
)
sub.cancel(err)
return
}
log.T.F("WS.Subscription.Fire: write ok id=%s", sub.GetID())
return
}
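For orientation, the frame Fire writes via reqenvelope is, per NIP-01, a JSON array of the form ["REQ", <subscription id>, <filter>...]. A hedged sketch of that wire shape using plain maps rather than this codebase's envelope types:

package main

import (
    "encoding/json"
    "fmt"
)

func main() {
    req := []any{"REQ", "sub-1", map[string]any{
        "kinds": []int{1},
        "limit": 10,
    }}
    b, _ := json.Marshal(req)
    fmt.Println(string(b)) // ["REQ","sub-1",{"kinds":[1],"limit":10}]
}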

pkg/spider/spider.go (new file, 372 lines)

@@ -0,0 +1,372 @@
package spider
import (
"context"
"strconv"
"strings"
"time"
"lol.mleku.dev/chk"
"lol.mleku.dev/log"
"next.orly.dev/app/config"
"next.orly.dev/pkg/acl"
"next.orly.dev/pkg/database"
"next.orly.dev/pkg/database/indexes/types"
"next.orly.dev/pkg/encoders/filter"
"next.orly.dev/pkg/encoders/kind"
"next.orly.dev/pkg/encoders/tag"
"next.orly.dev/pkg/encoders/timestamp"
"next.orly.dev/pkg/protocol/ws"
"next.orly.dev/pkg/utils/normalize"
)
const (
OneTimeSpiderSyncMarker = "spider_one_time_sync_completed"
SpiderLastScanMarker = "spider_last_scan_time"
)
type Spider struct {
db *database.D
cfg *config.C
ctx context.Context
cancel context.CancelFunc
}
func New(
db *database.D, cfg *config.C, ctx context.Context,
cancel context.CancelFunc,
) *Spider {
return &Spider{
db: db,
cfg: cfg,
ctx: ctx,
cancel: cancel,
}
}
// Start initializes the spider functionality based on configuration
func (s *Spider) Start() {
if s.cfg.SpiderMode != "follows" {
log.D.Ln("Spider mode is not set to 'follows', skipping spider functionality")
return
}
log.I.Ln("Starting spider in follow mode")
// Check if one-time sync has been completed
if !s.db.HasMarker(OneTimeSpiderSyncMarker) {
log.I.Ln("Performing one-time spider sync back one month")
go s.performOneTimeSync()
} else {
log.D.Ln("One-time spider sync already completed, skipping")
}
// Start periodic scanning
go s.startPeriodicScanning()
}
// performOneTimeSync performs the initial sync going back one month
func (s *Spider) performOneTimeSync() {
defer func() {
// Mark the one-time sync as completed
ts := strconv.FormatInt(time.Now().Unix(), 10)
if err := s.db.SetMarker(
OneTimeSpiderSyncMarker, []byte(ts),
); err != nil {
log.E.F("Failed to set one-time sync marker: %v", err)
} else {
log.I.Ln("One-time spider sync completed and marked")
}
}()
// Calculate the time one month ago
oneMonthAgo := time.Now().AddDate(0, -1, 0)
log.I.F("Starting one-time spider sync from %v", oneMonthAgo)
// Perform the sync over the one-month window
if err := s.performSync(oneMonthAgo, time.Now()); err != nil {
log.E.F("One-time spider sync failed: %v", err)
return
}
log.I.Ln("One-time spider sync completed successfully")
}
// startPeriodicScanning starts the regular scanning process
func (s *Spider) startPeriodicScanning() {
ticker := time.NewTicker(s.cfg.SpiderFrequency)
defer ticker.Stop()
log.I.F("Starting periodic spider scanning every %v", s.cfg.SpiderFrequency)
for {
select {
case <-s.ctx.Done():
log.D.Ln("Spider periodic scanning stopped due to context cancellation")
return
case <-ticker.C:
s.performPeriodicScan()
}
}
}
// performPeriodicScan scans a window twice the configured frequency (for
// example, the last two hours when the frequency is one hour), so consecutive
// scans overlap and a single missed tick leaves no gap.
func (s *Spider) performPeriodicScan() {
// Calculate the scanning window (double the frequency period)
scanWindow := s.cfg.SpiderFrequency * 2
scanStart := time.Now().Add(-scanWindow)
scanEnd := time.Now()
log.D.F(
"Performing periodic spider scan from %v to %v (window: %v)", scanStart,
scanEnd, scanWindow,
)
if err := s.performSync(scanStart, scanEnd); err != nil {
log.E.F("Periodic spider scan failed: %v", err)
return
}
// Update the last scan marker
ts := strconv.FormatInt(time.Now().Unix(), 10)
if err := s.db.SetMarker(
SpiderLastScanMarker, []byte(ts),
); err != nil {
log.E.F("Failed to update last scan marker: %v", err)
}
log.D.F("Periodic spider scan completed successfully")
}
// performSync performs the actual sync operation for the given time range
func (s *Spider) performSync(startTime, endTime time.Time) error {
log.D.F(
"Spider sync from %v to %v - starting implementation", startTime,
endTime,
)
// 1. Check ACL mode is set to "follows"
if s.cfg.ACLMode != "follows" {
log.D.F(
"Spider sync skipped - ACL mode is not 'follows' (current: %s)",
s.cfg.ACLMode,
)
return nil
}
// 2. Get the list of followed users from the ACL system
followedPubkeys, err := s.getFollowedPubkeys()
if err != nil {
return err
}
if len(followedPubkeys) == 0 {
log.D.Ln("Spider sync: no followed pubkeys found")
return nil
}
log.D.F("Spider sync: found %d followed pubkeys", len(followedPubkeys))
// 3. Discover relay lists from followed users
relayURLs, err := s.discoverRelays(followedPubkeys)
if err != nil {
return err
}
if len(relayURLs) == 0 {
log.W.Ln("Spider sync: no relays discovered from followed users")
return nil
}
log.I.F("Spider sync: discovered %d relay URLs", len(relayURLs))
// 4. Query each relay for events from followed pubkeys in the time range
eventsFound := 0
for _, relayURL := range relayURLs {
count, err := s.queryRelayForEvents(
relayURL, followedPubkeys, startTime, endTime,
)
if err != nil {
log.E.F("Spider sync: error querying relay %s: %v", relayURL, err)
continue
}
eventsFound += count
}
log.I.F(
"Spider sync completed: found %d new events from %d relays",
eventsFound, len(relayURLs),
)
return nil
}
// getFollowedPubkeys retrieves the list of followed pubkeys from the ACL system
func (s *Spider) getFollowedPubkeys() ([][]byte, error) {
// Access the ACL registry to get the current ACL instance
var followedPubkeys [][]byte
// Get all ACL instances and find the active one
for _, aclInstance := range acl.Registry.ACL {
if aclInstance.Type() == acl.Registry.Active.Load() {
// Cast to *Follows to access the follows field
if followsACL, ok := aclInstance.(*acl.Follows); ok {
followedPubkeys = followsACL.GetFollowedPubkeys()
break
}
}
}
return followedPubkeys, nil
}
// discoverRelays discovers relay URLs from kind 10002 events of followed users
func (s *Spider) discoverRelays(followedPubkeys [][]byte) ([]string, error) {
seen := make(map[string]struct{})
var urls []string
for _, pubkey := range followedPubkeys {
// Query for kind 10002 (RelayListMetadata) events from this pubkey
fl := &filter.F{
Authors: tag.NewFromAny(pubkey),
Kinds: kind.NewS(kind.New(kind.RelayListMetadata.K)),
}
idxs, err := database.GetIndexesFromFilter(fl)
if chk.E(err) {
continue
}
var sers types.Uint40s
for _, idx := range idxs {
serials, err := s.db.GetSerialsByRange(idx)
if chk.E(err) {
continue
}
sers = append(sers, serials...)
}
for _, ser := range sers {
ev, err := s.db.FetchEventBySerial(ser)
if chk.E(err) || ev == nil {
continue
}
// Extract relay URLs from 'r' tags
for _, v := range ev.Tags.GetAll([]byte("r")) {
u := string(v.Value())
n := string(normalize.URL(u))
if n == "" {
continue
}
if _, ok := seen[n]; ok {
continue
}
seen[n] = struct{}{}
urls = append(urls, n)
}
}
}
return urls, nil
}
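For reference, the tag shape discoverRelays consumes: per NIP-65, a kind 10002 event lists relay URLs in "r" tags, each optionally marked "read" or "write". A sketch with tags simplified to plain string slices:

package main

import "fmt"

func main() {
    tags := [][]string{
        {"r", "wss://relay.example.com"},         // read and write
        {"r", "wss://inbox.example.com", "read"}, // read only
    }
    for _, t := range tags {
        if len(t) >= 2 && t[0] == "r" {
            fmt.Println(t[1]) // candidate URL to normalize and dedupe
        }
    }
}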
// queryRelayForEvents connects to a relay and queries for events from followed pubkeys
func (s *Spider) queryRelayForEvents(
relayURL string, followedPubkeys [][]byte, startTime, endTime time.Time,
) (int, error) {
log.T.F("Spider sync: querying relay %s", relayURL)
// Connect to the relay with a timeout context
ctx, cancel := context.WithTimeout(s.ctx, 30*time.Second)
defer cancel()
client, err := ws.RelayConnect(ctx, relayURL)
if err != nil {
return 0, err
}
defer client.Close()
// Create filter for the time range and followed pubkeys
f := &filter.F{
Authors: tag.NewFromBytesSlice(followedPubkeys...),
Since: timestamp.FromUnix(startTime.Unix()),
Until: timestamp.FromUnix(endTime.Unix()),
Limit: func() *uint { l := uint(1000); return &l }(), // cap results to avoid overwhelming the relay
}
// Subscribe to get events
sub, err := client.Subscribe(ctx, filter.NewS(f))
if err != nil {
return 0, err
}
defer sub.Unsub()
eventsCount := 0
eventsSaved := 0
timeout := time.After(10 * time.Second) // Timeout for receiving events
for {
select {
case <-ctx.Done():
log.T.F(
"Spider sync: context done for relay %s, saved %d/%d events",
relayURL, eventsSaved, eventsCount,
)
return eventsSaved, nil
case <-timeout:
log.T.F(
"Spider sync: timeout for relay %s, saved %d/%d events",
relayURL, eventsSaved, eventsCount,
)
return eventsSaved, nil
case <-sub.EndOfStoredEvents:
log.T.F(
"Spider sync: end of stored events for relay %s, saved %d/%d events",
relayURL, eventsSaved, eventsCount,
)
return eventsSaved, nil
case ev := <-sub.Events:
if ev == nil {
continue
}
eventsCount++
// Verify the event signature
if ok, err := ev.Verify(); !ok || err != nil {
log.T.F(
"Spider sync: invalid event signature from relay %s",
relayURL,
)
ev.Free()
continue
}
// Save the event to the database
if _, _, err := s.db.SaveEvent(s.ctx, ev); err != nil {
if !strings.HasPrefix(err.Error(), "blocked:") {
log.T.F(
"Spider sync: error saving event from relay %s: %v",
relayURL, err,
)
}
// Event might already exist, which is fine for deduplication
} else {
eventsSaved++
if eventsSaved%10 == 0 {
log.T.F(
"Spider sync: saved %d events from relay %s",
eventsSaved, relayURL,
)
}
}
ev.Free()
}
}
}
// Stop stops the spider functionality
func (s *Spider) Stop() {
log.D.Ln("Stopping spider")
s.cancel()
}


@@ -15,3 +15,22 @@ func FastEqual[A constraints.Bytes, B constraints.Bytes](a A, b B) (same bool) {
}
return true
}
// FastCompare orders two byte-slice-like values by length first, then
// bytewise, returning -1, 0 or 1.
func FastCompare[A constraints.Bytes, B constraints.Bytes](
a A, b B,
) (diff int) {
// order by length first so unequal-length inputs do not compare as equal
if len(a) != len(b) {
if len(a) < len(b) {
return -1
}
return 1
}
ab := []byte(a)
bb := []byte(b)
for i, v := range ab {
if v != bb[i] {
if v > bb[i] {
return 1
}
return -1
}
}
return 0
}
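With the length ordering fixed, FastCompare yields a total order and can back a sort comparator. A hypothetical usage sketch, with the comparator body repeated inline so the example stands alone:

package main

import (
    "fmt"
    "sort"
)

func fastCompare(a, b []byte) int {
    if len(a) != len(b) {
        if len(a) < len(b) {
            return -1
        }
        return 1
    }
    for i := range a {
        if a[i] != b[i] {
            if a[i] > b[i] {
                return 1
            }
            return -1
        }
    }
    return 0
}

func main() {
    keys := [][]byte{[]byte("bb"), []byte("a"), []byte("ba")}
    sort.Slice(keys, func(i, j int) bool {
        return fastCompare(keys[i], keys[j]) < 0
    })
    fmt.Printf("%q\n", keys) // ["a" "ba" "bb"]: shorter first, then bytewise
}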


@@ -1 +1 @@
v0.4.4
v0.6.4


@@ -1,20 +0,0 @@
package main
import (
"github.com/pkg/profile"
"next.orly.dev/pkg/utils/interrupt"
)
func startProfiler(mode string) {
switch mode {
case "cpu":
prof := profile.Start(profile.CPUProfile)
interrupt.AddHandler(prof.Stop)
case "memory":
prof := profile.Start(profile.MemProfile)
interrupt.AddHandler(prof.Stop)
case "allocation":
prof := profile.Start(profile.MemProfileAllocs)
interrupt.AddHandler(prof.Stop)
}
}

scripts/benchmark.sh (new executable file, 169 lines)

@@ -0,0 +1,169 @@
#!/bin/bash
set -euo pipefail
# scripts/benchmark.sh - Run full benchmark suite on a relay at a configurable address
#
# Usage:
# ./scripts/benchmark.sh [relay_address] [relay_port]
#
# Example:
# ./scripts/benchmark.sh localhost 3334
# ./scripts/benchmark.sh nostr.example.com 8080
#
# If relay_address and relay_port are not provided, defaults to localhost:3334
SCRIPT_DIR="$(cd -- "$(dirname -- "${BASH_SOURCE[0]}")" && pwd)"
REPO_ROOT="$(cd -- "${SCRIPT_DIR}/.." && pwd)"
cd "$REPO_ROOT"
# Default values
RELAY_ADDRESS="${1:-localhost}"
RELAY_PORT="${2:-3334}"
RELAY_URL="ws://${RELAY_ADDRESS}:${RELAY_PORT}"
BENCHMARK_EVENTS="${BENCHMARK_EVENTS:-10000}"
BENCHMARK_WORKERS="${BENCHMARK_WORKERS:-8}"
BENCHMARK_DURATION="${BENCHMARK_DURATION:-60s}"
REPORTS_DIR="${REPORTS_DIR:-$REPO_ROOT/cmd/benchmark/reports}"
TIMESTAMP=$(date +"%Y%m%d_%H%M%S")
RUN_DIR="${REPORTS_DIR}/run_${TIMESTAMP}"
# Ensure the benchmark binary is built
BENCHMARK_BIN="${REPO_ROOT}/cmd/benchmark/benchmark"
if [[ ! -x "$BENCHMARK_BIN" ]]; then
echo "Building benchmark binary..."
go build -o "$BENCHMARK_BIN" "$REPO_ROOT/cmd/benchmark"
fi
# Create output directory
mkdir -p "${RUN_DIR}"
echo "=================================================="
echo "Nostr Relay Benchmark"
echo "=================================================="
echo "Timestamp: $(date)"
echo "Target Relay: ${RELAY_URL}"
echo "Events per test: ${BENCHMARK_EVENTS}"
echo "Concurrent workers: ${BENCHMARK_WORKERS}"
echo "Test duration: ${BENCHMARK_DURATION}"
echo "Output directory: ${RUN_DIR}"
echo "=================================================="
# Function to wait for relay to be ready
wait_for_relay() {
local url="$1"
local max_attempts=30
local attempt=0
echo "Waiting for relay to be ready at ${url}..."
while [ $attempt -lt $max_attempts ]; do
# Try to get HTTP status code with curl
local status=$(curl -s -o /dev/null -w "%{http_code}" --connect-timeout 5 --max-time 5 "http://${RELAY_ADDRESS}:${RELAY_PORT}" || echo 000)
case "$status" in
101|200|400|404|426)
echo "Relay is ready! (HTTP ${status})"
return 0
;;
esac
attempt=$((attempt + 1))
echo " Attempt ${attempt}/${max_attempts}: Relay not ready yet (HTTP ${status})..."
sleep 2
done
echo "ERROR: Relay failed to become ready after ${max_attempts} attempts"
return 1
}
# Function to run benchmark against the relay
run_benchmark() {
local output_file="${RUN_DIR}/benchmark_results.txt"
echo ""
echo "=================================================="
echo "Testing relay at ${RELAY_URL}"
echo "=================================================="
# Wait for relay to be ready
if ! wait_for_relay "${RELAY_ADDRESS}:${RELAY_PORT}"; then
echo "ERROR: Relay is not responding, aborting..."
echo "RELAY_URL: ${RELAY_URL}" > "${output_file}"
echo "STATUS: FAILED - Relay not responding" >> "${output_file}"
echo "ERROR: Connection failed" >> "${output_file}"
return 1
fi
# Run the benchmark
echo "Running benchmark against ${RELAY_URL}..."
# Create temporary directory for benchmark data
TEMP_DATA_DIR="/tmp/benchmark_${TIMESTAMP}"
mkdir -p "${TEMP_DATA_DIR}"
# Run benchmark and capture both stdout and stderr
if "${BENCHMARK_BIN}" \
-relay-url="${RELAY_URL}" \
-datadir="${TEMP_DATA_DIR}" \
-events="${BENCHMARK_EVENTS}" \
-workers="${BENCHMARK_WORKERS}" \
-duration="${BENCHMARK_DURATION}" \
# > "${output_file}"
2>&1; then
echo "✓ Benchmark completed successfully"
# Add relay identification to the report
echo "" >> "${output_file}"
echo "RELAY_URL: ${RELAY_URL}" >> "${output_file}"
echo "TEST_TIMESTAMP: $(date -Iseconds)" >> "${output_file}"
echo "BENCHMARK_CONFIG:" >> "${output_file}"
echo " Events: ${BENCHMARK_EVENTS}" >> "${output_file}"
echo " Workers: ${BENCHMARK_WORKERS}" >> "${output_file}"
echo " Duration: ${BENCHMARK_DURATION}" >> "${output_file}" else
echo "✗ Benchmark failed"
echo "" >> "${output_file}"
echo "RELAY_URL: ${RELAY_URL}" >> "${output_file}"
echo "STATUS: FAILED" >> "${output_file}"
echo "TEST_TIMESTAMP: $(date -Iseconds)" >> "${output_file}"
fi
# Clean up temporary data
rm -rf "${TEMP_DATA_DIR}"
return 0
}
# Main execution
echo "Starting relay benchmark..."
run_benchmark
# Display results
if [ -f "${RUN_DIR}/benchmark_results.txt" ]; then
echo ""
echo "=================================================="
echo "Benchmark Results Summary"
echo "=================================================="
# Extract key metrics from the benchmark report
if grep -q "STATUS: FAILED" "${RUN_DIR}/benchmark_results.txt"; then
echo "Status: FAILED"
grep "ERROR:" "${RUN_DIR}/benchmark_results.txt" | head -1 || echo "Error: Unknown failure"
else
echo "Status: COMPLETED"
# Extract performance metrics
grep "Events/sec:" "${RUN_DIR}/benchmark_results.txt" | head -3 || true
grep "Success Rate:" "${RUN_DIR}/benchmark_results.txt" | head -3 || true
grep "Avg Latency:" "${RUN_DIR}/benchmark_results.txt" | head -3 || true
grep "P95 Latency:" "${RUN_DIR}/benchmark_results.txt" | head -3 || true
grep "Memory:" "${RUN_DIR}/benchmark_results.txt" | head -3 || true
fi
echo ""
echo "Full results available in: ${RUN_DIR}/benchmark_results.txt"
fi
echo ""
echo "=================================================="
echo "Benchmark Completed!"
echo "=================================================="
echo "Results directory: ${RUN_DIR}"
echo "Benchmark finished at: $(date)"

scripts/update-embedded-web.sh (new executable file, 97 lines)

@@ -0,0 +1,97 @@
#!/usr/bin/env bash
# scripts/update-embedded-web.sh
# Build the embedded web UI and then install the Go binary.
#
# This script will:
# - Build the React app in app/web to app/web/dist using Bun (preferred),
# or fall back to npm/yarn/pnpm if Bun isn't available.
# - Run `go install` from the repository root so the binary picks up the new
# embedded assets.
#
# Usage:
# ./scripts/update-embedded-web.sh
#
# Requirements:
# - Go 1.18+ installed (for `go install` and go:embed support)
# - Bun (https://bun.sh) recommended; alternatively Node.js with npm/yarn/pnpm
#
set -euo pipefail
# Resolve repo root to allow running from anywhere
SCRIPT_DIR="$(cd -- "$(dirname -- "${BASH_SOURCE[0]}")" && pwd)"
REPO_ROOT="$(cd -- "${SCRIPT_DIR}/.." && pwd)"
WEB_DIR="${REPO_ROOT}/app/web"
log() { printf "[update-embedded-web] %s\n" "$*"; }
err() { printf "[update-embedded-web][ERROR] %s\n" "$*" >&2; }
if [[ ! -d "${WEB_DIR}" ]]; then
err "Expected web directory at ${WEB_DIR} not found."
exit 1
fi
# Choose a JS package runner
JS_RUNNER=""
if command -v bun >/dev/null 2>&1; then
JS_RUNNER="bun"
elif command -v npm >/dev/null 2>&1; then
JS_RUNNER="npm"
elif command -v yarn >/dev/null 2>&1; then
JS_RUNNER="yarn"
elif command -v pnpm >/dev/null 2>&1; then
JS_RUNNER="pnpm"
else
err "No JavaScript package manager found. Install Bun (recommended) or npm/yarn/pnpm."
exit 1
fi
log "Using JavaScript runner: ${JS_RUNNER}"
# Install dependencies and build the web app
log "Installing frontend dependencies..."
pushd "${WEB_DIR}" >/dev/null
case "${JS_RUNNER}" in
bun)
bun install
log "Building web app with Bun..."
bun run build
;;
npm)
npm ci || npm install
log "Building web app with npm..."
npm run build
;;
yarn)
yarn install --frozen-lockfile || yarn install
log "Building web app with yarn..."
yarn build
;;
pnpm)
pnpm install --frozen-lockfile || pnpm install
log "Building web app with pnpm..."
pnpm build
;;
*)
err "Unsupported JS runner: ${JS_RUNNER}"
exit 1
;;
esac
popd >/dev/null
# Verify the output directory expected by go:embed exists
DIST_DIR="${WEB_DIR}/dist"
if [[ ! -d "${DIST_DIR}" ]]; then
err "Build did not produce ${DIST_DIR}. Check your frontend build configuration."
exit 1
fi
log "Frontend build complete at ${DIST_DIR}."
# Install the Go binary so it embeds the latest files
log "Running 'go install' from repo root..."
pushd "${REPO_ROOT}" >/dev/null
GO111MODULE=on go install ./...
popd >/dev/null
log "Done. Your installed binary now includes the updated embedded web UI."

File diff suppressed because it is too large