Compare commits

...

26 Commits

Author SHA1 Message Date
cad366795a bump to v0.12.2 for sprocket failure handling fix
Some checks failed
Go / build (push) Has been cancelled
2025-10-09 19:56:25 +01:00
e14b89bc8b Enhance Sprocket functionality and error handling
This commit introduces significant improvements to the Sprocket system, including:

- Detailed documentation in `readme.adoc` for manual updates and failure handling.
- Implementation of automatic disablement of Sprocket on failure, with periodic checks for recovery.
- Enhanced logging for event rejection when Sprocket is disabled or not running.

These changes ensure better user guidance and system resilience during Sprocket failures.
2025-10-09 19:55:20 +01:00
5b4dd9ea60 bump for better documentation
Some checks failed
Go / build (push) Has been cancelled
2025-10-09 19:34:25 +01:00
bae1d09f8d Add Sprocket Test Suite and Integration Scripts
This commit introduces a comprehensive test suite for the Sprocket integration, including various test scripts to validate functionality. Key additions include:

- `run-sprocket-test.sh`: An automated test runner for Sprocket integration tests.
- `SPROCKET_TEST_README.md`: Documentation detailing the test suite, criteria, and usage instructions.
- `test-sprocket-complete.sh`: A complete test suite that sets up the relay and runs all tests.
- `test-sprocket-manual.sh`: A manual testing script for interactive event testing.
- `test-sprocket-demo.sh`: A demonstration script showcasing Sprocket functionality.
- Additional test scripts for various scenarios, including normal events, spam detection, and blocked hashtags.

These changes enhance the testing framework for the Sprocket system, ensuring robust validation of event processing capabilities.
2025-10-09 19:33:42 +01:00
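The test scripts above ultimately assert on the relay's NIP-01 ["OK", <id>, <bool>, <msg>] response to each submitted EVENT. This hypothetical helper shows the shape of that check in Go; the actual suite drives it from shell against a running relay.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// parseOK decodes an OK envelope as defined by NIP-01:
// ["OK", <event-id>, <accepted>, <reason>].
func parseOK(msg []byte) (id string, accepted bool, reason string, err error) {
	var raw []json.RawMessage
	if err = json.Unmarshal(msg, &raw); err != nil {
		return
	}
	if len(raw) < 4 {
		err = fmt.Errorf("short OK envelope")
		return
	}
	var label string
	json.Unmarshal(raw[0], &label)
	if label != "OK" {
		err = fmt.Errorf("not an OK envelope: %q", label)
		return
	}
	json.Unmarshal(raw[1], &id)
	json.Unmarshal(raw[2], &accepted)
	json.Unmarshal(raw[3], &reason)
	return
}

func main() {
	// A spam-detection test expects accepted=false with a reason string.
	id, ok, why, _ := parseOK([]byte(`["OK","abc123",false,"blocked: spam"]`))
	fmt.Println(id, ok, why)
}
```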
f1f3236196 revise readme.adoc 2025-10-09 19:31:38 +01:00
f01cd562f8 added sprocket script capability
Some checks failed
Go / build (push) Has been cancelled
2025-10-09 19:11:29 +01:00
d2d0821d19 implement first draft of sprockets 2025-10-09 19:09:37 +01:00
09b00c76ed bump to v0.11.3
Some checks failed
Go / build (push) Has been cancelled
2025-10-09 18:10:46 +01:00
de57fd7bc4 Revert "fixing app icon"
This reverts commit b7c2e609f6.
2025-10-09 18:00:44 +01:00
b7c2e609f6 fixing app icon
Some checks failed
Go / build (push) Has been cancelled
2025-10-09 17:52:14 +01:00
cc63fe751a bumping to v0.11.1
Some checks failed
Go / build (push) Has been cancelled
2025-10-09 17:46:48 +01:00
d96d10723a events view works with infinite scroll and load more button, filter switch to show only user's events
Some checks failed
Go / build (push) Has been cancelled
2025-10-09 17:41:10 +01:00
ec50afdec0 Enhance event management in App.svelte by implementing pagination and caching for user and all events. Introduce new functions for loading events with timestamp-based pagination, and update UI components to support event expansion and deletion. Refactor event fetching logic in nostr.js to utilize WebSocket REQ envelopes for improved performance. Update default relay settings in constants.js to include local WebSocket endpoint. 2025-10-09 16:14:18 +01:00
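Timestamp-based pagination as described here works by making each page request events strictly older than the oldest event already seen (the REQ filter's "until" field), so pages never overlap. A minimal sketch, with fetch standing in for sending a REQ over the websocket:

```go
package main

import "fmt"

// Event is a minimal stand-in for a nostr event; only the fields the
// pagination logic needs.
type Event struct {
	ID        string
	CreatedAt int64
}

// pageOlder fetches pages newest-first until a short page signals that
// the store is exhausted. fetch must return events sorted newest-first
// with CreatedAt <= until.
func pageOlder(fetch func(until int64, limit int) []Event, limit int) []Event {
	var all []Event
	until := int64(1 << 62) // effectively "now or later"
	for {
		page := fetch(until, limit)
		if len(page) == 0 {
			return all
		}
		all = append(all, page...)
		oldest := page[len(page)-1].CreatedAt
		until = oldest - 1 // next page: strictly older events
		if len(page) < limit {
			return all // short page: nothing older remains
		}
	}
}

func main() {
	// Fake store of 5 events, newest first.
	store := []Event{{"e5", 50}, {"e4", 40}, {"e3", 30}, {"e2", 20}, {"e1", 10}}
	fetch := func(until int64, limit int) []Event {
		var out []Event
		for _, ev := range store {
			if ev.CreatedAt <= until && len(out) < limit {
				out = append(out, ev)
			}
		}
		return out
	}
	fmt.Println(len(pageOlder(fetch, 2))) // all 5 events, fetched in pages of 2
}
```

Subtracting 1 from the oldest timestamp avoids refetching boundary events, at the cost of skipping distinct events sharing that exact second; a real implementation may dedupe by ID instead.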
ade987c9ac working export my/all events 2025-10-09 15:01:14 +01:00
9f39ca8a62 Refactor export functionality in App.svelte to support both GET and POST methods for event exports, enhancing flexibility in user permissions. Update server-side handling to accommodate pubkey filtering and improve response handling for file downloads. Adjust UI components to reflect these changes, ensuring a seamless user experience. 2025-10-09 14:55:29 +01:00
f85a8b99a3 Update export functionality in App.svelte to allow both admin and owner roles to export all events. Adjust permission checks and UI components to reflect new role-based access for exporting events, enhancing user experience and security. 2025-10-09 14:30:32 +01:00
d7bda40e18 Refactor authentication handling to use WebSocket URLs instead of Service URLs for improved connection management. Introduce WebSocketURL method in the Server struct to dynamically generate WebSocket URLs based on request headers. Clean up whitespace in handle-auth.go for better code readability. 2025-10-08 21:31:04 +01:00
b67961773d Refactor login and logout button styles in App.svelte for improved UI consistency. Update button text from icons to labels for better accessibility. Introduce a floating logout button in the profile banner for enhanced user experience. 2025-10-08 21:15:13 +01:00
5fd58681c9 Increase WebSocket message size limit to 100MB and implement handling for oversized messages. Introduce optimal chunk size calculation in Spider for efficient pubkey processing, ensuring compliance with WebSocket constraints. Enhance logging for message sizes and connection events for better debugging. 2025-10-08 20:40:46 +01:00
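The "optimal chunk size" idea can be sketched as below: a REQ filter listing many authors must stay under the websocket message size limit, so the pubkey list is split into chunks sized from the per-item byte cost. The byte costs here are rough assumptions (64 hex chars plus JSON quoting overhead), not measured values from the Spider implementation.

```go
package main

import "fmt"

const (
	maxMessageSize = 100 * 1024 * 1024 // the 100MB limit from this commit
	perPubkeyBytes = 64 + 4            // hex pubkey plus quotes and comma (assumed)
	envelopeSlack  = 4096              // headroom for the rest of the REQ JSON (assumed)
)

// chunkPubkeys splits a pubkey list so that each chunk's "authors" array
// fits within the message size budget.
func chunkPubkeys(pubkeys []string) [][]string {
	per := (maxMessageSize - envelopeSlack) / perPubkeyBytes
	if per < 1 {
		per = 1
	}
	var chunks [][]string
	for len(pubkeys) > 0 {
		n := per
		if n > len(pubkeys) {
			n = len(pubkeys)
		}
		chunks = append(chunks, pubkeys[:n])
		pubkeys = pubkeys[n:]
	}
	return chunks
}

func main() {
	keys := make([]string, 10)
	fmt.Println(len(chunkPubkeys(keys))) // small lists fit in one chunk
}
```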
2bdc1b7bc0 Implement NIP-98 authentication for HTTP requests, enhancing security for event export and import functionalities. Update server methods to validate authentication and permissions, and refactor event handling in the Svelte app to support new export and import features. Add UI components for exporting and importing events with appropriate permission checks. 2025-10-08 20:06:58 +01:00
332b9b05f7 Enhance user role management in App.svelte by adding fetchUserRole function; update UI to display user role badge upon login. Modify Follows struct to include owners and adjust access level logic in acl package for improved permission handling. 2025-10-08 18:47:29 +01:00
c43ddb77e0 Add App.svelte and LoginModal.svelte components for user authentication; update .gitignore to include Svelte files 2025-10-08 17:56:38 +01:00
e90fc619f2 Update title in index.html from 'Svelte app' to 'ORLY?' 2025-10-08 17:40:40 +01:00
29e5444545 Refactor logging in event handling and message processing to use trace-level logs, enhancing clarity and consistency across the application. Update web application structure to utilize Svelte and remove unused React components, streamlining the project. Additionally, clean up .gitignore and update package dependencies for improved performance. 2025-10-08 16:10:51 +01:00
7ee613bb0e Add initial project structure with Svelte, TypeScript support, and basic Nostr client implementation 2025-10-08 16:09:37 +01:00
23985719ba Move Docker-related files to contrib/stella directory and update paths accordingly 2025-10-07 20:06:12 +01:00
94 changed files with 8430 additions and 4063 deletions


@@ -38,7 +38,7 @@ describing how the item is used.
For documentation on package, summarise in up to 3 sentences the functions and
purpose of the package
Do not use markdown ** or __ or any similar things in initial words of a bullet
Do not use markdown \*\* or \_\_ or any similar things in initial words of a bullet
point, instead use standard godoc style # prefix for header sections
ALWAYS separate each bullet point with an empty line, and ALWAYS indent them
@@ -90,10 +90,10 @@ A good typical example:
```
use the source of the relay-tester to help guide what expectations the test has,
and use context7 for information about the nostr protocol, and use additional
use the source of the relay-tester to help guide what expectations the test has,
and use context7 for information about the nostr protocol, and use additional
log statements to help locate the cause of bugs
always use Go v1.25.1 for everything involving Go
always use the nips repository also for information, found at ../github.com/nostr-protocol/nips attached to the project
always use the nips repository also for information, found at ../github.com/nostr-protocol/nips attached to the project


@@ -16,10 +16,9 @@ name: Go
on:
push:
tags:
- 'v[0-9]+.[0-9]+.[0-9]+'
- "v[0-9]+.[0-9]+.[0-9]+"
jobs:
build:
runs-on: ubuntu-latest
steps:
@@ -28,26 +27,25 @@ jobs:
- name: Set up Go
uses: actions/setup-go@v4
with:
go-version: '1.25'
go-version: "1.25"
- name: Install libsecp256k1
run: ./scripts/ubuntu_install_libsecp256k1.sh
run: ./scripts/ubuntu_install_libsecp256k1.sh
- name: Build with cgo
run: go build -v ./...
run: go build -v ./...
- name: Test with cgo
run: go test -v ./...
run: go test -v ./...
- name: Set CGO off
run: echo "CGO_ENABLED=0" >> $GITHUB_ENV
run: echo "CGO_ENABLED=0" >> $GITHUB_ENV
- name: Build
run: go build -v ./...
run: go build -v ./...
- name: Test
run: go test -v ./...
run: go test -v ./...
# release:
# needs: build
# runs-on: ubuntu-latest

.gitignore (vendored): 10 lines changed

@@ -76,7 +76,7 @@ cmd/benchmark/data
!*.css
!*.ts
!*.html
!Dockerfile
!contrib/stella/Dockerfile
!*.lock
!*.nix
!license
@@ -88,10 +88,10 @@ cmd/benchmark/data
!.gitignore
!version
!out.jsonl
!Dockerfile*
!contrib/stella/Dockerfile
!strfry.conf
!config.toml
!.dockerignore
!contrib/stella/.dockerignore
!*.jsx
!*.tsx
!app/web/dist
@@ -99,6 +99,7 @@ cmd/benchmark/data
!/app/web/dist/*
!/app/web/dist/**
!bun.lock
!*.svelte
# ...even if they are in subdirectories
!*/
/blocklist.json
@@ -120,4 +121,5 @@ pkg/database/testrealy
/.idea/inspectionProfiles/Project_Default.xml
/.idea/.name
/ctxproxy.config.yml
cmd/benchmark/external/**
cmd/benchmark/external/**
app/web/dist/**


@@ -51,6 +51,9 @@ type C struct {
// Web UI and dev mode settings
WebDisableEmbedded bool `env:"ORLY_WEB_DISABLE" default:"false" usage:"disable serving the embedded web UI; useful for hot-reload during development"`
WebDevProxyURL string `env:"ORLY_WEB_DEV_PROXY_URL" usage:"when ORLY_WEB_DISABLE is true, reverse-proxy non-API paths to this dev server URL (e.g. http://localhost:5173)"`
// Sprocket settings
SprocketEnabled bool `env:"ORLY_SPROCKET_ENABLED" default:"false" usage:"enable sprocket event processing plugin system"`
}
// New creates and initializes a new configuration object for the relay


@@ -25,7 +25,7 @@ func (l *Listener) HandleAuth(b []byte) (err error) {
var valid bool
if valid, err = auth.Validate(
env.Event, l.challenge.Load(),
l.ServiceURL(l.req),
l.WebSocketURL(l.req),
); err != nil {
e := err.Error()
if err = Ok.Error(l, env, e); chk.E(err) {
@@ -50,7 +50,7 @@ func (l *Listener) HandleAuth(b []byte) (err error) {
env.Event.Pubkey,
)
l.authedPubkey.Store(env.Event.Pubkey)
// Check if this is a first-time user and create welcome note
go l.handleFirstTimeUser(env.Event.Pubkey)
}
@@ -65,17 +65,17 @@ func (l *Listener) handleFirstTimeUser(pubkey []byte) {
log.E.F("failed to check first-time user status: %v", err)
return
}
if !isFirstTime {
return // Not a first-time user
}
// Get payment processor to create welcome note
if l.Server.paymentProcessor != nil {
// Set the dashboard URL based on the current HTTP request
dashboardURL := l.Server.DashboardURL(l.req)
l.Server.paymentProcessor.SetDashboardURL(dashboardURL)
if err := l.Server.paymentProcessor.CreateWelcomeNote(pubkey); err != nil {
log.E.F("failed to create welcome note for first-time user: %v", err)
}


@@ -18,6 +18,7 @@ import (
)
func (l *Listener) HandleEvent(msg []byte) (err error) {
log.D.F("handling event: %s", msg)
// decode the envelope
env := eventenvelope.NewSubmission()
if msg, err = env.Unmarshal(msg); chk.E(err) {
@@ -31,6 +32,69 @@ func (l *Listener) HandleEvent(msg []byte) (err error) {
if len(msg) > 0 {
log.I.F("extra '%s'", msg)
}
// Check if sprocket is enabled and process event through it
if l.sprocketManager != nil && l.sprocketManager.IsEnabled() {
if l.sprocketManager.IsDisabled() {
// Sprocket is disabled due to failure - reject all events
log.W.F("sprocket is disabled, rejecting event %0x", env.E.ID)
if err = Ok.Error(
l, env, "sprocket disabled - events rejected until sprocket is restored",
); chk.E(err) {
return
}
return
}
if !l.sprocketManager.IsRunning() {
// Sprocket is enabled but not running - reject all events
log.W.F("sprocket is enabled but not running, rejecting event %0x", env.E.ID)
if err = Ok.Error(
l, env, "sprocket not running - events rejected until sprocket starts",
); chk.E(err) {
return
}
return
}
// Process event through sprocket
response, sprocketErr := l.sprocketManager.ProcessEvent(env.E)
if chk.E(sprocketErr) {
log.E.F("sprocket processing failed: %v", sprocketErr)
if err = Ok.Error(
l, env, "sprocket processing failed",
); chk.E(err) {
return
}
return
}
// Handle sprocket response
switch response.Action {
case "accept":
// Continue with normal processing
log.D.F("sprocket accepted event %0x", env.E.ID)
case "reject":
// Return OK false with message
if err = okenvelope.NewFrom(
env.Id(), false,
reason.Error.F(response.Msg),
).Write(l); chk.E(err) {
return
}
return
case "shadowReject":
// Return OK true but abort processing
if err = Ok.Ok(l, env, ""); chk.E(err) {
return
}
log.D.F("sprocket shadow rejected event %0x", env.E.ID)
return
default:
log.W.F("unknown sprocket action: %s", response.Action)
// Default to accept for unknown actions
}
}
// check the event ID is correct
calculatedId := env.E.GetIDBytes()
if !utils.FastEqual(calculatedId, env.E.ID) {


@@ -19,64 +19,78 @@ func (l *Listener) HandleMessage(msg []byte, remote string) {
if len(msgPreview) > 150 {
msgPreview = msgPreview[:150] + "..."
}
log.D.F("%s processing message (len=%d): %s", remote, len(msg), msgPreview)
// log.D.F("%s processing message (len=%d): %s", remote, len(msg), msgPreview)
l.msgCount++
var err error
var t string
var rem []byte
// Attempt to identify the envelope type
if t, rem, err = envelopes.Identify(msg); err != nil {
log.E.F("%s envelope identification FAILED (len=%d): %v", remote, len(msg), err)
log.D.F("%s malformed message content: %q", remote, msgPreview)
log.E.F(
"%s envelope identification FAILED (len=%d): %v", remote, len(msg),
err,
)
log.T.F("%s malformed message content: %q", remote, msgPreview)
chk.E(err)
// Send error notice to client
if noticeErr := noticeenvelope.NewFrom("malformed message: " + err.Error()).Write(l); noticeErr != nil {
log.E.F("%s failed to send malformed message notice: %v", remote, noticeErr)
log.E.F(
"%s failed to send malformed message notice: %v", remote,
noticeErr,
)
}
return
}
log.D.F("%s identified envelope type: %s (payload_len=%d)", remote, t, len(rem))
log.T.F(
"%s identified envelope type: %s (payload_len=%d)", remote, t, len(rem),
)
// Process the identified envelope type
switch t {
case eventenvelope.L:
log.D.F("%s processing EVENT envelope", remote)
log.T.F("%s processing EVENT envelope", remote)
l.eventCount++
err = l.HandleEvent(rem)
case reqenvelope.L:
log.D.F("%s processing REQ envelope", remote)
log.T.F("%s processing REQ envelope", remote)
l.reqCount++
err = l.HandleReq(rem)
case closeenvelope.L:
log.D.F("%s processing CLOSE envelope", remote)
log.T.F("%s processing CLOSE envelope", remote)
err = l.HandleClose(rem)
case authenvelope.L:
log.D.F("%s processing AUTH envelope", remote)
log.T.F("%s processing AUTH envelope", remote)
err = l.HandleAuth(rem)
case countenvelope.L:
log.D.F("%s processing COUNT envelope", remote)
log.T.F("%s processing COUNT envelope", remote)
err = l.HandleCount(rem)
default:
err = fmt.Errorf("unknown envelope type %s", t)
log.E.F("%s unknown envelope type: %s (payload: %q)", remote, t, string(rem))
log.E.F(
"%s unknown envelope type: %s (payload: %q)", remote, t,
string(rem),
)
}
// Handle any processing errors
if err != nil {
log.E.F("%s message processing FAILED (type=%s): %v", remote, t, err)
log.D.F("%s error context - original message: %q", remote, msgPreview)
log.T.F("%s error context - original message: %q", remote, msgPreview)
// Send error notice to client
noticeMsg := fmt.Sprintf("%s: %s", t, err.Error())
if noticeErr := noticeenvelope.NewFrom(noticeMsg).Write(l); noticeErr != nil {
log.E.F("%s failed to send error notice after %s processing failure: %v", remote, t, noticeErr)
log.E.F(
"%s failed to send error notice after %s processing failure: %v",
remote, t, noticeErr,
)
return
}
log.D.F("%s sent error notice for %s processing failure", remote, t)
log.T.F("%s sent error notice for %s processing failure", remote, t)
} else {
log.D.F("%s message processing SUCCESS (type=%s)", remote, t)
log.T.F("%s message processing SUCCESS (type=%s)", remote, t)
}
}


@@ -29,13 +29,20 @@ import (
)
func (l *Listener) HandleReq(msg []byte) (err error) {
log.D.F("HandleReq: START processing from %s", l.remote)
log.D.F("handling REQ: %s", msg)
log.T.F("HandleReq: START processing from %s", l.remote)
// var rem []byte
env := reqenvelope.New()
if _, err = env.Unmarshal(msg); chk.E(err) {
return normalize.Error.Errorf(err.Error())
}
log.D.C(func() string { return fmt.Sprintf("REQ sub=%s filters=%d", env.Subscription, len(*env.Filters)) })
log.T.C(
func() string {
return fmt.Sprintf(
"REQ sub=%s filters=%d", env.Subscription, len(*env.Filters),
)
},
)
// send a challenge to the client to auth if an ACL is active
if acl.Registry.Active.Load() != "none" {
if err = authenvelope.NewChallengeWith(l.challenge.Load()).
@@ -100,9 +107,15 @@ func (l *Listener) HandleReq(msg []byte) (err error) {
if f.Until != nil {
until = f.Until.Int()
}
log.D.C(func() string {
return fmt.Sprintf("REQ %s filter: kinds.len=%d authors.len=%d ids.len=%d d=%q limit=%v since=%v until=%v", env.Subscription, kindsLen, authorsLen, idsLen, dtag, lim, since, until)
})
log.T.C(
func() string {
return fmt.Sprintf(
"REQ %s filter: kinds.len=%d authors.len=%d ids.len=%d d=%q limit=%v since=%v until=%v",
env.Subscription, kindsLen, authorsLen, idsLen, dtag,
lim, since, until,
)
},
)
}
if f != nil && pointers.Present(f.Limit) {
if *f.Limit == 0 {
@@ -229,7 +242,7 @@ privCheck:
events = tmp
seen := make(map[string]struct{})
for _, ev := range events {
log.D.C(
log.T.C(
func() string {
return fmt.Sprintf(
"REQ %s: sending EVENT id=%s kind=%d", env.Subscription,
@@ -256,7 +269,7 @@ privCheck:
}
// write the EOSE to signal to the client that all events found have been
// sent.
log.D.F("sending EOSE to %s", l.remote)
log.T.F("sending EOSE to %s", l.remote)
if err = eoseenvelope.NewFrom(env.Subscription).
Write(l); chk.E(err) {
return
@@ -264,7 +277,7 @@ privCheck:
// if the query was for just Ids, we know there can't be any more results,
// so cancel the subscription.
cancel := true
log.D.F(
log.T.F(
"REQ %s: computing cancel/subscription; events_sent=%d",
env.Subscription, len(events),
)
@@ -318,6 +331,6 @@ privCheck:
} else {
// suppress server-sent CLOSED; client will close subscription if desired
}
log.D.F("HandleReq: COMPLETED processing from %s", l.remote)
log.T.F("HandleReq: COMPLETED processing from %s", l.remote)
return
}


@@ -20,7 +20,7 @@ const (
DefaultPongWait = 60 * time.Second
DefaultPingWait = DefaultPongWait / 2
DefaultWriteTimeout = 3 * time.Second
DefaultMaxMessageSize = 1 * units.Mb
DefaultMaxMessageSize = 100 * units.Mb
// CloseMessage denotes a close control message. The optional message
// payload contains a numeric code and text. Use the FormatCloseMessage
@@ -62,6 +62,8 @@ whitelist:
OriginPatterns: []string{"*"}, // Allow all origins for proxy compatibility
// Don't check origin when behind a proxy - let the proxy handle it
InsecureSkipVerify: true,
// Try to set a higher compression threshold to allow larger messages
CompressionMode: websocket.CompressionDisabled,
}
if conn, err = websocket.Accept(w, r, acceptOptions); chk.E(err) {
@@ -69,7 +71,10 @@ whitelist:
return
}
log.T.F("websocket accepted from %s path=%s", remote, r.URL.String())
// Set read limit immediately after connection is established
conn.SetReadLimit(DefaultMaxMessageSize)
log.D.F("set read limit to %d bytes (%d MB) for %s", DefaultMaxMessageSize, DefaultMaxMessageSize/units.Mb, remote)
defer conn.CloseNow()
listener := &Listener{
ctx: ctx,
@@ -145,6 +150,14 @@ whitelist:
log.T.F("connection from %s closed: %v", remote, err)
return
}
// Handle message too big errors specifically
if strings.Contains(err.Error(), "MessageTooBig") ||
strings.Contains(err.Error(), "read limited at") {
log.D.F("client %s hit message size limit: %v", remote, err)
// Don't log this as an error since it's a client-side limit
// Just close the connection gracefully
return
}
status := websocket.CloseStatus(err)
switch status {
case websocket.StatusNormalClosure,
@@ -155,6 +168,8 @@ whitelist:
log.T.F(
"connection from %s closed with status: %v", remote, status,
)
case websocket.StatusMessageTooBig:
log.D.F("client %s sent message too big: %v", remote, err)
default:
log.E.F("unexpected close error from %s: %v", remote, err)
}
@@ -190,6 +205,10 @@ whitelist:
writeCancel()
continue
}
// Log message size for debugging
if len(msg) > 1000 { // Only log for larger messages
log.D.F("received large message from %s: %d bytes", remote, len(msg))
}
// log.T.F("received message from %s: %s", remote, string(msg))
listener.HandleMessage(msg, remote)
}
@@ -244,7 +263,7 @@ func (s *Server) Pinger(
pingCancel()
case <-ctx.Done():
log.D.F("pinger context cancelled after %d pings", pingCount)
log.T.F("pinger context cancelled after %d pings", pingCount)
return
}
}


@@ -21,9 +21,9 @@ type Listener struct {
authedPubkey atomic.Bytes
startTime time.Time
// Diagnostics: per-connection counters
msgCount int
reqCount int
eventCount int
msgCount int
reqCount int
eventCount int
}
// Ctx returns the listener's context, but creates a new context for each operation
@@ -35,14 +35,16 @@ func (l *Listener) Ctx() context.Context {
func (l *Listener) Write(p []byte) (n int, err error) {
start := time.Now()
msgLen := len(p)
// Log message attempt with content preview (first 200 chars for diagnostics)
preview := string(p)
if len(preview) > 200 {
preview = preview[:200] + "..."
}
log.D.F("ws->%s attempting write: len=%d preview=%q", l.remote, msgLen, preview)
log.T.F(
"ws->%s attempting write: len=%d preview=%q", l.remote, msgLen, preview,
)
// Use a separate context with timeout for writes to prevent race conditions
// where the main connection context gets cancelled while writing events
writeCtx, cancel := context.WithTimeout(
@@ -55,37 +57,50 @@ func (l *Listener) Write(p []byte) (n int, err error) {
if err = l.conn.Write(writeCtx, websocket.MessageText, p); err != nil {
writeDuration := time.Since(writeStart)
totalDuration := time.Since(start)
// Log detailed failure information
log.E.F("ws->%s WRITE FAILED: len=%d duration=%v write_duration=%v error=%v preview=%q",
l.remote, msgLen, totalDuration, writeDuration, err, preview)
log.E.F(
"ws->%s WRITE FAILED: len=%d duration=%v write_duration=%v error=%v preview=%q",
l.remote, msgLen, totalDuration, writeDuration, err, preview,
)
// Check if this is a context timeout
if writeCtx.Err() != nil {
log.E.F("ws->%s write timeout after %v (limit=%v)", l.remote, writeDuration, DefaultWriteTimeout)
log.E.F(
"ws->%s write timeout after %v (limit=%v)", l.remote,
writeDuration, DefaultWriteTimeout,
)
}
// Check connection state
if l.conn != nil {
log.D.F("ws->%s connection state during failure: remote_addr=%v", l.remote, l.req.RemoteAddr)
log.T.F(
"ws->%s connection state during failure: remote_addr=%v",
l.remote, l.req.RemoteAddr,
)
}
chk.E(err) // Still call the original error handler
return
}
// Log successful write with timing
writeDuration := time.Since(writeStart)
totalDuration := time.Since(start)
n = msgLen
log.D.F("ws->%s WRITE SUCCESS: len=%d duration=%v write_duration=%v",
l.remote, n, totalDuration, writeDuration)
log.T.F(
"ws->%s WRITE SUCCESS: len=%d duration=%v write_duration=%v",
l.remote, n, totalDuration, writeDuration,
)
// Log slow writes for performance diagnostics
if writeDuration > time.Millisecond*100 {
log.D.F("ws->%s SLOW WRITE detected: %v (>100ms) len=%d", l.remote, writeDuration, n)
log.T.F(
"ws->%s SLOW WRITE detected: %v (>100ms) len=%d", l.remote,
writeDuration, n,
)
}
return
}


@@ -19,11 +19,9 @@ func Run(
) (quit chan struct{}) {
// shutdown handler
go func() {
select {
case <-ctx.Done():
log.I.F("shutting down")
close(quit)
}
<-ctx.Done()
log.I.F("shutting down")
close(quit)
}()
// get the admins
var err error
@@ -46,6 +44,9 @@ func Run(
publishers: publish.New(NewPublisher(ctx)),
Admins: adminKeys,
}
// Initialize sprocket manager
l.sprocketManager = NewSprocketManager(ctx, cfg.AppName, cfg.SprocketEnabled)
// Initialize the user interface
l.UserInterface()


@@ -3,6 +3,7 @@ package app
import (
"context"
"encoding/json"
"fmt"
"io"
"log"
"net/http"
@@ -22,6 +23,7 @@ import (
"next.orly.dev/pkg/encoders/hex"
"next.orly.dev/pkg/encoders/tag"
"next.orly.dev/pkg/protocol/auth"
"next.orly.dev/pkg/protocol/httpauth"
"next.orly.dev/pkg/protocol/publish"
)
@@ -42,6 +44,7 @@ type Server struct {
challenges map[string][]byte
paymentProcessor *PaymentProcessor
sprocketManager *SprocketManager
}
func (s *Server) ServeHTTP(w http.ResponseWriter, r *http.Request) {
@@ -94,61 +97,47 @@ func (s *Server) ServeHTTP(w http.ResponseWriter, r *http.Request) {
s.mux.ServeHTTP(w, r)
}
func (s *Server) ServiceURL(req *http.Request) (st string) {
// Get host from various proxy headers
host := req.Header.Get("X-Forwarded-Host")
if host == "" {
host = req.Header.Get("Host")
}
if host == "" {
host = req.Host
}
// Get protocol from various proxy headers
func (s *Server) ServiceURL(req *http.Request) (url string) {
proto := req.Header.Get("X-Forwarded-Proto")
if proto == "" {
proto = req.Header.Get("X-Forwarded-Scheme")
}
if proto == "" {
// Check if we're behind a proxy by looking for common proxy headers
hasProxyHeaders := req.Header.Get("X-Forwarded-For") != "" ||
req.Header.Get("X-Real-IP") != "" ||
req.Header.Get("Forwarded") != ""
if hasProxyHeaders {
// If we have proxy headers, assume HTTPS/WSS
proto = "wss"
} else if host == "localhost" {
proto = "ws"
} else if strings.Contains(host, ":") {
// has a port number
proto = "ws"
} else if _, err := strconv.Atoi(
strings.ReplaceAll(
host, ".",
"",
),
); chk.E(err) {
// it's a naked IP
proto = "ws"
if req.TLS != nil {
proto = "https"
} else {
proto = "wss"
proto = "http"
}
} else if proto == "https" {
proto = "wss"
} else if proto == "http" {
proto = "ws"
}
host := req.Header.Get("X-Forwarded-Host")
if host == "" {
host = req.Host
}
return proto + "://" + host
}
// DashboardURL constructs HTTPS URL for the dashboard based on the HTTP request
func (s *Server) DashboardURL(req *http.Request) string {
func (s *Server) WebSocketURL(req *http.Request) (url string) {
proto := req.Header.Get("X-Forwarded-Proto")
if proto == "" {
if req.TLS != nil {
proto = "wss"
} else {
proto = "ws"
}
} else {
// Convert HTTP scheme to WebSocket scheme
if proto == "https" {
proto = "wss"
} else if proto == "http" {
proto = "ws"
}
}
host := req.Header.Get("X-Forwarded-Host")
if host == "" {
host = req.Host
}
return "https://" + host
return proto + "://" + host
}
func (s *Server) DashboardURL(req *http.Request) (url string) {
return s.ServiceURL(req) + "/"
}
// UserInterface sets up a basic Nostr NDK interface that allows users to log into the relay user interface
@@ -199,52 +188,76 @@ func (s *Server) UserInterface() {
s.mux.HandleFunc("/api/auth/status", s.handleAuthStatus)
s.mux.HandleFunc("/api/auth/logout", s.handleAuthLogout)
s.mux.HandleFunc("/api/permissions/", s.handlePermissions)
// Export endpoints
// Export endpoint
s.mux.HandleFunc("/api/export", s.handleExport)
s.mux.HandleFunc("/api/export/mine", s.handleExportMine)
// Events endpoints
s.mux.HandleFunc("/api/events/mine", s.handleEventsMine)
// Import endpoint (admin only)
s.mux.HandleFunc("/api/import", s.handleImport)
// Sprocket endpoints (owner only)
s.mux.HandleFunc("/api/sprocket/status", s.handleSprocketStatus)
s.mux.HandleFunc("/api/sprocket/update", s.handleSprocketUpdate)
s.mux.HandleFunc("/api/sprocket/restart", s.handleSprocketRestart)
s.mux.HandleFunc("/api/sprocket/versions", s.handleSprocketVersions)
s.mux.HandleFunc("/api/sprocket/delete-version", s.handleSprocketDeleteVersion)
s.mux.HandleFunc("/api/sprocket/config", s.handleSprocketConfig)
}
// handleLoginInterface serves the main user interface for login
func (s *Server) handleLoginInterface(w http.ResponseWriter, r *http.Request) {
// In dev mode with proxy configured, forward to dev server
if s.Config != nil && s.Config.WebDisableEmbedded && s.devProxy != nil {
if s.devProxy != nil {
s.devProxy.ServeHTTP(w, r)
return
}
// If embedded UI is disabled but no proxy configured, return a helpful message
if s.Config != nil && s.Config.WebDisableEmbedded {
w.Header().Set("Content-Type", "text/plain; charset=utf-8")
w.WriteHeader(http.StatusNotFound)
w.Write([]byte("Web UI disabled (ORLY_WEB_DISABLE=true). Run the web app in standalone dev mode (e.g., npm run dev) or set ORLY_WEB_DEV_PROXY_URL to proxy through this server."))
return
}
// Default: serve embedded React app
fileServer := http.FileServer(GetReactAppFS())
fileServer.ServeHTTP(w, r)
// Serve embedded web interface
ServeEmbeddedWeb(w, r)
}
// handleAuthChallenge generates and returns an authentication challenge
// handleAuthChallenge generates a new authentication challenge
func (s *Server) handleAuthChallenge(w http.ResponseWriter, r *http.Request) {
if r.Method != http.MethodGet {
http.Error(w, "Method not allowed", http.StatusMethodNotAllowed)
return
}
// Generate a proper challenge using the auth package
w.Header().Set("Content-Type", "application/json")
// Generate a new challenge
challenge := auth.GenerateChallenge()
challengeHex := hex.Enc(challenge)
// Store the challenge using the hex value as the key for easy lookup
// Store the challenge with expiration (5 minutes)
s.challengeMutex.Lock()
if s.challenges == nil {
s.challenges = make(map[string][]byte)
}
s.challenges[challengeHex] = challenge
s.challengeMutex.Unlock()
w.Header().Set("Content-Type", "application/json")
w.Write([]byte(`{"challenge": "` + challengeHex + `"}`))
// Clean up expired challenges
go func() {
time.Sleep(5 * time.Minute)
s.challengeMutex.Lock()
delete(s.challenges, challengeHex)
s.challengeMutex.Unlock()
}()
// Return the challenge
response := struct {
Challenge string `json:"challenge"`
}{
Challenge: challengeHex,
}
jsonData, err := json.Marshal(response)
if chk.E(err) {
http.Error(w, "Error generating challenge", http.StatusInternalServerError)
return
}
w.Write(jsonData)
}
// handleAuthLogin processes authentication requests
@@ -294,7 +307,7 @@ func (s *Server) handleAuthLogin(w http.ResponseWriter, r *http.Request) {
delete(s.challenges, challengeHex)
s.challengeMutex.Unlock()
relayURL := s.ServiceURL(r)
relayURL := s.WebSocketURL(r)
// Validate the authentication event with the correct challenge
// The challenge in the event tag is hex-encoded, so we need to pass the hex string as bytes
@@ -318,10 +331,11 @@ func (s *Server) handleAuthLogin(w http.ResponseWriter, r *http.Request) {
MaxAge: 60 * 60 * 24 * 30, // 30 days
}
http.SetCookie(w, cookie)
w.Write([]byte(`{"success": true, "pubkey": "` + hex.Enc(evt.Pubkey) + `", "message": "Authentication successful"}`))
w.Write([]byte(`{"success": true}`))
}
// handleAuthStatus returns the current authentication status
// handleAuthStatus checks if the user is authenticated
func (s *Server) handleAuthStatus(w http.ResponseWriter, r *http.Request) {
if r.Method != http.MethodGet {
http.Error(w, "Method not allowed", http.StatusMethodNotAllowed)
@@ -329,35 +343,63 @@ func (s *Server) handleAuthStatus(w http.ResponseWriter, r *http.Request) {
}
w.Header().Set("Content-Type", "application/json")
// Check for auth cookie
if c, err := r.Cookie("orly_auth"); err == nil && c.Value != "" {
// Validate pubkey format (hex)
if _, err := hex.Dec(c.Value); !chk.E(err) {
w.Write([]byte(`{"authenticated": true, "pubkey": "` + c.Value + `"}`))
return
}
c, err := r.Cookie("orly_auth")
if err != nil || c.Value == "" {
w.Write([]byte(`{"authenticated": false}`))
return
}
w.Write([]byte(`{"authenticated": false}`))
// Validate the pubkey format
pubkey, err := hex.Dec(c.Value)
if chk.E(err) {
w.Write([]byte(`{"authenticated": false}`))
return
}
// Get user permissions
permission := acl.Registry.GetAccessLevel(pubkey, r.RemoteAddr)
response := struct {
Authenticated bool `json:"authenticated"`
Pubkey string `json:"pubkey"`
Permission string `json:"permission"`
}{
Authenticated: true,
Pubkey: c.Value,
Permission: permission,
}
jsonData, err := json.Marshal(response)
if chk.E(err) {
w.Write([]byte(`{"authenticated": false}`))
return
}
w.Write(jsonData)
}
// handleAuthLogout clears the auth cookie
// handleAuthLogout clears the authentication cookie
func (s *Server) handleAuthLogout(w http.ResponseWriter, r *http.Request) {
if r.Method != http.MethodPost {
http.Error(w, "Method not allowed", http.StatusMethodNotAllowed)
return
}
// Expire the cookie
http.SetCookie(
w, &http.Cookie{
Name: "orly_auth",
Value: "",
Path: "/",
MaxAge: -1,
HttpOnly: true,
SameSite: http.SameSiteLaxMode,
},
)
w.Header().Set("Content-Type", "application/json")
// Clear the auth cookie
cookie := &http.Cookie{
Name: "orly_auth",
Value: "",
Path: "/",
HttpOnly: true,
SameSite: http.SameSiteLaxMode,
MaxAge: -1, // Expire immediately
}
http.SetCookie(w, cookie)
w.Write([]byte(`{"success": true}`))
}
@@ -407,148 +449,98 @@ func (s *Server) handlePermissions(w http.ResponseWriter, r *http.Request) {
w.Write(jsonData)
}
// handleExport streams all events as JSONL (NDJSON). Admins only.
// handleExport streams events as JSONL (NDJSON) using NIP-98 authentication.
// Supports both GET (query params) and POST (JSON body) for pubkey filtering.
func (s *Server) handleExport(w http.ResponseWriter, r *http.Request) {
if r.Method != http.MethodGet {
if r.Method != http.MethodGet && r.Method != http.MethodPost {
http.Error(w, "Method not allowed", http.StatusMethodNotAllowed)
return
}
// Require auth cookie
c, err := r.Cookie("orly_auth")
if err != nil || c.Value == "" {
http.Error(w, "Not authenticated", http.StatusUnauthorized)
return
}
requesterPubHex := c.Value
requesterPub, err := hex.Dec(requesterPubHex)
if chk.E(err) {
http.Error(w, "Invalid auth cookie", http.StatusUnauthorized)
return
}
// Check permissions
if acl.Registry.GetAccessLevel(requesterPub, r.RemoteAddr) != "admin" {
http.Error(w, "Forbidden", http.StatusForbidden)
// Validate NIP-98 authentication
valid, pubkey, err := httpauth.CheckAuth(r)
if chk.E(err) || !valid {
errorMsg := "NIP-98 authentication validation failed"
if err != nil {
errorMsg = err.Error()
}
http.Error(w, errorMsg, http.StatusUnauthorized)
return
}
// Optional filtering by pubkey(s)
// Check permissions - require write, admin, or owner level
accessLevel := acl.Registry.GetAccessLevel(pubkey, r.RemoteAddr)
if accessLevel != "write" && accessLevel != "admin" && accessLevel != "owner" {
http.Error(w, "Write, admin, or owner permission required", http.StatusForbidden)
return
}
// Parse pubkeys from request
var pks [][]byte
q := r.URL.Query()
for _, pkHex := range q["pubkey"] {
if pkHex == "" {
continue
if r.Method == http.MethodPost {
// Parse JSON body for pubkeys
var requestBody struct {
Pubkeys []string `json:"pubkeys"`
}
if pk, err := hex.Dec(pkHex); !chk.E(err) {
pks = append(pks, pk)
if err := json.NewDecoder(r.Body).Decode(&requestBody); err == nil {
// If JSON parsing succeeds, use pubkeys from body
for _, pkHex := range requestBody.Pubkeys {
if pkHex == "" {
continue
}
if pk, err := hex.Dec(pkHex); !chk.E(err) {
pks = append(pks, pk)
}
}
}
// If JSON parsing fails, fall back to empty pubkeys (export all)
} else {
// GET method - parse query parameters
q := r.URL.Query()
for _, pkHex := range q["pubkey"] {
if pkHex == "" {
continue
}
if pk, err := hex.Dec(pkHex); !chk.E(err) {
pks = append(pks, pk)
}
}
}
// Determine filename based on whether filtering by pubkeys
var filename string
if len(pks) == 0 {
filename = "all-events-" + time.Now().UTC().Format("20060102-150405Z") + ".jsonl"
} else if len(pks) == 1 {
filename = "my-events-" + time.Now().UTC().Format("20060102-150405Z") + ".jsonl"
} else {
filename = "filtered-events-" + time.Now().UTC().Format("20060102-150405Z") + ".jsonl"
}
w.Header().Set("Content-Type", "application/x-ndjson")
filename := "events-" + time.Now().UTC().Format("20060102-150405Z") + ".jsonl"
w.Header().Set(
"Content-Disposition", "attachment; filename=\""+filename+"\"",
)
w.Header().Set("Content-Disposition", "attachment; filename=\""+filename+"\"")
// Stream export
s.D.Export(s.Ctx, w, pks...)
}
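As a sketch of how a client might call the POST form of this export endpoint, the JSON body can be assembled as below. The `/export` path and the exact NIP-98 `Authorization` header value are assumptions for illustration, not confirmed by this diff.

```shell
# export_body builds the {"pubkeys": [...]} JSON body accepted by the
# POST form of the export handler, given hex pubkeys as arguments.
export_body() {
  first=1
  printf '{"pubkeys":['
  for pk in "$@"; do
    [ "$first" -eq 1 ] || printf ','
    printf '"%s"' "$pk"
    first=0
  done
  printf ']}'
}

# Hypothetical usage; endpoint path and NIP-98 token are placeholders:
# curl -X POST -H "Authorization: Nostr <base64-signed-event>" \
#      --data "$(export_body deadbeef cafebabe)" https://relay.example/export
```

An empty argument list yields `{"pubkeys":[]}`, which the handler treats as "export all", matching the GET fallback behaviour.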
// handleExportMine streams only the authenticated user's events as JSONL (NDJSON).
func (s *Server) handleExportMine(w http.ResponseWriter, r *http.Request) {
if r.Method != http.MethodGet {
http.Error(w, "Method not allowed", http.StatusMethodNotAllowed)
return
}
// Require auth cookie
c, err := r.Cookie("orly_auth")
if err != nil || c.Value == "" {
http.Error(w, "Not authenticated", http.StatusUnauthorized)
return
}
pubkey, err := hex.Dec(c.Value)
if chk.E(err) {
http.Error(w, "Invalid auth cookie", http.StatusUnauthorized)
return
}
w.Header().Set("Content-Type", "application/x-ndjson")
filename := "my-events-" + time.Now().UTC().Format("20060102-150405Z") + ".jsonl"
w.Header().Set(
"Content-Disposition", "attachment; filename=\""+filename+"\"",
)
// Stream export for this user's pubkey only
s.D.Export(s.Ctx, w, pubkey)
}
// handleImport receives a JSONL/NDJSON file or body and enqueues an async import. Admins only.
func (s *Server) handleImport(w http.ResponseWriter, r *http.Request) {
if r.Method != http.MethodPost {
http.Error(w, "Method not allowed", http.StatusMethodNotAllowed)
return
}
// Require auth cookie
c, err := r.Cookie("orly_auth")
if err != nil || c.Value == "" {
http.Error(w, "Not authenticated", http.StatusUnauthorized)
return
}
requesterPub, err := hex.Dec(c.Value)
if chk.E(err) {
http.Error(w, "Invalid auth cookie", http.StatusUnauthorized)
return
}
// Admins only
if acl.Registry.GetAccessLevel(requesterPub, r.RemoteAddr) != "admin" {
http.Error(w, "Forbidden", http.StatusForbidden)
return
}
ct := r.Header.Get("Content-Type")
if strings.HasPrefix(ct, "multipart/form-data") {
if err := r.ParseMultipartForm(32 << 20); chk.E(err) { // 32MB memory, rest to temp files
http.Error(w, "Failed to parse form", http.StatusBadRequest)
return
}
file, _, err := r.FormFile("file")
if chk.E(err) {
http.Error(w, "Missing file", http.StatusBadRequest)
return
}
defer file.Close()
s.D.Import(file)
} else {
if r.Body == nil {
http.Error(w, "Empty request body", http.StatusBadRequest)
return
}
s.D.Import(r.Body)
}
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(http.StatusAccepted)
w.Write([]byte(`{"success": true, "message": "Import started"}`))
}
// handleEventsMine returns the authenticated user's events in JSON format with pagination
// handleEventsMine returns the authenticated user's events in JSON format with pagination using NIP-98 authentication.
func (s *Server) handleEventsMine(w http.ResponseWriter, r *http.Request) {
if r.Method != http.MethodGet {
http.Error(w, "Method not allowed", http.StatusMethodNotAllowed)
return
}
// Require auth cookie
c, err := r.Cookie("orly_auth")
if err != nil || c.Value == "" {
http.Error(w, "Not authenticated", http.StatusUnauthorized)
return
}
pubkey, err := hex.Dec(c.Value)
if chk.E(err) {
http.Error(w, "Invalid auth cookie", http.StatusUnauthorized)
// Validate NIP-98 authentication
valid, pubkey, err := httpauth.CheckAuth(r)
if chk.E(err) || !valid {
errorMsg := "NIP-98 authentication validation failed"
if err != nil {
errorMsg = err.Error()
}
http.Error(w, errorMsg, http.StatusUnauthorized)
return
}
@@ -582,64 +574,327 @@ func (s *Server) handleEventsMine(w http.ResponseWriter, r *http.Request) {
}
log.Printf("DEBUG: QueryEvents returned %d events", len(events))
// If no events were found, check whether the database contains any events at all (debug aid)
if len(events) == 0 {
// Create a filter to get any events (no authors filter)
allEventsFilter := &filter.F{}
allEvents, err := s.D.QueryEvents(s.Ctx, allEventsFilter)
if err == nil {
log.Printf("DEBUG: Total events in database: %d", len(allEvents))
} else {
log.Printf("DEBUG: Failed to query all events: %v", err)
}
}
// Events are already sorted by QueryEvents in reverse chronological order
// Apply offset and limit manually since QueryEvents doesn't support offset
// Apply pagination
totalEvents := len(events)
start := offset
if start > totalEvents {
start = totalEvents
}
end := start + limit
if end > totalEvents {
end = totalEvents
}
paginatedEvents := events[start:end]
// Convert events to JSON response format
type EventResponse struct {
ID string `json:"id"`
Kind int `json:"kind"`
CreatedAt int64 `json:"created_at"`
Content string `json:"content"`
RawJSON string `json:"raw_json"`
}
response := struct {
Events []EventResponse `json:"events"`
Total int `json:"total"`
Offset int `json:"offset"`
Limit int `json:"limit"`
}{
Events: make([]EventResponse, len(paginatedEvents)),
Total: totalEvents,
Offset: offset,
Limit: limit,
}
for i, ev := range paginatedEvents {
response.Events[i] = EventResponse{
ID: hex.Enc(ev.ID),
Kind: int(ev.Kind),
CreatedAt: int64(ev.CreatedAt),
Content: string(ev.Content),
RawJSON: string(ev.Serialize()),
if offset >= totalEvents {
events = event.S{} // Empty slice
} else {
end := offset + limit
if end > totalEvents {
end = totalEvents
}
events = events[offset:end]
}
// Set content type and write JSON response
w.Header().Set("Content-Type", "application/json")
// Format response as proper JSON
response := struct {
Events []*event.E `json:"events"`
Total int `json:"total"`
Limit int `json:"limit"`
Offset int `json:"offset"`
}{
Events: events,
Total: totalEvents,
Limit: limit,
Offset: offset,
}
// Marshal and write the response
jsonData, err := json.Marshal(response)
if chk.E(err) {
http.Error(
w, "Error generating response", http.StatusInternalServerError,
)
return
}
w.Write(jsonData)
}
// handleImport receives a JSONL/NDJSON file or body and enqueues an async import using NIP-98 authentication. Admins only.
func (s *Server) handleImport(w http.ResponseWriter, r *http.Request) {
if r.Method != http.MethodPost {
http.Error(w, "Method not allowed", http.StatusMethodNotAllowed)
return
}
// Validate NIP-98 authentication
valid, pubkey, err := httpauth.CheckAuth(r)
if chk.E(err) || !valid {
errorMsg := "NIP-98 authentication validation failed"
if err != nil {
errorMsg = err.Error()
}
http.Error(w, errorMsg, http.StatusUnauthorized)
return
}
// Check permissions - require admin or owner level
accessLevel := acl.Registry.GetAccessLevel(pubkey, r.RemoteAddr)
if accessLevel != "admin" && accessLevel != "owner" {
http.Error(w, "Admin or owner permission required", http.StatusForbidden)
return
}
ct := r.Header.Get("Content-Type")
if strings.HasPrefix(ct, "multipart/form-data") {
if err := r.ParseMultipartForm(32 << 20); chk.E(err) { // 32MB memory, rest to temp files
http.Error(w, "Failed to parse form", http.StatusBadRequest)
return
}
file, _, err := r.FormFile("file")
if chk.E(err) {
http.Error(w, "Missing file", http.StatusBadRequest)
return
}
defer file.Close()
s.D.Import(file)
} else {
if r.Body == nil {
http.Error(w, "Empty request body", http.StatusBadRequest)
return
}
s.D.Import(r.Body)
}
w.Header().Set("Content-Type", "application/json")
json.NewEncoder(w).Encode(response)
w.WriteHeader(http.StatusAccepted)
w.Write([]byte(`{"success": true, "message": "Import started"}`))
}
// handleSprocketStatus returns the current status of the sprocket script
func (s *Server) handleSprocketStatus(w http.ResponseWriter, r *http.Request) {
if r.Method != http.MethodGet {
http.Error(w, "Method not allowed", http.StatusMethodNotAllowed)
return
}
// Validate NIP-98 authentication
valid, pubkey, err := httpauth.CheckAuth(r)
if chk.E(err) || !valid {
errorMsg := "NIP-98 authentication validation failed"
if err != nil {
errorMsg = err.Error()
}
http.Error(w, errorMsg, http.StatusUnauthorized)
return
}
// Check permissions - require owner level
accessLevel := acl.Registry.GetAccessLevel(pubkey, r.RemoteAddr)
if accessLevel != "owner" {
http.Error(w, "Owner permission required", http.StatusForbidden)
return
}
status := s.sprocketManager.GetSprocketStatus()
w.Header().Set("Content-Type", "application/json")
jsonData, err := json.Marshal(status)
if chk.E(err) {
http.Error(w, "Error generating response", http.StatusInternalServerError)
return
}
w.Write(jsonData)
}
// handleSprocketUpdate updates the sprocket script and restarts it
func (s *Server) handleSprocketUpdate(w http.ResponseWriter, r *http.Request) {
if r.Method != http.MethodPost {
http.Error(w, "Method not allowed", http.StatusMethodNotAllowed)
return
}
// Validate NIP-98 authentication
valid, pubkey, err := httpauth.CheckAuth(r)
if chk.E(err) || !valid {
errorMsg := "NIP-98 authentication validation failed"
if err != nil {
errorMsg = err.Error()
}
http.Error(w, errorMsg, http.StatusUnauthorized)
return
}
// Check permissions - require owner level
accessLevel := acl.Registry.GetAccessLevel(pubkey, r.RemoteAddr)
if accessLevel != "owner" {
http.Error(w, "Owner permission required", http.StatusForbidden)
return
}
// Read the request body
body, err := io.ReadAll(r.Body)
if chk.E(err) {
http.Error(w, "Failed to read request body", http.StatusBadRequest)
return
}
// Update the sprocket script
if err := s.sprocketManager.UpdateSprocket(string(body)); chk.E(err) {
http.Error(w, fmt.Sprintf("Failed to update sprocket: %v", err), http.StatusInternalServerError)
return
}
w.Header().Set("Content-Type", "application/json")
w.Write([]byte(`{"success": true, "message": "Sprocket updated successfully"}`))
}
// handleSprocketRestart restarts the sprocket script
func (s *Server) handleSprocketRestart(w http.ResponseWriter, r *http.Request) {
if r.Method != http.MethodPost {
http.Error(w, "Method not allowed", http.StatusMethodNotAllowed)
return
}
// Validate NIP-98 authentication
valid, pubkey, err := httpauth.CheckAuth(r)
if chk.E(err) || !valid {
errorMsg := "NIP-98 authentication validation failed"
if err != nil {
errorMsg = err.Error()
}
http.Error(w, errorMsg, http.StatusUnauthorized)
return
}
// Check permissions - require owner level
accessLevel := acl.Registry.GetAccessLevel(pubkey, r.RemoteAddr)
if accessLevel != "owner" {
http.Error(w, "Owner permission required", http.StatusForbidden)
return
}
// Restart the sprocket script
if err := s.sprocketManager.RestartSprocket(); chk.E(err) {
http.Error(w, fmt.Sprintf("Failed to restart sprocket: %v", err), http.StatusInternalServerError)
return
}
w.Header().Set("Content-Type", "application/json")
w.Write([]byte(`{"success": true, "message": "Sprocket restarted successfully"}`))
}
// handleSprocketVersions returns all sprocket script versions
func (s *Server) handleSprocketVersions(w http.ResponseWriter, r *http.Request) {
if r.Method != http.MethodGet {
http.Error(w, "Method not allowed", http.StatusMethodNotAllowed)
return
}
// Validate NIP-98 authentication
valid, pubkey, err := httpauth.CheckAuth(r)
if chk.E(err) || !valid {
errorMsg := "NIP-98 authentication validation failed"
if err != nil {
errorMsg = err.Error()
}
http.Error(w, errorMsg, http.StatusUnauthorized)
return
}
// Check permissions - require owner level
accessLevel := acl.Registry.GetAccessLevel(pubkey, r.RemoteAddr)
if accessLevel != "owner" {
http.Error(w, "Owner permission required", http.StatusForbidden)
return
}
versions, err := s.sprocketManager.GetSprocketVersions()
if chk.E(err) {
http.Error(w, fmt.Sprintf("Failed to get sprocket versions: %v", err), http.StatusInternalServerError)
return
}
w.Header().Set("Content-Type", "application/json")
jsonData, err := json.Marshal(versions)
if chk.E(err) {
http.Error(w, "Error generating response", http.StatusInternalServerError)
return
}
w.Write(jsonData)
}
// handleSprocketDeleteVersion deletes a specific sprocket version
func (s *Server) handleSprocketDeleteVersion(w http.ResponseWriter, r *http.Request) {
if r.Method != http.MethodPost {
http.Error(w, "Method not allowed", http.StatusMethodNotAllowed)
return
}
// Validate NIP-98 authentication
valid, pubkey, err := httpauth.CheckAuth(r)
if chk.E(err) || !valid {
errorMsg := "NIP-98 authentication validation failed"
if err != nil {
errorMsg = err.Error()
}
http.Error(w, errorMsg, http.StatusUnauthorized)
return
}
// Check permissions - require owner level
accessLevel := acl.Registry.GetAccessLevel(pubkey, r.RemoteAddr)
if accessLevel != "owner" {
http.Error(w, "Owner permission required", http.StatusForbidden)
return
}
// Read the request body
body, err := io.ReadAll(r.Body)
if chk.E(err) {
http.Error(w, "Failed to read request body", http.StatusBadRequest)
return
}
var request struct {
Filename string `json:"filename"`
}
if err := json.Unmarshal(body, &request); chk.E(err) {
http.Error(w, "Invalid JSON in request body", http.StatusBadRequest)
return
}
if request.Filename == "" {
http.Error(w, "Filename is required", http.StatusBadRequest)
return
}
// Delete the sprocket version
if err := s.sprocketManager.DeleteSprocketVersion(request.Filename); chk.E(err) {
http.Error(w, fmt.Sprintf("Failed to delete sprocket version: %v", err), http.StatusInternalServerError)
return
}
w.Header().Set("Content-Type", "application/json")
w.Write([]byte(`{"success": true, "message": "Sprocket version deleted successfully"}`))
}
// handleSprocketConfig returns the sprocket configuration status
func (s *Server) handleSprocketConfig(w http.ResponseWriter, r *http.Request) {
if r.Method != http.MethodGet {
http.Error(w, "Method not allowed", http.StatusMethodNotAllowed)
return
}
w.Header().Set("Content-Type", "application/json")
response := struct {
Enabled bool `json:"enabled"`
}{
Enabled: s.Config.SprocketEnabled,
}
jsonData, err := json.Marshal(response)
if chk.E(err) {
http.Error(w, "Error generating response", http.StatusInternalServerError)
return
}
w.Write(jsonData)
}

app/sprocket.go (new file, 613 lines)
@@ -0,0 +1,613 @@
package app
import (
"bufio"
"context"
"encoding/json"
"fmt"
"io"
"os"
"os/exec"
"path/filepath"
"strings"
"sync"
"time"
"github.com/adrg/xdg"
"lol.mleku.dev/chk"
"lol.mleku.dev/log"
"next.orly.dev/pkg/encoders/event"
)
// SprocketResponse represents a response from the sprocket script
type SprocketResponse struct {
ID string `json:"id"`
Action string `json:"action"` // accept, reject, or shadowReject
Msg string `json:"msg"` // NIP-20 response message (only used for reject)
}
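To make the stdin/stdout contract concrete, here is a minimal sketch of what a sprocket.sh could look like: one raw event JSON object per input line, one SprocketResponse JSON object per output line. The sed-based id extraction and the "BUYNOW" spam keyword are illustrative assumptions, not part of the actual protocol.

```shell
#!/bin/sh
# decide emits one SprocketResponse line for one raw event JSON line.
decide() {
  line=$1
  # naive id extraction; a real script would use a JSON parser such as jq
  id=$(printf '%s' "$line" | sed -n 's/.*"id":"\([0-9a-f]*\)".*/\1/p')
  case "$line" in
    *BUYNOW*) printf '{"id":"%s","action":"reject","msg":"blocked: spam"}\n' "$id" ;;
    *)        printf '{"id":"%s","action":"accept"}\n' "$id" ;;
  esac
}

# main loop: read events from the relay on stdin, answer on stdout
while IFS= read -r line; do
  [ -n "$line" ] && decide "$line"
done
```

Because the relay matches responses to events, the script must echo back the event's `id` and answer every line it consumes.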
// SprocketManager handles sprocket script execution and management
type SprocketManager struct {
ctx context.Context
cancel context.CancelFunc
configDir string
scriptPath string
currentCmd *exec.Cmd
currentCancel context.CancelFunc
mutex sync.RWMutex
isRunning bool
enabled bool
disabled bool // true when sprocket is disabled due to failure
stdin io.WriteCloser
stdout io.ReadCloser
stderr io.ReadCloser
responseChan chan SprocketResponse
}
// NewSprocketManager creates a new sprocket manager
func NewSprocketManager(ctx context.Context, appName string, enabled bool) *SprocketManager {
configDir := filepath.Join(xdg.ConfigHome, appName)
scriptPath := filepath.Join(configDir, "sprocket.sh")
ctx, cancel := context.WithCancel(ctx)
sm := &SprocketManager{
ctx: ctx,
cancel: cancel,
configDir: configDir,
scriptPath: scriptPath,
enabled: enabled,
disabled: false,
responseChan: make(chan SprocketResponse, 100), // Buffered channel for responses
}
// Start the sprocket script if it exists and is enabled
if enabled {
go sm.startSprocketIfExists()
// Start periodic check for sprocket script availability
go sm.periodicCheck()
}
return sm
}
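For manual updates, the script lives under the XDG config directory. In shell terms the resolved path looks like the following; the app name "orly" is an assumption here, since the actual `appName` is passed in by the caller.

```shell
# Resolve the sprocket script path the way xdg.ConfigHome does:
# $XDG_CONFIG_HOME if set, otherwise $HOME/.config.
config_home=${XDG_CONFIG_HOME:-$HOME/.config}
script_path="$config_home/orly/sprocket.sh"
printf '%s\n' "$script_path"
```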
// disableSprocket disables sprocket due to failure
func (sm *SprocketManager) disableSprocket() {
sm.mutex.Lock()
defer sm.mutex.Unlock()
if !sm.disabled {
sm.disabled = true
log.W.F("sprocket disabled due to failure - all events will be rejected (script location: %s)", sm.scriptPath)
}
}
// enableSprocket re-enables sprocket and attempts to start it
func (sm *SprocketManager) enableSprocket() {
sm.mutex.Lock()
defer sm.mutex.Unlock()
if sm.disabled {
sm.disabled = false
log.I.F("sprocket re-enabled, attempting to start")
// Attempt to start sprocket in background
go func() {
if _, err := os.Stat(sm.scriptPath); err == nil {
if err := sm.StartSprocket(); err != nil {
log.E.F("failed to restart sprocket: %v", err)
sm.disableSprocket()
} else {
log.I.F("sprocket restarted successfully")
}
} else {
log.W.F("sprocket script still not found, keeping disabled")
sm.disableSprocket()
}
}()
}
}
// periodicCheck periodically checks if sprocket script becomes available
func (sm *SprocketManager) periodicCheck() {
ticker := time.NewTicker(30 * time.Second) // Check every 30 seconds
defer ticker.Stop()
for {
select {
case <-sm.ctx.Done():
return
case <-ticker.C:
sm.mutex.RLock()
disabled := sm.disabled
running := sm.isRunning
sm.mutex.RUnlock()
// Only check if sprocket is disabled or not running
if disabled || !running {
if _, err := os.Stat(sm.scriptPath); err == nil {
// Script is available, try to enable/restart
if disabled {
sm.enableSprocket()
} else if !running {
// Script exists but sprocket isn't running, try to start
go func() {
if err := sm.StartSprocket(); err != nil {
log.E.F("failed to restart sprocket: %v", err)
sm.disableSprocket()
} else {
log.I.F("sprocket restarted successfully")
}
}()
}
}
}
}
}
}
// startSprocketIfExists starts the sprocket script if the file exists
func (sm *SprocketManager) startSprocketIfExists() {
if _, err := os.Stat(sm.scriptPath); err == nil {
if err := sm.StartSprocket(); err != nil {
log.E.F("failed to start sprocket: %v", err)
sm.disableSprocket()
}
} else {
log.W.F("sprocket script not found at %s, disabling sprocket", sm.scriptPath)
sm.disableSprocket()
}
}
// StartSprocket starts the sprocket script
func (sm *SprocketManager) StartSprocket() error {
sm.mutex.Lock()
defer sm.mutex.Unlock()
if sm.isRunning {
return fmt.Errorf("sprocket is already running")
}
if _, err := os.Stat(sm.scriptPath); os.IsNotExist(err) {
return fmt.Errorf("sprocket script does not exist")
}
// Create a new context for this command
cmdCtx, cmdCancel := context.WithCancel(sm.ctx)
// Make the script executable
if err := os.Chmod(sm.scriptPath, 0755); chk.E(err) {
cmdCancel()
return fmt.Errorf("failed to make script executable: %v", err)
}
// Start the script
cmd := exec.CommandContext(cmdCtx, sm.scriptPath)
cmd.Dir = sm.configDir
// Set up stdio pipes for communication
stdin, err := cmd.StdinPipe()
if chk.E(err) {
cmdCancel()
return fmt.Errorf("failed to create stdin pipe: %v", err)
}
stdout, err := cmd.StdoutPipe()
if chk.E(err) {
cmdCancel()
stdin.Close()
return fmt.Errorf("failed to create stdout pipe: %v", err)
}
stderr, err := cmd.StderrPipe()
if chk.E(err) {
cmdCancel()
stdin.Close()
stdout.Close()
return fmt.Errorf("failed to create stderr pipe: %v", err)
}
// Start the command
if err := cmd.Start(); chk.E(err) {
cmdCancel()
stdin.Close()
stdout.Close()
stderr.Close()
return fmt.Errorf("failed to start sprocket: %v", err)
}
sm.currentCmd = cmd
sm.currentCancel = cmdCancel
sm.stdin = stdin
sm.stdout = stdout
sm.stderr = stderr
sm.isRunning = true
// Start response reader in background
go sm.readResponses()
// Log stderr output in background
go sm.logOutput(stdout, stderr)
// Monitor the process
go sm.monitorProcess()
log.I.F("sprocket started (pid=%d)", cmd.Process.Pid)
return nil
}
// StopSprocket stops the sprocket script gracefully, with SIGKILL fallback
func (sm *SprocketManager) StopSprocket() error {
sm.mutex.Lock()
defer sm.mutex.Unlock()
if !sm.isRunning || sm.currentCmd == nil {
return fmt.Errorf("sprocket is not running")
}
// Close stdin first to signal the script to exit
if sm.stdin != nil {
sm.stdin.Close()
}
// Cancel the context
if sm.currentCancel != nil {
sm.currentCancel()
}
// Wait for graceful shutdown with timeout
done := make(chan error, 1)
go func() {
done <- sm.currentCmd.Wait()
}()
select {
case <-done:
// Process exited gracefully
log.I.F("sprocket stopped gracefully")
case <-time.After(5 * time.Second):
// Force kill after 5 seconds
log.W.F("sprocket did not stop gracefully, sending SIGKILL")
if err := sm.currentCmd.Process.Kill(); chk.E(err) {
log.E.F("failed to kill sprocket process: %v", err)
}
<-done // Wait for the kill to complete
}
// Clean up pipes
if sm.stdin != nil {
sm.stdin.Close()
sm.stdin = nil
}
if sm.stdout != nil {
sm.stdout.Close()
sm.stdout = nil
}
if sm.stderr != nil {
sm.stderr.Close()
sm.stderr = nil
}
sm.isRunning = false
sm.currentCmd = nil
sm.currentCancel = nil
return nil
}
// RestartSprocket stops and starts the sprocket script
func (sm *SprocketManager) RestartSprocket() error {
if sm.isRunning {
if err := sm.StopSprocket(); chk.E(err) {
return fmt.Errorf("failed to stop sprocket: %v", err)
}
// Give it a moment to fully stop
time.Sleep(100 * time.Millisecond)
}
return sm.StartSprocket()
}
// UpdateSprocket updates the sprocket script and restarts it with zero downtime
func (sm *SprocketManager) UpdateSprocket(scriptContent string) error {
// Ensure config directory exists
if err := os.MkdirAll(sm.configDir, 0755); chk.E(err) {
return fmt.Errorf("failed to create config directory: %v", err)
}
// If script content is empty, delete the script and stop
if strings.TrimSpace(scriptContent) == "" {
if sm.isRunning {
if err := sm.StopSprocket(); chk.E(err) {
log.E.F("failed to stop sprocket before deletion: %v", err)
}
}
if _, err := os.Stat(sm.scriptPath); err == nil {
if err := os.Remove(sm.scriptPath); chk.E(err) {
return fmt.Errorf("failed to delete sprocket script: %v", err)
}
log.I.F("sprocket script deleted")
}
return nil
}
// Create backup of existing script if it exists
if _, err := os.Stat(sm.scriptPath); err == nil {
timestamp := time.Now().Format("20060102150405")
backupPath := sm.scriptPath + "." + timestamp
if err := os.Rename(sm.scriptPath, backupPath); chk.E(err) {
log.W.F("failed to create backup: %v", err)
} else {
log.I.F("created backup: %s", backupPath)
}
}
// Write new script to temporary file first
tempPath := sm.scriptPath + ".tmp"
if err := os.WriteFile(tempPath, []byte(scriptContent), 0755); chk.E(err) {
return fmt.Errorf("failed to write temporary sprocket script: %v", err)
}
// If sprocket is running, do zero-downtime update
if sm.isRunning {
// Atomically replace the script file
if err := os.Rename(tempPath, sm.scriptPath); chk.E(err) {
os.Remove(tempPath) // Clean up temp file
return fmt.Errorf("failed to replace sprocket script: %v", err)
}
log.I.F("sprocket script updated atomically")
// Restart the sprocket process
return sm.RestartSprocket()
} else {
// Not running, just replace the file
if err := os.Rename(tempPath, sm.scriptPath); chk.E(err) {
os.Remove(tempPath) // Clean up temp file
return fmt.Errorf("failed to replace sprocket script: %v", err)
}
log.I.F("sprocket script updated")
return nil
}
}
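The backup-then-replace sequence above can be sketched in shell, assuming a POSIX filesystem where rename over an existing file is atomic:

```shell
# Stage new content in a temp file, keep a timestamped backup of the old
# script, then rename over the target so readers never see a partial file.
dir=$(mktemp -d)
target="$dir/sprocket.sh"

printf 'echo old\n' > "$target"                  # pre-existing script
mv "$target" "$target.$(date +%Y%m%d%H%M%S)"     # timestamped backup
printf 'echo new\n' > "$target.tmp"              # write the replacement aside
mv "$target.tmp" "$target"                       # atomic swap into place
```

The temp file and the target must live on the same filesystem for the final rename to be atomic, which is why the Go code stages `sprocket.sh.tmp` next to the script rather than in `/tmp`.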
// GetSprocketStatus returns the current status of the sprocket
func (sm *SprocketManager) GetSprocketStatus() map[string]interface{} {
sm.mutex.RLock()
defer sm.mutex.RUnlock()
status := map[string]interface{}{
"is_running": sm.isRunning,
"script_exists": false,
"script_path": sm.scriptPath,
}
if info, err := os.Stat(sm.scriptPath); err == nil {
status["script_exists"] = true
status["script_modified"] = info.ModTime()
// Include the script content
if content, err := os.ReadFile(sm.scriptPath); err == nil {
status["script_content"] = string(content)
}
}
if sm.isRunning && sm.currentCmd != nil && sm.currentCmd.Process != nil {
status["pid"] = sm.currentCmd.Process.Pid
}
return status
}
// GetSprocketVersions returns a list of all sprocket script versions
func (sm *SprocketManager) GetSprocketVersions() ([]map[string]interface{}, error) {
versions := []map[string]interface{}{}
// Check for the current script (a single Stat call suffices)
if info, err := os.Stat(sm.scriptPath); err == nil {
if content, err := os.ReadFile(sm.scriptPath); err == nil {
versions = append(versions, map[string]interface{}{
"name": "sprocket.sh",
"path": sm.scriptPath,
"modified": info.ModTime(),
"content": string(content),
"is_current": true,
})
}
}
// Check for backup versions
dir := filepath.Dir(sm.scriptPath)
files, err := os.ReadDir(dir)
if chk.E(err) {
return versions, nil
}
for _, file := range files {
if strings.HasPrefix(file.Name(), "sprocket.sh.") && !file.IsDir() {
path := filepath.Join(dir, file.Name())
if info, err := os.Stat(path); err == nil {
if content, err := os.ReadFile(path); err == nil {
versions = append(versions, map[string]interface{}{
"name": file.Name(),
"path": path,
"modified": info.ModTime(),
"content": string(content),
"is_current": false,
})
}
}
}
}
return versions, nil
}
// DeleteSprocketVersion deletes a specific sprocket version
func (sm *SprocketManager) DeleteSprocketVersion(filename string) error {
// Don't allow deleting the current script
if filename == "sprocket.sh" {
return fmt.Errorf("cannot delete current sprocket script")
}
path := filepath.Join(sm.configDir, filename)
if err := os.Remove(path); chk.E(err) {
return fmt.Errorf("failed to delete sprocket version: %v", err)
}
log.I.F("deleted sprocket version: %s", filename)
return nil
}
// logOutput logs stderr output from the sprocket script. Stdout carries the
// JSONL responses and is consumed by readResponses, so it must not be read
// (or closed) here as well.
func (sm *SprocketManager) logOutput(stdout, stderr io.ReadCloser) {
defer stderr.Close()
io.Copy(os.Stderr, stderr)
}
// ProcessEvent sends an event to the sprocket script and waits for a response
func (sm *SprocketManager) ProcessEvent(evt *event.E) (*SprocketResponse, error) {
sm.mutex.RLock()
if !sm.isRunning || sm.stdin == nil {
sm.mutex.RUnlock()
return nil, fmt.Errorf("sprocket is not running")
}
stdin := sm.stdin
sm.mutex.RUnlock()
// Serialize the event to JSON
eventJSON, err := json.Marshal(evt)
if chk.E(err) {
return nil, fmt.Errorf("failed to serialize event: %v", err)
}
// Send the event JSON to the sprocket script as a single
// newline-terminated line, matching the JSONL framing the script reads
if _, err := stdin.Write(append(eventJSON, '\n')); chk.E(err) {
return nil, fmt.Errorf("failed to write event to sprocket: %v", err)
}
// Wait for a response with timeout. Responses are matched by arrival
// order only; the ID field is not correlated with evt here.
select {
case response := <-sm.responseChan:
return &response, nil
case <-time.After(5 * time.Second):
return nil, fmt.Errorf("sprocket response timeout")
case <-sm.ctx.Done():
return nil, fmt.Errorf("sprocket context cancelled")
}
}
// readResponses reads JSONL responses from the sprocket script
func (sm *SprocketManager) readResponses() {
if sm.stdout == nil {
return
}
scanner := bufio.NewScanner(sm.stdout)
for scanner.Scan() {
line := scanner.Text()
if line == "" {
continue
}
var response SprocketResponse
if err := json.Unmarshal([]byte(line), &response); chk.E(err) {
log.E.F("failed to parse sprocket response: %v", err)
continue
}
// Send response to channel (non-blocking)
select {
case sm.responseChan <- response:
default:
log.W.F("sprocket response channel full, dropping response")
}
}
if err := scanner.Err(); chk.E(err) {
log.E.F("error reading sprocket responses: %v", err)
}
}
// IsEnabled returns whether sprocket is enabled
func (sm *SprocketManager) IsEnabled() bool {
return sm.enabled
}
// IsRunning returns whether sprocket is currently running
func (sm *SprocketManager) IsRunning() bool {
sm.mutex.RLock()
defer sm.mutex.RUnlock()
return sm.isRunning
}
// IsDisabled returns whether sprocket is disabled due to failure
func (sm *SprocketManager) IsDisabled() bool {
sm.mutex.RLock()
defer sm.mutex.RUnlock()
return sm.disabled
}
// monitorProcess monitors the sprocket process and cleans up when it exits
func (sm *SprocketManager) monitorProcess() {
if sm.currentCmd == nil {
return
}
err := sm.currentCmd.Wait()
sm.mutex.Lock()
defer sm.mutex.Unlock()
// Clean up pipes
if sm.stdin != nil {
sm.stdin.Close()
sm.stdin = nil
}
if sm.stdout != nil {
sm.stdout.Close()
sm.stdout = nil
}
if sm.stderr != nil {
sm.stderr.Close()
sm.stderr = nil
}
sm.isRunning = false
sm.currentCmd = nil
sm.currentCancel = nil
if err != nil {
log.E.F("sprocket process exited with error: %v", err)
// Auto-disable sprocket on failure
sm.disabled = true
log.W.F("sprocket disabled due to process failure - all events will be rejected (script location: %s)", sm.scriptPath)
} else {
log.I.F("sprocket process exited normally")
}
}
// Shutdown gracefully shuts down the sprocket manager
func (sm *SprocketManager) Shutdown() {
sm.cancel()
// Use the locked accessor to avoid racing with monitorProcess,
// which updates isRunning under the mutex.
if sm.IsRunning() {
sm.StopSprocket()
}
}
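On the other end of these pipes, a sprocket script only has to read one JSON event per stdin line and write one JSON response per stdout line. A minimal Go sketch of such a script follows; the `Event`/`Response` field names and the hashtag policy are illustrative assumptions, not the relay's actual wire schema:

```go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strings"
)

// Event and Response field names are assumptions for illustration;
// the real sprocket schema is defined by the relay.
type Event struct {
	ID      string `json:"id"`
	Content string `json:"content"`
}

type Response struct {
	ID     string `json:"id"`
	Action string `json:"action"`           // "accept" or "reject"
	Reason string `json:"reason,omitempty"` // reported when rejecting
}

// decide applies a toy policy: reject events carrying a blocked hashtag.
func decide(ev Event) Response {
	if strings.Contains(strings.ToLower(ev.Content), "#spam") {
		return Response{ID: ev.ID, Action: "reject", Reason: "blocked hashtag"}
	}
	return Response{ID: ev.ID, Action: "accept"}
}

func main() {
	// One JSON event per input line; one JSON response per output line.
	in := bufio.NewScanner(os.Stdin)
	out := bufio.NewWriter(os.Stdout)
	defer out.Flush()
	for in.Scan() {
		line := in.Bytes()
		if len(line) == 0 {
			continue
		}
		var ev Event
		if err := json.Unmarshal(line, &ev); err != nil {
			continue // skip lines we cannot parse
		}
		resp, _ := json.Marshal(decide(ev))
		fmt.Fprintln(out, string(resp))
		out.Flush() // the relay reads responses line-by-line
	}
}
```

Because `readResponses` drops responses when its channel is full, a script like this should answer promptly and flush after every line.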


@@ -16,4 +16,10 @@ func GetReactAppFS() http.FileSystem {
panic("Failed to load embedded web app: " + err.Error())
}
return http.FS(webDist)
}
}
// ServeEmbeddedWeb serves the embedded web application
func ServeEmbeddedWeb(w http.ResponseWriter, r *http.Request) {
// Serve the embedded web app
http.FileServer(GetReactAppFS()).ServeHTTP(w, r)
}

app/web/.gitignore vendored

@@ -1,30 +1,11 @@
# Dependencies
node_modules
.pnp
.pnp.js
# Bun
.bunfig.toml
bun.lockb
# Build directories
build
# Cache and logs
.cache
.temp
.log
*.log
# Environment variables
.env
.env.local
.env.development.local
.env.test.local
.env.production.local
# Editor directories and files
.idea
.vscode
*.swp
*.swo
node_modules/
dist/
.vite/
.tanstack/
.idea/
.DS_Store
npm-debug.log*
yarn-debug.log*
yarn-error.log*
pnpm-debug.log*
/.idea/


@@ -1,89 +0,0 @@
# Orly Web Application
This is a React web application, built and bundled with Bun, that is automatically embedded into the Go binary at build time.
## Prerequisites
- [Bun](https://bun.sh/) - JavaScript runtime and toolkit
- Go 1.16+ (for embedding functionality)
## Development
There are two ways to develop the web app:
1) Standalone (recommended for hot reload)
- Start the Go relay with the embedded web UI disabled so the React app can run on its own dev server with HMR.
- Configure the relay via environment variables:
```bash
# In another shell at repo root
export ORLY_WEB_DISABLE=true
# Optional: if you want same-origin URLs, you can set a proxy target and access the relay on the same port
# export ORLY_WEB_DEV_PROXY_URL=http://localhost:5173
# Start the relay as usual
go run .
```
- Then start the React dev server:
```bash
cd app/web
bun install
bun dev
```
When ORLY_WEB_DISABLE=true is set, the Go server still serves the API and WebSocket endpoints and sends permissive CORS headers, so the dev server can reach them cross-origin. If ORLY_WEB_DEV_PROXY_URL is set, the Go server reverse-proxies non-/api paths to the dev server so you can use the same origin.
2) Embedded (no hot reload)
- Build the web app and run the Go server with defaults:
```bash
cd app/web
bun install
bun run build
cd ../../
go run .
```
## Building
The React application needs to be built before compiling the Go binary to ensure that the embedded files are available:
```bash
# Build the React application
cd app/web
bun install
bun run build
# Build the Go binary from project root
cd ../../
go build
```
## How it works
1. The React application is built to the `app/web/dist` directory
2. The Go embed directive in `app/web.go` embeds these files into the binary
3. When the server runs, it serves the embedded React app at the root path
## Build Automation
You can create a shell script to automate the build process:
```bash
#!/bin/bash
# build.sh
echo "Building React app..."
cd app/web
bun install
bun run build
echo "Building Go binary..."
cd ../../
go build
echo "Build complete!"
```
Make it executable with `chmod +x build.sh` and run with `./build.sh`.


@@ -2,44 +2,189 @@
"lockfileVersion": 1,
"workspaces": {
"": {
"name": "orly-web",
"name": "svelte-app",
"dependencies": {
"react": "^18.2.0",
"react-dom": "^18.2.0",
"react-json-pretty": "^2.2.0",
"sirv-cli": "^2.0.0",
},
"devDependencies": {
"bun-types": "latest",
"@rollup/plugin-commonjs": "^24.0.0",
"@rollup/plugin-node-resolve": "^15.0.0",
"@rollup/plugin-terser": "^0.4.0",
"rollup": "^3.15.0",
"rollup-plugin-css-only": "^4.3.0",
"rollup-plugin-livereload": "^2.0.0",
"rollup-plugin-svelte": "^7.1.2",
"svelte": "^3.55.0",
},
},
},
"packages": {
"@types/node": ["@types/node@24.5.2", "", { "dependencies": { "undici-types": "~7.12.0" } }, "sha512-FYxk1I7wPv3K2XBaoyH2cTnocQEu8AOZ60hPbsyukMPLv5/5qr7V1i8PLHdl6Zf87I+xZXFvPCXYjiTFq+YSDQ=="],
"@jridgewell/gen-mapping": ["@jridgewell/gen-mapping@0.3.13", "", { "dependencies": { "@jridgewell/sourcemap-codec": "^1.5.0", "@jridgewell/trace-mapping": "^0.3.24" } }, "sha512-2kkt/7niJ6MgEPxF0bYdQ6etZaA+fQvDcLKckhy1yIQOzaoKjBBjSj63/aLVjYE3qhRt5dvM+uUyfCg6UKCBbA=="],
"@types/react": ["@types/react@19.1.13", "", { "dependencies": { "csstype": "^3.0.2" } }, "sha512-hHkbU/eoO3EG5/MZkuFSKmYqPbSVk5byPFa3e7y/8TybHiLMACgI8seVYlicwk7H5K/rI2px9xrQp/C+AUDTiQ=="],
"@jridgewell/resolve-uri": ["@jridgewell/resolve-uri@3.1.2", "", {}, "sha512-bRISgCIjP20/tbWSPWMEi54QVPRZExkuD9lJL+UIxUKtwVJA8wW1Trb1jMs1RFXo1CBTNZ/5hpC9QvmKWdopKw=="],
"bun-types": ["bun-types@1.2.22", "", { "dependencies": { "@types/node": "*" }, "peerDependencies": { "@types/react": "^19" } }, "sha512-hwaAu8tct/Zn6Zft4U9BsZcXkYomzpHJX28ofvx7k0Zz2HNz54n1n+tDgxoWFGB4PcFvJXJQloPhaV2eP3Q6EA=="],
"@jridgewell/source-map": ["@jridgewell/source-map@0.3.11", "", { "dependencies": { "@jridgewell/gen-mapping": "^0.3.5", "@jridgewell/trace-mapping": "^0.3.25" } }, "sha512-ZMp1V8ZFcPG5dIWnQLr3NSI1MiCU7UETdS/A0G8V/XWHvJv3ZsFqutJn1Y5RPmAPX6F3BiE397OqveU/9NCuIA=="],
"csstype": ["csstype@3.1.3", "", {}, "sha512-M1uQkMl8rQK/szD0LNhtqxIPLpimGm8sOBwU7lLnCpSbTyY3yeU1Vc7l4KT5zT4s/yOxHH5O7tIuuLOCnLADRw=="],
"@jridgewell/sourcemap-codec": ["@jridgewell/sourcemap-codec@1.5.5", "", {}, "sha512-cYQ9310grqxueWbl+WuIUIaiUaDcj7WOq5fVhEljNVgRfOUhY9fy2zTvfoqWsnebh8Sl70VScFbICvJnLKB0Og=="],
"js-tokens": ["js-tokens@4.0.0", "", {}, "sha512-RdJUflcE3cUzKiMqQgsCu06FPu9UdIJO0beYbPhHN4k6apgJtifcoCtT9bcxOpYBtpD2kCM6Sbzg4CausW/PKQ=="],
"@jridgewell/trace-mapping": ["@jridgewell/trace-mapping@0.3.31", "", { "dependencies": { "@jridgewell/resolve-uri": "^3.1.0", "@jridgewell/sourcemap-codec": "^1.4.14" } }, "sha512-zzNR+SdQSDJzc8joaeP8QQoCQr8NuYx2dIIytl1QeBEZHJ9uW6hebsrYgbz8hJwUQao3TWCMtmfV8Nu1twOLAw=="],
"loose-envify": ["loose-envify@1.4.0", "", { "dependencies": { "js-tokens": "^3.0.0 || ^4.0.0" }, "bin": { "loose-envify": "cli.js" } }, "sha512-lyuxPGr/Wfhrlem2CL/UcnUc1zcqKAImBDzukY7Y5F/yQiNdko6+fRLevlw1HgMySw7f611UIY408EtxRSoK3Q=="],
"@polka/url": ["@polka/url@1.0.0-next.29", "", {}, "sha512-wwQAWhWSuHaag8c4q/KN/vCoeOJYshAIvMQwD4GpSb3OiZklFfvAgmj0VCBBImRpuF/aFgIRzllXlVX93Jevww=="],
"object-assign": ["object-assign@4.1.1", "", {}, "sha512-rJgTQnkUnH1sFw8yT6VSU3zD3sWmu6sZhIseY8VX+GRu3P6F7Fu+JNDoXfklElbLJSnc3FUQHVe4cU5hj+BcUg=="],
"@rollup/plugin-commonjs": ["@rollup/plugin-commonjs@24.1.0", "", { "dependencies": { "@rollup/pluginutils": "^5.0.1", "commondir": "^1.0.1", "estree-walker": "^2.0.2", "glob": "^8.0.3", "is-reference": "1.2.1", "magic-string": "^0.27.0" }, "peerDependencies": { "rollup": "^2.68.0||^3.0.0" }, "optionalPeers": ["rollup"] }, "sha512-eSL45hjhCWI0jCCXcNtLVqM5N1JlBGvlFfY0m6oOYnLCJ6N0qEXoZql4sY2MOUArzhH4SA/qBpTxvvZp2Sc+DQ=="],
"prop-types": ["prop-types@15.8.1", "", { "dependencies": { "loose-envify": "^1.4.0", "object-assign": "^4.1.1", "react-is": "^16.13.1" } }, "sha512-oj87CgZICdulUohogVAR7AjlC0327U4el4L6eAvOqCeudMDVU0NThNaV+b9Df4dXgSP1gXMTnPdhfe/2qDH5cg=="],
"@rollup/plugin-node-resolve": ["@rollup/plugin-node-resolve@15.3.1", "", { "dependencies": { "@rollup/pluginutils": "^5.0.1", "@types/resolve": "1.20.2", "deepmerge": "^4.2.2", "is-module": "^1.0.0", "resolve": "^1.22.1" }, "peerDependencies": { "rollup": "^2.78.0||^3.0.0||^4.0.0" }, "optionalPeers": ["rollup"] }, "sha512-tgg6b91pAybXHJQMAAwW9VuWBO6Thi+q7BCNARLwSqlmsHz0XYURtGvh/AuwSADXSI4h/2uHbs7s4FzlZDGSGA=="],
"react": ["react@18.3.1", "", { "dependencies": { "loose-envify": "^1.1.0" } }, "sha512-wS+hAgJShR0KhEvPJArfuPVN1+Hz1t0Y6n5jLrGQbkb4urgPE/0Rve+1kMB1v/oWgHgm4WIcV+i7F2pTVj+2iQ=="],
"@rollup/plugin-terser": ["@rollup/plugin-terser@0.4.4", "", { "dependencies": { "serialize-javascript": "^6.0.1", "smob": "^1.0.0", "terser": "^5.17.4" }, "peerDependencies": { "rollup": "^2.0.0||^3.0.0||^4.0.0" }, "optionalPeers": ["rollup"] }, "sha512-XHeJC5Bgvs8LfukDwWZp7yeqin6ns8RTl2B9avbejt6tZqsqvVoWI7ZTQrcNsfKEDWBTnTxM8nMDkO2IFFbd0A=="],
"react-dom": ["react-dom@18.3.1", "", { "dependencies": { "loose-envify": "^1.1.0", "scheduler": "^0.23.2" }, "peerDependencies": { "react": "^18.3.1" } }, "sha512-5m4nQKp+rZRb09LNH59GM4BxTh9251/ylbKIbpe7TpGxfJ+9kv6BLkLBXIjjspbgbnIBNqlI23tRnTWT0snUIw=="],
"@rollup/pluginutils": ["@rollup/pluginutils@5.3.0", "", { "dependencies": { "@types/estree": "^1.0.0", "estree-walker": "^2.0.2", "picomatch": "^4.0.2" }, "peerDependencies": { "rollup": "^1.20.0||^2.0.0||^3.0.0||^4.0.0" }, "optionalPeers": ["rollup"] }, "sha512-5EdhGZtnu3V88ces7s53hhfK5KSASnJZv8Lulpc04cWO3REESroJXg73DFsOmgbU2BhwV0E20bu2IDZb3VKW4Q=="],
"react-is": ["react-is@16.13.1", "", {}, "sha512-24e6ynE2H+OKt4kqsOvNd8kBpV65zoxbA4BVsEOB3ARVWQki/DHzaUoC5KuON/BiccDaCCTZBuOcfZs70kR8bQ=="],
"@types/estree": ["@types/estree@1.0.8", "", {}, "sha512-dWHzHa2WqEXI/O1E9OjrocMTKJl2mSrEolh1Iomrv6U+JuNwaHXsXx9bLu5gG7BUWFIN0skIQJQ/L1rIex4X6w=="],
"react-json-pretty": ["react-json-pretty@2.2.0", "", { "dependencies": { "prop-types": "^15.6.2" }, "peerDependencies": { "react": ">=15.0", "react-dom": ">=15.0" } }, "sha512-3UMzlAXkJ4R8S4vmkRKtvJHTewG4/rn1Q18n0zqdu/ipZbUPLVZD+QwC7uVcD/IAY3s8iNVHlgR2dMzIUS0n1A=="],
"@types/resolve": ["@types/resolve@1.20.2", "", {}, "sha512-60BCwRFOZCQhDncwQdxxeOEEkbc5dIMccYLwbxsS4TUNeVECQ/pBJ0j09mrHOl/JJvpRPGwO9SvE4nR2Nb/a4Q=="],
"scheduler": ["scheduler@0.23.2", "", { "dependencies": { "loose-envify": "^1.1.0" } }, "sha512-UOShsPwz7NrMUqhR6t0hWjFduvOzbtv7toDH1/hIrfRNIDBnnBWd0CwJTGvTpngVlmwGCdP9/Zl/tVrDqcuYzQ=="],
"acorn": ["acorn@8.15.0", "", { "bin": { "acorn": "bin/acorn" } }, "sha512-NZyJarBfL7nWwIq+FDL6Zp/yHEhePMNnnJ0y3qfieCrmNvYct8uvtiV41UvlSe6apAfk0fY1FbWx+NwfmpvtTg=="],
"undici-types": ["undici-types@7.12.0", "", {}, "sha512-goOacqME2GYyOZZfb5Lgtu+1IDmAlAEu5xnD3+xTzS10hT0vzpf0SPjkXwAw9Jm+4n/mQGDP3LO8CPbYROeBfQ=="],
"anymatch": ["anymatch@3.1.3", "", { "dependencies": { "normalize-path": "^3.0.0", "picomatch": "^2.0.4" } }, "sha512-KMReFUr0B4t+D+OBkjR3KYqvocp2XaSzO55UcB6mgQMd3KbcE+mWTyvVV7D/zsdEbNnV6acZUutkiHQXvTr1Rw=="],
"balanced-match": ["balanced-match@1.0.2", "", {}, "sha512-3oSeUO0TMV67hN1AmbXsK4yaqU7tjiHlbxRDZOpH0KW9+CeX4bRAaX0Anxt0tx2MrpRpWwQaPwIlISEJhYU5Pw=="],
"binary-extensions": ["binary-extensions@2.3.0", "", {}, "sha512-Ceh+7ox5qe7LJuLHoY0feh3pHuUDHAcRUeyL2VYghZwfpkNIy/+8Ocg0a3UuSoYzavmylwuLWQOf3hl0jjMMIw=="],
"brace-expansion": ["brace-expansion@2.0.2", "", { "dependencies": { "balanced-match": "^1.0.0" } }, "sha512-Jt0vHyM+jmUBqojB7E1NIYadt0vI0Qxjxd2TErW94wDz+E2LAm5vKMXXwg6ZZBTHPuUlDgQHKXvjGBdfcF1ZDQ=="],
"braces": ["braces@3.0.3", "", { "dependencies": { "fill-range": "^7.1.1" } }, "sha512-yQbXgO/OSZVD2IsiLlro+7Hf6Q18EJrKSEsdoMzKePKXct3gvD8oLcOQdIzGupr5Fj+EDe8gO/lxc1BzfMpxvA=="],
"buffer-from": ["buffer-from@1.1.2", "", {}, "sha512-E+XQCRwSbaaiChtv6k6Dwgc+bx+Bs6vuKJHHl5kox/BaKbhiXzqQOwK4cO22yElGp2OCmjwVhT3HmxgyPGnJfQ=="],
"chokidar": ["chokidar@3.6.0", "", { "dependencies": { "anymatch": "~3.1.2", "braces": "~3.0.2", "glob-parent": "~5.1.2", "is-binary-path": "~2.1.0", "is-glob": "~4.0.1", "normalize-path": "~3.0.0", "readdirp": "~3.6.0" }, "optionalDependencies": { "fsevents": "~2.3.2" } }, "sha512-7VT13fmjotKpGipCW9JEQAusEPE+Ei8nl6/g4FBAmIm0GOOLMua9NDDo/DWp0ZAxCr3cPq5ZpBqmPAQgDda2Pw=="],
"commander": ["commander@2.20.3", "", {}, "sha512-GpVkmM8vF2vQUkj2LvZmD35JxeJOLCwJ9cUkugyk2nuhbv3+mJvpLYYt+0+USMxE+oj+ey/lJEnhZw75x/OMcQ=="],
"commondir": ["commondir@1.0.1", "", {}, "sha512-W9pAhw0ja1Edb5GVdIF1mjZw/ASI0AlShXM83UUGe2DVr5TdAPEA1OA8m/g8zWp9x6On7gqufY+FatDbC3MDQg=="],
"console-clear": ["console-clear@1.1.1", "", {}, "sha512-pMD+MVR538ipqkG5JXeOEbKWS5um1H4LUUccUQG68qpeqBYbzYy79Gh55jkd2TtPdRfUaLWdv6LPP//5Zt0aPQ=="],
"deepmerge": ["deepmerge@4.3.1", "", {}, "sha512-3sUqbMEc77XqpdNO7FRyRog+eW3ph+GYCbj+rK+uYyRMuwsVy0rMiVtPn+QJlKFvWP/1PYpapqYn0Me2knFn+A=="],
"estree-walker": ["estree-walker@2.0.2", "", {}, "sha512-Rfkk/Mp/DL7JVje3u18FxFujQlTNR2q6QfMSMB7AvCBx91NGj/ba3kCfza0f6dVDbw7YlRf/nDrn7pQrCCyQ/w=="],
"fill-range": ["fill-range@7.1.1", "", { "dependencies": { "to-regex-range": "^5.0.1" } }, "sha512-YsGpe3WHLK8ZYi4tWDg2Jy3ebRz2rXowDxnld4bkQB00cc/1Zw9AWnC0i9ztDJitivtQvaI9KaLyKrc+hBW0yg=="],
"fs.realpath": ["fs.realpath@1.0.0", "", {}, "sha512-OO0pH2lK6a0hZnAdau5ItzHPI6pUlvI7jMVnxUQRtw4owF2wk8lOSabtGDCTP4Ggrg2MbGnWO9X8K1t4+fGMDw=="],
"fsevents": ["fsevents@2.3.3", "", { "os": "darwin" }, "sha512-5xoDfX+fL7faATnagmWPpbFtwh/R77WmMMqqHGS65C3vvB0YHrgF+B1YmZ3441tMj5n63k0212XNoJwzlhffQw=="],
"function-bind": ["function-bind@1.1.2", "", {}, "sha512-7XHNxH7qX9xG5mIwxkhumTox/MIRNcOgDrxWsMt2pAr23WHp6MrRlN7FBSFpCpr+oVO0F744iUgR82nJMfG2SA=="],
"get-port": ["get-port@3.2.0", "", {}, "sha512-x5UJKlgeUiNT8nyo/AcnwLnZuZNcSjSw0kogRB+Whd1fjjFq4B1hySFxSFWWSn4mIBzg3sRNUDFYc4g5gjPoLg=="],
"glob": ["glob@8.1.0", "", { "dependencies": { "fs.realpath": "^1.0.0", "inflight": "^1.0.4", "inherits": "2", "minimatch": "^5.0.1", "once": "^1.3.0" } }, "sha512-r8hpEjiQEYlF2QU0df3dS+nxxSIreXQS1qRhMJM0Q5NDdR386C7jb7Hwwod8Fgiuex+k0GFjgft18yvxm5XoCQ=="],
"glob-parent": ["glob-parent@5.1.2", "", { "dependencies": { "is-glob": "^4.0.1" } }, "sha512-AOIgSQCepiJYwP3ARnGx+5VnTu2HBYdzbGP45eLw1vr3zB3vZLeyed1sC9hnbcOc9/SrMyM5RPQrkGz4aS9Zow=="],
"hasown": ["hasown@2.0.2", "", { "dependencies": { "function-bind": "^1.1.2" } }, "sha512-0hJU9SCPvmMzIBdZFqNPXWa6dqh7WdH0cII9y+CyS8rG3nL48Bclra9HmKhVVUHyPWNH5Y7xDwAB7bfgSjkUMQ=="],
"inflight": ["inflight@1.0.6", "", { "dependencies": { "once": "^1.3.0", "wrappy": "1" } }, "sha512-k92I/b08q4wvFscXCLvqfsHCrjrF7yiXsQuIVvVE7N82W3+aqpzuUdBbfhWcy/FZR3/4IgflMgKLOsvPDrGCJA=="],
"inherits": ["inherits@2.0.4", "", {}, "sha512-k/vGaX4/Yla3WzyMCvTQOXYeIHvqOKtnqBduzTHpzpQZzAskKMhZ2K+EnBiSM9zGSoIFeMpXKxa4dYeZIQqewQ=="],
"is-binary-path": ["is-binary-path@2.1.0", "", { "dependencies": { "binary-extensions": "^2.0.0" } }, "sha512-ZMERYes6pDydyuGidse7OsHxtbI7WVeUEozgR/g7rd0xUimYNlvZRE/K2MgZTjWy725IfelLeVcEM97mmtRGXw=="],
"is-core-module": ["is-core-module@2.16.1", "", { "dependencies": { "hasown": "^2.0.2" } }, "sha512-UfoeMA6fIJ8wTYFEUjelnaGI67v6+N7qXJEvQuIGa99l4xsCruSYOVSQ0uPANn4dAzm8lkYPaKLrrijLq7x23w=="],
"is-extglob": ["is-extglob@2.1.1", "", {}, "sha512-SbKbANkN603Vi4jEZv49LeVJMn4yGwsbzZworEoyEiutsN3nJYdbO36zfhGJ6QEDpOZIFkDtnq5JRxmvl3jsoQ=="],
"is-glob": ["is-glob@4.0.3", "", { "dependencies": { "is-extglob": "^2.1.1" } }, "sha512-xelSayHH36ZgE7ZWhli7pW34hNbNl8Ojv5KVmkJD4hBdD3th8Tfk9vYasLM+mXWOZhFkgZfxhLSnrwRr4elSSg=="],
"is-module": ["is-module@1.0.0", "", {}, "sha512-51ypPSPCoTEIN9dy5Oy+h4pShgJmPCygKfyRCISBI+JoWT/2oJvK8QPxmwv7b/p239jXrm9M1mlQbyKJ5A152g=="],
"is-number": ["is-number@7.0.0", "", {}, "sha512-41Cifkg6e8TylSpdtTpeLVMqvSBEVzTttHvERD741+pnZ8ANv0004MRL43QKPDlK9cGvNp6NZWZUBlbGXYxxng=="],
"is-reference": ["is-reference@1.2.1", "", { "dependencies": { "@types/estree": "*" } }, "sha512-U82MsXXiFIrjCK4otLT+o2NA2Cd2g5MLoOVXUZjIOhLurrRxpEXzI8O0KZHr3IjLvlAH1kTPYSuqer5T9ZVBKQ=="],
"kleur": ["kleur@4.1.5", "", {}, "sha512-o+NO+8WrRiQEE4/7nwRJhN1HWpVmJm511pBHUxPLtp0BUISzlBplORYSmTclCnJvQq2tKu/sgl3xVpkc7ZWuQQ=="],
"livereload": ["livereload@0.9.3", "", { "dependencies": { "chokidar": "^3.5.0", "livereload-js": "^3.3.1", "opts": ">= 1.2.0", "ws": "^7.4.3" }, "bin": { "livereload": "bin/livereload.js" } }, "sha512-q7Z71n3i4X0R9xthAryBdNGVGAO2R5X+/xXpmKeuPMrteg+W2U8VusTKV3YiJbXZwKsOlFlHe+go6uSNjfxrZw=="],
"livereload-js": ["livereload-js@3.4.1", "", {}, "sha512-5MP0uUeVCec89ZbNOT/i97Mc+q3SxXmiUGhRFOTmhrGPn//uWVQdCvcLJDy64MSBR5MidFdOR7B9viumoavy6g=="],
"local-access": ["local-access@1.1.0", "", {}, "sha512-XfegD5pyTAfb+GY6chk283Ox5z8WexG56OvM06RWLpAc/UHozO8X6xAxEkIitZOtsSMM1Yr3DkHgW5W+onLhCw=="],
"magic-string": ["magic-string@0.27.0", "", { "dependencies": { "@jridgewell/sourcemap-codec": "^1.4.13" } }, "sha512-8UnnX2PeRAPZuN12svgR9j7M1uWMovg/CEnIwIG0LFkXSJJe4PdfUGiTGl8V9bsBHFUtfVINcSyYxd7q+kx9fA=="],
"minimatch": ["minimatch@5.1.6", "", { "dependencies": { "brace-expansion": "^2.0.1" } }, "sha512-lKwV/1brpG6mBUFHtb7NUmtABCb2WZZmm2wNiOA5hAb8VdCS4B3dtMWyvcoViccwAW/COERjXLt0zP1zXUN26g=="],
"mri": ["mri@1.2.0", "", {}, "sha512-tzzskb3bG8LvYGFF/mDTpq3jpI6Q9wc3LEmBaghu+DdCssd1FakN7Bc0hVNmEyGq1bq3RgfkCb3cmQLpNPOroA=="],
"mrmime": ["mrmime@2.0.1", "", {}, "sha512-Y3wQdFg2Va6etvQ5I82yUhGdsKrcYox6p7FfL1LbK2J4V01F9TGlepTIhnK24t7koZibmg82KGglhA1XK5IsLQ=="],
"normalize-path": ["normalize-path@3.0.0", "", {}, "sha512-6eZs5Ls3WtCisHWp9S2GUy8dqkpGi4BVSz3GaqiE6ezub0512ESztXUwUB6C6IKbQkY2Pnb/mD4WYojCRwcwLA=="],
"once": ["once@1.4.0", "", { "dependencies": { "wrappy": "1" } }, "sha512-lNaJgI+2Q5URQBkccEKHTQOPaXdUxnZZElQTZY0MFUAuaEqe1E+Nyvgdz/aIyNi6Z9MzO5dv1H8n58/GELp3+w=="],
"opts": ["opts@2.0.2", "", {}, "sha512-k41FwbcLnlgnFh69f4qdUfvDQ+5vaSDnVPFI/y5XuhKRq97EnVVneO9F1ESVCdiVu4fCS2L8usX3mU331hB7pg=="],
"path-parse": ["path-parse@1.0.7", "", {}, "sha512-LDJzPVEEEPR+y48z93A0Ed0yXb8pAByGWo/k5YYdYgpY2/2EsOsksJrq7lOHxryrVOn1ejG6oAp8ahvOIQD8sw=="],
"picomatch": ["picomatch@4.0.3", "", {}, "sha512-5gTmgEY/sqK6gFXLIsQNH19lWb4ebPDLA4SdLP7dsWkIXHWlG66oPuVvXSGFPppYZz8ZDZq0dYYrbHfBCVUb1Q=="],
"randombytes": ["randombytes@2.1.0", "", { "dependencies": { "safe-buffer": "^5.1.0" } }, "sha512-vYl3iOX+4CKUWuxGi9Ukhie6fsqXqS9FE2Zaic4tNFD2N2QQaXOMFbuKK4QmDHC0JO6B1Zp41J0LpT0oR68amQ=="],
"readdirp": ["readdirp@3.6.0", "", { "dependencies": { "picomatch": "^2.2.1" } }, "sha512-hOS089on8RduqdbhvQ5Z37A0ESjsqz6qnRcffsMU3495FuTdqSm+7bhJ29JvIOsBDEEnan5DPu9t3To9VRlMzA=="],
"resolve": ["resolve@1.22.10", "", { "dependencies": { "is-core-module": "^2.16.0", "path-parse": "^1.0.7", "supports-preserve-symlinks-flag": "^1.0.0" }, "bin": { "resolve": "bin/resolve" } }, "sha512-NPRy+/ncIMeDlTAsuqwKIiferiawhefFJtkNSW0qZJEqMEb+qBt/77B/jGeeek+F0uOeN05CDa6HXbbIgtVX4w=="],
"resolve.exports": ["resolve.exports@2.0.3", "", {}, "sha512-OcXjMsGdhL4XnbShKpAcSqPMzQoYkYyhbEaeSko47MjRP9NfEQMhZkXL1DoFlt9LWQn4YttrdnV6X2OiyzBi+A=="],
"rollup": ["rollup@3.29.5", "", { "optionalDependencies": { "fsevents": "~2.3.2" }, "bin": { "rollup": "dist/bin/rollup" } }, "sha512-GVsDdsbJzzy4S/v3dqWPJ7EfvZJfCHiDqe80IyrF59LYuP+e6U1LJoUqeuqRbwAWoMNoXivMNeNAOf5E22VA1w=="],
"rollup-plugin-css-only": ["rollup-plugin-css-only@4.5.5", "", { "dependencies": { "@rollup/pluginutils": "5" }, "peerDependencies": { "rollup": "<5" } }, "sha512-O2m2Sj8qsAtjUVqZyGTDXJypaOFFNV4knz8OlS6wJBws6XEICIiLsXmI56SbQEmWDqYU5TgRgWmslGj4THofJQ=="],
"rollup-plugin-livereload": ["rollup-plugin-livereload@2.0.5", "", { "dependencies": { "livereload": "^0.9.1" } }, "sha512-vqQZ/UQowTW7VoiKEM5ouNW90wE5/GZLfdWuR0ELxyKOJUIaj+uismPZZaICU4DnWPVjnpCDDxEqwU7pcKY/PA=="],
"rollup-plugin-svelte": ["rollup-plugin-svelte@7.2.3", "", { "dependencies": { "@rollup/pluginutils": "^4.1.0", "resolve.exports": "^2.0.0" }, "peerDependencies": { "rollup": ">=2.0.0", "svelte": ">=3.5.0" } }, "sha512-LlniP+h00DfM+E4eav/Kk8uGjgPUjGIBfrAS/IxQvsuFdqSM0Y2sXf31AdxuIGSW9GsmocDqOfaxR5QNno/Tgw=="],
"sade": ["sade@1.8.1", "", { "dependencies": { "mri": "^1.1.0" } }, "sha512-xal3CZX1Xlo/k4ApwCFrHVACi9fBqJ7V+mwhBsuf/1IOKbBy098Fex+Wa/5QMubw09pSZ/u8EY8PWgevJsXp1A=="],
"safe-buffer": ["safe-buffer@5.2.1", "", {}, "sha512-rp3So07KcdmmKbGvgaNxQSJr7bGVSVk5S9Eq1F+ppbRo70+YeaDxkw5Dd8NPN+GD6bjnYm2VuPuCXmpuYvmCXQ=="],
"semiver": ["semiver@1.1.0", "", {}, "sha512-QNI2ChmuioGC1/xjyYwyZYADILWyW6AmS1UH6gDj/SFUUUS4MBAWs/7mxnkRPc/F4iHezDP+O8t0dO8WHiEOdg=="],
"serialize-javascript": ["serialize-javascript@6.0.2", "", { "dependencies": { "randombytes": "^2.1.0" } }, "sha512-Saa1xPByTTq2gdeFZYLLo+RFE35NHZkAbqZeWNd3BpzppeVisAqpDjcp8dyf6uIvEqJRd46jemmyA4iFIeVk8g=="],
"sirv": ["sirv@2.0.4", "", { "dependencies": { "@polka/url": "^1.0.0-next.24", "mrmime": "^2.0.0", "totalist": "^3.0.0" } }, "sha512-94Bdh3cC2PKrbgSOUqTiGPWVZeSiXfKOVZNJniWoqrWrRkB1CJzBU3NEbiTsPcYy1lDsANA/THzS+9WBiy5nfQ=="],
"sirv-cli": ["sirv-cli@2.0.2", "", { "dependencies": { "console-clear": "^1.1.0", "get-port": "^3.2.0", "kleur": "^4.1.4", "local-access": "^1.0.1", "sade": "^1.6.0", "semiver": "^1.0.0", "sirv": "^2.0.0", "tinydate": "^1.0.0" }, "bin": { "sirv": "bin.js" } }, "sha512-OtSJDwxsF1NWHc7ps3Sa0s+dPtP15iQNJzfKVz+MxkEo3z72mCD+yu30ct79rPr0CaV1HXSOBp+MIY5uIhHZ1A=="],
"smob": ["smob@1.5.0", "", {}, "sha512-g6T+p7QO8npa+/hNx9ohv1E5pVCmWrVCUzUXJyLdMmftX6ER0oiWY/w9knEonLpnOp6b6FenKnMfR8gqwWdwig=="],
"source-map": ["source-map@0.6.1", "", {}, "sha512-UjgapumWlbMhkBgzT7Ykc5YXUT46F0iKu8SGXq0bcwP5dz/h0Plj6enJqjz1Zbq2l5WaqYnrVbwWOWMyF3F47g=="],
"source-map-support": ["source-map-support@0.5.21", "", { "dependencies": { "buffer-from": "^1.0.0", "source-map": "^0.6.0" } }, "sha512-uBHU3L3czsIyYXKX88fdrGovxdSCoTGDRZ6SYXtSRxLZUzHg5P/66Ht6uoUlHu9EZod+inXhKo3qQgwXUT/y1w=="],
"supports-preserve-symlinks-flag": ["supports-preserve-symlinks-flag@1.0.0", "", {}, "sha512-ot0WnXS9fgdkgIcePe6RHNk1WA8+muPa6cSjeR3V8K27q9BB1rTE3R1p7Hv0z1ZyAc8s6Vvv8DIyWf681MAt0w=="],
"svelte": ["svelte@3.59.2", "", {}, "sha512-vzSyuGr3eEoAtT/A6bmajosJZIUWySzY2CzB3w2pgPvnkUjGqlDnsNnA0PMO+mMAhuyMul6C2uuZzY6ELSkzyA=="],
"terser": ["terser@5.44.0", "", { "dependencies": { "@jridgewell/source-map": "^0.3.3", "acorn": "^8.15.0", "commander": "^2.20.0", "source-map-support": "~0.5.20" }, "bin": { "terser": "bin/terser" } }, "sha512-nIVck8DK+GM/0Frwd+nIhZ84pR/BX7rmXMfYwyg+Sri5oGVE99/E3KvXqpC2xHFxyqXyGHTKBSioxxplrO4I4w=="],
"tinydate": ["tinydate@1.3.0", "", {}, "sha512-7cR8rLy2QhYHpsBDBVYnnWXm8uRTr38RoZakFSW7Bs7PzfMPNZthuMLkwqZv7MTu8lhQ91cOFYS5a7iFj2oR3w=="],
"to-regex-range": ["to-regex-range@5.0.1", "", { "dependencies": { "is-number": "^7.0.0" } }, "sha512-65P7iz6X5yEr1cwcgvQxbbIw7Uk3gOy5dIdtZ4rDveLqhrdJP+Li/Hx6tyK0NEb+2GCyneCMJiGqrADCSNk8sQ=="],
"totalist": ["totalist@3.0.1", "", {}, "sha512-sf4i37nQ2LBx4m3wB74y+ubopq6W/dIzXg0FDGjsYnZHVa1Da8FH853wlL2gtUhg+xJXjfk3kUZS3BRoQeoQBQ=="],
"wrappy": ["wrappy@1.0.2", "", {}, "sha512-l4Sp/DRseor9wL6EvV2+TuQn63dMkPjZ/sp9XkghTEbV9KlPS1xUsZ3u7/IQO4wxtcFB4bgpQPRcR3QCvezPcQ=="],
"ws": ["ws@7.5.10", "", { "peerDependencies": { "bufferutil": "^4.0.1", "utf-8-validate": "^5.0.2" }, "optionalPeers": ["bufferutil", "utf-8-validate"] }, "sha512-+dbF1tHwZpXcbOJdVOkzLDxZP1ailvSxM6ZweXTegylPny803bFhA+vqBYw4s31NSAk4S2Qz+AKXK9a4wkdjcQ=="],
"anymatch/picomatch": ["picomatch@2.3.1", "", {}, "sha512-JU3teHTNjmE2VCGFzuY8EXzCDVwEqB2a8fsIvwaStHhAWJEeVd1o1QD80CU6+ZdEXXSLbSsuLwJjkCBWqRQUVA=="],
"readdirp/picomatch": ["picomatch@2.3.1", "", {}, "sha512-JU3teHTNjmE2VCGFzuY8EXzCDVwEqB2a8fsIvwaStHhAWJEeVd1o1QD80CU6+ZdEXXSLbSsuLwJjkCBWqRQUVA=="],
"rollup-plugin-svelte/@rollup/pluginutils": ["@rollup/pluginutils@4.2.1", "", { "dependencies": { "estree-walker": "^2.0.1", "picomatch": "^2.2.2" } }, "sha512-iKnFXr7NkdZAIHiIWE+BX5ULi/ucVFYWD6TbAV+rZctiRTY2PL6tsIKhoIOaoskiWAkgu+VsbXgUVDNLHf+InQ=="],
"rollup-plugin-svelte/@rollup/pluginutils/picomatch": ["picomatch@2.3.1", "", {}, "sha512-JU3teHTNjmE2VCGFzuY8EXzCDVwEqB2a8fsIvwaStHhAWJEeVd1o1QD80CU6+ZdEXXSLbSsuLwJjkCBWqRQUVA=="],
}
}

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long


@@ -1,30 +1,14 @@
<!DOCTYPE html>
<!doctype html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Nostr Relay</title>
<link rel="stylesheet" crossorigin href="./index-q4cwd1fy.css"><script type="module" crossorigin src="./index-kk1m7jg4.js"></script></head>
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1" />
<title>Next Orly</title>
<link rel="icon" href="/favicon.png" type="image/png" />
<link rel="stylesheet" href="/bundle.css" />
</head>
<body>
<script>
// Apply system theme preference immediately to avoid flash of wrong theme
function applyTheme(isDark) {
document.body.classList.remove('bg-white', 'bg-gray-900');
document.body.classList.add(isDark ? 'bg-gray-900' : 'bg-white');
}
// Set initial theme
applyTheme(window.matchMedia && window.matchMedia('(prefers-color-scheme: dark)').matches);
// Listen for theme changes
if (window.matchMedia) {
window.matchMedia('(prefers-color-scheme: dark)').addEventListener('change', e => {
applyTheme(e.matches);
});
}
</script>
<div id="root"></div>
<div id="app"></div>
<script src="/bundle.js"></script>
</body>
</html>


@@ -1,112 +0,0 @@
/*
Local Tailwind CSS (minimal subset for this UI)
Note: This file includes just the utilities used by the app to keep size small.
You can replace this with a full Tailwind build if desired.
*/
/* Preflight-like resets (very minimal) */
*,::before,::after{box-sizing:border-box;border-width:0;border-style:solid;border-color:#e5e7eb}
html,body,#root{height:100%}
html{line-height:1.5;-webkit-text-size-adjust:100%;tab-size:4;font-family:ui-sans-serif,system-ui,-apple-system,Segoe UI,Roboto,Helvetica,Arial,Noto Sans,"Apple Color Emoji","Segoe UI Emoji"}
body{margin:0}
button,input{font:inherit;color:inherit}
img{display:block;max-width:100%;height:auto}
/* Layout */
.sticky{position:sticky}.relative{position:relative}.absolute{position:absolute}
.top-0{top:0}.left-0{left:0}.inset-0{top:0;right:0;bottom:0;left:0}
.z-50{z-index:50}.z-10{z-index:10}
.block{display:block}.flex{display:flex}
.items-center{align-items:center}.justify-start{justify-content:flex-start}.justify-center{justify-content:center}.justify-end{justify-content:flex-end}
.flex-grow{flex-grow:1}.shrink-0{flex-shrink:0}
.overflow-hidden{overflow:hidden}
/* Sizing */
.w-full{width:100%}.w-auto{width:auto}.w-16{width:4rem}
.h-full{height:100%}.h-16{height:4rem}
.aspect-square{aspect-ratio:1/1}
.max-w-3xl{max-width:48rem}
/* Spacing */
.p-0{padding:0}.p-2{padding:.5rem}.p-3{padding:.75rem}.p-6{padding:1.5rem}
.px-2{padding-left:.5rem;padding-right:.5rem}
.mr-0{margin-right:0}.mr-2{margin-right:.5rem}
.mt-2{margin-top:.5rem}.mt-5{margin-top:1.25rem}
.mb-1{margin-bottom:.25rem}.mb-2{margin-bottom:.5rem}.mb-4{margin-bottom:1rem}.mb-5{margin-bottom:1.25rem}
.mx-auto{margin-left:auto;margin-right:auto}
/* Borders & Radius */
.rounded{border-radius:.25rem}.rounded-full{border-radius:9999px}
.border-0{border-width:0}.border-2{border-width:2px}
.border-white{border-color:#fff}
.border{border-width:1px}.border-gray-300{border-color:#d1d5db}.border-gray-600{border-color:#4b5563}
.border-red-500{border-color:#ef4444}.border-red-700{border-color:#b91c1c}
/* Colors / Backgrounds */
.bg-white{background-color:#fff}
.bg-gray-100{background-color:#f3f4f6}
.bg-gray-200{background-color:#e5e7eb}
.bg-gray-300{background-color:#d1d5db}
.bg-gray-600{background-color:#4b5563}
.bg-gray-700{background-color:#374151}
.bg-gray-800{background-color:#1f2937}
.bg-gray-900{background-color:#111827}
.bg-blue-500{background-color:#3b82f6}
.bg-blue-600{background-color:#2563eb}.hover\:bg-blue-700:hover{background-color:#1d4ed8}
.hover\:bg-blue-600:hover{background-color:#2563eb}
.bg-red-600{background-color:#dc2626}.hover\:bg-red-700:hover{background-color:#b91c1c}
.bg-cyan-100{background-color:#cffafe}
.bg-green-100{background-color:#d1fae5}
.bg-red-100{background-color:#fee2e2}
.bg-red-50{background-color:#fef2f2}
.bg-green-900{background-color:#064e3b}
.bg-red-900{background-color:#7f1d1d}
.bg-cyan-900{background-color:#164e63}
.bg-cover{background-size:cover}.bg-center{background-position:center}
.bg-transparent{background-color:transparent}
/* Text */
.text-left{text-align:left}
.text-white{color:#fff}
.text-gray-300{color:#d1d5db}
.text-gray-500{color:#6b7280}.hover\:text-gray-800:hover{color:#1f2937}
.hover\:text-gray-100:hover{color:#f3f4f6}
.text-gray-700{color:#374151}
.text-gray-800{color:#1f2937}
.text-gray-900{color:#111827}
.text-gray-100{color:#f3f4f6}
.text-green-800{color:#065f46}
.text-green-100{color:#dcfce7}
.text-red-800{color:#991b1b}
.text-red-200{color:#fecaca}
.text-red-100{color:#fee2e2}
.text-cyan-800{color:#155e75}
.text-cyan-100{color:#cffafe}
.text-base{font-size:1rem;line-height:1.5rem}
.text-lg{font-size:1.125rem;line-height:1.75rem}
.text-2xl{font-size:1.5rem;line-height:2rem}
.font-bold{font-weight:700}
/* Opacity */
.opacity-70{opacity:.7}
/* Effects */
.shadow{--tw-shadow:0 1px 3px 0 rgba(0,0,0,0.1),0 1px 2px -1px rgba(0,0,0,0.1);box-shadow:var(--tw-shadow)}
/* Cursor */
.cursor-pointer{cursor:pointer}
/* Box model */
.box-border{box-sizing:border-box}
/* Utilities */
.hover\:bg-transparent:hover{background-color:transparent}
.hover\:bg-gray-200:hover{background-color:#e5e7eb}
.hover\:bg-gray-600:hover{background-color:#4b5563}
.focus\:ring-2:focus{--tw-ring-offset-shadow:var(--tw-ring-inset) 0 0 0 var(--tw-ring-offset-width) var(--tw-ring-offset-color);--tw-ring-shadow:var(--tw-ring-inset) 0 0 0 calc(2px + var(--tw-ring-offset-width)) var(--tw-ring-color);box-shadow:var(--tw-ring-offset-shadow),var(--tw-ring-shadow),var(--tw-shadow, 0 0 #0000)}
.focus\:ring-blue-200:focus{--tw-ring-color:rgba(191, 219, 254, var(--tw-ring-opacity))}
.focus\:ring-blue-500:focus{--tw-ring-color:rgba(59, 130, 246, var(--tw-ring-opacity))}
.disabled\:opacity-50:disabled{opacity:.5}
.disabled\:cursor-not-allowed:disabled{cursor:not-allowed}
/* Height for avatar images in header already inherit from container */

app/web/favicon.ico (new binary file, 485 KiB)


@@ -1,19 +1,24 @@
{
"name": "orly-web",
"version": "0.1.0",
"name": "svelte-app",
"version": "1.0.0",
"private": true,
"type": "module",
"scripts": {
"dev": "bun --hot --port 5173 public/dev.html",
"build": "rm -rf dist && bun build ./public/index.html --outdir ./dist --minify --splitting && cp -r public/tailwind.min.css dist/",
"preview": "bun x serve dist"
},
"dependencies": {
"react": "^18.2.0",
"react-dom": "^18.2.0",
"react-json-pretty": "^2.2.0"
"build": "rollup -c",
"dev": "rollup -c -w",
"start": "sirv public --no-clear"
},
"devDependencies": {
"bun-types": "latest"
"@rollup/plugin-commonjs": "^24.0.0",
"@rollup/plugin-node-resolve": "^15.0.0",
"@rollup/plugin-terser": "^0.4.0",
"rollup": "^3.15.0",
"rollup-plugin-css-only": "^4.3.0",
"rollup-plugin-livereload": "^2.0.0",
"rollup-plugin-svelte": "^7.1.2",
"svelte": "^3.55.0"
},
"dependencies": {
"sirv-cli": "^2.0.0"
}
}
}

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long


@@ -1,13 +0,0 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Nostr Relay (Dev)</title>
<link rel="stylesheet" href="tailwind.min.css" />
</head>
<body class="bg-white">
<div id="root"></div>
<script type="module" src="/src/index.jsx"></script>
</body>
</html>

app/web/public/favicon.png (new binary file, 3.1 KiB)

app/web/public/global.css Normal file

@@ -0,0 +1,69 @@
html,
body {
position: relative;
width: 100%;
height: 100%;
}
body {
color: #333;
margin: 0;
padding: 8px;
box-sizing: border-box;
font-family:
-apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, Oxygen-Sans, Ubuntu,
Cantarell, "Helvetica Neue", sans-serif;
}
a {
color: rgb(0, 100, 200);
text-decoration: none;
}
a:hover {
text-decoration: underline;
}
a:visited {
color: rgb(0, 80, 160);
}
label {
display: block;
}
input,
button,
select,
textarea {
font-family: inherit;
font-size: inherit;
-webkit-padding: 0.4em 0;
padding: 0.4em;
margin: 0 0 0.5em 0;
box-sizing: border-box;
border: 1px solid #ccc;
border-radius: 2px;
}
input:disabled {
color: #ccc;
}
button {
color: #333;
background-color: #f4f4f4;
outline: none;
}
button:disabled {
color: #999;
}
button:not(:disabled):active {
background-color: #ddd;
}
button:focus {
border-color: #666;
}


@@ -1,30 +1,17 @@
<!DOCTYPE html>
<!doctype html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Nostr Relay</title>
<link rel="stylesheet" href="tailwind.min.css" />
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width,initial-scale=1" />
<title>ORLY?</title>
<link rel="icon" type="image/png" href="/orly.png" />
<link rel="stylesheet" href="/global.css" />
<link rel="stylesheet" href="/build/bundle.css" />
<script defer src="/build/bundle.js"></script>
</head>
<body>
<script>
// Apply system theme preference immediately to avoid flash of wrong theme
function applyTheme(isDark) {
document.body.classList.remove('bg-white', 'bg-gray-900');
document.body.classList.add(isDark ? 'bg-gray-900' : 'bg-white');
}
// Set initial theme
applyTheme(window.matchMedia && window.matchMedia('(prefers-color-scheme: dark)').matches);
// Listen for theme changes
if (window.matchMedia) {
window.matchMedia('(prefers-color-scheme: dark)').addEventListener('change', e => {
applyTheme(e.matches);
});
}
</script>
<div id="root"></div>
<script type="module" src="/src/index.jsx"></script>
</body>
</html>
<body></body>
</html>

BIN app/web/public/orly.png (new binary file, 514 KiB; binary diff not shown)


@@ -1,112 +0,0 @@
/*
Local Tailwind CSS (minimal subset for this UI)
Note: This file includes just the utilities used by the app to keep size small.
You can replace this with a full Tailwind build if desired.
*/
/* Preflight-like resets (very minimal) */
*,::before,::after{box-sizing:border-box;border-width:0;border-style:solid;border-color:#e5e7eb}
html,body,#root{height:100%}
html{line-height:1.5;-webkit-text-size-adjust:100%;tab-size:4;font-family:ui-sans-serif,system-ui,-apple-system,Segoe UI,Roboto,Helvetica,Arial,Noto Sans,"Apple Color Emoji","Segoe UI Emoji"}
body{margin:0}
button,input{font:inherit;color:inherit}
img{display:block;max-width:100%;height:auto}
/* Layout */
.sticky{position:sticky}.relative{position:relative}.absolute{position:absolute}
.top-0{top:0}.left-0{left:0}.inset-0{top:0;right:0;bottom:0;left:0}
.z-50{z-index:50}.z-10{z-index:10}
.block{display:block}.flex{display:flex}
.items-center{align-items:center}.justify-start{justify-content:flex-start}.justify-center{justify-content:center}.justify-end{justify-content:flex-end}
.flex-grow{flex-grow:1}.shrink-0{flex-shrink:0}
.overflow-hidden{overflow:hidden}
/* Sizing */
.w-full{width:100%}.w-auto{width:auto}.w-16{width:4rem}
.h-full{height:100%}.h-16{height:4rem}
.aspect-square{aspect-ratio:1/1}
.max-w-3xl{max-width:48rem}
/* Spacing */
.p-0{padding:0}.p-2{padding:.5rem}.p-3{padding:.75rem}.p-6{padding:1.5rem}
.px-2{padding-left:.5rem;padding-right:.5rem}
.mr-0{margin-right:0}.mr-2{margin-right:.5rem}
.mt-2{margin-top:.5rem}.mt-5{margin-top:1.25rem}
.mb-1{margin-bottom:.25rem}.mb-2{margin-bottom:.5rem}.mb-4{margin-bottom:1rem}.mb-5{margin-bottom:1.25rem}
.mx-auto{margin-left:auto;margin-right:auto}
/* Borders & Radius */
.rounded{border-radius:.25rem}.rounded-full{border-radius:9999px}
.border-0{border-width:0}.border-2{border-width:2px}
.border-white{border-color:#fff}
.border{border-width:1px}.border-gray-300{border-color:#d1d5db}.border-gray-600{border-color:#4b5563}
.border-red-500{border-color:#ef4444}.border-red-700{border-color:#b91c1c}
/* Colors / Backgrounds */
.bg-white{background-color:#fff}
.bg-gray-100{background-color:#f3f4f6}
.bg-gray-200{background-color:#e5e7eb}
.bg-gray-300{background-color:#d1d5db}
.bg-gray-600{background-color:#4b5563}
.bg-gray-700{background-color:#374151}
.bg-gray-800{background-color:#1f2937}
.bg-gray-900{background-color:#111827}
.bg-blue-500{background-color:#3b82f6}
.bg-blue-600{background-color:#2563eb}.hover\:bg-blue-700:hover{background-color:#1d4ed8}
.hover\:bg-blue-600:hover{background-color:#2563eb}
.bg-red-600{background-color:#dc2626}.hover\:bg-red-700:hover{background-color:#b91c1c}
.bg-cyan-100{background-color:#cffafe}
.bg-green-100{background-color:#d1fae5}
.bg-red-100{background-color:#fee2e2}
.bg-red-50{background-color:#fef2f2}
.bg-green-900{background-color:#064e3b}
.bg-red-900{background-color:#7f1d1d}
.bg-cyan-900{background-color:#164e63}
.bg-cover{background-size:cover}.bg-center{background-position:center}
.bg-transparent{background-color:transparent}
/* Text */
.text-left{text-align:left}
.text-white{color:#fff}
.text-gray-300{color:#d1d5db}
.text-gray-500{color:#6b7280}.hover\:text-gray-800:hover{color:#1f2937}
.hover\:text-gray-100:hover{color:#f3f4f6}
.text-gray-700{color:#374151}
.text-gray-800{color:#1f2937}
.text-gray-900{color:#111827}
.text-gray-100{color:#f3f4f6}
.text-green-800{color:#065f46}
.text-green-100{color:#dcfce7}
.text-red-800{color:#991b1b}
.text-red-200{color:#fecaca}
.text-red-100{color:#fee2e2}
.text-cyan-800{color:#155e75}
.text-cyan-100{color:#cffafe}
.text-base{font-size:1rem;line-height:1.5rem}
.text-lg{font-size:1.125rem;line-height:1.75rem}
.text-2xl{font-size:1.5rem;line-height:2rem}
.font-bold{font-weight:700}
/* Opacity */
.opacity-70{opacity:.7}
/* Effects */
.shadow{--tw-shadow:0 1px 3px 0 rgba(0,0,0,0.1),0 1px 2px -1px rgba(0,0,0,0.1);box-shadow:var(--tw-shadow)}
/* Cursor */
.cursor-pointer{cursor:pointer}
/* Box model */
.box-border{box-sizing:border-box}
/* Utilities */
.hover\:bg-transparent:hover{background-color:transparent}
.hover\:bg-gray-200:hover{background-color:#e5e7eb}
.hover\:bg-gray-600:hover{background-color:#4b5563}
.focus\:ring-2:focus{--tw-ring-offset-shadow:var(--tw-ring-inset) 0 0 0 var(--tw-ring-offset-width) var(--tw-ring-offset-color);--tw-ring-shadow:var(--tw-ring-inset) 0 0 0 calc(2px + var(--tw-ring-offset-width)) var(--tw-ring-color);box-shadow:var(--tw-ring-offset-shadow),var(--tw-ring-shadow),var(--tw-shadow, 0 0 #0000)}
.focus\:ring-blue-200:focus{--tw-ring-color:rgba(191, 219, 254, var(--tw-ring-opacity))}
.focus\:ring-blue-500:focus{--tw-ring-color:rgba(59, 130, 246, var(--tw-ring-opacity))}
.disabled\:opacity-50:disabled{opacity:.5}
.disabled\:cursor-not-allowed:disabled{cursor:not-allowed}
/* Height for avatar images in header already inherit from container */

app/web/readme.adoc (new file, 3 lines)

@@ -0,0 +1,3 @@
= nostrly.app
a simple, Material Design Nostr client for kind 1 notes

app/web/rollup.config.js (new file, 78 lines)

@@ -0,0 +1,78 @@
import { spawn } from "child_process";
import svelte from "rollup-plugin-svelte";
import commonjs from "@rollup/plugin-commonjs";
import terser from "@rollup/plugin-terser";
import resolve from "@rollup/plugin-node-resolve";
import livereload from "rollup-plugin-livereload";
import css from "rollup-plugin-css-only";
const production = !process.env.ROLLUP_WATCH;
function serve() {
let server;
function toExit() {
if (server) server.kill(0);
}
return {
writeBundle() {
if (server) return;
server = spawn("npm", ["run", "start", "--", "--dev"], {
stdio: ["ignore", "inherit", "inherit"],
shell: true,
});
process.on("SIGTERM", toExit);
process.on("exit", toExit);
},
};
}
export default {
input: "src/main.js",
output: {
sourcemap: true,
format: "iife",
name: "app",
file: "dist/bundle.js",
},
plugins: [
svelte({
compilerOptions: {
// enable run-time checks when not in production
dev: !production,
},
}),
// we'll extract any component CSS out into
// a separate file - better for performance
css({ output: "bundle.css" }),
// If you have external dependencies installed from
// npm, you'll most likely need these plugins. In
// some cases you'll need additional configuration -
// consult the documentation for details:
// https://github.com/rollup/plugins/tree/master/packages/commonjs
resolve({
browser: true,
dedupe: ["svelte"],
exportConditions: ["svelte"],
}),
commonjs(),
// In dev mode, call `npm run start` once
// the bundle has been generated
!production && serve(),
// Watch the `public` directory and refresh the
// browser on changes when not in production
!production && livereload("public"),
// If we're building for production (npm run build
// instead of npm run dev), minify
production && terser(),
],
watch: {
clearScreen: false,
},
};


@@ -0,0 +1,147 @@
// @ts-check
/** This script modifies the project to support TS code in .svelte files like:
<script lang="ts">
export let name: string;
</script>
As well as validating the code for CI.
*/
/** To work on this script:
rm -rf test-template template && git clone sveltejs/template test-template && node scripts/setupTypeScript.js test-template
*/
import fs from "fs";
import path from "path";
import { argv } from "process";
import url from "url";
const __filename = url.fileURLToPath(import.meta.url);
const __dirname = url.fileURLToPath(new URL(".", import.meta.url));
const projectRoot = argv[2] || path.join(__dirname, "..");
// Add deps to pkg.json
const packageJSON = JSON.parse(
fs.readFileSync(path.join(projectRoot, "package.json"), "utf8"),
);
packageJSON.devDependencies = Object.assign(packageJSON.devDependencies, {
"svelte-check": "^3.0.0",
"svelte-preprocess": "^5.0.0",
"@rollup/plugin-typescript": "^11.0.0",
typescript: "^4.9.0",
tslib: "^2.5.0",
"@tsconfig/svelte": "^3.0.0",
});
// Add script for checking
packageJSON.scripts = Object.assign(packageJSON.scripts, {
check: "svelte-check",
});
// Write the package JSON
fs.writeFileSync(
path.join(projectRoot, "package.json"),
JSON.stringify(packageJSON, null, " "),
);
// mv src/main.js to main.ts - note, we need to edit rollup.config.js for this too
const beforeMainJSPath = path.join(projectRoot, "src", "main.js");
const afterMainTSPath = path.join(projectRoot, "src", "main.ts");
fs.renameSync(beforeMainJSPath, afterMainTSPath);
// Switch the app.svelte file to use TS
const appSveltePath = path.join(projectRoot, "src", "App.svelte");
let appFile = fs.readFileSync(appSveltePath, "utf8");
appFile = appFile.replace("<script>", '<script lang="ts">');
appFile = appFile.replace("export let name;", "export let name: string;");
fs.writeFileSync(appSveltePath, appFile);
// Edit rollup config
const rollupConfigPath = path.join(projectRoot, "rollup.config.js");
let rollupConfig = fs.readFileSync(rollupConfigPath, "utf8");
// Edit imports
rollupConfig = rollupConfig.replace(
`'rollup-plugin-css-only';`,
`'rollup-plugin-css-only';
import sveltePreprocess from 'svelte-preprocess';
import typescript from '@rollup/plugin-typescript';`,
);
// Replace name of entry point
rollupConfig = rollupConfig.replace(`'src/main.js'`, `'src/main.ts'`);
// Add preprocessor
rollupConfig = rollupConfig.replace(
"compilerOptions:",
"preprocess: sveltePreprocess({ sourceMap: !production }),\n\t\t\tcompilerOptions:",
);
// Add TypeScript
rollupConfig = rollupConfig.replace(
"commonjs(),",
"commonjs(),\n\t\ttypescript({\n\t\t\tsourceMap: !production,\n\t\t\tinlineSources: !production\n\t\t}),",
);
fs.writeFileSync(rollupConfigPath, rollupConfig);
// Add tsconfig.json
const tsconfig = `{
"extends": "@tsconfig/svelte/tsconfig.json",
"include": ["src/**/*"],
"exclude": ["node_modules/*", "__sapper__/*", "public/*"]
}`;
const tsconfigPath = path.join(projectRoot, "tsconfig.json");
fs.writeFileSync(tsconfigPath, tsconfig);
// Add svelte.config.js
const svelteConfig = `import sveltePreprocess from 'svelte-preprocess';
export default {
preprocess: sveltePreprocess()
};
`;
const svelteConfigPath = path.join(projectRoot, "svelte.config.js");
fs.writeFileSync(svelteConfigPath, svelteConfig);
// Add global.d.ts
const dtsPath = path.join(projectRoot, "src", "global.d.ts");
fs.writeFileSync(dtsPath, `/// <reference types="svelte" />`);
// Delete this script, but not during testing
if (!argv[2]) {
// Remove the script
fs.unlinkSync(path.join(__filename));
// Check for Mac's DS_store file, and if it's the only one left remove it
const remainingFiles = fs.readdirSync(path.join(__dirname));
if (remainingFiles.length === 1 && remainingFiles[0] === ".DS_store") {
fs.unlinkSync(path.join(__dirname, ".DS_store"));
}
// Check if the scripts folder is empty
if (fs.readdirSync(path.join(__dirname)).length === 0) {
// Remove the scripts folder
fs.rmdirSync(path.join(__dirname));
}
}
// Adds the extension recommendation
fs.mkdirSync(path.join(projectRoot, ".vscode"), { recursive: true });
fs.writeFileSync(
path.join(projectRoot, ".vscode", "extensions.json"),
`{
"recommendations": ["svelte.svelte-vscode"]
}
`,
);
console.log("Converted to TypeScript.");
if (fs.existsSync(path.join(projectRoot, "node_modules"))) {
console.log(
"\nYou will need to re-run your dependency manager to get started.",
);
}

File diff suppressed because it is too large.

app/web/src/App.svelte (new file, 2920 lines)

File diff suppressed because it is too large.


@@ -0,0 +1,392 @@
<script>
import { createEventDispatcher } from 'svelte';
const dispatch = createEventDispatcher();
export let showModal = false;
export let isDarkTheme = false;
let activeTab = 'extension';
let nsecInput = '';
let isLoading = false;
let errorMessage = '';
let successMessage = '';
function closeModal() {
showModal = false;
nsecInput = '';
errorMessage = '';
successMessage = '';
dispatch('close');
}
function switchTab(tab) {
activeTab = tab;
errorMessage = '';
successMessage = '';
}
async function loginWithExtension() {
isLoading = true;
errorMessage = '';
successMessage = '';
try {
// Check if window.nostr is available
if (!window.nostr) {
throw new Error('No Nostr extension found. Please install a NIP-07 compatible extension like nos2x or Alby.');
}
// Get public key from extension
const pubkey = await window.nostr.getPublicKey();
if (pubkey) {
// Store authentication info
localStorage.setItem('nostr_auth_method', 'extension');
localStorage.setItem('nostr_pubkey', pubkey);
successMessage = 'Successfully logged in with extension!';
dispatch('login', {
method: 'extension',
pubkey: pubkey,
signer: window.nostr
});
setTimeout(() => {
closeModal();
}, 1500);
}
} catch (error) {
errorMessage = error.message;
} finally {
isLoading = false;
}
}
function validateNsec(nsec) {
// Basic validation for nsec format
if (!nsec.startsWith('nsec1')) {
return false;
}
// Should be around 63 characters long
if (nsec.length < 60 || nsec.length > 70) {
return false;
}
return true;
}
function nsecToHex(nsec) {
// This is a simplified conversion - in a real app you'd use a proper library
// For demo purposes, we'll simulate the conversion
try {
// Remove 'nsec1' prefix and decode (simplified)
const withoutPrefix = nsec.slice(5);
// In reality, you'd use bech32 decoding here
// For now, we'll generate a mock hex key
return 'mock_' + withoutPrefix.slice(0, 32);
} catch (error) {
throw new Error('Invalid nsec format');
}
}
async function loginWithNsec() {
isLoading = true;
errorMessage = '';
successMessage = '';
try {
if (!nsecInput.trim()) {
throw new Error('Please enter your nsec');
}
if (!validateNsec(nsecInput.trim())) {
throw new Error('Invalid nsec format. Must start with "nsec1"');
}
// Convert nsec to hex format (simplified for demo)
const privateKey = nsecToHex(nsecInput.trim());
// In a real implementation, you'd derive the public key from private key
const publicKey = 'derived_' + privateKey.slice(5, 37);
// Store securely (in production, consider more secure storage)
localStorage.setItem('nostr_auth_method', 'nsec');
localStorage.setItem('nostr_pubkey', publicKey);
localStorage.setItem('nostr_privkey', privateKey);
successMessage = 'Successfully logged in with nsec!';
dispatch('login', {
method: 'nsec',
pubkey: publicKey,
privateKey: privateKey
});
setTimeout(() => {
closeModal();
}, 1500);
} catch (error) {
errorMessage = error.message;
} finally {
isLoading = false;
}
}
function handleKeydown(event) {
if (event.key === 'Escape') {
closeModal();
}
if (event.key === 'Enter' && activeTab === 'nsec') {
loginWithNsec();
}
}
</script>
<svelte:window on:keydown={handleKeydown} />
{#if showModal}
<div class="modal-overlay" on:click={closeModal} on:keydown={(e) => e.key === 'Escape' && closeModal()} role="button" tabindex="0">
<div class="modal" class:dark-theme={isDarkTheme} on:click|stopPropagation on:keydown|stopPropagation>
<div class="modal-header">
<h2>Login to Nostr</h2>
<button class="close-btn" on:click={closeModal}>&times;</button>
</div>
<div class="tab-container">
<div class="tabs">
<button
class="tab-btn"
class:active={activeTab === 'extension'}
on:click={() => switchTab('extension')}
>
Extension
</button>
<button
class="tab-btn"
class:active={activeTab === 'nsec'}
on:click={() => switchTab('nsec')}
>
Nsec
</button>
</div>
<div class="tab-content">
{#if activeTab === 'extension'}
<div class="extension-login">
<p>Login using a NIP-07 compatible browser extension like nos2x or Alby.</p>
<button
class="login-extension-btn"
on:click={loginWithExtension}
disabled={isLoading}
>
{isLoading ? 'Connecting...' : 'Log in using extension'}
</button>
</div>
{:else}
<div class="nsec-login">
<p>Enter your nsec (private key) to login. This will be stored securely in your browser.</p>
<input
type="password"
placeholder="nsec1..."
bind:value={nsecInput}
disabled={isLoading}
class="nsec-input"
/>
<button
class="login-nsec-btn"
on:click={loginWithNsec}
disabled={isLoading || !nsecInput.trim()}
>
{isLoading ? 'Logging in...' : 'Log in with nsec'}
</button>
</div>
{/if}
{#if errorMessage}
<div class="message error-message">{errorMessage}</div>
{/if}
{#if successMessage}
<div class="message success-message">{successMessage}</div>
{/if}
</div>
</div>
</div>
</div>
{/if}
<style>
.modal-overlay {
position: fixed;
top: 0;
left: 0;
width: 100%;
height: 100%;
background-color: rgba(0, 0, 0, 0.5);
display: flex;
justify-content: center;
align-items: center;
z-index: 1000;
}
.modal {
background: var(--bg-color);
border-radius: 8px;
box-shadow: 0 4px 20px rgba(0, 0, 0, 0.3);
width: 90%;
max-width: 500px;
max-height: 90vh;
overflow-y: auto;
border: 1px solid var(--border-color);
}
.modal-header {
display: flex;
justify-content: space-between;
align-items: center;
padding: 20px;
border-bottom: 1px solid var(--border-color);
}
.modal-header h2 {
margin: 0;
color: var(--text-color);
font-size: 1.5rem;
}
.close-btn {
background: none;
border: none;
font-size: 1.5rem;
cursor: pointer;
color: var(--text-color);
padding: 0;
width: 30px;
height: 30px;
display: flex;
align-items: center;
justify-content: center;
border-radius: 50%;
transition: background-color 0.2s;
}
.close-btn:hover {
background-color: var(--tab-hover-bg);
}
.tab-container {
padding: 20px;
}
.tabs {
display: flex;
border-bottom: 1px solid var(--border-color);
margin-bottom: 20px;
}
.tab-btn {
flex: 1;
padding: 12px 16px;
background: none;
border: none;
cursor: pointer;
color: var(--text-color);
font-size: 1rem;
transition: all 0.2s;
border-bottom: 2px solid transparent;
}
.tab-btn:hover {
background-color: var(--tab-hover-bg);
}
.tab-btn.active {
border-bottom-color: var(--primary);
color: var(--primary);
}
.tab-content {
min-height: 200px;
}
.extension-login,
.nsec-login {
display: flex;
flex-direction: column;
gap: 16px;
}
.extension-login p,
.nsec-login p {
margin: 0;
color: var(--text-color);
line-height: 1.5;
}
.login-extension-btn,
.login-nsec-btn {
padding: 12px 24px;
background: var(--primary);
color: white;
border: none;
border-radius: 6px;
cursor: pointer;
font-size: 1rem;
transition: background-color 0.2s;
}
.login-extension-btn:hover:not(:disabled),
.login-nsec-btn:hover:not(:disabled) {
background: #00ACC1;
}
.login-extension-btn:disabled,
.login-nsec-btn:disabled {
background: #ccc;
cursor: not-allowed;
}
.nsec-input {
padding: 12px;
border: 1px solid var(--input-border);
border-radius: 6px;
font-size: 1rem;
background: var(--bg-color);
color: var(--text-color);
}
.nsec-input:focus {
outline: none;
border-color: var(--primary);
}
.message {
padding: 10px;
border-radius: 4px;
margin-top: 16px;
text-align: center;
}
.error-message {
background: #ffebee;
color: #c62828;
border: 1px solid #ffcdd2;
}
.success-message {
background: #e8f5e8;
color: #2e7d32;
border: 1px solid #c8e6c9;
}
.modal.dark-theme .error-message {
background: #4a2c2a;
color: #ffcdd2;
border: 1px solid #6d4c41;
}
.modal.dark-theme .success-message {
background: #2e4a2e;
color: #a5d6a7;
border: 1px solid #4caf50;
}
</style>
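The `nsecToHex` helper in the component above is explicitly a mock. A real conversion bech32-decodes the `nsec1…` string into 32 raw bytes, the core of which is regrouping 5-bit bech32 words into 8-bit bytes. A minimal sketch of that step follows; checksum verification is omitted for brevity, and a production client should use a maintained library (e.g. `bech32` or nostr-tools) rather than this:

```javascript
// Bech32 character set (BIP-173); a character's index is its 5-bit value.
const CHARSET = "qpzry9x8gf2tvdw0s3jn54khce6mua7l";

// Regroup an array of integers from `fromBits`-wide words to `toBits`-wide words.
function convertBits(data, fromBits, toBits, pad) {
  let acc = 0;
  let bits = 0;
  const out = [];
  const maxv = (1 << toBits) - 1;
  for (const value of data) {
    if (value < 0 || value >> fromBits !== 0) throw new Error("invalid value");
    acc = (acc << fromBits) | value;
    bits += fromBits;
    while (bits >= toBits) {
      bits -= toBits;
      out.push((acc >> bits) & maxv);
    }
  }
  if (pad) {
    if (bits > 0) out.push((acc << (toBits - bits)) & maxv);
  } else if (bits >= fromBits || ((acc << (toBits - bits)) & maxv) !== 0) {
    throw new Error("invalid padding");
  }
  return out;
}

// Decode an "nsec1…" string to a lowercase hex private key.
// NOTE: the bech32 checksum is NOT verified in this sketch.
function nsecToHexReal(nsec) {
  const lowered = nsec.toLowerCase();
  const sep = lowered.lastIndexOf("1");
  if (!lowered.startsWith("nsec1") || sep < 1) throw new Error("not an nsec");
  const words = [...lowered.slice(sep + 1)].map((c) => {
    const v = CHARSET.indexOf(c);
    if (v === -1) throw new Error("invalid bech32 character");
    return v;
  });
  // The last 6 words are the checksum; the rest encode the key bytes.
  const bytes = convertBits(words.slice(0, -6), 5, 8, false);
  return bytes.map((b) => b.toString(16).padStart(2, "0")).join("");
}
```

The function name `nsecToHexReal` is illustrative, not part of this codebase; the same regrouping (8→5 bits) also underlies encoding a hex key back to `nsec`.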

app/web/src/constants.js (new file, 14 lines)

@@ -0,0 +1,14 @@
// Default Nostr relays for searching
export const DEFAULT_RELAYS = [
// Use the local relay WebSocket endpoint
`wss://${window.location.host}/ws`,
// Fallback to external relays if local fails
"wss://relay.damus.io",
"wss://relay.nostr.band",
"wss://nos.lol",
"wss://relay.nostr.net",
"wss://relay.minibits.cash",
"wss://relay.coinos.io/",
"wss://nwc.primal.net",
"wss://relay.orly.dev",
];


@@ -1,11 +0,0 @@
import React from 'react';
import { createRoot } from 'react-dom/client';
import App from './App';
import './styles.css';
const root = createRoot(document.getElementById('root'));
root.render(
<React.StrictMode>
<App />
</React.StrictMode>
);

app/web/src/main.js (new file, 11 lines)

@@ -0,0 +1,11 @@
import App from "./App.svelte";
import "../public/global.css";
const app = new App({
target: document.body,
props: {
name: "world",
},
});
export default app;

app/web/src/nostr.js (new file, 599 lines)

@@ -0,0 +1,599 @@
import { DEFAULT_RELAYS } from "./constants.js";
// Simple WebSocket relay manager
class NostrClient {
constructor() {
this.relays = new Map();
this.subscriptions = new Map();
}
async connect() {
console.log("Starting connection to", DEFAULT_RELAYS.length, "relays...");
const connectionPromises = DEFAULT_RELAYS.map((relayUrl) => {
return new Promise((resolve) => {
try {
console.log(`Attempting to connect to ${relayUrl}`);
const ws = new WebSocket(relayUrl);
ws.onopen = () => {
console.log(`✓ Successfully connected to ${relayUrl}`);
resolve(true);
};
ws.onerror = (error) => {
console.error(`✗ Error connecting to ${relayUrl}:`, error);
resolve(false);
};
ws.onclose = (event) => {
console.warn(
`Connection closed to ${relayUrl}:`,
event.code,
event.reason,
);
};
ws.onmessage = (event) => {
console.log(`Message from ${relayUrl}:`, event.data);
try {
this.handleMessage(relayUrl, JSON.parse(event.data));
} catch (error) {
console.error(
`Failed to parse message from ${relayUrl}:`,
error,
event.data,
);
}
};
this.relays.set(relayUrl, ws);
// Timeout after 5 seconds
setTimeout(() => {
if (ws.readyState !== WebSocket.OPEN) {
console.warn(`Connection timeout for ${relayUrl}`);
resolve(false);
}
}, 5000);
} catch (error) {
console.error(`Failed to create WebSocket for ${relayUrl}:`, error);
resolve(false);
}
});
});
const results = await Promise.all(connectionPromises);
const successfulConnections = results.filter(Boolean).length;
console.log(
`Connected to ${successfulConnections}/${DEFAULT_RELAYS.length} relays`,
);
// Wait a bit more for connections to stabilize
await new Promise((resolve) => setTimeout(resolve, 1000));
}
handleMessage(relayUrl, message) {
console.log(`Processing message from ${relayUrl}:`, message);
const [type, subscriptionId, event, ...rest] = message;
console.log(`Message type: ${type}, subscriptionId: ${subscriptionId}`);
if (type === "EVENT") {
console.log(`Received EVENT for subscription ${subscriptionId}:`, event);
if (this.subscriptions.has(subscriptionId)) {
console.log(
`Found callback for subscription ${subscriptionId}, executing...`,
);
const callback = this.subscriptions.get(subscriptionId);
callback(event);
} else {
console.warn(`No callback found for subscription ${subscriptionId}`);
}
} else if (type === "EOSE") {
console.log(
`End of stored events for subscription ${subscriptionId} from ${relayUrl}`,
);
// Dispatch EOSE event for fetchEvents function
if (this.subscriptions.has(subscriptionId)) {
window.dispatchEvent(new CustomEvent('nostr-eose', {
detail: { subscriptionId, relayUrl }
}));
}
} else if (type === "NOTICE") {
console.warn(`Notice from ${relayUrl}:`, subscriptionId);
} else {
console.log(`Unknown message type ${type} from ${relayUrl}:`, message);
}
}
subscribe(filters, callback) {
const subscriptionId = Math.random().toString(36).substring(7);
console.log(
`Creating subscription ${subscriptionId} with filters:`,
filters,
);
this.subscriptions.set(subscriptionId, callback);
const subscription = ["REQ", subscriptionId, filters];
console.log(`Subscription message:`, JSON.stringify(subscription));
let sentCount = 0;
for (const [relayUrl, ws] of this.relays) {
console.log(
`Checking relay ${relayUrl}, readyState: ${ws.readyState} (${ws.readyState === WebSocket.OPEN ? "OPEN" : "NOT OPEN"})`,
);
if (ws.readyState === WebSocket.OPEN) {
try {
ws.send(JSON.stringify(subscription));
console.log(`✓ Sent subscription to ${relayUrl}`);
sentCount++;
} catch (error) {
console.error(`✗ Failed to send subscription to ${relayUrl}:`, error);
}
} else {
console.warn(`✗ Cannot send to ${relayUrl}, connection not ready`);
}
}
console.log(
`Subscription ${subscriptionId} sent to ${sentCount}/${this.relays.size} relays`,
);
return subscriptionId;
}
unsubscribe(subscriptionId) {
this.subscriptions.delete(subscriptionId);
const closeMessage = ["CLOSE", subscriptionId];
for (const [relayUrl, ws] of this.relays) {
if (ws.readyState === WebSocket.OPEN) {
ws.send(JSON.stringify(closeMessage));
}
}
}
disconnect() {
for (const [relayUrl, ws] of this.relays) {
ws.close();
}
this.relays.clear();
this.subscriptions.clear();
}
// Publish an event to all connected relays
async publish(event) {
return new Promise((resolve, reject) => {
const eventMessage = ["EVENT", event];
console.log("Publishing event:", eventMessage);
let publishedCount = 0;
let okCount = 0;
let errorCount = 0;
const totalRelays = this.relays.size;
if (totalRelays === 0) {
reject(new Error("No relays connected"));
return;
}
const handleResponse = (relayUrl, success) => {
if (success) {
okCount++;
} else {
errorCount++;
}
if (okCount + errorCount === totalRelays) {
if (okCount > 0) {
resolve({ success: true, okCount, errorCount });
} else {
reject(new Error(`All relays rejected the event. Errors: ${errorCount}`));
}
}
};
// Set up a temporary listener for OK responses
const originalHandleMessage = this.handleMessage.bind(this);
this.handleMessage = (relayUrl, message) => {
if (message[0] === "OK" && message[1] === event.id) {
const success = message[2] === true;
console.log(`Relay ${relayUrl} response:`, success ? "OK" : "REJECTED", message[3] || "");
handleResponse(relayUrl, success);
}
// Call original handler for other messages
originalHandleMessage(relayUrl, message);
};
// Send to all connected relays
for (const [relayUrl, ws] of this.relays) {
if (ws.readyState === WebSocket.OPEN) {
try {
ws.send(JSON.stringify(eventMessage));
publishedCount++;
console.log(`Event sent to ${relayUrl}`);
} catch (error) {
console.error(`Failed to send event to ${relayUrl}:`, error);
handleResponse(relayUrl, false);
}
} else {
console.warn(`Relay ${relayUrl} is not open, skipping`);
handleResponse(relayUrl, false);
}
}
// Restore original handler after timeout
setTimeout(() => {
this.handleMessage = originalHandleMessage;
if (okCount + errorCount < totalRelays) {
reject(new Error("Timeout waiting for relay responses"));
}
}, 10000); // 10 second timeout
});
}
}
// Create a global client instance
export const nostrClient = new NostrClient();
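The `handleMessage`, `subscribe`, and `publish` methods above exchange NIP-01 JSON envelopes (`["REQ", …]`, `["CLOSE", …]`, `["EVENT", …]`, `["OK", …]`, `["EOSE", …]`). Those shapes can be sketched as standalone builders and a parser mirroring `handleMessage`'s cases, which is handy for testing without a live socket (names here are illustrative, not part of this codebase):

```javascript
// Client-to-relay envelope builders (NIP-01 wire format).
const reqEnvelope = (subId, ...filters) => JSON.stringify(["REQ", subId, ...filters]);
const closeEnvelope = (subId) => JSON.stringify(["CLOSE", subId]);
const eventEnvelope = (event) => JSON.stringify(["EVENT", event]);

// Parse a relay-to-client message into a tagged object.
function parseRelayMessage(raw) {
  const msg = JSON.parse(raw);
  switch (msg[0]) {
    case "EVENT":
      return { type: "EVENT", subId: msg[1], event: msg[2] };
    case "EOSE":
      return { type: "EOSE", subId: msg[1] };
    case "OK":
      // ["OK", <event id>, <accepted>, <reason>]
      return { type: "OK", eventId: msg[1], accepted: msg[2] === true, reason: msg[3] || "" };
    case "NOTICE":
      return { type: "NOTICE", text: msg[1] };
    default:
      return { type: "UNKNOWN", raw: msg };
  }
}
```

One design note: `publish` above temporarily replaces `this.handleMessage` to catch `OK` responses, which would misroute replies if two publishes overlap; keying pending publishes by event id (as the `OK` envelope allows) avoids that.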
// IndexedDB helpers for caching events (kind 0 profiles)
const DB_NAME = "nostrCache";
const DB_VERSION = 1;
const STORE_EVENTS = "events";
function openDB() {
return new Promise((resolve, reject) => {
try {
const req = indexedDB.open(DB_NAME, DB_VERSION);
req.onupgradeneeded = () => {
const db = req.result;
if (!db.objectStoreNames.contains(STORE_EVENTS)) {
const store = db.createObjectStore(STORE_EVENTS, { keyPath: "id" });
store.createIndex("byKindAuthor", ["kind", "pubkey"], {
unique: false,
});
store.createIndex(
"byKindAuthorCreated",
["kind", "pubkey", "created_at"],
{ unique: false },
);
}
};
req.onsuccess = () => resolve(req.result);
req.onerror = () => reject(req.error);
} catch (e) {
reject(e);
}
});
}
async function getLatestProfileEvent(pubkey) {
try {
const db = await openDB();
return await new Promise((resolve, reject) => {
const tx = db.transaction(STORE_EVENTS, "readonly");
const idx = tx.objectStore(STORE_EVENTS).index("byKindAuthorCreated");
const range = IDBKeyRange.bound(
[0, pubkey, -Infinity],
[0, pubkey, Infinity],
);
const req = idx.openCursor(range, "prev"); // newest first
req.onsuccess = () => {
const cursor = req.result;
resolve(cursor ? cursor.value : null);
};
req.onerror = () => reject(req.error);
});
} catch (e) {
console.warn("IDB getLatestProfileEvent failed", e);
return null;
}
}
async function putEvent(event) {
try {
const db = await openDB();
await new Promise((resolve, reject) => {
const tx = db.transaction(STORE_EVENTS, "readwrite");
tx.oncomplete = () => resolve();
tx.onerror = () => reject(tx.error);
tx.objectStore(STORE_EVENTS).put(event);
});
} catch (e) {
console.warn("IDB putEvent failed", e);
}
}
function parseProfileFromEvent(event) {
try {
const profile = JSON.parse(event.content || "{}");
return {
name: profile.name || profile.display_name || "",
picture: profile.picture || "",
banner: profile.banner || "",
about: profile.about || "",
nip05: profile.nip05 || "",
lud16: profile.lud16 || profile.lud06 || "",
};
} catch (e) {
return {
name: "",
picture: "",
banner: "",
about: "",
nip05: "",
lud16: "",
};
}
}
// Fetch user profile metadata (kind 0)
export async function fetchUserProfile(pubkey) {
return new Promise(async (resolve, reject) => {
console.log(`Starting profile fetch for pubkey: ${pubkey}`);
let resolved = false;
let newestEvent = null;
let debounceTimer = null;
let overallTimer = null;
let subscriptionId = null;
function cleanup() {
if (subscriptionId) {
try {
nostrClient.unsubscribe(subscriptionId);
} catch {}
}
if (debounceTimer) clearTimeout(debounceTimer);
if (overallTimer) clearTimeout(overallTimer);
}
// 1) Try cached profile first and resolve immediately if present
try {
const cachedEvent = await getLatestProfileEvent(pubkey);
if (cachedEvent) {
console.log("Using cached profile event");
const profile = parseProfileFromEvent(cachedEvent);
resolved = true; // resolve immediately with cache
resolve(profile);
}
} catch (e) {
console.warn("Failed to load cached profile", e);
}
// 2) Set overall timeout
overallTimer = setTimeout(() => {
if (!newestEvent) {
console.log("Profile fetch timeout reached");
if (!resolved) reject(new Error("Profile fetch timeout"));
} else if (!resolved) {
resolve(parseProfileFromEvent(newestEvent));
}
cleanup();
}, 15000);
// 3) Wait a bit to ensure connections are ready and then subscribe without limit
setTimeout(() => {
console.log("Starting subscription after connection delay...");
subscriptionId = nostrClient.subscribe(
{
kinds: [0],
authors: [pubkey],
},
(event) => {
// Collect all kind 0 events and pick the newest by created_at
if (!event || event.kind !== 0) return;
console.log("Profile event received:", event);
if (
!newestEvent ||
(event.created_at || 0) > (newestEvent.created_at || 0)
) {
newestEvent = event;
}
// Debounce to wait for more relays; then finalize selection
if (debounceTimer) clearTimeout(debounceTimer);
debounceTimer = setTimeout(async () => {
try {
if (newestEvent) {
await putEvent(newestEvent); // cache newest only
const profile = parseProfileFromEvent(newestEvent);
// Notify listeners that an updated profile is available
try {
if (typeof window !== "undefined" && window.dispatchEvent) {
window.dispatchEvent(
new CustomEvent("profile-updated", {
detail: { pubkey, profile, event: newestEvent },
}),
);
}
} catch (e) {
console.warn("Failed to dispatch profile-updated event", e);
}
if (!resolved) {
resolve(profile);
resolved = true;
}
}
} finally {
cleanup();
}
}, 800);
},
);
}, 2000);
});
}
// Fetch events using WebSocket REQ envelopes
export async function fetchEvents(filters, options = {}) {
return new Promise(async (resolve, reject) => {
console.log(`Starting event fetch with filters:`, filters);
let resolved = false;
let events = [];
let debounceTimer = null;
let overallTimer = null;
let subscriptionId = null;
let eoseReceived = false;
const {
timeout = 30000,
debounceDelay = 1000,
limit = null
} = options;
function cleanup() {
if (subscriptionId) {
try {
nostrClient.unsubscribe(subscriptionId);
} catch {}
}
if (debounceTimer) clearTimeout(debounceTimer);
if (overallTimer) clearTimeout(overallTimer);
}
// Set overall timeout
overallTimer = setTimeout(() => {
if (!resolved) {
console.log("Event fetch timeout reached");
if (events.length > 0) {
resolve(events);
} else {
reject(new Error("Event fetch timeout"));
}
resolved = true;
}
cleanup();
}, timeout);
// Subscribe to events
setTimeout(() => {
console.log("Starting event subscription...");
// Add limit to filters if specified
const requestFilters = { ...filters };
if (limit) {
requestFilters.limit = limit;
}
console.log('Sending REQ with filters:', requestFilters);
subscriptionId = nostrClient.subscribe(
requestFilters,
(event) => {
if (!event) return;
console.log("Event received:", event);
// Check if we already have this event (deduplication)
const existingEvent = events.find(e => e.id === event.id);
if (!existingEvent) {
events.push(event);
}
// If we have a limit and reached it, resolve immediately
if (limit && events.length >= limit) {
if (!resolved) {
resolve(events.slice(0, limit));
resolved = true;
}
cleanup();
return;
}
// Debounce to wait for more events
if (debounceTimer) clearTimeout(debounceTimer);
debounceTimer = setTimeout(() => {
if (eoseReceived && !resolved) {
resolve(events);
resolved = true;
cleanup();
}
}, debounceDelay);
},
);
// Listen for EOSE events
const handleEOSE = (event) => {
if (event.detail.subscriptionId === subscriptionId) {
console.log("EOSE received for subscription", subscriptionId);
eoseReceived = true;
// On EOSE, resolve with whatever arrived (possibly an empty list)
// rather than hanging until the overall timeout
if (!resolved) {
resolve(events);
resolved = true;
cleanup();
}
}
};
// Add EOSE listener
window.addEventListener('nostr-eose', handleEOSE);
// Wrap cleanup so the EOSE listener is removed as well; `cleanup` is a
// function declaration, so earlier timers call the reassigned version
const originalCleanup = cleanup;
cleanup = () => {
window.removeEventListener('nostr-eose', handleEOSE);
originalCleanup();
};
}, 1000);
});
}
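The linear `events.find` scan above is O(n²) across a large result set. A hedged alternative sketch, deduplicating by event id with a `Set` while preserving arrival order (`makeDeduper` is an illustrative name, not part of the source):

```javascript
// Sketch: Set-based deduplication by event id, preserving arrival order.
// A drop-in idea for the events.find() scan inside fetchEvents.
function makeDeduper() {
  const seen = new Set();
  const events = [];
  return {
    add(event) {
      if (!event || seen.has(event.id)) return false; // empty or duplicate
      seen.add(event.id);
      events.push(event);
      return true; // first time we've seen this id
    },
    get list() {
      return events;
    },
  };
}

const dedupe = makeDeduper();
dedupe.add({ id: "e1", kind: 1 });
dedupe.add({ id: "e2", kind: 1 });
dedupe.add({ id: "e1", kind: 1 }); // ignored duplicate
```

Set membership checks are O(1), so total cost stays linear in the number of events received across all relays.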
// Fetch events with optional since/until bounds (building block for timestamp-based pagination)
export async function fetchAllEvents(options = {}) {
const {
limit = 100,
since = null,
until = null,
authors = null
} = options;
const filters = {};
if (since) filters.since = since;
if (until) filters.until = until;
if (authors) filters.authors = authors;
const events = await fetchEvents(filters, {
limit: limit,
timeout: 30000
});
return events;
}
// Fetch a user's events with optional since/until bounds (building block for timestamp-based pagination)
export async function fetchUserEvents(pubkey, options = {}) {
const {
limit = 100,
since = null,
until = null
} = options;
const filters = {
authors: [pubkey]
};
if (since) filters.since = since;
if (until) filters.until = until;
const events = await fetchEvents(filters, {
limit: limit,
timeout: 30000
});
return events;
}
// Initialize client connection
export async function initializeNostrClient() {
await nostrClient.connect();
}


@@ -1,191 +0,0 @@
body {
font-family: Arial, sans-serif;
margin: 0;
padding: 0;
}
.container {
background: #f9f9f9;
padding: 30px;
border-radius: 8px;
margin-top: 20px; /* Reduced space since header is now sticky */
}
.form-group {
margin-bottom: 20px;
}
label {
display: block;
margin-bottom: 5px;
font-weight: bold;
}
input, textarea {
width: 100%;
padding: 10px;
border: 1px solid #ddd;
border-radius: 4px;
}
button {
background: #007cba;
color: white;
padding: 12px 20px;
border: none;
border-radius: 4px;
cursor: pointer;
}
button:hover {
background: #005a87;
}
.danger-button {
background: #dc3545;
}
.danger-button:hover {
background: #c82333;
}
.status {
margin-top: 20px;
margin-bottom: 20px;
padding: 10px;
border-radius: 4px;
}
.success {
background: #d4edda;
color: #155724;
}
.error {
background: #f8d7da;
color: #721c24;
}
.info {
background: #d1ecf1;
color: #0c5460;
}
.header-panel {
position: sticky;
top: 0;
left: 0;
width: 100%;
background-color: #f8f9fa;
box-shadow: 0 2px 4px rgba(0,0,0,0.1);
z-index: 1000;
height: 60px;
display: flex;
align-items: center;
background-size: cover;
background-position: center;
overflow: hidden;
}
.header-content {
display: flex;
align-items: center;
height: 100%;
padding: 0 0 0 12px;
width: 100%;
margin: 0 auto;
box-sizing: border-box;
}
.header-left {
display: flex;
align-items: center;
justify-content: flex-start;
height: 100%;
}
.header-center {
display: flex;
flex-grow: 1;
align-items: center;
justify-content: flex-start;
position: relative;
overflow: hidden;
}
.header-right {
display: flex;
align-items: center;
justify-content: flex-end;
height: 100%;
}
.header-logo {
height: 100%;
aspect-ratio: 1 / 1;
width: auto;
border-radius: 0;
object-fit: cover;
flex-shrink: 0;
}
.user-avatar {
width: 2em;
height: 2em;
border-radius: 50%;
object-fit: cover;
border: 2px solid white;
margin-right: 10px;
box-shadow: 0 1px 3px rgba(0,0,0,0.2);
}
.user-profile {
display: flex;
align-items: center;
position: relative;
z-index: 1;
}
.user-info {
font-weight: bold;
font-size: 1.2em;
text-align: left;
}
.user-name {
font-weight: bold;
font-size: 1em;
display: block;
}
.profile-banner {
position: absolute;
width: 100%;
height: 100%;
top: 0;
left: 0;
z-index: -1;
opacity: 0.7;
}
.logout-button {
background: transparent;
color: #6c757d;
border: none;
font-size: 20px;
cursor: pointer;
padding: 0;
display: flex;
align-items: center;
justify-content: center;
width: 48px;
height: 100%;
margin-left: 10px;
margin-right: 0;
flex-shrink: 0;
}
.logout-button:hover {
background: transparent;
color: #343a40;
}


@@ -54,6 +54,7 @@ cd cmd/benchmark
```
This will:
- Clone all external relay repositories
- Create Docker configurations for each relay
- Set up configuration files
@@ -68,6 +69,7 @@ docker compose up --build
```
The system will:
- Build and start all relay containers
- Wait for all relays to become healthy
- Run benchmarks against each relay sequentially
@@ -89,15 +91,15 @@ ls reports/run_YYYYMMDD_HHMMSS/
### Docker Compose Services
| Service | Port | Description |
|---------|------|-------------|
| next-orly | 8001 | This repository's BadgerDB relay |
| khatru-sqlite | 8002 | Khatru with SQLite backend |
| khatru-badger | 8003 | Khatru with Badger backend |
| relayer-basic | 8004 | Basic relayer example |
| strfry | 8005 | Strfry C++ LMDB relay |
| nostr-rs-relay | 8006 | Rust SQLite relay |
| benchmark-runner | - | Orchestrates tests and aggregates results |
| Service | Port | Description |
| ---------------- | ---- | ----------------------------------------- |
| next-orly | 8001 | This repository's BadgerDB relay |
| khatru-sqlite | 8002 | Khatru with SQLite backend |
| khatru-badger | 8003 | Khatru with Badger backend |
| relayer-basic | 8004 | Basic relayer example |
| strfry | 8005 | Strfry C++ LMDB relay |
| nostr-rs-relay | 8006 | Rust SQLite relay |
| benchmark-runner | - | Orchestrates tests and aggregates results |
### File Structure
@@ -130,16 +132,16 @@ The benchmark can be configured via environment variables in `docker-compose.yml
```yaml
environment:
- BENCHMARK_EVENTS=10000 # Number of events per test
- BENCHMARK_WORKERS=8 # Concurrent workers
- BENCHMARK_DURATION=60s # Test duration
- BENCHMARK_TARGETS=... # Relay endpoints to test
- BENCHMARK_EVENTS=10000 # Number of events per test
- BENCHMARK_WORKERS=8 # Concurrent workers
- BENCHMARK_DURATION=60s # Test duration
- BENCHMARK_TARGETS=... # Relay endpoints to test
```
### Custom Configuration
1. **Modify test parameters**: Edit environment variables in `docker-compose.yml`
2. **Add new relays**:
2. **Add new relays**:
- Add service to `docker-compose.yml`
- Create appropriate Dockerfile
- Update `BENCHMARK_TARGETS` environment variable
@@ -174,16 +176,19 @@ go build -o benchmark main.go
## Benchmark Results Interpretation
### Peak Throughput Test
- **High events/sec**: Good write performance
- **Low latency**: Efficient event processing
- **High success rate**: Stable under load
### Burst Pattern Test
### Burst Pattern Test
- **Consistent performance**: Good handling of variable loads
- **Low P95/P99 latency**: Predictable response times
- **No errors during bursts**: Robust queuing/buffering
### Mixed Read/Write Test
- **Balanced throughput**: Good concurrent operation handling
- **Low read latency**: Efficient query processing
- **Stable write performance**: Queries don't significantly impact writes
@@ -200,6 +205,7 @@ go build -o benchmark main.go
### Modifying Relay Configurations
Each relay's Dockerfile and configuration can be customized:
- **Resource limits**: Adjust memory/CPU limits in docker-compose.yml
- **Database settings**: Modify configuration files in `configs/`
- **Network settings**: Update port mappings and health checks
@@ -257,4 +263,4 @@ To add support for new relay implementations:
## License
This benchmark suite is part of the next.orly.dev project and follows the same licensing terms.
This benchmark suite is part of the next.orly.dev project and follows the same licensing terms.


@@ -1,4 +1,4 @@
version: '3.8'
version: "3.8"
services:
# Next.orly.dev relay (this repository)
@@ -19,7 +19,11 @@ services:
networks:
- benchmark-net
healthcheck:
test: ["CMD-SHELL", "code=$(curl -s -o /dev/null -w '%{http_code}' http://localhost:8080 || echo 000); echo $$code | grep -E '^(101|200|400|404|426)$' >/dev/null"]
test:
[
"CMD-SHELL",
"code=$(curl -s -o /dev/null -w '%{http_code}' http://localhost:8080 || echo 000); echo $$code | grep -E '^(101|200|400|404|426)$' >/dev/null",
]
interval: 30s
timeout: 10s
retries: 3
@@ -41,7 +45,11 @@ services:
networks:
- benchmark-net
healthcheck:
test: ["CMD-SHELL", "wget --quiet --server-response --tries=1 http://localhost:3334 2>&1 | grep -E 'HTTP/[0-9.]+ (101|200|400|404)' >/dev/null"]
test:
[
"CMD-SHELL",
"wget --quiet --server-response --tries=1 http://localhost:3334 2>&1 | grep -E 'HTTP/[0-9.]+ (101|200|400|404)' >/dev/null",
]
interval: 30s
timeout: 10s
retries: 3
@@ -63,7 +71,11 @@ services:
networks:
- benchmark-net
healthcheck:
test: ["CMD-SHELL", "wget --quiet --server-response --tries=1 http://localhost:3334 2>&1 | grep -E 'HTTP/[0-9.]+ (101|200|400|404)' >/dev/null"]
test:
[
"CMD-SHELL",
"wget --quiet --server-response --tries=1 http://localhost:3334 2>&1 | grep -E 'HTTP/[0-9.]+ (101|200|400|404)' >/dev/null",
]
interval: 30s
timeout: 10s
retries: 3
@@ -87,7 +99,11 @@ services:
postgres:
condition: service_healthy
healthcheck:
test: ["CMD-SHELL", "wget --quiet --server-response --tries=1 http://localhost:7447 2>&1 | grep -E 'HTTP/[0-9.]+ (101|200|400|404)' >/dev/null"]
test:
[
"CMD-SHELL",
"wget --quiet --server-response --tries=1 http://localhost:7447 2>&1 | grep -E 'HTTP/[0-9.]+ (101|200|400|404)' >/dev/null",
]
interval: 30s
timeout: 10s
retries: 3
@@ -108,7 +124,11 @@ services:
networks:
- benchmark-net
healthcheck:
test: ["CMD-SHELL", "wget --quiet --server-response --tries=1 http://127.0.0.1:8080 2>&1 | grep -E 'HTTP/[0-9.]+ (101|200|400|404|426)' >/dev/null"]
test:
[
"CMD-SHELL",
"wget --quiet --server-response --tries=1 http://127.0.0.1:8080 2>&1 | grep -E 'HTTP/[0-9.]+ (101|200|400|404|426)' >/dev/null",
]
interval: 30s
timeout: 10s
retries: 3
@@ -130,7 +150,15 @@ services:
networks:
- benchmark-net
healthcheck:
test: ["CMD", "wget", "--quiet", "--tries=1", "--spider", "http://localhost:8080"]
test:
[
"CMD",
"wget",
"--quiet",
"--tries=1",
"--spider",
"http://localhost:8080",
]
interval: 30s
timeout: 10s
retries: 3
@@ -197,4 +225,4 @@ networks:
volumes:
benchmark-data:
driver: local
driver: local


@@ -2,7 +2,7 @@
# Fixes: failed to solve: error from sender: open cmd/benchmark/data/postgres: permission denied
# Benchmark data and reports (mounted at runtime via volumes)
cmd/benchmark/data/
../../cmd/benchmark/data/
cmd/benchmark/reports/
# VCS and OS cruft


@@ -4,6 +4,7 @@
**Updated with real-world troubleshooting solutions and latest Orly relay improvements**
## 🎯 **What This Solves**
- WebSocket connection failures (`NS_ERROR_WEBSOCKET_CONNECTION_REFUSED`)
- Nostr relay connectivity issues (`HTTP 426` instead of WebSocket upgrade)
- Docker container proxy configuration
@@ -16,6 +17,7 @@
## 🐳 **Step 1: Deploy Your Docker Application**
### **For Stella's Orly Relay (Latest Version with Proxy Improvements):**
```bash
# Pull and run the relay with enhanced proxy support
docker run -d \
@@ -39,6 +41,7 @@ curl -I http://127.0.0.1:7777
```
### **For Web Apps (like Jumble):**
```bash
# Run with fixed port for easier proxy setup
docker run -d \
@@ -61,34 +64,34 @@ curl -I http://127.0.0.1:3000
```apache
<VirtualHost *:443>
ServerName your-domain.com
# SSL Configuration (Let's Encrypt)
SSLEngine on
SSLCertificateFile /etc/letsencrypt/live/your-domain.com/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/your-domain.com/privkey.pem
# Enable required modules first:
# sudo a2enmod proxy proxy_http proxy_wstunnel rewrite headers ssl
# Proxy settings
ProxyPreserveHost On
ProxyRequests Off
# WebSocket upgrade handling - CRITICAL for apps with WebSockets
RewriteEngine On
RewriteCond %{HTTP:Upgrade} websocket [NC]
RewriteCond %{HTTP:Connection} upgrade [NC]
RewriteRule ^/?(.*) "ws://127.0.0.1:PORT/$1" [P,L]
# Regular HTTP proxy
ProxyPass / http://127.0.0.1:PORT/
ProxyPassReverse / http://127.0.0.1:PORT/
# Headers for modern web apps
Header always set X-Forwarded-Proto "https"
Header always set X-Forwarded-Port "443"
Header always set X-Forwarded-For %{REMOTE_ADDR}s
# Security headers
Header always set Strict-Transport-Security "max-age=63072000; includeSubDomains"
Header always set X-Content-Type-Options nosniff
@@ -103,6 +106,7 @@ curl -I http://127.0.0.1:3000
```
**Then enable it:**
```bash
sudo a2ensite domain.conf
sudo systemctl reload apache2
@@ -121,6 +125,7 @@ sudo systemctl reload apache2
5. **In HTTPS section, add:**
**For Nostr Relay (port 7777):**
```apache
ProxyRequests Off
ProxyPreserveHost On
@@ -142,23 +147,23 @@ sudo tee /etc/apache2/conf-available/relay-override.conf << 'EOF'
ServerName your-domain.com
ServerAlias www.your-domain.com
ServerAlias ipv4.your-domain.com
SSLEngine on
SSLCertificateFile /etc/letsencrypt/live/your-domain.com/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/your-domain.com/privkey.pem
DocumentRoot /var/www/relay
# For Nostr relay - proxy everything to WebSocket
ProxyRequests Off
ProxyPreserveHost On
ProxyPass / ws://127.0.0.1:7777/
ProxyPassReverse / ws://127.0.0.1:7777/
# CORS headers
Header always set Access-Control-Allow-Origin "*"
Header always set Access-Control-Allow-Headers "Origin, X-Requested-With, Content-Type, Accept, Authorization"
# Logging
ErrorLog /var/log/apache2/relay-error.log
CustomLog /var/log/apache2/relay-access.log combined
@@ -190,6 +195,7 @@ apache2ctl -M | grep -E "(proxy|rewrite)"
```
#### **For Web Apps (port 3000 or 32768):**
```apache
ProxyPreserveHost On
ProxyRequests Off
@@ -221,22 +227,22 @@ sudo tee /etc/apache2/conf-available/relay-override.conf << 'EOF'
ServerName your-domain.com
ServerAlias www.your-domain.com
ServerAlias ipv4.your-domain.com
SSLEngine on
SSLCertificateFile /etc/letsencrypt/live/your-domain.com/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/your-domain.com/privkey.pem
DocumentRoot /var/www/relay
# For Nostr relay - proxy everything to WebSocket
ProxyRequests Off
ProxyPreserveHost On
ProxyPass / ws://127.0.0.1:7777/
ProxyPassReverse / ws://127.0.0.1:7777/
# CORS headers
Header always set Access-Control-Allow-Origin "*"
# Logging
ErrorLog /var/log/apache2/relay-error.log
CustomLog /var/log/apache2/relay-access.log combined
@@ -269,6 +275,7 @@ sudo systemctl restart apache2
## 🆕 **Step 4: Latest Orly Relay Improvements**
### **Enhanced Proxy Support**
The latest Orly relay includes several proxy improvements:
1. **Flexible WebSocket Scheme Handling**: Accepts both `ws://` and `wss://` schemes for authentication
@@ -277,6 +284,7 @@ The latest Orly relay includes several proxy improvements:
4. **Proxy-Aware Logging**: Better debugging information for proxy setups
### **Key Environment Variables**
```bash
# Essential for proxy setups
ORLY_RELAY_URL=wss://your-domain.com # Must match your public URL
@@ -286,6 +294,7 @@ ORLY_SUBSCRIPTION_ENABLED=false # Disable payment requirements
```
### **Testing the Enhanced Relay**
```bash
# Test local connectivity
curl -I http://127.0.0.1:7777
@@ -338,32 +347,38 @@ After making changes:
## 🚨 **Real-World Troubleshooting Guide**
*Based on actual deployment experience with Plesk and WebSocket issues*
_Based on actual deployment experience with Plesk and WebSocket issues_
### **Critical Issues & Solutions:**
#### **🔴 HTTP 503 Service Unavailable**
- **Cause**: Docker container not running
- **Check**: `docker ps | grep relay`
- **Fix**: `docker start container-name`
#### **🔴 HTTP 426 Instead of WebSocket Upgrade**
- **Cause**: Apache using `http://` proxy instead of `ws://`
- **Fix**: Use `ProxyPass / ws://127.0.0.1:7777/` (not `http://`)
#### **🔴 Plesk Configuration Not Applied**
- **Symptom**: Config not in `/etc/apache2/plesk.conf.d/vhosts/domain.conf`
- **Solution**: Use Direct Apache Override method (bypass Plesk interface)
#### **🔴 Virtual Host Conflicts**
- **Check**: `apache2ctl -S | grep domain.com`
- **Fix**: Remove Plesk config: `sudo rm /etc/apache2/plesk.conf.d/vhosts/domain.conf`
#### **🔴 Nginx Intercepting (Plesk)**
- **Symptom**: Response shows `Server: nginx`
- **Fix**: Disable nginx in Plesk settings
### **Debug Commands:**
```bash
# Essential debugging
docker ps | grep relay # Container running?
@@ -383,9 +398,11 @@ docker logs relay-name | grep -i "websocket connection"
## 🚨 **Latest Troubleshooting Solutions**
### **WebSocket Scheme Validation Errors**
**Problem**: `"HTTP Scheme incorrect: expected 'ws' got 'wss'"`
**Solution**: Use the latest Orly relay image with enhanced proxy support:
```bash
# Pull the latest image with proxy improvements
docker pull silberengel/next-orly:latest
@@ -396,17 +413,21 @@ docker stop orly-relay && docker rm orly-relay
```
### **Malformed Client Data Errors**
**Problem**: `"invalid hex array size, got 2 expect 64"`
**Solution**: These are client-side issues, not server problems. The latest relay handles them gracefully:
- The relay now sends helpful error messages to clients
- Malformed requests are logged but don't crash the relay
- Normal operations continue despite client errors
### **Follows ACL Not Working**
**Problem**: Only owners can write, admins can't write
**Solution**: Ensure proper configuration:
```bash
# Check ACL configuration
docker exec orly-relay env | grep ACL
@@ -416,9 +437,11 @@ docker exec orly-relay env | grep ACL
```
### **Spider Not Syncing Content**
**Problem**: Spider enabled but not pulling events
**Solution**: Check for relay lists and follow events:
```bash
# Check spider status
docker logs orly-relay | grep -i spider
@@ -431,6 +454,7 @@ docker logs orly-relay | grep -i "kind.*3"
```
### **Working Solution (Proven):**
```apache
<VirtualHost SERVER_IP:443>
ServerName domain.com
@@ -438,20 +462,21 @@ docker logs orly-relay | grep -i "kind.*3"
SSLCertificateFile /etc/letsencrypt/live/domain.com/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/domain.com/privkey.pem
DocumentRoot /var/www/relay
# Direct WebSocket proxy - this is the key!
ProxyRequests Off
ProxyPreserveHost On
ProxyPass / ws://127.0.0.1:7777/
ProxyPassReverse / ws://127.0.0.1:7777/
Header always set Access-Control-Allow-Origin "*"
</VirtualHost>
```
---
**Key Lessons**:
**Key Lessons**:
1. Plesk interface often fails to apply Apache directives
2. Use `ws://` proxy for Nostr relays, not `http://`
3. Direct Apache config files are more reliable than Plesk interface
@@ -464,17 +489,20 @@ docker logs orly-relay | grep -i "kind.*3"
## 🎉 **Summary of Latest Improvements**
### **Enhanced Proxy Support**
- ✅ Flexible WebSocket scheme validation (accepts both `ws://` and `wss://`)
- ✅ Enhanced CORS headers for better web app compatibility
- ✅ Improved error handling for malformed client data
- ✅ Proxy-aware logging for better debugging
### **Spider and ACL Features**
- ✅ Follows-based access control (`ORLY_ACL_MODE=follows`)
- ✅ Content syncing from other relays (`ORLY_SPIDER_MODE=follows`)
- ✅ No payment requirements (`ORLY_SUBSCRIPTION_ENABLED=false`)
### **Production Ready**
- ✅ Robust error handling
- ✅ Enhanced logging and debugging
- ✅ Better client compatibility


@@ -37,6 +37,7 @@ cp env.example .env
```
Key settings:
- `ORLY_OWNERS`: Owner npubs (comma-separated, full control)
- `ORLY_ADMINS`: Admin npubs (comma-separated, deletion permissions)
- `ORLY_PORT`: Port to listen on (default: 7777)
@@ -50,6 +51,7 @@ The relay data is stored in `./data` directory which is mounted as a volume.
### Performance Tuning
Based on the v0.4.8 optimizations:
- Concurrent event publishing using all CPU cores
- Optimized BadgerDB access patterns
- Configurable batch sizes and cache settings
@@ -105,12 +107,14 @@ go run ./cmd/stresstest -relay ws://localhost:7777
### Common Issues (Real-World Experience)
#### **Container Issues:**
1. **Port already in use**: Change `ORLY_PORT` in docker-compose.yml
2. **Permission denied**: Ensure `./data` directory is writable
3. **Container won't start**: Check logs with `docker logs container-name`
#### **WebSocket Issues:**
4. **HTTP 426 instead of WebSocket upgrade**:
4. **HTTP 426 instead of WebSocket upgrade**:
- Use `ws://127.0.0.1:7777` in proxy config, not `http://`
- Ensure `proxy_wstunnel` module is enabled
5. **Connection refused in browser but works with websocat**:
@@ -119,6 +123,7 @@ go run ./cmd/stresstest -relay ws://localhost:7777
- Add CORS headers to Apache/nginx config
#### **Plesk-Specific Issues:**
6. **Plesk not applying Apache directives**:
- Check if config appears in `/etc/apache2/plesk.conf.d/vhosts/domain.conf`
- Use direct Apache override if Plesk interface fails
@@ -127,6 +132,7 @@ go run ./cmd/stresstest -relay ws://localhost:7777
- Remove conflicting Plesk configs if needed
#### **SSL Certificate Issues:**
8. **Self-signed certificate after Let's Encrypt**:
- Plesk might not be using the correct certificate
- Import Let's Encrypt certs into Plesk or use direct Apache config
@@ -166,23 +172,24 @@ sudo tail -f /var/log/apache2/domain-error.log
### Working Reverse Proxy Config
**For Apache (direct config file):**
```apache
<VirtualHost SERVER_IP:443>
ServerName domain.com
SSLEngine on
SSLCertificateFile /etc/letsencrypt/live/domain.com/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/domain.com/privkey.pem
# Direct WebSocket proxy for Nostr relay
ProxyRequests Off
ProxyPreserveHost On
ProxyPass / ws://127.0.0.1:7777/
ProxyPassReverse / ws://127.0.0.1:7777/
Header always set Access-Control-Allow-Origin "*"
</VirtualHost>
```
---
*Crafted for Stella's digital forest* 🌲
_Crafted for Stella's digital forest_ 🌲


@@ -19,11 +19,11 @@ RUN apk add --no-cache libsecp256k1-dev
WORKDIR /build
# Copy go modules first (for better caching)
COPY go.mod go.sum ./
COPY ../../go.mod go.sum ./
RUN go mod download
# Copy source code
COPY . .
COPY ../.. .
# Build the relay with optimizations from v0.4.8
RUN CGO_ENABLED=1 GOOS=linux go build -ldflags "-w -s" -o relay .


@@ -1,26 +1,28 @@
# Service Worker Certificate Caching Fix
## 🚨 **Problem**
When accessing Jumble from the ImWald landing page, the service worker serves a cached self-signed certificate instead of the new Let's Encrypt certificate.
## ⚡ **Solutions**
### **Option 1: Force Service Worker Update**
Add this to your Jumble app's service worker or main JavaScript:
```javascript
// Force service worker update and certificate refresh
if ('serviceWorker' in navigator) {
navigator.serviceWorker.getRegistrations().then(function(registrations) {
for(let registration of registrations) {
if ("serviceWorker" in navigator) {
navigator.serviceWorker.getRegistrations().then(function (registrations) {
for (let registration of registrations) {
registration.update(); // Force update
}
});
}
// Clear all caches on certificate update
if ('caches' in window) {
caches.keys().then(function(names) {
if ("caches" in window) {
caches.keys().then(function (names) {
for (let name of names) {
caches.delete(name);
}
@@ -29,49 +31,52 @@ if ('caches' in window) {
```
### **Option 2: Update Service Worker Cache Strategy**
In your service worker file, add cache busting for SSL-sensitive requests:
```javascript
// In your service worker
self.addEventListener('fetch', function(event) {
self.addEventListener("fetch", function (event) {
// Don't cache HTTPS requests that might have certificate issues
if (event.request.url.startsWith('https://') &&
event.request.url.includes('imwald.eu')) {
event.respondWith(
fetch(event.request, { cache: 'no-store' })
);
if (
event.request.url.startsWith("https://") &&
event.request.url.includes("imwald.eu")
) {
event.respondWith(fetch(event.request, { cache: "no-store" }));
return;
}
// Your existing fetch handling...
});
```
### **Option 3: Version Your Service Worker**
Update your service worker with a new version number:
```javascript
// At the top of your service worker
const CACHE_VERSION = 'v2.0.1'; // Increment this when certificates change
const CACHE_VERSION = "v2.0.1"; // Increment this when certificates change
const CACHE_NAME = `jumble-cache-${CACHE_VERSION}`;
// Clear old caches
self.addEventListener('activate', function(event) {
self.addEventListener("activate", function (event) {
event.waitUntil(
caches.keys().then(function(cacheNames) {
caches.keys().then(function (cacheNames) {
return Promise.all(
cacheNames.map(function(cacheName) {
cacheNames.map(function (cacheName) {
if (cacheName !== CACHE_NAME) {
return caches.delete(cacheName);
}
})
}),
);
})
}),
);
});
```
### **Option 4: Add Cache Headers**
In your Plesk Apache config for Jumble, add:
```apache


@@ -1,11 +1,13 @@
# WebSocket Connection Debug Guide
## 🚨 **Current Issue**
`wss://orly-relay.imwald.eu/` returns `NS_ERROR_WEBSOCKET_CONNECTION_REFUSED`
## 🔍 **Debug Steps**
### **Step 1: Verify Relay is Running**
```bash
# On your server
curl -I http://127.0.0.1:7777
@@ -16,6 +18,7 @@ docker ps | grep stella
```
### **Step 2: Test Apache Modules**
```bash
# Check if WebSocket modules are enabled
apache2ctl -M | grep -E "(proxy|rewrite)"
@@ -30,6 +33,7 @@ sudo systemctl restart apache2
```
### **Step 3: Check Apache Configuration**
```bash
# Check what Plesk generated
sudo cat /etc/apache2/plesk.conf.d/vhosts/orly-relay.imwald.eu.conf
@@ -39,6 +43,7 @@ grep -E "(Proxy|Rewrite)" /etc/apache2/plesk.conf.d/vhosts/orly-relay.imwald.eu.
```
### **Step 4: Test Direct WebSocket Connection**
```bash
# Test if the issue is Apache or the relay itself
echo '["REQ","test",{}]' | websocat ws://127.0.0.1:7777/
@@ -48,6 +53,7 @@ echo '["REQ","test",{}]' | websocat ws://127.0.0.1:7777/
```
### **Step 5: Check Apache Error Logs**
```bash
# Watch Apache errors in real-time
sudo tail -f /var/log/apache2/error.log
@@ -83,6 +89,7 @@ ProxyAddHeaders On
```
### **Alternative Simpler Version:**
If the above doesn't work, try just:
```apache


@@ -4,9 +4,9 @@
services:
orly-relay:
build:
context: .
context: ../..
dockerfile: Dockerfile
image: silberengel/next-orly:latest
image: silberengel/next-orly:latest
container_name: orly-relay
restart: unless-stopped
ports:
@@ -23,40 +23,40 @@ services:
- ORLY_DB_LOG_LEVEL=error
- ORLY_OWNERS=npub1v30tsz9vw6ylpz63g0a702nj3xa26t3m7p5us8f2y2sd8v6cnsvq465zjx
- ORLY_ADMINS=npub1v30tsz9vw6ylpz63g0a702nj3xa26t3m7p5us8f2y2sd8v6cnsvq465zjx,npub1m4ny6hjqzepn4rxknuq94c2gpqzr29ufkkw7ttcxyak7v43n6vvsajc2jl,npub1l5sga6xg72phsz5422ykujprejwud075ggrr3z2hwyrfgr7eylqstegx9z
# ACL and Spider Configuration
- ORLY_ACL_MODE=follows
- ORLY_SPIDER_MODE=follows
# Bootstrap relay URLs for initial sync
- ORLY_BOOTSTRAP_RELAYS=wss://profiles.nostr1.com,wss://purplepag.es,wss://relay.nostr.band,wss://relay.damus.io
# Subscription Settings (optional)
- ORLY_SUBSCRIPTION_ENABLED=false
- ORLY_MONTHLY_PRICE_SATS=0
# Performance Settings
- ORLY_MAX_CONNECTIONS=1000
- ORLY_MAX_EVENT_SIZE=65536
- ORLY_MAX_SUBSCRIPTIONS=20
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:7777"]
interval: 30s
timeout: 10s
retries: 3
start_period: 10s
# Resource limits
deploy:
resources:
limits:
memory: 1G
cpus: '1.0'
cpus: "1.0"
reservations:
memory: 256M
cpus: '0.25'
cpus: "0.25"
# Logging configuration
logging:
driver: "json-file"
@@ -79,7 +79,7 @@ services:
depends_on:
- orly-relay
profiles:
- proxy # Only start with: docker-compose --profile proxy up
- proxy # Only start with: docker-compose --profile proxy up
volumes:
relay_data:


@@ -10,12 +10,14 @@ This document compares how two Nostr relay implementations handle WebSocket conn
## Architecture Comparison
### Khatru Architecture
- **Monolithic approach**: Single large `HandleWebsocket` method (~380 lines) processes all message types
- **Inline processing**: REQ handling is embedded within the main websocket handler
- **Hook-based extensibility**: Uses function slices for customizable behavior
- **Simple structure**: WebSocket struct with basic fields and mutex for thread safety
### Next.orly.dev Architecture
### Next.orly.dev Architecture
- **Modular approach**: Separate methods for each message type (`HandleReq`, `HandleEvent`, etc.)
- **Layered processing**: Message identification → envelope parsing → type-specific handling
- **Publisher-subscriber system**: Dedicated infrastructure for subscription management
@@ -24,6 +26,7 @@ This document compares how two Nostr relay implementations handle WebSocket conn
## Connection Establishment
### Khatru
```go
// Simple websocket upgrade
conn, err := rl.upgrader.Upgrade(w, r, nil)
@@ -36,6 +39,7 @@ ws := &WebSocket{
```
### Next.orly.dev
```go
// More sophisticated setup with IP whitelisting
conn, err = websocket.Accept(w, r, &websocket.AcceptOptions{OriginPatterns: []string{"*"}})
@@ -50,6 +54,7 @@ listener := &Listener{
```
**Key Differences:**
- Next.orly.dev includes IP whitelisting and immediate authentication challenges
- Khatru uses fasthttp/websocket library vs next.orly.dev using coder/websocket
- Next.orly.dev has more detailed connection state tracking
@@ -57,11 +62,13 @@ listener := &Listener{
## Message Processing
### Khatru
- Uses `nostr.MessageParser` for sequential parsing
- Switch statement on envelope type within goroutine
- Direct processing without intermediate validation layers
### Next.orly.dev
- Custom envelope identification system (`envelopes.Identify`)
- Separate validation and processing phases
- Extensive logging and error handling at each step
@@ -69,11 +76,12 @@ listener := &Listener{
## REQ Message Handling
### Khatru REQ Processing
```go
case *nostr.ReqEnvelope:
eose := sync.WaitGroup{}
eose.Add(len(env.Filters))
// Handle each filter separately
for _, filter := range env.Filters {
err := srl.handleRequest(reqCtx, env.SubscriptionID, &eose, ws, filter)
@@ -85,7 +93,7 @@ case *nostr.ReqEnvelope:
rl.addListener(ws, env.SubscriptionID, srl, filter, cancelReqCtx)
}
}
go func() {
eose.Wait()
ws.WriteJSON(nostr.EOSEEnvelope(env.SubscriptionID))
@@ -93,6 +101,7 @@ case *nostr.ReqEnvelope:
```
### Next.orly.dev REQ Processing
```go
// Comprehensive ACL and authentication checks first
accessLevel := acl.Registry.GetAccessLevel(l.authedPubkey.Load(), l.remote)
@@ -117,12 +126,14 @@ for _, f := range *env.Filters {
### 1. **Filter Processing Strategy**
**Khatru:**
- Processes each filter independently and concurrently
- Uses WaitGroup to coordinate EOSE across all filters
- Immediately sets up listeners for ongoing subscriptions
- Fails entire subscription if any filter is rejected
**Next.orly.dev:**
- Processes all filters sequentially in a single context
- Collects all events before applying access control
- Only sets up subscriptions for filters that need ongoing updates
@@ -131,11 +142,13 @@ for _, f := range *env.Filters {
### 2. **Access Control Integration**
**Khatru:**
- Basic NIP-42 authentication support
- Hook-based authorization via `RejectFilter` functions
- Limited built-in access control features
**Next.orly.dev:**
- Comprehensive ACL system with multiple access levels
- Built-in support for private events with npub authorization
- Privileged event filtering based on pubkey and p-tags
@@ -144,6 +157,7 @@ for _, f := range *env.Filters {
### 3. **Subscription Management**
**Khatru:**
```go
// Simple listener registration
type listenerSpec struct {
@@ -155,6 +169,7 @@ rl.addListener(ws, subscriptionID, relay, filter, cancel)
```
**Next.orly.dev:**
```go
// Publisher-subscriber system with rich metadata
type W struct {
@@ -171,11 +186,13 @@ l.publishers.Receive(&W{...})
### 4. **Performance Optimizations**
**Khatru:**
- Concurrent filter processing
- Immediate streaming of events as they're found
- Memory-efficient with direct event streaming
**Next.orly.dev:**
- Batch processing with deduplication
- Memory management with explicit `ev.Free()` calls
- Smart subscription cancellation for ID-only queries
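The batch-with-deduplication step can be sketched as below: results from all filters are collected first, duplicate event IDs dropped, and only then written to the client. The function name and ID representation are illustrative, not taken from either codebase:

```go
package main

import "fmt"

// dedupe drops repeated event IDs, preserving first-seen order.
// Duplicates arise when the same event matches more than one filter.
func dedupe(ids []string) []string {
	seen := make(map[string]struct{}, len(ids))
	out := make([]string, 0, len(ids))
	for _, id := range ids {
		if _, ok := seen[id]; ok {
			continue // already matched by an earlier filter
		}
		seen[id] = struct{}{}
		out = append(out, id)
	}
	return out
}

func main() {
	fmt.Println(dedupe([]string{"a", "b", "a", "c"}))
}
```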
@@ -184,11 +201,13 @@ l.publishers.Receive(&W{...})
### 5. **Error Handling & Observability**
**Khatru:**
- Basic error logging
- Simple connection state management
- Limited metrics and observability
**Next.orly.dev:**
- Comprehensive error handling with context preservation
- Detailed logging at each processing stage
- Built-in metrics (message count, REQ count, event count)
@@ -197,11 +216,13 @@ l.publishers.Receive(&W{...})
## Memory Management
### Khatru
- Relies on Go's garbage collector
- Simple WebSocket struct with minimal state
- Uses sync.Map for thread-safe operations
### Next.orly.dev
- Explicit memory management with `ev.Free()` calls
- Resource pooling and reuse patterns
- Detailed tracking of connection resources
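The explicit-release pattern can be illustrated with a generic sync.Pool sketch; next.orly.dev's actual event type and Free implementation may differ, so this shows only the general technique of returning objects to a pool instead of leaving them for the garbage collector:

```go
package main

import (
	"fmt"
	"sync"
)

// Event is a placeholder for a relay's event type.
type Event struct{ Content []byte }

var pool = sync.Pool{New: func() any { return new(Event) }}

// Free resets the event and returns it to the pool for reuse.
func (e *Event) Free() {
	e.Content = e.Content[:0]
	pool.Put(e)
}

func main() {
	ev := pool.Get().(*Event)
	ev.Content = append(ev.Content, "hello"...)
	fmt.Println(len(ev.Content))
	ev.Free() // release explicitly instead of waiting for GC
}
```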
@@ -209,11 +230,13 @@ l.publishers.Receive(&W{...})
## Concurrency Models
### Khatru
- Per-connection goroutine for message reading
- Additional goroutines for each message processing
- WaitGroup coordination for multi-filter EOSE
### Next.orly.dev
- Per-connection goroutine with single-threaded message processing
- Publisher-subscriber system handles concurrent event distribution
- Context-based cancellation throughout
@@ -221,18 +244,21 @@ l.publishers.Receive(&W{...})
## Trade-offs Analysis
### Khatru Advantages
- **Simplicity**: Easier to understand and modify
- **Performance**: Lower latency due to concurrent processing
- **Flexibility**: Hook-based architecture allows extensive customization
- **Streaming**: Events sent as soon as they're found
### Khatru Disadvantages
- **Monolithic**: Large methods harder to maintain
- **Limited ACL**: Basic authentication and authorization
- **Error handling**: Less graceful failure recovery
- **Resource usage**: No explicit memory management
### Next.orly.dev Advantages
- **Security**: Comprehensive ACL and privacy features
- **Observability**: Extensive logging and metrics
- **Resource management**: Explicit memory and connection lifecycle management
@@ -240,6 +266,7 @@ l.publishers.Receive(&W{...})
- **Robustness**: Graceful handling of edge cases and failures
### Next.orly.dev Disadvantages
- **Complexity**: Higher cognitive overhead and learning curve
- **Latency**: Sequential processing may be slower for some use cases
- **Resource overhead**: More memory usage due to batching and state tracking
@@ -253,7 +280,8 @@ Both implementations represent different philosophies:
- **Next.orly.dev** prioritizes security, observability, and robustness through comprehensive built-in features
The choice between them depends on specific requirements:
- Choose **Khatru** for high-performance relays with custom business logic
- Choose **Next.orly.dev** for production relays requiring comprehensive access control and monitoring
Both approaches demonstrate mature understanding of Nostr protocol requirements while making different trade-offs in complexity vs. features.

go.mod

@@ -34,6 +34,7 @@ require (
github.com/go-logr/stdr v1.2.2 // indirect
github.com/google/flatbuffers v25.9.23+incompatible // indirect
github.com/google/pprof v0.0.0-20251002213607-436353cc1ee6 // indirect
github.com/gorilla/websocket v1.5.3 // indirect
github.com/klauspost/compress v1.18.0 // indirect
github.com/pmezard/go-difflib v1.0.0 // indirect
github.com/templexxx/cpu v0.1.1 // indirect

go.sum

@@ -45,6 +45,8 @@ github.com/google/pprof v0.0.0-20211214055906-6f57359322fd/go.mod h1:KgnwoLYCZ8I
github.com/google/pprof v0.0.0-20240227163752-401108e1b7e7/go.mod h1:czg5+yv1E0ZGTi6S6vVK1mke0fV+FaUhNGcd6VRS9Ik=
github.com/google/pprof v0.0.0-20251002213607-436353cc1ee6 h1:/WHh/1k4thM/w+PAZEIiZK9NwCMFahw5tUzKUCnUtds=
github.com/google/pprof v0.0.0-20251002213607-436353cc1ee6/go.mod h1:I6V7YzU0XDpsHqbsyrghnFZLO1gwK6NPTNvmetQIk9U=
github.com/gorilla/websocket v1.5.3 h1:saDtZ6Pbx/0u+bgYQ3q96pZgCzfhKXGPqt7kZ72aNNg=
github.com/gorilla/websocket v1.5.3/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE=
github.com/ianlancetaylor/demangle v0.0.0-20210905161508-09a460cdf81d/go.mod h1:aYm2/VgdVmcIU8iMfdMvDMsRAQjcfZSKFby6HOFvi/w=
github.com/ianlancetaylor/demangle v0.0.0-20230524184225-eabc099b10ab/go.mod h1:gx7rwoVhcfuVKG5uya9Hs3Sxj7EIvldVofAWIUtGouw=
github.com/josharian/intern v1.0.0/go.mod h1:5DoeVV0s6jJacbCEi61lwdGj/aVlrQvzHFFd8Hwg//Y=


@@ -40,6 +40,7 @@ type Follows struct {
pubs *publish.S
followsMx sync.RWMutex
admins [][]byte
owners [][]byte
follows [][]byte
updated chan struct{}
subsCancel context.CancelFunc
@@ -69,6 +70,16 @@ func (f *Follows) Configure(cfg ...any) (err error) {
err = errorf.E("both config and database must be set")
return
}
// add owners list
for _, owner := range f.cfg.Owners {
var own []byte
if o, e := bech32encoding.NpubOrHexToPublicKeyBinary(owner); chk.E(e) {
continue
} else {
own = o
}
f.owners = append(f.owners, own)
}
// find admin follow lists
f.followsMx.Lock()
defer f.followsMx.Unlock()
@@ -129,11 +140,13 @@ func (f *Follows) Configure(cfg ...any) (err error) {
}
func (f *Follows) GetAccessLevel(pub []byte, address string) (level string) {
if f.cfg == nil {
return "write"
}
f.followsMx.RLock()
defer f.followsMx.RUnlock()
for _, v := range f.owners {
if utils.FastEqual(v, pub) {
return "owner"
}
}
for _, v := range f.admins {
if utils.FastEqual(v, pub) {
return "admin"
@@ -144,6 +157,9 @@ func (f *Follows) GetAccessLevel(pub []byte, address string) (level string) {
return "write"
}
}
if f.cfg == nil {
return "write"
}
return "read"
}
@@ -236,7 +252,7 @@ func (f *Follows) startSubscriptions(ctx context.Context) {
return
}
urls := f.adminRelays()
log.I.S(urls)
// log.I.S(urls)
if len(urls) == 0 {
log.W.F("follows syncer: no admin relays found in DB (kind 10002) and no bootstrap relays configured")
return
@@ -274,11 +290,16 @@ func (f *Follows) startSubscriptions(ctx context.Context) {
log.W.F("follows syncer: dial %s failed: %v", u, err)
// Handle different types of errors
if strings.Contains(err.Error(), "response status code 101 but got 403") {
if strings.Contains(
err.Error(), "response status code 101 but got 403",
) {
// 403 means the relay is not accepting connections from us
// Forbidden usually means either the IP or the user is blocked
// But we should still retry after a longer delay
log.W.F("follows syncer: relay %s returned 403, will retry after longer delay", u)
log.W.F(
"follows syncer: relay %s returned 403, will retry after longer delay",
u,
)
timer := time.NewTimer(5 * time.Minute) // Wait 5 minutes before retrying 403 errors
select {
case <-ctx.Done():
@@ -286,12 +307,20 @@ func (f *Follows) startSubscriptions(ctx context.Context) {
case <-timer.C:
}
continue
} else if strings.Contains(err.Error(), "timeout") || strings.Contains(err.Error(), "connection refused") {
} else if strings.Contains(
err.Error(), "timeout",
) || strings.Contains(err.Error(), "connection refused") {
// Network issues, retry with normal backoff
log.W.F("follows syncer: network issue with %s, retrying in %v", u, backoff)
log.W.F(
"follows syncer: network issue with %s, retrying in %v",
u, backoff,
)
} else {
// Other errors, retry with normal backoff
log.W.F("follows syncer: connection error with %s, retrying in %v", u, backoff)
log.W.F(
"follows syncer: connection error with %s, retrying in %v",
u, backoff,
)
}
timer := time.NewTimer(backoff)
@@ -306,7 +335,7 @@ func (f *Follows) startSubscriptions(ctx context.Context) {
continue
}
backoff = time.Second
log.I.F("follows syncer: successfully connected to %s", u)
log.T.F("follows syncer: successfully connected to %s", u)
// send REQ for kind 3 (follow lists), kind 10002 (relay lists), and all events from follows
ff := &filter.S{}
@@ -332,11 +361,16 @@ func (f *Follows) startSubscriptions(ctx context.Context) {
if err = c.Write(
ctx, websocket.MessageText, req.Marshal(nil),
); chk.E(err) {
log.W.F("follows syncer: failed to send REQ to %s: %v", u, err)
log.W.F(
"follows syncer: failed to send REQ to %s: %v", u, err,
)
_ = c.Close(websocket.StatusInternalError, "write failed")
continue
}
log.I.F("follows syncer: sent REQ to %s for kind 3, 10002, and all events (last 30 days) from followed users", u)
log.T.F(
"follows syncer: sent REQ to %s for kind 3, 10002, and all events (last 30 days) from followed users",
u,
)
// read loop
for {
select {
@@ -368,17 +402,24 @@ func (f *Follows) startSubscriptions(ctx context.Context) {
// Process events based on kind
switch res.Event.Kind {
case kind.FollowList.K:
log.I.F("follows syncer: received kind 3 (follow list) event from %s on relay %s",
hex.EncodeToString(res.Event.Pubkey), u)
log.T.F(
"follows syncer: received kind 3 (follow list) event from %s on relay %s",
hex.EncodeToString(res.Event.Pubkey), u,
)
// Extract followed pubkeys from 'p' tags in kind 3 events
f.extractFollowedPubkeys(res.Event)
case kind.RelayListMetadata.K:
log.I.F("follows syncer: received kind 10002 (relay list) event from %s on relay %s",
hex.EncodeToString(res.Event.Pubkey), u)
log.T.F(
"follows syncer: received kind 10002 (relay list) event from %s on relay %s",
hex.EncodeToString(res.Event.Pubkey), u,
)
default:
// Log all other events from followed users
log.I.F("follows syncer: received kind %d event from %s on relay %s",
res.Event.Kind, hex.EncodeToString(res.Event.Pubkey), u)
log.T.F(
"follows syncer: received kind %d event from %s on relay %s",
res.Event.Kind,
hex.EncodeToString(res.Event.Pubkey), u,
)
}
if _, _, err = f.D.SaveEvent(
@@ -488,7 +529,10 @@ func (f *Follows) AddFollow(pub []byte) {
b := make([]byte, len(pub))
copy(b, pub)
f.follows = append(f.follows, b)
log.I.F("follows syncer: added new followed pubkey: %s", hex.EncodeToString(pub))
log.I.F(
"follows syncer: added new followed pubkey: %s",
hex.EncodeToString(pub),
)
// notify syncer if initialized
if f.updated != nil {
select {


@@ -2,13 +2,71 @@ package acl
import (
"lol.mleku.dev/log"
"next.orly.dev/app/config"
"next.orly.dev/pkg/encoders/bech32encoding"
"next.orly.dev/pkg/utils"
)
type None struct{}
type None struct {
cfg *config.C
owners [][]byte
admins [][]byte
}
func (n None) Configure(cfg ...any) (err error) { return }
func (n *None) Configure(cfg ...any) (err error) {
for _, ca := range cfg {
switch c := ca.(type) {
case *config.C:
n.cfg = c
}
}
if n.cfg == nil {
return
}
func (n None) GetAccessLevel(pub []byte, address string) (level string) {
// Load owners
for _, owner := range n.cfg.Owners {
if len(owner) == 0 {
continue
}
var pk []byte
if pk, err = bech32encoding.NpubOrHexToPublicKeyBinary(owner); err != nil {
continue
}
n.owners = append(n.owners, pk)
}
// Load admins
for _, admin := range n.cfg.Admins {
if len(admin) == 0 {
continue
}
var pk []byte
if pk, err = bech32encoding.NpubOrHexToPublicKeyBinary(admin); err != nil {
continue
}
n.admins = append(n.admins, pk)
}
return
}
func (n *None) GetAccessLevel(pub []byte, address string) (level string) {
// Check owners first
for _, v := range n.owners {
if utils.FastEqual(v, pub) {
return "owner"
}
}
// Check admins
for _, v := range n.admins {
if utils.FastEqual(v, pub) {
return "admin"
}
}
// Default to write for everyone else
return "write"
}


@@ -1,5 +1,4 @@
# realy.lol/pkg/ec
This is a full drop-in replacement for
[github.com/btcsuite/btcd/btcec](https://github.com/btcsuite/btcd/tree/master/btcec)
@@ -20,7 +19,7 @@ message signing with the extra test vectors present and passing.
The remainder of this document is from the original README.md.
---
Package `ec` implements elliptic curve cryptography needed for working with
Bitcoin. It is designed so that it may be used with the standard


@@ -1,8 +1,6 @@
# chainhash
# [![ISC License](http://img.shields.io/badge/license-ISC-blue.svg)](http://copyfree.org)
chainhash provides a generic hash type and associated functions that allows the
specific hash algorithm to be abstracted.


@@ -1,5 +1,4 @@
# ecdsa
[![ISC License](https://img.shields.io/badge/license-ISC-blue.svg)](http://copyfree.org)
[![GoDoc](https://img.shields.io/badge/godoc-reference-blue.svg)](https://pkg.go.dev/mleku.online/git/ec/secp/ecdsa)


@@ -14,45 +14,25 @@
],
"valid_test_cases": [
{
"key_indices": [
0,
1,
2
],
"key_indices": [0, 1, 2],
"expected": "90539EEDE565F5D054F32CC0C220126889ED1E5D193BAF15AEF344FE59D4610C"
},
{
"key_indices": [
2,
1,
0
],
"key_indices": [2, 1, 0],
"expected": "6204DE8B083426DC6EAF9502D27024D53FC826BF7D2012148A0575435DF54B2B"
},
{
"key_indices": [
0,
0,
0
],
"key_indices": [0, 0, 0],
"expected": "B436E3BAD62B8CD409969A224731C193D051162D8C5AE8B109306127DA3AA935"
},
{
"key_indices": [
0,
0,
1,
1
],
"key_indices": [0, 0, 1, 1],
"expected": "69BC22BFA5D106306E48A20679DE1D7389386124D07571D0D872686028C26A3E"
}
],
"error_test_cases": [
{
"key_indices": [
0,
3
],
"key_indices": [0, 3],
"tweak_indices": [],
"is_xonly": [],
"error": {
@@ -63,10 +43,7 @@
"comment": "Invalid public key"
},
{
"key_indices": [
0,
4
],
"key_indices": [0, 4],
"tweak_indices": [],
"is_xonly": [],
"error": {
@@ -77,10 +54,7 @@
"comment": "Public key exceeds field size"
},
{
"key_indices": [
5,
0
],
"key_indices": [5, 0],
"tweak_indices": [],
"is_xonly": [],
"error": {
@@ -91,16 +65,9 @@
"comment": "First byte of public key is not 2 or 3"
},
{
"key_indices": [
0,
1
],
"tweak_indices": [
0
],
"is_xonly": [
true
],
"key_indices": [0, 1],
"tweak_indices": [0],
"is_xonly": [true],
"error": {
"type": "value",
"message": "The tweak must be less than n."
@@ -108,15 +75,9 @@
"comment": "Tweak is out of range"
},
{
"key_indices": [
6
],
"tweak_indices": [
1
],
"is_xonly": [
false
],
"key_indices": [6],
"tweak_indices": [1],
"is_xonly": [false],
"error": {
"type": "value",
"message": "The result of tweaking cannot be infinity."


@@ -10,27 +10,18 @@
],
"valid_test_cases": [
{
"pnonce_indices": [
0,
1
],
"pnonce_indices": [0, 1],
"expected": "035FE1873B4F2967F52FEA4A06AD5A8ECCBE9D0FD73068012C894E2E87CCB5804B024725377345BDE0E9C33AF3C43C0A29A9249F2F2956FA8CFEB55C8573D0262DC8"
},
{
"pnonce_indices": [
2,
3
],
"pnonce_indices": [2, 3],
"expected": "035FE1873B4F2967F52FEA4A06AD5A8ECCBE9D0FD73068012C894E2E87CCB5804B000000000000000000000000000000000000000000000000000000000000000000",
"comment": "Sum of second points encoded in the nonces is point at infinity which is serialized as 33 zero bytes"
}
],
"error_test_cases": [
{
"pnonce_indices": [
0,
4
],
"pnonce_indices": [0, 4],
"error": {
"type": "invalid_contribution",
"signer": 1,
@@ -40,10 +31,7 @@
"btcec_err": "invalid public key: unsupported format: 4"
},
{
"pnonce_indices": [
5,
1
],
"pnonce_indices": [5, 1],
"error": {
"type": "invalid_contribution",
"signer": 0,
@@ -53,10 +41,7 @@
"btcec_err": "invalid public key: x coordinate 48c264cdd57d3c24d79990b0f865674eb62a0f9018277a95011b41bfc193b831 is not on the secp256k1 curve"
},
{
"pnonce_indices": [
6,
1
],
"pnonce_indices": [6, 1],
"error": {
"type": "invalid_contribution",
"signer": 0,


@@ -37,4 +37,4 @@
"expected": "890E83616A3BC4640AB9B6374F21C81FF89CDDDBAFAA7475AE2A102A92E3EDB29FD7E874E23342813A60D9646948242646B7951CA046B4B36D7D6078506D3C9402F9308A019258C31049344F85F89D5229B531C845836F99B08601F113BCE036F9"
}
]
}


@@ -33,114 +33,49 @@
"valid_test_cases": [
{
"aggnonce": "0341432722C5CD0268D829C702CF0D1CBCE57033EED201FD335191385227C3210C03D377F2D258B64AADC0E16F26462323D701D286046A2EA93365656AFD9875982B",
"nonce_indices": [
0,
1
],
"key_indices": [
0,
1
],
"nonce_indices": [0, 1],
"key_indices": [0, 1],
"tweak_indices": [],
"is_xonly": [],
"psig_indices": [
0,
1
],
"psig_indices": [0, 1],
"expected": "041DA22223CE65C92C9A0D6C2CAC828AAF1EEE56304FEC371DDF91EBB2B9EF0912F1038025857FEDEB3FF696F8B99FA4BB2C5812F6095A2E0004EC99CE18DE1E"
},
{
"aggnonce": "0224AFD36C902084058B51B5D36676BBA4DC97C775873768E58822F87FE437D792028CB15929099EEE2F5DAE404CD39357591BA32E9AF4E162B8D3E7CB5EFE31CB20",
"nonce_indices": [
0,
2
],
"key_indices": [
0,
2
],
"nonce_indices": [0, 2],
"key_indices": [0, 2],
"tweak_indices": [],
"is_xonly": [],
"psig_indices": [
2,
3
],
"psig_indices": [2, 3],
"expected": "1069B67EC3D2F3C7C08291ACCB17A9C9B8F2819A52EB5DF8726E17E7D6B52E9F01800260A7E9DAC450F4BE522DE4CE12BA91AEAF2B4279219EF74BE1D286ADD9"
},
{
"aggnonce": "0208C5C438C710F4F96A61E9FF3C37758814B8C3AE12BFEA0ED2C87FF6954FF186020B1816EA104B4FCA2D304D733E0E19CEAD51303FF6420BFD222335CAA402916D",
"nonce_indices": [
0,
3
],
"key_indices": [
0,
2
],
"tweak_indices": [
0
],
"is_xonly": [
false
],
"psig_indices": [
4,
5
],
"nonce_indices": [0, 3],
"key_indices": [0, 2],
"tweak_indices": [0],
"is_xonly": [false],
"psig_indices": [4, 5],
"expected": "5C558E1DCADE86DA0B2F02626A512E30A22CF5255CAEA7EE32C38E9A71A0E9148BA6C0E6EC7683B64220F0298696F1B878CD47B107B81F7188812D593971E0CC"
},
{
"aggnonce": "02B5AD07AFCD99B6D92CB433FBD2A28FDEB98EAE2EB09B6014EF0F8197CD58403302E8616910F9293CF692C49F351DB86B25E352901F0E237BAFDA11F1C1CEF29FFD",
"nonce_indices": [
0,
4
],
"key_indices": [
0,
3
],
"tweak_indices": [
0,
1,
2
],
"is_xonly": [
true,
false,
true
],
"psig_indices": [
6,
7
],
"nonce_indices": [0, 4],
"key_indices": [0, 3],
"tweak_indices": [0, 1, 2],
"is_xonly": [true, false, true],
"psig_indices": [6, 7],
"expected": "839B08820B681DBA8DAF4CC7B104E8F2638F9388F8D7A555DC17B6E6971D7426CE07BF6AB01F1DB50E4E33719295F4094572B79868E440FB3DEFD3FAC1DB589E"
}
],
"error_test_cases": [
{
"aggnonce": "02B5AD07AFCD99B6D92CB433FBD2A28FDEB98EAE2EB09B6014EF0F8197CD58403302E8616910F9293CF692C49F351DB86B25E352901F0E237BAFDA11F1C1CEF29FFD",
"nonce_indices": [
0,
4
],
"key_indices": [
0,
3
],
"tweak_indices": [
0,
1,
2
],
"is_xonly": [
true,
false,
true
],
"psig_indices": [
7,
8
],
"nonce_indices": [0, 4],
"key_indices": [0, 3],
"tweak_indices": [0, 1, 2],
"is_xonly": [true, false, true],
"psig_indices": [7, 8],
"error": {
"type": "invalid_contribution",
"signer": 1
@@ -148,4 +83,4 @@
"comment": "Partial signature is invalid because it exceeds group size"
}
]
}


@@ -31,62 +31,32 @@
],
"valid_test_cases": [
{
"key_indices": [
0,
1,
2
],
"nonce_indices": [
0,
1,
2
],
"key_indices": [0, 1, 2],
"nonce_indices": [0, 1, 2],
"aggnonce_index": 0,
"msg_index": 0,
"signer_index": 0,
"expected": "012ABBCB52B3016AC03AD82395A1A415C48B93DEF78718E62A7A90052FE224FB"
},
{
"key_indices": [
1,
0,
2
],
"nonce_indices": [
1,
0,
2
],
"key_indices": [1, 0, 2],
"nonce_indices": [1, 0, 2],
"aggnonce_index": 0,
"msg_index": 0,
"signer_index": 1,
"expected": "9FF2F7AAA856150CC8819254218D3ADEEB0535269051897724F9DB3789513A52"
},
{
"key_indices": [
1,
2,
0
],
"nonce_indices": [
1,
2,
0
],
"key_indices": [1, 2, 0],
"nonce_indices": [1, 2, 0],
"aggnonce_index": 0,
"msg_index": 0,
"signer_index": 2,
"expected": "FA23C359F6FAC4E7796BB93BC9F0532A95468C539BA20FF86D7C76ED92227900"
},
{
"key_indices": [
0,
1
],
"nonce_indices": [
0,
3
],
"key_indices": [0, 1],
"nonce_indices": [0, 3],
"aggnonce_index": 1,
"msg_index": 0,
"signer_index": 0,
@@ -96,10 +66,7 @@
],
"sign_error_test_cases": [
{
"key_indices": [
1,
2
],
"key_indices": [1, 2],
"aggnonce_index": 0,
"msg_index": 0,
"secnonce_index": 0,
@@ -110,11 +77,7 @@
"comment": "The signers pubkey is not in the list of pubkeys"
},
{
"key_indices": [
1,
0,
3
],
"key_indices": [1, 0, 3],
"aggnonce_index": 0,
"msg_index": 0,
"secnonce_index": 0,
@@ -126,11 +89,7 @@
"comment": "Signer 2 provided an invalid public key"
},
{
"key_indices": [
1,
2,
0
],
"key_indices": [1, 2, 0],
"aggnonce_index": 2,
"msg_index": 0,
"secnonce_index": 0,
@@ -142,11 +101,7 @@
"comment": "Aggregate nonce is invalid due wrong tag, 0x04, in the first half"
},
{
"key_indices": [
1,
2,
0
],
"key_indices": [1, 2, 0],
"aggnonce_index": 3,
"msg_index": 0,
"secnonce_index": 0,
@@ -158,11 +113,7 @@
"comment": "Aggregate nonce is invalid because the second half does not correspond to an X coordinate"
},
{
"key_indices": [
1,
2,
0
],
"key_indices": [1, 2, 0],
"aggnonce_index": 4,
"msg_index": 0,
"secnonce_index": 0,
@@ -174,11 +125,7 @@
"comment": "Aggregate nonce is invalid because second half exceeds field size"
},
{
"key_indices": [
0,
1,
2
],
"key_indices": [0, 1, 2],
"aggnonce_index": 0,
"msg_index": 0,
"signer_index": 0,
@@ -193,48 +140,24 @@
"verify_fail_test_cases": [
{
"sig": "97AC833ADCB1AFA42EBF9E0725616F3C9A0D5B614F6FE283CEAAA37A8FFAF406",
"key_indices": [
0,
1,
2
],
"nonce_indices": [
0,
1,
2
],
"key_indices": [0, 1, 2],
"nonce_indices": [0, 1, 2],
"msg_index": 0,
"signer_index": 0,
"comment": "Wrong signature (which is equal to the negation of valid signature)"
},
{
"sig": "68537CC5234E505BD14061F8DA9E90C220A181855FD8BDB7F127BB12403B4D3B",
"key_indices": [
0,
1,
2
],
"nonce_indices": [
0,
1,
2
],
"key_indices": [0, 1, 2],
"nonce_indices": [0, 1, 2],
"msg_index": 0,
"signer_index": 1,
"comment": "Wrong signer"
},
{
"sig": "FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141",
"key_indices": [
0,
1,
2
],
"nonce_indices": [
0,
1,
2
],
"key_indices": [0, 1, 2],
"nonce_indices": [0, 1, 2],
"msg_index": 0,
"signer_index": 0,
"comment": "Signature exceeds group size"
@@ -243,16 +166,8 @@
"verify_error_test_cases": [
{
"sig": "68537CC5234E505BD14061F8DA9E90C220A181855FD8BDB7F127BB12403B4D3B",
"key_indices": [
0,
1,
2
],
"nonce_indices": [
4,
1,
2
],
"key_indices": [0, 1, 2],
"nonce_indices": [4, 1, 2],
"msg_index": 0,
"signer_index": 0,
"error": {
@@ -264,16 +179,8 @@
},
{
"sig": "68537CC5234E505BD14061F8DA9E90C220A181855FD8BDB7F127BB12403B4D3B",
"key_indices": [
3,
1,
2
],
"nonce_indices": [
0,
1,
2
],
"key_indices": [3, 1, 2],
"nonce_indices": [0, 1, 2],
"msg_index": 0,
"signer_index": 0,
"error": {


@@ -22,120 +22,46 @@
"msg": "F95466D086770E689964664219266FE5ED215C92AE20BAB5C9D79ADDDDF3C0CF",
"valid_test_cases": [
{
"key_indices": [
1,
2,
0
],
"nonce_indices": [
1,
2,
0
],
"tweak_indices": [
0
],
"is_xonly": [
true
],
"key_indices": [1, 2, 0],
"nonce_indices": [1, 2, 0],
"tweak_indices": [0],
"is_xonly": [true],
"signer_index": 2,
"expected": "E28A5C66E61E178C2BA19DB77B6CF9F7E2F0F56C17918CD13135E60CC848FE91",
"comment": "A single x-only tweak"
},
{
"key_indices": [
1,
2,
0
],
"nonce_indices": [
1,
2,
0
],
"tweak_indices": [
0
],
"is_xonly": [
false
],
"key_indices": [1, 2, 0],
"nonce_indices": [1, 2, 0],
"tweak_indices": [0],
"is_xonly": [false],
"signer_index": 2,
"expected": "38B0767798252F21BF5702C48028B095428320F73A4B14DB1E25DE58543D2D2D",
"comment": "A single plain tweak"
},
{
"key_indices": [
1,
2,
0
],
"nonce_indices": [
1,
2,
0
],
"tweak_indices": [
0,
1
],
"is_xonly": [
false,
true
],
"key_indices": [1, 2, 0],
"nonce_indices": [1, 2, 0],
"tweak_indices": [0, 1],
"is_xonly": [false, true],
"signer_index": 2,
"expected": "408A0A21C4A0F5DACAF9646AD6EB6FECD7F7A11F03ED1F48DFFF2185BC2C2408",
"comment": "A plain tweak followed by an x-only tweak"
},
{
"key_indices": [
1,
2,
0
],
"nonce_indices": [
1,
2,
0
],
"tweak_indices": [
0,
1,
2,
3
],
"is_xonly": [
false,
false,
true,
true
],
"key_indices": [1, 2, 0],
"nonce_indices": [1, 2, 0],
"tweak_indices": [0, 1, 2, 3],
"is_xonly": [false, false, true, true],
"signer_index": 2,
"expected": "45ABD206E61E3DF2EC9E264A6FEC8292141A633C28586388235541F9ADE75435",
"comment": "Four tweaks: plain, plain, x-only, x-only."
},
{
"key_indices": [
1,
2,
0
],
"nonce_indices": [
1,
2,
0
],
"tweak_indices": [
0,
1,
2,
3
],
"is_xonly": [
true,
false,
true,
false
],
"key_indices": [1, 2, 0],
"nonce_indices": [1, 2, 0],
"tweak_indices": [0, 1, 2, 3],
"is_xonly": [true, false, true, false],
"signer_index": 2,
"expected": "B255FDCAC27B40C7CE7848E2D3B7BF5EA0ED756DA81565AC804CCCA3E1D5D239",
"comment": "Four tweaks: x-only, plain, x-only, plain. If an implementation prohibits applying plain tweaks after x-only tweaks, it can skip this test vector or return an error."
@@ -143,22 +69,10 @@
],
"error_test_cases": [
{
"key_indices": [
1,
2,
0
],
"nonce_indices": [
1,
2,
0
],
"tweak_indices": [
4
],
"is_xonly": [
false
],
"key_indices": [1, 2, 0],
"nonce_indices": [1, 2, 0],
"tweak_indices": [4],
"is_xonly": [false],
"signer_index": 2,
"error": {
"type": "value",


@@ -25,16 +25,16 @@ An overview of the features provided by this package are as follows:
- Secret key generation, serialization, and parsing
- Public key generation, serialization and parsing per ANSI X9.62-1998
  - Parses uncompressed, compressed, and hybrid public keys
  - Serializes uncompressed and compressed public keys
- Specialized types for performing optimized and constant time field operations
  - `FieldVal` type for working modulo the secp256k1 field prime
  - `ModNScalar` type for working modulo the secp256k1 group order
- Elliptic curve operations in Jacobian projective coordinates
  - Point addition
  - Point doubling
  - Scalar multiplication with an arbitrary point
  - Scalar multiplication with the base point (group generator)
- Point decompression from a given x coordinate
- Nonce generation via RFC6979 with support for extra data and version
information that can be used to prevent nonce reuse between signing algorithms


@@ -25,7 +25,7 @@ it
For ubuntu, you need these:
sudo apt -y install build-essential autoconf libtool
For other linux distributions, the process is the same but the dependencies are
likely different. The main thing is it requires make, gcc/++, autoconf and
@@ -65,4 +65,4 @@ coordinate and this is incorrect for nostr. It will be enabled soon... for now
it is done with the `btcec` fallback version. This is slower, however previous
tests have shown that this ECDH library is fast enough to enable 8mb/s
throughput per CPU thread when used to generate a distinct secret for TCP
packets. The C library will likely raise this to 20mb/s or more.


@@ -95,9 +95,9 @@ Note that, because of the scheduling overhead, for small messages (< 1 MB) you
will be better off using the regular SHA256 hashing (but those are typically not
performance critical anyway). Some other tips to get the best performance:
- Have many go routines doing SHA256 calculations in parallel.
- Try to Write() messages in multiples of 64 bytes.
- Try to keep the overall length of messages to a roughly similar size ie. 5
MB (this way all 16 lanes in the AVX512 computations are contributing as
much as possible).
@@ -128,7 +128,7 @@ Below is the speed in MB/s for a single core (ranked fast to slow) for blocks
larger than 1 MB.
| Processor | SIMD | Speed (MB/s) |
| --------------------------------- | ------- | -----------: |
| 3.0 GHz Intel Xeon Platinum 8124M | AVX512 | 3498 |
| 3.7 GHz AMD Ryzen 7 2700X | SHA Ext | 1979 |
| 1.2 GHz ARM Cortex-A53 | ARM64 | 638 |
@@ -160,18 +160,18 @@ Below you can see a small excerpt highlighting one of the rounds as is done for
the SHA256 calculation process (for full code
see [sha256block_arm64.s](https://github.com/minio/sha256-simd/blob/master/sha256block_arm64.s)).
```
sha256h q2, q3, v9.4s
sha256h2 q3, q4, v9.4s
sha256su0 v5.4s, v6.4s
rev32 v8.16b, v8.16b
add v9.4s, v7.4s, v18.4s
mov v4.16b, v2.16b
sha256h q2, q3, v10.4s
sha256h2 q3, q4, v10.4s
sha256su0 v6.4s, v7.4s
sha256su1 v5.4s, v7.4s, v8.4s
```
### Detailed benchmarks


@@ -6,6 +6,7 @@ import (
)
const (
None = "none"
// Read means read only
Read = "read"
// Write means read and write
@@ -14,9 +15,6 @@ const (
Admin = "admin"
// Owner means read, write, import/export, arbitrary delete and wipe
Owner = "owner"
	// Group applies to communities and other groups; the content after the
	// prefix is a set of comma separated <permission>:<pubkey> pairs
	// designating permissions for groups.
Group = "group:"
)
type I interface {


@@ -0,0 +1,5 @@
// Package httpauth provides helpers and encoders for nostr NIP-98 HTTP
// authentication header messages and a new JWT authentication message and
// delegation event kind 13004 that enables time limited expiring delegations of
// authentication (as with NIP-42 auth) for the HTTP API.
package httpauth


@@ -0,0 +1,75 @@
package httpauth
import (
"encoding/base64"
"net/http"
"net/url"
"strings"
"lol.mleku.dev/chk"
"next.orly.dev/pkg/encoders/event"
"next.orly.dev/pkg/encoders/kind"
"next.orly.dev/pkg/encoders/tag"
"next.orly.dev/pkg/encoders/timestamp"
"next.orly.dev/pkg/interfaces/signer"
)
const (
HeaderKey = "Authorization"
NIP98Prefix = "Nostr"
)
// MakeNIP98Event creates a new NIP-98 event. If expiry is nonzero, an
// expiration tag is added and method is ignored; otherwise a method tag is
// added.
func MakeNIP98Event(u, method, hash string, expiry int64) (ev *event.E) {
var t []*tag.T
t = append(t, tag.NewFromAny("u", u))
if expiry > 0 {
t = append(
t,
tag.NewFromAny("expiration", timestamp.FromUnix(expiry).String()),
)
} else {
t = append(
t,
tag.NewFromAny("method", strings.ToUpper(method)),
)
}
if hash != "" {
t = append(t, tag.NewFromAny("payload", hash))
}
ev = &event.E{
CreatedAt: timestamp.Now().V,
Kind: kind.HTTPAuth.K,
Tags: tag.NewS(t...),
}
return
}
func CreateNIP98Blob(
ur, method, hash string, expiry int64, sign signer.I,
) (blob string, err error) {
ev := MakeNIP98Event(ur, method, hash, expiry)
if err = ev.Sign(sign); chk.E(err) {
return
}
// log.T.F("nip-98 http auth event:\n%s\n", ev.SerializeIndented())
blob = base64.URLEncoding.EncodeToString(ev.Serialize())
return
}
// AddNIP98Header creates a NIP-98 http auth event and adds the standard header to a provided
// http.Request.
func AddNIP98Header(
r *http.Request, ur *url.URL, method, hash string,
sign signer.I, expiry int64,
) (err error) {
var b64 string
if b64, err = CreateNIP98Blob(
ur.String(), method, hash, expiry, sign,
); chk.E(err) {
return
}
r.Header.Add(HeaderKey, "Nostr "+b64)
return
}


@@ -0,0 +1,191 @@
package httpauth
import (
"encoding/base64"
"fmt"
"net/http"
"strings"
"time"
"lol.mleku.dev/chk"
"lol.mleku.dev/errorf"
"lol.mleku.dev/log"
"next.orly.dev/pkg/encoders/event"
"next.orly.dev/pkg/encoders/ints"
"next.orly.dev/pkg/encoders/kind"
)
var ErrMissingKey = fmt.Errorf(
"'%s' key missing from request header", HeaderKey,
)
// CheckAuth verifies a received http.Request has got a valid authentication
// event in it, with an optional specification for tolerance of before and
// after, and provides the public key that should be verified to be authorized
// to access the resource associated with the request.
func CheckAuth(r *http.Request, tolerance ...time.Duration) (
valid bool,
pubkey []byte, err error,
) {
val := r.Header.Get(HeaderKey)
if val == "" {
err = ErrMissingKey
valid = true
return
}
if len(tolerance) == 0 {
tolerance = append(tolerance, time.Minute)
}
// log.I.S(tolerance)
if tolerance[0] == 0 {
tolerance[0] = time.Minute
}
tolerate := int64(tolerance[0] / time.Second)
log.T.C(func() string { return fmt.Sprintf("validating auth '%s'", val) })
switch {
case strings.HasPrefix(val, NIP98Prefix):
split := strings.Split(val, " ")
if len(split) == 1 {
err = errorf.E(
"missing nip-98 auth event from '%s' http header key: '%s'",
HeaderKey, val,
)
return
}
if len(split) > 2 {
err = errorf.E(
"extraneous content after the second space-separated field: %s",
val,
)
return
}
var evb []byte
if evb, err = base64.URLEncoding.DecodeString(split[1]); chk.E(err) {
return
}
ev := event.New()
var rem []byte
if rem, err = ev.Unmarshal(evb); chk.E(err) {
return
}
if len(rem) > 0 {
err = errorf.E("unexpected trailing data after auth event: %s", rem)
return
}
// log.T.F("received http auth event:\n%s\n", ev.SerializeIndented())
// The kind MUST be 27235.
if ev.Kind != kind.HTTPAuth.K {
err = errorf.E(
"invalid kind %d %s in nip-98 http auth event, require %d %s",
ev.Kind, kind.GetString(ev.Kind), kind.HTTPAuth.K,
kind.HTTPAuth.Name(),
)
return
}
// if there is an expiration timestamp, check it supersedes the
// created_at for validity.
exp := ev.Tags.GetAll([]byte("expiration"))
if len(exp) > 1 {
err = errorf.E(
"more than one \"expiration\" tag found",
)
return
}
var expiring bool
if len(exp) == 1 {
ex := ints.New(0)
exp1 := exp[0]
if rem, err = ex.Unmarshal(exp1.Value()); chk.E(err) {
return
}
tn := time.Now().Unix()
if tn > ex.Int64()+tolerate {
err = errorf.E(
"HTTP auth event is expired: now %d is past expiry (with tolerance) %d",
tn, ex.Int64()+tolerate,
)
return
}
expiring = true
} else {
// The created_at timestamp MUST be within a reasonable time window
// (suggestion 60 seconds)
ts := ev.CreatedAt
tn := time.Now().Unix()
if ts < tn-tolerate || ts > tn+tolerate {
err = errorf.E(
"timestamp %d is more than %d seconds divergent from now %d",
ts, tolerate, tn,
)
return
}
}
ut := ev.Tags.GetAll([]byte("u"))
if len(ut) != 1 {
err = errorf.E(
"expected exactly one \"u\" tag, found %d", len(ut),
)
return
}
// The u tag MUST be exactly the same as the absolute request URL
// (including query parameters).
proto := r.URL.Scheme
// if this came through a proxy, we need to get the protocol to match
// the event
if p := r.Header.Get("X-Forwarded-Proto"); p != "" {
proto = p
}
if proto == "" {
proto = "http"
}
fullUrl := proto + "://" + r.Host + r.URL.RequestURI()
evUrl := string(ut[0].Value())
log.T.F("full URL: %s event u tag value: %s", fullUrl, evUrl)
if expiring {
// if it is expiring, the URL only needs to be the same prefix to
// allow its use with multiple endpoints.
if !strings.HasPrefix(fullUrl, evUrl) {
err = errorf.E(
"request URL %s is not prefixed with the u tag URL %s",
fullUrl, evUrl,
)
return
}
} else if fullUrl != evUrl {
err = errorf.E(
"request has URL %s but signed nip-98 event has url %s",
fullUrl, string(ut[0].Value()),
)
return
}
if !expiring {
// The method tag MUST be the same HTTP method used for the
// requested resource.
mt := ev.Tags.GetAll([]byte("method"))
if len(mt) != 1 {
err = errorf.E(
"expected exactly one \"method\" tag, found %d", len(mt),
)
return
}
if !strings.EqualFold(string(mt[0].Value()), r.Method) {
err = errorf.E(
"request has method %s but event has method %s",
string(mt[0].Value()), r.Method,
)
return
}
}
if valid, err = ev.Verify(); chk.E(err) {
return
}
if !valid {
return
}
pubkey = ev.Pubkey
default:
err = errorf.E("invalid '%s' value: '%s'", HeaderKey, val)
return
}
return
}


@@ -28,7 +28,7 @@ err = client.Request(ctx, "make_invoice", params, &invoice)
## Methods
- `get_info` - Get wallet info
- `get_balance` - Get wallet balance
- `make_invoice` - Create invoice
- `lookup_invoice` - Check invoice status
- `pay_invoice` - Pay invoice
@@ -53,4 +53,4 @@ err = client.SubscribeNotifications(ctx, func(notificationType string, notificat
- Event signing
- Relay communication
- Payment notifications
- Error handling


@@ -23,6 +23,10 @@ import (
const (
OneTimeSpiderSyncMarker = "spider_one_time_sync_completed"
SpiderLastScanMarker = "spider_last_scan_time"
// MaxWebSocketMessageSize is the maximum size for WebSocket messages to avoid 32KB limit
MaxWebSocketMessageSize = 30 * 1024 // 30KB to be safe
// PubkeyHexSize is the size of a hex-encoded pubkey (32 bytes = 64 hex chars)
PubkeyHexSize = 64
)
type Spider struct {
@@ -271,11 +275,33 @@ func (s *Spider) discoverRelays(followedPubkeys [][]byte) ([]string, error) {
return urls, nil
}
// calculateOptimalChunkSize calculates the optimal chunk size for pubkeys to stay under message size limit
func (s *Spider) calculateOptimalChunkSize() int {
// Estimate the size of a filter with timestamps and other fields
// Base filter overhead: ~200 bytes for timestamps, limits, etc.
baseFilterSize := 200
// Calculate how many pubkeys we can fit in the remaining space
availableSpace := MaxWebSocketMessageSize - baseFilterSize
maxPubkeys := availableSpace / PubkeyHexSize
// Use a conservative chunk size (80% of max to be safe)
chunkSize := int(float64(maxPubkeys) * 0.8)
// Ensure minimum chunk size of 10
if chunkSize < 10 {
chunkSize = 10
}
log.D.F("Spider: calculated optimal chunk size: %d pubkeys (max would be %d)", chunkSize, maxPubkeys)
return chunkSize
}
// queryRelayForEvents connects to a relay and queries for events from followed pubkeys
func (s *Spider) queryRelayForEvents(
relayURL string, followedPubkeys [][]byte, startTime, endTime time.Time,
) (int, error) {
log.T.F("Spider sync: querying relay %s", relayURL)
log.T.F("Spider sync: querying relay %s with %d pubkeys", relayURL, len(followedPubkeys))
// Connect to the relay with a timeout context
ctx, cancel := context.WithTimeout(s.ctx, 30*time.Second)
@@ -287,82 +313,110 @@ func (s *Spider) queryRelayForEvents(
}
defer client.Close()
// Create filter for the time range and followed pubkeys
f := &filter.F{
Authors: tag.NewFromBytesSlice(followedPubkeys...),
Since: timestamp.FromUnix(startTime.Unix()),
Until: timestamp.FromUnix(endTime.Unix()),
Limit: func() *uint { l := uint(1000); return &l }(), // Limit to avoid overwhelming
}
// Break pubkeys into chunks to avoid 32KB message limit
chunkSize := s.calculateOptimalChunkSize()
totalEventsSaved := 0
// Subscribe to get events
sub, err := client.Subscribe(ctx, filter.NewS(f))
if err != nil {
return 0, err
}
defer sub.Unsub()
eventsCount := 0
eventsSaved := 0
timeout := time.After(10 * time.Second) // Timeout for receiving events
for {
select {
case <-ctx.Done():
log.T.F(
"Spider sync: context done for relay %s, saved %d/%d events",
relayURL, eventsSaved, eventsCount,
)
return eventsSaved, nil
case <-timeout:
log.T.F(
"Spider sync: timeout for relay %s, saved %d/%d events",
relayURL, eventsSaved, eventsCount,
)
return eventsSaved, nil
case <-sub.EndOfStoredEvents:
log.T.F(
"Spider sync: end of stored events for relay %s, saved %d/%d events",
relayURL, eventsSaved, eventsCount,
)
return eventsSaved, nil
case ev := <-sub.Events:
if ev == nil {
continue
}
eventsCount++
// Verify the event signature
if ok, err := ev.Verify(); !ok || err != nil {
log.T.F(
"Spider sync: invalid event signature from relay %s",
relayURL,
)
ev.Free()
continue
}
// Save the event to the database
if _, _, err := s.db.SaveEvent(s.ctx, ev); err != nil {
if !strings.HasPrefix(err.Error(), "blocked:") {
log.T.F(
"Spider sync: error saving event from relay %s: %v",
relayURL, err,
)
}
// Event might already exist, which is fine for deduplication
} else {
eventsSaved++
if eventsSaved%10 == 0 {
log.T.F(
"Spider sync: saved %d events from relay %s",
eventsSaved, relayURL,
)
}
}
ev.Free()
for i := 0; i < len(followedPubkeys); i += chunkSize {
end := i + chunkSize
if end > len(followedPubkeys) {
end = len(followedPubkeys)
}
chunk := followedPubkeys[i:end]
log.T.F("Spider sync: processing chunk %d-%d (%d pubkeys) for relay %s",
i, end-1, len(chunk), relayURL)
// Create filter for this chunk of pubkeys
f := &filter.F{
Authors: tag.NewFromBytesSlice(chunk...),
Since: timestamp.FromUnix(startTime.Unix()),
Until: timestamp.FromUnix(endTime.Unix()),
Limit: func() *uint { l := uint(1000); return &l }(), // Limit to avoid overwhelming
}
// Subscribe to get events for this chunk
sub, err := client.Subscribe(ctx, filter.NewS(f))
if err != nil {
log.E.F("Spider sync: failed to subscribe to chunk %d-%d for relay %s: %v",
i, end-1, relayURL, err)
continue
}
chunkEventsSaved := 0
chunkEventsCount := 0
timeout := time.After(10 * time.Second) // Timeout for receiving events
chunkDone := false
for !chunkDone {
select {
case <-ctx.Done():
log.T.F(
"Spider sync: context done for relay %s chunk %d-%d, saved %d/%d events",
relayURL, i, end-1, chunkEventsSaved, chunkEventsCount,
)
chunkDone = true
case <-timeout:
log.T.F(
"Spider sync: timeout for relay %s chunk %d-%d, saved %d/%d events",
relayURL, i, end-1, chunkEventsSaved, chunkEventsCount,
)
chunkDone = true
case <-sub.EndOfStoredEvents:
log.T.F(
"Spider sync: end of stored events for relay %s chunk %d-%d, saved %d/%d events",
relayURL, i, end-1, chunkEventsSaved, chunkEventsCount,
)
chunkDone = true
case ev := <-sub.Events:
if ev == nil {
continue
}
chunkEventsCount++
// Verify the event signature
if ok, err := ev.Verify(); !ok || err != nil {
log.T.F(
"Spider sync: invalid event signature from relay %s",
relayURL,
)
ev.Free()
continue
}
// Save the event to the database
if _, _, err := s.db.SaveEvent(s.ctx, ev); err != nil {
if !strings.HasPrefix(err.Error(), "blocked:") {
log.T.F(
"Spider sync: error saving event from relay %s: %v",
relayURL, err,
)
}
// Event might already exist, which is fine for deduplication
} else {
chunkEventsSaved++
if chunkEventsSaved%10 == 0 {
log.T.F(
"Spider sync: saved %d events from relay %s chunk %d-%d",
chunkEventsSaved, relayURL, i, end-1,
)
}
}
ev.Free()
}
}
// Clean up subscription
sub.Unsub()
totalEventsSaved += chunkEventsSaved
log.T.F("Spider sync: completed chunk %d-%d for relay %s, saved %d events",
i, end-1, relayURL, chunkEventsSaved)
}
log.T.F("Spider sync: completed all chunks for relay %s, total saved %d events",
relayURL, totalEventsSaved)
return totalEventsSaved, nil
}
// Stop stops the spider functionality


@@ -4,14 +4,15 @@ coverage:
precision: 2
status:
project: # measuring the overall project coverage
default: # context, you can create multiple ones with custom titles
enabled: yes # must be yes|true to enable this status
target: 100 # specify the target coverage for each commit status
# option: "auto" (must increase from parent commit or pull request base)
# option: "X%" a static target percentage to hit
if_not_found: success # if parent is not found report status as success, error, or failure
if_ci_failed: error # if ci fails report status as success, error, or failure
project: # measuring the overall project coverage
default: # context, you can create multiple ones with custom titles
enabled: yes # must be yes|true to enable this status
target:
100 # specify the target coverage for each commit status
# option: "auto" (must increase from parent commit or pull request base)
# option: "X%" a static target percentage to hit
if_not_found: success # if parent is not found report status as success, error, or failure
if_ci_failed: error # if ci fails report status as success, error, or failure
# Also update COVER_IGNORE_PKGS in the Makefile.
ignore:


@@ -1,24 +1,31 @@
# Changelog
All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## Unreleased
- No changes yet.
## [1.11.0] - 2023-05-02
### Fixed
- Fix `Swap` and `CompareAndSwap` for `Value` wrappers without initialization.
### Added
- Add `String` method to `atomic.Pointer[T]` type allowing users to safely print
underlying values of pointers.
[1.11.0]: https://github.com/uber-go/atomic/compare/v1.10.0...v1.11.0
## [1.10.0] - 2022-08-11
### Added
- Add `atomic.Float32` type for atomic operations on `float32`.
- Add `CompareAndSwap` and `Swap` methods to `atomic.String`, `atomic.Error`,
and `atomic.Value`.
@@ -27,6 +34,7 @@ underlying values of pointers.
replacement for the standard library's `sync/atomic.Pointer` type.
### Changed
- Deprecate `CAS` methods on all types in favor of corresponding
`CompareAndSwap` methods.
@@ -35,46 +43,59 @@ Thanks to @eNV25 and @icpd for their contributions to this release.
[1.10.0]: https://github.com/uber-go/atomic/compare/v1.9.0...v1.10.0
## [1.9.0] - 2021-07-15
### Added
- Add `Float64.Swap` to match int atomic operations.
- Add `atomic.Time` type for atomic operations on `time.Time` values.
[1.9.0]: https://github.com/uber-go/atomic/compare/v1.8.0...v1.9.0
## [1.8.0] - 2021-06-09
### Added
- Add `atomic.Uintptr` type for atomic operations on `uintptr` values.
- Add `atomic.UnsafePointer` type for atomic operations on `unsafe.Pointer` values.
[1.8.0]: https://github.com/uber-go/atomic/compare/v1.7.0...v1.8.0
## [1.7.0] - 2020-09-14
### Added
- Support JSON serialization and deserialization of primitive atomic types.
- Support Text marshalling and unmarshalling for string atomics.
### Changed
- Disallow incorrect comparison of atomic values in a non-atomic way.
### Removed
- Remove dependency on `golang.org/x/{lint, tools}`.
[1.7.0]: https://github.com/uber-go/atomic/compare/v1.6.0...v1.7.0
## [1.6.0] - 2020-02-24
### Changed
- Drop library dependency on `golang.org/x/{lint, tools}`.
[1.6.0]: https://github.com/uber-go/atomic/compare/v1.5.1...v1.6.0
## [1.5.1] - 2019-11-19
- Fix bug where `Bool.CAS` and `Bool.Toggle` do not work correctly together
causing `CAS` to fail even though the old value matches.
[1.5.1]: https://github.com/uber-go/atomic/compare/v1.5.0...v1.5.1
## [1.5.0] - 2019-10-29
### Changed
- With Go modules, only the `go.uber.org/atomic` import path is supported now.
If you need to use the old import path, please add a `replace` directive to
your `go.mod`.
@@ -82,43 +103,57 @@ Thanks to @eNV25 and @icpd for their contributions to this release.
[1.5.0]: https://github.com/uber-go/atomic/compare/v1.4.0...v1.5.0
## [1.4.0] - 2019-05-01
### Added
- Add `atomic.Error` type for atomic operations on `error` values.
[1.4.0]: https://github.com/uber-go/atomic/compare/v1.3.2...v1.4.0
## [1.3.2] - 2018-05-02
### Added
- Add `atomic.Duration` type for atomic operations on `time.Duration` values.
[1.3.2]: https://github.com/uber-go/atomic/compare/v1.3.1...v1.3.2
## [1.3.1] - 2017-11-14
### Fixed
- Revert optimization for `atomic.String.Store("")` which caused data races.
[1.3.1]: https://github.com/uber-go/atomic/compare/v1.3.0...v1.3.1
## [1.3.0] - 2017-11-13
### Added
- Add `atomic.Bool.CAS` for compare-and-swap semantics on bools.
### Changed
- Optimize `atomic.String.Store("")` by avoiding an allocation.
[1.3.0]: https://github.com/uber-go/atomic/compare/v1.2.0...v1.3.0
## [1.2.0] - 2017-04-12
### Added
- Shadow `atomic.Value` from `sync/atomic`.
[1.2.0]: https://github.com/uber-go/atomic/compare/v1.1.0...v1.2.0
## [1.1.0] - 2017-03-10
### Added
- Add atomic `Float64` type.
### Changed
- Support new `go.uber.org/atomic` import path.
[1.1.0]: https://github.com/uber-go/atomic/compare/v1.0.0...v1.1.0


@@ -30,4 +30,4 @@ Stable.
---
Released under the [MIT License](LICENSE.txt).


@@ -1,2 +1,3 @@
# interrupt
Handle shutdowns cleanly and enable hot reload


@@ -1 +1 @@
v0.10.5
v0.12.2


@@ -49,7 +49,7 @@ To build with the embedded web interface:
[source,bash]
----
# Build the React web application
# Build the Svelte web application
cd app/web
bun install
bun run build
@@ -59,13 +59,25 @@ cd ../../
go build -o orly
----
You can automate this process with a build script:
The recommended way to build and embed the web UI is using the provided script:
[source,bash]
----
./scripts/update-embedded-web.sh
----
This script will:
- Build the Svelte app in `app/web` to `app/web/dist` using Bun (preferred) or fall back to npm/yarn/pnpm
- Run `go install` from the repository root so the binary picks up the new embedded assets
- Automatically detect and use the best available JavaScript package manager
For manual builds, you can also use:
[source,bash]
----
#!/bin/bash
# build.sh
echo "Building React app..."
echo "Building Svelte app..."
cd app/web
bun install
bun run build
@@ -79,6 +91,324 @@ echo "Build complete!"
Make it executable with `chmod +x build.sh` and run with `./build.sh`.
== web UI
ORLY includes a modern web-based user interface built with link:https://svelte.dev/[Svelte] that provides comprehensive relay management capabilities.
=== features
The web UI offers:
* **Authentication**: Secure login using Nostr key pairs with challenge-response authentication
* **Event Management**: View, export, and import Nostr events with advanced filtering and search
* **User Administration**: Manage user permissions and roles (admin/owner)
* **Sprocket Management**: Configure and manage external event processing scripts
* **Real-time Updates**: Live event streaming and status updates
* **Dark/Light Theme**: Toggle between themes with persistent preferences
* **Responsive Design**: Works on desktop and mobile devices
=== authentication
The web UI uses Nostr-native authentication:
1. **Challenge Generation**: Server generates a cryptographic challenge
2. **Signature Verification**: Client signs the challenge with their private key
3. **Session Management**: Authenticated sessions with role-based permissions
Supported authentication methods:
- Direct private key input
- Nostr extension integration
- Hardware wallet support
=== user roles
* **Guest**: Read-only access to public events
* **User**: Can publish events and manage their own content
* **Admin**: Full relay management except sprocket configuration
* **Owner**: Complete control including sprocket management and system configuration
=== event management
The interface provides comprehensive event management:
* **Event Browser**: Paginated view of all events with filtering by kind, author, and content
* **Export Functionality**: Export events in JSON format with configurable date ranges
* **Import Capability**: Bulk import events (admin/owner only)
* **Search**: Full-text search across event content and metadata
* **Event Details**: Expandable view showing full event JSON and metadata
=== sprocket integration
The web UI includes a dedicated sprocket management interface:
* **Status Monitoring**: Real-time status of sprocket scripts
* **Script Upload**: Upload and manage sprocket scripts
* **Version Control**: Track and manage multiple script versions
* **Configuration**: Configure sprocket parameters and settings
* **Logs**: View sprocket execution logs and errors
=== development mode
For development, the web UI supports hot-reloading:
[source,bash]
----
# Enable development proxy
export ORLY_WEB_DISABLE_EMBEDDED=true
export ORLY_WEB_DEV_PROXY_URL=localhost:5000
# Start relay
./orly
# In another terminal, start Svelte dev server
cd app/web
bun run dev
----
This allows for rapid development with automatic reloading of changes.
== sprocket event sifter interface
The sprocket system provides a powerful interface for external event processing scripts, allowing you to implement custom filtering, validation, and processing logic for Nostr events before they are stored in the relay.
=== overview
Sprocket scripts receive events via stdin and respond with JSONL (JSON Lines) format, enabling real-time event processing with three possible actions:
* **accept**: Continue with normal event processing
* **reject**: Return OK false to client with rejection message
* **shadowReject**: Return OK true to client but abort processing (useful for spam filtering)
=== how it works
1. **Event Reception**: Events are sent to the sprocket script as JSON objects via stdin
2. **Processing**: Script analyzes the event and applies custom logic
3. **Response**: Script responds with JSONL containing the decision and optional message
4. **Action**: Relay processes the response and either accepts, rejects, or shadow rejects the event
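The four steps above can be sketched from the relay's side as a round trip over pipes. This is an illustrative Python harness, not the relay's actual Go implementation; the inline accept-everything sprocket stands in for `~/.config/ORLY/sprocket.sh`:

```python
import json
import subprocess
import sys

# Stand-in sprocket that accepts every event (inlined so the sketch is
# self-contained; a real deployment would exec the configured script).
SPROCKET = (
    "import sys, json\n"
    "for line in sys.stdin:\n"
    "    ev = json.loads(line)\n"
    "    print(json.dumps({'id': ev['id'], 'action': 'accept', 'msg': ''}))\n"
    "    sys.stdout.flush()\n"
)

def ask_sprocket(event):
    """Send one event as a JSON line and read back one JSONL decision."""
    proc = subprocess.Popen(
        [sys.executable, "-c", SPROCKET],
        stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
    )
    out, _ = proc.communicate(json.dumps(event) + "\n", timeout=10)
    return json.loads(out.splitlines()[0])

decision = ask_sprocket({"id": "abc", "kind": 1, "content": "hi"})
print(decision["action"])  # accept
```

A long-lived relay would keep one sprocket process running and stream many lines through it rather than spawning a process per event as this sketch does.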
=== script protocol
==== input format
Events are sent as JSON objects, one per line:
```json
{
"id": "event_id_here",
"kind": 1,
"content": "Hello, world!",
"pubkey": "author_pubkey",
"tags": [["t", "hashtag"], ["p", "reply_pubkey"]],
"created_at": 1640995200,
"sig": "signature_here"
}
```
==== output format
Scripts must respond with JSONL format:
```json
{"id": "event_id", "action": "accept", "msg": ""}
{"id": "event_id", "action": "reject", "msg": "reason for rejection"}
{"id": "event_id", "action": "shadowReject", "msg": ""}
```
=== configuration
Enable sprocket processing:
[source,bash]
----
export ORLY_SPROCKET_ENABLED=true
export ORLY_APP_NAME="ORLY"
----
The sprocket script should be placed at:
`~/.config/{ORLY_APP_NAME}/sprocket.sh`
For example, with default `ORLY_APP_NAME="ORLY"`:
`~/.config/ORLY/sprocket.sh`
Backup files are automatically created when updating sprocket scripts via the web UI, with timestamps like:
`~/.config/ORLY/sprocket.sh.20240101120000`
=== manual sprocket updates
For manual sprocket script updates, you can use the stop/write/restart method:
1. **Stop the relay**:
```bash
# Send SIGINT to gracefully stop
kill -INT <relay_pid>
```
2. **Write new sprocket script**:
```bash
# Create/update the sprocket script
cat > ~/.config/ORLY/sprocket.sh << 'EOF'
#!/bin/bash
while read -r line; do
if [[ -n "$line" ]]; then
event_id=$(echo "$line" | jq -r '.id')
echo "{\"id\":\"$event_id\",\"action\":\"accept\",\"msg\":\"\"}"
fi
done
EOF
# Make it executable
chmod +x ~/.config/ORLY/sprocket.sh
```
3. **Restart the relay**:
```bash
./orly
```
The relay will automatically detect the new sprocket script and start it. If the script fails, sprocket will be disabled and all events rejected until the script is fixed.
=== failure handling
When sprocket is enabled but fails to start or crashes:
1. **Automatic Disable**: Sprocket is automatically disabled
2. **Event Rejection**: All incoming events are rejected with error message
3. **Periodic Recovery**: Every 30 seconds, the system checks whether the sprocket script has become available again
4. **Auto-Restart**: If the script is found, sprocket is automatically re-enabled and restarted
This ensures that:
- Relay continues running even when sprocket fails
- No events are processed without proper sprocket filtering
- Sprocket automatically recovers when the script is fixed
- Clear error messages inform users about the sprocket status
- Error messages include the exact file location for easy fixes
When sprocket fails, the error message will show:
`sprocket disabled due to failure - all events will be rejected (script location: ~/.config/ORLY/sprocket.sh)`
This makes it easy to locate and fix the sprocket script file.
=== example script
Here's a Python example that implements various filtering criteria:
[source,python]
----
#!/usr/bin/env python3
import json
import sys
def process_event(event_json):
event_id = event_json.get('id', '')
event_content = event_json.get('content', '')
event_kind = event_json.get('kind', 0)
# Reject spam content
if 'spam' in event_content.lower():
return {
'id': event_id,
'action': 'reject',
'msg': 'Content contains spam'
}
# Shadow reject test events
if event_kind == 9999:
return {
'id': event_id,
'action': 'shadowReject',
'msg': ''
}
# Accept all other events
return {
'id': event_id,
'action': 'accept',
'msg': ''
}
# Main processing loop
for line in sys.stdin:
if line.strip():
try:
event = json.loads(line)
response = process_event(event)
print(json.dumps(response))
sys.stdout.flush()
except json.JSONDecodeError:
continue
----
=== bash example
A simple bash script example:
[source,bash]
----
#!/bin/bash
while read -r line; do
if [[ -n "$line" ]]; then
# Extract event ID
event_id=$(echo "$line" | jq -r '.id')
# Check for spam content
if echo "$line" | jq -r '.content' | grep -qi "spam"; then
echo "{\"id\":\"$event_id\",\"action\":\"reject\",\"msg\":\"Spam detected\"}"
else
echo "{\"id\":\"$event_id\",\"action\":\"accept\",\"msg\":\"\"}"
fi
fi
done
----
=== testing
Test your sprocket script directly:
[source,bash]
----
# Test with sample event
echo '{"id":"test","kind":1,"content":"spam test"}' | python3 sprocket.py
# Expected output:
# {"id": "test", "action": "reject", "msg": "Content contains spam"}
----
Run the comprehensive test suite:
[source,bash]
----
./test-sprocket-complete.sh
----
=== web UI management
The web UI provides a complete sprocket management interface:
* **Status Monitoring**: View real-time sprocket status and health
* **Script Upload**: Upload new sprocket scripts via the web interface
* **Version Management**: Track and manage multiple script versions
* **Configuration**: Configure sprocket parameters and settings
* **Logs**: View execution logs and error messages
* **Restart**: Restart sprocket scripts without relay restart
=== use cases
Common sprocket use cases include:
* **Spam Filtering**: Detect and reject spam content
* **Content Moderation**: Implement custom content policies
* **Rate Limiting**: Control event publishing rates
* **Event Validation**: Additional validation beyond Nostr protocol
* **Analytics**: Log and analyze event patterns
* **Integration**: Connect with external services and APIs
=== performance considerations
* Sprocket scripts run synchronously and can impact relay performance
* Keep processing logic efficient and fast
* Use appropriate timeouts to prevent blocking
* Consider using shadow reject for non-critical filtering to maintain user experience
== secp256k1 dependency
ORLY uses the optimized `libsecp256k1` C library from Bitcoin Core for schnorr signatures, providing 4x faster signing and ECDH operations compared to pure Go implementations.


@@ -0,0 +1,228 @@
# Sprocket Test Suite
This directory contains a comprehensive test suite for the ORLY relay's sprocket event processing system.
## Overview
The sprocket system allows external scripts to process Nostr events before they are stored in the relay. Events are sent to the sprocket script via stdin, and the script responds with JSONL messages indicating whether to accept, reject, or shadow reject the event.
## Test Files
### Core Test Files
- **`test-sprocket.py`** - Python sprocket script that implements various filtering criteria
- **`test-sprocket-integration.go`** - Go integration tests using the testing framework
- **`test-sprocket-complete.sh`** - Complete test suite that starts relay and runs tests
- **`test-sprocket-manual.sh`** - Manual test script for interactive testing
- **`run-sprocket-test.sh`** - Automated test runner
### Example Scripts
- **`test-sprocket-example.sh`** - Simple bash example sprocket script
## Test Criteria
The Python sprocket script (`test-sprocket.py`) implements the following test criteria:
1. **Spam Content**: Rejects events containing "spam" in the content
2. **Test Kind**: Shadow rejects events with kind 9999
3. **Blocked Hashtags**: Rejects events with hashtags "blocked", "rejected", or "test-block"
4. **Blocked Pubkeys**: Shadow rejects events from pubkeys starting with "00000000", "11111111", or "22222222"
5. **Content Length**: Rejects events with content longer than 1000 characters
6. **Timestamp Validation**: Rejects events that are too old (>1 hour) or too far in the future (>5 minutes)
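Criteria 1–5 above can be condensed into a small stand-alone sketch (a paraphrase of `test-sprocket.py`, not the script itself; the timestamp check is omitted for brevity):

```python
MAX_CONTENT = 1000
BLOCKED_TAGS = {"blocked", "rejected", "test-block"}

def decide(event):
    """Return a sprocket decision for one event dict."""
    eid = event.get("id", "")
    if "spam" in event.get("content", "").lower():
        return {"id": eid, "action": "reject", "msg": "Content contains spam"}
    if event.get("kind") == 9999:
        return {"id": eid, "action": "shadowReject", "msg": ""}
    for tag in event.get("tags", []):
        if len(tag) >= 2 and tag[0] == "t" and tag[1] in BLOCKED_TAGS:
            return {"id": eid, "action": "reject",
                    "msg": 'Hashtag "%s" is not allowed' % tag[1]}
    if len(event.get("content", "")) > MAX_CONTENT:
        return {"id": eid, "action": "reject",
                "msg": "Content too long (max 1000 characters)"}
    return {"id": eid, "action": "accept", "msg": ""}

print(decide({"id": "x", "kind": 1, "content": "spam test"})["action"])  # reject
```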
## Running Tests
### Quick Test (Recommended)
```bash
./test-sprocket-complete.sh
```
This script will:
1. Set up the test environment
2. Start the relay with sprocket enabled
3. Run all test cases
4. Clean up automatically
### Manual Testing
```bash
# Start relay manually with sprocket enabled
export ORLY_SPROCKET_ENABLED=true
go run . test
# In another terminal, run manual tests
./test-sprocket-manual.sh
```
### Integration Tests
```bash
# Run Go integration tests
go test -v -run TestSprocketIntegration ./test-sprocket-integration.go
```
## Prerequisites
- **Python 3**: Required for the Python sprocket script
- **jq**: Required for JSON processing in bash scripts
- **websocat**: Required for WebSocket testing
```bash
cargo install websocat
```
- **Go dependencies**: gorilla/websocket for integration tests
```bash
go get github.com/gorilla/websocket
```
## Test Cases
### 1. Normal Event (Accept)
```json
{
"id": "test_normal_123",
"pubkey": "1234567890abcdef1234567890abcdef12345678",
"created_at": 1640995200,
"kind": 1,
"content": "Hello, world!",
"sig": "test_sig"
}
```
**Expected**: `["OK","test_normal_123",true]`
### 2. Spam Content (Reject)
```json
{
"id": "test_spam_456",
"pubkey": "1234567890abcdef1234567890abcdef12345678",
"created_at": 1640995200,
"kind": 1,
"content": "This is spam content",
"sig": "test_sig"
}
```
**Expected**: `["OK","test_spam_456",false,"error: Content contains spam"]`
### 3. Test Kind (Shadow Reject)
```json
{
"id": "test_kind_789",
"pubkey": "1234567890abcdef1234567890abcdef12345678",
"created_at": 1640995200,
"kind": 9999,
"content": "Test message",
"sig": "test_sig"
}
```
**Expected**: `["OK","test_kind_789",true]` (but event not processed)
### 4. Blocked Hashtag (Reject)
```json
{
"id": "test_hashtag_101",
"pubkey": "1234567890abcdef1234567890abcdef12345678",
"created_at": 1640995200,
"kind": 1,
"content": "Message with hashtag",
"tags": [["t", "blocked"]],
"sig": "test_sig"
}
```
**Expected**: `["OK","test_hashtag_101",false,"error: Hashtag \"blocked\" is not allowed"]`
### 5. Too Long Content (Reject)
```json
{
"id": "test_long_202",
"pubkey": "1234567890abcdef1234567890abcdef12345678",
"created_at": 1640995200,
"kind": 1,
"content": "a... (1001 characters)",
"sig": "test_sig"
}
```
**Expected**: `["OK","test_long_202",false,"error: Content too long (max 1000 characters)"]`
## Sprocket Script Protocol
### Input Format
Events are sent to the sprocket script as JSON objects via stdin, one per line.
### Output Format
The sprocket script must respond with JSONL (JSON Lines) format:
```json
{"id": "event_id", "action": "accept", "msg": ""}
{"id": "event_id", "action": "reject", "msg": "reason for rejection"}
{"id": "event_id", "action": "shadowReject", "msg": ""}
```
### Actions
- **`accept`**: Continue with normal event processing
- **`reject`**: Return OK false to client with message
- **`shadowReject`**: Return OK true to client but abort processing
## Configuration
To enable sprocket in the relay:
```bash
export ORLY_SPROCKET_ENABLED=true
export ORLY_APP_NAME="ORLY"
```
The sprocket script should be placed at:
`~/.config/{ORLY_APP_NAME}/sprocket.sh`
## Troubleshooting
### Common Issues
1. **Sprocket script not found**
- Ensure the script exists at the correct path
- Check file permissions (must be executable)
2. **Python script errors**
- Verify Python 3 is installed
- Check script syntax with `python3 -m py_compile test-sprocket.py`
3. **WebSocket connection failed**
- Ensure relay is running on the correct port
- Check firewall settings
4. **Test failures**
- Check relay logs for sprocket errors
- Verify sprocket script is responding correctly
### Debug Mode
Enable debug logging:
```bash
export ORLY_LOG_LEVEL=debug
```
### Manual Sprocket Testing
Test the sprocket script directly:
```bash
echo '{"id":"test","kind":1,"content":"spam test"}' | python3 test-sprocket.py
```
Expected output:
```json
{"id": "test", "action": "reject", "msg": "Content contains spam"}
```
## Contributing
When adding new test cases:
1. Add the test case to `test-sprocket.py`
2. Add corresponding test in `test-sprocket-complete.sh`
3. Update this README with the new test case
4. Ensure all tests pass before submitting
## License
This test suite is part of the ORLY relay project and follows the same license.


@@ -0,0 +1,50 @@
#!/bin/bash
# Sprocket Integration Test Runner
# This script sets up and runs the sprocket integration test
set -e
echo "🧪 Running Sprocket Integration Test"
echo "===================================="
# Check if Python 3 is available
if ! command -v python3 &> /dev/null; then
echo "❌ Python 3 is required but not installed"
exit 1
fi
# Check if jq is available (for the bash sprocket script)
if ! command -v jq &> /dev/null; then
echo "❌ jq is required but not installed"
exit 1
fi
# Check if gorilla/websocket is available
echo "📦 Installing test dependencies..."
go mod tidy
go get github.com/gorilla/websocket
# Create test configuration directory
TEST_CONFIG_DIR="$HOME/.config/ORLY_TEST"
mkdir -p "$TEST_CONFIG_DIR"
# Copy the Python sprocket script to the test directory
cp test-sprocket.py "$TEST_CONFIG_DIR/sprocket.py"
# Create a simple bash wrapper for the Python script
cat > "$TEST_CONFIG_DIR/sprocket.sh" << 'EOF'
#!/bin/bash
python3 "$(dirname "$0")/sprocket.py"
EOF
chmod +x "$TEST_CONFIG_DIR/sprocket.sh"
echo "🔧 Test setup complete"
echo "📁 Sprocket script location: $TEST_CONFIG_DIR/sprocket.sh"
# Run the integration test
echo "🚀 Starting integration test..."
go test -v -run TestSprocketIntegration ./test-sprocket-integration.go
echo "✅ Sprocket integration test completed successfully!"


@@ -0,0 +1,209 @@
#!/bin/bash
# Complete Sprocket Test Suite
# This script starts the relay with sprocket enabled and runs tests
set -e
echo "🧪 Complete Sprocket Test Suite"
echo "=============================="
# Configuration
RELAY_PORT="3334"
TEST_CONFIG_DIR="$HOME/.config/ORLY_TEST"
# Clean up any existing test processes
echo "🧹 Cleaning up existing processes..."
pkill -f "ORLY_TEST" || true
sleep 2
# Create test configuration directory
echo "📁 Setting up test environment..."
mkdir -p "$TEST_CONFIG_DIR"
# Copy the Python sprocket script
cp test-sprocket.py "$TEST_CONFIG_DIR/sprocket.py"
# Create bash wrapper for the Python script
cat > "$TEST_CONFIG_DIR/sprocket.sh" << 'EOF'
#!/bin/bash
python3 "$(dirname "$0")/sprocket.py"
EOF
chmod +x "$TEST_CONFIG_DIR/sprocket.sh"
echo "✅ Sprocket script created at: $TEST_CONFIG_DIR/sprocket.sh"
# Start the relay with sprocket enabled
echo "🚀 Starting relay with sprocket enabled..."
export ORLY_APP_NAME="ORLY_TEST"
export ORLY_DATA_DIR="/tmp/orly_test_data"
export ORLY_LISTEN="127.0.0.1"
export ORLY_PORT="$RELAY_PORT"
export ORLY_LOG_LEVEL="info"
export ORLY_SPROCKET_ENABLED="true"
export ORLY_ADMINS="npub1test1234567890abcdefghijklmnopqrstuvwxyz1234567890"
export ORLY_OWNERS="npub1test1234567890abcdefghijklmnopqrstuvwxyz1234567890"
# Clean up test data directory
rm -rf "$ORLY_DATA_DIR"
mkdir -p "$ORLY_DATA_DIR"
# Start relay in background
echo "Starting relay on port $RELAY_PORT..."
go run . test > /tmp/orly_test.log 2>&1 &
RELAY_PID=$!
# Wait for relay to start
echo "⏳ Waiting for relay to start..."
sleep 5
# Check if relay is running
if ! kill -0 $RELAY_PID 2>/dev/null; then
echo "❌ Relay failed to start"
echo "Log output:"
cat /tmp/orly_test.log
exit 1
fi
echo "✅ Relay started successfully (PID: $RELAY_PID)"
# Function to cleanup
cleanup() {
echo "🧹 Cleaning up..."
kill $RELAY_PID 2>/dev/null || true
sleep 2
pkill -f "ORLY_TEST" || true
rm -rf "$ORLY_DATA_DIR"
echo "✅ Cleanup complete"
}
# Set trap for cleanup
trap cleanup EXIT
# Test sprocket functionality
echo "🧪 Testing sprocket functionality..."
# Check if websocat is available
if ! command -v websocat &> /dev/null; then
echo "❌ websocat is required for testing"
echo "Install it with: cargo install websocat"
echo "Or run: go install github.com/gorilla/websocket/examples/echo@latest"
exit 1
fi
# Test 1: Normal event (should be accepted)
echo "📤 Test 1: Normal event (should be accepted)"
normal_event='{
"id": "test_normal_123",
"pubkey": "1234567890abcdef1234567890abcdef12345678",
"created_at": '$(date +%s)',
"kind": 1,
"content": "Hello, world! This is a normal message.",
"sig": "test_sig_normal"
}'
normal_message="[\"EVENT\",$normal_event]"
normal_response=$(echo "$normal_message" | websocat "ws://127.0.0.1:$RELAY_PORT" --text)
echo "Response: $normal_response"
if echo "$normal_response" | grep -q '"OK","test_normal_123",true'; then
echo "✅ Test 1 PASSED: Normal event accepted"
else
echo "❌ Test 1 FAILED: Normal event not accepted"
fi
# Test 2: Spam content (should be rejected)
echo "📤 Test 2: Spam content (should be rejected)"
spam_event='{
"id": "test_spam_456",
"pubkey": "1234567890abcdef1234567890abcdef12345678",
"created_at": '$(date +%s)',
"kind": 1,
"content": "This message contains spam content",
"sig": "test_sig_spam"
}'
spam_message="[\"EVENT\",$spam_event]"
spam_response=$(echo "$spam_message" | websocat "ws://127.0.0.1:$RELAY_PORT" --text)
echo "Response: $spam_response"
if echo "$spam_response" | grep -q '"OK","test_spam_456",false'; then
echo "✅ Test 2 PASSED: Spam content rejected"
else
echo "❌ Test 2 FAILED: Spam content not rejected"
fi
# Test 3: Test kind 9999 (should be shadow rejected)
echo "📤 Test 3: Test kind 9999 (should be shadow rejected)"
kind_event='{
"id": "test_kind_789",
"pubkey": "1234567890abcdef1234567890abcdef12345678",
"created_at": '$(date +%s)',
"kind": 9999,
"content": "Test message with special kind",
"sig": "test_sig_kind"
}'
kind_message="[\"EVENT\",$kind_event]"
kind_response=$(echo "$kind_message" | websocat "ws://127.0.0.1:$RELAY_PORT" --text)
echo "Response: $kind_response"
if echo "$kind_response" | grep -q '"OK","test_kind_789",true'; then
echo "✅ Test 3 PASSED: Test kind shadow rejected (OK=true but not processed)"
else
echo "❌ Test 3 FAILED: Test kind not shadow rejected"
fi
# Test 4: Blocked hashtag (should be rejected)
echo "📤 Test 4: Blocked hashtag (should be rejected)"
hashtag_event='{
"id": "test_hashtag_101",
"pubkey": "1234567890abcdef1234567890abcdef12345678",
"created_at": '$(date +%s)',
"kind": 1,
"content": "Message with blocked hashtag",
"tags": [["t", "blocked"]],
"sig": "test_sig_hashtag"
}'
hashtag_message="[\"EVENT\",$hashtag_event]"
hashtag_response=$(echo "$hashtag_message" | websocat "ws://127.0.0.1:$RELAY_PORT" --text)
echo "Response: $hashtag_response"
if echo "$hashtag_response" | grep -q '"OK","test_hashtag_101",false'; then
echo "✅ Test 4 PASSED: Blocked hashtag rejected"
else
echo "❌ Test 4 FAILED: Blocked hashtag not rejected"
fi
# Test 5: Too long content (should be rejected)
echo "📤 Test 5: Too long content (should be rejected)"
long_content=$(printf 'a%.0s' {1..1001})
long_event="{
\"id\": \"test_long_202\",
\"pubkey\": \"1234567890abcdef1234567890abcdef12345678\",
\"created_at\": $(date +%s),
\"kind\": 1,
\"content\": \"$long_content\",
\"sig\": \"test_sig_long\"
}"
long_message="[\"EVENT\",$long_event]"
long_response=$(echo "$long_message" | websocat "ws://127.0.0.1:$RELAY_PORT" --text)
echo "Response: $long_response"
if echo "$long_response" | grep -q '"OK","test_long_202",false'; then
echo "✅ Test 5 PASSED: Too long content rejected"
else
echo "❌ Test 5 FAILED: Too long content not rejected"
fi
echo ""
echo "🎉 Sprocket test suite completed!"
echo "📊 Check the results above to verify sprocket functionality"
echo ""
echo "💡 To run individual tests, use:"
echo " ./test-sprocket-manual.sh"
echo ""
echo "📝 Relay logs are available at: /tmp/orly_test.log"


@@ -0,0 +1,143 @@
#!/bin/bash
# Sprocket Demo Test
# This script demonstrates the complete sprocket functionality
set -e
echo "🧪 Sprocket Demo Test"
echo "===================="
# Configuration
TEST_CONFIG_DIR="$HOME/.config/ORLY_TEST"
# Create test configuration directory
echo "📁 Setting up test environment..."
mkdir -p "$TEST_CONFIG_DIR"
# Copy the Python sprocket script
cp test-sprocket.py "$TEST_CONFIG_DIR/sprocket.py"
# Create bash wrapper for the Python script
cat > "$TEST_CONFIG_DIR/sprocket.sh" << 'EOF'
#!/bin/bash
python3 "$(dirname "$0")/sprocket.py"
EOF
chmod +x "$TEST_CONFIG_DIR/sprocket.sh"
echo "✅ Sprocket script created at: $TEST_CONFIG_DIR/sprocket.sh"
# Test 1: Direct sprocket script testing
echo "🧪 Test 1: Direct sprocket script testing"
echo "========================================"
current_time=$(date +%s)
# Test normal event
echo "📤 Testing normal event..."
normal_event="{\"id\":\"0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef\",\"pubkey\":\"1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef\",\"created_at\":$current_time,\"kind\":1,\"content\":\"Hello, world!\",\"sig\":\"test_sig\"}"
echo "$normal_event" | python3 "$TEST_CONFIG_DIR/sprocket.py"
# Test spam content
echo "📤 Testing spam content..."
spam_event="{\"id\":\"1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef\",\"pubkey\":\"1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef\",\"created_at\":$current_time,\"kind\":1,\"content\":\"This is spam content\",\"sig\":\"test_sig\"}"
echo "$spam_event" | python3 "$TEST_CONFIG_DIR/sprocket.py"
# Test special kind
echo "📤 Testing special kind..."
kind_event="{\"id\":\"2345678901bcdef01234567890abcdef01234567890abcdef01234567890abcdef\",\"pubkey\":\"1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef\",\"created_at\":$current_time,\"kind\":9999,\"content\":\"Test message\",\"sig\":\"test_sig\"}"
echo "$kind_event" | python3 "$TEST_CONFIG_DIR/sprocket.py"
# Test blocked hashtag
echo "📤 Testing blocked hashtag..."
hashtag_event="{\"id\":\"3456789012cdef0123456789012cdef0123456789012cdef0123456789012cdef\",\"pubkey\":\"1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef\",\"created_at\":$current_time,\"kind\":1,\"content\":\"Message with hashtag\",\"tags\":[[\"t\",\"blocked\"]],\"sig\":\"test_sig\"}"
echo "$hashtag_event" | python3 "$TEST_CONFIG_DIR/sprocket.py"
echo ""
echo "✅ Direct sprocket testing completed!"
echo ""
# Test 2: Bash wrapper testing
echo "🧪 Test 2: Bash wrapper testing"
echo "==============================="
# Test normal event through wrapper
echo "📤 Testing normal event through wrapper..."
echo "$normal_event" | "$TEST_CONFIG_DIR/sprocket.sh"
# Test spam content through wrapper
echo "📤 Testing spam content through wrapper..."
echo "$spam_event" | "$TEST_CONFIG_DIR/sprocket.sh"
echo ""
echo "✅ Bash wrapper testing completed!"
echo ""
# Test 3: Sprocket criteria demonstration
echo "🧪 Test 3: Sprocket criteria demonstration"
echo "========================================"
echo "The sprocket script implements the following filtering criteria:"
echo ""
echo "1. ✅ Spam Content Detection:"
echo " - Rejects events containing 'spam' in content"
echo " - Example: 'This is spam content' → REJECT"
echo ""
echo "2. ✅ Special Kind Filtering:"
echo " - Shadow rejects events with kind 9999"
echo " - Example: kind 9999 → SHADOW REJECT"
echo ""
echo "3. ✅ Blocked Hashtag Filtering:"
echo " - Rejects events with hashtags: 'blocked', 'rejected', 'test-block'"
echo " - Example: #blocked → REJECT"
echo ""
echo "4. ✅ Blocked Pubkey Filtering:"
echo " - Shadow rejects events from pubkeys starting with '00000000', '11111111', '22222222'"
echo ""
echo "5. ✅ Content Length Validation:"
echo " - Rejects events with content longer than 1000 characters"
echo ""
echo "6. ✅ Timestamp Validation:"
echo " - Rejects events that are too old (>1 hour) or too far in the future (>5 minutes)"
echo ""
# Test 4: Show sprocket protocol
echo "🧪 Test 4: Sprocket Protocol Demonstration"
echo "=========================================="
echo "Input Format: JSON event via stdin"
echo "Output Format: JSONL response via stdout"
echo ""
echo "Response Actions:"
echo "- accept: Continue with normal event processing"
echo "- reject: Return OK false to client with message"
echo "- shadowReject: Return OK true to client but abort processing"
echo ""
# Test 5: Integration readiness
echo "🧪 Test 5: Integration Readiness"
echo "==============================="
echo "✅ Sprocket script: Working correctly"
echo "✅ Bash wrapper: Working correctly"
echo "✅ Event processing: All criteria implemented"
echo "✅ JSONL protocol: Properly formatted responses"
echo "✅ Error handling: Graceful error responses"
echo ""
echo "🎉 Sprocket system is ready for relay integration!"
echo ""
echo "💡 To test with the relay:"
echo " 1. Set ORLY_SPROCKET_ENABLED=true"
echo " 2. Start the relay"
echo " 3. Send events via WebSocket"
echo " 4. Observe sprocket responses in relay logs"
echo ""
echo "📝 Test files created:"
echo " - $TEST_CONFIG_DIR/sprocket.py (Python sprocket script)"
echo " - $TEST_CONFIG_DIR/sprocket.sh (Bash wrapper)"
echo " - test-sprocket.py (Source Python script)"
echo " - test-sprocket-example.sh (Bash example)"
echo " - test-sprocket-simple.sh (Simple test)"
echo " - test-sprocket-working.sh (WebSocket test)"
echo " - SPROCKET_TEST_README.md (Documentation)"


@@ -0,0 +1,28 @@
#!/bin/bash
# Example sprocket script that demonstrates event processing
# This script reads JSON events from stdin and outputs JSONL responses
# Read events from stdin line by line
while IFS= read -r line; do
# Parse the event JSON
event_id=$(echo "$line" | jq -r '.id')
event_kind=$(echo "$line" | jq -r '.kind')
event_content=$(echo "$line" | jq -r '.content')
# Example policy: reject events with certain content
if [[ "$event_content" == *"spam"* ]]; then
echo "{\"id\":\"$event_id\",\"action\":\"reject\",\"msg\":\"content contains spam\"}"
continue
fi
# Example policy: shadow reject events from certain kinds
if [[ "$event_kind" == "9999" ]]; then
echo "{\"id\":\"$event_id\",\"action\":\"shadowReject\",\"msg\":\"\"}"
continue
fi
# Default: accept the event
echo "{\"id\":\"$event_id\",\"action\":\"accept\",\"msg\":\"\"}"
done


@@ -0,0 +1,184 @@
#!/bin/bash
# Final Sprocket Integration Test
# This script tests the complete sprocket integration with the relay
set -e
echo "🧪 Final Sprocket Integration Test"
echo "================================="
# Configuration
RELAY_PORT="3334"
TEST_CONFIG_DIR="$HOME/.config/ORLY_TEST"
# Clean up any existing test processes
echo "🧹 Cleaning up existing processes..."
pkill -f "ORLY_TEST" || true
sleep 2
# Create test configuration directory
echo "📁 Setting up test environment..."
mkdir -p "$TEST_CONFIG_DIR"
# Copy the Python sprocket script
cp test-sprocket.py "$TEST_CONFIG_DIR/sprocket.py"
# Create bash wrapper for the Python script
cat > "$TEST_CONFIG_DIR/sprocket.sh" << 'EOF'
#!/bin/bash
python3 "$(dirname "$0")/sprocket.py"
EOF
chmod +x "$TEST_CONFIG_DIR/sprocket.sh"
echo "✅ Sprocket script created at: $TEST_CONFIG_DIR/sprocket.sh"
# Set environment variables for the relay
export ORLY_APP_NAME="ORLY_TEST"
export ORLY_DATA_DIR="/tmp/orly_test_data"
export ORLY_LISTEN="127.0.0.1"
export ORLY_PORT="$RELAY_PORT"
export ORLY_LOG_LEVEL="info"
export ORLY_SPROCKET_ENABLED="true"
export ORLY_ADMINS=""
export ORLY_OWNERS=""
# Clean up test data directory
rm -rf "$ORLY_DATA_DIR"
mkdir -p "$ORLY_DATA_DIR"
# Function to cleanup
cleanup() {
echo "🧹 Cleaning up..."
pkill -f "ORLY_TEST" || true
sleep 2
rm -rf "$ORLY_DATA_DIR"
echo "✅ Cleanup complete"
}
# Set trap for cleanup
trap cleanup EXIT
# Start the relay
echo "🚀 Starting relay with sprocket enabled..."
go run . test > /tmp/orly_test.log 2>&1 &
RELAY_PID=$!
# Wait for relay to start
echo "⏳ Waiting for relay to start..."
sleep 5
# Check if relay is running
if ! kill -0 $RELAY_PID 2>/dev/null; then
echo "❌ Relay failed to start"
echo "Log output:"
cat /tmp/orly_test.log
exit 1
fi
echo "✅ Relay started successfully (PID: $RELAY_PID)"
# Check if websocat is available
if ! command -v websocat &> /dev/null; then
echo "❌ websocat is required for testing"
echo "Install it with: cargo install websocat"
echo "Or use: go install github.com/gorilla/websocket/examples/echo@latest"
exit 1
fi
# Test sprocket functionality
echo "🧪 Testing sprocket functionality..."
# Test 1: Normal event (should be accepted)
echo "📤 Test 1: Normal event (should be accepted)"
current_time=$(date +%s)
normal_event="{
\"id\": \"test_normal_123\",
\"pubkey\": \"1234567890abcdef1234567890abcdef12345678\",
\"created_at\": $current_time,
\"kind\": 1,
\"content\": \"Hello, world! This is a normal message.\",
\"sig\": \"test_sig_normal\"
}"
normal_message="[\"EVENT\",$normal_event]"
normal_response=$(echo "$normal_message" | websocat "ws://127.0.0.1:$RELAY_PORT" --text)
echo "Response: $normal_response"
if echo "$normal_response" | grep -q '"OK","test_normal_123",true'; then
echo "✅ Test 1 PASSED: Normal event accepted"
else
echo "❌ Test 1 FAILED: Normal event not accepted"
fi
# Test 2: Spam content (should be rejected)
echo "📤 Test 2: Spam content (should be rejected)"
spam_event="{
\"id\": \"test_spam_456\",
\"pubkey\": \"1234567890abcdef1234567890abcdef12345678\",
\"created_at\": $current_time,
\"kind\": 1,
\"content\": \"This message contains spam content\",
\"sig\": \"test_sig_spam\"
}"
spam_message="[\"EVENT\",$spam_event]"
spam_response=$(echo "$spam_message" | websocat "ws://127.0.0.1:$RELAY_PORT" --text)
echo "Response: $spam_response"
if echo "$spam_response" | grep -q '"OK","test_spam_456",false'; then
echo "✅ Test 2 PASSED: Spam content rejected"
else
echo "❌ Test 2 FAILED: Spam content not rejected"
fi
# Test 3: Test kind 9999 (should be shadow rejected)
echo "📤 Test 3: Test kind 9999 (should be shadow rejected)"
kind_event="{
\"id\": \"test_kind_789\",
\"pubkey\": \"1234567890abcdef1234567890abcdef12345678\",
\"created_at\": $current_time,
\"kind\": 9999,
\"content\": \"Test message with special kind\",
\"sig\": \"test_sig_kind\"
}"
kind_message="[\"EVENT\",$kind_event]"
kind_response=$(echo "$kind_message" | websocat "ws://127.0.0.1:$RELAY_PORT" --text)
echo "Response: $kind_response"
if echo "$kind_response" | grep -q '"OK","test_kind_789",true'; then
echo "✅ Test 3 PASSED: Test kind shadow rejected (OK=true but not processed)"
else
echo "❌ Test 3 FAILED: Test kind not shadow rejected"
fi
# Test 4: Blocked hashtag (should be rejected)
echo "📤 Test 4: Blocked hashtag (should be rejected)"
hashtag_event="{
\"id\": \"test_hashtag_101\",
\"pubkey\": \"1234567890abcdef1234567890abcdef12345678\",
\"created_at\": $current_time,
\"kind\": 1,
\"content\": \"Message with blocked hashtag\",
\"tags\": [[\"t\", \"blocked\"]],
\"sig\": \"test_sig_hashtag\"
}"
hashtag_message="[\"EVENT\",$hashtag_event]"
hashtag_response=$(echo "$hashtag_message" | websocat "ws://127.0.0.1:$RELAY_PORT" --text)
echo "Response: $hashtag_response"
if echo "$hashtag_response" | grep -q '"OK","test_hashtag_101",false'; then
echo "✅ Test 4 PASSED: Blocked hashtag rejected"
else
echo "❌ Test 4 FAILED: Blocked hashtag not rejected"
fi
echo ""
echo "🎉 Sprocket integration test completed!"
echo "📊 Check the results above to verify sprocket functionality"
echo ""
echo "📝 Relay logs are available at: /tmp/orly_test.log"
echo "💡 To view logs: cat /tmp/orly_test.log"


@@ -0,0 +1,115 @@
#!/bin/bash
# Manual Sprocket Test Script
# This script demonstrates sprocket functionality by sending test events
set -e
echo "🧪 Manual Sprocket Test"
echo "======================"
# Configuration
RELAY_HOST="127.0.0.1"
RELAY_PORT="3334"
RELAY_URL="ws://$RELAY_HOST:$RELAY_PORT"
# Check if websocat is available
if ! command -v websocat &> /dev/null; then
echo "❌ websocat is required for this test"
echo "Install it with: cargo install websocat"
exit 1
fi
# Function to send an event and get response
send_event() {
local event_json="$1"
local description="$2"
echo "📤 Testing: $description"
echo "Event: $event_json"
# Send EVENT message
local message="[\"EVENT\",$event_json]"
echo "Sending: $message"
# Send and receive response
local response=$(echo "$message" | websocat "$RELAY_URL" --text)
echo "Response: $response"
echo "---"
}
# Test events
echo "🚀 Starting manual sprocket test..."
echo "Make sure the relay is running with sprocket enabled!"
echo ""
# Test 1: Normal event (should be accepted)
send_event '{
"id": "test_normal_123",
"pubkey": "1234567890abcdef1234567890abcdef12345678",
"created_at": '$(date +%s)',
"kind": 1,
"content": "Hello, world! This is a normal message.",
"sig": "test_sig_normal"
}' "Normal event (should be accepted)"
# Test 2: Spam content (should be rejected)
send_event '{
"id": "test_spam_456",
"pubkey": "1234567890abcdef1234567890abcdef12345678",
"created_at": '$(date +%s)',
"kind": 1,
"content": "This message contains spam content",
"sig": "test_sig_spam"
}' "Spam content (should be rejected)"
# Test 3: Test kind 9999 (should be shadow rejected)
send_event '{
"id": "test_kind_789",
"pubkey": "1234567890abcdef1234567890abcdef12345678",
"created_at": '$(date +%s)',
"kind": 9999,
"content": "Test message with special kind",
"sig": "test_sig_kind"
}' "Test kind 9999 (should be shadow rejected)"
# Test 4: Blocked hashtag (should be rejected)
send_event '{
"id": "test_hashtag_101",
"pubkey": "1234567890abcdef1234567890abcdef12345678",
"created_at": '$(date +%s)',
"kind": 1,
"content": "Message with blocked hashtag",
"tags": [["t", "blocked"]],
"sig": "test_sig_hashtag"
}' "Blocked hashtag (should be rejected)"
# Test 5: Too long content (should be rejected)
send_event '{
"id": "test_long_202",
"pubkey": "1234567890abcdef1234567890abcdef12345678",
"created_at": '$(date +%s)',
"kind": 1,
"content": "'$(printf 'a%.0s' {1..1001})'",
"sig": "test_sig_long"
}' "Too long content (should be rejected)"
# Test 6: Old timestamp (should be rejected)
send_event '{
"id": "test_old_303",
"pubkey": "1234567890abcdef1234567890abcdef12345678",
"created_at": '$(($(date +%s) - 7200))',
"kind": 1,
"content": "Message with old timestamp",
"sig": "test_sig_old"
}' "Old timestamp (should be rejected)"
echo "✅ Manual sprocket test completed!"
echo ""
echo "Expected results:"
echo "- Normal event: OK, true"
echo "- Spam content: OK, false, 'Content contains spam'"
echo "- Test kind 9999: OK, true (but shadow rejected)"
echo "- Blocked hashtag: OK, false, 'Hashtag blocked is not allowed'"
echo "- Too long content: OK, false, 'Content too long'"
echo "- Old timestamp: OK, false, 'Event timestamp too old'"


@@ -0,0 +1,76 @@
#!/bin/bash
# Simple Sprocket Test
# This script demonstrates sprocket functionality
set -e
echo "🧪 Simple Sprocket Test"
echo "======================"
# Configuration
RELAY_PORT="3334"
TEST_CONFIG_DIR="$HOME/.config/ORLY_TEST"
# Clean up any existing test processes
echo "🧹 Cleaning up existing processes..."
pkill -f "ORLY_TEST" || true
sleep 2
# Create test configuration directory
echo "📁 Setting up test environment..."
mkdir -p "$TEST_CONFIG_DIR"
# Copy the Python sprocket script
cp test-sprocket.py "$TEST_CONFIG_DIR/sprocket.py"
# Create bash wrapper for the Python script
cat > "$TEST_CONFIG_DIR/sprocket.sh" << 'EOF'
#!/bin/bash
python3 "$(dirname "$0")/sprocket.py"
EOF
chmod +x "$TEST_CONFIG_DIR/sprocket.sh"
echo "✅ Sprocket script created at: $TEST_CONFIG_DIR/sprocket.sh"
# Test the sprocket script directly first
echo "🧪 Testing sprocket script directly..."
# Test 1: Normal event
echo "📤 Test 1: Normal event"
current_time=$(date +%s)
normal_event="{\"id\":\"test_normal_123\",\"pubkey\":\"1234567890abcdef1234567890abcdef12345678\",\"created_at\":$current_time,\"kind\":1,\"content\":\"Hello, world!\",\"sig\":\"test_sig\"}"
echo "$normal_event" | python3 "$TEST_CONFIG_DIR/sprocket.py"
# Test 2: Spam content
echo "📤 Test 2: Spam content"
spam_event="{\"id\":\"test_spam_456\",\"pubkey\":\"1234567890abcdef1234567890abcdef12345678\",\"created_at\":$current_time,\"kind\":1,\"content\":\"This is spam content\",\"sig\":\"test_sig\"}"
echo "$spam_event" | python3 "$TEST_CONFIG_DIR/sprocket.py"
# Test 3: Test kind 9999
echo "📤 Test 3: Test kind 9999"
kind_event="{\"id\":\"test_kind_789\",\"pubkey\":\"1234567890abcdef1234567890abcdef12345678\",\"created_at\":$current_time,\"kind\":9999,\"content\":\"Test message\",\"sig\":\"test_sig\"}"
echo "$kind_event" | python3 "$TEST_CONFIG_DIR/sprocket.py"
# Test 4: Blocked hashtag
echo "📤 Test 4: Blocked hashtag"
hashtag_event="{\"id\":\"test_hashtag_101\",\"pubkey\":\"1234567890abcdef1234567890abcdef12345678\",\"created_at\":$current_time,\"kind\":1,\"content\":\"Message with hashtag\",\"tags\":[[\"t\",\"blocked\"]],\"sig\":\"test_sig\"}"
echo "$hashtag_event" | python3 "$TEST_CONFIG_DIR/sprocket.py"
echo ""
echo "✅ Direct sprocket script tests completed!"
echo ""
echo "Expected results:"
echo "1. Normal event: {\"id\":\"test_normal_123\",\"action\":\"accept\",\"msg\":\"\"}"
echo "2. Spam content: {\"id\":\"test_spam_456\",\"action\":\"reject\",\"msg\":\"Content contains spam\"}"
echo "3. Test kind 9999: {\"id\":\"test_kind_789\",\"action\":\"shadowReject\",\"msg\":\"\"}"
echo "4. Blocked hashtag: {\"id\":\"test_hashtag_101\",\"action\":\"reject\",\"msg\":\"Hashtag \\\"blocked\\\" is not allowed\"}"
echo ""
echo "💡 To test with the full relay, run:"
echo " export ORLY_SPROCKET_ENABLED=true"
echo " export ORLY_APP_NAME=ORLY_TEST"
echo " go run . test"
echo ""
echo " Then in another terminal:"
echo " ./test-sprocket-manual.sh"


@@ -0,0 +1,209 @@
#!/bin/bash
# Working Sprocket Test
# This script tests sprocket functionality with properly formatted messages
set -e
echo "🧪 Working Sprocket Test"
echo "======================="
# Configuration
RELAY_PORT="3335" # Use different port to avoid conflicts
TEST_CONFIG_DIR="$HOME/.config/ORLY_TEST"
# Clean up any existing test processes
echo "🧹 Cleaning up existing processes..."
pkill -f "ORLY_TEST" || true
sleep 2
# Create test configuration directory
echo "📁 Setting up test environment..."
mkdir -p "$TEST_CONFIG_DIR"
# Copy the Python sprocket script
cp test-sprocket.py "$TEST_CONFIG_DIR/sprocket.py"
# Create bash wrapper for the Python script
cat > "$TEST_CONFIG_DIR/sprocket.sh" << 'EOF'
#!/bin/bash
python3 "$(dirname "$0")/sprocket.py"
EOF
chmod +x "$TEST_CONFIG_DIR/sprocket.sh"
echo "✅ Sprocket script created at: $TEST_CONFIG_DIR/sprocket.sh"
# Set environment variables for the relay
export ORLY_APP_NAME="ORLY_TEST"
export ORLY_DATA_DIR="/tmp/orly_test_data"
export ORLY_LISTEN="127.0.0.1"
export ORLY_PORT="$RELAY_PORT"
export ORLY_LOG_LEVEL="info"
export ORLY_SPROCKET_ENABLED="true"
export ORLY_ADMINS=""
export ORLY_OWNERS=""
# Clean up test data directory
rm -rf "$ORLY_DATA_DIR"
mkdir -p "$ORLY_DATA_DIR"
# Function to cleanup
cleanup() {
echo "🧹 Cleaning up..."
pkill -f "ORLY_TEST" || true
sleep 2
rm -rf "$ORLY_DATA_DIR"
echo "✅ Cleanup complete"
}
# Set trap for cleanup
trap cleanup EXIT
# Start the relay
echo "🚀 Starting relay with sprocket enabled..."
go run . test > /tmp/orly_test.log 2>&1 &
RELAY_PID=$!
# Wait for relay to start
echo "⏳ Waiting for relay to start..."
sleep 5
# Check if relay is running
if ! kill -0 $RELAY_PID 2>/dev/null; then
echo "❌ Relay failed to start"
echo "Log output:"
cat /tmp/orly_test.log
exit 1
fi
echo "✅ Relay started successfully (PID: $RELAY_PID)"
# Test sprocket functionality with a simple Python WebSocket client
echo "🧪 Testing sprocket functionality..."
# Create a simple Python WebSocket test client
cat > /tmp/test_client.py << 'EOF'
#!/usr/bin/env python3
import asyncio
import websockets
import json
import time
async def test_sprocket():
uri = "ws://127.0.0.1:3335"
try:
async with websockets.connect(uri) as websocket:
print("✅ Connected to relay")
# Test 1: Normal event (should be accepted)
print("📤 Test 1: Normal event (should be accepted)")
current_time = int(time.time())
normal_event = {
"id": "0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef",
"pubkey": "1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef",
"created_at": current_time,
"kind": 1,
"content": "Hello, world! This is a normal message.",
"sig": "0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef"
}
normal_message = ["EVENT", normal_event]
await websocket.send(json.dumps(normal_message))
response = await websocket.recv()
print(f"Response: {response}")
if '"OK","0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef",true' in response:
print("✅ Test 1 PASSED: Normal event accepted")
else:
print("❌ Test 1 FAILED: Normal event not accepted")
# Test 2: Spam content (should be rejected)
print("📤 Test 2: Spam content (should be rejected)")
spam_event = {
"id": "1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef",
"pubkey": "1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef",
"created_at": current_time,
"kind": 1,
"content": "This message contains spam content",
"sig": "1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef"
}
spam_message = ["EVENT", spam_event]
await websocket.send(json.dumps(spam_message))
response = await websocket.recv()
print(f"Response: {response}")
if '"OK","1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef",false' in response:
print("✅ Test 2 PASSED: Spam content rejected")
else:
print("❌ Test 2 FAILED: Spam content not rejected")
# Test 3: Test kind 9999 (should be shadow rejected)
print("📤 Test 3: Test kind 9999 (should be shadow rejected)")
kind_event = {
"id": "2345678901bcdef01234567890abcdef01234567890abcdef01234567890abcdef",
"pubkey": "1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef",
"created_at": current_time,
"kind": 9999,
"content": "Test message with special kind",
"sig": "2345678901bcdef01234567890abcdef01234567890abcdef01234567890abcdef2345678901bcdef01234567890abcdef01234567890abcdef01234567890abcdef"
}
kind_message = ["EVENT", kind_event]
await websocket.send(json.dumps(kind_message))
response = await websocket.recv()
print(f"Response: {response}")
if '"OK","2345678901bcdef01234567890abcdef01234567890abcdef01234567890abcdef",true' in response:
print("✅ Test 3 PASSED: Test kind shadow rejected (OK=true but not processed)")
else:
print("❌ Test 3 FAILED: Test kind not shadow rejected")
# Test 4: Blocked hashtag (should be rejected)
print("📤 Test 4: Blocked hashtag (should be rejected)")
hashtag_event = {
"id": "3456789012cdef0123456789012cdef0123456789012cdef0123456789012cdef",
"pubkey": "1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef",
"created_at": current_time,
"kind": 1,
"content": "Message with blocked hashtag",
"tags": [["t", "blocked"]],
"sig": "3456789012cdef0123456789012cdef0123456789012cdef0123456789012cdef3456789012cdef0123456789012cdef0123456789012cdef0123456789012cdef"
}
hashtag_message = ["EVENT", hashtag_event]
await websocket.send(json.dumps(hashtag_message))
response = await websocket.recv()
print(f"Response: {response}")
if '"OK","3456789012cdef0123456789012cdef0123456789012cdef0123456789012cdef",false' in response:
print("✅ Test 4 PASSED: Blocked hashtag rejected")
else:
print("❌ Test 4 FAILED: Blocked hashtag not rejected")
except Exception as e:
print(f"❌ Error: {e}")
if __name__ == "__main__":
asyncio.run(test_sprocket())
EOF
# Check if websockets is available
if ! python3 -c "import websockets" 2>/dev/null; then
echo "📦 Installing websockets library..."
pip3 install websockets
fi
# Run the test
python3 /tmp/test_client.py
echo ""
echo "🎉 Sprocket integration test completed!"
echo "📝 Relay logs are available at: /tmp/orly_test.log"
echo "💡 To view logs: cat /tmp/orly_test.log"


@@ -0,0 +1,139 @@
#!/usr/bin/env python3
"""
Test sprocket script that processes Nostr events via stdin/stdout JSONL protocol.
This script demonstrates various filtering criteria for testing purposes.
"""
import json
import sys
from datetime import datetime
def process_event(event_json):
"""
Process a single event and return the appropriate response.
Args:
event_json (dict): The parsed event JSON
Returns:
dict: Response with id, action, and msg fields
"""
event_id = event_json.get('id', '')
event_kind = event_json.get('kind', 0)
event_content = event_json.get('content', '')
event_pubkey = event_json.get('pubkey', '')
event_tags = event_json.get('tags', [])
# Test criteria 1: Reject events containing "spam" in content
if 'spam' in event_content.lower():
return {
'id': event_id,
'action': 'reject',
'msg': 'Content contains spam'
}
# Test criteria 2: Shadow reject events with kind 9999 (test kind)
if event_kind == 9999:
return {
'id': event_id,
'action': 'shadowReject',
'msg': ''
}
# Test criteria 3: Reject events with certain hashtags
for tag in event_tags:
if len(tag) >= 2 and tag[0] == 't': # hashtag
hashtag = tag[1].lower()
if hashtag in ['blocked', 'rejected', 'test-block']:
return {
'id': event_id,
'action': 'reject',
'msg': f'Hashtag "{hashtag}" is not allowed'
}
# Test criteria 4: Shadow reject events from specific pubkeys (first 8 chars)
blocked_prefixes = ['00000000', '11111111', '22222222'] # Test prefixes
pubkey_prefix = event_pubkey[:8] if len(event_pubkey) >= 8 else event_pubkey
if pubkey_prefix in blocked_prefixes:
return {
'id': event_id,
'action': 'shadowReject',
'msg': ''
}
# Test criteria 5: Reject events that are too long
if len(event_content) > 1000:
return {
'id': event_id,
'action': 'reject',
'msg': 'Content too long (max 1000 characters)'
}
# Test criteria 6: Reject events with invalid timestamps (too old or too new)
try:
event_time = event_json.get('created_at', 0)
current_time = int(datetime.now().timestamp())
# Reject events more than 1 hour old
if current_time - event_time > 3600:
return {
'id': event_id,
'action': 'reject',
'msg': 'Event timestamp too old'
}
# Reject events more than 5 minutes in the future
if event_time - current_time > 300:
return {
'id': event_id,
'action': 'reject',
'msg': 'Event timestamp too far in future'
}
except (ValueError, TypeError):
pass # Ignore timestamp errors
# Default: accept the event
return {
'id': event_id,
'action': 'accept',
'msg': ''
}
def main():
"""Main function to process events from stdin."""
try:
# Read events from stdin
for line in sys.stdin:
line = line.strip()
if not line:
continue
try:
# Parse the event JSON
event = json.loads(line)
# Process the event
response = process_event(event)
# Output the response as JSONL
print(json.dumps(response), flush=True)
except json.JSONDecodeError as e:
# Log error to stderr but continue processing
print(f"Error parsing JSON: {e}", file=sys.stderr)
continue
except Exception as e:
# Log error to stderr but continue processing
print(f"Error processing event: {e}", file=sys.stderr)
continue
except KeyboardInterrupt:
# Graceful shutdown
sys.exit(0)
except Exception as e:
print(f"Fatal error: {e}", file=sys.stderr)
sys.exit(1)
if __name__ == '__main__':
main()
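The decision flow above can be exercised without a relay. A minimal sketch of the JSONL exchange, inlining only the "spam" rule for illustration (the `decide` helper and the sample events are hypothetical, not part of the committed script):

```python
import json

# Hypothetical reduced filter: only the "spam" rule from the script above.
def decide(event):
    if 'spam' in event.get('content', '').lower():
        return {'id': event.get('id', ''), 'action': 'reject',
                'msg': 'Content contains spam'}
    return {'id': event.get('id', ''), 'action': 'accept', 'msg': ''}

# One event JSON per input line, one response JSON per output line.
for raw in ('{"id":"aaa","kind":1,"content":"hello world"}',
            '{"id":"bbb","kind":1,"content":"buy SPAM now"}'):
    print(json.dumps(decide(json.loads(raw))))
# → {"id": "aaa", "action": "accept", "msg": ""}
# → {"id": "bbb", "action": "reject", "msg": "Content contains spam"}
```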


@@ -0,0 +1,31 @@
#!/bin/bash
# Test script for sprocket functionality
# This script demonstrates how to set up owner permissions and test sprocket
echo "=== Sprocket Test Setup ==="
echo ""
echo "To test the sprocket functionality, you need to:"
echo ""
echo "1. Generate a test keypair (if you don't have one):"
echo " Use a Nostr client like Amethyst or Nostr Wallet Connect to generate an npub"
echo ""
echo "2. Set the ORLY_OWNERS environment variable:"
echo " export ORLY_OWNERS=\"npub1your-npub-here\""
echo ""
echo "3. Start the relay with owner permissions:"
echo " ORLY_OWNERS=\"npub1your-npub-here\" ./next.orly.dev"
echo ""
echo "4. Log in to the web interface using the corresponding private key"
echo "5. Navigate to the Sprocket tab to access the script editor"
echo ""
echo "Example sprocket script:"
echo "#!/bin/bash"
echo "echo \"Sprocket is running!\""
echo "while true; do"
echo " echo \"Sprocket heartbeat: \$(date)\""
echo " sleep 30"
echo "done"
echo ""
echo "The sprocket script will be stored in ~/.config/ORLY/sprocket.sh"
echo "and will be automatically started when the relay starts."
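The heartbeat example above only demonstrates that the script is alive; a sprocket that actually filters events must also speak the stdin/stdout JSONL protocol shown in the Python test script. A minimal accept-everything sketch (the `respond` helper name is illustrative):

```python
#!/usr/bin/env python3
"""Minimal sprocket sketch that accepts every event.
Assumes the relay writes one event JSON per line to stdin and reads one
{"id", "action", "msg"} response per line from stdout."""
import json
import sys


def respond(line):
    # Build an accept response for one event line; return None on bad JSON.
    try:
        event = json.loads(line)
    except json.JSONDecodeError:
        return None
    return json.dumps({'id': event.get('id', ''), 'action': 'accept', 'msg': ''})


if __name__ == '__main__':
    for line in sys.stdin:
        out = respond(line.strip())
        if out is not None:
            print(out, flush=True)
```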