Compare commits


14 Commits

Author SHA1 Message Date
f092d817c9 Update Go build flags and bump version to v0.21.4
- Modified the Go build command in the GitHub Actions workflow to include linker flags for reduced binary size.
- Updated version from v0.21.3 to v0.21.4 to reflect the latest changes.
2025-11-01 13:41:36 +00:00
c7eb532443 Update Go build configurations and bump version to v0.21.3
- Commented out unused build commands for different platforms in the GitHub Actions workflow to streamline the build process.
- Updated version from v0.21.2 to v0.21.3 to reflect recent changes.
2025-11-01 13:17:09 +00:00
e56b3f0083 Refactor event handling and policy script error management
- Removed redundant log statement in HandleEvent for cleaner output.
- Enhanced policy script handling to check for script existence before execution, improving error handling and fallback logic.
- Updated error messages to provide clearer feedback when policy scripts are missing or fail to start.
- Bumped version to v0.21.2 to reflect these changes.
2025-11-01 12:55:42 +00:00
daniyal
9064b3ab5f Fix deployment script issues (#1)
- Fix Go installation by extracting to /tmp first then moving to final destination
- Return to original directory after Go installation
- Add attempt to install secp256k1 from package manager before building from source
- Add missing automake package for autoreconf
- Fix binary build by running go build after embedded web update

Co-authored-by: mleku <me@mleku.dev>
Reviewed-on: https://git.nostrdev.com/mleku/next.orly.dev/pulls/1
Co-authored-by: daniyal <daniyal@nostrdev.com>
Co-committed-by: daniyal <daniyal@nostrdev.com>
2025-10-30 20:05:22 +00:00
3486d3d4ab added simple websocket test
- bump to v0.21.1
2025-10-30 19:32:45 +00:00
0ba555c6a8 Update version to v0.21.0 and enhance relay client functionality
- Bumped version from v0.20.6 to v0.21.0.
- Added a `complete` map in the Client struct to track subscription completion status.
- Improved event handling in the read loop to manage EOSE messages and subscription closures.
- Introduced new tests for filtering, event ordering, and subscription behaviors, enhancing test coverage and reliability.
2025-10-30 19:26:42 +00:00
54f65d8740 Enhance relay testing and event handling
- Updated TestRelay to include a wait mechanism for relay readiness, improving test reliability.
- Refactored startTestRelay to return the assigned port, allowing dynamic port assignment.
- Added timestamp validation in HandleEvent to reject events with timestamps more than one hour in the future.
- Introduced channels for handling OK and COUNT messages in the Client struct, improving message processing.
- Updated tests to reflect changes in event timestamp handling and increased wait times for event processing.
- Bumped version to v0.20.6 to reflect these enhancements.
2025-10-30 19:12:11 +00:00
2ff8b47410 bump to v0.20.5
2025-10-30 18:37:30 +00:00
ba2d35012c Enhance WebSocket connection management and error handling
- Set initial read deadline for connections to prevent premature timeouts on idle connections.
- Improved pong and ping handlers to extend read deadlines and handle timeout errors more effectively.
- Refined error logging for connection issues, distinguishing between timeouts and connection errors to enhance debugging.
- Updated subscriber delivery logic to handle timeouts gracefully, allowing for potential recovery without immediate disconnection.
2025-10-30 18:32:03 +00:00
b70f03bce0 Refactor policy script handling and improve fallback logic
- Renamed test functions for clarity, changing "NotRunning" to "Disabled" to better reflect the policy state.
- Updated policy checks to ensure that if the policy is disabled, it falls back to the default policy immediately.
- Enhanced error handling in the policy manager to ensure proper startup and running state management.
- Introduced a new method to ensure the policy is running, with timeout handling for startup completion.
- Bumped version to v0.20.3 to reflect these changes.
2025-10-30 18:22:56 +00:00
8954846864 Add relay testing framework and utilities
- Introduced a new `relaytester` package to facilitate testing of relay functionalities.
- Implemented a `TestSuite` structure to manage and execute various test cases against the relay.
- Added multiple test cases for event publishing, retrieval, and validation, ensuring comprehensive coverage of relay behavior.
- Created utility functions for generating key pairs and events, enhancing test reliability and maintainability.
- Established a WebSocket client for interacting with the relay during tests, including subscription and message handling.
- Included JSON formatting for test results to improve output readability.
- This commit lays the groundwork for robust integration testing of relay features.
2025-10-30 18:14:22 +00:00
5e6c0b80aa Add Relay functionality for managing startup and shutdown processes
- Introduced a new package `run` with a `Relay` struct to manage the lifecycle of a relay instance.
- Implemented `Start` and `Stop` methods for initializing and gracefully shutting down the relay, including options for log capturing and data directory cleanup.
- Added methods to retrieve captured stdout and stderr logs.
- Enhanced configuration handling for data directory and logging based on user-defined options.
2025-10-30 17:57:57 +00:00
80ab3caa5f Implement policy-based event filtering and add integration tests
- Enhanced the HandleReq function to incorporate policy checks for privileged events, ensuring only authorized users can access sensitive data.
- Introduced a new integration test suite for policy filtering, validating the behavior of event access based on user authentication and policy rules.
- Added a script to automate the policy filter integration tests, improving testing efficiency and reliability.
- Updated version to v0.20.2 to reflect the new features and improvements.
2025-10-30 17:51:15 +00:00
62f244d114 Refactor event handling and testing utilities
- Updated the HandleReq function to improve event filtering logic, ensuring that privileged events are consistently checked against user access levels.
- Refactored event deduplication to utilize filtered events instead of all events, enhancing performance and clarity.
- Enhanced test utilities by generating keypairs for event creation, ensuring proper signing and validation in tests.
- Updated various test cases to use the new event creation methods, improving reliability and maintainability of tests.
- Bumped version to reflect changes made.
2025-10-30 15:53:02 +00:00
33 changed files with 5633 additions and 224 deletions


@@ -75,11 +75,11 @@ jobs:
mkdir -p release-binaries
# Build for different platforms
GOEXPERIMENT=greenteagc,jsonv2 GOOS=linux GOARCH=amd64 CGO_ENABLED=1 go build -o release-binaries/orly-${VERSION}-linux-amd64 .
GOEXPERIMENT=greenteagc,jsonv2 GOOS=linux GOARCH=arm64 CGO_ENABLED=0 go build -o release-binaries/orly-${VERSION}-linux-arm64 .
GOEXPERIMENT=greenteagc,jsonv2 GOOS=darwin GOARCH=amd64 CGO_ENABLED=0 go build -o release-binaries/orly-${VERSION}-darwin-amd64 .
GOEXPERIMENT=greenteagc,jsonv2 GOOS=darwin GOARCH=arm64 CGO_ENABLED=0 go build -o release-binaries/orly-${VERSION}-darwin-arm64 .
GOEXPERIMENT=greenteagc,jsonv2 GOOS=windows GOARCH=amd64 CGO_ENABLED=0 go build -o release-binaries/orly-${VERSION}-windows-amd64.exe .
GOEXPERIMENT=greenteagc,jsonv2 GOOS=linux GOARCH=amd64 CGO_ENABLED=1 go build -ldflags "-s -w" -o release-binaries/orly-${VERSION}-linux-amd64 .
# GOEXPERIMENT=greenteagc,jsonv2 GOOS=linux GOARCH=arm64 CGO_ENABLED=0 go build -o release-binaries/orly-${VERSION}-linux-arm64 .
# GOEXPERIMENT=greenteagc,jsonv2 GOOS=darwin GOARCH=amd64 CGO_ENABLED=0 go build -o release-binaries/orly-${VERSION}-darwin-amd64 .
# GOEXPERIMENT=greenteagc,jsonv2 GOOS=darwin GOARCH=arm64 CGO_ENABLED=0 go build -o release-binaries/orly-${VERSION}-darwin-arm64 .
# GOEXPERIMENT=greenteagc,jsonv2 GOOS=windows GOARCH=amd64 CGO_ENABLED=0 go build -o release-binaries/orly-${VERSION}-windows-amd64.exe .
# Note: Only building orly binary as requested
# Other cmd utilities (aggregator, benchmark, convert, policytest, stresstest) are development tools


@@ -37,7 +37,6 @@ func (l *Listener) HandleEvent(msg []byte) (err error) {
}
}()
log.I.F("HandleEvent: continuing with event processing...")
if len(msg) > 0 {
log.I.F("extra '%s'", msg)
}
@@ -176,6 +175,18 @@ func (l *Listener) HandleEvent(msg []byte) (err error) {
}
return
}
// validate timestamp - reject events too far in the future (more than 1 hour)
now := time.Now().Unix()
if env.E.CreatedAt > now+3600 {
if err = Ok.Invalid(
l, env,
"timestamp too far in the future",
); chk.E(err) {
return
}
return
}
// verify the signature
var ok bool
if ok, err = env.Verify(); chk.T(err) {


@@ -283,13 +283,13 @@ func (l *Listener) HandleReq(msg []byte) (err error) {
if !authorized {
continue // not authorized to see this private event
}
tmp = append(tmp, ev)
continue
// Event has private tag and user is authorized - continue to privileged check
}
if l.Config.ACLMode != "none" &&
kind.IsPrivileged(ev.Kind) && accessLevel != "admin" { // admins can see all events
// Always filter privileged events based on kind, regardless of ACLMode
// Privileged events should only be sent to users who are authenticated and
// are either the event author or listed in p tags
if kind.IsPrivileged(ev.Kind) && accessLevel != "admin" { // admins can see all events
log.T.C(
func() string {
return fmt.Sprintf(
@@ -357,6 +357,57 @@ func (l *Listener) HandleReq(msg []byte) (err error) {
)
}
} else {
// Check if policy defines this event as privileged (even if not in hardcoded list)
// Policy check will handle this later, but we can skip it here if not authenticated
// to avoid unnecessary processing
if l.policyManager != nil && l.policyManager.Manager != nil && l.policyManager.Manager.IsEnabled() {
rule, hasRule := l.policyManager.Rules[int(ev.Kind)]
if hasRule && rule.Privileged && accessLevel != "admin" {
pk := l.authedPubkey.Load()
if pk == nil {
// Not authenticated - cannot see policy-privileged events
log.T.C(
func() string {
return fmt.Sprintf(
"policy-privileged event %s denied - not authenticated",
ev.ID,
)
},
)
continue
}
// Policy check will verify authorization later, but we need to check
// if user is party to the event here
authorized := false
if utils.FastEqual(ev.Pubkey, pk) {
authorized = true
} else {
// Check p tags
pTags := ev.Tags.GetAll([]byte("p"))
for _, pTag := range pTags {
var pt []byte
if pt, err = hexenc.Dec(string(pTag.Value())); chk.E(err) {
continue
}
if utils.FastEqual(pt, pk) {
authorized = true
break
}
}
}
if !authorized {
log.T.C(
func() string {
return fmt.Sprintf(
"policy-privileged event %s does not contain the logged in pubkey %0x",
ev.ID, pk,
)
},
)
continue
}
}
}
tmp = append(tmp, ev)
}
}
@@ -384,27 +435,28 @@ func (l *Listener) HandleReq(msg []byte) (err error) {
}
// Deduplicate events (in case chunk processing returned duplicates)
if len(allEvents) > 0 {
// Use events (already filtered for privileged/policy) instead of allEvents
if len(events) > 0 {
seen := make(map[string]struct{})
var deduplicatedEvents event.S
originalCount := len(allEvents)
for _, ev := range allEvents {
originalCount := len(events)
for _, ev := range events {
eventID := hexenc.Enc(ev.ID)
if _, exists := seen[eventID]; !exists {
seen[eventID] = struct{}{}
deduplicatedEvents = append(deduplicatedEvents, ev)
}
}
allEvents = deduplicatedEvents
if originalCount != len(allEvents) {
log.T.F("REQ %s: deduplicated %d events to %d unique events", env.Subscription, originalCount, len(allEvents))
events = deduplicatedEvents
if originalCount != len(events) {
log.T.F("REQ %s: deduplicated %d events to %d unique events", env.Subscription, originalCount, len(events))
}
}
// Apply managed ACL filtering for read access if managed ACL is active
if acl.Registry.Active.Load() == "managed" {
var aclFilteredEvents event.S
for _, ev := range allEvents {
for _, ev := range events {
// Check if event is banned
eventID := hex.EncodeToString(ev.ID)
if banned, err := l.getManagedACL().IsEventBanned(eventID); err == nil && banned {
@@ -430,13 +482,13 @@ func (l *Listener) HandleReq(msg []byte) (err error) {
aclFilteredEvents = append(aclFilteredEvents, ev)
}
allEvents = aclFilteredEvents
events = aclFilteredEvents
}
// Apply private tag filtering - only show events with "private" tags to authorized users
var privateFilteredEvents event.S
authedPubkey := l.authedPubkey.Load()
for _, ev := range allEvents {
for _, ev := range events {
// Check if event has private tags
hasPrivateTag := false
var privatePubkey []byte
@@ -469,10 +521,10 @@ func (l *Listener) HandleReq(msg []byte) (err error) {
log.D.F("private tag: filtering out event %s from unauthorized user", hexenc.Enc(ev.ID))
}
}
allEvents = privateFilteredEvents
events = privateFilteredEvents
seen := make(map[string]struct{})
for _, ev := range allEvents {
for _, ev := range events {
log.T.C(
func() string {
return fmt.Sprintf(


@@ -71,6 +71,10 @@ whitelist:
// Set read limit immediately after connection is established
conn.SetReadLimit(DefaultMaxMessageSize)
log.D.F("set read limit to %d bytes (%d MB) for %s", DefaultMaxMessageSize, DefaultMaxMessageSize/units.Mb, remote)
// Set initial read deadline - pong handler will extend it when pongs are received
conn.SetReadDeadline(time.Now().Add(DefaultPongWait))
defer conn.Close()
listener := &Listener{
ctx: ctx,
@@ -100,12 +104,12 @@ whitelist:
log.D.F("AUTH challenge sent successfully to %s", remote)
}
ticker := time.NewTicker(DefaultPingWait)
// Set pong handler
// Set pong handler - extends read deadline when pongs are received
conn.SetPongHandler(func(string) error {
conn.SetReadDeadline(time.Now().Add(DefaultPongWait))
return nil
})
// Set ping handler
// Set ping handler - extends read deadline when pings are received
conn.SetPingHandler(func(string) error {
conn.SetReadDeadline(time.Now().Add(DefaultPongWait))
return conn.WriteControl(websocket.PongMessage, []byte{}, time.Now().Add(DefaultWriteTimeout))
@@ -159,14 +163,14 @@ whitelist:
var msg []byte
log.T.F("waiting for message from %s", remote)
// Set read deadline for context cancellation
deadline := time.Now().Add(DefaultPongWait)
// Don't set read deadline here - it's set initially and extended by pong handler
// This prevents premature timeouts on idle connections with active subscriptions
if ctx.Err() != nil {
return
}
conn.SetReadDeadline(deadline)
// Block waiting for message; rely on pings and context cancellation to detect dead peers
// The read deadline is managed by the pong handler which extends it when pongs are received
typ, msg, err = conn.ReadMessage()
if err != nil {
@@ -187,6 +191,12 @@ whitelist:
log.T.F("connection from %s closed: %v", remote, err)
return
}
// Handle timeout errors specifically - these can occur on idle connections
// but pongs should extend the deadline, so a timeout usually means dead connection
if strings.Contains(err.Error(), "timeout") || strings.Contains(err.Error(), "deadline exceeded") {
log.T.F("connection from %s read timeout (likely dead connection): %v", remote, err)
return
}
// Handle message too big errors specifically
if strings.Contains(err.Error(), "message too large") ||
strings.Contains(err.Error(), "read limited at") {
@@ -216,13 +226,41 @@ whitelist:
deadline := time.Now().Add(DefaultWriteTimeout)
conn.SetWriteDeadline(deadline)
pongStart := time.Now()
if err = conn.WriteControl(websocket.PongMessage, msg, deadline); chk.E(err) {
if err = conn.WriteControl(websocket.PongMessage, msg, deadline); err != nil {
pongDuration := time.Since(pongStart)
log.E.F(
"failed to send PONG to %s after %v: %v", remote,
pongDuration, err,
)
return
// Check if this is a timeout vs a connection error
isTimeout := strings.Contains(err.Error(), "timeout") || strings.Contains(err.Error(), "deadline exceeded")
isConnectionError := strings.Contains(err.Error(), "use of closed network connection") ||
strings.Contains(err.Error(), "broken pipe") ||
strings.Contains(err.Error(), "connection reset") ||
websocket.IsCloseError(err, websocket.CloseAbnormalClosure,
websocket.CloseGoingAway,
websocket.CloseNoStatusReceived)
if isConnectionError {
log.E.F(
"failed to send PONG to %s after %v (connection error): %v", remote,
pongDuration, err,
)
return
} else if isTimeout {
// Timeout on pong - log but don't close immediately
// The read deadline will catch dead connections
log.W.F(
"failed to send PONG to %s after %v (timeout, but connection may still be alive): %v", remote,
pongDuration, err,
)
// Continue - don't close connection on pong timeout
} else {
// Unknown error - log and continue
log.E.F(
"failed to send PONG to %s after %v (unknown error): %v", remote,
pongDuration, err,
)
// Continue - don't close on unknown errors
}
continue
}
pongDuration := time.Since(pongStart)
log.D.F("sent PONG to %s successfully in %v", remote, pongDuration)
@@ -264,12 +302,40 @@ func (s *Server) Pinger(
if err = conn.WriteControl(websocket.PingMessage, []byte{}, deadline); err != nil {
pingDuration := time.Since(pingStart)
log.E.F(
"PING #%d FAILED after %v: %v", pingCount, pingDuration,
err,
)
chk.E(err)
return
// Check if this is a timeout vs a connection error
isTimeout := strings.Contains(err.Error(), "timeout") || strings.Contains(err.Error(), "deadline exceeded")
isConnectionError := strings.Contains(err.Error(), "use of closed network connection") ||
strings.Contains(err.Error(), "broken pipe") ||
strings.Contains(err.Error(), "connection reset") ||
websocket.IsCloseError(err, websocket.CloseAbnormalClosure,
websocket.CloseGoingAway,
websocket.CloseNoStatusReceived)
if isConnectionError {
log.E.F(
"PING #%d FAILED after %v (connection error): %v", pingCount, pingDuration,
err,
)
chk.E(err)
return
} else if isTimeout {
// Timeout on ping - log but don't stop pinger immediately
// The read deadline will catch dead connections
log.W.F(
"PING #%d timeout after %v (connection may still be alive): %v", pingCount, pingDuration,
err,
)
// Continue - don't stop pinger on timeout
} else {
// Unknown error - log and continue
log.E.F(
"PING #%d FAILED after %v (unknown error): %v", pingCount, pingDuration,
err,
)
// Continue - don't stop pinger on unknown errors
}
continue
}
pingDuration := time.Since(pingStart)
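
For context, here is a minimal sketch of the read-deadline keepalive pattern the hunks above adopt: set the deadline once when the connection is established, extend it from the pong and ping handlers, and never reset it per read. This assumes gorilla/websocket; the constant names and values are illustrative, not the relay's actual configuration.

```go
// Package relayws: sketch of the keepalive pattern described above.
package relayws

import (
	"log"
	"time"

	"github.com/gorilla/websocket"
)

const (
	pongWait     = 60 * time.Second // illustrative, not DefaultPongWait
	pingInterval = 30 * time.Second
	writeTimeout = 10 * time.Second
)

func readLoop(conn *websocket.Conn) error {
	// Set the deadline once; the handlers below extend it whenever the peer responds.
	conn.SetReadDeadline(time.Now().Add(pongWait))
	conn.SetPongHandler(func(string) error {
		return conn.SetReadDeadline(time.Now().Add(pongWait))
	})
	conn.SetPingHandler(func(appData string) error {
		conn.SetReadDeadline(time.Now().Add(pongWait))
		return conn.WriteControl(websocket.PongMessage, []byte(appData), time.Now().Add(writeTimeout))
	})
	// Periodic pings; a peer that never pongs is eventually caught by the read deadline.
	go func() {
		t := time.NewTicker(pingInterval)
		defer t.Stop()
		for range t.C {
			if err := conn.WriteControl(websocket.PingMessage, nil, time.Now().Add(writeTimeout)); err != nil {
				return
			}
		}
	}()
	for {
		// No per-read deadline reset: idle connections with active subscriptions stay open.
		if _, _, err := conn.ReadMessage(); err != nil {
			log.Printf("read loop ended: %v", err)
			return err
		}
	}
}
```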


@@ -283,17 +283,36 @@ func (p *P) Deliver(ev *event.E) {
hex.Enc(ev.ID), d.sub.remote, d.id, deliveryDuration, err)
// Check for timeout specifically
if strings.Contains(err.Error(), "timeout") || strings.Contains(err.Error(), "deadline") {
isTimeout := strings.Contains(err.Error(), "timeout") || strings.Contains(err.Error(), "deadline exceeded")
if isTimeout {
log.E.F("subscription delivery TIMEOUT: event=%s to=%s after %v (limit=%v)",
hex.Enc(ev.ID), d.sub.remote, deliveryDuration, DefaultWriteTimeout)
}
// Log connection cleanup
log.D.F("removing failed subscriber connection: %s", d.sub.remote)
// Only close connection on permanent errors, not transient timeouts
// WebSocket write errors typically indicate connection issues, but we should
// distinguish between timeouts (client might be slow) and connection errors
isConnectionError := strings.Contains(err.Error(), "use of closed network connection") ||
strings.Contains(err.Error(), "broken pipe") ||
strings.Contains(err.Error(), "connection reset") ||
websocket.IsCloseError(err, websocket.CloseAbnormalClosure,
websocket.CloseGoingAway,
websocket.CloseNoStatusReceived)
// On error, remove the subscriber connection safely
p.removeSubscriber(d.w)
_ = d.w.Close()
if isConnectionError {
log.D.F("removing failed subscriber connection due to connection error: %s", d.sub.remote)
p.removeSubscriber(d.w)
_ = d.w.Close()
} else if isTimeout {
// For timeouts, log but don't immediately close - give it another chance
// The read deadline will catch dead connections eventually
log.W.F("subscription delivery timeout for %s (client may be slow), skipping event but keeping connection", d.sub.remote)
} else {
// Unknown error - be conservative and close
log.D.F("removing failed subscriber connection due to unknown error: %s", d.sub.remote)
p.removeSubscriber(d.w)
_ = d.w.Close()
}
continue
}
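
The same timeout-versus-connection-error test now appears in three places (the PONG write path, the pinger, and Deliver). A shared predicate could factor out the duplication; below is a minimal sketch with a hypothetical helper name, assuming gorilla/websocket and the same string checks used in the diff.

```go
// Package wsutil: hypothetical helper factoring out the error classification above.
package wsutil

import (
	"strings"

	"github.com/gorilla/websocket"
)

type writeErrorKind int

const (
	errUnknown    writeErrorKind = iota
	errTimeout                   // transient: keep the connection, let the read deadline decide
	errConnection                // hard failure: close the connection immediately
)

func classifyWriteError(err error) writeErrorKind {
	if err == nil {
		return errUnknown
	}
	msg := err.Error()
	switch {
	case strings.Contains(msg, "timeout"),
		strings.Contains(msg, "deadline exceeded"):
		return errTimeout
	case strings.Contains(msg, "use of closed network connection"),
		strings.Contains(msg, "broken pipe"),
		strings.Contains(msg, "connection reset"),
		websocket.IsCloseError(err,
			websocket.CloseAbnormalClosure,
			websocket.CloseGoingAway,
			websocket.CloseNoStatusReceived):
		return errConnection
	default:
		return errUnknown
	}
}
```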


@@ -0,0 +1,319 @@
package main
import (
"context"
"flag"
"fmt"
"os"
"strings"
"time"
"lol.mleku.dev/chk"
"lol.mleku.dev/log"
"next.orly.dev/pkg/crypto/p256k"
"next.orly.dev/pkg/encoders/event"
"next.orly.dev/pkg/encoders/filter"
"next.orly.dev/pkg/encoders/hex"
"next.orly.dev/pkg/encoders/kind"
"next.orly.dev/pkg/encoders/tag"
"next.orly.dev/pkg/protocol/ws"
)
func main() {
var err error
url := flag.String("url", "ws://127.0.0.1:34568", "relay websocket URL")
allowedPubkeyHex := flag.String("allowed-pubkey", "", "hex-encoded allowed pubkey")
allowedSecHex := flag.String("allowed-sec", "", "hex-encoded allowed secret key")
unauthorizedPubkeyHex := flag.String("unauthorized-pubkey", "", "hex-encoded unauthorized pubkey")
unauthorizedSecHex := flag.String("unauthorized-sec", "", "hex-encoded unauthorized secret key")
timeout := flag.Duration("timeout", 10*time.Second, "operation timeout")
flag.Parse()
if *allowedPubkeyHex == "" || *allowedSecHex == "" {
log.E.F("required flags: -allowed-pubkey and -allowed-sec")
os.Exit(1)
}
if *unauthorizedPubkeyHex == "" || *unauthorizedSecHex == "" {
log.E.F("required flags: -unauthorized-pubkey and -unauthorized-sec")
os.Exit(1)
}
// Decode keys
allowedSecBytes, err := hex.Dec(*allowedSecHex)
if err != nil {
log.E.F("failed to decode allowed secret key: %v", err)
os.Exit(1)
}
allowedSigner := &p256k.Signer{}
if err = allowedSigner.InitSec(allowedSecBytes); chk.E(err) {
log.E.F("failed to initialize allowed signer: %v", err)
os.Exit(1)
}
unauthorizedSecBytes, err := hex.Dec(*unauthorizedSecHex)
if err != nil {
log.E.F("failed to decode unauthorized secret key: %v", err)
os.Exit(1)
}
unauthorizedSigner := &p256k.Signer{}
if err = unauthorizedSigner.InitSec(unauthorizedSecBytes); chk.E(err) {
log.E.F("failed to initialize unauthorized signer: %v", err)
os.Exit(1)
}
ctx, cancel := context.WithTimeout(context.Background(), *timeout)
defer cancel()
// Test 1: Authenticated as allowed pubkey - should work
fmt.Println("Test 1: Publishing event 30520 with allowed pubkey (authenticated)...")
if err := testWriteEvent(ctx, *url, 30520, allowedSigner, allowedSigner); err != nil {
fmt.Printf("❌ FAILED: %v\n", err)
os.Exit(1)
}
fmt.Println("✅ PASSED: Event published successfully")
// Test 2: Authenticated as allowed pubkey, then read event 10306 - should work
// First publish an event, then read it
fmt.Println("\nTest 2: Publishing and reading event 10306 with allowed pubkey (authenticated)...")
if err := testWriteEvent(ctx, *url, 10306, allowedSigner, allowedSigner); err != nil {
fmt.Printf("❌ FAILED to publish: %v\n", err)
os.Exit(1)
}
if err := testReadEvent(ctx, *url, 10306, allowedSigner); err != nil {
fmt.Printf("❌ FAILED to read: %v\n", err)
os.Exit(1)
}
fmt.Println("✅ PASSED: Event readable by allowed user")
// Test 3: Unauthenticated request - should be blocked
fmt.Println("\nTest 3: Publishing event 30520 without authentication...")
if err := testWriteEventUnauthenticated(ctx, *url, 30520, allowedSigner); err != nil {
fmt.Printf("✅ PASSED: Event correctly blocked (expected): %v\n", err)
} else {
fmt.Println("❌ FAILED: Event was allowed when it should have been blocked")
os.Exit(1)
}
// Test 4: Authenticated as unauthorized pubkey - should be blocked
fmt.Println("\nTest 4: Publishing event 30520 with unauthorized pubkey...")
if err := testWriteEvent(ctx, *url, 30520, unauthorizedSigner, unauthorizedSigner); err != nil {
fmt.Printf("✅ PASSED: Event correctly blocked (expected): %v\n", err)
} else {
fmt.Println("❌ FAILED: Event was allowed when it should have been blocked")
os.Exit(1)
}
// Test 5: Read event 10306 without authentication - should be blocked
// Event was published in test 2, so it exists in the database
fmt.Println("\nTest 5: Reading event 10306 without authentication (should be blocked)...")
// Wait a bit to ensure event is stored
time.Sleep(500 * time.Millisecond)
// If no error is returned, that means no events were received (which is correct)
// If an error is returned, it means an event was received (which is wrong)
if err := testReadEventUnauthenticated(ctx, *url, 10306); err != nil {
// If we got an error about receiving an event, that's a failure
if strings.Contains(err.Error(), "unexpected event received") {
fmt.Printf("❌ FAILED: %v\n", err)
os.Exit(1)
}
// Other errors (like connection errors) are also failures
fmt.Printf("❌ FAILED: Unexpected error: %v\n", err)
os.Exit(1)
}
fmt.Println("✅ PASSED: No events received (correctly filtered by policy)")
// Test 6: Read event 10306 with unauthorized pubkey - should be blocked
fmt.Println("\nTest 6: Reading event 10306 with unauthorized pubkey (should be blocked)...")
// If no error is returned, that means no events were received (which is correct)
// If an error is returned about receiving an event, that's a failure
if err := testReadEvent(ctx, *url, 10306, unauthorizedSigner); err != nil {
// Connection/subscription errors are failures
fmt.Printf("❌ FAILED: Unexpected error: %v\n", err)
os.Exit(1)
}
fmt.Println("✅ PASSED: No events received (correctly filtered by policy)")
fmt.Println("\n✅ All tests passed!")
}
func testWriteEvent(ctx context.Context, url string, kindNum uint16, eventSigner, authSigner *p256k.Signer) error {
rl, err := ws.RelayConnect(ctx, url)
if err != nil {
return fmt.Errorf("connect error: %w", err)
}
defer rl.Close()
// Send a REQ first to trigger AUTH challenge (when AuthToWrite is enabled)
// This is needed because challenges are sent on REQ, not on connect
limit := uint(1)
ff := filter.NewS(&filter.F{
Kinds: kind.NewS(kind.New(kindNum)),
Limit: &limit,
})
sub, err := rl.Subscribe(ctx, ff)
if err != nil {
return fmt.Errorf("subscription error (may be expected): %w", err)
}
// Wait a bit for challenge to arrive
time.Sleep(500 * time.Millisecond)
sub.Unsub()
// Authenticate
if err = rl.Auth(ctx, authSigner); err != nil {
return fmt.Errorf("auth error: %w", err)
}
// Create and sign event
ev := &event.E{
CreatedAt: time.Now().Unix(),
Kind: kind.K{K: kindNum}.K,
Tags: tag.NewS(),
Content: []byte(fmt.Sprintf("test event kind %d", kindNum)),
}
// Add p tag for privileged check
pTag := tag.NewFromAny("p", hex.Enc(authSigner.Pub()))
ev.Tags.Append(pTag)
// Add d tag for addressable events (kinds 30000-39999)
if kindNum >= 30000 && kindNum < 40000 {
dTag := tag.NewFromAny("d", "test")
ev.Tags.Append(dTag)
}
if err = ev.Sign(eventSigner); err != nil {
return fmt.Errorf("sign error: %w", err)
}
// Publish
if err = rl.Publish(ctx, ev); err != nil {
return fmt.Errorf("publish error: %w", err)
}
return nil
}
func testWriteEventUnauthenticated(ctx context.Context, url string, kindNum uint16, eventSigner *p256k.Signer) error {
rl, err := ws.RelayConnect(ctx, url)
if err != nil {
return fmt.Errorf("connect error: %w", err)
}
defer rl.Close()
// Do NOT authenticate
// Create and sign event
ev := &event.E{
CreatedAt: time.Now().Unix(),
Kind: kind.K{K: kindNum}.K,
Tags: tag.NewS(),
Content: []byte(fmt.Sprintf("test event kind %d (unauthenticated)", kindNum)),
}
// Add d tag for addressable events (kinds 30000-39999)
if kindNum >= 30000 && kindNum < 40000 {
dTag := tag.NewFromAny("d", "test")
ev.Tags.Append(dTag)
}
if err = ev.Sign(eventSigner); err != nil {
return fmt.Errorf("sign error: %w", err)
}
// Publish (should fail)
if err = rl.Publish(ctx, ev); err != nil {
return fmt.Errorf("publish error (expected): %w", err)
}
return nil
}
func testReadEvent(ctx context.Context, url string, kindNum uint16, authSigner *p256k.Signer) error {
rl, err := ws.RelayConnect(ctx, url)
if err != nil {
return fmt.Errorf("connect error: %w", err)
}
defer rl.Close()
// Send a REQ first to trigger AUTH challenge (when AuthToWrite is enabled)
// Then authenticate
ff := filter.NewS(&filter.F{
Kinds: kind.NewS(kind.New(kindNum)),
})
sub, err := rl.Subscribe(ctx, ff)
if err != nil {
return fmt.Errorf("subscription error: %w", err)
}
// Wait a bit for challenge to arrive
time.Sleep(500 * time.Millisecond)
// Authenticate
if err = rl.Auth(ctx, authSigner); err != nil {
sub.Unsub()
return fmt.Errorf("auth error: %w", err)
}
// Wait for events or timeout
// If we receive any events, return nil (success)
// If we don't receive events, also return nil (no events found, which may be expected)
select {
case ev := <-sub.Events:
if ev != nil {
sub.Unsub()
return nil // Event received
}
case <-sub.EndOfStoredEvents:
// EOSE received, no more events
sub.Unsub()
return nil
case <-time.After(5 * time.Second):
// No events received - this might be OK if no events exist or they're filtered
sub.Unsub()
return nil
case <-ctx.Done():
sub.Unsub()
return ctx.Err()
}
return nil
}
func testReadEventUnauthenticated(ctx context.Context, url string, kindNum uint16) error {
rl, err := ws.RelayConnect(ctx, url)
if err != nil {
return fmt.Errorf("connect error: %w", err)
}
defer rl.Close()
// Do NOT authenticate
// Subscribe to events
ff := filter.NewS(&filter.F{
Kinds: kind.NewS(kind.New(kindNum)),
})
sub, err := rl.Subscribe(ctx, ff)
if err != nil {
return fmt.Errorf("subscription error (may be expected): %w", err)
}
defer sub.Unsub()
// Wait for events or timeout
// If we receive any events, that's a failure (should be blocked)
select {
case ev := <-sub.Events:
if ev != nil {
return fmt.Errorf("unexpected event received: should have been blocked by policy (event ID: %s)", hex.Enc(ev.ID))
}
case <-sub.EndOfStoredEvents:
// EOSE received, no events (this is expected for unauthenticated privileged events)
return nil
case <-time.After(5 * time.Second):
// No events received - this is expected for unauthenticated requests
return nil
case <-ctx.Done():
return ctx.Err()
}
return nil
}


@@ -0,0 +1,71 @@
# relay-tester
A command-line tool for testing Nostr relay implementations against the NIP-01 specification and related NIPs.
## Usage
```bash
relay-tester -url <relay-url> [options]
```
## Options
- `-url` (required): Relay websocket URL (e.g., `ws://127.0.0.1:3334` or `wss://relay.example.com`)
- `-test <name>`: Run a specific test by name (default: run all tests)
- `-json`: Output results in JSON format
- `-v`: Verbose output (shows additional info for each test)
- `-list`: List all available tests and exit
## Examples
### Run all tests against a local relay:
```bash
relay-tester -url ws://127.0.0.1:3334
```
### Run all tests with verbose output:
```bash
relay-tester -url ws://127.0.0.1:3334 -v
```
### Run a specific test:
```bash
relay-tester -url ws://127.0.0.1:3334 -test "Publishes basic event"
```
### Output results as JSON:
```bash
relay-tester -url ws://127.0.0.1:3334 -json
```
### List all available tests:
```bash
relay-tester -list
```
## Exit Codes
- `0`: All required tests passed
- `1`: One or more required tests failed, or an error occurred
## Test Categories
The relay-tester runs tests covering:
- **Basic Event Operations**: Publishing, finding by ID/author/kind/tags
- **Filtering**: Time ranges, limits, multiple filters, scrape queries
- **Replaceable Events**: Metadata and contact list replacement
- **Parameterized Replaceable Events**: Addressable events with `d` tags
- **Event Deletion**: Deletion events (NIP-09)
- **Ephemeral Events**: Event handling for ephemeral kinds
- **EOSE Handling**: End of stored events signaling
- **Event Validation**: Signature verification, ID hash verification
- **JSON Compliance**: NIP-01 JSON escape sequences
## Notes
- Tests are run in dependency order (some tests depend on others)
- Required tests must pass for the relay to be considered compliant
- Optional tests may fail without affecting overall compliance
- The tool connects to the relay using WebSocket and runs tests sequentially
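
The CLI in `cmd/relay-tester/main.go` (shown below) is a thin wrapper around the `relaytester` package, so the suite can also be driven programmatically. A minimal sketch, assuming only the `NewTestSuite`, `Run`, and `TestResult` API that main.go itself uses:

```go
// Sketch only: exercises the relaytester API as used by cmd/relay-tester/main.go.
package main

import (
	"fmt"
	"os"

	relaytester "next.orly.dev/relay-tester"
)

func main() {
	suite, err := relaytester.NewTestSuite("ws://127.0.0.1:3334")
	if err != nil {
		fmt.Fprintf(os.Stderr, "create suite: %v\n", err)
		os.Exit(1)
	}
	results, err := suite.Run()
	if err != nil {
		fmt.Fprintf(os.Stderr, "run tests: %v\n", err)
		os.Exit(1)
	}
	exitCode := 0
	for _, r := range results {
		status := "PASS"
		if !r.Pass {
			status = "FAIL"
			if r.Required {
				exitCode = 1 // mirror the CLI: required failures are fatal
			}
		}
		fmt.Printf("%s: %s %s\n", status, r.Name, r.Info)
	}
	os.Exit(exitCode)
}
```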

cmd/relay-tester/main.go (new file, 160 lines)

@@ -0,0 +1,160 @@
package main
import (
"flag"
"fmt"
"os"
"strings"
"lol.mleku.dev/log"
relaytester "next.orly.dev/relay-tester"
)
func main() {
var (
relayURL = flag.String("url", "", "relay websocket URL (required, e.g., ws://127.0.0.1:3334)")
testName = flag.String("test", "", "run specific test by name (default: run all tests)")
jsonOut = flag.Bool("json", false, "output results in JSON format")
verbose = flag.Bool("v", false, "verbose output")
listTests = flag.Bool("list", false, "list all available tests and exit")
)
flag.Parse()
if *listTests {
listAllTests()
return
}
if *relayURL == "" {
log.E.F("required flag: -url (relay websocket URL)")
flag.Usage()
os.Exit(1)
}
// Validate URL format
if !strings.HasPrefix(*relayURL, "ws://") && !strings.HasPrefix(*relayURL, "wss://") {
log.E.F("URL must start with ws:// or wss://")
os.Exit(1)
}
// Create test suite
if *verbose {
log.I.F("Creating test suite for %s...", *relayURL)
}
suite, err := relaytester.NewTestSuite(*relayURL)
if err != nil {
log.E.F("failed to create test suite: %v", err)
os.Exit(1)
}
// Run tests
var results []relaytester.TestResult
if *testName != "" {
if *verbose {
log.I.F("Running test: %s", *testName)
}
result, err := suite.RunTest(*testName)
if err != nil {
log.E.F("failed to run test %s: %v", *testName, err)
os.Exit(1)
}
results = []relaytester.TestResult{result}
} else {
if *verbose {
log.I.F("Running all tests...")
}
if results, err = suite.Run(); err != nil {
log.E.F("failed to run tests: %v", err)
os.Exit(1)
}
}
// Output results
if *jsonOut {
jsonOutput, err := relaytester.FormatJSON(results)
if err != nil {
log.E.F("failed to format JSON: %v", err)
os.Exit(1)
}
fmt.Println(jsonOutput)
} else {
outputResults(results, *verbose)
}
// Check exit code
hasRequiredFailures := false
for _, result := range results {
if result.Required && !result.Pass {
hasRequiredFailures = true
break
}
}
if hasRequiredFailures {
os.Exit(1)
}
}
func outputResults(results []relaytester.TestResult, verbose bool) {
passed := 0
failed := 0
requiredFailed := 0
for _, result := range results {
if result.Pass {
passed++
if verbose {
fmt.Printf("PASS: %s", result.Name)
if result.Info != "" {
fmt.Printf(" - %s", result.Info)
}
fmt.Println()
} else {
fmt.Printf("PASS: %s\n", result.Name)
}
} else {
failed++
if result.Required {
requiredFailed++
fmt.Printf("FAIL (required): %s", result.Name)
} else {
fmt.Printf("FAIL (optional): %s", result.Name)
}
if result.Info != "" {
fmt.Printf(" - %s", result.Info)
}
fmt.Println()
}
}
fmt.Println()
fmt.Println("Test Summary:")
fmt.Printf(" Total: %d\n", len(results))
fmt.Printf(" Passed: %d\n", passed)
fmt.Printf(" Failed: %d\n", failed)
fmt.Printf(" Required Failed: %d\n", requiredFailed)
}
func listAllTests() {
// Create a dummy test suite to get the list of tests
suite, err := relaytester.NewTestSuite("ws://127.0.0.1:0")
if err != nil {
log.E.F("failed to create test suite: %v", err)
os.Exit(1)
}
fmt.Println("Available tests:")
fmt.Println()
testNames := suite.ListTests()
testInfo := suite.GetTestNames()
for _, name := range testNames {
required := ""
if testInfo[name] {
required = " (required)"
}
fmt.Printf(" - %s%s\n", name, required)
}
}


@@ -8,20 +8,27 @@ import (
"testing"
"time"
"lol.mleku.dev/chk"
"next.orly.dev/pkg/crypto/p256k"
"next.orly.dev/pkg/encoders/event"
"next.orly.dev/pkg/encoders/hex"
"next.orly.dev/pkg/encoders/tag"
)
// Helper function to create test event
func createTestEventBench(id, pubkey, content string, kind uint16) *event.E {
return &event.E{
ID: []byte(id),
Kind: kind,
Pubkey: []byte(pubkey),
Content: []byte(content),
Tags: &tag.S{},
CreatedAt: time.Now().Unix(),
// Helper function to create test event for benchmarks (reuses signer)
func createTestEventBench(b *testing.B, signer *p256k.Signer, content string, kind uint16) *event.E {
ev := event.New()
ev.CreatedAt = time.Now().Unix()
ev.Kind = kind
ev.Content = []byte(content)
ev.Tags = tag.NewS()
// Sign the event properly
if err := ev.Sign(signer); chk.E(err) {
b.Fatalf("Failed to sign test event: %v", err)
}
return ev
}
func BenchmarkCheckKindsPolicy(b *testing.B) {
@@ -38,12 +45,13 @@ func BenchmarkCheckKindsPolicy(b *testing.B) {
}
func BenchmarkCheckRulePolicy(b *testing.B) {
// Create test event
testEvent := createTestEventBench("test-event-id", "test-pubkey", "test content", 1)
// Generate keypair once for all events
signer, pubkey := generateTestKeypairB(b)
testEvent := createTestEventBench(b, signer, "test content", 1)
rule := Rule{
Description: "test rule",
WriteAllow: []string{"test-pubkey"},
WriteAllow: []string{hex.Enc(pubkey)},
SizeLimit: int64Ptr(10000),
ContentLimit: int64Ptr(1000),
MustHaveTags: []string{"p"},
@@ -53,13 +61,14 @@ func BenchmarkCheckRulePolicy(b *testing.B) {
b.ResetTimer()
for i := 0; i < b.N; i++ {
policy.checkRulePolicy("write", testEvent, rule, []byte("test-pubkey"))
policy.checkRulePolicy("write", testEvent, rule, pubkey)
}
}
func BenchmarkCheckPolicy(b *testing.B) {
// Create test event
testEvent := createTestEventBench("test-event-id", "test-pubkey", "test content", 1)
// Generate keypair once for all events
signer, pubkey := generateTestKeypairB(b)
testEvent := createTestEventBench(b, signer, "test content", 1)
policy := &P{
Kind: Kinds{
@@ -68,14 +77,14 @@ func BenchmarkCheckPolicy(b *testing.B) {
Rules: map[int]Rule{
1: {
Description: "test rule",
WriteAllow: []string{"test-pubkey"},
WriteAllow: []string{hex.Enc(pubkey)},
},
},
}
b.ResetTimer()
for i := 0; i < b.N; i++ {
policy.CheckPolicy("write", testEvent, []byte("test-pubkey"), "127.0.0.1")
policy.CheckPolicy("write", testEvent, pubkey, "127.0.0.1")
}
}
@@ -114,8 +123,9 @@ done
// Give the script time to start
time.Sleep(100 * time.Millisecond)
// Create test event
testEvent := createTestEventBench("test-event-id", "test-pubkey", "test content", 1)
// Generate keypair once for all events
signer, pubkey := generateTestKeypairB(b)
testEvent := createTestEventBench(b, signer, "test content", 1)
policy := &P{
Manager: manager,
@@ -130,7 +140,7 @@ done
b.ResetTimer()
for i := 0; i < b.N; i++ {
policy.CheckPolicy("write", testEvent, []byte("test-pubkey"), "127.0.0.1")
policy.CheckPolicy("write", testEvent, pubkey, "127.0.0.1")
}
}
@@ -190,16 +200,19 @@ func BenchmarkCheckPolicyMultipleKinds(b *testing.B) {
Rules: rules,
}
// Generate keypair once for all events
signer, pubkey := generateTestKeypairB(b)
// Create test events with different kinds
events := make([]*event.E, 100)
for i := 0; i < 100; i++ {
events[i] = createTestEvent("test-event-id", "test-pubkey", "test content", uint16(i+1))
events[i] = createTestEventBench(b, signer, "test content", uint16(i+1))
}
b.ResetTimer()
for i := 0; i < b.N; i++ {
event := events[i%100]
policy.CheckPolicy("write", event, []byte("test-pubkey"), "127.0.0.1")
policy.CheckPolicy("write", event, pubkey, "127.0.0.1")
}
}
@@ -217,11 +230,13 @@ func BenchmarkCheckPolicyLargeWhitelist(b *testing.B) {
Rules: map[int]Rule{},
}
testEvent := createTestEvent("test-event-id", "test-pubkey", "test content", 500) // Kind in the middle of the whitelist
// Generate keypair once for all events
signer, pubkey := generateTestKeypairB(b)
testEvent := createTestEventBench(b, signer, "test content", 500) // Kind in the middle of the whitelist
b.ResetTimer()
for i := 0; i < b.N; i++ {
policy.CheckPolicy("write", testEvent, []byte("test-pubkey"), "127.0.0.1")
policy.CheckPolicy("write", testEvent, pubkey, "127.0.0.1")
}
}
@@ -239,22 +254,25 @@ func BenchmarkCheckPolicyLargeBlacklist(b *testing.B) {
Rules: map[int]Rule{},
}
testEvent := createTestEvent("test-event-id", "test-pubkey", "test content", 1500) // Kind not in blacklist
// Generate keypair once for all events
signer, pubkey := generateTestKeypairB(b)
testEvent := createTestEventBench(b, signer, "test content", 1500) // Kind not in blacklist
b.ResetTimer()
for i := 0; i < b.N; i++ {
policy.CheckPolicy("write", testEvent, []byte("test-pubkey"), "127.0.0.1")
policy.CheckPolicy("write", testEvent, pubkey, "127.0.0.1")
}
}
func BenchmarkCheckPolicyComplexRule(b *testing.B) {
// Create test event with many tags
testEvent := createTestEventBench("test-event-id", "test-pubkey", "test content", 1)
// Generate keypair once for all events
signer, pubkey := generateTestKeypairB(b)
testEvent := createTestEventBench(b, signer, "test content", 1)
// Add many tags
for i := 0; i < 100; i++ {
tagItem1 := tag.New()
tagItem1.T = append(tagItem1.T, []byte("p"), []byte("test-pubkey"))
tagItem1.T = append(tagItem1.T, []byte("p"), []byte(hex.Enc(pubkey)))
*testEvent.Tags = append(*testEvent.Tags, tagItem1)
tagItem2 := tag.New()
@@ -264,7 +282,7 @@ func BenchmarkCheckPolicyComplexRule(b *testing.B) {
rule := Rule{
Description: "complex rule",
WriteAllow: []string{"test-pubkey"},
WriteAllow: []string{hex.Enc(pubkey)},
SizeLimit: int64Ptr(100000),
ContentLimit: int64Ptr(10000),
MustHaveTags: []string{"p", "e"},
@@ -275,7 +293,7 @@ func BenchmarkCheckPolicyComplexRule(b *testing.B) {
b.ResetTimer()
for i := 0; i < b.N; i++ {
policy.checkRulePolicy("write", testEvent, rule, []byte("test-pubkey"))
policy.checkRulePolicy("write", testEvent, rule, pubkey)
}
}
@@ -294,11 +312,12 @@ func BenchmarkCheckPolicyLargeEvent(b *testing.B) {
},
}
// Create test event with large content
testEvent := createTestEvent("test-event-id", "test-pubkey", largeContent, 1)
// Generate keypair once for all events
signer, pubkey := generateTestKeypairB(b)
testEvent := createTestEventBench(b, signer, largeContent, 1)
b.ResetTimer()
for i := 0; i < b.N; i++ {
policy.CheckPolicy("write", testEvent, []byte("test-pubkey"), "127.0.0.1")
policy.CheckPolicy("write", testEvent, pubkey, "127.0.0.1")
}
}
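
The benchmarks call `generateTestKeypairB(b)`, which is not part of this hunk. A plausible reconstruction, based only on the `p256k.Signer` methods (`Generate`, `Pub`) and the `chk.E` idiom used elsewhere in this diff; the actual helper in the repository may differ:

```go
// Hypothetical sketch of the generateTestKeypairB helper referenced above.
package policy

import (
	"testing"

	"lol.mleku.dev/chk"
	"next.orly.dev/pkg/crypto/p256k"
)

func generateTestKeypairB(b *testing.B) (*p256k.Signer, []byte) {
	b.Helper()
	signer := &p256k.Signer{}
	if err := signer.Generate(); chk.E(err) {
		b.Fatalf("failed to generate test keypair: %v", err)
	}
	return signer, signer.Pub()
}
```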


@@ -131,11 +131,13 @@ type PolicyManager struct {
currentCancel context.CancelFunc
mutex sync.RWMutex
isRunning bool
isStarting bool
enabled bool
stdin io.WriteCloser
stdout io.ReadCloser
stderr io.ReadCloser
responseChan chan PolicyResponse
startupChan chan error
}
// P represents a complete policy configuration for a Nostr relay.
@@ -203,6 +205,7 @@ func NewWithManager(ctx context.Context, appName string, enabled bool) *P {
scriptPath: scriptPath,
enabled: enabled,
responseChan: make(chan PolicyResponse, 100), // Buffered channel for responses
startupChan: make(chan error, 1), // Channel for startup completion
}
// Load policy configuration from JSON file
@@ -279,8 +282,21 @@ func (p *P) CheckPolicy(access string, ev *event.E, loggedInPubkey []byte, ipAdd
}
// Check if script is present and enabled
if rule.Script != "" && p.Manager != nil && p.Manager.IsEnabled() {
return p.checkScriptPolicy(access, ev, rule.Script, loggedInPubkey, ipAddress)
if rule.Script != "" && p.Manager != nil {
if p.Manager.IsEnabled() {
// Check if script file exists before trying to use it
if _, err := os.Stat(p.Manager.GetScriptPath()); err == nil {
// Script exists, try to use it
allowed, err := p.checkScriptPolicy(access, ev, rule.Script, loggedInPubkey, ipAddress)
if err == nil {
// Script ran successfully, return its decision
return allowed, nil
}
// Script failed, fall through to apply other criteria
log.W.F("policy script check failed for kind %d: %v, applying other criteria", ev.Kind, err)
}
// Script doesn't exist or failed, fall through to apply other criteria
}
}
// Apply rule-based filtering
@@ -452,12 +468,31 @@ func (p *P) checkRulePolicy(access string, ev *event.E, rule Rule, loggedInPubke
// checkScriptPolicy runs the policy script to determine if event should be allowed
func (p *P) checkScriptPolicy(access string, ev *event.E, scriptPath string, loggedInPubkey []byte, ipAddress string) (allowed bool, err error) {
if p.Manager == nil || !p.Manager.IsRunning() {
// If script is not running, fall back to default policy
log.W.F("policy rule for kind %d is inactive (script not running), falling back to default policy (%s)", ev.Kind, p.DefaultPolicy)
if p.Manager == nil {
return false, fmt.Errorf("policy manager is not initialized")
}
// If policy is disabled, fall back to default policy immediately
if !p.Manager.IsEnabled() {
log.W.F("policy rule for kind %d is inactive (policy disabled), falling back to default policy (%s)", ev.Kind, p.DefaultPolicy)
return p.getDefaultPolicyAction(), nil
}
// Policy is enabled, check if it's running
if !p.Manager.IsRunning() {
// Check if script file exists
if _, err := os.Stat(p.Manager.GetScriptPath()); os.IsNotExist(err) {
// Script doesn't exist, return error so caller can fall back to other criteria
return false, fmt.Errorf("policy script does not exist at %s", p.Manager.GetScriptPath())
}
// Try to start the policy and wait for it
if err := p.Manager.ensureRunning(); err != nil {
// Startup failed, return error so caller can fall back to other criteria
return false, fmt.Errorf("failed to start policy script: %v", err)
}
}
// Create policy event with additional context
policyEvent := &PolicyEvent{
E: ev,
@@ -535,6 +570,91 @@ func (pm *PolicyManager) startPolicyIfExists() {
}
}
// ensureRunning ensures the policy is running, starting it if necessary.
// It waits for startup to complete with a timeout and returns an error if startup fails.
func (pm *PolicyManager) ensureRunning() error {
pm.mutex.Lock()
// Check if already running
if pm.isRunning {
pm.mutex.Unlock()
return nil
}
// Check if already starting
if pm.isStarting {
pm.mutex.Unlock()
// Wait for startup to complete
select {
case err := <-pm.startupChan:
if err != nil {
return fmt.Errorf("policy startup failed: %v", err)
}
// Double-check it's actually running after receiving signal
pm.mutex.RLock()
running := pm.isRunning
pm.mutex.RUnlock()
if !running {
return fmt.Errorf("policy startup completed but process is not running")
}
return nil
case <-time.After(10 * time.Second):
return fmt.Errorf("policy startup timeout")
case <-pm.ctx.Done():
return fmt.Errorf("policy context cancelled")
}
}
// Mark as starting
pm.isStarting = true
pm.mutex.Unlock()
// Start the policy in a goroutine
go func() {
err := pm.StartPolicy()
pm.mutex.Lock()
pm.isStarting = false
pm.mutex.Unlock()
// Signal startup completion (non-blocking)
// Drain any stale value first, then send
select {
case <-pm.startupChan:
default:
}
select {
case pm.startupChan <- err:
default:
// Channel should be empty now, but if it's full, try again
pm.startupChan <- err
}
}()
// Wait for startup to complete
select {
case err := <-pm.startupChan:
if err != nil {
return fmt.Errorf("policy startup failed: %v", err)
}
// Double-check it's actually running after receiving signal
pm.mutex.RLock()
running := pm.isRunning
pm.mutex.RUnlock()
if !running {
return fmt.Errorf("policy startup completed but process is not running")
}
return nil
case <-time.After(10 * time.Second):
pm.mutex.Lock()
pm.isStarting = false
pm.mutex.Unlock()
return fmt.Errorf("policy startup timeout")
case <-pm.ctx.Done():
pm.mutex.Lock()
pm.isStarting = false
pm.mutex.Unlock()
return fmt.Errorf("policy context cancelled")
}
}
// StartPolicy starts the policy script process.
// Returns an error if the script doesn't exist, can't be executed, or is already running.
func (pm *PolicyManager) StartPolicy() error {
@@ -800,6 +920,11 @@ func (pm *PolicyManager) IsRunning() bool {
return pm.isRunning
}
// GetScriptPath returns the path to the policy script.
func (pm *PolicyManager) GetScriptPath() string {
return pm.scriptPath
}
// Shutdown gracefully shuts down the policy manager.
// It cancels the context and stops any running policy script.
func (pm *PolicyManager) Shutdown() {


@@ -0,0 +1,516 @@
package policy
import (
"encoding/json"
"fmt"
"os"
"path/filepath"
"testing"
"time"
"lol.mleku.dev/chk"
"next.orly.dev/pkg/crypto/p256k"
"next.orly.dev/pkg/encoders/event"
"next.orly.dev/pkg/encoders/hex"
"next.orly.dev/pkg/encoders/kind"
"next.orly.dev/pkg/encoders/tag"
)
// TestPolicyIntegration runs the relay with policy enabled and tests event filtering
func TestPolicyIntegration(t *testing.T) {
if testing.Short() {
t.Skip("skipping integration test")
}
// Generate test keys
allowedSigner := &p256k.Signer{}
if err := allowedSigner.Generate(); chk.E(err) {
t.Fatalf("Failed to generate allowed signer: %v", err)
}
allowedPubkeyHex := hex.Enc(allowedSigner.Pub())
unauthorizedSigner := &p256k.Signer{}
if err := unauthorizedSigner.Generate(); chk.E(err) {
t.Fatalf("Failed to generate unauthorized signer: %v", err)
}
// Create temporary directory for policy config
tempDir := t.TempDir()
configDir := filepath.Join(tempDir, "ORLY_TEST")
if err := os.MkdirAll(configDir, 0755); chk.E(err) {
t.Fatalf("Failed to create config directory: %v", err)
}
// Create policy JSON with generated keys
policyJSON := map[string]interface{}{
"kind": map[string]interface{}{
"whitelist": []int{4678, 10306, 30520, 30919},
},
"rules": map[string]interface{}{
"4678": map[string]interface{}{
"description": "Zenotp message events",
"script": filepath.Join(configDir, "validate4678.js"), // Won't exist, should fall back to default
"privileged": true,
},
"10306": map[string]interface{}{
"description": "End user whitelist changes",
"read_allow": []string{allowedPubkeyHex},
"privileged": true,
},
"30520": map[string]interface{}{
"description": "Zenotp events",
"write_allow": []string{allowedPubkeyHex},
"privileged": true,
},
"30919": map[string]interface{}{
"description": "Zenotp events",
"write_allow": []string{allowedPubkeyHex},
"privileged": true,
},
},
}
policyJSONBytes, err := json.MarshalIndent(policyJSON, "", " ")
if err != nil {
t.Fatalf("Failed to marshal policy JSON: %v", err)
}
policyPath := filepath.Join(configDir, "policy.json")
if err := os.WriteFile(policyPath, policyJSONBytes, 0644); chk.E(err) {
t.Fatalf("Failed to write policy file: %v", err)
}
// Create events with proper signatures
// Event 1: Kind 30520 with allowed pubkey (should be allowed)
event30520Allowed := event.New()
event30520Allowed.CreatedAt = time.Now().Unix()
event30520Allowed.Kind = kind.K{K: 30520}.K
event30520Allowed.Content = []byte("test event 30520")
event30520Allowed.Tags = tag.NewS()
addPTag(event30520Allowed, allowedSigner.Pub()) // Add p tag for privileged check
if err := event30520Allowed.Sign(allowedSigner); chk.E(err) {
t.Fatalf("Failed to sign event30520Allowed: %v", err)
}
// Event 2: Kind 30520 with unauthorized pubkey (should be denied)
event30520Unauthorized := event.New()
event30520Unauthorized.CreatedAt = time.Now().Unix()
event30520Unauthorized.Kind = kind.K{K: 30520}.K
event30520Unauthorized.Content = []byte("test event 30520 unauthorized")
event30520Unauthorized.Tags = tag.NewS()
if err := event30520Unauthorized.Sign(unauthorizedSigner); chk.E(err) {
t.Fatalf("Failed to sign event30520Unauthorized: %v", err)
}
// Event 3: Kind 10306 with allowed pubkey (should be readable by allowed user)
event10306Allowed := event.New()
event10306Allowed.CreatedAt = time.Now().Unix()
event10306Allowed.Kind = kind.K{K: 10306}.K
event10306Allowed.Content = []byte("test event 10306")
event10306Allowed.Tags = tag.NewS()
addPTag(event10306Allowed, allowedSigner.Pub()) // Add p tag for privileged check
if err := event10306Allowed.Sign(allowedSigner); chk.E(err) {
t.Fatalf("Failed to sign event10306Allowed: %v", err)
}
// Event 4: Kind 4678 with allowed pubkey (script-based, should fall back to default)
event4678Allowed := event.New()
event4678Allowed.CreatedAt = time.Now().Unix()
event4678Allowed.Kind = kind.K{K: 4678}.K
event4678Allowed.Content = []byte("test event 4678")
event4678Allowed.Tags = tag.NewS()
addPTag(event4678Allowed, allowedSigner.Pub()) // Add p tag for privileged check
if err := event4678Allowed.Sign(allowedSigner); chk.E(err) {
t.Fatalf("Failed to sign event4678Allowed: %v", err)
}
// Test policy loading
policy, err := New(policyJSONBytes)
if err != nil {
t.Fatalf("Failed to create policy: %v", err)
}
// Verify policy loaded correctly
if len(policy.Rules) != 4 {
t.Errorf("Expected 4 rules, got %d", len(policy.Rules))
}
// Test policy checks directly
t.Run("policy checks", func(t *testing.T) {
// Test 1: Event 30520 with allowed pubkey should be allowed
allowed, err := policy.CheckPolicy("write", event30520Allowed, allowedSigner.Pub(), "127.0.0.1")
if err != nil {
t.Errorf("Unexpected error: %v", err)
}
if !allowed {
t.Error("Expected event30520Allowed to be allowed")
}
// Test 2: Event 30520 with unauthorized pubkey should be denied
allowed, err = policy.CheckPolicy("write", event30520Unauthorized, unauthorizedSigner.Pub(), "127.0.0.1")
if err != nil {
t.Errorf("Unexpected error: %v", err)
}
if allowed {
t.Error("Expected event30520Unauthorized to be denied")
}
// Test 3: Event 10306 should be readable by allowed user
allowed, err = policy.CheckPolicy("read", event10306Allowed, allowedSigner.Pub(), "127.0.0.1")
if err != nil {
t.Errorf("Unexpected error: %v", err)
}
if !allowed {
t.Error("Expected event10306Allowed to be readable by allowed user")
}
// Test 4: Event 10306 should NOT be readable by unauthorized user
allowed, err = policy.CheckPolicy("read", event10306Allowed, unauthorizedSigner.Pub(), "127.0.0.1")
if err != nil {
t.Errorf("Unexpected error: %v", err)
}
if allowed {
t.Error("Expected event10306Allowed to be denied for unauthorized user")
}
// Test 5: Event 10306 should NOT be readable without authentication
allowed, err = policy.CheckPolicy("read", event10306Allowed, nil, "127.0.0.1")
if err != nil {
t.Errorf("Unexpected error: %v", err)
}
if allowed {
t.Error("Expected event10306Allowed to be denied without authentication (privileged)")
}
// Test 6: Event 30520 should NOT be writable without authentication
allowed, err = policy.CheckPolicy("write", event30520Allowed, nil, "127.0.0.1")
if err != nil {
t.Errorf("Unexpected error: %v", err)
}
if allowed {
t.Error("Expected event30520Allowed to be denied without authentication (privileged)")
}
// Test 7: Event 4678 should fall back to default policy (allow) when script not running
allowed, err = policy.CheckPolicy("write", event4678Allowed, allowedSigner.Pub(), "127.0.0.1")
if err != nil {
t.Errorf("Unexpected error: %v", err)
}
if !allowed {
t.Error("Expected event4678Allowed to be allowed when script not running (falls back to default)")
}
// Test 8: Event 4678 should be denied without authentication (privileged check)
allowed, err = policy.CheckPolicy("write", event4678Allowed, nil, "127.0.0.1")
if err != nil {
t.Errorf("Unexpected error: %v", err)
}
if allowed {
t.Error("Expected event4678Allowed to be denied without authentication (privileged)")
}
})
// Test with relay simulation (checking log output)
t.Run("relay simulation", func(t *testing.T) {
// Note: We can't easily capture log output in tests, so we just verify
// that policy checks work correctly
// Simulate policy checks that would happen in relay
// First, publish events (simulate write checks)
checks := []struct {
name string
event *event.E
loggedInPubkey []byte
access string
shouldAllow bool
shouldLog string // Expected log message substring, empty means no specific log expected
}{
{
name: "write 30520 with allowed pubkey",
event: event30520Allowed,
loggedInPubkey: allowedSigner.Pub(),
access: "write",
shouldAllow: true,
},
{
name: "write 30520 with unauthorized pubkey",
event: event30520Unauthorized,
loggedInPubkey: unauthorizedSigner.Pub(),
access: "write",
shouldAllow: false,
},
{
name: "read 10306 with allowed pubkey",
event: event10306Allowed,
loggedInPubkey: allowedSigner.Pub(),
access: "read",
shouldAllow: true,
},
{
name: "read 10306 with unauthorized pubkey",
event: event10306Allowed,
loggedInPubkey: unauthorizedSigner.Pub(),
access: "read",
shouldAllow: false,
},
{
name: "read 10306 without authentication",
event: event10306Allowed,
loggedInPubkey: nil,
access: "read",
shouldAllow: false,
},
{
name: "write 30520 without authentication",
event: event30520Allowed,
loggedInPubkey: nil,
access: "write",
shouldAllow: false,
},
{
name: "write 4678 with allowed pubkey",
event: event4678Allowed,
loggedInPubkey: allowedSigner.Pub(),
access: "write",
shouldAllow: true,
shouldLog: "", // Should not log "policy rule is inactive" if script is not configured
},
}
for _, check := range checks {
t.Run(check.name, func(t *testing.T) {
allowed, err := policy.CheckPolicy(check.access, check.event, check.loggedInPubkey, "127.0.0.1")
if err != nil {
t.Errorf("Unexpected error: %v", err)
return
}
if allowed != check.shouldAllow {
t.Errorf("Expected allowed=%v, got %v", check.shouldAllow, allowed)
}
})
}
})
// Test event IDs are regenerated correctly after signing
t.Run("event ID regeneration", func(t *testing.T) {
// Create a new event, sign it, then verify ID is correct
testEvent := event.New()
testEvent.CreatedAt = time.Now().Unix()
testEvent.Kind = kind.K{K: 30520}.K
testEvent.Content = []byte("test content")
testEvent.Tags = tag.NewS()
// Sign the event
if err := testEvent.Sign(allowedSigner); chk.E(err) {
t.Fatalf("Failed to sign test event: %v", err)
}
// Verify the event ID has the expected length (32 bytes, a SHA-256 digest)
if len(testEvent.ID) != 32 {
t.Errorf("Expected event ID to be 32 bytes, got %d", len(testEvent.ID))
}
// Verify the signature has the expected length (64 bytes)
if len(testEvent.Sig) != 64 {
t.Errorf("Expected event signature to be 64 bytes, got %d", len(testEvent.Sig))
}
// Verify signature validates using event's Verify method
valid, err := testEvent.Verify()
if err != nil {
t.Errorf("Failed to verify signature: %v", err)
}
if !valid {
t.Error("Event signature verification failed")
}
})
// Test WebSocket client simulation (for future integration)
t.Run("websocket client simulation", func(t *testing.T) {
// This test simulates what would happen if we connected via WebSocket
// For now, we'll just verify the events can be serialized correctly
events := []*event.E{
event30520Allowed,
event30520Unauthorized,
event10306Allowed,
event4678Allowed,
}
for i, ev := range events {
t.Run(fmt.Sprintf("event_%d", i), func(t *testing.T) {
// Serialize event
serialized := ev.Serialize()
if len(serialized) == 0 {
t.Error("Event serialization returned empty")
}
// Basic sanity checks on the event's fixed-size fields
if len(ev.ID) != 32 {
t.Errorf("Event ID length incorrect: %d", len(ev.ID))
}
if len(ev.Pubkey) != 32 {
t.Errorf("Event pubkey length incorrect: %d", len(ev.Pubkey))
}
if len(ev.Sig) != 64 {
t.Errorf("Event signature length incorrect: %d", len(ev.Sig))
}
})
}
})
}
// TestPolicyWithRelay creates a comprehensive test that simulates relay behavior
func TestPolicyWithRelay(t *testing.T) {
if testing.Short() {
t.Skip("skipping integration test")
}
// Generate keys
allowedSigner := &p256k.Signer{}
if err := allowedSigner.Generate(); chk.E(err) {
t.Fatalf("Failed to generate allowed signer: %v", err)
}
allowedPubkeyHex := hex.Enc(allowedSigner.Pub())
unauthorizedSigner := &p256k.Signer{}
if err := unauthorizedSigner.Generate(); chk.E(err) {
t.Fatalf("Failed to generate unauthorized signer: %v", err)
}
// Create policy JSON
policyJSON := map[string]interface{}{
"kind": map[string]interface{}{
"whitelist": []int{4678, 10306, 30520, 30919},
},
"rules": map[string]interface{}{
"10306": map[string]interface{}{
"description": "End user whitelist changes",
"read_allow": []string{allowedPubkeyHex},
"privileged": true,
},
"30520": map[string]interface{}{
"description": "Zenotp events",
"write_allow": []string{allowedPubkeyHex},
"privileged": true,
},
"30919": map[string]interface{}{
"description": "Zenotp events",
"write_allow": []string{allowedPubkeyHex},
"privileged": true,
},
},
}
policyJSONBytes, err := json.Marshal(policyJSON)
if err != nil {
t.Fatalf("Failed to marshal policy JSON: %v", err)
}
policy, err := New(policyJSONBytes)
if err != nil {
t.Fatalf("Failed to create policy: %v", err)
}
// Create test event (kind 30520) with allowed pubkey
testEvent := event.New()
testEvent.CreatedAt = time.Now().Unix()
testEvent.Kind = kind.K{K: 30520}.K
testEvent.Content = []byte("test content")
testEvent.Tags = tag.NewS()
addPTag(testEvent, allowedSigner.Pub())
if err := testEvent.Sign(allowedSigner); chk.E(err) {
t.Fatalf("Failed to sign test event: %v", err)
}
// Test scenarios
scenarios := []struct {
name string
loggedInPubkey []byte
expectedResult bool
description string
}{
{
name: "authenticated as allowed pubkey",
loggedInPubkey: allowedSigner.Pub(),
expectedResult: true,
description: "Should allow when authenticated as allowed pubkey",
},
{
name: "unauthenticated",
loggedInPubkey: nil,
expectedResult: false,
description: "Should deny when not authenticated (privileged check)",
},
{
name: "authenticated as different pubkey",
loggedInPubkey: unauthorizedSigner.Pub(),
expectedResult: false,
description: "Should deny when authenticated as different pubkey",
},
}
for _, scenario := range scenarios {
t.Run(scenario.name, func(t *testing.T) {
allowed, err := policy.CheckPolicy("write", testEvent, scenario.loggedInPubkey, "127.0.0.1")
if err != nil {
t.Errorf("Unexpected error: %v", err)
return
}
if allowed != scenario.expectedResult {
t.Errorf("%s: Expected allowed=%v, got %v", scenario.description, scenario.expectedResult, allowed)
}
})
}
// Test read access for kind 10306
readEvent := event.New()
readEvent.CreatedAt = time.Now().Unix()
readEvent.Kind = kind.K{K: 10306}.K
readEvent.Content = []byte("test read event")
readEvent.Tags = tag.NewS()
addPTag(readEvent, allowedSigner.Pub())
if err := readEvent.Sign(allowedSigner); chk.E(err) {
t.Fatalf("Failed to sign read event: %v", err)
}
readScenarios := []struct {
name string
loggedInPubkey []byte
expectedResult bool
description string
}{
{
name: "read authenticated as allowed pubkey",
loggedInPubkey: allowedSigner.Pub(),
expectedResult: true,
description: "Should allow read when authenticated as allowed pubkey",
},
{
name: "read unauthenticated",
loggedInPubkey: nil,
expectedResult: false,
description: "Should deny read when not authenticated (privileged check)",
},
{
name: "read authenticated as different pubkey",
loggedInPubkey: unauthorizedSigner.Pub(),
expectedResult: false,
description: "Should deny read when authenticated as different pubkey",
},
}
for _, scenario := range readScenarios {
t.Run(scenario.name, func(t *testing.T) {
allowed, err := policy.CheckPolicy("read", readEvent, scenario.loggedInPubkey, "127.0.0.1")
if err != nil {
t.Errorf("Unexpected error: %v", err)
return
}
if allowed != scenario.expectedResult {
t.Errorf("%s: Expected allowed=%v, got %v", scenario.description, scenario.expectedResult, allowed)
}
})
}
}
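
The two test functions above exercise the same small surface: build a policy from JSON with New, then gate reads and writes with CheckPolicy. The sketch below condenses that call sequence for reference; it is illustrative only and not part of the diff. It is written as if it lived in the same package, so it reuses the file's imports and the addPTag helper, and the kind number 30520 is simply the one the tests use.

func checkPolicySketch() {
    signer := &p256k.Signer{}
    if err := signer.Generate(); chk.E(err) {
        return
    }
    // Whitelist one kind and restrict writes to the signer's pubkey.
    policyJSON, _ := json.Marshal(map[string]interface{}{
        "kind": map[string]interface{}{"whitelist": []int{30520}},
        "rules": map[string]interface{}{
            "30520": map[string]interface{}{
                "write_allow": []string{hex.Enc(signer.Pub())},
                "privileged":  true,
            },
        },
    })
    p, err := New(policyJSON)
    if chk.E(err) {
        return
    }
    ev := event.New()
    ev.CreatedAt = time.Now().Unix()
    ev.Kind = kind.K{K: 30520}.K
    ev.Content = []byte("sketch")
    ev.Tags = tag.NewS()
    addPTag(ev, signer.Pub())
    if err = ev.Sign(signer); chk.E(err) {
        return
    }
    allowed, _ := p.CheckPolicy("write", ev, signer.Pub(), "127.0.0.1")
    denied, _ := p.CheckPolicy("write", ev, nil, "127.0.0.1")
    _ = allowed // true: the author is in write_allow
    _ = denied  // false: the rule is privileged, so unauthenticated writes are rejected
}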

File diff suppressed because it is too large

200
pkg/run/run.go Normal file

@@ -0,0 +1,200 @@
package run
import (
"bytes"
"context"
"io"
"os"
"path/filepath"
"strings"
"sync"
"github.com/adrg/xdg"
"lol.mleku.dev/chk"
lol "lol.mleku.dev"
"next.orly.dev/app"
"next.orly.dev/app/config"
"next.orly.dev/pkg/acl"
"next.orly.dev/pkg/database"
)
// Options configures relay startup behavior.
type Options struct {
// CleanupDataDir controls whether the data directory is deleted on Stop().
// Defaults to true. Set to false to preserve the data directory.
CleanupDataDir *bool
// StdoutWriter is an optional writer to receive stdout logs.
// If nil, stdout will be captured to a buffer accessible via Relay.Stdout().
StdoutWriter io.Writer
// StderrWriter is an optional writer to receive stderr logs.
// If nil, stderr will be captured to a buffer accessible via Relay.Stderr().
StderrWriter io.Writer
}
// Relay represents a running relay instance that can be started and stopped.
type Relay struct {
ctx context.Context
cancel context.CancelFunc
db *database.D
quit chan struct{}
dataDir string
cleanupDataDir bool
// Log capture
stdoutBuf *bytes.Buffer
stderrBuf *bytes.Buffer
stdoutWriter io.Writer
stderrWriter io.Writer
logMu sync.RWMutex
}
// Start initializes and starts a relay with the given configuration.
// It bypasses the configuration loading step and uses the provided config directly.
//
// Parameters:
// - cfg: The configuration to use for the relay
// - opts: Optional configuration for relay behavior. If nil, defaults are used.
//
// Returns:
// - relay: A Relay instance that can be used to stop the relay
// - err: An error if initialization or startup fails
func Start(cfg *config.C, opts *Options) (relay *Relay, err error) {
relay = &Relay{
cleanupDataDir: true,
}
// Apply options
var userStdoutWriter, userStderrWriter io.Writer
if opts != nil {
if opts.CleanupDataDir != nil {
relay.cleanupDataDir = *opts.CleanupDataDir
}
userStdoutWriter = opts.StdoutWriter
userStderrWriter = opts.StderrWriter
}
// Set up log capture buffers
relay.stdoutBuf = &bytes.Buffer{}
relay.stderrBuf = &bytes.Buffer{}
// Build writers list for stdout
stdoutWriters := []io.Writer{relay.stdoutBuf}
if userStdoutWriter != nil {
stdoutWriters = append(stdoutWriters, userStdoutWriter)
}
stdoutWriters = append(stdoutWriters, os.Stdout)
relay.stdoutWriter = io.MultiWriter(stdoutWriters...)
// Build writers list for stderr
stderrWriters := []io.Writer{relay.stderrBuf}
if userStderrWriter != nil {
stderrWriters = append(stderrWriters, userStderrWriter)
}
stderrWriters = append(stderrWriters, os.Stderr)
relay.stderrWriter = io.MultiWriter(stderrWriters...)
// Set up logging - write to appropriate destination and capture
if cfg.LogToStdout {
lol.Writer = relay.stdoutWriter
} else {
lol.Writer = relay.stderrWriter
}
lol.SetLogLevel(cfg.LogLevel)
// Expand DataDir if needed
if cfg.DataDir == "" || strings.Contains(cfg.DataDir, "~") {
cfg.DataDir = filepath.Join(xdg.DataHome, cfg.AppName)
}
relay.dataDir = cfg.DataDir
// Create context
relay.ctx, relay.cancel = context.WithCancel(context.Background())
// Initialize database
if relay.db, err = database.New(
relay.ctx, relay.cancel, cfg.DataDir, cfg.DBLogLevel,
); chk.E(err) {
return
}
// Configure ACL
acl.Registry.Active.Store(cfg.ACLMode)
if err = acl.Registry.Configure(cfg, relay.db, relay.ctx); chk.E(err) {
return
}
acl.Registry.Syncer()
// Start the relay
relay.quit = app.Run(relay.ctx, cfg, relay.db)
return
}
// Stop gracefully stops the relay by canceling the context and closing the database.
// If CleanupDataDir is enabled (default), it also removes the data directory.
//
// Returns:
// - err: An error if shutdown fails
func (r *Relay) Stop() (err error) {
if r.cancel != nil {
r.cancel()
}
if r.quit != nil {
<-r.quit
}
if r.db != nil {
err = r.db.Close()
}
// Clean up data directory if enabled
if r.cleanupDataDir && r.dataDir != "" {
if rmErr := os.RemoveAll(r.dataDir); rmErr != nil {
if err == nil {
err = rmErr
}
}
}
return
}
// Stdout returns the complete stdout log buffer contents.
func (r *Relay) Stdout() string {
r.logMu.RLock()
defer r.logMu.RUnlock()
if r.stdoutBuf == nil {
return ""
}
return r.stdoutBuf.String()
}
// Stderr returns the complete stderr log buffer contents.
func (r *Relay) Stderr() string {
r.logMu.RLock()
defer r.logMu.RUnlock()
if r.stderrBuf == nil {
return ""
}
return r.stderrBuf.String()
}
// StdoutBytes returns the complete stdout log buffer as bytes.
func (r *Relay) StdoutBytes() []byte {
r.logMu.RLock()
defer r.logMu.RUnlock()
if r.stdoutBuf == nil {
return nil
}
return r.stdoutBuf.Bytes()
}
// StderrBytes returns the complete stderr log buffer as bytes.
func (r *Relay) StderrBytes() []byte {
r.logMu.RLock()
defer r.logMu.RUnlock()
if r.stderrBuf == nil {
return nil
}
return r.stderrBuf.Bytes()
}
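
Start, Stop, and the Options writers form the whole public surface of this package. A minimal caller might look like the sketch below; it is illustrative only, the port and app name are placeholders, and it assumes the config.C fields used by relay_test.go further down.

package main

import (
    "log"

    "next.orly.dev/app/config"
    "next.orly.dev/pkg/run"
)

func main() {
    cfg := &config.C{
        AppName:    "ORLY-EXAMPLE", // placeholder name
        Listen:     "127.0.0.1",
        Port:       3334, // placeholder port
        LogLevel:   "info",
        DBLogLevel: "warn",
        ACLMode:    "none",
    }
    cleanup := true // remove the data directory on Stop()
    relay, err := run.Start(cfg, &run.Options{CleanupDataDir: &cleanup})
    if err != nil {
        log.Fatalf("start relay: %v", err)
    }
    defer func() {
        if err := relay.Stop(); err != nil {
            log.Printf("stop relay: %v", err)
        }
    }()
    // Captured logs are available at any point after startup.
    log.Printf("stderr so far:\n%s", relay.Stderr())
}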

View File

@@ -1 +1 @@
v0.20.0
v0.21.4

326
relay-tester/client.go Normal file

@@ -0,0 +1,326 @@
package relaytester
import (
"context"
"encoding/json"
"sync"
"time"
"github.com/gorilla/websocket"
"lol.mleku.dev/errorf"
"next.orly.dev/pkg/encoders/event"
"next.orly.dev/pkg/encoders/hex"
)
// Client wraps a WebSocket connection to a relay for testing.
type Client struct {
conn *websocket.Conn
url string
mu sync.Mutex
subs map[string]chan []byte
complete map[string]bool // Subscriptions that query by ID; their channels are closed after EOSE
okCh chan []byte // Channel for OK messages
countCh chan []byte // Channel for COUNT messages
ctx context.Context
cancel context.CancelFunc
}
// NewClient creates a new test client connected to the relay.
func NewClient(url string) (c *Client, err error) {
ctx, cancel := context.WithCancel(context.Background())
var conn *websocket.Conn
dialer := websocket.Dialer{
HandshakeTimeout: 5 * time.Second,
}
if conn, _, err = dialer.Dial(url, nil); err != nil {
cancel()
return
}
c = &Client{
conn: conn,
url: url,
subs: make(map[string]chan []byte),
complete: make(map[string]bool),
okCh: make(chan []byte, 100),
countCh: make(chan []byte, 100),
ctx: ctx,
cancel: cancel,
}
go c.readLoop()
return
}
// Close closes the client connection.
func (c *Client) Close() error {
c.cancel()
return c.conn.Close()
}
// URL returns the relay URL.
func (c *Client) URL() string {
return c.url
}
// Send sends a JSON message to the relay.
func (c *Client) Send(msg interface{}) (err error) {
c.mu.Lock()
defer c.mu.Unlock()
var data []byte
if data, err = json.Marshal(msg); err != nil {
return errorf.E("failed to marshal message: %w", err)
}
if err = c.conn.WriteMessage(websocket.TextMessage, data); err != nil {
return errorf.E("failed to write message: %w", err)
}
return
}
// readLoop reads messages from the relay and routes them to subscriptions.
func (c *Client) readLoop() {
defer c.conn.Close()
for {
select {
case <-c.ctx.Done():
return
default:
}
_, msg, err := c.conn.ReadMessage()
if err != nil {
return
}
var raw []interface{}
if err = json.Unmarshal(msg, &raw); err != nil {
continue
}
if len(raw) < 2 {
continue
}
typ, ok := raw[0].(string)
if !ok {
continue
}
c.mu.Lock()
switch typ {
case "EVENT":
if len(raw) >= 2 {
if subID, ok := raw[1].(string); ok {
if ch, exists := c.subs[subID]; exists {
select {
case ch <- msg:
default:
}
}
}
}
case "EOSE":
if len(raw) >= 2 {
if subID, ok := raw[1].(string); ok {
if ch, exists := c.subs[subID]; exists {
// Send EOSE message to channel
select {
case ch <- msg:
default:
}
// For complete subscriptions (by ID), close the channel after EOSE
if c.complete[subID] {
close(ch)
delete(c.subs, subID)
delete(c.complete, subID)
}
}
}
}
case "OK":
// Route OK messages to okCh for WaitForOK
select {
case c.okCh <- msg:
default:
}
case "COUNT":
// Route COUNT messages to countCh for Count
select {
case c.countCh <- msg:
default:
}
case "NOTICE":
// NOTICE messages are ignored by the test client
case "CLOSED":
// Closed messages indicate subscription ended
case "AUTH":
// Auth challenge messages
}
c.mu.Unlock()
}
}
// Subscribe creates a subscription and returns a channel for events.
func (c *Client) Subscribe(subID string, filters []interface{}) (ch chan []byte, err error) {
req := []interface{}{"REQ", subID}
req = append(req, filters...)
if err = c.Send(req); err != nil {
return
}
c.mu.Lock()
ch = make(chan []byte, 100)
c.subs[subID] = ch
// Check if subscription is complete (has 'ids' filter)
isComplete := false
for _, f := range filters {
if fMap, ok := f.(map[string]interface{}); ok {
if ids, exists := fMap["ids"]; exists {
if idList, ok := ids.([]string); ok && len(idList) > 0 {
isComplete = true
break
}
}
}
}
c.complete[subID] = isComplete
c.mu.Unlock()
return
}
// Unsubscribe closes a subscription.
func (c *Client) Unsubscribe(subID string) error {
c.mu.Lock()
if ch, exists := c.subs[subID]; exists {
// Channel might already be closed by EOSE, so use recover to handle gracefully
func() {
defer func() {
if recover() != nil {
// Channel was already closed, ignore
}
}()
close(ch)
}()
delete(c.subs, subID)
delete(c.complete, subID)
}
c.mu.Unlock()
return c.Send([]interface{}{"CLOSE", subID})
}
// Publish sends an EVENT message to the relay.
func (c *Client) Publish(ev *event.E) (err error) {
evJSON := ev.Serialize()
var evMap map[string]interface{}
if err = json.Unmarshal(evJSON, &evMap); err != nil {
return errorf.E("failed to unmarshal event: %w", err)
}
return c.Send([]interface{}{"EVENT", evMap})
}
// WaitForOK waits for an OK response for the given event ID.
func (c *Client) WaitForOK(eventID []byte, timeout time.Duration) (accepted bool, reason string, err error) {
ctx, cancel := context.WithTimeout(c.ctx, timeout)
defer cancel()
idStr := hex.Enc(eventID)
for {
select {
case <-ctx.Done():
return false, "", errorf.E("timeout waiting for OK response")
case msg := <-c.okCh:
var raw []interface{}
if err = json.Unmarshal(msg, &raw); err != nil {
continue
}
if len(raw) < 3 {
continue
}
if id, ok := raw[1].(string); ok && id == idStr {
accepted, _ = raw[2].(bool)
if len(raw) > 3 {
reason, _ = raw[3].(string)
}
return
}
}
}
}
// Count sends a COUNT request and returns the count.
func (c *Client) Count(filters []interface{}) (count int64, err error) {
req := []interface{}{"COUNT", "count-sub"}
req = append(req, filters...)
if err = c.Send(req); err != nil {
return
}
ctx, cancel := context.WithTimeout(c.ctx, 5*time.Second)
defer cancel()
for {
select {
case <-ctx.Done():
return 0, errorf.E("timeout waiting for COUNT response")
case msg := <-c.countCh:
var raw []interface{}
if err = json.Unmarshal(msg, &raw); err != nil {
continue
}
if len(raw) >= 3 {
if subID, ok := raw[1].(string); ok && subID == "count-sub" {
// COUNT response format: ["COUNT", "subscription-id", count, approximate?]
if cnt, ok := raw[2].(float64); ok {
return int64(cnt), nil
}
}
}
}
}
}
// Auth sends an AUTH message with the signed event.
func (c *Client) Auth(ev *event.E) error {
evJSON := ev.Serialize()
var evMap map[string]interface{}
if err := json.Unmarshal(evJSON, &evMap); err != nil {
return errorf.E("failed to unmarshal event: %w", err)
}
return c.Send([]interface{}{"AUTH", evMap})
}
// GetEvents collects all events from a subscription until EOSE.
func (c *Client) GetEvents(subID string, filters []interface{}, timeout time.Duration) (events []*event.E, err error) {
ch, err := c.Subscribe(subID, filters)
if err != nil {
return
}
defer c.Unsubscribe(subID)
ctx, cancel := context.WithTimeout(c.ctx, timeout)
defer cancel()
for {
select {
case <-ctx.Done():
return events, nil
case msg, ok := <-ch:
if !ok {
return events, nil
}
var raw []interface{}
if err = json.Unmarshal(msg, &raw); err != nil {
continue
}
if len(raw) < 2 {
continue
}
typ, ok := raw[0].(string)
if !ok {
continue
}
switch typ {
case "EVENT":
if len(raw) >= 3 {
if evData, ok := raw[2].(map[string]interface{}); ok {
evJSON, _ := json.Marshal(evData)
ev := event.New()
if _, err = ev.Unmarshal(evJSON); err == nil {
events = append(events, ev)
}
}
}
case "EOSE":
// End of stored events - return what we have
return events, nil
}
}
}
}
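
A short, illustrative round trip with this client: connect, publish a signed event, wait for the OK, and fetch it back by ID so the subscription closes itself after EOSE. The relay URL is a placeholder, and the key and event helpers come from keys.go below.

package main

import (
    "fmt"
    "time"

    relaytester "next.orly.dev/relay-tester"
)

func main() {
    client, err := relaytester.NewClient("ws://127.0.0.1:3334") // placeholder URL
    if err != nil {
        panic(err)
    }
    defer client.Close()

    kp, err := relaytester.GenerateKeyPair()
    if err != nil {
        panic(err)
    }
    ev, err := relaytester.CreateEvent(kp.Secret, 1, "hello relay", nil)
    if err != nil {
        panic(err)
    }
    if err = client.Publish(ev); err != nil {
        panic(err)
    }
    accepted, reason, err := client.WaitForOK(ev.ID, 5*time.Second)
    if err != nil {
        panic(err)
    }
    fmt.Println("accepted:", accepted, "reason:", reason)

    // An "ids" filter marks the subscription as complete, so GetEvents
    // returns as soon as the relay sends EOSE.
    events, err := client.GetEvents("example-sub", []interface{}{
        map[string]interface{}{"ids": []string{relaytester.HexID(ev)}},
    }, 5*time.Second)
    if err != nil {
        panic(err)
    }
    fmt.Println("fetched", len(events), "event(s)")
}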

131
relay-tester/keys.go Normal file

@@ -0,0 +1,131 @@
package relaytester
import (
"crypto/rand"
"fmt"
"time"
"lol.mleku.dev/chk"
"next.orly.dev/pkg/crypto/p256k"
"next.orly.dev/pkg/encoders/bech32encoding"
"next.orly.dev/pkg/encoders/event"
"next.orly.dev/pkg/encoders/hex"
"next.orly.dev/pkg/encoders/kind"
"next.orly.dev/pkg/encoders/tag"
)
// KeyPair represents a test keypair.
type KeyPair struct {
Secret *p256k.Signer
Pubkey []byte
Nsec string
Npub string
}
// GenerateKeyPair generates a new keypair for testing.
func GenerateKeyPair() (kp *KeyPair, err error) {
kp = &KeyPair{}
kp.Secret = &p256k.Signer{}
if err = kp.Secret.Generate(); chk.E(err) {
return
}
kp.Pubkey = kp.Secret.Pub()
nsecBytes, err := bech32encoding.BinToNsec(kp.Secret.Sec())
if chk.E(err) {
return
}
kp.Nsec = string(nsecBytes)
npubBytes, err := bech32encoding.BinToNpub(kp.Pubkey)
if chk.E(err) {
return
}
kp.Npub = string(npubBytes)
return
}
// CreateEvent creates a signed event with the given parameters.
func CreateEvent(signer *p256k.Signer, kindNum uint16, content string, tags *tag.S) (ev *event.E, err error) {
ev = event.New()
ev.CreatedAt = time.Now().Unix()
ev.Kind = kindNum
ev.Content = []byte(content)
if tags != nil {
ev.Tags = tags
} else {
ev.Tags = tag.NewS()
}
if err = ev.Sign(signer); chk.E(err) {
return
}
return
}
// CreateEventWithTags creates an event with specific tags.
func CreateEventWithTags(signer *p256k.Signer, kindNum uint16, content string, tagPairs [][]string) (ev *event.E, err error) {
tags := tag.NewS()
for _, pair := range tagPairs {
if len(pair) >= 2 {
// Build tag fields as []byte variadic arguments
tagFields := make([][]byte, len(pair))
tagFields[0] = []byte(pair[0])
for i := 1; i < len(pair); i++ {
tagFields[i] = []byte(pair[i])
}
tags.Append(tag.NewFromBytesSlice(tagFields...))
}
}
return CreateEvent(signer, kindNum, content, tags)
}
// CreateReplaceableEvent creates a replaceable event (kinds 0, 3, 10000-19999).
func CreateReplaceableEvent(signer *p256k.Signer, kindNum uint16, content string) (ev *event.E, err error) {
return CreateEvent(signer, kindNum, content, nil)
}
// CreateEphemeralEvent creates an ephemeral event (kind 20000-29999).
func CreateEphemeralEvent(signer *p256k.Signer, kindNum uint16, content string) (ev *event.E, err error) {
return CreateEvent(signer, kindNum, content, nil)
}
// CreateDeleteEvent creates a deletion event (kind 5).
func CreateDeleteEvent(signer *p256k.Signer, eventIDs [][]byte, reason string) (ev *event.E, err error) {
tags := tag.NewS()
for _, id := range eventIDs {
// e tags must contain hex-encoded event IDs
tags.Append(tag.NewFromBytesSlice([]byte("e"), []byte(hex.Enc(id))))
}
if reason != "" {
tags.Append(tag.NewFromBytesSlice([]byte("content"), []byte(reason)))
}
return CreateEvent(signer, kind.EventDeletion.K, reason, tags)
}
// CreateParameterizedReplaceableEvent creates a parameterized replaceable event (kind 30000-39999).
func CreateParameterizedReplaceableEvent(signer *p256k.Signer, kindNum uint16, content string, dTag string) (ev *event.E, err error) {
tags := tag.NewS()
tags.Append(tag.NewFromBytesSlice([]byte("d"), []byte(dTag)))
return CreateEvent(signer, kindNum, content, tags)
}
// RandomID generates a random 32-byte ID.
func RandomID() (id []byte, err error) {
id = make([]byte, 32)
if _, err = rand.Read(id); err != nil {
return nil, fmt.Errorf("failed to generate random ID: %w", err)
}
return
}
// MustHex decodes a hex string or panics.
func MustHex(s string) []byte {
b, err := hex.Dec(s)
if err != nil {
panic(fmt.Sprintf("invalid hex: %s", s))
}
return b
}
// HexID returns the hex-encoded event ID.
func HexID(ev *event.E) string {
return hex.Enc(ev.ID)
}
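
A brief, illustrative use of the helpers above; the kind numbers and tag values are arbitrary examples, and the hex encoder is the same package this file already imports.

package main

import (
    "fmt"

    "next.orly.dev/pkg/encoders/hex"
    relaytester "next.orly.dev/relay-tester"
)

func main() {
    kp, err := relaytester.GenerateKeyPair()
    if err != nil {
        panic(err)
    }
    // A kind-1 note carrying a "t" tag and a "p" tag with the author's hex pubkey.
    note, err := relaytester.CreateEventWithTags(kp.Secret, 1, "tagged note", [][]string{
        {"t", "testing"},
        {"p", hex.Enc(kp.Pubkey)},
    })
    if err != nil {
        panic(err)
    }
    // An addressable event in the 30000-39999 range, identified by its d tag.
    addr, err := relaytester.CreateParameterizedReplaceableEvent(kp.Secret, 30000, "profile data", "example-d-tag")
    if err != nil {
        panic(err)
    }
    fmt.Println("note id:", relaytester.HexID(note))
    fmt.Println("addressable id:", relaytester.HexID(addr))
}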

449
relay-tester/test.go Normal file

@@ -0,0 +1,449 @@
package relaytester
import (
"encoding/json"
"time"
"lol.mleku.dev/errorf"
)
// TestResult represents the result of a test.
type TestResult struct {
Name string `json:"test"`
Pass bool `json:"pass"`
Required bool `json:"required"`
Info string `json:"info,omitempty"`
}
// TestFunc is a function that runs a test case.
type TestFunc func(client *Client, key1, key2 *KeyPair) (result TestResult)
// TestCase represents a test case with dependencies.
type TestCase struct {
Name string
Required bool
Func TestFunc
Dependencies []string // Names of tests that must run before this one
}
// TestSuite runs all tests against a relay.
type TestSuite struct {
relayURL string
key1 *KeyPair
key2 *KeyPair
tests map[string]*TestCase
results map[string]TestResult
order []string
}
// NewTestSuite creates a new test suite.
func NewTestSuite(relayURL string) (suite *TestSuite, err error) {
suite = &TestSuite{
relayURL: relayURL,
tests: make(map[string]*TestCase),
results: make(map[string]TestResult),
}
if suite.key1, err = GenerateKeyPair(); err != nil {
return
}
if suite.key2, err = GenerateKeyPair(); err != nil {
return
}
suite.registerTests()
return
}
// AddTest adds a test case to the suite.
func (s *TestSuite) AddTest(tc *TestCase) {
s.tests[tc.Name] = tc
}
// registerTests registers all test cases.
func (s *TestSuite) registerTests() {
allTests := []*TestCase{
{
Name: "Publishes basic event",
Required: true,
Func: testPublishBasicEvent,
},
{
Name: "Finds event by ID",
Required: true,
Func: testFindByID,
Dependencies: []string{"Publishes basic event"},
},
{
Name: "Finds event by author",
Required: true,
Func: testFindByAuthor,
Dependencies: []string{"Publishes basic event"},
},
{
Name: "Finds event by kind",
Required: true,
Func: testFindByKind,
Dependencies: []string{"Publishes basic event"},
},
{
Name: "Finds event by tags",
Required: true,
Func: testFindByTags,
Dependencies: []string{"Publishes basic event"},
},
{
Name: "Finds by multiple tags",
Required: true,
Func: testFindByMultipleTags,
Dependencies: []string{"Publishes basic event"},
},
{
Name: "Finds by time range",
Required: true,
Func: testFindByTimeRange,
Dependencies: []string{"Publishes basic event"},
},
{
Name: "Rejects invalid signature",
Required: true,
Func: testRejectInvalidSignature,
},
{
Name: "Rejects future event",
Required: true,
Func: testRejectFutureEvent,
},
{
Name: "Rejects expired event",
Required: false,
Func: testRejectExpiredEvent,
},
{
Name: "Handles replaceable events",
Required: true,
Func: testReplaceableEvents,
},
{
Name: "Handles ephemeral events",
Required: false,
Func: testEphemeralEvents,
},
{
Name: "Handles parameterized replaceable events",
Required: true,
Func: testParameterizedReplaceableEvents,
},
{
Name: "Handles deletion events",
Required: true,
Func: testDeletionEvents,
Dependencies: []string{"Publishes basic event"},
},
{
Name: "Handles COUNT request",
Required: true,
Func: testCountRequest,
Dependencies: []string{"Publishes basic event"},
},
{
Name: "Handles limit parameter",
Required: true,
Func: testLimitParameter,
Dependencies: []string{"Publishes basic event"},
},
{
Name: "Handles multiple filters",
Required: true,
Func: testMultipleFilters,
Dependencies: []string{"Publishes basic event"},
},
{
Name: "Handles subscription close",
Required: true,
Func: testSubscriptionClose,
},
// Filter tests
{
Name: "Since and until filters are inclusive",
Required: true,
Func: testSinceUntilAreInclusive,
Dependencies: []string{"Publishes basic event"},
},
{
Name: "Limit zero works",
Required: true,
Func: testLimitZero,
},
// Find tests
{
Name: "Events are ordered from newest to oldest",
Required: true,
Func: testEventsOrderedFromNewestToOldest,
Dependencies: []string{"Publishes basic event"},
},
{
Name: "Newest events are returned when filter is limited",
Required: true,
Func: testNewestEventsWhenLimited,
Dependencies: []string{"Publishes basic event"},
},
{
Name: "Finds by pubkey and kind",
Required: true,
Func: testFindByPubkeyAndKind,
Dependencies: []string{"Publishes basic event"},
},
{
Name: "Finds by pubkey and tags",
Required: true,
Func: testFindByPubkeyAndTags,
Dependencies: []string{"Publishes basic event"},
},
{
Name: "Finds by kind and tags",
Required: true,
Func: testFindByKindAndTags,
Dependencies: []string{"Publishes basic event"},
},
{
Name: "Finds by scrape",
Required: true,
Func: testFindByScrape,
Dependencies: []string{"Publishes basic event"},
},
// Replaceable event tests
{
Name: "Replaces metadata",
Required: true,
Func: testReplacesMetadata,
Dependencies: []string{"Publishes basic event"},
},
{
Name: "Replaces contact list",
Required: true,
Func: testReplacesContactList,
Dependencies: []string{"Publishes basic event"},
},
{
Name: "Replaced events are still available by ID",
Required: false,
Func: testReplacedEventsStillAvailableByID,
Dependencies: []string{"Publishes basic event"},
},
{
Name: "Replaceable events replace older ones",
Required: true,
Func: testReplaceableEventRemovesPrevious,
Dependencies: []string{"Publishes basic event"},
},
{
Name: "Replaceable events rejected if a newer one exists",
Required: true,
Func: testReplaceableEventRejectedIfFuture,
Dependencies: []string{"Publishes basic event"},
},
{
Name: "Addressable events replace older ones",
Required: true,
Func: testAddressableEventRemovesPrevious,
Dependencies: []string{"Publishes basic event"},
},
{
Name: "Addressable events rejected if a newer one exists",
Required: true,
Func: testAddressableEventRejectedIfFuture,
Dependencies: []string{"Publishes basic event"},
},
// Deletion tests
{
Name: "Deletes by a-tag address",
Required: true,
Func: testDeleteByAddr,
Dependencies: []string{"Publishes basic event"},
},
{
Name: "Delete by a-tag deletes older but not newer",
Required: true,
Func: testDeleteByAddrOnlyDeletesOlder,
Dependencies: []string{"Publishes basic event"},
},
{
Name: "Delete by a-tag is bound by a-tag",
Required: true,
Func: testDeleteByAddrIsBoundByTag,
Dependencies: []string{"Publishes basic event"},
},
// Ephemeral tests
{
Name: "Ephemeral subscriptions work",
Required: false,
Func: testEphemeralSubscriptionsWork,
Dependencies: []string{"Publishes basic event"},
},
{
Name: "Persists ephemeral events",
Required: false,
Func: testPersistsEphemeralEvents,
Dependencies: []string{"Publishes basic event"},
},
// EOSE tests
{
Name: "Supports EOSE",
Required: true,
Func: testSupportsEose,
},
{
Name: "Subscription receives event after ping period",
Required: true,
Func: testSubscriptionReceivesEventAfterPingPeriod,
},
{
Name: "Closes complete subscriptions after EOSE",
Required: false,
Func: testClosesCompleteSubscriptionsAfterEose,
},
{
Name: "Keeps open incomplete subscriptions after EOSE",
Required: true,
Func: testKeepsOpenIncompleteSubscriptionsAfterEose,
},
// JSON tests
{
Name: "Accepts events with empty tags",
Required: false,
Func: testAcceptsEventsWithEmptyTags,
Dependencies: []string{"Publishes basic event"},
},
{
Name: "Accepts NIP-01 JSON escape sequences",
Required: true,
Func: testAcceptsNip1JsonEscapeSequences,
Dependencies: []string{"Publishes basic event"},
},
// Registration tests
{
Name: "Sends OK after EVENT",
Required: true,
Func: testSendsOkAfterEvent,
},
{
Name: "Verifies event signatures",
Required: true,
Func: testVerifiesSignatures,
},
{
Name: "Verifies event ID hashes",
Required: true,
Func: testVerifiesIdHashes,
},
}
for _, tc := range allTests {
s.AddTest(tc)
}
s.topologicalSort()
}
// topologicalSort orders tests based on dependencies.
func (s *TestSuite) topologicalSort() {
visited := make(map[string]bool)
temp := make(map[string]bool)
var visit func(name string)
visit = func(name string) {
if temp[name] {
return
}
if visited[name] {
return
}
temp[name] = true
if tc, exists := s.tests[name]; exists {
for _, dep := range tc.Dependencies {
visit(dep)
}
}
temp[name] = false
visited[name] = true
s.order = append(s.order, name)
}
for name := range s.tests {
if !visited[name] {
visit(name)
}
}
}
// Run runs all tests in the suite.
func (s *TestSuite) Run() (results []TestResult, err error) {
client, err := NewClient(s.relayURL)
if err != nil {
return nil, errorf.E("failed to connect to relay: %w", err)
}
defer client.Close()
for _, name := range s.order {
tc := s.tests[name]
if tc == nil {
continue
}
result := tc.Func(client, s.key1, s.key2)
result.Name = name
result.Required = tc.Required
s.results[name] = result
results = append(results, result)
time.Sleep(100 * time.Millisecond) // Small delay between tests
}
return
}
// RunTest runs a specific test by name.
func (s *TestSuite) RunTest(testName string) (result TestResult, err error) {
tc, exists := s.tests[testName]
if !exists {
return result, errorf.E("test %s not found", testName)
}
// Check dependencies
for _, dep := range tc.Dependencies {
if _, exists := s.results[dep]; !exists {
return result, errorf.E("test %s depends on %s which has not been run", testName, dep)
}
if !s.results[dep].Pass {
return result, errorf.E("test %s depends on %s which failed", testName, dep)
}
}
client, err := NewClient(s.relayURL)
if err != nil {
return result, errorf.E("failed to connect to relay: %w", err)
}
defer client.Close()
result = tc.Func(client, s.key1, s.key2)
result.Name = testName
result.Required = tc.Required
s.results[testName] = result
return
}
// GetResults returns all test results.
func (s *TestSuite) GetResults() map[string]TestResult {
return s.results
}
// ListTests returns a list of all test names in execution order.
func (s *TestSuite) ListTests() []string {
return s.order
}
// GetTestNames returns all registered test names as a map (name -> required).
func (s *TestSuite) GetTestNames() map[string]bool {
result := make(map[string]bool)
for name, tc := range s.tests {
result[name] = tc.Required
}
return result
}
// FormatJSON formats results as JSON.
func FormatJSON(results []TestResult) (output string, err error) {
var data []byte
if data, err = json.Marshal(results); err != nil {
return
}
return string(data), nil
}
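
For completeness, an illustrative driver for the suite outside of go test: construct it against a relay URL (a placeholder here), run everything in dependency order, and print the JSON report. A single named test can also be run once its dependencies have passed.

package main

import (
    "fmt"

    relaytester "next.orly.dev/relay-tester"
)

func main() {
    suite, err := relaytester.NewTestSuite("ws://127.0.0.1:3334") // placeholder URL
    if err != nil {
        panic(err)
    }
    results, err := suite.Run()
    if err != nil {
        panic(err)
    }
    out, err := relaytester.FormatJSON(results)
    if err != nil {
        panic(err)
    }
    fmt.Println(out)

    // Or run one test by name, provided its dependencies already passed:
    // result, err := suite.RunTest("Finds event by ID")
}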

1949
relay-tester/tests.go Normal file

File diff suppressed because it is too large

245
relay_test.go Normal file

@@ -0,0 +1,245 @@
package main
import (
"fmt"
"net"
"os"
"path/filepath"
"testing"
"time"
lol "lol.mleku.dev"
"next.orly.dev/app/config"
"next.orly.dev/pkg/run"
relaytester "next.orly.dev/relay-tester"
)
var (
testRelayURL string
testName string
testJSON bool
keepDataDir bool
relayPort int
relayDataDir string
)
func TestRelay(t *testing.T) {
var err error
var relay *run.Relay
var relayURL string
// Determine relay URL
if testRelayURL != "" {
relayURL = testRelayURL
} else {
// Start local relay for testing
var port int
if relay, port, err = startTestRelay(); err != nil {
t.Fatalf("Failed to start test relay: %v", err)
}
defer func() {
if stopErr := relay.Stop(); stopErr != nil {
t.Logf("Error stopping relay: %v", stopErr)
}
}()
relayURL = fmt.Sprintf("ws://127.0.0.1:%d", port)
t.Logf("Waiting for relay to be ready at %s...", relayURL)
// Wait for relay to be ready - try connecting to verify it's up
if err = waitForRelay(relayURL, 10*time.Second); err != nil {
t.Fatalf("Relay not ready after timeout: %v", err)
}
t.Logf("Relay is ready at %s", relayURL)
}
// Create test suite
t.Logf("Creating test suite for %s...", relayURL)
suite, err := relaytester.NewTestSuite(relayURL)
if err != nil {
t.Fatalf("Failed to create test suite: %v", err)
}
t.Logf("Test suite created, running tests...")
// Run tests
var results []relaytester.TestResult
if testName != "" {
// Run specific test
result, err := suite.RunTest(testName)
if err != nil {
t.Fatalf("Failed to run test %s: %v", testName, err)
}
results = []relaytester.TestResult{result}
} else {
// Run all tests
if results, err = suite.Run(); err != nil {
t.Fatalf("Failed to run tests: %v", err)
}
}
// Output results
if testJSON {
jsonOutput, err := relaytester.FormatJSON(results)
if err != nil {
t.Fatalf("Failed to format JSON: %v", err)
}
fmt.Println(jsonOutput)
} else {
outputResults(results, t)
}
// Check if any required tests failed
for _, result := range results {
if result.Required && !result.Pass {
t.Errorf("Required test '%s' failed: %s", result.Name, result.Info)
}
}
}
func startTestRelay() (relay *run.Relay, port int, err error) {
cfg := &config.C{
AppName: "ORLY-TEST",
DataDir: relayDataDir,
Listen: "127.0.0.1",
Port: 0, // Always use random port, unless overridden via -port flag
HealthPort: 0,
EnableShutdown: false,
LogLevel: "warn",
DBLogLevel: "warn",
DBBlockCacheMB: 512,
DBIndexCacheMB: 256,
LogToStdout: false,
PprofHTTP: false,
ACLMode: "none",
AuthRequired: false,
AuthToWrite: false,
SubscriptionEnabled: false,
MonthlyPriceSats: 6000,
FollowListFrequency: time.Hour,
WebDisableEmbedded: false,
SprocketEnabled: false,
SpiderMode: "none",
PolicyEnabled: false,
}
// Use explicitly set port if provided via flag, otherwise find an available port
if relayPort > 0 {
cfg.Port = relayPort
} else {
var listener net.Listener
if listener, err = net.Listen("tcp", "127.0.0.1:0"); err != nil {
return nil, 0, fmt.Errorf("failed to find available port: %w", err)
}
addr := listener.Addr().(*net.TCPAddr)
cfg.Port = addr.Port
listener.Close()
}
// Set default data dir if not specified
if cfg.DataDir == "" {
tmpDir := filepath.Join(os.TempDir(), fmt.Sprintf("orly-test-%d", time.Now().UnixNano()))
cfg.DataDir = tmpDir
}
// Set up logging
lol.SetLogLevel(cfg.LogLevel)
// Create options
cleanup := !keepDataDir
opts := &run.Options{
CleanupDataDir: &cleanup,
}
// Start relay
if relay, err = run.Start(cfg, opts); err != nil {
return nil, 0, fmt.Errorf("failed to start relay: %w", err)
}
return relay, cfg.Port, nil
}
// waitForRelay waits for the relay to be ready by attempting to connect
func waitForRelay(url string, timeout time.Duration) error {
// Extract host:port from ws:// URL
addr := url
if len(url) > 7 && url[:5] == "ws://" {
addr = url[5:]
}
deadline := time.Now().Add(timeout)
attempts := 0
for time.Now().Before(deadline) {
conn, err := net.DialTimeout("tcp", addr, 500*time.Millisecond)
if err == nil {
conn.Close()
return nil
}
attempts++
if attempts%10 == 0 {
// Log every 10th attempt (every second)
}
time.Sleep(100 * time.Millisecond)
}
return fmt.Errorf("timeout waiting for relay at %s after %d attempts", url, attempts)
}
func outputResults(results []relaytester.TestResult, t *testing.T) {
passed := 0
failed := 0
requiredFailed := 0
for _, result := range results {
if result.Pass {
passed++
t.Logf("PASS: %s", result.Name)
} else {
failed++
if result.Required {
requiredFailed++
t.Errorf("FAIL (required): %s - %s", result.Name, result.Info)
} else {
t.Logf("FAIL (optional): %s - %s", result.Name, result.Info)
}
}
}
t.Logf("\nTest Summary:")
t.Logf(" Total: %d", len(results))
t.Logf(" Passed: %d", passed)
t.Logf(" Failed: %d", failed)
t.Logf(" Required Failed: %d", requiredFailed)
}
// TestMain allows custom test setup/teardown
func TestMain(m *testing.M) {
// Manually parse our custom flags to avoid conflicts with Go's test flags
for i := 1; i < len(os.Args); i++ {
arg := os.Args[i]
switch arg {
case "-relay-url":
if i+1 < len(os.Args) {
testRelayURL = os.Args[i+1]
i++
}
case "-test-name":
if i+1 < len(os.Args) {
testName = os.Args[i+1]
i++
}
case "-json":
testJSON = true
case "-keep-data":
keepDataDir = true
case "-port":
if i+1 < len(os.Args) {
fmt.Sscanf(os.Args[i+1], "%d", &relayPort)
i++
}
case "-data-dir":
if i+1 < len(os.Args) {
relayDataDir = os.Args[i+1]
i++
}
}
}
code := m.Run()
os.Exit(code)
}

View File

@@ -71,6 +71,9 @@ check_go_installation() {
install_go() {
log_info "Installing Go $GO_VERSION..."
# Save original directory
local original_dir=$(pwd)
# Determine architecture
local arch=$(uname -m)
case $arch in
@@ -100,13 +103,17 @@ install_go() {
rm -rf "$GOROOT"
fi
# Extract Go
log_info "Extracting Go to $GOROOT..."
tar -xf "$go_archive"
# Extract Go to a temporary location first, then move to final destination
log_info "Extracting Go..."
tar -xf "$go_archive" -C /tmp
mv /tmp/go "$GOROOT"
# Clean up
rm -f "$go_archive"
# Return to original directory
cd "$original_dir"
log_success "Go $GO_VERSION installed successfully"
}
@@ -167,7 +174,10 @@ build_application() {
log_info "Updating embedded web assets..."
./scripts/update-embedded-web.sh
# The update-embedded-web.sh script should have built the binary
# Build the binary in the current directory
log_info "Building binary in current directory..."
CGO_ENABLED=1 go build -o "$BINARY_NAME"
if [[ -f "./$BINARY_NAME" ]]; then
log_success "ORLY relay built successfully"
else

198
scripts/run-policy-filter-test.sh Executable file

@@ -0,0 +1,198 @@
#!/bin/bash
set -euo pipefail
# Policy Filter Integration Test
# This script runs the relay with the example policy and tests event filtering
# Config
PORT=${PORT:-34568}
URL=${URL:-ws://127.0.0.1:${PORT}}
LOG=/tmp/orly-policy-filter.out
PID=/tmp/orly-policy-filter.pid
DATADIR=$(mktemp -d)
CONFIG_DIR="$HOME/.config/ORLY_POLICY_TEST"
cleanup() {
trap - EXIT
if [[ -f "$PID" ]]; then
kill -INT "$(cat "$PID")" 2>/dev/null || true
rm -f "$PID"
fi
rm -rf "$DATADIR"
rm -rf "$CONFIG_DIR"
}
trap cleanup EXIT
echo "🧪 Policy Filter Integration Test"
echo "=================================="
# Create config directory
mkdir -p "$CONFIG_DIR"
# Generate keys using Go helper
echo "🔑 Generating test keys..."
KEYGEN_TMP=$(mktemp)
cat > "$KEYGEN_TMP.go" <<'EOF'
package main
import (
"encoding/json"
"fmt"
"next.orly.dev/pkg/crypto/p256k"
"next.orly.dev/pkg/encoders/hex"
)
func main() {
// Generate allowed signer
allowedSigner := &p256k.Signer{}
if err := allowedSigner.Generate(); err != nil {
panic(err)
}
allowedPubkeyHex := hex.Enc(allowedSigner.Pub())
allowedSecHex := hex.Enc(allowedSigner.Sec())
// Generate unauthorized signer
unauthorizedSigner := &p256k.Signer{}
if err := unauthorizedSigner.Generate(); err != nil {
panic(err)
}
unauthorizedPubkeyHex := hex.Enc(unauthorizedSigner.Pub())
unauthorizedSecHex := hex.Enc(unauthorizedSigner.Sec())
result := map[string]string{
"allowedPubkey": allowedPubkeyHex,
"allowedSec": allowedSecHex,
"unauthorizedPubkey": unauthorizedPubkeyHex,
"unauthorizedSec": unauthorizedSecHex,
}
jsonBytes, _ := json.Marshal(result)
fmt.Println(string(jsonBytes))
}
EOF
# Run from the project root directory
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
cd "$PROJECT_ROOT"
KEYS=$(go run -tags=cgo "$KEYGEN_TMP.go" 2>&1 | grep -E '^\{.*\}$' || true)
rm -f "$KEYGEN_TMP.go"
cd - > /dev/null
ALLOWED_PUBKEY=$(echo "$KEYS" | jq -r '.allowedPubkey')
ALLOWED_SEC=$(echo "$KEYS" | jq -r '.allowedSec')
UNAUTHORIZED_PUBKEY=$(echo "$KEYS" | jq -r '.unauthorizedPubkey')
UNAUTHORIZED_SEC=$(echo "$KEYS" | jq -r '.unauthorizedSec')
echo "✅ Generated keys:"
echo " Allowed pubkey: $ALLOWED_PUBKEY"
echo " Unauthorized pubkey: $UNAUTHORIZED_PUBKEY"
# Create policy JSON with generated keys
echo "📝 Creating policy.json..."
cat > "$CONFIG_DIR/policy.json" <<EOF
{
"kind": {
"whitelist": [4678, 10306, 30520, 30919]
},
"rules": {
"4678": {
"description": "Zenotp message events",
"script": "$CONFIG_DIR/validate4678.js",
"privileged": true
},
"10306": {
"description": "End user whitelist changes",
"read_allow": [
"$ALLOWED_PUBKEY"
],
"privileged": true
},
"30520": {
"description": "Zenotp events",
"write_allow": [
"$ALLOWED_PUBKEY"
],
"privileged": true
},
"30919": {
"description": "Zenotp events",
"write_allow": [
"$ALLOWED_PUBKEY"
],
"privileged": true
}
}
}
EOF
echo "✅ Policy file created at: $CONFIG_DIR/policy.json"
# Build relay and test client
echo "🔨 Building relay..."
go build -o orly .
# Start relay
echo "🚀 Starting relay on ${URL} with policy enabled..."
ORLY_APP_NAME="ORLY_POLICY_TEST" \
ORLY_DATA_DIR="$DATADIR" \
ORLY_PORT=${PORT} \
ORLY_POLICY_ENABLED=true \
ORLY_ACL_MODE=none \
ORLY_AUTH_TO_WRITE=true \
ORLY_LOG_LEVEL=info \
./orly >"$LOG" 2>&1 & echo $! >"$PID"
# Wait for relay to start
sleep 3
if ! ps -p "$(cat "$PID")" >/dev/null 2>&1; then
echo "❌ Relay failed to start; logs:" >&2
sed -n '1,200p' "$LOG" >&2
exit 1
fi
echo "✅ Relay started (PID: $(cat "$PID"))"
# Build test client
echo "🔨 Building test client..."
go build -o cmd/policyfiltertest/policyfiltertest ./cmd/policyfiltertest
# Export keys for test client
export ALLOWED_PUBKEY
export ALLOWED_SEC
export UNAUTHORIZED_PUBKEY
export UNAUTHORIZED_SEC
# Run tests
echo "🧪 Running policy filter tests..."
set +e
cmd/policyfiltertest/policyfiltertest -url "${URL}" -allowed-pubkey "$ALLOWED_PUBKEY" -allowed-sec "$ALLOWED_SEC" -unauthorized-pubkey "$UNAUTHORIZED_PUBKEY" -unauthorized-sec "$UNAUTHORIZED_SEC"
TEST_RESULT=$?
set -e
# Check logs for "policy rule is inactive" messages
echo "📋 Checking logs for policy rule inactivity..."
if grep -q "policy rule is inactive" "$LOG"; then
echo "⚠️ WARNING: Found 'policy rule is inactive' messages in logs"
grep "policy rule is inactive" "$LOG" | head -5
else
echo "✅ No 'policy rule is inactive' messages found (good)"
fi
# Check logs for policy filtered events
echo "📋 Checking logs for policy filtered events..."
if grep -q "policy filtered out event" "$LOG"; then
echo "✅ Found policy filtered events (expected):"
grep "policy filtered out event" "$LOG" | head -5
fi
if [ $TEST_RESULT -eq 0 ]; then
echo "✅ All tests passed!"
exit 0
else
echo "❌ Tests failed with exit code $TEST_RESULT"
echo "📋 Last 50 lines of relay log:"
tail -50 "$LOG"
exit $TEST_RESULT
fi

Submodule scripts/secp256k1 deleted from 0cdc758a56

0
scripts/sprocket/SPROCKET_TEST_README.md Normal file → Executable file

0
scripts/sprocket/test-sprocket-complete.sh Normal file → Executable file

0
scripts/sprocket/test-sprocket-demo.sh Normal file → Executable file

0
scripts/sprocket/test-sprocket-example.sh Normal file → Executable file

0
scripts/sprocket/test-sprocket-final.sh Normal file → Executable file

0
scripts/sprocket/test-sprocket-manual.sh Normal file → Executable file

0
scripts/sprocket/test-sprocket-simple.sh Normal file → Executable file

0
scripts/sprocket/test-sprocket-working.sh Normal file → Executable file

0
scripts/sprocket/test-sprocket.py Normal file → Executable file

0
scripts/sprocket/test-sprocket.sh Normal file → Executable file

View File

@@ -1,14 +1,40 @@
#!/usr/bin/env bash
set -e
SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
apt -y install build-essential autoconf libtool git wget
cd $SCRIPT_DIR
# Update package lists
apt-get update
# Try to install from package manager first (much faster)
echo "Attempting to install secp256k1 from package manager..."
if apt-get install -y libsecp256k1-dev >/dev/null 2>&1; then
echo "✓ Installed secp256k1 from package manager"
exit 0
fi
# Fall back to building from source if package not available
echo "Package not available in repository, building from source..."
# Install build dependencies
apt-get install -y build-essential autoconf automake libtool git wget pkg-config
cd "$SCRIPT_DIR"
rm -rf secp256k1
# Clone and setup secp256k1
git clone https://github.com/bitcoin-core/secp256k1.git
cd secp256k1
git checkout v0.6.0
# Initialize and update submodules
git submodule init
git submodule update
# Build and install
./autogen.sh
./configure --enable-module-schnorrsig --enable-module-ecdh --prefix=/usr
make -j1
sudo make install
make -j$(nproc)
make install
cd "$SCRIPT_DIR"