Compare commits

4 Commits

| Author | SHA1 | Date |
|---|---|---|
| | `746523ea78` | |
| | `52189633d9` | |
| | `59247400dc` | |
| | `7a27c44bc9` | |
```diff
@@ -174,7 +174,11 @@
       "Bash(GOOS=js GOARCH=wasm go build:*)",
       "Bash(go mod graph:*)",
       "Bash(xxd:*)",
-      "Bash(CGO_ENABLED=0 go mod tidy:*)"
+      "Bash(CGO_ENABLED=0 go mod tidy:*)",
+      "WebFetch(domain:git.mleku.dev)",
+      "Bash(CGO_ENABLED=0 LOG_LEVEL=trace go test:*)",
+      "Bash(go vet:*)",
+      "Bash(gofmt:*)"
     ],
     "deny": [],
     "ask": []
```
CLAUDE.md (132 changed lines)
````diff
@@ -601,7 +601,76 @@ sudo journalctl -u orly -f
 - `github.com/templexxx/xhex` - SIMD hex encoding
 - `github.com/ebitengine/purego` - CGO-free C library loading
 - `go-simpler.org/env` - Environment variable configuration
-- `lol.mleku.dev` - Custom logging library
+- `lol.mleku.dev` - Custom logging library (see Logging section below)
+
+## Logging (lol.mleku.dev)
+
+The project uses `lol.mleku.dev` (Log Of Location), a simple logging library that prints timestamps and source code locations.
+
+### Log Levels (lowest to highest verbosity)
+
+| Level | Constant | Emoji | Usage |
+|-------|----------|-------|-------|
+| Off | `Off` | (none) | Disables all logging |
+| Fatal | `Fatal` | ☠️ | Unrecoverable errors, program exits |
+| Error | `Error` | 🚨 | Errors that need attention |
+| Warn | `Warn` | ⚠️ | Warnings, non-critical issues |
+| Info | `Info` | ℹ️ | General information (default) |
+| Debug | `Debug` | 🔎 | Debug information for development |
+| Trace | `Trace` | 👻 | Very detailed tracing, most verbose |
+
+### Environment Variable
+
+Set log level via `LOG_LEVEL` environment variable:
+
+```bash
+export LOG_LEVEL=trace   # Most verbose
+export LOG_LEVEL=debug   # Development debugging
+export LOG_LEVEL=info    # Default
+export LOG_LEVEL=warn    # Only warnings and errors
+export LOG_LEVEL=error   # Only errors
+export LOG_LEVEL=off     # Silent
+```
+
+**Note**: ORLY uses `ORLY_LOG_LEVEL` which is mapped to the underlying `LOG_LEVEL`.
+
+### Usage in Code
+
+Import and use the log package:
+
+```go
+import "lol.mleku.dev/log"
+
+// Log methods (each has .Ln, .F, .S, .C variants)
+log.T.F("trace: %s", msg) // Trace level - very detailed
+log.D.F("debug: %s", msg) // Debug level
+log.I.F("info: %s", msg)  // Info level
+log.W.F("warn: %s", msg)  // Warning level
+log.E.F("error: %s", msg) // Error level
+log.F.F("fatal: %s", msg) // Fatal level
+
+// Check errors (prints if error is not nil, returns bool)
+import "lol.mleku.dev/chk"
+if chk.E(err) { // chk.E = Error level check
+	return // Error was logged
+}
+if chk.D(err) { // chk.D = Debug level check
+	// ...
+}
+```
+
+### Log Printer Variants
+
+Each level has these printer types:
+- `.Ln(a...)` - Print items with spaces between
+- `.F(format, a...)` - Printf-style formatting
+- `.S(a...)` - Spew dump (detailed struct output)
+- `.C(func() string)` - Lazy evaluation (only runs closure if level is enabled)
+- `.Chk(error) bool` - Returns true if error is not nil, logs if so
+- `.Err(format, a...) error` - Logs and returns an error
+
+### Output Format
+
+```
+1764783029014485👻 message text /path/to/file.go:123
+```
+
+- Unix microsecond timestamp
+- Level emoji
+- Message text
+- Source file:line location
+
 ## Testing Guidelines
 
````
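The `.C` lazy-evaluation variant documented in the hunk above is worth illustrating: the closure that builds an expensive message only runs when the printer's level is enabled. The following is a self-contained sketch of that idea using only the standard library; the `Level`, `Printer`, and `C` names here are illustrative stand-ins, not the actual `lol.mleku.dev` API.

```go
package main

import "fmt"

// Level mirrors the ordering in the table above: Off < Fatal < ... < Trace.
type Level int

const (
	Off Level = iota
	Fatal
	Error
	Warn
	Info
	Debug
	Trace
)

// Printer logs at a fixed level when the configured verbosity permits it.
type Printer struct {
	level   Level // level this printer emits at
	enabled Level // currently configured verbosity
}

// C runs the closure only if this printer's level is enabled, so
// expensive message construction is skipped when the line would be dropped.
func (p Printer) C(f func() string) (logged bool) {
	if p.level > p.enabled {
		return false
	}
	fmt.Println(f())
	return true
}

func main() {
	cfg := Info // e.g. parsed from LOG_LEVEL
	trace := Printer{level: Trace, enabled: cfg}
	info := Printer{level: Info, enabled: cfg}

	ran := false
	trace.C(func() string { ran = true; return "expensive trace dump" }) // skipped at Info
	fmt.Println("trace closure ran:", ran)
	info.C(func() string { return "served request" }) // printed
}
```

The point of the design is that callers can leave verbose instrumentation in hot paths: at lower verbosity the closure is never invoked, so its formatting cost is never paid.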
````diff
@@ -709,12 +778,71 @@ The Neo4j backend (`pkg/neo4j/`) includes Web of Trust (WoT) extensions:
 - **Schema Modifications**: See `pkg/neo4j/MODIFYING_SCHEMA.md` for how to update
 
 ### Policy System Enhancements
 - **Default-Permissive Model**: Read and write are allowed by default unless restrictions are configured
 - **Write-Only Validation**: Size, age, tag validations apply ONLY to writes
-- **Read-Only Filtering**: `read_allow`, `read_deny`, `privileged` apply ONLY to reads
+- **Read-Only Filtering**: `read_allow`, `read_follows_whitelist`, `privileged` apply ONLY to reads
 - **Separate Follows Whitelists**: `read_follows_whitelist` and `write_follows_whitelist` for fine-grained control
 - **Scripts**: Policy scripts execute ONLY for write operations
 - **Reference Documentation**: `docs/POLICY_CONFIGURATION_REFERENCE.md` provides authoritative read vs write applicability
 - See also: `pkg/policy/README.md` for quick reference
 
+### Policy JSON Configuration Quick Reference
+
+```json
+{
+  "default_policy": "allow|deny",
+  "kind": {
+    "whitelist": [1, 3, 4],               // Only these kinds allowed
+    "blacklist": [4]                      // These kinds denied (ignored if whitelist set)
+  },
+  "global": {
+    // Rule fields applied to ALL events
+    "size_limit": 100000,                 // Max event size (bytes)
+    "content_limit": 50000,               // Max content size (bytes)
+    "max_age_of_event": 86400,            // Max age (seconds)
+    "max_age_event_in_future": 300,       // Max future time (seconds)
+    "max_expiry_duration": "P7D",         // ISO-8601 expiry limit
+    "must_have_tags": ["d", "t"],         // Required tag keys
+    "protected_required": false,          // Require NIP-70 "-" tag
+    "identifier_regex": "^[a-z0-9-]{1,64}$",   // Regex for "d" tags
+    "tag_validation": {"t": "^[a-z0-9]+$"},    // Regex for any tag
+    "privileged": false,                  // READ-ONLY: party-involved check
+    "write_allow": ["pubkey_hex"],        // Pubkeys allowed to write
+    "write_deny": ["pubkey_hex"],         // Pubkeys denied from writing
+    "read_allow": ["pubkey_hex"],         // Pubkeys allowed to read
+    "read_deny": ["pubkey_hex"],          // Pubkeys denied from reading
+    "read_follows_whitelist": ["pubkey_hex"],  // Pubkeys whose follows can read
+    "write_follows_whitelist": ["pubkey_hex"], // Pubkeys whose follows can write
+    "script": "/path/to/script.sh"        // External validation script
+  },
+  "rules": {
+    "1": { /* Same fields as global, for kind 1 */ },
+    "30023": { /* Same fields as global, for kind 30023 */ }
+  },
+  "policy_admins": ["pubkey_hex"],        // Can update via kind 12345
+  "owners": ["pubkey_hex"],               // Full policy control
+  "policy_follow_whitelist_enabled": false // Enable legacy write_allow_follows
+}
+```
+
+**Access Control Summary:**
+| Restriction Field | Applies To | When Set |
+|-------------------|------------|----------|
+| `read_allow` | READ | Only listed pubkeys can read |
+| `read_deny` | READ | Listed pubkeys denied (if no read_allow) |
+| `read_follows_whitelist` | READ | Named pubkeys + their follows can read |
+| `write_allow` | WRITE | Only listed pubkeys can write |
+| `write_deny` | WRITE | Listed pubkeys denied (if no write_allow) |
+| `write_follows_whitelist` | WRITE | Named pubkeys + their follows can write |
+| `privileged` | READ | Only author + p-tag recipients can read |
+
+**Nil Policy Error Handling:**
+- If `ORLY_POLICY_ENABLED=true` but the policy fails to load (nil policy), the relay will:
+  - Log a FATAL error message indicating misconfiguration
+  - Return an error for all `CheckPolicy` calls
+  - Deny all events until the configuration is fixed
+- This is a safety measure - a nil policy with policy enabled indicates configuration error
+
 ### Authentication Modes
 - `ORLY_AUTH_REQUIRED=true`: Require authentication for ALL requests
 - `ORLY_AUTH_TO_WRITE=true`: Require authentication only for writes (allow anonymous reads)
 
````
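The default-permissive read rules in the access-control table above can be sketched in a few lines: a whitelist, when set, takes precedence, and a blacklist only applies when no whitelist is configured. This is an illustrative sketch of the documented semantics using hypothetical `ReadRule`/`CanRead` names, not the relay's actual policy code.

```go
package main

import "fmt"

// ReadRule models the read-side fields from the policy quick reference.
type ReadRule struct {
	ReadAllow []string // if non-empty, only these pubkeys may read
	ReadDeny  []string // consulted only when ReadAllow is empty
}

func contains(list []string, pk string) bool {
	for _, v := range list {
		if v == pk {
			return true
		}
	}
	return false
}

// CanRead applies the default-permissive model: reads are allowed
// unless a configured restriction says otherwise.
func (r ReadRule) CanRead(pubkey string) bool {
	if len(r.ReadAllow) > 0 {
		return contains(r.ReadAllow, pubkey) // whitelist wins when set
	}
	if contains(r.ReadDeny, pubkey) {
		return false // blacklist applies only without a whitelist
	}
	return true // default: allow
}

func main() {
	open := ReadRule{}
	fmt.Println(open.CanRead("anyone")) // no restrictions configured: allowed

	denied := ReadRule{ReadDeny: []string{"bad"}}
	fmt.Println(denied.CanRead("bad"), denied.CanRead("good"))

	allow := ReadRule{ReadAllow: []string{"vip"}, ReadDeny: []string{"vip"}}
	fmt.Println(allow.CanRead("vip")) // read_deny is ignored once read_allow is set
}
```

The same precedence (allow-list over deny-list over default-allow) applies symmetrically to `write_allow`/`write_deny` per the table.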
```diff
@@ -42,7 +42,6 @@ type BenchmarkConfig struct {
 	NetRate int // events/sec per worker
 
 	// Backend selection
-	UseDgraph bool
 	UseNeo4j bool
 	UseRelySQLite bool
 }
@@ -115,12 +114,6 @@ func main() {
 		return
 	}
 
-	if config.UseDgraph {
-		// Run dgraph benchmark
-		runDgraphBenchmark(config)
-		return
-	}
-
 	if config.UseNeo4j {
 		// Run Neo4j benchmark
 		runNeo4jBenchmark(config)
@@ -152,28 +145,6 @@ func main() {
 	benchmark.GenerateAsciidocReport()
 }
 
-func runDgraphBenchmark(config *BenchmarkConfig) {
-	fmt.Printf("Starting Nostr Relay Benchmark (Dgraph Backend)\n")
-	fmt.Printf("Data Directory: %s\n", config.DataDir)
-	fmt.Printf(
-		"Events: %d, Workers: %d\n",
-		config.NumEvents, config.ConcurrentWorkers,
-	)
-
-	dgraphBench, err := NewDgraphBenchmark(config)
-	if err != nil {
-		log.Fatalf("Failed to create dgraph benchmark: %v", err)
-	}
-	defer dgraphBench.Close()
-
-	// Run dgraph benchmark suite
-	dgraphBench.RunSuite()
-
-	// Generate reports
-	dgraphBench.GenerateReport()
-	dgraphBench.GenerateAsciidocReport()
-}
-
 func runNeo4jBenchmark(config *BenchmarkConfig) {
 	fmt.Printf("Starting Nostr Relay Benchmark (Neo4j Backend)\n")
 	fmt.Printf("Data Directory: %s\n", config.DataDir)
@@ -254,10 +225,6 @@ func parseFlags() *BenchmarkConfig {
 	flag.IntVar(&config.NetRate, "net-rate", 20, "Events per second per worker")
 
 	// Backend selection
-	flag.BoolVar(
-		&config.UseDgraph, "dgraph", false,
-		"Use dgraph backend (requires Docker)",
-	)
 	flag.BoolVar(
 		&config.UseNeo4j, "neo4j", false,
 		"Use Neo4j backend (requires Docker)",
```
```diff
@@ -31,7 +31,7 @@ type DatabaseConfig struct {
 }
 
 // NewDatabase creates a database instance based on the specified type.
-// Supported types: "badger", "dgraph", "neo4j"
+// Supported types: "badger", "neo4j"
 func NewDatabase(
 	ctx context.Context,
 	cancel context.CancelFunc,
@@ -24,9 +24,6 @@ type DatabaseConfig struct {
 	QueryCacheMaxAge time.Duration // ORLY_QUERY_CACHE_MAX_AGE
 	InlineEventThreshold int // ORLY_INLINE_EVENT_THRESHOLD
 
-	// DGraph-specific settings
-	DgraphURL string // ORLY_DGRAPH_URL
-
 	// Neo4j-specific settings
 	Neo4jURI string // ORLY_NEO4J_URI
 	Neo4jUser string // ORLY_NEO4J_USER
@@ -34,7 +31,7 @@ type DatabaseConfig struct {
 }
 
 // NewDatabase creates a database instance based on the specified type.
-// Supported types in WASM: "wasmdb", "dgraph", "neo4j"
+// Supported types in WASM: "wasmdb", "neo4j"
 // Note: "badger" is not available in WASM builds due to filesystem dependencies
 func NewDatabase(
 	ctx context.Context,
@@ -67,12 +64,6 @@ func NewDatabaseWithConfig(
 			return nil, fmt.Errorf("wasmdb database backend not available (import _ \"next.orly.dev/pkg/wasmdb\")")
 		}
 		return newWasmDBDatabase(ctx, cancel, cfg)
-	case "dgraph":
-		// Use the dgraph implementation (HTTP-based, works in WASM)
-		if newDgraphDatabase == nil {
-			return nil, fmt.Errorf("dgraph database backend not available (import _ \"next.orly.dev/pkg/dgraph\")")
-		}
-		return newDgraphDatabase(ctx, cancel, cfg)
 	case "neo4j":
 		// Use the neo4j implementation (HTTP-based, works in WASM)
 		if newNeo4jDatabase == nil {
@@ -80,20 +71,10 @@ func NewDatabaseWithConfig(
 		}
 		return newNeo4jDatabase(ctx, cancel, cfg)
 	default:
-		return nil, fmt.Errorf("unsupported database type: %s (supported in WASM: wasmdb, dgraph, neo4j)", dbType)
+		return nil, fmt.Errorf("unsupported database type: %s (supported in WASM: wasmdb, neo4j)", dbType)
 	}
 }
 
-// newDgraphDatabase creates a dgraph database instance
-// This is defined here to avoid import cycles
-var newDgraphDatabase func(context.Context, context.CancelFunc, *DatabaseConfig) (Database, error)
-
-// RegisterDgraphFactory registers the dgraph database factory
-// This is called from the dgraph package's init() function
-func RegisterDgraphFactory(factory func(context.Context, context.CancelFunc, *DatabaseConfig) (Database, error)) {
-	newDgraphDatabase = factory
-}
-
 // newNeo4jDatabase creates a neo4j database instance
 // This is defined here to avoid import cycles
 var newNeo4jDatabase func(context.Context, context.CancelFunc, *DatabaseConfig) (Database, error)
```
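The factory-registration pattern visible in the hunk above (a package-level function variable that stays nil until a backend package's `init()` registers itself, avoiding an import cycle) can be sketched generically. The names below (`RegisterNeo4jFactory`, `neo4jDB`) are illustrative stand-ins, not the actual `next.orly.dev` API.

```go
package main

import (
	"errors"
	"fmt"
)

// Database is a stand-in for the storage interface.
type Database interface{ Path() string }

// newNeo4j is nil until a backend package registers itself, which keeps
// the core package free of a direct backend import (no import cycle).
var newNeo4j func() (Database, error)

// RegisterNeo4jFactory would be called from the backend package's init().
func RegisterNeo4jFactory(f func() (Database, error)) { newNeo4j = f }

type neo4jDB struct{}

func (neo4jDB) Path() string { return "neo4j://" }

// NewDatabase mirrors the nil-check in the factory above: if the backend
// was never linked in, it fails with a descriptive error.
func NewDatabase(kind string) (Database, error) {
	switch kind {
	case "neo4j":
		if newNeo4j == nil {
			return nil, errors.New("neo4j backend not available (blank-import the backend package)")
		}
		return newNeo4j()
	default:
		return nil, fmt.Errorf("unsupported database type: %s", kind)
	}
}

func main() {
	if _, err := NewDatabase("neo4j"); err != nil {
		fmt.Println("before registration:", err)
	}
	RegisterNeo4jFactory(func() (Database, error) { return neo4jDB{}, nil })
	db, _ := NewDatabase("neo4j")
	fmt.Println("after registration:", db.Path())
}
```

In the real codebase the registration happens via a blank import (`import _ "next.orly.dev/pkg/neo4j"`), which triggers the backend's `init()` and populates the factory variable.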
```diff
@@ -13,7 +13,7 @@ import (
 )
 
 // Database defines the interface that all database implementations must satisfy.
-// This allows switching between different storage backends (badger, dgraph, etc.)
+// This allows switching between different storage backends (badger, neo4j, etc.)
 type Database interface {
 	// Core lifecycle methods
 	Path() string
```
```diff
@@ -16,12 +16,13 @@ This document provides a comprehensive guide to the Neo4j database schema used b
 
 ## Architecture Overview
 
-The Neo4j implementation uses a **dual-node architecture** to separate concerns:
+The Neo4j implementation uses a **unified node architecture**:
 
-1. **NIP-01 Base Layer**: Stores Nostr events with `Event`, `Author`, and `Tag` nodes for standard relay operations
-2. **WoT Extension Layer**: Stores social graph data with `NostrUser` nodes and relationship types (`FOLLOWS`, `MUTES`, `REPORTS`) for trust calculations
+1. **Event Storage**: `Event` and `Tag` nodes store Nostr events for standard relay operations
+2. **User Identity**: `NostrUser` nodes represent all Nostr users (both event authors and social graph participants)
+3. **Social Graph**: Relationship types (`FOLLOWS`, `MUTES`, `REPORTS`) between `NostrUser` nodes for trust calculations
 
-This separation allows the WoT extension to be modified independently without affecting NIP-01 compliance.
+**Note:** The `Author` label was deprecated and merged into `NostrUser` to eliminate redundancy. A migration automatically converts existing `Author` nodes when the relay starts.
 
 ### Data Model Summary
 
```
````diff
@@ -72,16 +73,17 @@ From the specification document:
 
 These elements are **required** for a NIP-01 compliant relay.
 
-### Constraints (schema.go:30-43)
+### Constraints (schema.go:30-44)
 
 ```cypher
 -- Event ID uniqueness (for "ids" filter)
 CREATE CONSTRAINT event_id_unique IF NOT EXISTS
 FOR (e:Event) REQUIRE e.id IS UNIQUE
 
--- Author pubkey uniqueness (for "authors" filter)
-CREATE CONSTRAINT author_pubkey_unique IF NOT EXISTS
-FOR (a:Author) REQUIRE a.pubkey IS UNIQUE
+-- NostrUser pubkey uniqueness (for "authors" filter and social graph)
+-- NostrUser unifies both NIP-01 author tracking and WoT social graph
+CREATE CONSTRAINT nostrUser_pubkey IF NOT EXISTS
+FOR (n:NostrUser) REQUIRE n.pubkey IS UNIQUE
 ```
 
 ### Indexes (schema.go:84-108)
````
````diff
@@ -122,14 +124,14 @@ Created in `save-event.go:buildEventCreationCypher()`:
 Created in `save-event.go:buildEventCreationCypher()`:
 
 ```cypher
--- Event → Author relationship
-(e:Event)-[:AUTHORED_BY]->(a:Author {pubkey: ...})
+-- Event → NostrUser relationship (author)
+(e:Event)-[:AUTHORED_BY]->(u:NostrUser {pubkey: ...})
 
 -- Event → Event reference (e-tags)
 (e:Event)-[:REFERENCES]->(ref:Event)
 
--- Event → Author mention (p-tags)
-(e:Event)-[:MENTIONS]->(mentioned:Author)
+-- Event → NostrUser mention (p-tags)
+(e:Event)-[:MENTIONS]->(mentioned:NostrUser)
 
 -- Event → Tag (other tags like #t, #d, etc.)
 (e:Event)-[:TAGGED_WITH]->(t:Tag {type: ..., value: ...})
````
```diff
@@ -146,7 +148,7 @@ The `query-events.go` file translates Nostr REQ filters into Cypher queries.
 | NIP-01 Filter | Cypher Translation | Index Used |
 |---------------|-------------------|------------|
 | `ids: ["abc..."]` | `e.id = $id_0` or `e.id STARTS WITH $id_0` | `event_id_unique` |
-| `authors: ["def..."]` | `e.pubkey = $author_0` or `e.pubkey STARTS WITH $author_0` | `author_pubkey_unique` |
+| `authors: ["def..."]` | `e.pubkey = $author_0` or `e.pubkey STARTS WITH $author_0` | `nostrUser_pubkey` |
 | `kinds: [1, 7]` | `e.kind IN $kinds` | `event_kind` |
 | `since: 1234567890` | `e.created_at >= $since` | `event_created_at` |
 | `until: 1234567890` | `e.created_at <= $until` | `event_created_at` |
```
````diff
@@ -435,25 +437,28 @@ if ev.Kind == 1 {
 
 ### Adding NostrEventTag → NostrUser REFERENCES
 
-Per the specification update, p-tags should create `REFERENCES` relationships to `NostrUser` nodes:
+The current implementation creates `MENTIONS` relationships from Events to `NostrUser` nodes for p-tags:
 
 ```go
-// In save-event.go buildEventCreationCypher(), modify p-tag handling:
+// In save-event.go buildEventCreationCypher(), p-tag handling:
 case "p":
-    // Current implementation: creates MENTIONS to Author
+    // Creates MENTIONS to NostrUser (unified node for both author and social graph)
     cypher += fmt.Sprintf(`
-        MERGE (mentioned%d:Author {pubkey: $%s})
+        MERGE (mentioned%d:NostrUser {pubkey: $%s})
+        ON CREATE SET mentioned%d.created_at = timestamp()
         CREATE (e)-[:MENTIONS]->(mentioned%d)
-    `, pTagIndex, paramName, pTagIndex)
+    `, pTagIndex, paramName, pTagIndex, pTagIndex)
 ```
 
-// NEW: Also reference NostrUser for WoT traversal
+To add additional tag nodes for enhanced query patterns:
 
 ```go
+    // Optional: Also create a Tag node for the p-tag
     cypher += fmt.Sprintf(`
-        MERGE (user%d:NostrUser {pubkey: $%s})
+        // Create a Tag node for the p-tag
         MERGE (pTag%d:NostrEventTag {tag_name: 'p', tag_value: $%s})
         CREATE (e)-[:HAS_TAG]->(pTag%d)
-        CREATE (pTag%d)-[:REFERENCES]->(user%d)
-    `, pTagIndex, paramName, pTagIndex, paramName, pTagIndex, pTagIndex, pTagIndex)
+        CREATE (pTag%d)-[:REFERENCES]->(mentioned%d)
+    `, pTagIndex, paramName, pTagIndex, pTagIndex, pTagIndex)
 ```
 
 ---
 
````
````diff
@@ -168,9 +168,9 @@ RETURN e
 
 ### Social graph query
 ```cypher
-MATCH (author:Author {pubkey: "abc123..."})
+MATCH (author:NostrUser {pubkey: "abc123..."})
       <-[:AUTHORED_BY]-(e:Event)
-      -[:MENTIONS]->(mentioned:Author)
+      -[:MENTIONS]->(mentioned:NostrUser)
 RETURN author, e, mentioned
 ```
 
````
```diff
@@ -55,7 +55,7 @@ func TestExpiration_SaveEventWithExpiration(t *testing.T) {
 	ev.CreatedAt = timestamp.Now().V
 	ev.Kind = 1
 	ev.Content = []byte("Event with expiration")
-	ev.Tags = tag.NewS(tag.NewFromAny("expiration", timestamp.From(futureExpiration).String()))
+	ev.Tags = tag.NewS(tag.NewFromAny("expiration", timestamp.FromUnix(futureExpiration).String()))
 
 	if err := ev.Sign(signer); err != nil {
 		t.Fatalf("Failed to sign event: %v", err)
@@ -118,7 +118,7 @@ func TestExpiration_DeleteExpiredEvents(t *testing.T) {
 	expiredEv.CreatedAt = timestamp.Now().V - 7200 // 2 hours ago
 	expiredEv.Kind = 1
 	expiredEv.Content = []byte("Expired event")
-	expiredEv.Tags = tag.NewS(tag.NewFromAny("expiration", timestamp.From(pastExpiration).String()))
+	expiredEv.Tags = tag.NewS(tag.NewFromAny("expiration", timestamp.FromUnix(pastExpiration).String()))
 
 	if err := expiredEv.Sign(signer); err != nil {
 		t.Fatalf("Failed to sign expired event: %v", err)
@@ -136,7 +136,7 @@ func TestExpiration_DeleteExpiredEvents(t *testing.T) {
 	validEv.CreatedAt = timestamp.Now().V
 	validEv.Kind = 1
 	validEv.Content = []byte("Valid event")
-	validEv.Tags = tag.NewS(tag.NewFromAny("expiration", timestamp.From(futureExpiration).String()))
+	validEv.Tags = tag.NewS(tag.NewFromAny("expiration", timestamp.FromUnix(futureExpiration).String()))
 
 	if err := validEv.Sign(signer); err != nil {
 		t.Fatalf("Failed to sign valid event: %v", err)
```
```diff
@@ -331,7 +331,7 @@ func TestGetSerialsByIds(t *testing.T) {
 	}
 
 	// Create and save multiple events
-	ids := tag.NewS()
+	ids := tag.New()
 	for i := 0; i < 3; i++ {
 		ev := event.New()
 		ev.Pubkey = signer.Pub()
@@ -347,7 +347,8 @@ func TestGetSerialsByIds(t *testing.T) {
 			t.Fatalf("Failed to save event: %v", err)
 		}
 
-		ids.Append(tag.NewFromAny("", hex.Enc(ev.ID[:])))
+		// Append ID to the tag's T slice
+		ids.T = append(ids.T, []byte(hex.Enc(ev.ID[:])))
 	}
 
 	// Get serials by IDs
```
```diff
@@ -10,7 +10,7 @@ import (
 	"lol.mleku.dev/log"
 )
 
-// NewLogger creates a new dgraph logger.
+// NewLogger creates a new neo4j logger.
 func NewLogger(logLevel int, label string) (l *logger) {
 	l = &logger{Label: label}
 	l.Level.Store(int32(logLevel))
```
pkg/neo4j/migrations.go (new file, 197 lines)

@@ -0,0 +1,197 @@
```go
package neo4j

import (
	"context"
	"fmt"
)

// Migration represents a database migration with a version identifier
type Migration struct {
	Version     string
	Description string
	Migrate     func(ctx context.Context, n *N) error
}

// migrations is the ordered list of database migrations
// Migrations are applied in order and tracked via Marker nodes
var migrations = []Migration{
	{
		Version:     "v1",
		Description: "Merge Author nodes into NostrUser nodes",
		Migrate:     migrateAuthorToNostrUser,
	},
}

// RunMigrations executes all pending migrations
func (n *N) RunMigrations() {
	ctx := context.Background()

	for _, migration := range migrations {
		// Check if migration has already been applied
		if n.migrationApplied(ctx, migration.Version) {
			n.Logger.Infof("migration %s already applied, skipping", migration.Version)
			continue
		}

		n.Logger.Infof("applying migration %s: %s", migration.Version, migration.Description)

		if err := migration.Migrate(ctx, n); err != nil {
			n.Logger.Errorf("migration %s failed: %v", migration.Version, err)
			// Continue to next migration - don't fail startup
			continue
		}

		// Mark migration as complete
		if err := n.markMigrationComplete(ctx, migration.Version, migration.Description); err != nil {
			n.Logger.Warningf("failed to mark migration %s as complete: %v", migration.Version, err)
		}

		n.Logger.Infof("migration %s completed successfully", migration.Version)
	}
}

// migrationApplied checks if a migration has already been applied
func (n *N) migrationApplied(ctx context.Context, version string) bool {
	cypher := `
MATCH (m:Migration {version: $version})
RETURN m.version
`
	result, err := n.ExecuteRead(ctx, cypher, map[string]any{"version": version})
	if err != nil {
		return false
	}
	return result.Next(ctx)
}

// markMigrationComplete marks a migration as applied
func (n *N) markMigrationComplete(ctx context.Context, version, description string) error {
	cypher := `
CREATE (m:Migration {
	version: $version,
	description: $description,
	applied_at: timestamp()
})
`
	_, err := n.ExecuteWrite(ctx, cypher, map[string]any{
		"version":     version,
		"description": description,
	})
	return err
}

// migrateAuthorToNostrUser migrates Author nodes to NostrUser nodes
// This consolidates the separate Author (NIP-01) and NostrUser (WoT) labels
// into a unified NostrUser label for the social graph
func migrateAuthorToNostrUser(ctx context.Context, n *N) error {
	// Step 1: Check if there are any Author nodes to migrate
	countCypher := `MATCH (a:Author) RETURN count(a) AS count`
	countResult, err := n.ExecuteRead(ctx, countCypher, nil)
	if err != nil {
		return fmt.Errorf("failed to count Author nodes: %w", err)
	}

	var authorCount int64
	if countResult.Next(ctx) {
		record := countResult.Record()
		if count, ok := record.Values[0].(int64); ok {
			authorCount = count
		}
	}

	if authorCount == 0 {
		n.Logger.Infof("no Author nodes to migrate")
		return nil
	}

	n.Logger.Infof("migrating %d Author nodes to NostrUser", authorCount)

	// Step 2: For each Author node, merge into NostrUser with same pubkey
	// This uses MERGE to either match existing NostrUser or create new one
	// Then copies any relationships from Author to NostrUser
	mergeCypher := `
// Match all Author nodes
MATCH (a:Author)

// For each Author, merge into NostrUser (creates if doesn't exist)
MERGE (u:NostrUser {pubkey: a.pubkey})
ON CREATE SET u.created_at = timestamp(), u.migrated_from_author = true

// Return count for logging
RETURN count(DISTINCT a) AS migrated
`

	result, err := n.ExecuteWrite(ctx, mergeCypher, nil)
	if err != nil {
		return fmt.Errorf("failed to merge Author nodes to NostrUser: %w", err)
	}

	// Log result (result consumption happens within the session)
	_ = result

	// Step 3: Migrate AUTHORED_BY relationships from Author to NostrUser
	// Events should now point to NostrUser instead of Author
	relationshipCypher := `
// Find events linked to Author via AUTHORED_BY
MATCH (e:Event)-[r:AUTHORED_BY]->(a:Author)

// Get or create the corresponding NostrUser
MATCH (u:NostrUser {pubkey: a.pubkey})

// Create new relationship to NostrUser if it doesn't exist
MERGE (e)-[:AUTHORED_BY]->(u)

// Delete old relationship to Author
DELETE r

RETURN count(r) AS migrated_relationships
`

	_, err = n.ExecuteWrite(ctx, relationshipCypher, nil)
	if err != nil {
		return fmt.Errorf("failed to migrate AUTHORED_BY relationships: %w", err)
	}

	// Step 4: Migrate MENTIONS relationships from Author to NostrUser
	mentionsCypher := `
// Find events with MENTIONS to Author
MATCH (e:Event)-[r:MENTIONS]->(a:Author)

// Get or create the corresponding NostrUser
MATCH (u:NostrUser {pubkey: a.pubkey})

// Create new relationship to NostrUser if it doesn't exist
MERGE (e)-[:MENTIONS]->(u)

// Delete old relationship to Author
DELETE r

RETURN count(r) AS migrated_mentions
`

	_, err = n.ExecuteWrite(ctx, mentionsCypher, nil)
	if err != nil {
		return fmt.Errorf("failed to migrate MENTIONS relationships: %w", err)
	}

	// Step 5: Delete orphaned Author nodes (no longer needed)
	deleteCypher := `
// Find Author nodes with no remaining relationships
MATCH (a:Author)
WHERE NOT (a)<-[:AUTHORED_BY]-() AND NOT (a)<-[:MENTIONS]-()
DETACH DELETE a
RETURN count(a) AS deleted
`

	_, err = n.ExecuteWrite(ctx, deleteCypher, nil)
	if err != nil {
		return fmt.Errorf("failed to delete orphaned Author nodes: %w", err)
	}

	// Step 6: Drop the old Author constraint if it exists
	dropConstraintCypher := `DROP CONSTRAINT author_pubkey_unique IF EXISTS`
	_, _ = n.ExecuteWrite(ctx, dropConstraintCypher, nil)
	// Ignore error as constraint may not exist

	n.Logger.Infof("completed Author to NostrUser migration")
	return nil
}
```
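The version-marker pattern in migrations.go (apply each migration once, record a marker node, skip it on later startups, and keep going on failure rather than aborting startup) can be sketched with an in-memory store. This illustrates the pattern only; it is not the Neo4j-backed code above.

```go
package main

import "fmt"

// Migration pairs a version identifier with its apply function.
type Migration struct {
	Version string
	Migrate func() error
}

// Runner applies migrations in order and records completed versions,
// analogous to the Migration marker nodes created in Neo4j.
type Runner struct {
	applied map[string]bool // markMigrationComplete equivalent
	runs    []string        // versions applied this process
}

func NewRunner() *Runner { return &Runner{applied: map[string]bool{}} }

// Run mirrors RunMigrations: skip applied versions, tolerate failures,
// and only mark a version complete after a successful apply.
func (r *Runner) Run(ms []Migration) {
	for _, m := range ms {
		if r.applied[m.Version] {
			continue // already applied: skip, as migrationApplied does
		}
		if err := m.Migrate(); err != nil {
			continue // log and move on rather than failing startup
		}
		r.applied[m.Version] = true
		r.runs = append(r.runs, m.Version)
	}
}

func main() {
	r := NewRunner()
	ms := []Migration{{Version: "v1", Migrate: func() error { return nil }}}
	r.Run(ms)
	r.Run(ms) // second startup: nothing re-applied
	fmt.Println(r.runs)
}
```

Because a failed migration is never marked complete, it is retried on the next startup, which matches the continue-on-error behavior in the file above.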
```diff
@@ -135,6 +135,9 @@ func NewWithConfig(
 		return
 	}
 
+	// Run database migrations (e.g., Author -> NostrUser consolidation)
+	n.RunMigrations()
+
 	// Initialize serial counter
 	if err = n.initSerialCounter(); chk.E(err) {
 		return
@@ -298,10 +301,8 @@ func (n *N) EventIdsBySerial(start uint64, count int) (
 		return
 	}
 
-// RunMigrations runs database migrations (no-op for neo4j)
-func (n *N) RunMigrations() {
-	// No-op for neo4j
-}
+// RunMigrations is implemented in migrations.go
+// It handles schema migrations like the Author -> NostrUser consolidation
 
 // Ready returns a channel that closes when the database is ready to serve requests.
 // This allows callers to wait for database warmup to complete.
```
```diff
@@ -290,16 +290,16 @@ func TestQueryEventsWithLimit(t *testing.T) {
 	}
 
 	// Query with limit
-	limit := 5
+	limit := uint(5)
 	evs, err := db.QueryEvents(ctx, &filter.F{
 		Kinds: kind.NewS(kind.New(1)),
-		Limit: limit,
+		Limit: &limit,
 	})
 	if err != nil {
 		t.Fatalf("Failed to query events with limit: %v", err)
 	}
 
-	if len(evs) != limit {
+	if len(evs) != int(limit) {
 		t.Fatalf("Expected %d events with limit, got %d", limit, len(evs))
 	}
@@ -406,8 +406,7 @@ func TestQueryEventsMultipleAuthors(t *testing.T) {
 	createAndSaveEvent(t, ctx, db, charlie, 1, "Charlie", nil, baseTs+2)
 
 	// Query for Alice and Bob's events
-	authors := tag.NewFromBytesSlice(alice.Pub())
-	authors.Append(tag.NewFromBytesSlice(bob.Pub()).GetFirst(nil))
+	authors := tag.NewFromBytesSlice(alice.Pub(), bob.Pub())
 
 	evs, err := db.QueryEvents(ctx, &filter.F{
 		Authors: authors,
@@ -437,7 +436,7 @@ func TestCountEvents(t *testing.T) {
 	}
 
 	// Count events
-	count, err := db.CountEvents(ctx, &filter.F{
+	count, _, err := db.CountEvents(ctx, &filter.F{
 		Kinds: kind.NewS(kind.New(1)),
 	})
 	if err != nil {
```
```diff
@@ -84,7 +84,7 @@ func (n *N) SaveEvent(c context.Context, ev *event.E) (exists bool, err error) {
 // buildEventCreationCypher constructs a Cypher query to create an event node with all relationships
 // This is a single atomic operation that creates:
 // - Event node with all properties
-// - Author node and AUTHORED_BY relationship
+// - NostrUser node and AUTHORED_BY relationship (unified author + WoT node)
 // - Tag nodes and TAGGED_WITH relationships
 // - Reference relationships (REFERENCES for 'e' tags, MENTIONS for 'p' tags)
 func (n *N) buildEventCreationCypher(ev *event.E, serial uint64) (string, map[string]any) {
@@ -124,10 +124,12 @@ func (n *N) buildEventCreationCypher(ev *event.E, serial uint64) (string, map[st
 	params["tags"] = string(tagsJSON)
 
 	// Start building the Cypher query
-	// Use MERGE to ensure idempotency for author nodes
+	// Use MERGE to ensure idempotency for NostrUser nodes
+	// NostrUser serves both NIP-01 author tracking and WoT social graph
 	cypher := `
-// Create or match author node
-MERGE (a:Author {pubkey: $pubkey})
+// Create or match NostrUser node (unified author + social graph)
+MERGE (a:NostrUser {pubkey: $pubkey})
+ON CREATE SET a.created_at = timestamp(), a.first_seen_event = $eventId
 
 // Create event node with expiration for NIP-40 support
 CREATE (e:Event {
@@ -212,15 +214,16 @@ FOREACH (ignoreMe IN CASE WHEN ref%d IS NOT NULL THEN [1] ELSE [] END |
 		continue // Skip invalid p-tags
 	}
 
-	// Create mention to another author
+	// Create mention to another NostrUser
 	paramName := fmt.Sprintf("pTag_%d", pTagIndex)
 	params[paramName] = tagValue
 
 	cypher += fmt.Sprintf(`
-// Mention of author (p-tag)
-MERGE (mentioned%d:Author {pubkey: $%s})
+// Mention of NostrUser (p-tag)
+MERGE (mentioned%d:NostrUser {pubkey: $%s})
+ON CREATE SET mentioned%d.created_at = timestamp()
 CREATE (e)-[:MENTIONS]->(mentioned%d)
-`, pTagIndex, paramName, pTagIndex)
+`, pTagIndex, paramName, pTagIndex, pTagIndex)
 
 	pTagIndex++
```
@@ -542,7 +542,7 @@ func TestSaveEvent_ETagReference(t *testing.T) {

 	// Verify MENTIONS relationship was also created for the p-tag
 	mentionsCypher := `
-MATCH (reply:Event {id: $replyId})-[:MENTIONS]->(author:Author {pubkey: $authorPubkey})
+MATCH (reply:Event {id: $replyId})-[:MENTIONS]->(author:NostrUser {pubkey: $authorPubkey})
 RETURN author.pubkey AS pubkey
 `
 	mentionsParams := map[string]any{
@@ -37,10 +37,11 @@ func (n *N) applySchema(ctx context.Context) error {
 		// REQ filters can specify: {"ids": ["<event_id>", ...]}
 		"CREATE CONSTRAINT event_id_unique IF NOT EXISTS FOR (e:Event) REQUIRE e.id IS UNIQUE",

-		// MANDATORY (NIP-01): Author.pubkey uniqueness for "authors" filter
+		// MANDATORY (NIP-01): NostrUser.pubkey uniqueness for "authors" filter
 		// REQ filters can specify: {"authors": ["<pubkey>", ...]}
-		// Events are linked to Author nodes via AUTHORED_BY relationship
-		"CREATE CONSTRAINT author_pubkey_unique IF NOT EXISTS FOR (a:Author) REQUIRE a.pubkey IS UNIQUE",
+		// Events are linked to NostrUser nodes via AUTHORED_BY relationship
+		// NOTE: NostrUser unifies both NIP-01 author tracking and WoT social graph
+		"CREATE CONSTRAINT nostrUser_pubkey IF NOT EXISTS FOR (n:NostrUser) REQUIRE n.pubkey IS UNIQUE",

 		// ============================================================
 		// === OPTIONAL: Internal Relay Operations ===
@@ -66,9 +67,8 @@ func (n *N) applySchema(ctx context.Context) error {
 		// Not required for NIP-01 compliance
 		// ============================================================

-		// OPTIONAL (WoT): NostrUser nodes for social graph/trust metrics
-		// Separate from Author nodes - Author is for NIP-01, NostrUser for WoT
-		"CREATE CONSTRAINT nostrUser_pubkey IF NOT EXISTS FOR (n:NostrUser) REQUIRE n.pubkey IS UNIQUE",
+		// NOTE: NostrUser constraint is defined above in MANDATORY section
+		// It serves both NIP-01 (author tracking) and WoT (social graph) purposes

 		// OPTIONAL (WoT): Container for WoT metrics cards per observee
 		"CREATE CONSTRAINT setOfNostrUserWotMetricsCards_observee_pubkey IF NOT EXISTS FOR (n:SetOfNostrUserWotMetricsCards) REQUIRE n.observee_pubkey IS UNIQUE",
@@ -200,6 +200,9 @@ func (n *N) dropAll(ctx context.Context) error {
 	constraints := []string{
 		// MANDATORY (NIP-01) constraints
 		"DROP CONSTRAINT event_id_unique IF EXISTS",
+		"DROP CONSTRAINT nostrUser_pubkey IF EXISTS", // Unified author + WoT constraint
+
+		// Legacy constraint (removed in migration)
 		"DROP CONSTRAINT author_pubkey_unique IF EXISTS",

 		// OPTIONAL (Internal) constraints
@@ -207,9 +210,6 @@ func (n *N) dropAll(ctx context.Context) error {

 		// OPTIONAL (Social Graph) constraints
 		"DROP CONSTRAINT processedSocialEvent_event_id IF EXISTS",

 		// OPTIONAL (WoT) constraints
-		"DROP CONSTRAINT nostrUser_pubkey IF EXISTS",
 		"DROP CONSTRAINT setOfNostrUserWotMetricsCards_observee_pubkey IF EXISTS",
 		"DROP CONSTRAINT nostrUserWotMetricsCard_unique_combination_1 IF EXISTS",
 		"DROP CONSTRAINT nostrUserWotMetricsCard_unique_combination_2 IF EXISTS",
@@ -5,179 +5,12 @@ import (
 	"os"
 	"testing"

 	"git.mleku.dev/mleku/nostr/encoders/filter"
 	"git.mleku.dev/mleku/nostr/encoders/kind"
 	"git.mleku.dev/mleku/nostr/interfaces/signer/p8k"
 )

-func TestSubscriptions_AddAndRemove(t *testing.T) {
-	neo4jURI := os.Getenv("ORLY_NEO4J_URI")
-	if neo4jURI == "" {
-		t.Skip("Skipping Neo4j test: ORLY_NEO4J_URI not set")
-	}
-
-	ctx, cancel := context.WithCancel(context.Background())
-	defer cancel()
-
-	tempDir := t.TempDir()
-	db, err := New(ctx, cancel, tempDir, "debug")
-	if err != nil {
-		t.Fatalf("Failed to create database: %v", err)
-	}
-	defer db.Close()
-
-	<-db.Ready()
-
-	// Create a subscription
-	subID := "test-sub-123"
-	f := &filter.F{
-		Kinds: kind.NewS(kind.New(1)),
-	}
-
-	// Add subscription
-	db.AddSubscription(subID, f)
-
-	// Get subscription count (should be 1)
-	count := db.GetSubscriptionCount()
-	if count != 1 {
-		t.Fatalf("Expected 1 subscription, got %d", count)
-	}
-
-	// Remove subscription
-	db.RemoveSubscription(subID)
-
-	// Get subscription count (should be 0)
-	count = db.GetSubscriptionCount()
-	if count != 0 {
-		t.Fatalf("Expected 0 subscriptions after removal, got %d", count)
-	}
-
-	t.Logf("✓ Subscription add/remove works correctly")
-}
-
-func TestSubscriptions_MultipleSubscriptions(t *testing.T) {
-	neo4jURI := os.Getenv("ORLY_NEO4J_URI")
-	if neo4jURI == "" {
-		t.Skip("Skipping Neo4j test: ORLY_NEO4J_URI not set")
-	}
-
-	ctx, cancel := context.WithCancel(context.Background())
-	defer cancel()
-
-	tempDir := t.TempDir()
-	db, err := New(ctx, cancel, tempDir, "debug")
-	if err != nil {
-		t.Fatalf("Failed to create database: %v", err)
-	}
-	defer db.Close()
-
-	<-db.Ready()
-
-	// Add multiple subscriptions
-	for i := 0; i < 5; i++ {
-		subID := string(rune('A' + i))
-		f := &filter.F{
-			Kinds: kind.NewS(kind.New(uint16(i + 1))),
-		}
-		db.AddSubscription(subID, f)
-	}
-
-	// Get subscription count
-	count := db.GetSubscriptionCount()
-	if count != 5 {
-		t.Fatalf("Expected 5 subscriptions, got %d", count)
-	}
-
-	// Remove some subscriptions
-	db.RemoveSubscription("A")
-	db.RemoveSubscription("C")
-
-	count = db.GetSubscriptionCount()
-	if count != 3 {
-		t.Fatalf("Expected 3 subscriptions after removal, got %d", count)
-	}
-
-	// Clear all subscriptions
-	db.ClearSubscriptions()
-
-	count = db.GetSubscriptionCount()
-	if count != 0 {
-		t.Fatalf("Expected 0 subscriptions after clear, got %d", count)
-	}
-
-	t.Logf("✓ Multiple subscriptions managed correctly")
-}
-
-func TestSubscriptions_DuplicateID(t *testing.T) {
-	neo4jURI := os.Getenv("ORLY_NEO4J_URI")
-	if neo4jURI == "" {
-		t.Skip("Skipping Neo4j test: ORLY_NEO4J_URI not set")
-	}
-
-	ctx, cancel := context.WithCancel(context.Background())
-	defer cancel()
-
-	tempDir := t.TempDir()
-	db, err := New(ctx, cancel, tempDir, "debug")
-	if err != nil {
-		t.Fatalf("Failed to create database: %v", err)
-	}
-	defer db.Close()
-
-	<-db.Ready()
-
-	subID := "duplicate-test"
-
-	// Add first subscription
-	f1 := &filter.F{
-		Kinds: kind.NewS(kind.New(1)),
-	}
-	db.AddSubscription(subID, f1)
-
-	// Add subscription with same ID (should replace)
-	f2 := &filter.F{
-		Kinds: kind.NewS(kind.New(7)),
-	}
-	db.AddSubscription(subID, f2)
-
-	// Should still have only 1 subscription
-	count := db.GetSubscriptionCount()
-	if count != 1 {
-		t.Fatalf("Expected 1 subscription (duplicate replaced), got %d", count)
-	}
-
-	t.Logf("✓ Duplicate subscription ID handling works correctly")
-}
-
-func TestSubscriptions_RemoveNonExistent(t *testing.T) {
-	neo4jURI := os.Getenv("ORLY_NEO4J_URI")
-	if neo4jURI == "" {
-		t.Skip("Skipping Neo4j test: ORLY_NEO4J_URI not set")
-	}
-
-	ctx, cancel := context.WithCancel(context.Background())
-	defer cancel()
-
-	tempDir := t.TempDir()
-	db, err := New(ctx, cancel, tempDir, "debug")
-	if err != nil {
-		t.Fatalf("Failed to create database: %v", err)
-	}
-	defer db.Close()
-
-	<-db.Ready()
-
-	// Try to remove non-existent subscription (should not panic)
-	db.RemoveSubscription("non-existent")
-
-	// Should still have 0 subscriptions
-	count := db.GetSubscriptionCount()
-	if count != 0 {
-		t.Fatalf("Expected 0 subscriptions, got %d", count)
-	}
-
-	t.Logf("✓ Removing non-existent subscription handled gracefully")
-}
+// Note: WebSocket subscription management (AddSubscription, GetSubscriptionCount,
+// RemoveSubscription, ClearSubscriptions) is handled at the app layer, not the
+// database layer. Tests for those methods have been removed.

 func TestMarkers_SetGetDelete(t *testing.T) {
 	neo4jURI := os.Getenv("ORLY_NEO4J_URI")
@@ -371,24 +204,36 @@ func TestIdentity(t *testing.T) {

 	<-db.Ready()

+	// Wipe to ensure clean state
+	if err := db.Wipe(); err != nil {
+		t.Fatalf("Failed to wipe database: %v", err)
+	}
+
 	// Get identity (creates if not exists)
-	signer := db.Identity()
-	if signer == nil {
-		t.Fatal("Expected non-nil signer from Identity()")
+	secret1, err := db.GetOrCreateRelayIdentitySecret()
+	if err != nil {
+		t.Fatalf("Failed to get identity: %v", err)
+	}
+	if secret1 == nil {
+		t.Fatal("Expected non-nil secret from GetOrCreateRelayIdentitySecret()")
 	}

 	// Get identity again (should return same one)
-	signer2 := db.Identity()
-	if signer2 == nil {
-		t.Fatal("Expected non-nil signer from second Identity() call")
+	secret2, err := db.GetOrCreateRelayIdentitySecret()
+	if err != nil {
+		t.Fatalf("Failed to get identity second time: %v", err)
+	}
+	if secret2 == nil {
+		t.Fatal("Expected non-nil secret from second GetOrCreateRelayIdentitySecret() call")
 	}

-	// Public keys should match
-	pub1 := signer.Pub()
-	pub2 := signer2.Pub()
-	for i := range pub1 {
-		if pub1[i] != pub2[i] {
-			t.Fatal("Identity pubkeys don't match across calls")
+	// Secrets should match
+	if len(secret1) != len(secret2) {
+		t.Fatalf("Secret lengths don't match: %d vs %d", len(secret1), len(secret2))
+	}
+	for i := range secret1 {
+		if secret1[i] != secret2[i] {
+			t.Fatal("Identity secrets don't match across calls")
 		}
 	}

@@ -19,6 +19,7 @@ The policy system provides fine-grained control over event storage and retrieval
 - [Dynamic Policy Updates](#dynamic-policy-updates)
 - [Evaluation Order](#evaluation-order)
 - [Examples](#examples)
+  - [Permissive Mode Examples](#permissive-mode-examples)

 ## Overview

@@ -264,11 +265,45 @@ Validates that tag values match the specified regex patterns. Only validates tag

 | Field | Type | Description |
 |-------|------|-------------|
-| `write_allow_follows` | boolean | Grant read+write access to policy admin follows |
-| `follows_whitelist_admins` | array | Per-rule admin pubkeys whose follows are whitelisted |
+| `write_allow_follows` | boolean | **DEPRECATED.** Grant read+write access to policy admin follows |
+| `follows_whitelist_admins` | array | **DEPRECATED.** Per-rule admin pubkeys whose follows are whitelisted |
+| `read_follows_whitelist` | array | Pubkeys whose follows can READ events. Restricts read access when set. |
+| `write_follows_whitelist` | array | Pubkeys whose follows can WRITE events. Restricts write access when set. |

 See [Follows-Based Whitelisting](#follows-based-whitelisting) for details.

+#### Permissive Mode Overrides
+
+| Field | Type | Description |
+|-------|------|-------------|
+| `read_allow_permissive` | boolean | Override kind whitelist for READ access (reads allowed for all kinds) |
+| `write_allow_permissive` | boolean | Override kind whitelist for WRITE access (writes use global rule only) |
+
+These fields, when set on the **global** rule, allow independent control over read and write access relative to the kind whitelist/blacklist:
+
+```json
+{
+  "kind": {
+    "whitelist": [1, 3, 5, 7]
+  },
+  "global": {
+    "read_allow_permissive": true,
+    "size_limit": 100000
+  }
+}
+```
+
+In this example:
+- **READ**: Allowed for ALL kinds (permissive override ignores whitelist)
+- **WRITE**: Only kinds 1, 3, 5, 7 can be written (whitelist applies)
+
+**Important constraints:**
+- These flags only work on the **global** rule (ignored on kind-specific rules)
+- You cannot enable BOTH `read_allow_permissive` AND `write_allow_permissive` when a kind whitelist/blacklist is configured (this would make the whitelist meaningless)
+- Blacklists always take precedence—permissive flags do NOT override explicit blacklist entries
+
+See [Permissive Mode Examples](#permissive-mode-examples) for detailed use cases.
+
 #### Rate Limiting

 | Field | Type | Unit | Description |
@@ -350,26 +385,47 @@ P[n]Y[n]M[n]W[n]DT[n]H[n]M[n]S

 ## Access Control

-### Write Access Evaluation
+### Default-Permissive Access Model
+
+The policy system uses a **default-permissive** model for both read and write access:
+
+- **Read**: Allowed by default unless a read restriction is configured
+- **Write**: Allowed by default unless a write restriction is configured
+
+Restrictions become active when any of the following fields are set:
+
+| Access | Restrictions |
+|--------|--------------|
+| Read | `read_allow`, `read_follows_whitelist`, or `privileged` |
+| Write | `write_allow`, `write_follows_whitelist` |
+
+**Important**: `privileged` ONLY applies to READ operations.
+
+### Write Access Evaluation (Default-Permissive)

 ```
-1. If write_allow is set and pubkey NOT in list → DENY
-2. If write_deny is set and pubkey IN list → DENY
+1. Universal constraints (size, tags, age) - must pass
+2. If pubkey in write_deny → DENY
 3. If write_allow_follows enabled and pubkey in admin follows → ALLOW
-4. If follows_whitelist_admins set and pubkey in rule follows → ALLOW
-5. Continue to other checks...
+4. If write_follows_whitelist set and pubkey in follows → ALLOW
+5. If write_allow set and pubkey in list → ALLOW
+6. If ANY write restriction is set → DENY (not in any whitelist)
+7. Otherwise → ALLOW (default-permissive)
 ```

-### Read Access Evaluation
+### Read Access Evaluation (Default-Permissive)

 ```
-1. If read_allow is set and pubkey NOT in list → DENY
-2. If read_deny is set and pubkey IN list → DENY
-3. If privileged is true and pubkey NOT party to event → DENY
-4. Continue to other checks...
+1. If pubkey in read_deny → DENY
+2. If read_allow_follows enabled and pubkey in admin follows → ALLOW
+3. If read_follows_whitelist set and pubkey in follows → ALLOW
+4. If read_allow set and pubkey in list → ALLOW
+5. If privileged set and pubkey is party to event → ALLOW
+6. If ANY read restriction is set → DENY (not in any whitelist)
+7. Otherwise → ALLOW (default-permissive)
 ```

-### Privileged Events
+### Privileged Events (Read-Only)

 When `privileged: true`, only the author and p-tag recipients can access the event:

@@ -386,9 +442,37 @@ When `privileged: true`, only the author and p-tag recipients can access the eve

 ## Follows-Based Whitelisting

-There are two mechanisms for follows-based access control:
+The policy system supports whitelisting pubkeys based on follow lists (kind 3 events). There are two approaches:

-### 1. Global Policy Admin Follows
+### 1. Separate Read/Write Follows Whitelists (Recommended)
+
+Use `read_follows_whitelist` and `write_follows_whitelist` for fine-grained control:
+
+```json
+{
+  "global": {
+    "read_follows_whitelist": ["curator_pubkey_hex"],
+    "write_follows_whitelist": ["moderator_pubkey_hex"]
+  },
+  "rules": {
+    "30023": {
+      "description": "Articles - curated reading, moderated writing",
+      "read_follows_whitelist": ["article_curator_hex"],
+      "write_follows_whitelist": ["article_moderator_hex"]
+    }
+  }
+}
+```
+
+**How it works:**
+- The pubkeys listed AND their follows (from kind 3 events) can access the events
+- `read_follows_whitelist`: Restricts WHO can read (when set)
+- `write_follows_whitelist`: Restricts WHO can write (when set)
+- If not set, the default-permissive behavior applies
+
+**Important:** The relay will fail to start if the named pubkeys don't have kind 3 follow list events in the database. This ensures the follow lists are available for access control.

+### 2. Legacy: Global Policy Admin Follows (DEPRECATED)

 Enable whitelisting for all pubkeys followed by policy admins:

@@ -406,7 +490,7 @@ Enable whitelisting for all pubkeys followed by policy admins:

 When `write_allow_follows` is true, pubkeys in the policy admins' kind 3 follow lists get both read AND write access.

-### 2. Per-Rule Follows Whitelist
+### 3. Legacy: Per-Rule Follows Whitelist (DEPRECATED)

 Configure specific admins per rule:

@@ -423,18 +507,33 @@ Configure specific admins per rule:

 This allows different rules to use different admin follow lists.

-### Loading Follow Lists
+### Loading Follow Lists at Startup

-The application must load follow lists at startup:
+The application must load follow lists at startup. The new API provides separate methods:

 ```go
-// Get all admin pubkeys that need follow lists loaded
-admins := policy.GetAllFollowsWhitelistAdmins()
+// Get all pubkeys that need follow lists loaded (combines read + write + legacy)
+allPubkeys := policy.GetAllFollowsWhitelistPubkeys()

-// For each admin, load their kind 3 event and update the whitelist
-for _, adminHex := range admins {
-	follows := loadFollowsFromKind3(adminHex)
-	policy.UpdateRuleFollowsWhitelist(kind, follows)
+// Or get them separately
+readPubkeys := policy.GetAllReadFollowsWhitelistPubkeys()
+writePubkeys := policy.GetAllWriteFollowsWhitelistPubkeys()
+legacyAdmins := policy.GetAllFollowsWhitelistAdmins()
+
+// Load follows and update the policy
+for _, pubkeyHex := range readPubkeys {
+	follows := loadFollowsFromKind3(pubkeyHex)
+	// Update read follows whitelist for specific kinds
+	policy.UpdateRuleReadFollowsWhitelist(kind, follows)
+	// Or for global rule
+	policy.UpdateGlobalReadFollowsWhitelist(follows)
+}
+
+for _, pubkeyHex := range writePubkeys {
+	follows := loadFollowsFromKind3(pubkeyHex)
+	policy.UpdateRuleWriteFollowsWhitelist(kind, follows)
+	// Or for global rule
+	policy.UpdateGlobalWriteFollowsWhitelist(follows)
+}
 ```

@@ -743,6 +842,83 @@ access_allowed = (
 }
 ```

+### Permissive Mode Examples
+
+#### Read-Permissive Relay (Write-Restricted)
+
+Allow anyone to read all events, but restrict writes to specific kinds:
+
+```json
+{
+  "default_policy": "allow",
+  "kind": {
+    "whitelist": [1, 3, 7, 9735]
+  },
+  "global": {
+    "read_allow_permissive": true,
+    "size_limit": 100000
+  }
+}
+```
+
+**Behavior:**
+- **READ**: Any kind can be read (permissive override)
+- **WRITE**: Only kinds 1, 3, 7, 9735 can be written
+
+This is useful for relays that want to serve as aggregators (read any event type) but only accept specific event types from clients.
+
+#### Write-Permissive with Read Restrictions
+
+Allow writes of any kind (with global constraints), but restrict reads:
+
+```json
+{
+  "default_policy": "allow",
+  "kind": {
+    "whitelist": [0, 1, 3]
+  },
+  "global": {
+    "write_allow_permissive": true,
+    "size_limit": 50000,
+    "max_age_of_event": 86400
+  }
+}
+```
+
+**Behavior:**
+- **READ**: Only kinds 0, 1, 3 can be read (whitelist applies)
+- **WRITE**: Any kind can be written (with size and age limits from global rule)
+
+This is useful for relays that want to accept any event type but only serve a curated subset.
+
+#### Archive Relay (Read Any, Accept Specific)
+
+Perfect for archive/backup relays:
+
+```json
+{
+  "default_policy": "allow",
+  "kind": {
+    "whitelist": [0, 1, 3, 4, 7, 30023]
+  },
+  "global": {
+    "read_allow_permissive": true,
+    "size_limit": 500000
+  },
+  "rules": {
+    "30023": {
+      "description": "Long-form articles with validation",
+      "identifier_regex": "^[a-z0-9-]{1,64}$",
+      "max_expiry_duration": "P365D"
+    }
+  }
+}
+```
+
+**Behavior:**
+- **READ**: All kinds can be read (historical data)
+- **WRITE**: Only whitelisted kinds accepted, with specific rules for articles
+
 ## Testing

 ### Run Policy Tests

@@ -40,7 +40,7 @@ func BenchmarkCheckKindsPolicy(b *testing.B) {

 	b.ResetTimer()
 	for i := 0; i < b.N; i++ {
-		policy.checkKindsPolicy(1)
+		policy.checkKindsPolicy("write", 1)
 	}
 }

@@ -168,8 +168,8 @@ func TestBugReproduction_DebugPolicyFlow(t *testing.T) {
 	t.Logf("=== Policy Check Flow ===")

 	// Step 1: Check kinds policy
-	kindsAllowed := policy.checkKindsPolicy(event.Kind)
-	t.Logf("1. checkKindsPolicy(kind=%d) returned: %v", event.Kind, kindsAllowed)
+	kindsAllowed := policy.checkKindsPolicy("write", event.Kind)
+	t.Logf("1. checkKindsPolicy(access=write, kind=%d) returned: %v", event.Kind, kindsAllowed)

 	// Full policy check
 	allowed, err := policy.CheckPolicy("write", event, testPubkey, "127.0.0.1")

@@ -485,8 +485,8 @@ func (p *P) IsOwner(pubkey []byte) bool {
 		return false
 	}

-	p.policyFollowsMx.RLock()
-	defer p.policyFollowsMx.RUnlock()
+	p.followsMx.RLock()
+	defer p.followsMx.RUnlock()

 	for _, owner := range p.ownersBin {
 		if utils.FastEqual(owner, pubkey) {

686 pkg/policy/default_permissive_test.go Normal file
@@ -0,0 +1,686 @@
|
||||
package policy
|
||||
|
||||
import (
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"git.mleku.dev/mleku/nostr/encoders/event"
|
||||
"git.mleku.dev/mleku/nostr/encoders/hex"
|
||||
"git.mleku.dev/mleku/nostr/encoders/tag"
|
||||
"git.mleku.dev/mleku/nostr/interfaces/signer/p8k"
|
||||
"lol.mleku.dev/chk"
|
||||
)
|
||||
|
||||
// =============================================================================
|
||||
// Default-Permissive Access Control Tests
|
||||
// =============================================================================
|
||||
|
||||
// TestDefaultPermissiveRead tests that read access is allowed by default
|
||||
// when no read restrictions are configured.
|
||||
func TestDefaultPermissiveRead(t *testing.T) {
|
||||
// No read restrictions configured
|
||||
policyJSON := []byte(`{
|
||||
"default_policy": "deny",
|
||||
"rules": {
|
||||
"1": {
|
||||
"description": "No read restrictions"
|
||||
}
|
||||
}
|
||||
}`)
|
||||
|
||||
policy, err := New(policyJSON)
|
||||
if err != nil {
|
||||
t.Fatalf("Failed to create policy: %v", err)
|
||||
}
|
||||
|
||||
authorSigner, authorPubkey := generateTestKeypair(t)
|
||||
_, readerPubkey := generateTestKeypair(t)
|
||||
_, randomPubkey := generateTestKeypair(t)
|
||||
|
||||
ev := createTestEvent(t, authorSigner, "test content", 1)
|
||||
|
||||
tests := []struct {
|
||||
name string
|
||||
pubkey []byte
|
||||
expectAllow bool
|
||||
}{
|
||||
{
|
||||
name: "author can read (default permissive)",
|
||||
pubkey: authorPubkey,
|
||||
expectAllow: true,
|
||||
},
|
||||
{
|
||||
name: "reader can read (default permissive)",
|
||||
pubkey: readerPubkey,
|
||||
expectAllow: true,
|
||||
},
|
||||
{
|
||||
name: "random user can read (default permissive)",
|
||||
pubkey: randomPubkey,
|
||||
expectAllow: true,
|
||||
},
|
||||
{
|
||||
name: "nil pubkey can read (default permissive)",
|
||||
pubkey: nil,
|
||||
expectAllow: true,
|
||||
},
|
||||
}
|
||||
|
||||
for _, tt := range tests {
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
allowed, err := policy.CheckPolicy("read", ev, tt.pubkey, "127.0.0.1")
|
||||
if err != nil {
|
||||
t.Fatalf("CheckPolicy error: %v", err)
|
||||
}
|
||||
if allowed != tt.expectAllow {
|
||||
t.Errorf("CheckPolicy() = %v, expected %v", allowed, tt.expectAllow)
|
||||
}
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
// TestDefaultPermissiveWrite tests that write access is allowed by default
|
||||
// when no write restrictions are configured.
|
||||
func TestDefaultPermissiveWrite(t *testing.T) {
|
||||
// No write restrictions configured
|
||||
policyJSON := []byte(`{
|
||||
"default_policy": "deny",
|
||||
"rules": {
|
||||
"1": {
|
||||
"description": "No write restrictions"
|
||||
}
|
||||
}
|
||||
}`)
|
||||
|
||||
policy, err := New(policyJSON)
|
||||
if err != nil {
|
||||
t.Fatalf("Failed to create policy: %v", err)
|
||||
}
|
||||
|
||||
writerSigner, writerPubkey := generateTestKeypair(t)
|
||||
_, randomPubkey := generateTestKeypair(t)
|
||||
|
||||
tests := []struct {
|
||||
name string
|
||||
signer *p8k.Signer
|
||||
pubkey []byte
|
||||
expectAllow bool
|
||||
}{
|
||||
{
|
||||
name: "writer can write (default permissive)",
|
||||
signer: writerSigner,
|
||||
pubkey: writerPubkey,
|
||||
expectAllow: true,
|
||||
},
|
||||
{
|
||||
name: "random user can write (default permissive)",
|
||||
signer: writerSigner,
|
||||
pubkey: randomPubkey,
|
||||
expectAllow: true,
|
||||
},
|
||||
}
|
||||
|
||||
for _, tt := range tests {
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
ev := createTestEvent(t, tt.signer, "test content", 1)
|
||||
allowed, err := policy.CheckPolicy("write", ev, tt.pubkey, "127.0.0.1")
|
||||
if err != nil {
|
||||
t.Fatalf("CheckPolicy error: %v", err)
|
||||
}
|
||||
if allowed != tt.expectAllow {
|
||||
t.Errorf("CheckPolicy() = %v, expected %v", allowed, tt.expectAllow)
|
||||
}
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
// TestReadFollowsWhitelist tests the read_follows_whitelist field.
|
||||
func TestReadFollowsWhitelist(t *testing.T) {
|
||||
_, curatorPubkey := generateTestKeypair(t)
|
||||
_, followedPubkey := generateTestKeypair(t)
|
||||
_, unfollowedPubkey := generateTestKeypair(t)
|
||||
authorSigner, authorPubkey := generateTestKeypair(t)
|
||||
|
||||
curatorHex := hex.Enc(curatorPubkey)
|
||||
|
||||
policyJSON := []byte(`{
|
||||
"default_policy": "deny",
|
||||
"rules": {
|
||||
"1": {
|
||||
"description": "Only curator follows can read",
|
||||
"read_follows_whitelist": ["` + curatorHex + `"]
|
||||
}
|
||||
}
|
||||
}`)
|
||||
|
||||
policy, err := New(policyJSON)
|
||||
if err != nil {
|
||||
t.Fatalf("Failed to create policy: %v", err)
|
||||
}
|
||||
|
||||
// Simulate loading curator's follows (includes followed user and curator themselves)
|
||||
policy.UpdateRuleReadFollowsWhitelist(1, [][]byte{followedPubkey})
|
||||
|
||||
ev := createTestEvent(t, authorSigner, "test content", 1)
|
||||
|
||||
tests := []struct {
|
||||
name string
|
||||
pubkey []byte
|
||||
expectAllow bool
|
||||
}{
|
||||
{
|
||||
name: "curator can read (is in whitelist pubkeys)",
|
||||
pubkey: curatorPubkey,
|
||||
expectAllow: true,
|
||||
},
|
||||
{
|
||||
name: "followed user can read",
|
||||
pubkey: followedPubkey,
|
||||
expectAllow: true,
|
||||
},
|
||||
{
|
||||
name: "unfollowed user denied",
|
||||
pubkey: unfollowedPubkey,
|
||||
expectAllow: false,
|
||||
},
|
||||
{
|
||||
name: "author cannot read (not in follows)",
|
||||
pubkey: authorPubkey,
|
||||
expectAllow: false,
|
||||
},
|
||||
{
|
||||
name: "nil pubkey denied",
|
||||
pubkey: nil,
|
||||
expectAllow: false,
|
||||
},
|
||||
}
|
||||
|
||||
for _, tt := range tests {
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
allowed, err := policy.CheckPolicy("read", ev, tt.pubkey, "127.0.0.1")
|
||||
if err != nil {
|
||||
t.Fatalf("CheckPolicy error: %v", err)
|
||||
}
|
||||
if allowed != tt.expectAllow {
|
||||
t.Errorf("CheckPolicy() = %v, expected %v", allowed, tt.expectAllow)
|
||||
}
|
||||
})
|
||||
}
|
||||
|
||||
// Verify write is still default-permissive (no write restriction)
|
||||
t.Run("write is still default permissive", func(t *testing.T) {
|
||||
allowed, err := policy.CheckPolicy("write", ev, unfollowedPubkey, "127.0.0.1")
|
||||
if err != nil {
|
||||
t.Fatalf("CheckPolicy error: %v", err)
|
||||
}
|
||||
if !allowed {
|
||||
t.Error("Expected write to be allowed (no write restriction)")
|
||||
}
|
||||
})
|
||||
}
|
||||
|
||||
// TestWriteFollowsWhitelist tests the write_follows_whitelist field.
func TestWriteFollowsWhitelist(t *testing.T) {
	moderatorSigner, moderatorPubkey := generateTestKeypair(t)
	followedSigner, followedPubkey := generateTestKeypair(t)
	unfollowedSigner, unfollowedPubkey := generateTestKeypair(t)

	moderatorHex := hex.Enc(moderatorPubkey)

	policyJSON := []byte(`{
		"default_policy": "deny",
		"rules": {
			"1": {
				"description": "Only moderator follows can write",
				"write_follows_whitelist": ["` + moderatorHex + `"]
			}
		}
	}`)

	policy, err := New(policyJSON)
	if err != nil {
		t.Fatalf("Failed to create policy: %v", err)
	}

	// Simulate loading moderator's follows
	policy.UpdateRuleWriteFollowsWhitelist(1, [][]byte{followedPubkey})

	tests := []struct {
		name        string
		signer      *p8k.Signer
		pubkey      []byte
		expectAllow bool
	}{
		{
			name:        "moderator can write (is in whitelist pubkeys)",
			signer:      moderatorSigner,
			pubkey:      moderatorPubkey,
			expectAllow: true,
		},
		{
			name:        "followed user can write",
			signer:      followedSigner,
			pubkey:      followedPubkey,
			expectAllow: true,
		},
		{
			name:        "unfollowed user denied",
			signer:      unfollowedSigner,
			pubkey:      unfollowedPubkey,
			expectAllow: false,
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			ev := createTestEvent(t, tt.signer, "test content", 1)
			allowed, err := policy.CheckPolicy("write", ev, tt.pubkey, "127.0.0.1")
			if err != nil {
				t.Fatalf("CheckPolicy error: %v", err)
			}
			if allowed != tt.expectAllow {
				t.Errorf("CheckPolicy() = %v, expected %v", allowed, tt.expectAllow)
			}
		})
	}

	// Verify read is still default-permissive (no read restriction)
	t.Run("read is still default permissive", func(t *testing.T) {
		ev := createTestEvent(t, unfollowedSigner, "test content", 1)
		allowed, err := policy.CheckPolicy("read", ev, unfollowedPubkey, "127.0.0.1")
		if err != nil {
			t.Fatalf("CheckPolicy error: %v", err)
		}
		if !allowed {
			t.Error("Expected read to be allowed (no read restriction)")
		}
	})
}

||||
// TestGlobalReadFollowsWhitelist tests read_follows_whitelist in global rule.
func TestGlobalReadFollowsWhitelist(t *testing.T) {
	_, curatorPubkey := generateTestKeypair(t)
	_, followedPubkey := generateTestKeypair(t)
	_, unfollowedPubkey := generateTestKeypair(t)
	authorSigner, _ := generateTestKeypair(t)

	curatorHex := hex.Enc(curatorPubkey)

	policyJSON := []byte(`{
		"default_policy": "deny",
		"global": {
			"description": "Global read follows whitelist",
			"read_follows_whitelist": ["` + curatorHex + `"]
		}
	}`)

	policy, err := New(policyJSON)
	if err != nil {
		t.Fatalf("Failed to create policy: %v", err)
	}

	// Update global read follows whitelist
	policy.UpdateGlobalReadFollowsWhitelist([][]byte{followedPubkey})

	// Test with kind 1
	t.Run("kind 1", func(t *testing.T) {
		ev := createTestEvent(t, authorSigner, "test content", 1)

		// Followed user can read
		allowed, err := policy.CheckPolicy("read", ev, followedPubkey, "127.0.0.1")
		if err != nil {
			t.Fatalf("CheckPolicy error: %v", err)
		}
		if !allowed {
			t.Error("Expected followed user to be allowed to read")
		}

		// Unfollowed user denied
		allowed, err = policy.CheckPolicy("read", ev, unfollowedPubkey, "127.0.0.1")
		if err != nil {
			t.Fatalf("CheckPolicy error: %v", err)
		}
		if allowed {
			t.Error("Expected unfollowed user to be denied")
		}
	})
}

// TestGlobalWriteFollowsWhitelist tests write_follows_whitelist in global rule.
func TestGlobalWriteFollowsWhitelist(t *testing.T) {
	_, moderatorPubkey := generateTestKeypair(t)
	followedSigner, followedPubkey := generateTestKeypair(t)
	unfollowedSigner, unfollowedPubkey := generateTestKeypair(t)

	moderatorHex := hex.Enc(moderatorPubkey)

	policyJSON := []byte(`{
		"default_policy": "deny",
		"global": {
			"description": "Global write follows whitelist",
			"write_follows_whitelist": ["` + moderatorHex + `"]
		}
	}`)

	policy, err := New(policyJSON)
	if err != nil {
		t.Fatalf("Failed to create policy: %v", err)
	}

	// Update global write follows whitelist
	policy.UpdateGlobalWriteFollowsWhitelist([][]byte{followedPubkey})

	// Test with kind 1
	t.Run("kind 1", func(t *testing.T) {
		// Followed user can write
		ev := createTestEvent(t, followedSigner, "test content", 1)
		allowed, err := policy.CheckPolicy("write", ev, followedPubkey, "127.0.0.1")
		if err != nil {
			t.Fatalf("CheckPolicy error: %v", err)
		}
		if !allowed {
			t.Error("Expected followed user to be allowed to write")
		}

		// Unfollowed user denied
		ev = createTestEvent(t, unfollowedSigner, "test content", 1)
		allowed, err = policy.CheckPolicy("write", ev, unfollowedPubkey, "127.0.0.1")
		if err != nil {
			t.Fatalf("CheckPolicy error: %v", err)
		}
		if allowed {
			t.Error("Expected unfollowed user to be denied")
		}
	})
}

// TestPrivilegedOnlyAppliesToReadDP tests that privileged only affects read access.
func TestPrivilegedOnlyAppliesToReadDP(t *testing.T) {
	authorSigner, authorPubkey := generateTestKeypair(t)
	_, recipientPubkey := generateTestKeypair(t)
	thirdPartySigner, thirdPartyPubkey := generateTestKeypair(t)

	policyJSON := []byte(`{
		"default_policy": "deny",
		"rules": {
			"4": {
				"description": "Encrypted DMs - privileged",
				"privileged": true
			}
		}
	}`)

	policy, err := New(policyJSON)
	if err != nil {
		t.Fatalf("Failed to create policy: %v", err)
	}

	// Create event with p-tag for recipient
	ev := event.New()
	ev.Kind = 4
	ev.Content = []byte("encrypted content")
	ev.CreatedAt = time.Now().Unix()
	ev.Tags = tag.NewS()
	pTag := tag.NewFromAny("p", hex.Enc(recipientPubkey))
	ev.Tags.Append(pTag)
	if err := ev.Sign(authorSigner); chk.E(err) {
		t.Fatalf("Failed to sign event: %v", err)
	}

	// READ tests
	t.Run("author can read", func(t *testing.T) {
		allowed, err := policy.CheckPolicy("read", ev, authorPubkey, "127.0.0.1")
		if err != nil {
			t.Fatalf("CheckPolicy error: %v", err)
		}
		if !allowed {
			t.Error("Expected author to be allowed to read")
		}
	})

	t.Run("recipient can read", func(t *testing.T) {
		allowed, err := policy.CheckPolicy("read", ev, recipientPubkey, "127.0.0.1")
		if err != nil {
			t.Fatalf("CheckPolicy error: %v", err)
		}
		if !allowed {
			t.Error("Expected recipient to be allowed to read")
		}
	})

	t.Run("third party cannot read", func(t *testing.T) {
		allowed, err := policy.CheckPolicy("read", ev, thirdPartyPubkey, "127.0.0.1")
		if err != nil {
			t.Fatalf("CheckPolicy error: %v", err)
		}
		if allowed {
			t.Error("Expected third party to be denied read access")
		}
	})

	// WRITE tests - privileged should NOT affect write
	t.Run("third party CAN write (privileged doesn't affect write)", func(t *testing.T) {
		ev := createTestEvent(t, thirdPartySigner, "test content", 4)
		allowed, err := policy.CheckPolicy("write", ev, thirdPartyPubkey, "127.0.0.1")
		if err != nil {
			t.Fatalf("CheckPolicy error: %v", err)
		}
		if !allowed {
			t.Error("Expected third party to be allowed to write (privileged doesn't restrict write)")
		}
	})
}

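The privileged test pins down an asymmetry: the flag gates who may read an event (author and p-tagged recipients only) but never restricts writes. A minimal sketch of that rule, assuming a hypothetical event shape with string pubkeys:

```go
package main

import "fmt"

// event is a hypothetical stand-in for the real event type, reduced to the
// fields the privileged check needs.
type event struct {
	author string
	pTags  []string // pubkeys from "p" tags
}

// privilegedAllows sketches the semantics the test asserts: reads are limited
// to the author and p-tagged recipients; writes are unaffected by the flag.
func privilegedAllows(access string, ev event, requester string) bool {
	if access != "read" {
		return true // privileged never restricts writes
	}
	if requester == ev.author {
		return true
	}
	for _, p := range ev.pTags {
		if p == requester {
			return true
		}
	}
	return false
}

func main() {
	ev := event{author: "alice", pTags: []string{"bob"}}
	fmt.Println(privilegedAllows("read", ev, "bob"))    // recipient can read
	fmt.Println(privilegedAllows("read", ev, "carol"))  // third party cannot
	fmt.Println(privilegedAllows("write", ev, "carol")) // write unaffected
}
```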
// TestCombinedReadWriteFollowsWhitelists tests using both whitelists on same rule.
func TestCombinedReadWriteFollowsWhitelists(t *testing.T) {
	_, curatorPubkey := generateTestKeypair(t)
	_, moderatorPubkey := generateTestKeypair(t)
	readerSigner, readerPubkey := generateTestKeypair(t)
	writerSigner, writerPubkey := generateTestKeypair(t)
	_, outsiderPubkey := generateTestKeypair(t)

	curatorHex := hex.Enc(curatorPubkey)
	moderatorHex := hex.Enc(moderatorPubkey)

	policyJSON := []byte(`{
		"default_policy": "deny",
		"rules": {
			"30023": {
				"description": "Articles - different read/write follows",
				"read_follows_whitelist": ["` + curatorHex + `"],
				"write_follows_whitelist": ["` + moderatorHex + `"]
			}
		}
	}`)

	policy, err := New(policyJSON)
	if err != nil {
		t.Fatalf("Failed to create policy: %v", err)
	}

	// Curator follows reader, moderator follows writer
	policy.UpdateRuleReadFollowsWhitelist(30023, [][]byte{readerPubkey})
	policy.UpdateRuleWriteFollowsWhitelist(30023, [][]byte{writerPubkey})

	tests := []struct {
		name        string
		access      string
		signer      *p8k.Signer
		pubkey      []byte
		expectAllow bool
	}{
		// Read tests
		{
			name:        "reader can read",
			access:      "read",
			signer:      readerSigner,
			pubkey:      readerPubkey,
			expectAllow: true,
		},
		{
			name:        "writer cannot read (not in read follows)",
			access:      "read",
			signer:      writerSigner,
			pubkey:      writerPubkey,
			expectAllow: false,
		},
		{
			name:        "outsider cannot read",
			access:      "read",
			signer:      readerSigner,
			pubkey:      outsiderPubkey,
			expectAllow: false,
		},
		// Write tests
		{
			name:        "writer can write",
			access:      "write",
			signer:      writerSigner,
			pubkey:      writerPubkey,
			expectAllow: true,
		},
		{
			name:        "reader cannot write (not in write follows)",
			access:      "write",
			signer:      readerSigner,
			pubkey:      readerPubkey,
			expectAllow: false,
		},
		{
			name:        "outsider cannot write",
			access:      "write",
			signer:      readerSigner,
			pubkey:      outsiderPubkey,
			expectAllow: false,
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			ev := createTestEvent(t, tt.signer, "test content", 30023)
			allowed, err := policy.CheckPolicy(tt.access, ev, tt.pubkey, "127.0.0.1")
			if err != nil {
				t.Fatalf("CheckPolicy error: %v", err)
			}
			if allowed != tt.expectAllow {
				t.Errorf("CheckPolicy() = %v, expected %v", allowed, tt.expectAllow)
			}
		})
	}
}

// TestReadAllowWithReadFollowsWhitelist tests combining read_allow and read_follows_whitelist.
func TestReadAllowWithReadFollowsWhitelist(t *testing.T) {
	_, curatorPubkey := generateTestKeypair(t)
	_, followedPubkey := generateTestKeypair(t)
	_, explicitPubkey := generateTestKeypair(t)
	_, outsiderPubkey := generateTestKeypair(t)
	authorSigner, _ := generateTestKeypair(t)

	curatorHex := hex.Enc(curatorPubkey)
	explicitHex := hex.Enc(explicitPubkey)

	policyJSON := []byte(`{
		"default_policy": "deny",
		"rules": {
			"1": {
				"description": "Read via follows OR explicit allow",
				"read_follows_whitelist": ["` + curatorHex + `"],
				"read_allow": ["` + explicitHex + `"]
			}
		}
	}`)

	policy, err := New(policyJSON)
	if err != nil {
		t.Fatalf("Failed to create policy: %v", err)
	}

	policy.UpdateRuleReadFollowsWhitelist(1, [][]byte{followedPubkey})

	ev := createTestEvent(t, authorSigner, "test content", 1)

	tests := []struct {
		name        string
		pubkey      []byte
		expectAllow bool
	}{
		{
			name:        "followed user can read",
			pubkey:      followedPubkey,
			expectAllow: true,
		},
		{
			name:        "explicit allow user can read",
			pubkey:      explicitPubkey,
			expectAllow: true,
		},
		{
			name:        "curator can read (is whitelist pubkey)",
			pubkey:      curatorPubkey,
			expectAllow: true,
		},
		{
			name:        "outsider denied",
			pubkey:      outsiderPubkey,
			expectAllow: false,
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			allowed, err := policy.CheckPolicy("read", ev, tt.pubkey, "127.0.0.1")
			if err != nil {
				t.Fatalf("CheckPolicy error: %v", err)
			}
			if allowed != tt.expectAllow {
				t.Errorf("CheckPolicy() = %v, expected %v", allowed, tt.expectAllow)
			}
		})
	}
}

// TestGetAllFollowsWhitelistPubkeysDP tests the combined pubkey retrieval.
func TestGetAllFollowsWhitelistPubkeysDP(t *testing.T) {
	read1 := "1111111111111111111111111111111111111111111111111111111111111111"
	read2 := "2222222222222222222222222222222222222222222222222222222222222222"
	write1 := "3333333333333333333333333333333333333333333333333333333333333333"
	legacy := "4444444444444444444444444444444444444444444444444444444444444444"

	policyJSON := []byte(`{
		"default_policy": "allow",
		"global": {
			"read_follows_whitelist": ["` + read1 + `"],
			"write_follows_whitelist": ["` + write1 + `"]
		},
		"rules": {
			"1": {
				"read_follows_whitelist": ["` + read2 + `"],
				"follows_whitelist_admins": ["` + legacy + `"]
			}
		}
	}`)

	policy, err := New(policyJSON)
	if err != nil {
		t.Fatalf("Failed to create policy: %v", err)
	}

	allPubkeys := policy.GetAllFollowsWhitelistPubkeys()
	if len(allPubkeys) != 4 {
		t.Errorf("Expected 4 unique pubkeys, got %d", len(allPubkeys))
	}

	// Check each is present
	pubkeySet := make(map[string]bool)
	for _, pk := range allPubkeys {
		pubkeySet[pk] = true
	}

	expected := []string{read1, read2, write1, legacy}
	for _, exp := range expected {
		if !pubkeySet[exp] {
			t.Errorf("Expected pubkey %s not found", exp)
		}
	}
}
@@ -1091,9 +1091,12 @@ func TestAllNewFieldsCombined(t *testing.T) {
}

// Test new fields in global rule
// Global rule is ONLY used as fallback when NO kind-specific rule exists.
// If a kind-specific rule exists (even if empty), it takes precedence and global is ignored.
func TestNewFieldsInGlobalRule(t *testing.T) {
	signer, pubkey := generateTestKeypair(t)

	// Policy with global constraints and a kind-specific rule for kind 1
	policyJSON := []byte(`{
		"default_policy": "allow",
		"global": {
@@ -1102,7 +1105,7 @@ func TestNewFieldsInGlobalRule(t *testing.T) {
		},
		"rules": {
			"1": {
				"description": "Kind 1 events"
				"description": "Kind 1 events - has specific rule, so global is ignored"
			}
		}
	}`)
@@ -1112,7 +1115,8 @@ func TestNewFieldsInGlobalRule(t *testing.T) {
		t.Fatalf("Failed to create policy: %v", err)
	}

	// Event without protected tag should fail global rule
	// Kind 1 has a specific rule, so global protected_required is IGNORED
	// Event should be ALLOWED even without protected tag
	ev := createTestEventForNewFields(t, signer, "test", 1)
	addTagString(ev, "expiration", int64ToString(ev.CreatedAt+3600))
	if err := ev.Sign(signer); chk.E(err) {
@@ -1124,23 +1128,39 @@ func TestNewFieldsInGlobalRule(t *testing.T) {
		t.Fatalf("CheckPolicy error: %v", err)
	}

	if allowed {
		t.Error("Global protected_required should deny event without - tag")
	if !allowed {
		t.Error("Kind 1 has specific rule - global protected_required should be ignored, event should be allowed")
	}

	// Add protected tag
	addTagString(ev, "-", "")
	if err := ev.Sign(signer); chk.E(err) {
	// Now test kind 999 which has NO specific rule - global should apply
	ev2 := createTestEventForNewFields(t, signer, "test", 999)
	addTagString(ev2, "expiration", int64ToString(ev2.CreatedAt+3600))
	if err := ev2.Sign(signer); chk.E(err) {
		t.Fatalf("Failed to sign: %v", err)
	}

	allowed, err = policy.CheckPolicy("write", ev, pubkey, "127.0.0.1")
	allowed, err = policy.CheckPolicy("write", ev2, pubkey, "127.0.0.1")
	if err != nil {
		t.Fatalf("CheckPolicy error: %v", err)
	}

	if allowed {
		t.Error("Kind 999 has NO specific rule - global protected_required should apply, event should be denied")
	}

	// Add protected tag to kind 999 event - should now be allowed
	addTagString(ev2, "-", "")
	if err := ev2.Sign(signer); chk.E(err) {
		t.Fatalf("Failed to sign: %v", err)
	}

	allowed, err = policy.CheckPolicy("write", ev2, pubkey, "127.0.0.1")
	if err != nil {
		t.Fatalf("CheckPolicy error: %v", err)
	}

	if !allowed {
		t.Error("Should allow event with protected tag and valid expiry")
		t.Error("Kind 999 with protected tag and valid expiry should be allowed by global rule")
	}
}

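The comments in the hunk above spell out a precedence rule: a kind-specific rule, even an empty one, shadows the global rule entirely, and global only applies to kinds with no rule of their own. A minimal sketch of that lookup, with a hypothetical reduced rule type:

```go
package main

import "fmt"

// rule is a hypothetical stand-in carrying only the field the test exercises.
type rule struct {
	protectedRequired bool
}

// effectiveRule sketches the fallback the test asserts: a kind-specific rule
// wins outright (global is ignored, not merged); global applies otherwise.
func effectiveRule(rules map[int]rule, global rule, kind int) rule {
	if r, ok := rules[kind]; ok {
		return r // specific rule shadows global entirely
	}
	return global
}

func main() {
	rules := map[int]rule{1: {}} // kind 1 has an (empty) specific rule
	global := rule{protectedRequired: true}
	fmt.Println(effectiveRule(rules, global, 1).protectedRequired)   // false
	fmt.Println(effectiveRule(rules, global, 999).protectedRequired) // true
}
```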
@@ -1331,6 +1351,57 @@ func TestValidateJSONNewFields(t *testing.T) {
			}`,
			expectError: false,
		},
		// Tests for read_allow_permissive and write_allow_permissive
		{
			name: "valid read_allow_permissive alone with whitelist",
			json: `{
				"kind": {"whitelist": [1, 3, 5]},
				"global": {"read_allow_permissive": true}
			}`,
			expectError: false,
		},
		{
			name: "valid write_allow_permissive alone with whitelist",
			json: `{
				"kind": {"whitelist": [1, 3, 5]},
				"global": {"write_allow_permissive": true}
			}`,
			expectError: false,
		},
		{
			name: "invalid both permissive flags with whitelist",
			json: `{
				"kind": {"whitelist": [1, 3, 5]},
				"global": {
					"read_allow_permissive": true,
					"write_allow_permissive": true
				}
			}`,
			expectError: true,
			errorMatch:  "read_allow_permissive and write_allow_permissive cannot be enabled together",
		},
		{
			name: "invalid both permissive flags with blacklist",
			json: `{
				"kind": {"blacklist": [2, 4, 6]},
				"global": {
					"read_allow_permissive": true,
					"write_allow_permissive": true
				}
			}`,
			expectError: true,
			errorMatch:  "read_allow_permissive and write_allow_permissive cannot be enabled together",
		},
		{
			name: "valid both permissive flags without any kind restriction",
			json: `{
				"global": {
					"read_allow_permissive": true,
					"write_allow_permissive": true
				}
			}`,
			expectError: false,
		},
	}

	for _, tt := range tests {

File diff suppressed because it is too large
@@ -146,6 +146,7 @@ func TestCheckKindsPolicy(t *testing.T) {
	tests := []struct {
		name     string
		policy   *P
		access   string // "read" or "write"
		kind     uint16
		expected bool
	}{
@@ -155,6 +156,7 @@ func TestCheckKindsPolicy(t *testing.T) {
				Kind:  Kinds{},
				rules: map[int]Rule{}, // No rules defined
			},
			access:   "write",
			kind:     1,
			expected: true, // Should be allowed (no rules = allow all kinds)
		},
@@ -166,6 +168,7 @@ func TestCheckKindsPolicy(t *testing.T) {
					2: {Description: "Rule for kind 2"},
				},
			},
			access:   "write",
			kind:     1,
			expected: false, // Should be denied (implicit whitelist, no rule for kind 1)
		},
@@ -177,6 +180,7 @@ func TestCheckKindsPolicy(t *testing.T) {
					1: {Description: "Rule for kind 1"},
				},
			},
			access:   "write",
			kind:     1,
			expected: true, // Should be allowed (has rule)
		},
@@ -189,6 +193,7 @@ func TestCheckKindsPolicy(t *testing.T) {
				},
				rules: map[int]Rule{}, // No specific rules
			},
			access:   "write",
			kind:     1,
			expected: true, // Should be allowed (global rule exists)
		},
@@ -199,6 +204,7 @@ func TestCheckKindsPolicy(t *testing.T) {
					Whitelist: []int{1, 3, 5},
				},
			},
			access:   "write",
			kind:     1,
			expected: true,
		},
@@ -209,6 +215,7 @@ func TestCheckKindsPolicy(t *testing.T) {
					Whitelist: []int{1, 3, 5},
				},
			},
			access:   "write",
			kind:     2,
			expected: false,
		},
@@ -222,6 +229,7 @@ func TestCheckKindsPolicy(t *testing.T) {
					3: {Description: "Rule for kind 3"}, // Has at least one rule
				},
			},
			access:   "write",
			kind:     1,
			expected: false, // Should be denied (not blacklisted but no rule for kind 1)
		},
@@ -235,6 +243,7 @@ func TestCheckKindsPolicy(t *testing.T) {
					1: {Description: "Rule for kind 1"},
				},
			},
			access:   "write",
			kind:     1,
			expected: true, // Should be allowed (not blacklisted and has rule)
		},
@@ -245,6 +254,7 @@ func TestCheckKindsPolicy(t *testing.T) {
					Blacklist: []int{2, 4, 6},
				},
			},
			access:   "write",
			kind:     2,
			expected: false,
		},
@@ -256,14 +266,87 @@ func TestCheckKindsPolicy(t *testing.T) {
					Blacklist: []int{1, 2, 3},
				},
			},
			access:   "write",
			kind:     1,
			expected: true,
		},
		// Tests for new permissive flags
		{
			name: "read_allow_permissive - allows read for non-whitelisted kind",
			policy: &P{
				Kind: Kinds{
					Whitelist: []int{1, 3, 5},
				},
				Global: Rule{
					ReadAllowPermissive: true,
				},
			},
			access:   "read",
			kind:     2,
			expected: true, // Should be allowed (read permissive overrides whitelist)
		},
		{
			name: "read_allow_permissive - write still blocked for non-whitelisted kind",
			policy: &P{
				Kind: Kinds{
					Whitelist: []int{1, 3, 5},
				},
				Global: Rule{
					ReadAllowPermissive: true,
				},
			},
			access:   "write",
			kind:     2,
			expected: false, // Should be denied (only read is permissive)
		},
		{
			name: "write_allow_permissive - allows write for non-whitelisted kind",
			policy: &P{
				Kind: Kinds{
					Whitelist: []int{1, 3, 5},
				},
				Global: Rule{
					WriteAllowPermissive: true,
				},
			},
			access:   "write",
			kind:     2,
			expected: true, // Should be allowed (write permissive overrides whitelist)
		},
		{
			name: "write_allow_permissive - read still blocked for non-whitelisted kind",
			policy: &P{
				Kind: Kinds{
					Whitelist: []int{1, 3, 5},
				},
				Global: Rule{
					WriteAllowPermissive: true,
				},
			},
			access:   "read",
			kind:     2,
			expected: false, // Should be denied (only write is permissive)
		},
		{
			name: "blacklist - permissive flags do NOT override blacklist",
			policy: &P{
				Kind: Kinds{
					Blacklist: []int{2, 4, 6},
				},
				Global: Rule{
					ReadAllowPermissive:  true,
					WriteAllowPermissive: true,
				},
			},
			access:   "write",
			kind:     2,
			expected: false, // Should be denied (blacklist always applies)
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			result := tt.policy.checkKindsPolicy(tt.kind)
			result := tt.policy.checkKindsPolicy(tt.access, tt.kind)
			if result != tt.expected {
				t.Errorf("Expected %v, got %v", tt.expected, result)
			}
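Taken together, the table above asserts a precedence order for the kind check: the blacklist always denies, a whitelist (or implicit whitelist from kind-specific rules) allows listed kinds, and the per-access permissive flags relax the whitelist only for their own access direction. A minimal sketch of that logic, with hypothetical reduced types:

```go
package main

import "fmt"

// kinds and rule are hypothetical stand-ins for the policy types, carrying
// only the fields the precedence check needs.
type kinds struct {
	whitelist []int
	blacklist []int
}

type rule struct {
	readAllowPermissive  bool
	writeAllowPermissive bool
}

// checkKindsPolicy sketches the precedence the tests assert: blacklist always
// denies; no whitelist means all kinds pass; a non-whitelisted kind passes
// only via the permissive flag matching the access direction.
func checkKindsPolicy(k kinds, global rule, access string, kind int) bool {
	for _, b := range k.blacklist {
		if b == kind {
			return false // blacklist always applies
		}
	}
	if len(k.whitelist) == 0 {
		return true // no whitelist: all kinds pass this check
	}
	for _, w := range k.whitelist {
		if w == kind {
			return true
		}
	}
	// Not whitelisted: only the matching permissive flag lets it through.
	if access == "read" {
		return global.readAllowPermissive
	}
	return global.writeAllowPermissive
}

func main() {
	k := kinds{whitelist: []int{1, 3, 5}}
	g := rule{readAllowPermissive: true}
	fmt.Println(checkKindsPolicy(k, g, "read", 2))  // true: read is permissive
	fmt.Println(checkKindsPolicy(k, g, "write", 2)) // false: write is not
}
```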
@@ -996,19 +1079,19 @@ func TestEdgeCasesWhitelistBlacklistConflict(t *testing.T) {
	}

	// Test kind in both whitelist and blacklist - whitelist should win
	allowed := policy.checkKindsPolicy(1)
	allowed := policy.checkKindsPolicy("write", 1)
	if !allowed {
		t.Error("Expected whitelist to override blacklist")
	}

	// Test kind in blacklist but not whitelist
	allowed = policy.checkKindsPolicy(2)
	allowed = policy.checkKindsPolicy("write", 2)
	if allowed {
		t.Error("Expected kind in blacklist but not whitelist to be blocked")
	}

	// Test kind in whitelist but not blacklist
	allowed = policy.checkKindsPolicy(5)
	allowed = policy.checkKindsPolicy("write", 5)
	if !allowed {
		t.Error("Expected kind in whitelist to be allowed")
	}

@@ -1 +1 @@
v0.32.3
v0.32.7