Compare commits

...

9 Commits

9d6280eab1 Fix duplicate REPORTS relationships in Neo4j backend (v0.36.1)
Some checks failed
Go / build-and-release (push) Has been cancelled
- Change processReport() to use MERGE instead of CREATE for REPORTS
  relationships, deduplicating by (reporter, reported, report_type)
- Add ON CREATE/ON MATCH clauses to preserve newest event data while
  preventing duplicate relationships
- Add getExistingReportEvent() helper to check for existing reports
- Add markReportEventSuperseded() to track superseded events
- Add v4 migration migrateDeduplicateReports() to clean up existing
  duplicate REPORTS relationships in databases
- Add comprehensive tests: TestReportDeduplication with subtests for
  deduplication, different types, and superseded event tracking
- Update WOT_SPEC.md with REPORTS deduplication behavior and correct
  property names (report_type, created_at, created_by_event)
- Bump version to v0.36.1

Fixes: https://git.nostrdev.com/mleku/next.orly.dev/issues/16

Files modified:
- pkg/neo4j/social-event-processor.go: MERGE-based deduplication
- pkg/neo4j/migrations.go: v4 migration for duplicate cleanup
- pkg/neo4j/social-event-processor_test.go: Deduplication tests
- pkg/neo4j/WOT_SPEC.md: Updated REPORTS documentation
- pkg/version/version: Bump to v0.36.1

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-16 10:13:15 +01:00
96bdf5cba2 Implement Tag-based e/p model for Neo4j backend (v0.36.0)
- Add unified Tag-based model where e/p tags create intermediate Tag nodes
  with REFERENCES relationships to Event/NostrUser nodes
- Update save-event.go: addPTagsInBatches and addETagsInBatches now create
  Tag nodes with TAGGED_WITH and REFERENCES relationships
- Update delete.go: CheckForDeleted uses Tag traversal for kind 5 detection
- Add v3 migration in migrations.go to convert existing direct REFERENCES
  and MENTIONS relationships to the new Tag-based model
- Create comprehensive test file tag_model_test.go with 15+ test functions
  covering Tag model, filter queries, migrations, and deletion detection
- Update save-event_test.go to verify new Tag-based relationship patterns
- Update WOT_SPEC.md with Tag-Based References documentation section
- Update CLAUDE.md and README.md with Neo4j Tag-based model documentation
- Bump version to v0.36.0

This change enables #e and #p filter queries to work correctly by storing
all tags (including e/p) through intermediate Tag nodes.

Files modified:
- pkg/neo4j/save-event.go: Tag-based e/p relationship creation
- pkg/neo4j/delete.go: Tag traversal for deletion detection
- pkg/neo4j/migrations.go: v3 migration for existing data
- pkg/neo4j/tag_model_test.go: New comprehensive test file
- pkg/neo4j/save-event_test.go: Updated for new model
- pkg/neo4j/WOT_SPEC.md: Tag-Based References documentation
- pkg/neo4j/README.md: Architecture and example queries
- CLAUDE.md: Repository documentation update
- pkg/version/version: Bump to v0.36.0

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-16 09:22:05 +01:00
516ce9c42c Add issue templates, CI workflows, and decentralization plan
- Add Gitea issue templates for bug reports and feature requests with
  structured YAML forms for version, database backend, and log level
- Add GitHub Actions CI workflow for automated testing on push/PR
- Add GitHub Actions release workflow for building multi-platform
  binaries on tag push with SHA256 checksums
- Add CONTRIBUTING.md with development setup, PR guidelines, and
  commit message format documentation
- Add DECENTRALIZE_NOSTR.md expansion plan outlining WireGuard tunnel,
  GUI installer, system tray, and proxy server architecture
- Update allowed commands in Claude settings
- Bump version to v0.35.5

Files modified:
- .gitea/issue_template/: Bug report, feature request, and config YAML
- .github/workflows/: CI and release automation workflows
- CONTRIBUTING.md: New contributor guide
- docs/plans/DECENTRALIZE_NOSTR.md: Expansion architecture plan
- .claude/settings.local.json: Updated allowed commands
- pkg/version/version: Version bump to v0.35.5

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-14 20:50:49 +01:00
ed95947971 Add release command and bump version to v0.35.4
- Add .claude/commands/release.md slash command for automated release
  workflow with version bumping, commit creation, tagging, and push
- Supports patch and minor version increments with proper validation
- Includes build verification step before committing
- Update settings.local.json with allowed commands from previous session
- Bump version from v0.35.3 to v0.35.4

Files modified:
- .claude/commands/release.md: New release automation command
- .claude/settings.local.json: Updated allowed commands
- pkg/version/version: Version bump to v0.35.4

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-14 19:50:13 +01:00
b58b91cd14 Add ORLY_POLICY_PATH for custom policy file location
- Add ORLY_POLICY_PATH environment variable to configure custom policy
  file path, overriding the default ~/.config/ORLY/policy.json location
- Enforce ABSOLUTE paths only - the relay panics on startup if a relative
  path is provided, preventing common misconfiguration errors
- Update PolicyManager to store and expose configPath for hot-reload saves
- Add ConfigPath() method to P struct delegating to internal PolicyManager
- Update NewWithManager() signature to accept optional custom path parameter
- Add BUG_REPORTS_AND_FEATURE_REQUEST_PROTOCOL.md with issue submission
  guidelines requiring environment details, reproduction steps, and logs
- Update README.md with system requirements (500MB minimum memory) and
  link to bug report protocol
- Update CLAUDE.md and README.md documentation for new ORLY_POLICY_PATH

Files modified:
- app/config/config.go: Add PolicyPath config field
- pkg/policy/policy.go: Add configPath storage and validation
- app/handle-policy-config.go: Use policyManager.ConfigPath()
- app/main.go: Pass cfg.PolicyPath to NewWithManager
- pkg/policy/*_test.go: Update test calls with new parameter
- BUG_REPORTS_AND_FEATURE_REQUEST_PROTOCOL.md: New file
- README.md, CLAUDE.md: Documentation updates

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-14 18:36:04 +01:00
20293046d3 update nostr library version for scheme handling fix
2025-12-14 08:25:12 +01:00
a6d969d7e9 bump version
2025-12-14 08:20:41 +01:00
a5dc827e15 Fix NIP-11 fetch URL scheme conversion for non-proxied relays
- Convert wss:// to https:// and ws:// to http:// before fetching NIP-11
  documents, fixing failures for users not using HTTPS upgrade proxies
- The fetchNIP11 function was using WebSocket URLs directly for HTTP
  requests, causing scheme mismatch errors

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-14 08:20:09 +01:00
be81b3320e rate limiter test report 2025-12-12 21:59:00 +01:00
37 changed files with 3324 additions and 95 deletions

`.claude/commands/release.md` (new file, 50 lines)
# Release Command
Review all changes in the repository and create a release with proper commit message, version tag, and push to remotes.
## Argument: $ARGUMENTS
The argument should be one of:
- `patch` - Bump the patch version (e.g., v0.35.3 -> v0.35.4)
- `minor` - Bump the minor version and reset patch to 0 (e.g., v0.35.3 -> v0.36.0)
If no argument is provided, default to `patch`.
## Steps to perform:
1. **Read the current version** from `pkg/version/version`
2. **Calculate the new version** based on the argument:
- Parse the current version (format: vMAJOR.MINOR.PATCH)
- If `patch`: increment PATCH by 1
- If `minor`: increment MINOR by 1, set PATCH to 0
3. **Update the version file** (`pkg/version/version`) with the new version
4. **Review changes** using `git status` and `git diff --stat HEAD`
5. **Compose a commit message** following this format:
- First line: 72 chars max, imperative mood summary
- Blank line
- Bullet points describing each significant change
- "Files modified:" section listing affected files
- Footer with Claude Code attribution
6. **Stage all changes** with `git add -A`
7. **Create the commit** with the composed message
8. **Create a git tag** with the new version (e.g., `v0.36.0`)
9. **Push to remotes** (origin and gitea) with tags:
```
git push origin main --tags
git push gitea main --tags
```
10. **Report completion** with the new version and commit hash
## Important:
- Do NOT push to github remote (only origin and gitea)
- Always verify the build compiles before committing: `CGO_ENABLED=0 go build -o /dev/null ./...`
- If build fails, fix issues before proceeding

`.claude/settings.local.json`

```diff
@@ -111,7 +111,16 @@
       "Bash(fi)",
       "Bash(xargs:*)",
       "Bash(for i in 1 2 3 4 5)",
-      "Bash(do)"
+      "Bash(do)",
+      "WebFetch(domain:vermaden.wordpress.com)",
+      "WebFetch(domain:eylenburg.github.io)",
+      "Bash(go run -exec '' -c 'package main; import \"\"git.mleku.dev/mleku/nostr/utils/normalize\"\"; import \"\"fmt\"\"; func main() { fmt.Println(string(normalize.URL([]byte(\"\"relay.example.com:3334\"\")))); fmt.Println(string(normalize.URL([]byte(\"\"relay.example.com:443\"\")))); fmt.Println(string(normalize.URL([]byte(\"\"ws://relay.example.com:3334\"\")))); fmt.Println(string(normalize.URL([]byte(\"\"wss://relay.example.com:3334\"\")))) }')",
+      "Bash(go run:*)",
+      "Bash(git commit -m \"$(cat <<''EOF''\nFix NIP-11 fetch URL scheme conversion for non-proxied relays\n\n- Convert wss:// to https:// and ws:// to http:// before fetching NIP-11\n documents, fixing failures for users not using HTTPS upgrade proxies\n- The fetchNIP11 function was using WebSocket URLs directly for HTTP\n requests, causing scheme mismatch errors\n\n🤖 Generated with [Claude Code](https://claude.com/claude-code)\n\nCo-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>\nEOF\n)\")",
+      "Bash(/tmp/orly help:*)",
+      "Bash(git commit -m \"$(cat <<''EOF''\nAdd ORLY_POLICY_PATH for custom policy file location\n\n- Add ORLY_POLICY_PATH environment variable to configure custom policy\n file path, overriding the default ~/.config/ORLY/policy.json location\n- Enforce ABSOLUTE paths only - relay panics on startup if relative path\n is provided, preventing common misconfiguration errors\n- Update PolicyManager to store and expose configPath for hot-reload saves\n- Add ConfigPath() method to P struct delegating to internal PolicyManager\n- Update NewWithManager() signature to accept optional custom path parameter\n- Add BUG_REPORTS_AND_FEATURE_REQUEST_PROTOCOL.md with issue submission\n guidelines requiring environment details, reproduction steps, and logs\n- Update README.md with system requirements (500MB minimum memory) and\n link to bug report protocol\n- Update CLAUDE.md and README.md documentation for new ORLY_POLICY_PATH\n\nFiles modified:\n- app/config/config.go: Add PolicyPath config field\n- pkg/policy/policy.go: Add configPath storage and validation\n- app/handle-policy-config.go: Use policyManager.ConfigPath()\n- app/main.go: Pass cfg.PolicyPath to NewWithManager\n- pkg/policy/*_test.go: Update test calls with new parameter\n- BUG_REPORTS_AND_FEATURE_REQUEST_PROTOCOL.md: New file\n- README.md, CLAUDE.md: Documentation updates\n\n🤖 Generated with [Claude Code](https://claude.com/claude-code)\n\nCo-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>\nEOF\n)\")",
+      "Bash(mkdir:*)",
+      "Bash(ssh:*)"
     ],
     "deny": [],
     "ask": []
```

`.gitea/issue_template/`: bug report form (new file, 118 lines)
name: Bug Report
about: Report a bug or unexpected behavior in ORLY relay
title: "[BUG] "
labels:
- bug
body:
- type: markdown
attributes:
value: |
## Bug Report Guidelines
Thank you for taking the time to report a bug. Please fill out the form below to help us understand and reproduce the issue.
**Before submitting:**
- Search [existing issues](https://git.mleku.dev/mleku/next.orly.dev/issues) to avoid duplicates
- Check the [documentation](https://git.mleku.dev/mleku/next.orly.dev) for configuration guidance
- Ensure you're running a recent version of ORLY
- type: input
id: version
attributes:
label: ORLY Version
description: Run `./orly version` to get the version
placeholder: "v0.35.4"
validations:
required: true
- type: dropdown
id: database
attributes:
label: Database Backend
description: Which database backend are you using?
options:
- Badger (default)
- Neo4j
- WasmDB
validations:
required: true
- type: textarea
id: description
attributes:
label: Bug Description
description: A clear and concise description of the bug
placeholder: Describe what happened and what you expected to happen
validations:
required: true
- type: textarea
id: reproduction
attributes:
label: Steps to Reproduce
description: Detailed steps to reproduce the behavior
placeholder: |
1. Start relay with `./orly`
2. Connect with client X
3. Perform action Y
4. Observe error Z
validations:
required: true
- type: textarea
id: expected
attributes:
label: Expected Behavior
description: What did you expect to happen?
validations:
required: true
- type: textarea
id: logs
attributes:
label: Relevant Logs
description: |
Include relevant log output. Set `ORLY_LOG_LEVEL=debug` or `trace` for more detail.
This will be automatically formatted as code.
render: shell
- type: textarea
id: config
attributes:
label: Configuration
description: |
Relevant environment variables or configuration (redact sensitive values).
This will be automatically formatted as code.
render: shell
placeholder: |
ORLY_ACL_MODE=follows
ORLY_POLICY_ENABLED=true
ORLY_DB_TYPE=badger
- type: textarea
id: environment
attributes:
label: Environment
description: Operating system, Go version, etc.
placeholder: |
OS: Linux 6.8.0
Go: 1.25.3
Architecture: amd64
- type: textarea
id: additional
attributes:
label: Additional Context
description: Any other context, screenshots, or information that might help
- type: checkboxes
id: checklist
attributes:
label: Checklist
options:
- label: I have searched existing issues and this is not a duplicate
required: true
- label: I have included version information
required: true
- label: I have included steps to reproduce the issue
required: true

`.gitea/issue_template/`: issue config (new file, 8 lines)
blank_issues_enabled: false
contact_links:
- name: Documentation
url: https://git.mleku.dev/mleku/next.orly.dev
about: Check the repository documentation before opening an issue
- name: Nostr Protocol (NIPs)
url: https://github.com/nostr-protocol/nips
about: For questions about Nostr protocol specifications

`.gitea/issue_template/`: feature request form (new file, 118 lines)
name: Feature Request
about: Suggest a new feature or enhancement for ORLY relay
title: "[FEATURE] "
labels:
- enhancement
body:
- type: markdown
attributes:
value: |
## Feature Request Guidelines
Thank you for suggesting a feature. Please provide as much detail as possible to help us understand your proposal.
**Before submitting:**
- Search [existing issues](https://git.mleku.dev/mleku/next.orly.dev/issues) to avoid duplicates
- Check if this is covered by an existing [NIP](https://github.com/nostr-protocol/nips)
- Review the [documentation](https://git.mleku.dev/mleku/next.orly.dev) for current capabilities
- type: dropdown
id: category
attributes:
label: Feature Category
description: What area of ORLY does this feature relate to?
options:
- Protocol (NIP implementation)
- Database / Storage
- Performance / Optimization
- Policy / Access Control
- Web UI / Admin Interface
- Deployment / Operations
- API / Integration
- Documentation
- Other
validations:
required: true
- type: textarea
id: problem
attributes:
label: Problem Statement
description: |
What problem does this feature solve? Is this related to a frustration you have?
A clear problem statement helps us understand the motivation.
placeholder: "I'm always frustrated when..."
validations:
required: true
- type: textarea
id: solution
attributes:
label: Proposed Solution
description: |
Describe the solution you'd like. Be specific about expected behavior.
placeholder: "I would like ORLY to..."
validations:
required: true
- type: textarea
id: alternatives
attributes:
label: Alternatives Considered
description: |
Describe any alternative solutions or workarounds you've considered.
placeholder: "I've tried X but it doesn't work because..."
- type: input
id: nip
attributes:
label: Related NIP
description: If this relates to a Nostr Implementation Possibility, provide the NIP number
placeholder: "NIP-XX"
- type: dropdown
id: impact
attributes:
label: Scope of Impact
description: How significant is this feature?
options:
- Minor enhancement (small quality-of-life improvement)
- Moderate feature (adds useful capability)
- Major feature (significant new functionality)
- Breaking change (requires migration or config changes)
validations:
required: true
- type: dropdown
id: contribution
attributes:
label: Willingness to Contribute
description: Would you be willing to help implement this feature?
options:
- "Yes, I can submit a PR"
- "Yes, I can help with testing"
- "No, but I can provide more details"
- "No"
validations:
required: true
- type: textarea
id: additional
attributes:
label: Additional Context
description: |
Any other context, mockups, examples, or references that help explain the feature.
For protocol features, include example event structures or message flows if applicable.
- type: checkboxes
id: checklist
attributes:
label: Checklist
options:
- label: I have searched existing issues and this is not a duplicate
required: true
- label: I have described the problem this feature solves
required: true
- label: I have checked if this relates to an existing NIP
required: false

`.github/workflows/ci.yaml` (new file, 53 lines)
name: CI
on:
push:
branches: [main, develop]
pull_request:
branches: [main]
jobs:
test:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Set up Go
uses: actions/setup-go@v5
with:
go-version: '1.23'
- name: Download libsecp256k1
run: |
wget -q https://git.mleku.dev/mleku/nostr/raw/branch/main/crypto/p8k/libsecp256k1.so -O libsecp256k1.so
chmod +x libsecp256k1.so
- name: Run tests
run: |
export LD_LIBRARY_PATH="${LD_LIBRARY_PATH:+$LD_LIBRARY_PATH:}$(pwd)"
CGO_ENABLED=0 go test ./...
- name: Build binary
run: |
CGO_ENABLED=0 go build -o orly .
./orly version
lint:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Set up Go
uses: actions/setup-go@v5
with:
go-version: '1.23'
- name: Check go mod tidy
run: |
go mod tidy
git diff --exit-code go.mod go.sum
- name: Run go vet
run: CGO_ENABLED=0 go vet ./...

`.github/workflows/release.yaml` (new file, 154 lines)
name: Release
on:
push:
tags:
- 'v*'
jobs:
build:
runs-on: ubuntu-latest
strategy:
matrix:
include:
- goos: linux
goarch: amd64
platform: linux-amd64
ext: ""
lib: libsecp256k1.so
- goos: linux
goarch: arm64
platform: linux-arm64
ext: ""
lib: libsecp256k1.so
- goos: darwin
goarch: amd64
platform: darwin-amd64
ext: ""
lib: libsecp256k1.dylib
- goos: darwin
goarch: arm64
platform: darwin-arm64
ext: ""
lib: libsecp256k1.dylib
- goos: windows
goarch: amd64
platform: windows-amd64
ext: ".exe"
lib: libsecp256k1.dll
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Set up Go
uses: actions/setup-go@v5
with:
go-version: '1.23'
- name: Set up Node.js
uses: actions/setup-node@v4
with:
node-version: '20'
- name: Install bun
run: |
curl -fsSL https://bun.sh/install | bash
echo "$HOME/.bun/bin" >> $GITHUB_PATH
- name: Build Web UI
run: |
cd app/web
$HOME/.bun/bin/bun install
$HOME/.bun/bin/bun run build
- name: Get version
id: version
run: echo "version=$(cat pkg/version/version)" >> $GITHUB_OUTPUT
- name: Build binary
env:
CGO_ENABLED: 0
GOOS: ${{ matrix.goos }}
GOARCH: ${{ matrix.goarch }}
run: |
VERSION=${{ steps.version.outputs.version }}
OUTPUT="orly-${VERSION}-${{ matrix.platform }}${{ matrix.ext }}"
go build -ldflags "-s -w -X main.version=${VERSION}" -o ${OUTPUT} .
sha256sum ${OUTPUT} > ${OUTPUT}.sha256
- name: Download runtime library
run: |
VERSION=${{ steps.version.outputs.version }}
LIB="${{ matrix.lib }}"
wget -q "https://git.mleku.dev/mleku/nostr/raw/branch/main/crypto/p8k/${LIB}" -O "${LIB}" || true
if [ -f "${LIB}" ]; then
sha256sum "${LIB}" > "${LIB}.sha256"
fi
- name: Upload artifacts
uses: actions/upload-artifact@v4
with:
name: orly-${{ matrix.platform }}
path: |
orly-*
libsecp256k1*
release:
needs: build
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Get version
id: version
run: echo "version=$(cat pkg/version/version)" >> $GITHUB_OUTPUT
- name: Download all artifacts
uses: actions/download-artifact@v4
with:
path: artifacts
merge-multiple: true
- name: Create combined checksums
run: |
cd artifacts
cat *.sha256 | sort -k2 > SHA256SUMS.txt
rm -f *.sha256
- name: List release files
run: ls -la artifacts/
- name: Create Release
uses: softprops/action-gh-release@v1
with:
name: ORLY ${{ steps.version.outputs.version }}
body: |
## ORLY ${{ steps.version.outputs.version }}
### Downloads
Download the appropriate binary for your platform. The `libsecp256k1` library is optional but recommended for better cryptographic performance.
### Installation
1. Download the binary for your platform
2. (Optional) Download the corresponding `libsecp256k1` library
3. Place both files in the same directory
4. Make the binary executable: `chmod +x orly-*`
5. Run: `./orly-*-linux-amd64` (or your platform's binary)
### Verify Downloads
```bash
sha256sum -c SHA256SUMS.txt
```
### Configuration
See the [repository documentation](https://git.mleku.dev/mleku/next.orly.dev) for configuration options.
files: |
artifacts/*
draft: false
prerelease: false

`BUG_REPORTS_AND_FEATURE_REQUEST_PROTOCOL.md` (new file, 254 lines)
# Feature Request and Bug Report Protocol
This document describes how to submit effective bug reports and feature requests for ORLY relay. Following these guidelines helps maintainers understand and resolve issues quickly.
## Before Submitting
1. **Search existing issues** - Your issue may already be reported or discussed
2. **Check documentation** - Review `CLAUDE.md`, `docs/`, and `pkg/*/README.md` files
3. **Verify with latest version** - Ensure the issue exists in the current release
4. **Test with default configuration** - Rule out configuration-specific problems
## Bug Reports
### Required Information
**Title**: Concise summary of the problem
- Good: "Kind 3 events with 8000+ follows truncated on save"
- Bad: "Events not saving" or "Bug in database"
**Environment**:
```
ORLY version: (output of ./orly version)
OS: (e.g., Ubuntu 24.04, macOS 14.2)
Go version: (output of go version)
Database backend: (badger/neo4j/wasmdb)
```
**Configuration** (relevant settings only):
```bash
ORLY_DB_TYPE=badger
ORLY_POLICY_ENABLED=true
# Include any non-default settings
```
**Steps to Reproduce**:
1. Start relay with configuration X
2. Connect client and send event Y
3. Query for event with filter Z
4. Observe error/unexpected behavior
**Expected Behavior**: What should happen
**Actual Behavior**: What actually happens
**Logs**: Include relevant log output with `ORLY_LOG_LEVEL=debug` or `trace`
### Minimal Reproduction
The most effective bug reports include a minimal reproduction case:
```bash
# Example: Script that demonstrates the issue
export ORLY_LOG_LEVEL=debug
./orly &
sleep 2
# Send problematic event
echo '["EVENT", {...}]' | websocat ws://localhost:3334
# Show the failure
echo '["REQ", "test", {"kinds": [1]}]' | websocat ws://localhost:3334
```
Or provide a failing test case:
```go
func TestReproduceBug(t *testing.T) {
// Setup
db := setupTestDB(t)
// This should work but fails
event := createTestEvent(kind, content)
err := db.SaveEvent(ctx, event)
require.NoError(t, err)
// Query returns unexpected result
results, err := db.QueryEvents(ctx, filter)
assert.Len(t, results, 1) // Fails: got 0
}
```
## Feature Requests
### Required Information
**Title**: Clear description of the feature
- Good: "Add WebSocket compression support (permessage-deflate)"
- Bad: "Make it faster" or "New feature idea"
**Problem Statement**: What problem does this solve?
```
Currently, clients with high-latency connections experience slow sync times
because event data is transmitted uncompressed. A typical session transfers
50MB of JSON that could be reduced to ~10MB with compression.
```
**Proposed Solution**: How should it work?
```
Add optional permessage-deflate WebSocket extension support:
- New config: ORLY_WS_COMPRESSION=true
- Negotiate compression during WebSocket handshake
- Apply to messages over configurable threshold (default 1KB)
```
**Use Case**: Who benefits and how?
```
- Mobile clients on cellular connections
- Users syncing large follow lists
- Relays with bandwidth constraints
```
**Alternatives Considered** (optional):
```
- Application-level compression: Rejected because it requires client changes
- HTTP/2: Not applicable for WebSocket connections
```
### Implementation Notes (optional)
If you have implementation ideas:
```
Suggested approach:
1. Add compression config to app/config/config.go
2. Modify gorilla/websocket upgrader in app/handle-websocket.go
3. Add compression threshold check before WriteMessage()
Reference: gorilla/websocket has built-in permessage-deflate support
```
## What Makes Reports Effective
**Do**:
- Be specific and factual
- Include version numbers and exact error messages
- Provide reproducible steps
- Attach relevant logs (redact sensitive data)
- Link to related issues or discussions
- Respond to follow-up questions promptly
**Avoid**:
- Vague descriptions ("it doesn't work")
- Multiple unrelated issues in one report
- Assuming the cause without evidence
- Demanding immediate fixes
- Duplicating existing issues
## Issue Labels
When applicable, suggest appropriate labels:
| Label | Use When |
|-------|----------|
| `bug` | Something isn't working as documented |
| `enhancement` | New feature or improvement |
| `performance` | Speed or resource usage issue |
| `documentation` | Docs are missing or incorrect |
| `question` | Clarification needed (not a bug) |
| `good first issue` | Suitable for new contributors |
## Response Expectations
- **Acknowledgment**: Within a few days
- **Triage**: Issue labeled and prioritized
- **Resolution**: Depends on complexity and priority
Complex features may require discussion before implementation. Bug fixes for critical issues are prioritized.
## Following Up
If your issue hasn't received attention:
1. **Check issue status** - It may be labeled or assigned
2. **Add new information** - If you've discovered more details
3. **Politely bump** - A single follow-up comment after 2 weeks is appropriate
4. **Consider contributing** - PRs that fix bugs or implement features are welcome
## Contributing Fixes
If you want to fix a bug or implement a feature yourself:
1. Comment on the issue to avoid duplicate work
2. Follow the coding patterns in `CLAUDE.md`
3. Include tests for your changes
4. Keep PRs focused on a single issue
5. Reference the issue number in your PR
## Security Issues
**Do not report security vulnerabilities in public issues.**
For security-sensitive bugs:
- Contact maintainers directly
- Provide detailed reproduction steps privately
- Allow reasonable time for a fix before disclosure
## Examples
### Good Bug Report
```markdown
## WebSocket disconnects after 60 seconds of inactivity
**Environment**:
- ORLY v0.34.5
- Ubuntu 22.04
- Go 1.25.3
- Badger backend
**Steps to Reproduce**:
1. Connect to relay: `websocat ws://localhost:3334`
2. Send subscription: `["REQ", "test", {"kinds": [1], "limit": 1}]`
3. Wait 60 seconds without sending messages
4. Observe connection closed
**Expected**: Connection remains open (Nostr relays should maintain persistent connections)
**Actual**: Connection closed with code 1000 after exactly 60 seconds
**Logs** (ORLY_LOG_LEVEL=debug):
```
1764783029014485🔎 client timeout, closing connection /app/handle-websocket.go:142
```
**Possible Cause**: May be related to read deadline not being extended on subscription activity
```
### Good Feature Request
```markdown
## Add rate limiting per pubkey
**Problem**:
A single pubkey can flood the relay with events, consuming storage and
bandwidth. Currently there's no way to limit per-author submission rate.
**Proposed Solution**:
Add configurable rate limiting:
```bash
ORLY_RATE_LIMIT_EVENTS_PER_MINUTE=60
ORLY_RATE_LIMIT_BURST=10
```
When exceeded, return OK false with "rate-limited" message per NIP-20.
**Use Case**:
- Public relays protecting against spam
- Community relays with fair-use policies
- Paid relays enforcing subscription tiers
**Alternatives Considered**:
- IP-based limiting: Ineffective because users share IPs and use VPNs
- Global limiting: Punishes all users for one bad actor
```

`CLAUDE.md`

@@ -147,6 +147,10 @@ export ORLY_SPROCKET_ENABLED=true
# Enable policy system # Enable policy system
export ORLY_POLICY_ENABLED=true export ORLY_POLICY_ENABLED=true
# Custom policy file path (MUST be ABSOLUTE path starting with /)
# Default: ~/.config/ORLY/policy.json (or ~/.config/{ORLY_APP_NAME}/policy.json)
# export ORLY_POLICY_PATH=/etc/orly/policy.json
# Database backend selection (badger, neo4j, or wasmdb) # Database backend selection (badger, neo4j, or wasmdb)
export ORLY_DB_TYPE=badger export ORLY_DB_TYPE=badger
@@ -231,11 +235,18 @@ export ORLY_AUTH_TO_WRITE=false # Require auth only for writes
**`pkg/neo4j/`** - Neo4j graph database backend with social graph support **`pkg/neo4j/`** - Neo4j graph database backend with social graph support
- `neo4j.go` - Main database implementation - `neo4j.go` - Main database implementation
- `schema.go` - Graph schema and index definitions (includes WoT extensions) - `schema.go` - Graph schema and index definitions (includes WoT extensions)
- `migrations.go` - Database schema migrations (v1: base, v2: WoT, v3: Tag-based e/p)
- `query-events.go` - REQ filter to Cypher translation - `query-events.go` - REQ filter to Cypher translation
- `save-event.go` - Event storage with relationship creation - `save-event.go` - Event storage with Tag-based relationship creation
- `delete.go` - Event deletion (NIP-09) with Tag traversal for deletion detection
- `social-event-processor.go` - Processes kinds 0, 3, 1984, 10000 for social graph - `social-event-processor.go` - Processes kinds 0, 3, 1984, 10000 for social graph
- `hex_utils.go` - Helpers for binary-to-hex tag value extraction
- `WOT_SPEC.md` - Web of Trust data model specification (NostrUser nodes, trust metrics) - `WOT_SPEC.md` - Web of Trust data model specification (NostrUser nodes, trust metrics)
- `MODIFYING_SCHEMA.md` - Guide for schema modifications - `MODIFYING_SCHEMA.md` - Guide for schema modifications
- **Tests:**
- `tag_model_test.go` - Tag-based e/p model and filter query tests
- `save-event_test.go` - Event storage and relationship tests
- `social-event-processor_test.go` - Social graph event processing tests
**`pkg/protocol/`** - Nostr protocol implementation **`pkg/protocol/`** - Nostr protocol implementation
- `ws/` - WebSocket message framing and parsing - `ws/` - WebSocket message framing and parsing
@@ -270,7 +281,8 @@ export ORLY_AUTH_TO_WRITE=false # Require auth only for writes
- `none.go` - Open relay (no restrictions)
**`pkg/policy/`** - Event filtering and validation policies
- Policy configuration loaded from `~/.config/ORLY/policy.json` by default
- Custom path via `ORLY_POLICY_PATH` (MUST be absolute path starting with `/`)
- Per-kind size limits, age restrictions, custom scripts
- **Write-Only Validation**: Size, age, tag, and expiry validations apply ONLY to write operations
- **Read-Only Filtering**: `read_allow`, `read_deny`, `privileged` apply ONLY to read operations
@@ -344,6 +356,11 @@ export ORLY_AUTH_TO_WRITE=false # Require auth only for writes
- Supports multiple backends via `ORLY_DB_TYPE` environment variable
- **Badger** (default): Embedded key-value store with custom indexing, ideal for single-instance deployments
- **Neo4j**: Graph database with social graph and Web of Trust (WoT) extensions
- **Tag-Based e/p Model**: All tags stored through intermediate Tag nodes
- `Event-[:TAGGED_WITH]->Tag{type:'e'}-[:REFERENCES]->Event` for e-tags
- `Event-[:TAGGED_WITH]->Tag{type:'p'}-[:REFERENCES]->NostrUser` for p-tags
- Enables unified querying: `#e` and `#p` filter queries work correctly
- Automatic migration from direct REFERENCES/MENTIONS (v3 migration)
- Processes kinds 0 (profile), 3 (contacts), 1984 (reports), 10000 (mute list) for social graph
- NostrUser nodes with trust metrics (influence, PageRank)
- FOLLOWS, MUTES, REPORTS relationships for WoT analysis
@@ -811,11 +828,18 @@ The directory spider (`pkg/spider/directory.go`) automatically discovers and syn
### Neo4j Social Graph Backend
The Neo4j backend (`pkg/neo4j/`) includes Web of Trust (WoT) extensions:
- **Tag-Based e/p Model**: All tags (including e/p) stored through intermediate Tag nodes
- `Event-[:TAGGED_WITH]->Tag{type:'e'}-[:REFERENCES]->Event`
- `Event-[:TAGGED_WITH]->Tag{type:'p'}-[:REFERENCES]->NostrUser`
- Enables unified tag querying (`#e` and `#p` filter queries now work)
- v3 migration automatically converts existing direct REFERENCES/MENTIONS
- **Social Event Processor**: Handles kinds 0, 3, 1984, 10000 for social graph management
- **NostrUser nodes**: Store profile data and trust metrics (influence, PageRank)
- **Relationships**: FOLLOWS, MUTES, REPORTS for social graph analysis
- **Deletion Detection**: `CheckForDeleted()` uses Tag traversal for kind 5 event checks
- **WoT Schema**: See `pkg/neo4j/WOT_SPEC.md` for full specification
- **Schema Modifications**: See `pkg/neo4j/MODIFYING_SCHEMA.md` for how to update
- **Comprehensive Tests**: `tag_model_test.go` covers Tag-based model, filter queries, migrations
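The deletion-detection bullet above can be sketched as a Tag traversal. This is an illustrative query only — the parameter names `$id` and `$authorPubkey` are hypothetical, and the actual Cypher in `delete.go` may differ — assuming the `TAGGED_WITH` and `AUTHORED_BY` relationships documented elsewhere in this repository:

```cypher
// Has event $id been targeted by a kind 5 deletion event
// published by its own author (per NIP-09)?
MATCH (del:Event {kind: 5})-[:TAGGED_WITH]->(:Tag {type: 'e', value: $id}),
      (del)-[:AUTHORED_BY]->(author:NostrUser {pubkey: $authorPubkey})
RETURN count(del) > 0 AS deleted
```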
### WasmDB IndexedDB Backend
WebAssembly-compatible database backend (`pkg/wasmdb/`):

CONTRIBUTING.md

@@ -0,0 +1,101 @@
# Contributing to ORLY
Thank you for your interest in contributing to ORLY! This document outlines the process for reporting bugs, requesting features, and submitting contributions.
**Canonical Repository:** https://git.mleku.dev/mleku/next.orly.dev
## Issue Reporting Policy
### Before Opening an Issue
1. **Search existing issues** to avoid duplicates
2. **Check the documentation** in the repository
3. **Verify your version** - run `./orly version` and ensure you're on a recent release
4. **Review the CLAUDE.md** file for configuration guidance
### Bug Reports
Use the **Bug Report** template when reporting unexpected behavior. A good bug report includes:
- **Version information** - exact ORLY version from `./orly version`
- **Database backend** - Badger, Neo4j, or WasmDB
- **Clear description** - what happened vs. what you expected
- **Reproduction steps** - detailed steps to trigger the bug
- **Logs** - relevant log output (use `ORLY_LOG_LEVEL=debug` or `trace`)
- **Configuration** - relevant environment variables (redact secrets)
#### Log Levels for Debugging
```bash
export ORLY_LOG_LEVEL=trace # Most verbose
export ORLY_LOG_LEVEL=debug # Development debugging
export ORLY_LOG_LEVEL=info # Default
```
### Feature Requests
Use the **Feature Request** template when suggesting new functionality. A good feature request includes:
- **Problem statement** - what problem does this solve?
- **Proposed solution** - specific description of desired behavior
- **Alternatives considered** - workarounds you've tried
- **Related NIP** - if this implements a Nostr protocol specification
- **Impact assessment** - is this a minor tweak or major change?
#### Feature Categories
- **Protocol** - NIP implementations and Nostr protocol features
- **Database** - Storage backends, indexing, query optimization
- **Performance** - Caching, SIMD operations, memory optimization
- **Policy** - Access control, event filtering, validation
- **Web UI** - Admin interface improvements
- **Operations** - Deployment, monitoring, systemd integration
## Code Contributions
### Development Setup
```bash
# Clone the repository
git clone https://git.mleku.dev/mleku/next.orly.dev.git
cd next.orly.dev
# Build
CGO_ENABLED=0 go build -o orly
# Run tests
./scripts/test.sh
# Build with web UI
./scripts/update-embedded-web.sh
```
### Pull Request Guidelines
1. **One feature/fix per PR** - keep changes focused
2. **Write tests** - for new functionality and bug fixes
3. **Follow existing patterns** - match the code style of surrounding code
4. **Update documentation** - if your change affects configuration or behavior
5. **Test your changes** - run `./scripts/test.sh` before submitting
### Commit Message Format
```
Short summary (72 chars max, imperative mood)
- Bullet point describing change 1
- Bullet point describing change 2
Files modified:
- path/to/file1.go: Description of change
- path/to/file2.go: Description of change
```
## Communication
- **Issues:** https://git.mleku.dev/mleku/next.orly.dev/issues
- **Documentation:** https://git.mleku.dev/mleku/next.orly.dev
## License
By contributing to ORLY, you agree that your contributions will be licensed under the same license as the project.


@@ -1,5 +1,7 @@
# next.orly.dev
---
![orly.dev](./docs/orly.png)
![Version v0.24.1](https://img.shields.io/badge/version-v0.24.1-blue.svg)
@@ -10,6 +12,19 @@ zap me: mlekudev@getalby.com
follow me on [nostr](https://jumble.social/users/npub1fjqqy4a93z5zsjwsfxqhc2764kvykfdyttvldkkkdera8dr78vhsmmleku)
## ⚠️ Bug Reports & Feature Requests
**Bug reports and feature requests that do not follow the protocol will not be accepted.**
Before submitting any issue, you must read and follow [BUG_REPORTS_AND_FEATURE_REQUEST_PROTOCOL.md](./BUG_REPORTS_AND_FEATURE_REQUEST_PROTOCOL.md).
Requirements:
- **Bug reports**: Include environment details, reproduction steps, expected/actual behavior, and logs
- **Feature requests**: Include problem statement, proposed solution, and use cases
- **Both**: Search existing issues first, verify with latest version, provide minimal reproduction
Issues missing required information will be closed without review.
## ⚠️ System Requirements
> **IMPORTANT: ORLY requires a minimum of 500MB of free memory to operate.**
@@ -217,7 +232,12 @@ ORLY includes a comprehensive policy system for fine-grained control over event
```bash
export ORLY_POLICY_ENABLED=true
# Default policy file: ~/.config/ORLY/policy.json
# OPTIONAL: Use a custom policy file location
# WARNING: ORLY_POLICY_PATH MUST be an ABSOLUTE path (starting with /)
# Relative paths will be REJECTED and the relay will fail to start
export ORLY_POLICY_PATH=/etc/orly/policy.json
```
For detailed configuration and examples, see the [Policy Usage Guide](docs/POLICY_USAGE_GUIDE.md).
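The absolute-path rule above can be sketched in Go. This is a minimal illustration of the documented behavior (the function name `validatePolicyPath` is hypothetical; the relay's actual check lives in its policy manager):

```go
package main

import (
	"fmt"
	"path/filepath"
)

// validatePolicyPath applies the documented rule: an empty value falls
// back to the default location, and any non-absolute path is rejected.
// Illustrative only; not the relay's actual validation code.
func validatePolicyPath(p string) error {
	if p == "" {
		return nil // use default ~/.config/ORLY/policy.json
	}
	if !filepath.IsAbs(p) {
		return fmt.Errorf("ORLY_POLICY_PATH must be absolute, got %q", p)
	}
	return nil
}

func main() {
	fmt.Println(validatePolicyPath("/etc/orly/policy.json"))
	fmt.Println(validatePolicyPath("policy.json"))
}
```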


@@ -82,7 +82,8 @@ type C struct {
DirectorySpiderInterval time.Duration `env:"ORLY_DIRECTORY_SPIDER_INTERVAL" default:"24h" usage:"how often to run directory spider"`
DirectorySpiderMaxHops int `env:"ORLY_DIRECTORY_SPIDER_HOPS" default:"3" usage:"maximum hops for relay discovery from seed users"`
PolicyEnabled bool `env:"ORLY_POLICY_ENABLED" default:"false" usage:"enable policy-based event processing (default config: $HOME/.config/ORLY/policy.json)"`
PolicyPath string `env:"ORLY_POLICY_PATH" usage:"ABSOLUTE path to policy configuration file (MUST start with /); overrides default location; relative paths are rejected"`
// NIP-43 Relay Access Metadata and Requests
NIP43Enabled bool `env:"ORLY_NIP43_ENABLED" default:"false" usage:"enable NIP-43 relay access metadata and invite system"`


@@ -3,9 +3,7 @@ package app
import (
"bytes"
"fmt"
"path/filepath"
"github.com/adrg/xdg"
"lol.mleku.dev/log"
"git.mleku.dev/mleku/nostr/encoders/event"
"git.mleku.dev/mleku/nostr/encoders/filter"
@@ -76,8 +74,8 @@ func (l *Listener) HandlePolicyConfigUpdate(ev *event.E) error {
log.I.F("policy config validation passed")
// Get config path for saving (uses custom path if set, otherwise default)
configPath := l.policyManager.ConfigPath()
// 3. Pause ALL message processing (lock mutex)
// Note: We need to release the RLock first (which caller holds), then acquire exclusive Lock


@@ -74,7 +74,7 @@ func setupPolicyTestListener(t *testing.T, policyAdminHex string) (*Listener, *d
}
// Create policy manager - now config file exists at XDG path
policyManager := policy.NewWithManager(ctx, cfg.AppName, cfg.PolicyEnabled, "")
server := &Server{
Ctx: ctx,


@@ -87,7 +87,7 @@ func Run(
l.sprocketManager = NewSprocketManager(ctx, cfg.AppName, cfg.SprocketEnabled)
// Initialize policy manager
l.policyManager = policy.NewWithManager(ctx, cfg.AppName, cfg.PolicyEnabled, cfg.PolicyPath)
// Merge policy-defined owners with environment-defined owners
// This allows cloud deployments to add owners via policy.json when env vars cannot be modified


@@ -0,0 +1,142 @@
# Rate Limiting Test Report: Neo4j Backend
**Test Date:** December 12, 2025
**Test Duration:** 73 minutes (4,409 seconds)
**Import File:** `wot_reference.jsonl` (2.7 GB, 2,158,366 events)
## Configuration
| Parameter | Value |
|-----------|-------|
| Database Backend | Neo4j 5-community (Docker) |
| Target Memory | 1,500 MB (relay process) |
| Emergency Threshold | 1,167 (target + 1/6) |
| Recovery Threshold | 833 (target - 1/6) |
| Max Write Delay | 1,000 ms (normal), 5,000 ms (emergency) |
| Neo4j Memory Limits | Heap: 512MB-1GB, Page Cache: 512MB |
## Results Summary
### Memory Management
| Component | Metric | Value |
|-----------|--------|-------|
| **Relay Process** | Peak RSS (VmHWM) | 148 MB |
| **Relay Process** | Final RSS | 35 MB |
| **Neo4j Container** | Memory Usage | 1.614 GB |
| **Neo4j Container** | Memory % | 10.83% of 14.91GB |
| **Rate Limiting** | Events Triggered | **0** |
### Key Finding: Architecture Difference
Unlike Badger (embedded database), Neo4j runs as a **separate process** in a Docker container. This means:
1. **Relay process memory stays low** (~35MB) because it's just a client
2. **Neo4j manages its own memory** within the container (1.6GB used)
3. **Rate limiter monitors relay RSS**, which doesn't reflect Neo4j's actual load
4. **No rate limiting triggered** because relay memory never approached the 1.5GB target
This is architecturally correct - the relay doesn't need memory-based rate limiting for Neo4j because it's not holding the data in process.
### Event Processing
| Event Type | Count | Rate |
|------------|-------|------|
| Contact Lists (kind 3) | 174,836 | 40 events/sec |
| Mute Lists (kind 10000) | 4,027 | 0.9 events/sec |
| **Total Social Events** | **178,863** | **41 events/sec** |
### Neo4j Performance
| Metric | Value |
|--------|-------|
| CPU Usage | 40-45% |
| Memory | Stable at 1.6GB |
| Disk Writes | 12.7 GB |
| Network In | 1.8 GB |
| Network Out | 583 MB |
| Process Count | 77-82 |
### Import Throughput Over Time
```
Time Contact Lists Delta/min Neo4j Memory
------ ------------- --------- ------------
08:28 0 - 1.57 GB
08:47 31,257 ~2,100 1.61 GB
08:52 42,403 ~2,200 1.61 GB
09:02 67,581 ~2,500 1.61 GB
09:12 97,316 ~3,000 1.60 GB
09:22 112,681 ~3,100 1.61 GB
09:27 163,252 ~10,000* 1.61 GB
09:41 174,836 ~2,400 1.61 GB
```
*Spike may be due to batch processing of cached events
### Memory Stability
Neo4j's memory usage remained remarkably stable throughout the test:
```
Sample Memory Delta
-------- -------- -----
08:47 1.605 GB -
09:02 1.611 GB +6 MB
09:12 1.603 GB -8 MB
09:27 1.607 GB +4 MB
09:41 1.614 GB +7 MB
```
**Variance:** < 15 MB over 73 minutes - excellent stability.
## Architecture Comparison: Badger vs Neo4j
| Aspect | Badger | Neo4j |
|--------|--------|-------|
| Database Type | Embedded | External (Docker) |
| Memory Consumer | Relay process | Container process |
| Rate Limiter Target | Relay RSS | Relay RSS |
| Rate Limiting Effectiveness | High | Low* |
| Compaction Triggering | Yes | N/A |
| Emergency Mode | Yes | Not triggered |
*The current rate limiter design targets relay process memory, which doesn't reflect Neo4j's actual resource usage.
## Recommendations for Neo4j Rate Limiting
The current implementation monitors **relay process memory**, but for Neo4j this should be enhanced to monitor:
### 1. Query Latency-Based Throttling (Currently Implemented)
The Neo4j monitor already tracks query latency via `RecordQueryLatency()` and `RecordWriteLatency()`, using EMA smoothing. Latency > 500ms increases reported load.
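The latency-based load signal described above can be sketched with EMA smoothing. The type and field names here, and the linear mapping above the 500ms threshold, are illustrative assumptions rather than the monitor's actual implementation:

```go
package main

import "fmt"

// emaLoad keeps an exponentially smoothed query latency and maps it to a
// 0..1 load factor once the EMA exceeds a threshold. Illustrative sketch
// only; the real monitor's RecordQueryLatency/RecordWriteLatency differ.
type emaLoad struct {
	alpha     float64 // smoothing factor, e.g. 0.2
	latencyMs float64 // current EMA of latency in milliseconds
}

// Record folds one latency sample into the EMA.
func (e *emaLoad) Record(sampleMs float64) {
	if e.latencyMs == 0 {
		e.latencyMs = sampleMs // seed with the first sample
		return
	}
	e.latencyMs = e.alpha*sampleMs + (1-e.alpha)*e.latencyMs
}

// Load reports 0 at or below the 500ms threshold (matching the text) and
// scales linearly up to 1 at twice the threshold (an assumption).
func (e *emaLoad) Load() float64 {
	const threshold = 500.0
	if e.latencyMs <= threshold {
		return 0
	}
	l := (e.latencyMs - threshold) / threshold
	if l > 1 {
		l = 1
	}
	return l
}

func main() {
	m := &emaLoad{alpha: 0.2}
	for _, s := range []float64{100, 200, 900, 1200, 1500} {
		m.Record(s)
	}
	fmt.Printf("ema=%.1fms load=%.2f\n", m.latencyMs, m.Load())
}
```

Because the EMA discounts old samples geometrically, a brief latency spike raises the reported load only gradually, which avoids throttling on transient Neo4j hiccups.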
### 2. Connection Pool Saturation (Currently Implemented)
The `querySem` semaphore limits concurrent queries (default 10). When full, the load metric increases.
### 3. Future Enhancement: Container Metrics
Consider monitoring Neo4j container metrics via:
- Docker stats API for memory/CPU
- Neo4j metrics endpoint for transaction counts, cache hit rates
- JMX metrics for heap usage and GC pressure
## Conclusion
The Neo4j import test demonstrated:
1. **Stable Memory Usage**: Neo4j maintained consistent 1.6GB memory throughout
2. **Consistent Throughput**: ~40 social events/second with no degradation
3. **Architectural Isolation**: Relay stays lightweight while Neo4j handles data
4. **Rate Limiter Design**: Current RSS-based limiting is appropriate for Badger but less relevant for Neo4j
**Recommendation:** The Neo4j rate limiter is correctly implemented but relies on latency and concurrency metrics rather than memory pressure. For production deployments with Neo4j, configure appropriate Neo4j memory limits in the container (heap_initial, heap_max, pagecache) rather than relying on relay-side rate limiting.
## Test Environment
- **OS:** Linux 6.8.0-87-generic
- **Architecture:** x86_64
- **Go Version:** 1.25.3
- **Neo4j Version:** 5.26.18 (community)
- **Container:** Docker with 14.91GB limit
- **Neo4j Settings:**
- Heap Initial: 512MB
- Heap Max: 1GB
- Page Cache: 512MB


@@ -0,0 +1,325 @@
# ORLY Expansion Plan: Documentation, Installer, Tray, and WireGuard
## Overview
Expand ORLY from a relay binary into a complete ecosystem for personal Nostr relay deployment, with:
1. **Textbook-style README** - Progressive documentation from novice to expert
2. **GUI Installer** - Wails-based setup wizard (Linux + macOS)
3. **System Tray** - Service monitoring and control
4. **WireGuard Client** - Embedded tunnel for NAT traversal
5. **Proxy Server** - Self-hostable AND managed service option
---
## Architecture
```
USER SYSTEMS
┌─────────────────────────────────────────────────────────────────────┐
│ │
│ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ │
│ │ orly-setup │ │ orly │ │ orly --tray │ │
│ │ (Installer) │ │ (Relay) │ │ (Systray) │ │
│ └──────┬───────┘ └──────┬───────┘ └──────┬───────┘ │
│ │ │ │ │
│ │ generates │ serves │ monitors │
│ ▼ ▼ ▼ │
│ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ │
│ │ ~/.config/ │ │ :3334 WS/HTTP│ │ /api/admin/* │ │
│ │ systemd svc │ │ + WG tunnel │ │ status/ctrl │ │
│ └──────────────┘ └──────┬───────┘ └──────────────┘ │
│ │ │
│ ┌───────┴───────┐ │
│ │ pkg/tunnel/ │ │
│ │ WireGuard │ │
│ └───────┬───────┘ │
└─────────────────────────────┼───────────────────────────────────────┘
│ WG Tunnel (UDP :51820)
┌─────────────────────────────────────────────────────────────────────┐
│ PROXY SERVER │
│ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ │
│ │ WG Server │───▶│ Nostr Auth │───▶│ Public Proxy │ │
│ │ :51820 │ │ (npub-based) │ │ Egress │ │
│ └──────────────┘ └──────────────┘ └──────────────┘ │
└─────────────────────────────────────────────────────────────────────┘
```
---
## Package Structure
```
next.orly.dev/
├── cmd/
│ ├── orly-setup/ # NEW: Wails installer
│ │ ├── main.go
│ │ ├── app.go # Backend logic
│ │ ├── frontend/ # Svelte wizard UI
│ │ │ └── src/steps/ # Welcome, Config, Install, Complete
│ │ └── install/
│ │ ├── preflight.go # Dependency checks
│ │ ├── systemd.go # Service creation
│ │ └── verify.go # Post-install checks
│ │
│ └── proxy-server/ # NEW: WireGuard proxy
│ ├── main.go
│ ├── server.go # WG server
│ ├── auth.go # Nostr auth
│ └── registry.go # User management
├── pkg/
│ ├── tunnel/ # NEW: Embedded WG client
│ │ ├── tunnel.go # Main interface
│ │ ├── client.go # wireguard-go wrapper
│ │ ├── reconnect.go # Auto-reconnect
│ │ └── health.go # Connection health
│ │
│ ├── tray/ # NEW: System tray
│ │ ├── tray.go # Platform abstraction
│ │ ├── tray_linux.go # Linux implementation
│ │ ├── tray_darwin.go # macOS implementation
│ │ └── menu.go # Menu construction
│ │
│ ├── admin/ # NEW: Admin HTTP API
│ │ ├── api.go # Router
│ │ ├── status.go # GET /api/admin/status
│ │ ├── control.go # POST /api/admin/start|stop|restart
│ │ └── logs.go # GET /api/admin/logs (SSE)
│ │
│ └── interfaces/
│ ├── tunnel/tunnel.go # Tunnel interface
│ ├── tray/tray.go # Tray interface
│ └── admin/admin.go # Admin API interface
└── docs/
└── README.adoc # NEW: Textbook-style docs
```
---
## Implementation Phases
### Phase 1: Documentation Foundation
**Files to create/modify:**
- `README.adoc` - New textbook-style documentation
- `docs/` - Reorganize scattered docs
**README Structure (Textbook Style):**
```
Chapter 1: Quick Start (5-minute setup)
Chapter 2: Installation (platform-specific)
Chapter 3: Configuration (all env vars)
Chapter 4: Operations (systemd, monitoring)
Chapter 5: Security (TLS, ACLs, policy)
Chapter 6: Advanced (Neo4j, clustering, WoT)
Chapter 7: Architecture (internals)
Appendices: Reference tables, troubleshooting
```
### Phase 2: Admin API
**Files to create:**
- `pkg/admin/api.go` - Router and middleware
- `pkg/admin/status.go` - Status endpoint
- `pkg/admin/control.go` - Start/stop/restart
- `pkg/admin/logs.go` - Log streaming via SSE
- `pkg/interfaces/admin/admin.go` - Interface definition
**Files to modify:**
- `app/server.go` - Register `/api/admin/*` routes
- `app/config/config.go` - Add admin API config
**Endpoints:**
```
GET /api/admin/status - Relay status, uptime, connections
POST /api/admin/start - Start relay (when in tray mode)
POST /api/admin/stop - Graceful shutdown
POST /api/admin/restart - Graceful restart
GET /api/admin/logs - SSE log stream
```
### Phase 3: System Tray
**Files to create:**
- `pkg/tray/tray.go` - Platform abstraction
- `pkg/tray/tray_linux.go` - Linux (dbus/appindicator)
- `pkg/tray/tray_darwin.go` - macOS (NSStatusBar)
- `pkg/tray/menu.go` - Menu construction
- `pkg/interfaces/tray/tray.go` - Interface
**Files to modify:**
- `main.go` - Add `--tray` flag handling
- `app/config/config.go` - Add tray config
**Features:**
- Status icon (green/yellow/red)
- Start/Stop/Restart menu items
- Open Web UI (launches browser)
- View Logs submenu
- Auto-start on login toggle
### Phase 4: Installer GUI (Wails)
**Files to create:**
- `cmd/orly-setup/main.go` - Wails entry point
- `cmd/orly-setup/app.go` - Backend methods
- `cmd/orly-setup/frontend/` - Svelte wizard
- `cmd/orly-setup/install/preflight.go` - Dependency checks
- `cmd/orly-setup/install/systemd.go` - Service creation
- `cmd/orly-setup/install/config.go` - Config generation
- `cmd/orly-setup/install/verify.go` - Post-install checks
- `scripts/build-installer.sh` - Build script
**Wizard Steps:**
1. Welcome - Introduction, license
2. Preflight - Check Go, disk, ports
3. Configuration - Port, data dir, TLS domains
4. Admin Setup - Generate or import admin keys
5. Database - Choose Badger or Neo4j
6. WireGuard (optional) - Tunnel config
7. Installation - Create service, start relay
8. Complete - Verify and show status
### Phase 5: WireGuard Client
**Files to create:**
- `pkg/tunnel/tunnel.go` - Main interface
- `pkg/tunnel/client.go` - wireguard-go wrapper
- `pkg/tunnel/config.go` - WG configuration
- `pkg/tunnel/reconnect.go` - Auto-reconnect logic
- `pkg/tunnel/health.go` - Health monitoring
- `pkg/tunnel/handoff.go` - Graceful restart
- `pkg/interfaces/tunnel/tunnel.go` - Interface
**Files to modify:**
- `app/config/config.go` - Add WG config fields
- `app/main.go` - Initialize tunnel on startup
- `main.go` - Tunnel lifecycle management
**Config additions:**
```go
WGEnabled bool `env:"ORLY_WG_ENABLED" default:"false"`
WGServer string `env:"ORLY_WG_SERVER"`
WGPrivateKey string `env:"ORLY_WG_PRIVATE_KEY"`
WGServerPubKey string `env:"ORLY_WG_PUBLIC_KEY"`
WGKeepalive int `env:"ORLY_WG_KEEPALIVE" default:"25"`
WGMTU int `env:"ORLY_WG_MTU" default:"1280"`
WGReconnect bool `env:"ORLY_WG_RECONNECT" default:"true"`
```
### Phase 6: Proxy Server
**Files to create:**
- `cmd/proxy-server/main.go` - Entry point
- `cmd/proxy-server/server.go` - WG server management
- `cmd/proxy-server/auth.go` - Nostr-based auth
- `cmd/proxy-server/registry.go` - User/relay registry
- `cmd/proxy-server/bandwidth.go` - Traffic monitoring
- `cmd/proxy-server/config.go` - Server configuration
**Features:**
- WireGuard server (wireguard-go)
- Nostr event-based authentication (NIP-98 style)
- User registration via signed events
- Relay discovery and assignment
- Bandwidth monitoring and quotas
- Multi-tenant isolation
---
## Key Interfaces
### Tunnel Interface
```go
type Tunnel interface {
Connect(ctx context.Context) error
Disconnect() error
Status() TunnelStatus
Handoff() (*HandoffState, error)
Resume(state *HandoffState) error
}
```
### Admin API Interface
```go
type AdminAPI interface {
Status() (*RelayStatus, error)
Start() error
Stop() error
Restart() error
Logs(ctx context.Context, lines int) (<-chan LogEntry, error)
}
```
### Tray Interface
```go
type TrayApp interface {
Run() error
Quit()
UpdateStatus(status StatusLevel, tooltip string)
ShowNotification(title, message string)
}
```
---
## Dependencies to Add
```go
// go.mod additions
require (
github.com/wailsapp/wails/v2 v2.x.x // Installer GUI
golang.zx2c4.com/wireguard v0.x.x // WireGuard client
github.com/getlantern/systray v1.x.x // System tray (or fyne.io/systray)
)
```
---
## Build Commands
```bash
# Standard relay build (unchanged)
CGO_ENABLED=0 go build -o orly
# Relay with tray support
CGO_ENABLED=0 go build -tags tray -o orly
# Installer GUI
cd cmd/orly-setup && wails build -platform linux/amd64,darwin/amd64
# Proxy server
CGO_ENABLED=0 go build -o orly-proxy ./cmd/proxy-server
# All platforms
./scripts/build-all.sh
```
---
## Critical Files Reference
| File | Purpose |
|------|---------|
| `app/config/config.go` | Add WG, tray, admin API config |
| `app/server.go` | Register admin API routes |
| `main.go` | Add --tray flag, WG initialization |
| `scripts/deploy.sh` | Pattern for installer service creation |
| `app/web/src/App.svelte` | Pattern for installer UI |
---
## Backward Compatibility
- Main `orly` binary behavior unchanged without flags
- All new features opt-in via environment variables
- WireGuard gracefully degrades if connection fails
- Tray mode only activates with `--tray` flag
- Admin API can be disabled via `ORLY_ADMIN_API_ENABLED=false`
---
## Success Criteria
1. New user can install via GUI wizard in < 5 minutes
2. README guides user from zero to running relay
3. System tray provides one-click relay management
4. WireGuard tunnel auto-connects and reconnects
5. Proxy server enables home relay exposure without port forwarding
6. All existing functionality preserved

go.mod

@@ -3,7 +3,7 @@ module next.orly.dev
go 1.25.3
require (
git.mleku.dev/mleku/nostr v1.0.9
github.com/adrg/xdg v0.5.3
github.com/aperturerobotics/go-indexeddb v0.2.3
github.com/dgraph-io/badger/v4 v4.8.0

go.sum

@@ -1,5 +1,5 @@
git.mleku.dev/mleku/nostr v1.0.9 h1:aiN0ihnXzEpboXjW4u8qr5XokLQqg4P0XSZ1Y273qM0=
git.mleku.dev/mleku/nostr v1.0.9/go.mod h1:iYTlg2WKJXJ0kcsM6QBGOJ0UDiJidMgL/i64cHyPjZc=
github.com/BurntSushi/toml v1.5.0 h1:W5quZX/G/csjUnuI8SUYlsHs9M38FC7znL0lIO+DvMg=
github.com/BurntSushi/toml v1.5.0/go.mod h1:ukJfTF/6rtPPRCnwkur4qwRxa8vTRFBF0uk2lLoLwho=
github.com/ImVexed/fasturl v0.0.0-20230304231329-4e41488060f3 h1:ClzzXMDDuUbWfNNZqGeYq4PnYOlwlOVIvSyNaIy0ykg=


@@ -35,10 +35,12 @@ export ORLY_NEO4J_PASSWORD=password
## Features
- **Graph-Native Storage**: Events, authors, and tags stored as nodes and relationships
- **Unified Tag Model**: All tags (including e/p tags) stored as Tag nodes with REFERENCES relationships
- **Efficient Queries**: Leverages Neo4j's native graph traversal for tag and social graph queries
- **Cypher Query Language**: Powerful, expressive query language for complex filters
- **Automatic Indexing**: Unique constraints and indexes for optimal performance
- **Relationship Queries**: Native support for event references, mentions, and tags
- **Automatic Migrations**: Schema migrations run automatically on startup
- **Web of Trust (WoT) Extensions**: Optional support for trust metrics, social graph analysis, and content filtering (see [WOT_SPEC.md](./WOT_SPEC.md))
## Architecture
@@ -50,6 +52,23 @@ See [docs/NEO4J_BACKEND.md](../../docs/NEO4J_BACKEND.md) for comprehensive docum
- Development guide
- Comparison with other backends
### Tag-Based e/p Model
All tags, including `e` (event references) and `p` (pubkey mentions), are stored through intermediate Tag nodes:
```
Event -[:TAGGED_WITH]-> Tag{type:'e',value:eventId} -[:REFERENCES]-> Event
Event -[:TAGGED_WITH]-> Tag{type:'p',value:pubkey} -[:REFERENCES]-> NostrUser
Event -[:TAGGED_WITH]-> Tag{type:'t',value:topic} (no REFERENCES for regular tags)
```
**Benefits:**
- Unified tag querying: `#e` and `#p` filter queries work correctly
- Consistent data model: All tags use the same TAGGED_WITH pattern
- Graph traversal: Can traverse from events through tags to referenced entities
**Migration:** Existing databases with direct `REFERENCES`/`MENTIONS` relationships are automatically migrated at startup via v3 migration.
### Web of Trust (WoT) Extensions
This package includes schema support for Web of Trust trust metrics computation:
@@ -96,6 +115,8 @@ This package includes schema support for Web of Trust trust metrics computation:
### Tests
- `social-event-processor_test.go` - Comprehensive tests for kinds 0, 3, 1984, 10000
- `tag_model_test.go` - Tag-based e/p model tests and filter query tests
- `save-event_test.go` - Event storage and relationship tests
## Testing
@@ -166,11 +187,25 @@ MATCH (e:Event)-[:TAGGED_WITH]->(t:Tag {type: "t", value: "bitcoin"})
RETURN e
```
### Event reference query (e-tags)
```cypher
MATCH (e:Event)-[:TAGGED_WITH]->(t:Tag {type: "e"})-[:REFERENCES]->(ref:Event)
WHERE e.id = "abc123..."
RETURN e, ref
```
### Mentions query (p-tags)
```cypher
MATCH (e:Event)-[:TAGGED_WITH]->(t:Tag {type: "p"})-[:REFERENCES]->(u:NostrUser)
WHERE e.id = "abc123..."
RETURN e, u
```
### Social graph query

```cypher
MATCH (author:NostrUser {pubkey: "abc123..."})
  <-[:AUTHORED_BY]-(e:Event)
  -[:TAGGED_WITH]->(:Tag {type: "p"})-[:REFERENCES]->(mentioned:NostrUser)
RETURN author, e, mentioned
```


@@ -125,6 +125,40 @@ Legacy node label that is redundant with SetOfNostrUserWotMetricsCards. Should b

### Relationship Types
#### Tag-Based References (e and p tags)
The Neo4j backend uses a unified Tag-based model for `e` and `p` tags, enabling consistent tag querying while maintaining graph traversal capabilities.
**E-tags (Event References):**
```
(Event)-[:TAGGED_WITH]->(Tag {type: 'e', value: <event_id>})-[:REFERENCES]->(Event)
```
**P-tags (Pubkey Mentions):**
```
(Event)-[:TAGGED_WITH]->(Tag {type: 'p', value: <pubkey>})-[:REFERENCES]->(NostrUser)
```
This model provides:
- Unified tag querying via `#e` and `#p` filters (same as other tags)
- Graph traversal from events to referenced events/users
- Consistent indexing through existing Tag node indexes
**Query Examples:**
```cypher
// Find all events that reference a specific event
MATCH (e:Event)-[:TAGGED_WITH]->(t:Tag {type: 'e', value: $eventId})-[:REFERENCES]->(ref:Event)
RETURN e

// Find all events that mention a specific pubkey
MATCH (e:Event)-[:TAGGED_WITH]->(t:Tag {type: 'p', value: $pubkey})-[:REFERENCES]->(u:NostrUser)
RETURN e

// Count references to an event (thread replies)
MATCH (t:Tag {type: 'e', value: $eventId})<-[:TAGGED_WITH]-(e:Event)
RETURN count(e) AS replyCount
```
#### 1. FOLLOWS

Represents a follow relationship between users (derived from kind 3 events).

@@ -151,11 +185,18 @@ Represents a report filed against a user (derived from kind 1984 events).

**Direction:** `(reporter:NostrUser)-[:REPORTS]->(reported:NostrUser)`

**Deduplication:** Only one REPORTS relationship exists per (reporter, reported, report_type) combination.
Multiple reports of the same type from the same user to the same target update the existing
relationship with the most recent event's data. This prevents double-counting in GrapeRank
calculations while maintaining audit trails via ProcessedSocialEvent nodes.

**Properties:**
- `report_type` (string) - NIP-56 report type (impersonation, spam, illegal, malware, nsfw, etc.)
- `created_at` (integer) - Timestamp of the most recent report event
- `created_by_event` (string) - Event ID of the most recent report
- `relay_received_at` (integer) - When the relay first received any report of this type

**Source:** Created/updated from kind 1984 (reporting) events
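The deduplication rule above can be modeled in plain Go. This is a minimal in-memory sketch of the `MERGE ... ON CREATE / ON MATCH` semantics (illustrative names, not the relay's storage code):

```go
package main

import "fmt"

// reportKey is the deduplication key: at most one REPORTS relationship may
// exist per (reporter, reported, report_type) combination.
type reportKey struct{ reporter, reported, reportType string }

// reportRel mirrors the REPORTS relationship properties that get updated.
type reportRel struct {
	createdAt      int64  // created_at of the newest report event
	createdByEvent string // created_by_event: id of the newest report event
}

// applyReport reproduces the MERGE ... ON CREATE / ON MATCH semantics in
// memory: insert if absent, otherwise keep whichever event is newer.
func applyReport(rels map[reportKey]reportRel, k reportKey, createdAt int64, eventID string) {
	cur, ok := rels[k]
	if !ok || createdAt > cur.createdAt {
		rels[k] = reportRel{createdAt: createdAt, createdByEvent: eventID}
	}
}

func main() {
	rels := map[reportKey]reportRel{}
	k := reportKey{"alice", "bob", "spam"}
	applyReport(rels, k, 100, "evt1")
	applyReport(rels, k, 90, "evt0")  // older report: ignored
	applyReport(rels, k, 110, "evt2") // newer report: supersedes evt1
	fmt.Println(len(rels), rels[k].createdByEvent) // prints "1 evt2"
}
```

Regardless of how many kind 1984 events arrive, the key space stays one entry per (reporter, reported, report_type), which is what keeps GrapeRank counts honest.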
#### 4. WOT_METRICS_CARDS

@@ -187,7 +228,7 @@ The WoT model processes the following Nostr event kinds:

|------|------|---------|--------------|
| 0 | Profile Metadata | User profile information | Update NostrUser properties (npub, name, etc.) |
| 3 | Contact List | Follow list | Create/update FOLLOWS relationships |
| 1984 | Reporting | Report users/content | Create/update REPORTS relationships (deduplicated by report_type) |
| 10000 | Mute List | Mute list | Create/update MUTES relationships |
| 30382 | Trusted Assertion (NIP-85) | Published trust metrics | Create/update NostrUserWotMetricsCard nodes |
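The kind-to-action mapping in the table above can be summarized as a dispatch sketch in Go (illustrative only, not the processor's actual code):

```go
package main

import "fmt"

// wotAction maps a processed event kind to the graph update it drives,
// summarizing the WoT event-kind table. The kinds and actions follow the
// spec; the function itself is an illustrative sketch.
func wotAction(kind int) string {
	switch kind {
	case 0:
		return "update NostrUser profile properties"
	case 3:
		return "create/update FOLLOWS relationships"
	case 1984:
		return "create/update REPORTS relationships (deduplicated by report_type)"
	case 10000:
		return "create/update MUTES relationships"
	case 30382:
		return "create/update NostrUserWotMetricsCard nodes"
	default:
		return "not a WoT social event"
	}
}

func main() {
	for _, k := range []int{0, 3, 1984, 10000, 30382, 1} {
		fmt.Printf("kind %d: %s\n", k, wotAction(k))
	}
}
```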
@@ -247,8 +288,9 @@ Comprehensive implementation with additional features:

- `IS_A_REACTION_TO` (kind 7 reactions)
- `IS_A_RESPONSE_TO` (kind 1 replies)
- `IS_A_REPOST_OF` (kind 6, kind 16 reposts)
- Tag-based references (see "Tag-Based References" section above):
  - `Event-[:TAGGED_WITH]->Tag{type:'p'}-[:REFERENCES]->NostrUser` (p-tag mentions)
  - `Event-[:TAGGED_WITH]->Tag{type:'e'}-[:REFERENCES]->Event` (e-tag references)
- NostrRelay, CashuMint nodes for ecosystem mapping
- Enhanced GrapeRank incorporating zaps, replies, reactions


@@ -175,14 +175,15 @@ func (n *N) ProcessDelete(ev *event.E, admins [][]byte) error {

// CheckForDeleted checks if an event has been deleted
func (n *N) CheckForDeleted(ev *event.E, admins [][]byte) error {
	// Query for kind 5 events that reference this event via Tag nodes
	ctx := context.Background()
	idStr := hex.Enc(ev.ID[:])

	// Build cypher query to find deletion events
	// Traverses through Tag nodes: Event-[:TAGGED_WITH]->Tag-[:REFERENCES]->Event
	cypher := `
		MATCH (target:Event {id: $targetId})
		MATCH (delete:Event {kind: 5})-[:TAGGED_WITH]->(t:Tag {type: 'e'})-[:REFERENCES]->(target)
		WHERE delete.pubkey = $pubkey OR delete.pubkey IN $admins
		RETURN delete.id AS id
		LIMIT 1`


@@ -25,6 +25,16 @@ var migrations = []Migration{
		Description: "Clean up binary-encoded pubkeys and event IDs to lowercase hex",
		Migrate:     migrateBinaryToHex,
	},
	{
		Version:     "v3",
		Description: "Convert direct REFERENCES/MENTIONS relationships to Tag-based model",
		Migrate:     migrateToTagBasedReferences,
	},
	{
		Version:     "v4",
		Description: "Deduplicate REPORTS relationships by (reporter, reported, report_type)",
		Migrate:     migrateDeduplicateReports,
	},
}

// RunMigrations executes all pending migrations

@@ -343,3 +353,245 @@ func migrateBinaryToHex(ctx context.Context, n *N) error {
	n.Logger.Infof("binary-to-hex migration completed successfully")
	return nil
}
// migrateToTagBasedReferences converts direct REFERENCES and MENTIONS relationships
// to the new Tag-based model where:
// - Event-[:REFERENCES]->Event becomes Event-[:TAGGED_WITH]->Tag-[:REFERENCES]->Event
// - Event-[:MENTIONS]->NostrUser becomes Event-[:TAGGED_WITH]->Tag-[:REFERENCES]->NostrUser
//
// This enables unified tag querying via #e and #p filters while maintaining graph traversal.
func migrateToTagBasedReferences(ctx context.Context, n *N) error {
	// Step 1: Count existing direct REFERENCES relationships (Event->Event)
	countRefCypher := `
		MATCH (source:Event)-[r:REFERENCES]->(target:Event)
		RETURN count(r) AS count
	`
	result, err := n.ExecuteRead(ctx, countRefCypher, nil)
	if err != nil {
		return fmt.Errorf("failed to count REFERENCES relationships: %w", err)
	}
	var refCount int64
	if result.Next(ctx) {
		if count, ok := result.Record().Values[0].(int64); ok {
			refCount = count
		}
	}
	n.Logger.Infof("found %d direct Event-[:REFERENCES]->Event relationships to migrate", refCount)

	// Step 2: Count existing direct MENTIONS relationships (Event->NostrUser)
	countMentionsCypher := `
		MATCH (source:Event)-[r:MENTIONS]->(target:NostrUser)
		RETURN count(r) AS count
	`
	result, err = n.ExecuteRead(ctx, countMentionsCypher, nil)
	if err != nil {
		return fmt.Errorf("failed to count MENTIONS relationships: %w", err)
	}
	var mentionsCount int64
	if result.Next(ctx) {
		if count, ok := result.Record().Values[0].(int64); ok {
			mentionsCount = count
		}
	}
	n.Logger.Infof("found %d direct Event-[:MENTIONS]->NostrUser relationships to migrate", mentionsCount)

	// If nothing to migrate, we're done
	if refCount == 0 && mentionsCount == 0 {
		n.Logger.Infof("no direct relationships to migrate, migration complete")
		return nil
	}

	// Step 3: Migrate REFERENCES relationships to Tag-based model
	// Process in batches to avoid memory issues with large datasets
	if refCount > 0 {
		n.Logger.Infof("migrating %d REFERENCES relationships to Tag-based model...", refCount)

		// This query:
		// 1. Finds Event->Event REFERENCES relationships
		// 2. Creates/merges Tag node with type='e' and value=target event ID
		// 3. Creates TAGGED_WITH from source Event to Tag
		// 4. Creates REFERENCES from Tag to target Event
		// 5. Deletes the old direct REFERENCES relationship
		migrateRefCypher := `
			MATCH (source:Event)-[r:REFERENCES]->(target:Event)
			WITH source, r, target LIMIT 1000
			MERGE (t:Tag {type: 'e', value: target.id})
			CREATE (source)-[:TAGGED_WITH]->(t)
			MERGE (t)-[:REFERENCES]->(target)
			DELETE r
			RETURN count(r) AS migrated
		`
		// Run migration in batches until no more relationships exist
		totalMigrated := int64(0)
		for {
			result, err := n.ExecuteWrite(ctx, migrateRefCypher, nil)
			if err != nil {
				return fmt.Errorf("failed to migrate REFERENCES batch: %w", err)
			}
			var batchMigrated int64
			if result.Next(ctx) {
				if count, ok := result.Record().Values[0].(int64); ok {
					batchMigrated = count
				}
			}
			if batchMigrated == 0 {
				break
			}
			totalMigrated += batchMigrated
			n.Logger.Infof("migrated %d REFERENCES relationships (total: %d)", batchMigrated, totalMigrated)
		}
		n.Logger.Infof("completed migrating %d REFERENCES relationships", totalMigrated)
	}

	// Step 4: Migrate MENTIONS relationships to Tag-based model
	if mentionsCount > 0 {
		n.Logger.Infof("migrating %d MENTIONS relationships to Tag-based model...", mentionsCount)

		// This query:
		// 1. Finds Event->NostrUser MENTIONS relationships
		// 2. Creates/merges Tag node with type='p' and value=target pubkey
		// 3. Creates TAGGED_WITH from source Event to Tag
		// 4. Creates REFERENCES from Tag to target NostrUser
		// 5. Deletes the old direct MENTIONS relationship
		migrateMentionsCypher := `
			MATCH (source:Event)-[r:MENTIONS]->(target:NostrUser)
			WITH source, r, target LIMIT 1000
			MERGE (t:Tag {type: 'p', value: target.pubkey})
			CREATE (source)-[:TAGGED_WITH]->(t)
			MERGE (t)-[:REFERENCES]->(target)
			DELETE r
			RETURN count(r) AS migrated
		`
		// Run migration in batches until no more relationships exist
		totalMigrated := int64(0)
		for {
			result, err := n.ExecuteWrite(ctx, migrateMentionsCypher, nil)
			if err != nil {
				return fmt.Errorf("failed to migrate MENTIONS batch: %w", err)
			}
			var batchMigrated int64
			if result.Next(ctx) {
				if count, ok := result.Record().Values[0].(int64); ok {
					batchMigrated = count
				}
			}
			if batchMigrated == 0 {
				break
			}
			totalMigrated += batchMigrated
			n.Logger.Infof("migrated %d MENTIONS relationships (total: %d)", batchMigrated, totalMigrated)
		}
		n.Logger.Infof("completed migrating %d MENTIONS relationships", totalMigrated)
	}

	n.Logger.Infof("Tag-based references migration completed successfully")
	return nil
}
// migrateDeduplicateReports removes duplicate REPORTS relationships.
// Prior to this migration, processReport() used CREATE, which allowed multiple
// REPORTS relationships with the same report_type between the same two users.
// This migration keeps only the most recent report (by created_at) for each
// (reporter, reported, report_type) combination.
func migrateDeduplicateReports(ctx context.Context, n *N) error {
	// Step 1: Count duplicate REPORTS relationships
	// Duplicates are defined as multiple REPORTS with the same (reporter, reported, report_type)
	countDuplicatesCypher := `
		MATCH (reporter:NostrUser)-[r:REPORTS]->(reported:NostrUser)
		WITH reporter, reported, r.report_type AS type, collect(r) AS rels
		WHERE size(rels) > 1
		RETURN sum(size(rels) - 1) AS duplicate_count
	`
	result, err := n.ExecuteRead(ctx, countDuplicatesCypher, nil)
	if err != nil {
		return fmt.Errorf("failed to count duplicate REPORTS: %w", err)
	}
	var duplicateCount int64
	if result.Next(ctx) {
		if count, ok := result.Record().Values[0].(int64); ok {
			duplicateCount = count
		}
	}
	if duplicateCount == 0 {
		n.Logger.Infof("no duplicate REPORTS relationships found, migration complete")
		return nil
	}
	n.Logger.Infof("found %d duplicate REPORTS relationships to remove", duplicateCount)

	// Step 2: Delete duplicate REPORTS, keeping the one with the highest created_at
	// This query:
	// 1. Groups REPORTS by (reporter, reported, report_type)
	// 2. Finds the maximum created_at for each group
	// 3. Deletes all relationships in the group except the newest one
	deleteDuplicatesCypher := `
		MATCH (reporter:NostrUser)-[r:REPORTS]->(reported:NostrUser)
		WITH reporter, reported, r.report_type AS type,
		     collect(r) AS rels, max(r.created_at) AS maxCreatedAt
		WHERE size(rels) > 1
		UNWIND rels AS rel
		WITH rel, maxCreatedAt
		WHERE rel.created_at < maxCreatedAt
		DELETE rel
		RETURN count(*) AS deleted
	`
	writeResult, err := n.ExecuteWrite(ctx, deleteDuplicatesCypher, nil)
	if err != nil {
		return fmt.Errorf("failed to delete duplicate REPORTS: %w", err)
	}
	var deletedCount int64
	if writeResult.Next(ctx) {
		if count, ok := writeResult.Record().Values[0].(int64); ok {
			deletedCount = count
		}
	}
	n.Logger.Infof("deleted %d duplicate REPORTS relationships", deletedCount)

	// Step 3: Mark superseded ProcessedSocialEvent nodes for deleted reports
	// Find ProcessedSocialEvent nodes (kind 1984) whose event IDs are no longer
	// referenced by any REPORTS relationship's created_by_event
	markSupersededCypher := `
		MATCH (evt:ProcessedSocialEvent {event_kind: 1984})
		WHERE evt.superseded_by IS NULL
		AND NOT EXISTS {
			MATCH ()-[r:REPORTS]->()
			WHERE r.created_by_event = evt.event_id
		}
		SET evt.superseded_by = 'migration_v4_dedupe'
		RETURN count(evt) AS superseded
	`
	markResult, err := n.ExecuteWrite(ctx, markSupersededCypher, nil)
	if err != nil {
		// Non-fatal - just log a warning
		n.Logger.Warningf("failed to mark superseded ProcessedSocialEvent nodes: %v", err)
	} else {
		var supersededCount int64
		if markResult.Next(ctx) {
			if count, ok := markResult.Record().Values[0].(int64); ok {
				supersededCount = count
			}
		}
		if supersededCount > 0 {
			n.Logger.Infof("marked %d ProcessedSocialEvent nodes as superseded", supersededCount)
		}
	}

	n.Logger.Infof("REPORTS deduplication migration completed successfully")
	return nil
}


@@ -238,7 +238,8 @@ func (n *N) addTagsInBatches(c context.Context, eventID string, ev *event.E) err
}

// addPTagsInBatches adds p-tag (pubkey mention) relationships using UNWIND for efficiency.
// Creates Tag nodes with type='p' and REFERENCES relationships to NostrUser nodes.
// This enables unified tag querying via #p filters while maintaining the social graph.
func (n *N) addPTagsInBatches(c context.Context, eventID string, pTags []string) error {
	// Process in batches to avoid memory issues
	for i := 0; i < len(pTags); i += tagBatchSize {

@@ -249,12 +250,17 @@ func (n *N) addPTagsInBatches(c context.Context, eventID string, pTags []string)
		batch := pTags[i:end]

		// Use UNWIND to process multiple p-tags in a single query
		// Creates Tag nodes as intermediaries, enabling unified #p filter queries
		// Tag-[:REFERENCES]->NostrUser allows graph traversal from tag to user
		cypher := `
MATCH (e:Event {id: $eventId})
UNWIND $pubkeys AS pubkey
MERGE (t:Tag {type: 'p', value: pubkey})
CREATE (e)-[:TAGGED_WITH]->(t)
WITH t, pubkey
MERGE (u:NostrUser {pubkey: pubkey})
ON CREATE SET u.created_at = timestamp()
MERGE (t)-[:REFERENCES]->(u)`

		params := map[string]any{
			"eventId": eventID,

@@ -270,7 +276,8 @@ CREATE (e)-[:MENTIONS]->(u)`
}

// addETagsInBatches adds e-tag (event reference) relationships using UNWIND for efficiency.
// Creates Tag nodes with type='e' and REFERENCES relationships to Event nodes (if they exist).
// This enables unified tag querying via #e filters while maintaining event graph structure.
func (n *N) addETagsInBatches(c context.Context, eventID string, eTags []string) error {
	// Process in batches to avoid memory issues
	for i := 0; i < len(eTags); i += tagBatchSize {

@@ -281,14 +288,18 @@ func (n *N) addETagsInBatches(c context.Context, eventID string, eTags []string)
		batch := eTags[i:end]

		// Use UNWIND to process multiple e-tags in a single query
		// Creates Tag nodes as intermediaries, enabling unified #e filter queries
		// Tag-[:REFERENCES]->Event allows graph traversal from tag to referenced event
		// OPTIONAL MATCH ensures we only create REFERENCES if referenced event exists
		cypher := `
MATCH (e:Event {id: $eventId})
UNWIND $eventIds AS refId
MERGE (t:Tag {type: 'e', value: refId})
CREATE (e)-[:TAGGED_WITH]->(t)
WITH t, refId
OPTIONAL MATCH (ref:Event {id: refId})
WITH t, ref
WHERE ref IS NOT NULL
MERGE (t)-[:REFERENCES]->(ref)`

		params := map[string]any{
			"eventId": eventID,


@@ -151,7 +151,7 @@ func TestSafePrefix(t *testing.T) {
}

// TestSaveEvent_ETagReference tests that events with e-tags are saved correctly
// using the Tag-based model: Event-[:TAGGED_WITH]->Tag-[:REFERENCES]->Event.
// Uses shared testDB from testmain_test.go to avoid auth rate limiting.
func TestSaveEvent_ETagReference(t *testing.T) {
	if testDB == nil {

@@ -226,10 +226,10 @@ func TestSaveEvent_ETagReference(t *testing.T) {
		t.Fatal("Reply event should not exist yet")
	}

	// Verify Tag-based e-tag model: Event-[:TAGGED_WITH]->Tag{type:'e'}-[:REFERENCES]->Event
	cypher := `
		MATCH (reply:Event {id: $replyId})-[:TAGGED_WITH]->(t:Tag {type: 'e', value: $rootId})-[:REFERENCES]->(root:Event {id: $rootId})
		RETURN reply.id AS replyId, t.value AS tagValue, root.id AS rootId
	`
	params := map[string]any{
		"replyId": hex.Enc(replyEvent.ID[:]),
@@ -238,42 +238,43 @@ func TestSaveEvent_ETagReference(t *testing.T) {
	result, err := testDB.ExecuteRead(ctx, cypher, params)
	if err != nil {
		t.Fatalf("Failed to query Tag-based REFERENCES: %v", err)
	}
	if !result.Next(ctx) {
		t.Error("Expected Tag-based REFERENCES relationship between reply and root events")
	} else {
		record := result.Record()
		returnedReplyId := record.Values[0].(string)
		tagValue := record.Values[1].(string)
		returnedRootId := record.Values[2].(string)
		t.Logf("✓ Tag-based REFERENCES verified: Event(%s) -> Tag{e:%s} -> Event(%s)", returnedReplyId[:8], tagValue[:8], returnedRootId[:8])
	}

	// Verify Tag-based p-tag model: Event-[:TAGGED_WITH]->Tag{type:'p'}-[:REFERENCES]->NostrUser
	pTagCypher := `
		MATCH (reply:Event {id: $replyId})-[:TAGGED_WITH]->(t:Tag {type: 'p', value: $authorPubkey})-[:REFERENCES]->(author:NostrUser {pubkey: $authorPubkey})
		RETURN author.pubkey AS pubkey, t.value AS tagValue
	`
	pTagParams := map[string]any{
		"replyId":      hex.Enc(replyEvent.ID[:]),
		"authorPubkey": hex.Enc(alice.Pub()),
	}
	pTagResult, err := testDB.ExecuteRead(ctx, pTagCypher, pTagParams)
	if err != nil {
		t.Fatalf("Failed to query Tag-based p-tag: %v", err)
	}
	if !pTagResult.Next(ctx) {
		t.Error("Expected Tag-based p-tag relationship")
	} else {
		t.Logf("✓ Tag-based p-tag relationship verified")
	}
}
// TestSaveEvent_ETagMissingReference tests that e-tags to non-existent events
// create Tag nodes but don't create REFERENCES relationships to missing events.
// Uses shared testDB from testmain_test.go to avoid auth rate limiting.
func TestSaveEvent_ETagMissingReference(t *testing.T) {
	if testDB == nil {
@@ -331,29 +332,50 @@ func TestSaveEvent_ETagMissingReference(t *testing.T) {
		t.Error("Event should have been saved despite missing reference")
	}

	// Verify Tag node was created with TAGGED_WITH relationship
	tagCypher := `
		MATCH (e:Event {id: $eventId})-[:TAGGED_WITH]->(t:Tag {type: 'e', value: $refId})
		RETURN t.value AS tagValue
	`
	tagParams := map[string]any{
		"eventId": hex.Enc(ev.ID[:]),
		"refId":   nonExistentEventID,
	}
	tagResult, err := testDB.ExecuteRead(ctx, tagCypher, tagParams)
	if err != nil {
		t.Fatalf("Failed to check Tag node: %v", err)
	}
	if !tagResult.Next(ctx) {
		t.Error("Expected Tag node to be created for e-tag even when target doesn't exist")
	} else {
		t.Logf("✓ Tag node created for missing reference")
	}

	// Verify no REFERENCES relationship was created from Tag (as the target Event doesn't exist)
	refCypher := `
		MATCH (t:Tag {type: 'e', value: $refId})-[:REFERENCES]->(ref:Event)
		RETURN count(ref) AS refCount
	`
	refParams := map[string]any{"refId": nonExistentEventID}
	refResult, err := testDB.ExecuteRead(ctx, refCypher, refParams)
	if err != nil {
		t.Fatalf("Failed to check REFERENCES from Tag: %v", err)
	}
	if refResult.Next(ctx) {
		count := refResult.Record().Values[0].(int64)
		if count > 0 {
			t.Errorf("Expected no REFERENCES from Tag for non-existent event, got %d", count)
		} else {
			t.Logf("✓ Correctly handled missing reference (no REFERENCES from Tag)")
		}
	}
}
// TestSaveEvent_MultipleETags tests events with multiple e-tags using the Tag-based model.
// Uses shared testDB from testmain_test.go to avoid auth rate limiting.
func TestSaveEvent_MultipleETags(t *testing.T) {
	if testDB == nil {

@@ -409,7 +431,7 @@ func TestSaveEvent_MultipleETags(t *testing.T) {
		t.Fatalf("Failed to sign reply event: %v", err)
	}

	// Save reply event - tests batched e-tag creation with Tag nodes
	exists, err := testDB.SaveEvent(ctx, replyEvent)
	if err != nil {
		t.Fatalf("Failed to save multi-reference event: %v", err)
@@ -418,16 +440,17 @@ func TestSaveEvent_MultipleETags(t *testing.T) {
		t.Fatal("Reply event should not exist yet")
	}

	// Verify all Tag-based REFERENCES relationships were created
	// Event-[:TAGGED_WITH]->Tag{type:'e'}-[:REFERENCES]->Event
	cypher := `
		MATCH (reply:Event {id: $replyId})-[:TAGGED_WITH]->(t:Tag {type: 'e'})-[:REFERENCES]->(ref:Event)
		RETURN ref.id AS refId
	`
	params := map[string]any{"replyId": hex.Enc(replyEvent.ID[:])}
	result, err := testDB.ExecuteRead(ctx, cypher, params)
	if err != nil {
		t.Fatalf("Failed to query Tag-based REFERENCES: %v", err)
	}

	referencedIDs := make(map[string]bool)

@@ -437,20 +460,20 @@ func TestSaveEvent_MultipleETags(t *testing.T) {
	}

	if len(referencedIDs) != 3 {
		t.Errorf("Expected 3 Tag-based REFERENCES, got %d", len(referencedIDs))
	}
	for i, id := range eventIDs {
		if !referencedIDs[id] {
			t.Errorf("Missing Tag-based REFERENCES to event %d (%s)", i, id[:8])
		}
	}
	t.Logf("✓ All %d Tag-based REFERENCES created successfully", len(referencedIDs))
}

// TestSaveEvent_LargePTagBatch tests that events with many p-tags are saved correctly
// using batched Tag-based processing to avoid Neo4j stack overflow.
// Uses shared testDB from testmain_test.go to avoid auth rate limiting.
func TestSaveEvent_LargePTagBatch(t *testing.T) {
	if testDB == nil {
@@ -498,24 +521,45 @@ func TestSaveEvent_LargePTagBatch(t *testing.T) {
		t.Fatal("Event should not exist yet")
	}

	// Verify all Tag nodes were created with TAGGED_WITH relationships
	tagCountCypher := `
		MATCH (e:Event {id: $eventId})-[:TAGGED_WITH]->(t:Tag {type: 'p'})
		RETURN count(t) AS tagCount
	`
	tagCountParams := map[string]any{"eventId": hex.Enc(ev.ID[:])}
	tagResult, err := testDB.ExecuteRead(ctx, tagCountCypher, tagCountParams)
	if err != nil {
		t.Fatalf("Failed to count p-tag Tag nodes: %v", err)
	}
	if tagResult.Next(ctx) {
		count := tagResult.Record().Values[0].(int64)
		if count != int64(numTags) {
			t.Errorf("Expected %d Tag nodes, got %d", numTags, count)
		} else {
			t.Logf("✓ All %d p-tag Tag nodes created via batched processing", count)
		}
	}

	// Verify all REFERENCES relationships to NostrUser were created
	refCountCypher := `
		MATCH (e:Event {id: $eventId})-[:TAGGED_WITH]->(t:Tag {type: 'p'})-[:REFERENCES]->(u:NostrUser)
		RETURN count(u) AS refCount
	`
	refCountParams := map[string]any{"eventId": hex.Enc(ev.ID[:])}
	refResult, err := testDB.ExecuteRead(ctx, refCountCypher, refCountParams)
	if err != nil {
		t.Fatalf("Failed to count Tag-based REFERENCES to NostrUser: %v", err)
	}
	if refResult.Next(ctx) {
		count := refResult.Record().Values[0].(int64)
		if count != int64(numTags) {
			t.Errorf("Expected %d REFERENCES to NostrUser, got %d", numTags, count)
		} else {
			t.Logf("✓ All %d Tag-based REFERENCES to NostrUser created via batched processing", count)
		}
	}
}


@@ -211,6 +211,8 @@ func (p *SocialEventProcessor) processMuteList(ctx context.Context, ev *event.E)
} }
// processReport handles kind 1984 events (reports) // processReport handles kind 1984 events (reports)
// Deduplicates by (reporter, reported, report_type) - only one REPORTS relationship
// per combination, with the most recent event's data preserved.
func (p *SocialEventProcessor) processReport(ctx context.Context, ev *event.E) error { func (p *SocialEventProcessor) processReport(ctx context.Context, ev *event.E) error {
reporterPubkey := hex.Enc(ev.Pubkey[:]) reporterPubkey := hex.Enc(ev.Pubkey[:])
eventID := hex.Enc(ev.ID[:]) eventID := hex.Enc(ev.ID[:])
@@ -236,8 +238,14 @@ func (p *SocialEventProcessor) processReport(ctx context.Context, ev *event.E) error {
		return nil
	}

	// Check for existing report of the same type to determine if this is an update
	existingEventID, err := p.getExistingReportEvent(ctx, reporterPubkey, reportedPubkey, reportType)
	if err != nil {
		return fmt.Errorf("failed to check existing report: %w", err)
	}

	// Create REPORTS relationship with MERGE to deduplicate
	// MERGE on (reporter, reported, report_type) ensures only one relationship per combination
	cypher := `
	// Create event tracking node
	CREATE (evt:ProcessedSocialEvent {
@@ -257,13 +265,18 @@ func (p *SocialEventProcessor) processReport(ctx context.Context, ev *event.E) error {
	MERGE (reporter:NostrUser {pubkey: $reporter_pubkey})
	MERGE (reported:NostrUser {pubkey: $reported_pubkey})

	// MERGE on (reporter, reported, report_type) - deduplicate!
	MERGE (reporter)-[r:REPORTS {report_type: $report_type}]->(reported)
	ON CREATE SET
		r.created_by_event = $event_id,
		r.created_at = $created_at,
		r.relay_received_at = timestamp()
	ON MATCH SET
		// Only update if this event is newer
		r.created_by_event = CASE WHEN $created_at > r.created_at
			THEN $event_id ELSE r.created_by_event END,
		r.created_at = CASE WHEN $created_at > r.created_at
			THEN $created_at ELSE r.created_at END
	`
	params := map[string]any{
@@ -274,9 +287,14 @@ func (p *SocialEventProcessor) processReport(ctx context.Context, ev *event.E) error {
		"report_type": reportType,
	}

	_, err = p.db.ExecuteWrite(ctx, cypher, params)
	if err != nil {
		return fmt.Errorf("failed to create/update report: %w", err)
	}

	// Mark old ProcessedSocialEvent as superseded if this is an update with newer data
	if existingEventID != "" && existingEventID != eventID {
		p.markReportEventSuperseded(ctx, existingEventID, eventID)
	}

	p.db.Logger.Infof("processed report: reporter=%s, reported=%s, type=%s",
@@ -285,6 +303,52 @@ func (p *SocialEventProcessor) processReport(ctx context.Context, ev *event.E) error {
	return nil
}

// getExistingReportEvent checks if a REPORTS relationship already exists for this combination.
// Returns the event ID that created the relationship, or empty string if none exists.
func (p *SocialEventProcessor) getExistingReportEvent(ctx context.Context, reporterPubkey, reportedPubkey, reportType string) (string, error) {
	cypher := `
		MATCH (reporter:NostrUser {pubkey: $reporter_pubkey})-[r:REPORTS {report_type: $report_type}]->(reported:NostrUser {pubkey: $reported_pubkey})
		RETURN r.created_by_event AS event_id
		LIMIT 1
	`
	params := map[string]any{
		"reporter_pubkey": reporterPubkey,
		"reported_pubkey": reportedPubkey,
		"report_type":     reportType,
	}
	result, err := p.db.ExecuteRead(ctx, cypher, params)
	if err != nil {
		return "", err
	}
	if result.Next(ctx) {
		record := result.Record()
		if eventID, ok := record.Values[0].(string); ok {
			return eventID, nil
		}
	}
	return "", nil
}

// markReportEventSuperseded marks an older ProcessedSocialEvent as superseded by a newer one.
func (p *SocialEventProcessor) markReportEventSuperseded(ctx context.Context, oldEventID, newEventID string) {
	cypher := `
		MATCH (old:ProcessedSocialEvent {event_id: $old_event_id, event_kind: 1984})
		SET old.superseded_by = $new_event_id
	`
	params := map[string]any{
		"old_event_id": oldEventID,
		"new_event_id": newEventID,
	}
	// Ignore errors - the old event may not exist
	p.db.ExecuteWrite(ctx, cypher, params)
}

// UpdateContactListParams holds parameters for contact list graph update
type UpdateContactListParams struct {
	AuthorPubkey string
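The ON CREATE / ON MATCH clauses in the Cypher above reduce to a small newest-wins decision rule. As an illustration only, here is a Go sketch of that rule; the `reportRel` struct and `mergeReport` function are hypothetical helpers, not part of the codebase:

```go
package main

import "fmt"

// reportRel mirrors the properties stored on a REPORTS relationship.
type reportRel struct {
	CreatedByEvent string
	CreatedAt      int64
	ReportType     string
}

// mergeReport applies the same rule as the Cypher ON CREATE / ON MATCH SET:
// create on first sight, and on a match only overwrite when the incoming
// event's created_at is strictly newer than the stored one.
func mergeReport(existing *reportRel, eventID string, createdAt int64, reportType string) reportRel {
	if existing == nil {
		// ON CREATE: first report for this (reporter, reported, type) combination
		return reportRel{CreatedByEvent: eventID, CreatedAt: createdAt, ReportType: reportType}
	}
	// ON MATCH: keep whichever event is newer
	if createdAt > existing.CreatedAt {
		return reportRel{CreatedByEvent: eventID, CreatedAt: createdAt, ReportType: reportType}
	}
	return *existing
}

func main() {
	first := mergeReport(nil, "event-1", 1000, "spam")
	updated := mergeReport(&first, "event-2", 2000, "spam")
	stale := mergeReport(&updated, "event-0", 500, "spam")
	fmt.Println(updated.CreatedByEvent, stale.CreatedByEvent) // event-2 event-2
}
```

Note that a late-arriving older event neither creates a duplicate nor rolls the stored data back, which is exactly what the CASE expressions guarantee.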


@@ -737,3 +737,264 @@ func BenchmarkDiffComputation(b *testing.B) {
		_, _ = diffStringSlices(old, new)
	}
}
// TestReportDeduplication tests that duplicate REPORTS are deduplicated
func TestReportDeduplication(t *testing.T) {
	if testDB == nil {
		t.Skip("Neo4j not available")
	}
	ctx := context.Background()

	t.Run("DeduplicateSameType", func(t *testing.T) {
		// Clean database for this subtest
		cleanTestDatabase()

		reporter := generateTestKeypair(t, "reporter")
		reported := generateTestKeypair(t, "reported")
		reporterPubkey := hex.Enc(reporter.pubkey[:])
		reportedPubkey := hex.Enc(reported.pubkey[:])

		// Create first report (older timestamp)
		ev1 := event.New()
		ev1.Pubkey = reporter.pubkey
		ev1.CreatedAt = 1000
		ev1.Kind = 1984
		ev1.Tags = tag.NewS(
			tag.NewFromAny("p", reportedPubkey, "impersonation"),
		)
		ev1.Content = []byte("First report")
		if err := ev1.Sign(reporter.signer); err != nil {
			t.Fatalf("Failed to sign first event: %v", err)
		}
		if _, err := testDB.SaveEvent(ctx, ev1); err != nil {
			t.Fatalf("Failed to save first report: %v", err)
		}

		// Create second report (newer timestamp, same type)
		ev2 := event.New()
		ev2.Pubkey = reporter.pubkey
		ev2.CreatedAt = 2000 // Newer timestamp
		ev2.Kind = 1984
		ev2.Tags = tag.NewS(
			tag.NewFromAny("p", reportedPubkey, "impersonation"),
		)
		ev2.Content = []byte("Second report")
		if err := ev2.Sign(reporter.signer); err != nil {
			t.Fatalf("Failed to sign second event: %v", err)
		}
		if _, err := testDB.SaveEvent(ctx, ev2); err != nil {
			t.Fatalf("Failed to save second report: %v", err)
		}

		// Verify only ONE REPORTS relationship exists
		cypher := `
			MATCH (r:NostrUser {pubkey: $reporter})-[rel:REPORTS]->(d:NostrUser {pubkey: $reported})
			RETURN count(rel) AS count, rel.created_at AS created_at, rel.created_by_event AS event_id
		`
		params := map[string]any{
			"reporter": reporterPubkey,
			"reported": reportedPubkey,
		}
		result, err := testDB.ExecuteRead(ctx, cypher, params)
		if err != nil {
			t.Fatalf("Failed to query REPORTS: %v", err)
		}
		if !result.Next(ctx) {
			t.Fatal("No REPORTS relationship found")
		}
		record := result.Record()
		count := record.Values[0].(int64)
		createdAt := record.Values[1].(int64)
		eventID := record.Values[2].(string)
		if count != 1 {
			t.Errorf("Expected 1 REPORTS relationship, got %d", count)
		}
		// Verify the relationship has the newer event's data
		if createdAt != 2000 {
			t.Errorf("Expected created_at=2000 (newer), got %d", createdAt)
		}
		ev2ID := hex.Enc(ev2.ID[:])
		if eventID != ev2ID {
			t.Errorf("Expected event_id=%s, got %s", ev2ID, eventID)
		}
		t.Log("✓ Duplicate reports correctly deduplicated to single relationship with newest data")
	})
	t.Run("DifferentTypesAllowed", func(t *testing.T) {
		// Clean database for this subtest
		cleanTestDatabase()

		reporter := generateTestKeypair(t, "reporter2")
		reported := generateTestKeypair(t, "reported2")
		reporterPubkey := hex.Enc(reporter.pubkey[:])
		reportedPubkey := hex.Enc(reported.pubkey[:])

		// Report for impersonation
		ev1 := event.New()
		ev1.Pubkey = reporter.pubkey
		ev1.CreatedAt = 1000
		ev1.Kind = 1984
		ev1.Tags = tag.NewS(
			tag.NewFromAny("p", reportedPubkey, "impersonation"),
		)
		if err := ev1.Sign(reporter.signer); err != nil {
			t.Fatalf("Failed to sign event: %v", err)
		}
		if _, err := testDB.SaveEvent(ctx, ev1); err != nil {
			t.Fatalf("Failed to save report: %v", err)
		}

		// Report for spam (different type)
		ev2 := event.New()
		ev2.Pubkey = reporter.pubkey
		ev2.CreatedAt = 2000
		ev2.Kind = 1984
		ev2.Tags = tag.NewS(
			tag.NewFromAny("p", reportedPubkey, "spam"),
		)
		if err := ev2.Sign(reporter.signer); err != nil {
			t.Fatalf("Failed to sign event: %v", err)
		}
		if _, err := testDB.SaveEvent(ctx, ev2); err != nil {
			t.Fatalf("Failed to save report: %v", err)
		}

		// Verify TWO REPORTS relationships exist (different types)
		cypher := `
			MATCH (r:NostrUser {pubkey: $reporter})-[rel:REPORTS]->(d:NostrUser {pubkey: $reported})
			RETURN rel.report_type AS type ORDER BY type
		`
		params := map[string]any{
			"reporter": reporterPubkey,
			"reported": reportedPubkey,
		}
		result, err := testDB.ExecuteRead(ctx, cypher, params)
		if err != nil {
			t.Fatalf("Failed to query REPORTS: %v", err)
		}
		var types []string
		for result.Next(ctx) {
			types = append(types, result.Record().Values[0].(string))
		}
		if len(types) != 2 {
			t.Errorf("Expected 2 REPORTS relationships, got %d", len(types))
		}
		if len(types) >= 2 && (types[0] != "impersonation" || types[1] != "spam") {
			t.Errorf("Expected [impersonation, spam], got %v", types)
		}
		t.Log("✓ Different report types correctly create separate relationships")
	})
	t.Run("SupersededEventTracking", func(t *testing.T) {
		// Clean database for this subtest
		cleanTestDatabase()

		reporter := generateTestKeypair(t, "reporter3")
		reported := generateTestKeypair(t, "reported3")
		reporterPubkey := hex.Enc(reporter.pubkey[:])
		reportedPubkey := hex.Enc(reported.pubkey[:])

		// Create first report
		ev1 := event.New()
		ev1.Pubkey = reporter.pubkey
		ev1.CreatedAt = 1000
		ev1.Kind = 1984
		ev1.Tags = tag.NewS(
			tag.NewFromAny("p", reportedPubkey, "spam"),
		)
		if err := ev1.Sign(reporter.signer); err != nil {
			t.Fatalf("Failed to sign first event: %v", err)
		}
		if _, err := testDB.SaveEvent(ctx, ev1); err != nil {
			t.Fatalf("Failed to save first report: %v", err)
		}
		ev1ID := hex.Enc(ev1.ID[:])

		// Create second report (supersedes first)
		ev2 := event.New()
		ev2.Pubkey = reporter.pubkey
		ev2.CreatedAt = 2000
		ev2.Kind = 1984
		ev2.Tags = tag.NewS(
			tag.NewFromAny("p", reportedPubkey, "spam"),
		)
		if err := ev2.Sign(reporter.signer); err != nil {
			t.Fatalf("Failed to sign second event: %v", err)
		}
		if _, err := testDB.SaveEvent(ctx, ev2); err != nil {
			t.Fatalf("Failed to save second report: %v", err)
		}
		ev2ID := hex.Enc(ev2.ID[:])

		// Verify first ProcessedSocialEvent is superseded
		cypher := `
			MATCH (evt:ProcessedSocialEvent {event_id: $event_id, event_kind: 1984})
			RETURN evt.superseded_by AS superseded_by
		`
		params := map[string]any{"event_id": ev1ID}
		result, err := testDB.ExecuteRead(ctx, cypher, params)
		if err != nil {
			t.Fatalf("Failed to query ProcessedSocialEvent: %v", err)
		}
		if !result.Next(ctx) {
			t.Fatal("First ProcessedSocialEvent not found")
		}
		supersededBy := result.Record().Values[0]
		if supersededBy == nil {
			t.Error("Expected first event to be superseded, but superseded_by is null")
		} else if supersededBy.(string) != ev2ID {
			t.Errorf("Expected superseded_by=%s, got %v", ev2ID, supersededBy)
		}

		// Verify second ProcessedSocialEvent is NOT superseded
		params = map[string]any{"event_id": ev2ID}
		result, err = testDB.ExecuteRead(ctx, cypher, params)
		if err != nil {
			t.Fatalf("Failed to query second ProcessedSocialEvent: %v", err)
		}
		if !result.Next(ctx) {
			t.Fatal("Second ProcessedSocialEvent not found")
		}
		supersededBy = result.Record().Values[0]
		if supersededBy != nil {
			t.Errorf("Expected second event not to be superseded, but superseded_by=%v", supersededBy)
		}
		t.Log("✓ ProcessedSocialEvent correctly tracks superseded events")
	})
}
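The invariant these tests exercise, at most one REPORTS relationship per (reporter, reported, report_type) triple, behaves like a map keyed on that triple. A minimal sketch with hypothetical names; the ON MATCH timestamp comparison from the Cypher is omitted here for brevity, so the last write simply wins:

```go
package main

import "fmt"

// reportKey is the deduplication key: at most one REPORTS relationship
// may exist per (reporter, reported, report_type) combination.
type reportKey struct {
	Reporter, Reported, ReportType string
}

// addReport mimics MERGE semantics on that key: repeating the same triple
// overwrites the stored event ID instead of creating a second entry.
func addReport(m map[reportKey]string, reporter, reported, typ, eventID string) {
	m[reportKey{reporter, reported, typ}] = eventID
}

func main() {
	m := map[reportKey]string{}
	addReport(m, "alice", "bob", "impersonation", "ev1")
	addReport(m, "alice", "bob", "impersonation", "ev2") // duplicate triple: collapses
	addReport(m, "alice", "bob", "spam", "ev3")          // different type: separate entry
	fmt.Println(len(m)) // 2
}
```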

pkg/neo4j/tag_model_test.go (new file, 1105 lines; diff suppressed because it is too large)


@@ -96,7 +96,7 @@ func TestBugReproduction_WithPolicyManager(t *testing.T) {
	// Create policy with manager (enabled)
	ctx := context.Background()
	policy := NewWithManager(ctx, "ORLY", true, "")
	// Load policy from file
	if err := policy.LoadFromFile(policyPath); err != nil {


@@ -31,7 +31,7 @@ func setupTestPolicy(t *testing.T, appName string) (*P, func()) {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	policy := NewWithManager(ctx, appName, true, "")
	if policy == nil {
		cancel()
		os.RemoveAll(configDir)


@@ -29,7 +29,7 @@ func setupHotreloadTestPolicy(t *testing.T, appName string) (*P, func()) {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	policy := NewWithManager(ctx, appName, true, "")
	if policy == nil {
		cancel()
		os.RemoveAll(configDir)


@@ -514,12 +514,19 @@ type PolicyManager struct {
	ctx        context.Context
	cancel     context.CancelFunc
	configDir  string
	configPath string // Path to policy.json file
	scriptPath string // Default script path for backward compatibility
	enabled    bool
	mutex      sync.RWMutex
	runners    map[string]*ScriptRunner // Map of script path -> runner
}

// ConfigPath returns the path to the policy configuration file.
// This is used by hot-reload handlers to know where to save updated policy.
func (pm *PolicyManager) ConfigPath() string {
	return pm.configPath
}

// P represents a complete policy configuration for a Nostr relay.
// It defines access control rules, kind filtering, and default behavior.
// Policies are evaluated in order: global rules, kind filtering, specific rules, then default policy.
@@ -695,6 +702,15 @@ func (p *P) IsEnabled() bool {
	return p != nil && p.manager != nil && p.manager.IsEnabled()
}

// ConfigPath returns the path to the policy configuration file.
// Delegates to the internal PolicyManager.
func (p *P) ConfigPath() string {
	if p == nil || p.manager == nil {
		return ""
	}
	return p.manager.ConfigPath()
}

// getDefaultPolicyAction returns true if the default policy is "allow", false if "deny"
func (p *P) getDefaultPolicyAction() (allowed bool) {
	switch p.DefaultPolicy {
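The nil guards in `ConfigPath` follow Go's nil-receiver-safe accessor idiom: a method on a pointer receiver may be called on a nil pointer as long as it never dereferences it. A standalone sketch, with stand-in types rather than the real policy types:

```go
package main

import "fmt"

type manager struct{ configPath string }

// P here is a stand-in for the policy type; only the nil-safety pattern matters.
type P struct{ manager *manager }

// ConfigPath is nil-safe: callers holding a nil *P (e.g. policy disabled) or a
// P without a manager can still call it without panicking.
func (p *P) ConfigPath() string {
	if p == nil || p.manager == nil {
		return ""
	}
	return p.manager.configPath
}

func main() {
	var p *P
	fmt.Println(p.ConfigPath() == "") // true: method on nil receiver is safe
	p = &P{manager: &manager{configPath: "/etc/orly/policy.json"}}
	fmt.Println(p.ConfigPath())
}
```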
@@ -711,10 +727,29 @@ func (p *P) getDefaultPolicyAction() (allowed bool) {
// NewWithManager creates a new policy with a policy manager for script execution.
// It initializes the policy manager, loads configuration from files, and starts
// background processes for script management and periodic health checks.
//
// The customPolicyPath parameter allows overriding the default policy file location.
// If empty, uses the default path: $HOME/.config/{appName}/policy.json
// If provided, it MUST be an absolute path (starting with /) or the function will panic.
func NewWithManager(ctx context.Context, appName string, enabled bool, customPolicyPath string) *P {
	configDir := filepath.Join(xdg.ConfigHome, appName)
	scriptPath := filepath.Join(configDir, "policy.sh")

	// Determine the policy config path
	var configPath string
	if customPolicyPath != "" {
		// Validate that custom path is absolute
		if !filepath.IsAbs(customPolicyPath) {
			panic(fmt.Sprintf("FATAL: ORLY_POLICY_PATH must be an ABSOLUTE path (starting with /), got: %q", customPolicyPath))
		}
		configPath = customPolicyPath
		// Update configDir to match the custom path's directory for script resolution
		configDir = filepath.Dir(customPolicyPath)
		scriptPath = filepath.Join(configDir, "policy.sh")
		log.I.F("using custom policy path: %s", configPath)
	} else {
		configPath = filepath.Join(configDir, "policy.json")
	}

	ctx, cancel := context.WithCancel(ctx)
@@ -722,6 +757,7 @@ func NewWithManager(ctx context.Context, appName string, enabled bool) *P {
		ctx:        ctx,
		cancel:     cancel,
		configDir:  configDir,
		configPath: configPath,
		scriptPath: scriptPath,
		enabled:    enabled,
		runners:    make(map[string]*ScriptRunner),
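The branching added to `NewWithManager` can be isolated as a pure path-resolution step. The sketch below is a hypothetical helper under the assumption of Unix-style paths; unlike the original, it returns an error where `NewWithManager` panics:

```go
package main

import (
	"fmt"
	"path/filepath"
)

// resolvePolicyPaths mirrors the branch added to NewWithManager: an empty
// custom path falls back to <configDir>/policy.json, while a non-empty one
// must be absolute and also relocates the script path next to it.
func resolvePolicyPaths(defaultConfigDir, customPolicyPath string) (configPath, scriptPath string, err error) {
	if customPolicyPath == "" {
		return filepath.Join(defaultConfigDir, "policy.json"),
			filepath.Join(defaultConfigDir, "policy.sh"), nil
	}
	if !filepath.IsAbs(customPolicyPath) {
		return "", "", fmt.Errorf("policy path must be absolute, got %q", customPolicyPath)
	}
	dir := filepath.Dir(customPolicyPath)
	return customPolicyPath, filepath.Join(dir, "policy.sh"), nil
}

func main() {
	cfg, script, _ := resolvePolicyPaths("/home/u/.config/ORLY", "")
	fmt.Println(cfg, script)
	cfg, script, _ = resolvePolicyPaths("/home/u/.config/ORLY", "/etc/orly/policy.json")
	fmt.Println(cfg, script)
	_, _, err := resolvePolicyPaths("/home/u/.config/ORLY", "relative/policy.json")
	fmt.Println(err != nil) // true: relative paths are rejected
}
```

Relocating `policy.sh` alongside a custom `policy.json` keeps script resolution consistent with wherever the operator chose to store the policy.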


@@ -825,7 +825,7 @@ func TestNewWithManager(t *testing.T) {
	// Test with disabled policy (doesn't require policy.json file)
	t.Run("disabled policy", func(t *testing.T) {
		enabled := false
		policy := NewWithManager(ctx, appName, enabled, "")
		if policy == nil {
			t.Fatal("Expected policy but got nil")


@@ -31,7 +31,7 @@ func setupTagValidationTestPolicy(t *testing.T, appName string) (*P, func()) {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	policy := NewWithManager(ctx, appName, true, "")
	if policy == nil {
		cancel()
		os.RemoveAll(configDir)


@@ -69,8 +69,11 @@ func (c *NIP11Cache) Get(ctx context.Context, relayURL string) (*relayinfo.T, error) {
// fetchNIP11 fetches relay information document from a given URL
func (c *NIP11Cache) fetchNIP11(ctx context.Context, relayURL string) (*relayinfo.T, error) {
	// Convert WebSocket URL to HTTP URL for NIP-11 fetch
	// wss:// -> https://, ws:// -> http://
	nip11URL := relayURL
	nip11URL = strings.Replace(nip11URL, "wss://", "https://", 1)
	nip11URL = strings.Replace(nip11URL, "ws://", "http://", 1)
	if !strings.HasSuffix(nip11URL, "/") {
		nip11URL += "/"
	}
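The scheme rewrite is a one-shot string replacement plus trailing-slash normalization. A self-contained sketch of the same logic (the `nip11URL` helper name is illustrative, not from the codebase):

```go
package main

import (
	"fmt"
	"strings"
)

// nip11URL converts a relay's WebSocket URL to the HTTP(S) URL used to fetch
// the NIP-11 relay information document, and ensures a trailing slash.
// wss:// maps to https://, ws:// maps to http://; each replaced at most once.
func nip11URL(relayURL string) string {
	u := strings.Replace(relayURL, "wss://", "https://", 1)
	u = strings.Replace(u, "ws://", "http://", 1)
	if !strings.HasSuffix(u, "/") {
		u += "/"
	}
	return u
}

func main() {
	fmt.Println(nip11URL("wss://relay.example.com")) // https://relay.example.com/
	fmt.Println(nip11URL("ws://localhost:8080/"))    // http://localhost:8080/
}
```

Replacing `wss://` before `ws://` is what keeps a secure URL from being double-rewritten, since `https://` no longer contains the `ws://` substring.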


@@ -1 +1 @@
v0.36.1