Dgraph Database Implementation for ORLY
This package provides a Dgraph-based implementation of the ORLY database interface, enabling graph-based storage for Nostr events with powerful relationship querying capabilities.
Status: Step 1 Complete ✅
Current State: Dgraph server integration is complete and functional.
Next Step: DQL query/mutation implementation in save-event.go and query-events.go.
Architecture
Client-Server Model
The implementation uses a client-server architecture:
┌─────────────────────────────────────────────┐
│ ORLY Relay Process │
│ │
│ ┌────────────────────────────────────┐ │
│ │ Dgraph Client (pkg/dgraph) │ │
│ │ - dgo library (gRPC) │ │
│ │ - Schema management │────┼───► Dgraph Server
│ │ - Query/Mutate methods │ │ (localhost:9080)
│ └────────────────────────────────────┘ │ - Event graph
│ │ - Authors, tags
│ ┌────────────────────────────────────┐ │ - Relationships
│ │ Badger Metadata Store │ │
│ │ - Markers (key-value) │ │
│ │ - Serial counters │ │
│ │ - Relay identity │ │
│ └────────────────────────────────────┘ │
└─────────────────────────────────────────────┘
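The client side of this wiring is a small amount of dgo setup. A minimal, self-contained sketch, assuming dgo v230 (the actual connection code lives in pkg/dgraph/dgraph.go and may differ):

package main

import (
    "context"
    "log"

    "github.com/dgraph-io/dgo/v230"
    "github.com/dgraph-io/dgo/v230/protos/api"
    "google.golang.org/grpc"
    "google.golang.org/grpc/credentials/insecure"
)

func main() {
    // Open a gRPC connection to the external dgraph alpha (default port 9080).
    conn, err := grpc.Dial("localhost:9080",
        grpc.WithTransportCredentials(insecure.NewCredentials()))
    if err != nil {
        log.Fatalf("failed to connect to dgraph: %v", err)
    }
    defer conn.Close()

    // Wrap the connection in a dgo client; Alter/Query/Mutate all go through it.
    dg := dgo.NewDgraphClient(api.NewDgraphClient(conn))

    // Sanity check: a trivial read-only query.
    resp, err := dg.NewReadOnlyTxn().Query(context.Background(),
        `{ q(func: has(event.id), first: 1) { uid } }`)
    if err != nil {
        log.Fatalf("query failed: %v", err)
    }
    log.Printf("dgraph responded: %s", resp.Json)
}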
Dual Storage Strategy
- Dgraph (Graph Database)
  - Nostr events and their content
  - Author relationships
  - Tag relationships
  - Event references and mentions
  - Optimized for graph traversals and complex queries
- Badger (Key-Value Store)
  - Metadata markers
  - Serial number counters
  - Relay identity keys
  - Fast key-value operations
Setup
1. Start Dgraph Server
Using Docker (recommended):
docker run -d \
--name dgraph \
-p 8080:8080 \
-p 9080:9080 \
-p 8000:8000 \
-v ~/dgraph:/dgraph \
dgraph/standalone:latest
2. Configure ORLY
export ORLY_DB_TYPE=dgraph
export ORLY_DGRAPH_URL=localhost:9080 # Optional, this is the default
3. Run ORLY
./orly
On startup, ORLY will:
- Connect to dgraph server via gRPC
- Apply the Nostr schema automatically
- Initialize badger metadata store
- Initialize serial number counter
- Start accepting events
Schema
The Nostr schema defines the following types:
Event Nodes
type Event {
event.id # Event ID (string, indexed)
event.serial # Sequential number (int, indexed)
event.kind # Event kind (int, indexed)
event.created_at # Timestamp (int, indexed)
event.content # Event content (string)
event.sig # Signature (string, indexed)
event.pubkey # Author pubkey (string, indexed)
event.authored_by # -> Author (uid)
event.references # -> Events (uid list)
event.mentions # -> Events (uid list)
event.tagged_with # -> Tags (uid list)
}
Author Nodes
type Author {
author.pubkey # Pubkey (string, indexed, unique)
author.events # -> Events (uid list, reverse)
}
Tag Nodes
type Tag {
tag.type # Tag type (string, indexed)
tag.value # Tag value (string, indexed + fulltext)
tag.events # -> Events (uid list, reverse)
}
Marker Nodes (Metadata)
type Marker {
marker.key # Key (string, indexed, unique)
marker.value # Value (string)
}
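The type blocks above are a summary; in Dgraph, indexes are declared on the predicates themselves and the whole schema is applied in one Alter call. A sketch of what those declarations might look like (the authoritative schema string and exact tokenizers live in schema.go):

// Illustrative fragment only; schema.go holds the full, authoritative schema.
const schemaFragment = `
    event.id:          string @index(exact) .
    event.kind:        int    @index(int) .
    event.created_at:  int    @index(int) .
    event.pubkey:      string @index(exact) .
    event.authored_by: uid    @reverse .
    event.tagged_with: [uid]  @reverse .
    author.pubkey:     string @index(exact) @upsert .
    tag.value:         string @index(exact, fulltext) .
    marker.key:        string @index(exact) @upsert .
`

// Applied via dgo (same client as above): one Alter with an api.Operation.
err := dg.Alter(ctx, &api.Operation{Schema: schemaFragment})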
Configuration
Environment Variables
- ORLY_DB_TYPE=dgraph - Enable the dgraph database (default: badger)
- ORLY_DGRAPH_URL=host:port - Dgraph gRPC endpoint (default: localhost:9080)
- ORLY_DATA_DIR=/path - Data directory for metadata storage
Connection Details
The dgraph client uses insecure gRPC by default for local development. For production deployments:
- Set up TLS certificates for dgraph
- Modify pkg/dgraph/dgraph.go to use grpc.WithTransportCredentials() with your certs
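A hedged sketch of that change (paths and hostnames are placeholders, not part of the current implementation):

// Load the CA that signed the dgraph server certificate and dial with TLS
// instead of insecure credentials.
creds, err := credentials.NewClientTLSFromFile("/etc/dgraph/tls/ca.crt", "dgraph.example.com")
if err != nil {
    return err
}
conn, err := grpc.Dial("dgraph.example.com:9080", grpc.WithTransportCredentials(creds))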
Implementation Details
Files
- dgraph.go - Main implementation, initialization, lifecycle
- schema.go - Schema definition and application
- save-event.go - Event storage (TODO: update to use Mutate)
- query-events.go - Event queries (TODO: update to parse DQL responses)
- fetch-event.go - Event retrieval methods
- delete.go - Event deletion
- markers.go - Key-value metadata storage (uses badger)
- serial.go - Serial number generation (uses badger)
- subscriptions.go - Subscription/payment tracking (uses markers)
- nip43.go - NIP-43 invite system (uses markers)
- import-export.go - Import/export operations
- logger.go - Logging adapter
Key Methods
Initialization
d, err := dgraph.New(ctx, cancel, dataDir, logLevel)
Querying (DQL)
resp, err := d.Query(ctx, dqlQuery)
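An illustrative query against the schema above (the real query construction happens in query-events.go):

// Fetch the 20 newest kind-1 events; resp.Json holds the JSON result,
// keyed by the "events" block name (assuming Query returns the dgo response).
dqlQuery := `{
  events(func: eq(event.kind, 1), orderdesc: event.created_at, first: 20) {
    event.id
    event.created_at
    event.content
    event.pubkey
  }
}`
resp, err := d.Query(ctx, dqlQuery)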
Mutations (RDF N-Quads)
mutation := &api.Mutation{SetNquads: []byte(nquads)}
resp, err := d.Mutate(ctx, mutation)
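The N-Quads for a single event node might look like the following (illustrative; save-event.go builds the real triples, including the author and tag edges):

// Placeholder values; the blank node _:ev becomes the event's uid on commit.
nquads := `
  _:ev <dgraph.type>      "Event" .
  _:ev <event.id>         "<event id hex>" .
  _:ev <event.kind>       "1" .
  _:ev <event.created_at> "1700000000" .
  _:ev <event.content>    "hello nostr" .
  _:ev <event.pubkey>     "<author pubkey hex>" .
`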
Development Status
✅ Step 1: Dgraph Server Integration (COMPLETE)
- dgo client library integration
- gRPC connection to external dgraph
- Schema definition and auto-application
- Query() and Mutate() method stubs
- ORLY_DGRAPH_URL configuration
- Dual-storage architecture
- Proper lifecycle management
📝 Step 2: DQL Implementation (NEXT)
Priority tasks:
- save-event.go - Replace RDF string building with actual Mutate() calls
- query-events.go - Parse actual JSON responses from Query()
- fetch-event.go - Implement DQL queries for event retrieval
- delete.go - Implement deletion mutations
📝 Step 3: Testing (FUTURE)
- Integration testing with relay-tester
- Performance benchmarks vs badger
- Memory profiling
- Production deployment testing
Troubleshooting
Connection Refused
failed to connect to dgraph at localhost:9080: connection refused
Solution: Ensure dgraph server is running:
docker ps | grep dgraph
docker logs dgraph
Schema Application Failed
failed to apply schema: ...
Solution: Check dgraph server logs and ensure no schema conflicts:
docker logs dgraph
Binary Not Finding libsecp256k1.so
This is unrelated to dgraph. Ensure:
export LD_LIBRARY_PATH="${LD_LIBRARY_PATH:+$LD_LIBRARY_PATH:}$(pwd)/pkg/crypto/p8k"
Performance Considerations
When to Use Dgraph
Good fit:
- Complex graph queries (follows-of-follows, social graphs; see the query sketch after these lists)
- Full-text search requirements
- Advanced filtering and aggregations
- Multi-hop relationship traversals
Not ideal for:
- Simple key-value lookups (badger is faster)
- Very high write throughput (badger has lower latency)
- Single-node deployments with simple queries
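To make the first point concrete, here is a hedged sketch of a contact-list traversal over the schema above; it assumes kind-3 contact lists with their p tags stored as Tag nodes, and the same nesting extends to further hops for follows-of-follows queries:

// Who does this pubkey follow? Walk author -> kind-3 events -> "p" tags.
dqlQuery := `{
  follows(func: eq(author.pubkey, "<pubkey hex>")) {
    author.events @filter(eq(event.kind, 3)) {
      event.tagged_with @filter(eq(tag.type, "p")) {
        tag.value
      }
    }
  }
}`
resp, err := d.Query(ctx, dqlQuery)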
Optimization Tips
- Indexing: Ensure frequently queried fields have appropriate indexes
- Pagination: Use first/offset in DQL queries for large result sets (see the sketch after this list)
- Caching: Consider adding an LRU cache for hot events
- Schema Design: Use reverse edges for efficient relationship traversal
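For the pagination tip above, DQL's first/offset arguments look like this (illustrative):

// Third page of results: 100 events per page, skipping the first 200.
// For deep pagination, uid-based "after:" cursors are usually cheaper.
dqlQuery := `{
  page(func: eq(event.kind, 1), orderdesc: event.created_at, first: 100, offset: 200) {
    event.id
    event.created_at
  }
}`
resp, err := d.Query(ctx, dqlQuery)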
Resources
- Dgraph documentation: https://dgraph.io/docs
- dgo (Go client for Dgraph): https://github.com/dgraph-io/dgo
Contributing
When working on dgraph implementation:
- Test changes against a local dgraph instance
- Update schema.go if adding new node types or predicates
- Ensure dual-storage strategy is maintained (dgraph for events, badger for metadata)
- Add integration tests for new features
- Update DGRAPH_IMPLEMENTATION_STATUS.md with progress