Compare commits
22 Commits
| Author | SHA1 | Date |
|---|---|---|
| | 3314a2a892 | |
| | 7c14c72e9d | |
| | dbdc5d703e | |
| | c1acf0deaa | |
| | ccffeb902c | |
| | 35201490a0 | |
| | 3afd6131d5 | |
| | 386878fec8 | |
| | 474e16c315 | |
| | 47e94c5ff6 | |
| | c62fdc96d5 | |
| | 4c66eda10e | |
| | 9fdef77e02 | |
| | e8a69077b3 | |
| | 128bc60726 | |
| | 6c6f9e8874 | |
| | 01131f252e | |
| | 02333b74ae | |
| | 86ac7b7897 | |
| | 7e6adf9fba | |
| | 7d5ebd5ccd | |
| | f8a321eaee | |
@@ -96,4 +96,4 @@ log statements to help locate the cause of bugs
 always use Go v1.25.1 for everything involving Go
-always use the nips repository that is available at /nips in the root of the repository for documentation about nostr protocol
+always use the nips repository also for information, found at ../github.com/nostr-protocol/nips attached to the project
@@ -13,6 +13,8 @@ cmd/benchmark/reports/
 # Go build cache and binaries
 **/bin/
 **/dist/
 **/build/
 **/*.out

+# Allow web dist directory (needed for embedding)
+!app/web/dist/
@@ -1,7 +1,7 @@
 # Apache Reverse Proxy Guide for Docker Apps

 **Complete guide for WebSocket-enabled applications - covers both Plesk and Standard Apache**
-**Updated with real-world troubleshooting solutions**
+**Updated with real-world troubleshooting solutions and latest Orly relay improvements**

 ## 🎯 **What This Solves**
 - WebSocket connection failures (`NS_ERROR_WEBSOCKET_CONNECTION_REFUSED`)
@@ -9,24 +9,33 @@
 - Docker container proxy configuration
 - SSL certificate integration
 - Plesk configuration conflicts and virtual host precedence issues
+- **NEW**: WebSocket scheme validation errors (`expected 'ws' got 'wss'`)
+- **NEW**: Proxy-friendly relay configuration with enhanced CORS headers
+- **NEW**: Improved error handling for malformed client data

 ## 🐳 **Step 1: Deploy Your Docker Application**

-### **For Stella's Orly Relay:**
+### **For Stella's Orly Relay (Latest Version with Proxy Improvements):**
 ```bash
-# Pull and run the relay
+# Pull and run the relay with enhanced proxy support
 docker run -d \
-  --name stella-relay \
+  --name orly-relay \
   --restart unless-stopped \
   -p 127.0.0.1:7777:7777 \
   -v /data/orly-relay:/data \
   -e ORLY_OWNERS=npub1v30tsz9vw6ylpz63g0a702nj3xa26t3m7p5us8f2y2sd8v6cnsvq465zjx \
-  -e ORLY_ADMINS=npub1v30tsz9vw6ylpz63g0a702nj3xa26t3m7p5us8f2y2sd8v6cnsvq465zjx,npub1l5sga6xg72phsz5422ykujprejwud075ggrr3z2hwyrfgr7eylqstegx9z \
-  silberengel/orly-relay:latest
+  -e ORLY_ADMINS=npub1v30tsz9vw6ylpz63g0a702nj3xa26t3m7p5us8f2y2sd8v6cnsvq465zjx,npub1l5sga6xg72phsz5422ykujprejwud075ggrr3z2hwyrfgr7eylqstegx9z,npub1m4ny6hjqzepn4rxknuq94c2gpqzr29ufkkw7ttcxyak7v43n6vvsajc2jl \
+  -e ORLY_BOOTSTRAP_RELAYS=wss://profiles.nostr1.com,wss://purplepag.es,wss://relay.nostr.band,wss://relay.damus.io \
+  -e ORLY_RELAY_URL=wss://orly-relay.imwald.eu \
+  -e ORLY_ACL_MODE=follows \
+  -e ORLY_SPIDER_MODE=follows \
+  -e ORLY_SPIDER_FREQUENCY=1h \
+  -e ORLY_SUBSCRIPTION_ENABLED=false \
+  silberengel/next-orly:latest

 # Test the relay
 curl -I http://127.0.0.1:7777
-# Should return: HTTP/1.1 426 Upgrade Required
+# Should return: HTTP/1.1 200 OK with enhanced CORS headers
 ```
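If you would rather verify from code than from curl, here is a minimal Go sketch that performs the same check; it assumes the relay is reachable on 127.0.0.1:7777 as configured above and only inspects a few of the headers the guide mentions:

```go
// Sketch: HEAD the relay and print status plus selected CORS headers,
// equivalent to `curl -I http://127.0.0.1:7777`.
package main

import (
	"fmt"
	"net/http"
)

func main() {
	resp, err := http.Head("http://127.0.0.1:7777")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	fmt.Println(resp.Status) // expect "200 OK" on the proxy-friendly build
	for _, h := range []string{
		"Access-Control-Allow-Origin",
		"Access-Control-Allow-Credentials",
		"Access-Control-Max-Age",
	} {
		fmt.Printf("%s: %s\n", h, resp.Header.Get(h))
	}
}
```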

 ### **For Web Apps (like Jumble):**
@@ -253,9 +262,40 @@ sudo a2enmod proxy
 sudo a2enmod proxy_http
 sudo a2enmod proxy_wstunnel
 sudo a2enmod rewrite
 sudo a2enmod headers
 sudo systemctl restart apache2
 ```

+## 🆕 **Step 4: Latest Orly Relay Improvements**
+
+### **Enhanced Proxy Support**
+The latest Orly relay includes several proxy improvements:
+
+1. **Flexible WebSocket Scheme Handling**: Accepts both `ws://` and `wss://` schemes for authentication
+2. **Enhanced CORS Headers**: Better compatibility with web applications
+3. **Improved Error Handling**: More robust handling of malformed client data
+4. **Proxy-Aware Logging**: Better debugging information for proxy setups
+
+### **Key Environment Variables**
+```bash
+# Essential for proxy setups
+ORLY_RELAY_URL=wss://your-domain.com   # Must match your public URL
+ORLY_ACL_MODE=follows                  # Enable follows-based access control
+ORLY_SPIDER_MODE=follows               # Enable content syncing from other relays
+ORLY_SUBSCRIPTION_ENABLED=false        # Disable payment requirements
+```
+
+### **Testing the Enhanced Relay**
+```bash
+# Test local connectivity
+curl -I http://127.0.0.1:7777
+
+# Expected response includes enhanced CORS headers:
+# Access-Control-Allow-Credentials: true
+# Access-Control-Max-Age: 86400
+# Vary: Origin, Access-Control-Request-Method, Access-Control-Request-Headers
+```

 ## ⚡ **Step 4: Alternative - Nginx in Plesk**

 If Apache keeps giving issues, switch to Nginx in Plesk:
@@ -327,13 +367,67 @@ After making changes:
 ```bash
 # Essential debugging
 docker ps | grep relay                   # Container running?
-curl -I http://127.0.0.1:7777            # Local relay (should return 426)
+curl -I http://127.0.0.1:7777            # Local relay (should return 200 with CORS headers)
 apache2ctl -S | grep domain.com          # Virtual host precedence
 grep ProxyPass /etc/apache2/plesk.conf.d/vhosts/domain.conf  # Config applied?

 # WebSocket testing
 echo '["REQ","test",{}]' | websocat wss://domain.com/     # Root path
 echo '["REQ","test",{}]' | websocat wss://domain.com/ws/  # /ws/ path
+
+# Check relay logs for proxy information
+docker logs relay-name | grep -i "proxy info"
+docker logs relay-name | grep -i "websocket connection"
 ```
+## 🚨 **Latest Troubleshooting Solutions**
+
+### **WebSocket Scheme Validation Errors**
+**Problem**: `"HTTP Scheme incorrect: expected 'ws' got 'wss'"`
+
+**Solution**: Use the latest Orly relay image with enhanced proxy support:
+```bash
+# Pull the latest image with proxy improvements
+docker pull silberengel/next-orly:latest
+
+# Restart with the latest image
+docker stop orly-relay && docker rm orly-relay
+# Then run with the configuration above
+```
+
+### **Malformed Client Data Errors**
+**Problem**: `"invalid hex array size, got 2 expect 64"`
+
+**Solution**: These are client-side issues, not server problems. The latest relay handles them gracefully:
+- The relay now sends helpful error messages to clients
+- Malformed requests are logged but don't crash the relay
+- Normal operations continue despite client errors
+
+### **Follows ACL Not Working**
+**Problem**: Only owners can write; admins can't write
+
+**Solution**: Ensure proper configuration:
+```bash
+# Check ACL configuration
+docker exec orly-relay env | grep ACL
+
+# Should show: ORLY_ACL_MODE=follows
+# If not, restart with explicit configuration
+```
+
+### **Spider Not Syncing Content**
+**Problem**: Spider enabled but not pulling events
+
+**Solution**: Check for relay lists and follow events:
+```bash
+# Check spider status
+docker logs orly-relay | grep -i spider
+
+# Look for relay discovery
+docker logs orly-relay | grep -i "relay URLs"
+
+# Check for follow events
+docker logs orly-relay | grep -i "kind.*3"
+```

 ### **Working Solution (Proven):**
@@ -362,3 +456,28 @@ echo '["REQ","test",{}]' | websocat wss://domain.com/ws/  # /ws/ path
 2. Use `ws://` proxy for Nostr relays, not `http://`
 3. Direct Apache config files are more reliable than the Plesk interface
 4. Always check virtual host precedence with `apache2ctl -S`
+5. **NEW**: Use the latest Orly relay image for better proxy compatibility
+6. **NEW**: Enhanced CORS headers improve web app compatibility
+7. **NEW**: Flexible WebSocket scheme handling eliminates authentication errors
+8. **NEW**: Improved error handling makes the relay more robust
+
+## 🎉 **Summary of Latest Improvements**
+
+### **Enhanced Proxy Support**
+- ✅ Flexible WebSocket scheme validation (accepts both `ws://` and `wss://`)
+- ✅ Enhanced CORS headers for better web app compatibility
+- ✅ Improved error handling for malformed client data
+- ✅ Proxy-aware logging for better debugging
+
+### **Spider and ACL Features**
+- ✅ Follows-based access control (`ORLY_ACL_MODE=follows`)
+- ✅ Content syncing from other relays (`ORLY_SPIDER_MODE=follows`)
+- ✅ No payment requirements (`ORLY_SUBSCRIPTION_ENABLED=false`)
+
+### **Production Ready**
+- ✅ Robust error handling
+- ✅ Enhanced logging and debugging
+- ✅ Better client compatibility
+- ✅ Improved proxy support
+
+**The latest Orly relay is now fully optimized for proxy environments and provides a much better user experience!**
@@ -9,7 +9,7 @@
 docker-compose up -d

 # View logs
-docker-compose logs -f stella-relay
+docker-compose logs -f orly-relay

 # Stop the relay
 docker-compose down
@@ -136,7 +136,7 @@ go run ./cmd/stresstest -relay ws://localhost:7777
 ```bash
 # Container debugging
 docker ps | grep relay
-docker logs stella-relay
+docker logs orly-relay
 curl -I http://127.0.0.1:7777  # Should return HTTP 426

 # WebSocket testing
@@ -153,7 +153,7 @@ grep ProxyPass /etc/apache2/plesk.conf.d/vhosts/domain.conf

 ```bash
 # View relay logs
-docker-compose logs -f stella-relay
+docker-compose logs -f orly-relay

 # View nginx logs (if using proxy)
 docker-compose logs -f nginx
@@ -62,7 +62,7 @@ ENV ORLY_PORT=7777
 ENV ORLY_LOG_LEVEL=info
 ENV ORLY_MAX_CONNECTIONS=1000
 ENV ORLY_OWNERS=npub1v30tsz9vw6ylpz63g0a702nj3xa26t3m7p5us8f2y2sd8v6cnsvq465zjx
-ENV ORLY_ADMINS=npub1v30tsz9vw6ylpz63g0a702nj3xa26t3m7p5us8f2y2sd8v6cnsvq465zjx,npub1l5sga6xg72phsz5422ykujprejwud075ggrr3z2hwyrfgr7eylqstegx9z
+ENV ORLY_ADMINS=npub1v30tsz9vw6ylpz63g0a702nj3xa26t3m7p5us8f2y2sd8v6cnsvq465zjx,npub1m4ny6hjqzepn4rxknuq94c2gpqzr29ufkkw7ttcxyak7v43n6vvsajc2jl,npub1l5sga6xg72phsz5422ykujprejwud075ggrr3z2hwyrfgr7eylqstegx9z

 # Health check to ensure relay is responding
 HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
@@ -40,8 +40,9 @@ type C struct {
 	Admins []string `env:"ORLY_ADMINS" usage:"comma-separated list of admin npubs"`
 	Owners []string `env:"ORLY_OWNERS" usage:"comma-separated list of owner npubs, who have full control of the relay for wipe and restart and other functions"`
 	ACLMode string `env:"ORLY_ACL_MODE" usage:"ACL mode: follows,none" default:"none"`
-	SpiderMode string `env:"ORLY_SPIDER_MODE" usage:"spider mode: none,follow" default:"none"`
+	SpiderMode string `env:"ORLY_SPIDER_MODE" usage:"spider mode: none,follows" default:"none"`
 	SpiderFrequency time.Duration `env:"ORLY_SPIDER_FREQUENCY" usage:"spider frequency in seconds" default:"1h"`
 	BootstrapRelays []string `env:"ORLY_BOOTSTRAP_RELAYS" usage:"comma-separated list of bootstrap relay URLs for initial sync"`
 	NWCUri string `env:"ORLY_NWC_URI" usage:"NWC (Nostr Wallet Connect) connection string for Lightning payments"`
 	SubscriptionEnabled bool `env:"ORLY_SUBSCRIPTION_ENABLED" default:"false" usage:"enable subscription-based access control requiring payment for non-directory events"`
 	MonthlyPriceSats int64 `env:"ORLY_MONTHLY_PRICE_SATS" default:"6000" usage:"price in satoshis for one month subscription (default ~$2 USD)"`
@@ -225,15 +226,14 @@ func EnvKV(cfg any) (m KVSlice) {
 		k := t.Field(i).Tag.Get("env")
 		v := reflect.ValueOf(cfg).Field(i).Interface()
 		var val string
-		switch v.(type) {
+		switch v := v.(type) {
 		case string:
-			val = v.(string)
+			val = v
 		case int, bool, time.Duration:
 			val = fmt.Sprint(v)
 		case []string:
-			arr := v.([]string)
-			if len(arr) > 0 {
-				val = strings.Join(arr, ",")
+			if len(v) > 0 {
+				val = strings.Join(v, ",")
 			}
 		}
 		// this can happen with embedded structs
@@ -305,5 +305,4 @@ func PrintHelp(cfg *C, printer io.Writer) {
 	fmt.Fprintf(printer, "\ncurrent configuration:\n\n")
 	PrintEnv(cfg, printer)
 	fmt.Fprintln(printer)
-	return
 }
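The `EnvKV` change above replaces a bare type switch plus repeated type assertions with Go's binding form, `switch v := v.(type)`, which gives each case a `v` of the concrete type. A standalone sketch of the idiom (not code from this repository, just an illustration):

```go
package main

import (
	"fmt"
	"strings"
	"time"
)

// describe mirrors the EnvKV pattern: inside each case the bound v
// already has the concrete type, so no assertion is needed.
func describe(v any) (val string) {
	switch v := v.(type) {
	case string:
		val = v // v is a string here
	case int, bool, time.Duration:
		val = fmt.Sprint(v) // v stays `any` in a multi-type case
	case []string:
		if len(v) > 0 {
			val = strings.Join(v, ",") // v is a []string here
		}
	}
	return
}

func main() {
	fmt.Println(describe("x"))                // x
	fmt.Println(describe(42))                 // 42
	fmt.Println(describe([]string{"a", "b"})) // a,b
}
```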
app/handle-count.go (new file, 78 lines)
@@ -0,0 +1,78 @@
package app

import (
	"context"
	"errors"
	"fmt"
	"time"

	"lol.mleku.dev/chk"
	"lol.mleku.dev/log"
	"next.orly.dev/pkg/acl"
	"next.orly.dev/pkg/encoders/envelopes/authenvelope"
	"next.orly.dev/pkg/encoders/envelopes/countenvelope"
	"next.orly.dev/pkg/utils/normalize"
)

// HandleCount processes a COUNT envelope by parsing the request, verifying
// permissions, invoking the database CountEvents for each provided filter, and
// responding with a COUNT response containing the aggregate count.
func (l *Listener) HandleCount(msg []byte) (err error) {
	log.D.F("HandleCount: START processing from %s", l.remote)

	// Parse the COUNT request
	env := countenvelope.New()
	if _, err = env.Unmarshal(msg); chk.E(err) {
		return normalize.Error.Errorf(err.Error())
	}
	log.D.C(func() string { return fmt.Sprintf("COUNT sub=%s filters=%d", env.Subscription, len(env.Filters)) })

	// If ACL is active, send a challenge (same as REQ path)
	if acl.Registry.Active.Load() != "none" {
		if err = authenvelope.NewChallengeWith(l.challenge.Load()).Write(l); chk.E(err) {
			return
		}
	}

	// Check read permissions
	accessLevel := acl.Registry.GetAccessLevel(l.authedPubkey.Load(), l.remote)
	switch accessLevel {
	case "none":
		return errors.New("auth required: user not authed or has no read access")
	default:
		// allowed to read
	}

	// Use a bounded context for counting
	ctx, cancel := context.WithTimeout(l.ctx, 30*time.Second)
	defer cancel()

	// Aggregate count across all provided filters
	var total int
	var approx bool // database returns false per implementation
	for _, f := range env.Filters {
		if f == nil {
			continue
		}
		var cnt int
		var a bool
		cnt, a, err = l.D.CountEvents(ctx, f)
		if chk.E(err) {
			return
		}
		total += cnt
		approx = approx || a
	}

	// Build and send COUNT response
	var res *countenvelope.Response
	if res, err = countenvelope.NewResponseFrom(env.Subscription, total, approx); chk.E(err) {
		return
	}
	if err = res.Write(l); chk.E(err) {
		return
	}

	log.D.F("HandleCount: COMPLETED processing from %s count=%d approx=%v", l.remote, total, approx)
	return nil
}
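For context, the wire exchange this handler implements is NIP-45: the client sends `["COUNT", <subscription id>, <filter>...]` and the relay replies `["COUNT", <subscription id>, {"count": <n>}]`. Below is a minimal client sketch using github.com/coder/websocket, the library this module already depends on; note that with an ACL mode active the relay sends an AUTH challenge first, which this sketch prints but does not answer:

```go
package main

import (
	"context"
	"fmt"
	"strings"
	"time"

	"github.com/coder/websocket"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	conn, _, err := websocket.Dial(ctx, "ws://127.0.0.1:7777", nil)
	if err != nil {
		panic(err)
	}
	defer conn.Close(websocket.StatusNormalClosure, "")

	// NIP-45 COUNT request: count kind-1 notes matching the filter.
	req := []byte(`["COUNT","count-test",{"kinds":[1]}]`)
	if err := conn.Write(ctx, websocket.MessageText, req); err != nil {
		panic(err)
	}

	// Read replies until the COUNT response arrives (an AUTH challenge
	// may be delivered first when an ACL mode is enabled).
	for {
		_, msg, err := conn.Read(ctx)
		if err != nil {
			panic(err)
		}
		fmt.Println(string(msg))
		if strings.HasPrefix(string(msg), `["COUNT"`) {
			break
		}
	}
}
```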
@@ -8,6 +8,7 @@ import (
 	"next.orly.dev/pkg/encoders/envelopes"
 	"next.orly.dev/pkg/encoders/envelopes/authenvelope"
 	"next.orly.dev/pkg/encoders/envelopes/closeenvelope"
+	"next.orly.dev/pkg/encoders/envelopes/countenvelope"
 	"next.orly.dev/pkg/encoders/envelopes/eventenvelope"
 	"next.orly.dev/pkg/encoders/envelopes/noticeenvelope"
 	"next.orly.dev/pkg/encoders/envelopes/reqenvelope"
@@ -55,6 +56,9 @@ func (l *Listener) HandleMessage(msg []byte, remote string) {
 	case authenvelope.L:
 		log.D.F("%s processing AUTH envelope", remote)
 		err = l.HandleAuth(rem)
+	case countenvelope.L:
+		log.D.F("%s processing COUNT envelope", remote)
+		err = l.HandleCount(rem)
 	default:
 		err = fmt.Errorf("unknown envelope type %s", t)
 		log.E.F("%s unknown envelope type: %s (payload: %q)", remote, t, string(rem))
@@ -40,12 +40,14 @@ func (s *Server) HandleRelayInfo(w http.ResponseWriter, r *http.Request) {
 		relayinfo.RelayInformationDocument,
 		relayinfo.GenericTagQueries,
 		// relayinfo.NostrMarketplace,
+		relayinfo.CountingResults,
 		relayinfo.EventTreatment,
 		relayinfo.CommandResults,
 		relayinfo.ParameterizedReplaceableEvents,
 		relayinfo.ExpirationTimestamp,
 		relayinfo.ProtectedEvents,
 		relayinfo.RelayListMetadata,
+		relayinfo.SearchCapability,
 	)
 	if s.Config.ACLMode != "none" {
 		supportedNIPs = relayinfo.GetList(
@@ -56,16 +58,18 @@ func (s *Server) HandleRelayInfo(w http.ResponseWriter, r *http.Request) {
 			relayinfo.RelayInformationDocument,
 			relayinfo.GenericTagQueries,
 			// relayinfo.NostrMarketplace,
+			relayinfo.CountingResults,
 			relayinfo.EventTreatment,
 			relayinfo.CommandResults,
 			relayinfo.ParameterizedReplaceableEvents,
 			relayinfo.ExpirationTimestamp,
 			relayinfo.ProtectedEvents,
 			relayinfo.RelayListMetadata,
+			relayinfo.SearchCapability,
 		)
 	}
 	sort.Sort(supportedNIPs)
-	log.T.Ln("supported NIPs", supportedNIPs)
+	log.I.Ln("supported NIPs", supportedNIPs)
 	// Construct description with dashboard URL
 	dashboardURL := s.DashboardURL(r)
 	description := version.Description + " dashboard: " + dashboardURL
@@ -64,7 +64,7 @@ func (l *Listener) HandleReq(msg []byte) (err error) {
 		l.ctx, 30*time.Second,
 	)
 	defer queryCancel()

 	// Collect all events from all filters
 	var allEvents event.S
 	for _, f := range *env.Filters {
@@ -38,7 +38,9 @@ const (
 func (s *Server) HandleWebsocket(w http.ResponseWriter, r *http.Request) {
 	remote := GetRemoteFromReq(r)
 	log.T.F("handling websocket connection from %s", remote)
+	// Log comprehensive proxy information for debugging
+	LogProxyInfo(r, "WebSocket connection from "+remote)
 	if len(s.Config.IPWhitelist) > 0 {
 		for _, ip := range s.Config.IPWhitelist {
 			log.T.F("checking IP whitelist: %s", ip)
@@ -55,9 +57,14 @@ whitelist:
 	defer cancel()
 	var err error
 	var conn *websocket.Conn
-	if conn, err = websocket.Accept(
-		w, r, &websocket.AcceptOptions{OriginPatterns: []string{"*"}},
-	); chk.E(err) {
+	// Configure WebSocket accept options for proxy compatibility
+	acceptOptions := &websocket.AcceptOptions{
+		OriginPatterns: []string{"*"}, // Allow all origins for proxy compatibility
+		// Don't check origin when behind a proxy - let the proxy handle it
+		InsecureSkipVerify: true,
+	}
+
+	if conn, err = websocket.Accept(w, r, acceptOptions); chk.E(err) {
+		log.E.F("websocket accept failed from %s: %v", remote, err)
 		return
 	}
@@ -65,18 +72,17 @@ whitelist:
 	conn.SetReadLimit(DefaultMaxMessageSize)
 	defer conn.CloseNow()
 	listener := &Listener{
-		ctx:    ctx,
-		Server: s,
-		conn:   conn,
-		remote: remote,
-		req:    r,
+		ctx:       ctx,
+		Server:    s,
+		conn:      conn,
+		remote:    remote,
+		req:       r,
+		startTime: time.Now(),
 	}
 	chal := make([]byte, 32)
 	rand.Read(chal)
 	listener.challenge.Store([]byte(hex.Enc(chal)))
-	// If admins are configured, immediately prompt client to AUTH (NIP-42)
-	if len(s.Config.Admins) > 0 {
-		// log.D.F("sending initial AUTH challenge to %s", remote)
+	if s.Config.ACLMode != "none" {
+		log.D.F("sending AUTH challenge to %s", remote)
 		if err = authenvelope.NewChallengeWith(listener.challenge.Load()).
 			Write(listener); chk.E(err) {
@@ -89,20 +95,23 @@ whitelist:
 	go s.Pinger(ctx, conn, ticker, cancel)
 	defer func() {
 		log.D.F("closing websocket connection from %s", remote)

 		// Cancel context and stop pinger
 		cancel()
 		ticker.Stop()

 		// Cancel all subscriptions for this connection
 		log.D.F("cancelling subscriptions for %s", remote)
 		listener.publishers.Receive(&W{Cancel: true})

 		// Log detailed connection statistics
-		log.D.F("ws connection closed %s: msgs=%d, REQs=%d, EVENTs=%d, duration=%v",
-			remote, listener.msgCount, listener.reqCount, listener.eventCount,
-			time.Since(time.Now())) // Note: This will be near-zero, would need start time tracked
+		dur := time.Since(listener.startTime)
+		log.D.F(
+			"ws connection closed %s: msgs=%d, REQs=%d, EVENTs=%d, duration=%v",
+			remote, listener.msgCount, listener.reqCount, listener.eventCount,
+			dur,
+		)

 		// Log any remaining connection state
 		if listener.authedPubkey.Load() != nil {
 			log.D.F("ws connection %s was authenticated", remote)
@@ -118,7 +127,7 @@ whitelist:
 		}
 		var typ websocket.MessageType
 		var msg []byte
-		// log.T.F("waiting for message from %s", remote)
+		log.T.F("waiting for message from %s", remote)
 		// Block waiting for message; rely on pings and context cancellation to detect dead peers
 		typ, msg, err = conn.Read(ctx)
@@ -160,9 +169,15 @@ whitelist:
 			pongStart := time.Now()
 			if err = conn.Write(writeCtx, PongMessage, msg); chk.E(err) {
 				pongDuration := time.Since(pongStart)
-				log.E.F("failed to send PONG to %s after %v: %v", remote, pongDuration, err)
+				log.E.F(
+					"failed to send PONG to %s after %v: %v", remote,
+					pongDuration, err,
+				)
 				if writeCtx.Err() != nil {
-					log.E.F("PONG write timeout to %s after %v (limit=%v)", remote, pongDuration, DefaultWriteTimeout)
+					log.E.F(
+						"PONG write timeout to %s after %v (limit=%v)", remote,
+						pongDuration, DefaultWriteTimeout,
+					)
 				}
 				writeCancel()
 				return
@@ -196,31 +211,37 @@ func (s *Server) Pinger(
 		case <-ticker.C:
 			pingCount++
 			log.D.F("sending PING #%d", pingCount)

 			// Create a write context with timeout for ping operation
 			pingCtx, pingCancel := context.WithTimeout(ctx, DefaultWriteTimeout)
 			pingStart := time.Now()

 			if err = conn.Ping(pingCtx); err != nil {
 				pingDuration := time.Since(pingStart)
-				log.E.F("PING #%d FAILED after %v: %v", pingCount, pingDuration, err)
+				log.E.F(
+					"PING #%d FAILED after %v: %v", pingCount, pingDuration,
+					err,
+				)
 				if pingCtx.Err() != nil {
-					log.E.F("PING #%d timeout after %v (limit=%v)", pingCount, pingDuration, DefaultWriteTimeout)
+					log.E.F(
+						"PING #%d timeout after %v (limit=%v)", pingCount,
+						pingDuration, DefaultWriteTimeout,
+					)
 				}
 				chk.E(err)
 				pingCancel()
 				return
 			}

 			pingDuration := time.Since(pingStart)
 			log.D.F("PING #%d sent successfully in %v", pingCount, pingDuration)

 			if pingDuration > time.Millisecond*100 {
 				log.D.F("SLOW PING #%d: %v (>100ms)", pingCount, pingDuration)
 			}

 			pingCancel()
 		case <-ctx.Done():
 			log.D.F("pinger context cancelled after %d pings", pingCount)
@@ -3,6 +3,8 @@ package app
 import (
 	"net/http"
+	"strings"

+	"lol.mleku.dev/log"
 )

 // GetRemoteFromReq retrieves the originating IP address of the client from
@@ -67,3 +69,28 @@ func GetRemoteFromReq(r *http.Request) (rr string) {
 	}
 	return
 }

 // LogProxyInfo logs comprehensive proxy information for debugging
 func LogProxyInfo(r *http.Request, prefix string) {
 	proxyHeaders := map[string]string{
 		"X-Forwarded-For":   r.Header.Get("X-Forwarded-For"),
 		"X-Real-IP":         r.Header.Get("X-Real-IP"),
 		"X-Forwarded-Proto": r.Header.Get("X-Forwarded-Proto"),
 		"X-Forwarded-Host":  r.Header.Get("X-Forwarded-Host"),
 		"X-Forwarded-Port":  r.Header.Get("X-Forwarded-Port"),
 		"Forwarded":         r.Header.Get("Forwarded"),
 		"Host":              r.Header.Get("Host"),
 		"User-Agent":        r.Header.Get("User-Agent"),
 	}

 	var info []string
 	for header, value := range proxyHeaders {
 		if value != "" {
 			info = append(info, header+":"+value)
 		}
 	}

 	if len(info) > 0 {
 		log.T.F("%s proxy info: %s", prefix, strings.Join(info, " "))
 	}
 }
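A quick way to see what `LogProxyInfo` reports is to feed it a synthetic request carrying the headers a reverse proxy typically injects. A sketch using net/http/httptest, assuming the package is importable as next.orly.dev/app (the module path plus the app directory) and trace-level logging is enabled:

```go
package main

import (
	"net/http/httptest"

	"next.orly.dev/app"
)

func main() {
	r := httptest.NewRequest("GET", "http://relay.example.com/", nil)
	// Headers a proxy such as Apache or nginx would normally set.
	r.Header.Set("X-Forwarded-For", "203.0.113.7")
	r.Header.Set("X-Forwarded-Proto", "https")
	r.Header.Set("X-Forwarded-Host", "orly-relay.example.com")

	// Emits a single trace line listing the non-empty proxy headers above.
	app.LogProxyInfo(r, "demo")
}
```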
@@ -19,6 +19,7 @@ type Listener struct {
 	req          *http.Request
 	challenge    atomic.Bytes
 	authedPubkey atomic.Bytes
+	startTime    time.Time
 	// Diagnostics: per-connection counters
 	msgCount int
 	reqCount int
@@ -40,17 +40,24 @@ type Server struct {
 	// Challenge storage for HTTP UI authentication
 	challengeMutex sync.RWMutex
 	challenges     map[string][]byte

 	paymentProcessor *PaymentProcessor
 }

 func (s *Server) ServeHTTP(w http.ResponseWriter, r *http.Request) {
-	// Set CORS headers for all responses
+	// Set comprehensive CORS headers for proxy compatibility
 	w.Header().Set("Access-Control-Allow-Origin", "*")
 	w.Header().Set("Access-Control-Allow-Methods", "GET, POST, OPTIONS")
-	w.Header().Set(
-		"Access-Control-Allow-Headers", "Content-Type, Authorization",
-	)
+	w.Header().Set("Access-Control-Allow-Headers",
+		"Origin, X-Requested-With, Content-Type, Accept, Authorization, "+
+			"X-Forwarded-For, X-Forwarded-Proto, X-Forwarded-Host, X-Real-IP, "+
+			"Upgrade, Connection, Sec-WebSocket-Key, Sec-WebSocket-Version, "+
+			"Sec-WebSocket-Protocol, Sec-WebSocket-Extensions")
+	w.Header().Set("Access-Control-Allow-Credentials", "true")
+	w.Header().Set("Access-Control-Max-Age", "86400")
+
+	// Add proxy-friendly headers
+	w.Header().Set("Vary", "Origin, Access-Control-Request-Method, Access-Control-Request-Headers")

 	// Handle preflight OPTIONS requests
 	if r.Method == "OPTIONS" {
@@ -58,6 +65,11 @@ func (s *Server) ServeHTTP(w http.ResponseWriter, r *http.Request) {
 		return
 	}

+	// Log proxy information for debugging (only for WebSocket requests to avoid spam)
+	if r.Header.Get("Upgrade") == "websocket" {
+		LogProxyInfo(r, "HTTP request")
+	}
+
 	// If this is a websocket request, only intercept the relay root path.
 	// This allows other websocket paths (e.g., Vite HMR) to be handled by the dev proxy when enabled.
 	if r.Header.Get("Upgrade") == "websocket" {
@@ -83,13 +95,30 @@ func (s *Server) ServeHTTP(w http.ResponseWriter, r *http.Request) {
 }

 func (s *Server) ServiceURL(req *http.Request) (st string) {
 	// Get host from various proxy headers
 	host := req.Header.Get("X-Forwarded-Host")
 	if host == "" {
 		host = req.Header.Get("Host")
 	}
 	if host == "" {
 		host = req.Host
 	}

 	// Get protocol from various proxy headers
 	proto := req.Header.Get("X-Forwarded-Proto")
 	if proto == "" {
 		if host == "localhost" {
 			proto = req.Header.Get("X-Forwarded-Scheme")
 		}
 		if proto == "" {
 			// Check if we're behind a proxy by looking for common proxy headers
 			hasProxyHeaders := req.Header.Get("X-Forwarded-For") != "" ||
 				req.Header.Get("X-Real-IP") != "" ||
 				req.Header.Get("Forwarded") != ""

 			if hasProxyHeaders {
 				// If we have proxy headers, assume HTTPS/WSS
 				proto = "wss"
 			} else if host == "localhost" {
 				proto = "ws"
 			} else if strings.Contains(host, ":") {
 				// has a port number
@@ -6,6 +6,7 @@
 	"dependencies": {
 		"react": "^18.2.0",
 		"react-dom": "^18.2.0",
+		"react-json-pretty": "^2.2.0",
 	},
 	"devDependencies": {
 		"bun-types": "latest",
@@ -25,10 +26,18 @@
 	"loose-envify": ["loose-envify@1.4.0", "", { "dependencies": { "js-tokens": "^3.0.0 || ^4.0.0" }, "bin": { "loose-envify": "cli.js" } }, "sha512-lyuxPGr/Wfhrlem2CL/UcnUc1zcqKAImBDzukY7Y5F/yQiNdko6+fRLevlw1HgMySw7f611UIY408EtxRSoK3Q=="],

+	"object-assign": ["object-assign@4.1.1", "", {}, "sha512-rJgTQnkUnH1sFw8yT6VSU3zD3sWmu6sZhIseY8VX+GRu3P6F7Fu+JNDoXfklElbLJSnc3FUQHVe4cU5hj+BcUg=="],
+
+	"prop-types": ["prop-types@15.8.1", "", { "dependencies": { "loose-envify": "^1.4.0", "object-assign": "^4.1.1", "react-is": "^16.13.1" } }, "sha512-oj87CgZICdulUohogVAR7AjlC0327U4el4L6eAvOqCeudMDVU0NThNaV+b9Df4dXgSP1gXMTnPdhfe/2qDH5cg=="],

 	"react": ["react@18.3.1", "", { "dependencies": { "loose-envify": "^1.1.0" } }, "sha512-wS+hAgJShR0KhEvPJArfuPVN1+Hz1t0Y6n5jLrGQbkb4urgPE/0Rve+1kMB1v/oWgHgm4WIcV+i7F2pTVj+2iQ=="],

 	"react-dom": ["react-dom@18.3.1", "", { "dependencies": { "loose-envify": "^1.1.0", "scheduler": "^0.23.2" }, "peerDependencies": { "react": "^18.3.1" } }, "sha512-5m4nQKp+rZRb09LNH59GM4BxTh9251/ylbKIbpe7TpGxfJ+9kv6BLkLBXIjjspbgbnIBNqlI23tRnTWT0snUIw=="],

+	"react-is": ["react-is@16.13.1", "", {}, "sha512-24e6ynE2H+OKt4kqsOvNd8kBpV65zoxbA4BVsEOB3ARVWQki/DHzaUoC5KuON/BiccDaCCTZBuOcfZs70kR8bQ=="],
+
+	"react-json-pretty": ["react-json-pretty@2.2.0", "", { "dependencies": { "prop-types": "^15.6.2" }, "peerDependencies": { "react": ">=15.0", "react-dom": ">=15.0" } }, "sha512-3UMzlAXkJ4R8S4vmkRKtvJHTewG4/rn1Q18n0zqdu/ipZbUPLVZD+QwC7uVcD/IAY3s8iNVHlgR2dMzIUS0n1A=="],

 	"scheduler": ["scheduler@0.23.2", "", { "dependencies": { "loose-envify": "^1.1.0" } }, "sha512-UOShsPwz7NrMUqhR6t0hWjFduvOzbtv7toDH1/hIrfRNIDBnnBWd0CwJTGvTpngVlmwGCdP9/Zl/tVrDqcuYzQ=="],

 	"undici-types": ["undici-types@7.12.0", "", {}, "sha512-goOacqME2GYyOZZfb5Lgtu+1IDmAlAEu5xnD3+xTzS10hT0vzpf0SPjkXwAw9Jm+4n/mQGDP3LO8CPbYROeBfQ=="],
app/web/dist/index-kk1m7jg4.js (vendored, new file, 161 lines): file diff suppressed because one or more lines are too long
app/web/dist/index-w8zpqk4w.js (vendored, 160 lines): file diff suppressed because one or more lines are too long
app/web/dist/index.html (vendored, 2 lines changed)
@@ -5,7 +5,7 @@
     <meta name="viewport" content="width=device-width, initial-scale=1.0" />
     <title>Nostr Relay</title>

-    <link rel="stylesheet" crossorigin href="./index-q4cwd1fy.css"><script type="module" crossorigin src="./index-w8zpqk4w.js"></script></head>
+    <link rel="stylesheet" crossorigin href="./index-q4cwd1fy.css"><script type="module" crossorigin src="./index-kk1m7jg4.js"></script></head>
   <body>
     <script>
       // Apply system theme preference immediately to avoid flash of wrong theme
@@ -10,7 +10,8 @@
   },
   "dependencies": {
     "react": "^18.2.0",
-    "react-dom": "^18.2.0"
+    "react-dom": "^18.2.0",
+    "react-json-pretty": "^2.2.0"
   },
   "devDependencies": {
     "bun-types": "latest"
@@ -1,4 +1,22 @@
 import React, { useState, useEffect, useRef } from 'react';
+import JSONPretty from 'react-json-pretty';
+
+function PrettyJSONView({ jsonString, maxHeightClass = 'max-h-64' }) {
+  let data;
+  try {
+    data = JSON.parse(jsonString);
+  } catch (_) {
+    data = jsonString;
+  }
+  return (
+    <div
+      className={`text-xs p-2 rounded overflow-auto ${maxHeightClass} break-all break-words whitespace-pre-wrap bg-gray-950 text-white`}
+      style={{ overflowWrap: 'anywhere', wordBreak: 'break-word' }}
+    >
+      <JSONPretty data={data} space={2} />
+    </div>
+  );
+}

 function App() {
   const [user, setUser] = useState(null);
@@ -25,6 +43,14 @@ function App() {
   const [allEventsHasMore, setAllEventsHasMore] = useState(true);
   const [expandedAllEventId, setExpandedAllEventId] = useState(null);

+  // Search state
+  const [searchQuery, setSearchQuery] = useState('');
+  const [searchResults, setSearchResults] = useState([]);
+  const [searchLoading, setSearchLoading] = useState(false);
+  const [searchOffset, setSearchOffset] = useState(0);
+  const [searchHasMore, setSearchHasMore] = useState(true);
+  const [expandedSearchEventId, setExpandedSearchEventId] = useState(null);
+
   // Profile cache for All Events Log
   const [profileCache, setProfileCache] = useState({});

@@ -68,6 +94,7 @@ function App() {
     exportAll: false,
     exportSpecific: false,
     importEvents: false,
+    search: true,
     eventsLog: false,
     allEventsLog: false
   });
@@ -992,6 +1019,177 @@ function App() {
     }
   }

   // Search functions
   function processSearchResponse(receivedEvents, reset) {
     try {
       const filtered = filterDeletedEvents(receivedEvents);
       const sorted = filtered.sort((a, b) => b.created_at - a.created_at);
       const currentOffset = reset ? 0 : searchOffset;
       const limit = 50;
       const page = sorted.slice(currentOffset, currentOffset + limit);
       if (reset) {
         setSearchResults(page);
         setSearchOffset(page.length);
       } else {
         setSearchResults(prev => [...prev, ...page]);
         setSearchOffset(prev => prev + page.length);
       }
       setSearchHasMore(currentOffset + page.length < sorted.length);
       // fetch profiles for authors in search results
       fetchProfilesForEvents(page);
     } catch (e) {
       console.error('Error processing search results:', e);
     } finally {
       setSearchLoading(false);
     }
   }

   async function fetchSearchResultsFromRelay(query, reset = true, limit = 50, timeoutMs = 10000) {
     if (!query || !query.trim()) {
       // clear results on empty query when resetting
       if (reset) {
         setSearchResults([]);
         setSearchOffset(0);
         setSearchHasMore(true);
       }
       return;
     }
     if (searchLoading) return;
     if (!reset && !searchHasMore) return;

     setSearchLoading(true);

     return new Promise((resolve) => {
       let resolved = false;
       let receivedEvents = [];
       let ws;
       let reqSent = false;

       try {
         ws = new WebSocket(relayURL());
       } catch (e) {
         console.error('Failed to create WebSocket:', e);
         setSearchLoading(false);
         resolve();
         return;
       }

       const subId = 'search-' + Math.random().toString(36).slice(2);
       const timer = setTimeout(() => {
         if (ws && ws.readyState === 1) {
           try { ws.close(); } catch (_) {}
         }
         if (!resolved) {
           resolved = true;
           processSearchResponse(receivedEvents, reset);
           resolve();
         }
       }, timeoutMs);

       const sendRequest = () => {
         if (!reqSent && ws && ws.readyState === 1) {
           try {
             const req = ['REQ', subId, { search: query }];
             ws.send(JSON.stringify(req));
             reqSent = true;
           } catch (e) {
             console.error('Failed to send WebSocket request:', e);
           }
         }
       };

       ws.onopen = () => sendRequest();

       ws.onmessage = async (msg) => {
         try {
           const data = JSON.parse(msg.data);
           const type = data[0];
           if (type === 'AUTH') {
             const challenge = data[1];
             if (!window.nostr) {
               clearTimeout(timer);
               if (!resolved) {
                 resolved = true;
                 processSearchResponse(receivedEvents, reset);
                 resolve();
               }
               return;
             }
             try {
               const authEvent = { kind: 22242, created_at: Math.floor(Date.now()/1000), tags: [['relay', relayURL()], ['challenge', challenge]], content: '' };
               const signed = await window.nostr.signEvent(authEvent);
               ws.send(JSON.stringify(['AUTH', signed]));
             } catch (authErr) {
               console.error('Search auth failed:', authErr);
               clearTimeout(timer);
               if (!resolved) {
                 resolved = true;
                 processSearchResponse(receivedEvents, reset);
                 resolve();
               }
             }
           } else if (type === 'EVENT' && data[1] === subId) {
             const ev = data[2];
             if (ev) {
               receivedEvents.push({
                 id: ev.id,
                 kind: ev.kind,
                 created_at: ev.created_at,
                 content: ev.content || '',
                 author: ev.pubkey || '',
                 raw_json: JSON.stringify(ev)
               });
             }
           } else if (type === 'EOSE' && data[1] === subId) {
             try { ws.send(JSON.stringify(['CLOSE', subId])); } catch (_) {}
             try { ws.close(); } catch (_) {}
             clearTimeout(timer);
             if (!resolved) {
               resolved = true;
               processSearchResponse(receivedEvents, reset);
               resolve();
             }
           } else if (type === 'CLOSED' && data[1] === subId) {
             clearTimeout(timer);
             if (!resolved) {
               resolved = true;
               processSearchResponse(receivedEvents, reset);
               resolve();
             }
           } else if (type === 'OK' && data[1] && data[1].length === 64 && !reqSent) {
             sendRequest();
           }
         } catch (e) {
           console.error('Search WS message parse error:', e);
         }
       };

       ws.onerror = (err) => {
         console.error('Search WS error:', err);
         try { ws.close(); } catch (_) {}
         clearTimeout(timer);
         if (!resolved) {
           resolved = true;
           processSearchResponse(receivedEvents, reset);
           resolve();
         }
       };

       ws.onclose = () => {
         clearTimeout(timer);
         if (!resolved) {
           resolved = true;
           processSearchResponse(receivedEvents, reset);
           resolve();
         }
       };
     });
   }

   function toggleSearchEventExpansion(eventId) {
     setExpandedSearchEventId(current => current === eventId ? null : eventId);
   }

   // Events log functions
   async function fetchEvents(reset = false) {
     await fetchEventsFromRelay(reset);
@@ -1015,11 +1213,22 @@ function App() {

   function copyEventJSON(eventJSON) {
     try {
-      navigator.clipboard.writeText(eventJSON);
+      // Ensure minified JSON is copied regardless of input format
+      let toCopy = eventJSON;
+      try {
+        toCopy = JSON.stringify(JSON.parse(eventJSON));
+      } catch (_) {
+        // if not valid JSON string, fall back to original
+      }
+      navigator.clipboard.writeText(toCopy);
     } catch (error) {
       // Fallback for older browsers
       const textArea = document.createElement('textarea');
-      textArea.value = eventJSON;
+      let toCopy = eventJSON;
+      try {
+        toCopy = JSON.stringify(JSON.parse(eventJSON));
+      } catch (_) {}
+      textArea.value = toCopy;
       document.body.appendChild(textArea);
       textArea.select();
       document.execCommand('copy');
@@ -1617,6 +1826,140 @@ function App() {
             </div>
           </>
         )}
         {/* Search */}
         <div className={`m-2 p-2 ${getPanelBgClass()} rounded-lg w-full`}>
           <div
             className={`text-lg font-bold flex items-center justify-between cursor-pointer p-2 ${getTextClass()} ${getThemeClasses('hover:bg-gray-300', 'hover:bg-gray-700')} rounded`}
             onClick={() => toggleSection('search')}
           >
             <span>Search</span>
             <span className="text-xl">
               {expandedSections.search ? '▼' : '▶'}
             </span>
           </div>
           {expandedSections.search && (
             <div className="p-2 bg-gray-900 rounded-lg mt-2">
               <div className="flex gap-2 items-center mb-3">
                 <input
                   type="text"
                   placeholder="Search notes..."
                   value={searchQuery}
                   onChange={(e) => setSearchQuery(e.target.value)}
                   onKeyDown={(e) => { if (e.key === 'Enter') { fetchSearchResultsFromRelay(searchQuery, true); } }}
                   className={`${getThemeClasses('bg-white text-black border-gray-300', 'bg-gray-800 text-white border-gray-600')} border rounded px-3 py-2 flex-grow`}
                 />
                 <button
                   className={`${getThemeClasses('bg-blue-600 hover:bg-blue-700', 'bg-blue-500 hover:bg-blue-600')} text-white px-4 py-2 rounded`}
                   onClick={() => fetchSearchResultsFromRelay(searchQuery, true)}
                   disabled={searchLoading}
                   title="Search"
                 >
                   {searchLoading ? 'Searching…' : 'Search'}
                 </button>
               </div>

               <div className="space-y-2">
                 {searchResults.length === 0 && !searchLoading && (
                   <div className={`text-center py-4 ${getTextClass()}`}>No results</div>
                 )}

                 {searchResults.map((event) => (
                   <div key={event.id} className={`border rounded p-3 ${getThemeClasses('border-gray-300 bg-white', 'border-gray-600 bg-gray-800')}`}>
                     <div className="cursor-pointer" onClick={() => toggleSearchEventExpansion(event.id)}>
                       <div className="flex items-center justify-between w-full">
                         <div className="flex items-center gap-6 w-full">
                           <div className="flex items-center gap-3 min-w-0">
                             {event.author && profileCache[event.author] && (
                               <>
                                 {profileCache[event.author].picture && (
                                   <img
                                     src={profileCache[event.author].picture}
                                     alt={profileCache[event.author].display_name || profileCache[event.author].name || 'User avatar'}
                                     className={`w-8 h-8 rounded-full object-cover border h-16 ${getThemeClasses('border-gray-300', 'border-gray-600')}`}
                                     onError={(e) => { e.currentTarget.style.display = 'none'; }}
                                   />
                                 )}
                                 <div className="flex flex-col flex-grow w-full">
                                   <span className={`text-sm font-medium ${getTextClass()}`}>
                                     {profileCache[event.author].display_name || profileCache[event.author].name || `${event.author.slice(0, 8)}...`}
                                   </span>
                                   {profileCache[event.author].display_name && profileCache[event.author].name && (
                                     <span className={`text-xs ${getTextClass()} opacity-70`}>
                                       {profileCache[event.author].name}
                                     </span>
                                   )}
                                 </div>
                               </>
                             )}
                             {event.author && !profileCache[event.author] && (
                               <span className={`text-sm font-medium ${getTextClass()}`}>
                                 {`${event.author.slice(0, 8)}...`}
                               </span>
                             )}
                           </div>

                           <div className="flex items-center gap-3">
                             <span className={`font-mono text-sm px-2 py-1 rounded ${getThemeClasses('bg-blue-100 text-blue-800', 'bg-blue-900 text-blue-200')}`}>
                               Kind {event.kind}
                             </span>
                             <span className={`text-sm ${getTextClass()}`}>
                               {formatTimestamp(event.created_at)}
                             </span>
                           </div>
                         </div>
                         <div className="justify-end ml-auto rounded-full h-16 w-16 flex items-center justify-center">
                           <div className={`text-white text-xs px-4 py-4 rounded flex flex-grow items-center ${getThemeClasses('text-gray-700', 'text-gray-300')}`}>
                             {expandedSearchEventId === event.id ? '▼' : ' '}
                           </div>
                           <button
                             className="bg-red-600 hover:bg-red-700 text-white text-xs px-1 py-1 rounded flex items-center"
                             onClick={(e) => { e.stopPropagation(); deleteEvent(event.id, event.raw_json, event.author); }}
                             title="Delete this event"
                           >
                             🗑️
                           </button>
                         </div>
                       </div>

                       {event.content && (
                         <div className={`mt-2 text-sm ${getTextClass()}`}>
                           {truncateContent(event.content)}
                         </div>
                       )}
                     </div>

                     {expandedSearchEventId === event.id && (
                       <div className={`mt-3 p-3 rounded ${getThemeClasses('bg-gray-100', 'bg-gray-900')}`} onClick={(e) => e.stopPropagation()}>
                         <div className="flex items-center justify-between mb-2">
                           <span className={`text-sm font-semibold ${getTextClass()}`}>Raw JSON</span>
                           <button
                             className={`${getThemeClasses('bg-gray-200 hover:bg-gray-300 text-black', 'bg-gray-800 hover:bg-gray-700 text-white')} text-xs px-2 py-1 rounded`}
                             onClick={() => copyEventJSON(event.raw_json)}
                           >
                             Copy JSON
                           </button>
                         </div>
                         <PrettyJSONView jsonString={event.raw_json} maxHeightClass="max-h-64" />
                       </div>
                     )}
                   </div>
                 ))}

                 {!searchLoading && searchHasMore && searchResults.length > 0 && (
                   <div className="text-center py-4">
                     <button
                       className={`${getThemeClasses('bg-blue-600 hover:bg-blue-700', 'bg-blue-500 hover:bg-blue-600')} text-white px-4 py-2 rounded`}
                       onClick={() => fetchSearchResultsFromRelay(searchQuery, false)}
                     >
                       Load More
                     </button>
                   </div>
                 )}
               </div>
             </div>
           )}
         </div>

         {/* My Events Log */}
         <div className={`m-2 p-2 ${getPanelBgClass()} rounded-lg w-full`}>
           <div
@@ -1734,9 +2077,7 @@ function App() {
                         Copy
                       </button>
                     </div>
-                    <pre className={`text-xs p-2 rounded overflow-auto max-h-40 break-all whitespace-pre-wrap ${getPanelBgClass()} ${getTextClass()}`}>
-                      {JSON.stringify(JSON.parse(event.raw_json), null, 2)}
-                    </pre>
+                    <PrettyJSONView jsonString={event.raw_json} maxHeightClass="max-h-40" />
                   </div>
                 )}
               </div>
@@ -1883,9 +2224,7 @@ function App() {
                         Copy
                       </button>
                     </div>
-                    <pre className={`text-xs p-2 rounded overflow-auto max-h-40 break-all whitespace-pre-wrap ${getPanelBgClass()} ${getTextClass()}`}>
-                      {JSON.stringify(JSON.parse(event.raw_json), null, 2)}
-                    </pre>
+                    <PrettyJSONView jsonString={event.raw_json} maxHeightClass="max-h-40" />
                   </div>
                 )}
               </div>
@@ -1,12 +1,13 @@
 # Docker Compose for Stella's Nostr Relay
 # Owner: npub1v30tsz9vw6ylpz63g0a702nj3xa26t3m7p5us8f2y2sd8v6cnsvq465zjx

 version: '3.8'

 services:
-  stella-relay:
-    image: silberengel/orly-relay:latest
-    container_name: stella-nostr-relay
+  orly-relay:
+    build:
+      context: .
+      dockerfile: Dockerfile
+    image: silberengel/next-orly:latest
+    container_name: orly-relay
     restart: unless-stopped
     ports:
       - "127.0.0.1:7777:7777"
@@ -19,21 +20,23 @@ services:
       - ORLY_LISTEN=0.0.0.0
       - ORLY_PORT=7777
       - ORLY_LOG_LEVEL=info
-      - ORLY_MAX_CONNECTIONS=1000
+      - ORLY_DB_LOG_LEVEL=error
       - ORLY_OWNERS=npub1v30tsz9vw6ylpz63g0a702nj3xa26t3m7p5us8f2y2sd8v6cnsvq465zjx
-      - ORLY_ADMINS=npub1v30tsz9vw6ylpz63g0a702nj3xa26t3m7p5us8f2y2sd8v6cnsvq465zjx,npub1l5sga6xg72phsz5422ykujprejwud075ggrr3z2hwyrfgr7eylqstegx9z
+      - ORLY_ADMINS=npub1v30tsz9vw6ylpz63g0a702nj3xa26t3m7p5us8f2y2sd8v6cnsvq465zjx,npub1m4ny6hjqzepn4rxknuq94c2gpqzr29ufkkw7ttcxyak7v43n6vvsajc2jl,npub1l5sga6xg72phsz5422ykujprejwud075ggrr3z2hwyrfgr7eylqstegx9z

-      # Performance Settings (based on v0.4.8 optimizations)
-      - ORLY_CONCURRENT_WORKERS=0  # 0 = auto-detect CPU cores
-      - ORLY_BATCH_SIZE=1000
-      - ORLY_CACHE_SIZE=10000
+      # ACL and Spider Configuration
+      - ORLY_ACL_MODE=follows
+      - ORLY_SPIDER_MODE=follows

-      # Database Settings
-      - BADGER_LOG_LEVEL=ERROR
-      - BADGER_SYNC_WRITES=false  # Better performance, slightly less durability
+      # Bootstrap relay URLs for initial sync
+      - ORLY_BOOTSTRAP_RELAYS=wss://profiles.nostr1.com,wss://purplepag.es,wss://relay.nostr.band,wss://relay.damus.io

-      # Security Settings
-      - ORLY_REQUIRE_AUTH=false
+      # Subscription Settings (optional)
+      - ORLY_SUBSCRIPTION_ENABLED=false
+      - ORLY_MONTHLY_PRICE_SATS=0

       # Performance Settings
       - ORLY_MAX_CONNECTIONS=1000
       - ORLY_MAX_EVENT_SIZE=65536
       - ORLY_MAX_SUBSCRIPTIONS=20
@@ -74,7 +77,7 @@ services:
       - ./nginx/ssl:/etc/nginx/ssl:ro
       - nginx_logs:/var/log/nginx
     depends_on:
-      - stella-relay
+      - orly-relay
     profiles:
       - proxy  # Only start with: docker-compose --profile proxy up

@@ -90,4 +93,4 @@ volumes:

 networks:
   default:
-    name: stella-relay-network
+    name: orly-relay-network
go.mod (44 changed lines)
@@ -4,48 +4,50 @@ go 1.25.0

 require (
 	github.com/adrg/xdg v0.5.3
-	github.com/coder/websocket v1.8.13
+	github.com/coder/websocket v1.8.14
 	github.com/davecgh/go-spew v1.1.1
 	github.com/dgraph-io/badger/v4 v4.8.0
 	github.com/kardianos/osext v0.0.0-20190222173326-2bc1f35cddc0
 	github.com/klauspost/cpuid/v2 v2.3.0
 	github.com/pkg/profile v1.7.0
 	github.com/puzpuzpuz/xsync/v3 v3.5.1
-	github.com/stretchr/testify v1.10.0
+	github.com/stretchr/testify v1.11.1
 	github.com/templexxx/xhex v0.0.0-20200614015412-aed53437177b
 	go-simpler.org/env v0.12.0
 	go.uber.org/atomic v1.11.0
-	golang.org/x/crypto v0.41.0
-	golang.org/x/exp v0.0.0-20250819193227-8b4c13bb791b
+	golang.org/x/crypto v0.42.0
+	golang.org/x/exp v0.0.0-20251002181428-27f1f14c8bb9
 	golang.org/x/lint v0.0.0-20241112194109-818c5a804067
-	golang.org/x/net v0.43.0
+	golang.org/x/net v0.44.0
 	honnef.co/go/tools v0.6.1
 	lol.mleku.dev v1.0.3
 	lukechampine.com/frand v1.5.1
 )

 require (
-	github.com/BurntSushi/toml v1.4.1-0.20240526193622-a339e1f7089c // indirect
+	github.com/BurntSushi/toml v1.5.0 // indirect
 	github.com/cespare/xxhash/v2 v2.3.0 // indirect
-	github.com/dgraph-io/ristretto/v2 v2.2.0 // indirect
+	github.com/dgraph-io/ristretto/v2 v2.3.0 // indirect
 	github.com/dustin/go-humanize v1.0.1 // indirect
-	github.com/felixge/fgprof v0.9.3 // indirect
+	github.com/felixge/fgprof v0.9.5 // indirect
 	github.com/go-logr/logr v1.4.3 // indirect
 	github.com/go-logr/stdr v1.2.2 // indirect
-	github.com/google/flatbuffers v25.2.10+incompatible // indirect
-	github.com/google/pprof v0.0.0-20211214055906-6f57359322fd // indirect
+	github.com/google/flatbuffers v25.9.23+incompatible // indirect
+	github.com/google/pprof v0.0.0-20251002213607-436353cc1ee6 // indirect
 	github.com/klauspost/compress v1.18.0 // indirect
 	github.com/pmezard/go-difflib v1.0.0 // indirect
-	github.com/templexxx/cpu v0.0.1 // indirect
-	go.opentelemetry.io/auto/sdk v1.1.0 // indirect
-	go.opentelemetry.io/otel v1.37.0 // indirect
-	go.opentelemetry.io/otel/metric v1.37.0 // indirect
-	go.opentelemetry.io/otel/trace v1.37.0 // indirect
-	golang.org/x/exp/typeparams v0.0.0-20231108232855-2478ac86f678 // indirect
-	golang.org/x/mod v0.27.0 // indirect
-	golang.org/x/sync v0.16.0 // indirect
-	golang.org/x/sys v0.35.0 // indirect
-	golang.org/x/tools v0.36.0 // indirect
-	google.golang.org/protobuf v1.36.6 // indirect
+	github.com/templexxx/cpu v0.1.1 // indirect
+	go.opentelemetry.io/auto/sdk v1.2.1 // indirect
+	go.opentelemetry.io/otel v1.38.0 // indirect
+	go.opentelemetry.io/otel/metric v1.38.0 // indirect
+	go.opentelemetry.io/otel/trace v1.38.0 // indirect
+	golang.org/x/exp/typeparams v0.0.0-20251002181428-27f1f14c8bb9 // indirect
+	golang.org/x/mod v0.28.0 // indirect
+	golang.org/x/sync v0.17.0 // indirect
+	golang.org/x/sys v0.36.0 // indirect
+	golang.org/x/tools v0.37.0 // indirect
+	google.golang.org/protobuf v1.36.10 // indirect
 	gopkg.in/yaml.v3 v3.0.1 // indirect
 )

 retract v1.0.3
go.sum (102 changed lines)
@@ -1,39 +1,53 @@
-github.com/BurntSushi/toml v1.4.1-0.20240526193622-a339e1f7089c h1:pxW6RcqyfI9/kWtOwnv/G+AzdKuy2ZrqINhenH4HyNs=
-github.com/BurntSushi/toml v1.4.1-0.20240526193622-a339e1f7089c/go.mod h1:ukJfTF/6rtPPRCnwkur4qwRxa8vTRFBF0uk2lLoLwho=
+github.com/BurntSushi/toml v1.5.0 h1:W5quZX/G/csjUnuI8SUYlsHs9M38FC7znL0lIO+DvMg=
+github.com/BurntSushi/toml v1.5.0/go.mod h1:ukJfTF/6rtPPRCnwkur4qwRxa8vTRFBF0uk2lLoLwho=
 github.com/adrg/xdg v0.5.3 h1:xRnxJXne7+oWDatRhR1JLnvuccuIeCoBu2rtuLqQB78=
 github.com/adrg/xdg v0.5.3/go.mod h1:nlTsY+NNiCBGCK2tpm09vRqfVzrc2fLmXGpBLF0zlTQ=
 github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs=
 github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
+github.com/chromedp/cdproto v0.0.0-20230802225258-3cf4e6d46a89/go.mod h1:GKljq0VrfU4D5yc+2qA6OVr8pmO/MBbPEWqWQ/oqGEs=
+github.com/chromedp/chromedp v0.9.2/go.mod h1:LkSXJKONWTCHAfQasKFUZI+mxqS4tZqhmtGzzhLsnLs=
+github.com/chromedp/sysutil v1.0.0/go.mod h1:kgWmDdq8fTzXYcKIBqIYvRRTnYb9aNS9moAV0xufSww=
+github.com/chzyer/logex v1.1.10/go.mod h1:+Ywpsq7O8HXn0nuIou7OrIPyXbp3wmkHB+jjWRnGsAI=
+github.com/chzyer/logex v1.2.1/go.mod h1:JLbx6lG2kDbNRFnfkgvh4eRJRPX1QCoOIWomwysCBrQ=
+github.com/chzyer/readline v0.0.0-20180603132655-2972be24d48e/go.mod h1:nSuG5e5PlCu98SY8svDHJxuZscDgtXS6KTTbou5AhLI=
+github.com/chzyer/readline v1.5.1/go.mod h1:Eh+b79XXUwfKfcPLepksvw2tcLE/Ct21YObkaSkeBlk=
+github.com/chzyer/test v0.0.0-20180213035817-a1ea475d72b1/go.mod h1:Q3SI9o4m/ZMnBNeIyt5eFwwo7qiLfzFZmjNmxjkiQlU=
-github.com/coder/websocket v1.8.13 h1:f3QZdXy7uGVz+4uCJy2nTZyM0yTBj8yANEHhqlXZ9FE=
-github.com/coder/websocket v1.8.13/go.mod h1:LNVeNrXQZfe5qhS9ALED3uA+l5pPqvwXg3CKoDBB2gs=
+github.com/chzyer/test v1.0.0/go.mod h1:2JlltgoNkt4TW/z9V/IzDdFaMTM2JPIi26O1pF38GC8=
+github.com/coder/websocket v1.8.14 h1:9L0p0iKiNOibykf283eHkKUHHrpG7f65OE3BhhO7v9g=
+github.com/coder/websocket v1.8.14/go.mod h1:NX3SzP+inril6yawo5CQXx8+fk145lPDC6pumgx0mVg=
 github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
 github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
 github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
 github.com/dgraph-io/badger/v4 v4.8.0 h1:JYph1ChBijCw8SLeybvPINizbDKWZ5n/GYbz2yhN/bs=
 github.com/dgraph-io/badger/v4 v4.8.0/go.mod h1:U6on6e8k/RTbUWxqKR0MvugJuVmkxSNc79ap4917h4w=
-github.com/dgraph-io/ristretto/v2 v2.2.0 h1:bkY3XzJcXoMuELV8F+vS8kzNgicwQFAaGINAEJdWGOM=
-github.com/dgraph-io/ristretto/v2 v2.2.0/go.mod h1:RZrm63UmcBAaYWC1DotLYBmTvgkrs0+XhBd7Npn7/zI=
+github.com/dgraph-io/ristretto/v2 v2.3.0 h1:qTQ38m7oIyd4GAed/QkUZyPFNMnvVWyazGXRwvOt5zk=
+github.com/dgraph-io/ristretto/v2 v2.3.0/go.mod h1:gpoRV3VzrEY1a9dWAYV6T1U7YzfgttXdd/ZzL1s9OZM=
 github.com/dgryski/go-farm v0.0.0-20240924180020-3414d57e47da h1:aIftn67I1fkbMa512G+w+Pxci9hJPB8oMnkcP3iZF38=
 github.com/dgryski/go-farm v0.0.0-20240924180020-3414d57e47da/go.mod h1:SqUrOPUnsFjfmXRMNPybcSiG0BgUW2AuFH8PAnS2iTw=
 github.com/dustin/go-humanize v1.0.1 h1:GzkhY7T5VNhEkwH0PVJgjz+fX1rhBrR7pRT3mDkpeCY=
 github.com/dustin/go-humanize v1.0.1/go.mod h1:Mu1zIs6XwVuF/gI1OepvI0qD18qycQx+mFykh5fBlto=
-github.com/felixge/fgprof v0.9.3 h1:VvyZxILNuCiUCSXtPtYmmtGvb65nqXh2QFWc0Wpf2/g=
-github.com/felixge/fgprof v0.9.3/go.mod h1:RdbpDgzqYVh/T9fPELJyV7EYJuHB55UTEULNun8eiPw=
+github.com/felixge/fgprof v0.9.5 h1:8+vR6yu2vvSKn08urWyEuxx75NWPEvybbkBirEpsbVY=
+github.com/felixge/fgprof v0.9.5/go.mod h1:yKl+ERSa++RYOs32d8K6WEXCB4uXdLls4ZaZPpayhMM=
+github.com/go-logr/logr v1.2.2/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
 github.com/go-logr/logr v1.4.3 h1:CjnDlHq8ikf6E492q6eKboGOC0T8CDaOvkHCIg8idEI=
 github.com/go-logr/logr v1.4.3/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
 github.com/go-logr/stdr v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag=
 github.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre4VKE=
-github.com/google/flatbuffers v25.2.10+incompatible h1:F3vclr7C3HpB1k9mxCGRMXq6FdUalZ6H/pNX4FP1v0Q=
-github.com/google/flatbuffers v25.2.10+incompatible/go.mod h1:1AeVuKshWv4vARoZatz6mlQ0JxURH0Kv5+zNeJKJCa8=
+github.com/gobwas/httphead v0.1.0/go.mod h1:O/RXo79gxV8G+RqlR/otEwx4Q36zl9rqC5u12GKvMCM=
+github.com/gobwas/pool v0.2.1/go.mod h1:q8bcK0KcYlCgd9e7WYLm9LpyS+YeLd8JVDW6WezmKEw=
+github.com/gobwas/ws v1.2.1/go.mod h1:hRKAFb8wOxFROYNsT1bqfWnhX+b5MFeJM9r2ZSwg/KY=
+github.com/google/flatbuffers v25.9.23+incompatible h1:rGZKv+wOb6QPzIdkM2KxhBZCDrA0DeN6DNmRDrqIsQU=
+github.com/google/flatbuffers v25.9.23+incompatible/go.mod h1:1AeVuKshWv4vARoZatz6mlQ0JxURH0Kv5+zNeJKJCa8=
 github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8=
 github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU=
-github.com/google/pprof v0.0.0-20211214055906-6f57359322fd h1:1FjCyPC+syAzJ5/2S8fqdZK1R22vvA0J7JZKcuOIQ7Y=
-github.com/google/pprof v0.0.0-20211214055906-6f57359322fd/go.mod h1:KgnwoLYCZ8IQu3XUZ8Nc/bM9CCZFOyjUNOSygVozoDg=
+github.com/google/pprof v0.0.0-20240227163752-401108e1b7e7/go.mod h1:czg5+yv1E0ZGTi6S6vVK1mke0fV+FaUhNGcd6VRS9Ik=
+github.com/google/pprof v0.0.0-20251002213607-436353cc1ee6 h1:/WHh/1k4thM/w+PAZEIiZK9NwCMFahw5tUzKUCnUtds=
+github.com/google/pprof v0.0.0-20251002213607-436353cc1ee6/go.mod h1:I6V7YzU0XDpsHqbsyrghnFZLO1gwK6NPTNvmetQIk9U=
+github.com/ianlancetaylor/demangle v0.0.0-20210905161508-09a460cdf81d/go.mod h1:aYm2/VgdVmcIU8iMfdMvDMsRAQjcfZSKFby6HOFvi/w=
+github.com/ianlancetaylor/demangle v0.0.0-20230524184225-eabc099b10ab/go.mod h1:gx7rwoVhcfuVKG5uya9Hs3Sxj7EIvldVofAWIUtGouw=
+github.com/josharian/intern v1.0.0/go.mod h1:5DoeVV0s6jJacbCEi61lwdGj/aVlrQvzHFFd8Hwg//Y=
 github.com/kardianos/osext v0.0.0-20190222173326-2bc1f35cddc0 h1:iQTw/8FWTuc7uiaSepXwyf3o52HaUYcV+Tu66S3F5GA=
 github.com/kardianos/osext v0.0.0-20190222173326-2bc1f35cddc0/go.mod h1:1NbS8ALrpOvjt0rHPNLyCIeMtbizbir8U//inJ+zuB8=
 github.com/klauspost/compress v1.18.0 h1:c/Cqfb0r+Yi+JtIEq73FWXVkRonBlf0CRNYc8Zttxdo=
@@ -44,70 +58,76 @@ github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
 github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=
 github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
 github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
+github.com/ledongthuc/pdf v0.0.0-20220302134840-0c2507a12d80/go.mod h1:imJHygn/1yfhB7XSJJKlFZKl/J+dCPAknuiaGOshXAs=
+github.com/mailru/easyjson v0.7.7/go.mod h1:xzfreul335JAWq5oZzymOObrkdz5UnU4kGfJJLY9Nlc=
+github.com/orisano/pixelmatch v0.0.0-20220722002657-fb0b55479cde/go.mod h1:nZgzbfBr3hhjoZnS66nKrHmduYNpc34ny7RK4z5/HM0=
 github.com/pkg/profile v1.7.0 h1:hnbDkaNWPCLMO9wGLdBFTIZvzDrDfBM2072E1S9gJkA=
 github.com/pkg/profile v1.7.0/go.mod h1:8Uer0jas47ZQMJ7VD+OHknK4YDY07LPUC6dEvqDjvNo=
 github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
 github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
 github.com/puzpuzpuz/xsync/v3 v3.5.1 h1:GJYJZwO6IdxN/IKbneznS6yPkVC+c3zyY/j19c++5Fg=
 github.com/puzpuzpuz/xsync/v3 v3.5.1/go.mod h1:VjzYrABPabuM4KyBh1Ftq6u8nhwY5tBPKP9jpmh0nnA=
-github.com/rogpeppe/go-internal v1.13.1 h1:KvO1DLK/DRN07sQ1LQKScxyZJuNnedQ5/wKSR38lUII=
-github.com/rogpeppe/go-internal v1.13.1/go.mod h1:uMEvuHeurkdAXX61udpOXGD/AzZDWNMNyH2VO9fmH0o=
+github.com/rogpeppe/go-internal v1.14.1 h1:UQB4HGPB6osV0SQTLymcB4TgvyWu6ZyliaW0tI/otEQ=
|
||||
github.com/rogpeppe/go-internal v1.14.1/go.mod h1:MaRKkUm5W0goXpeCfT7UZI6fk/L7L7so1lCWt35ZSgc=
|
||||
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
|
||||
github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
|
||||
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
|
||||
github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
|
||||
github.com/stretchr/testify v1.10.0 h1:Xv5erBjTwe/5IxqUQTdXv5kgmIvbHo3QQyRwhJsOfJA=
|
||||
github.com/stretchr/testify v1.10.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
|
||||
github.com/templexxx/cpu v0.0.1 h1:hY4WdLOgKdc8y13EYklu9OUTXik80BkxHoWvTO6MQQY=
|
||||
github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=
|
||||
github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
|
||||
github.com/templexxx/cpu v0.0.1/go.mod h1:w7Tb+7qgcAlIyX4NhLuDKt78AHA5SzPmq0Wj6HiEnnk=
|
||||
github.com/templexxx/cpu v0.1.1 h1:isxHaxBXpYFWnk2DReuKkigaZyrjs2+9ypIdGP4h+HI=
|
||||
github.com/templexxx/cpu v0.1.1/go.mod h1:w7Tb+7qgcAlIyX4NhLuDKt78AHA5SzPmq0Wj6HiEnnk=
|
||||
github.com/templexxx/xhex v0.0.0-20200614015412-aed53437177b h1:XeDLE6c9mzHpdv3Wb1+pWBaWv/BlHK0ZYIu/KaL6eHg=
|
||||
github.com/templexxx/xhex v0.0.0-20200614015412-aed53437177b/go.mod h1:7rwmCH0wC2fQvNEvPZ3sKXukhyCTyiaZ5VTZMQYpZKQ=
|
||||
go-simpler.org/env v0.12.0 h1:kt/lBts0J1kjWJAnB740goNdvwNxt5emhYngL0Fzufs=
|
||||
go-simpler.org/env v0.12.0/go.mod h1:cc/5Md9JCUM7LVLtN0HYjPTDcI3Q8TDaPlNTAlDU+WI=
|
||||
go.opentelemetry.io/auto/sdk v1.1.0 h1:cH53jehLUN6UFLY71z+NDOiNJqDdPRaXzTel0sJySYA=
|
||||
go.opentelemetry.io/auto/sdk v1.1.0/go.mod h1:3wSPjt5PWp2RhlCcmmOial7AvC4DQqZb7a7wCow3W8A=
|
||||
go.opentelemetry.io/otel v1.37.0 h1:9zhNfelUvx0KBfu/gb+ZgeAfAgtWrfHJZcAqFC228wQ=
|
||||
go.opentelemetry.io/otel v1.37.0/go.mod h1:ehE/umFRLnuLa/vSccNq9oS1ErUlkkK71gMcN34UG8I=
|
||||
go.opentelemetry.io/otel/metric v1.37.0 h1:mvwbQS5m0tbmqML4NqK+e3aDiO02vsf/WgbsdpcPoZE=
|
||||
go.opentelemetry.io/otel/metric v1.37.0/go.mod h1:04wGrZurHYKOc+RKeye86GwKiTb9FKm1WHtO+4EVr2E=
|
||||
go.opentelemetry.io/otel/trace v1.37.0 h1:HLdcFNbRQBE2imdSEgm/kwqmQj1Or1l/7bW6mxVK7z4=
|
||||
go.opentelemetry.io/otel/trace v1.37.0/go.mod h1:TlgrlQ+PtQO5XFerSPUYG0JSgGyryXewPGyayAWSBS0=
|
||||
go.opentelemetry.io/auto/sdk v1.2.1 h1:jXsnJ4Lmnqd11kwkBV2LgLoFMZKizbCi5fNZ/ipaZ64=
|
||||
go.opentelemetry.io/auto/sdk v1.2.1/go.mod h1:KRTj+aOaElaLi+wW1kO/DZRXwkF4C5xPbEe3ZiIhN7Y=
|
||||
go.opentelemetry.io/otel v1.38.0 h1:RkfdswUDRimDg0m2Az18RKOsnI8UDzppJAtj01/Ymk8=
|
||||
go.opentelemetry.io/otel v1.38.0/go.mod h1:zcmtmQ1+YmQM9wrNsTGV/q/uyusom3P8RxwExxkZhjM=
|
||||
go.opentelemetry.io/otel/metric v1.38.0 h1:Kl6lzIYGAh5M159u9NgiRkmoMKjvbsKtYRwgfrA6WpA=
|
||||
go.opentelemetry.io/otel/metric v1.38.0/go.mod h1:kB5n/QoRM8YwmUahxvI3bO34eVtQf2i4utNVLr9gEmI=
|
||||
go.opentelemetry.io/otel/trace v1.38.0 h1:Fxk5bKrDZJUH+AMyyIXGcFAPah0oRcT+LuNtJrmcNLE=
|
||||
go.opentelemetry.io/otel/trace v1.38.0/go.mod h1:j1P9ivuFsTceSWe1oY+EeW3sc+Pp42sO++GHkg4wwhs=
|
||||
go.uber.org/atomic v1.11.0 h1:ZvwS0R+56ePWxUNi+Atn9dWONBPp/AUETXlHW0DxSjE=
|
||||
go.uber.org/atomic v1.11.0/go.mod h1:LUxbIzbOniOlMKjJjyPfpl4v+PKK2cNJn91OQbhoJI0=
|
||||
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
|
||||
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
|
||||
golang.org/x/crypto v0.41.0 h1:WKYxWedPGCTVVl5+WHSSrOBT0O8lx32+zxmHxijgXp4=
|
||||
golang.org/x/crypto v0.41.0/go.mod h1:pO5AFd7FA68rFak7rOAGVuygIISepHftHnr8dr6+sUc=
|
||||
golang.org/x/exp v0.0.0-20250819193227-8b4c13bb791b h1:DXr+pvt3nC887026GRP39Ej11UATqWDmWuS99x26cD0=
|
||||
golang.org/x/exp v0.0.0-20250819193227-8b4c13bb791b/go.mod h1:4QTo5u+SEIbbKW1RacMZq1YEfOBqeXa19JeshGi+zc4=
|
||||
golang.org/x/exp/typeparams v0.0.0-20231108232855-2478ac86f678 h1:1P7xPZEwZMoBoz0Yze5Nx2/4pxj6nw9ZqHWXqP0iRgQ=
|
||||
golang.org/x/exp/typeparams v0.0.0-20231108232855-2478ac86f678/go.mod h1:AbB0pIl9nAr9wVwH+Z2ZpaocVmF5I4GyWCDIsVjR0bk=
|
||||
golang.org/x/crypto v0.42.0 h1:chiH31gIWm57EkTXpwnqf8qeuMUi0yekh6mT2AvFlqI=
|
||||
golang.org/x/crypto v0.42.0/go.mod h1:4+rDnOTJhQCx2q7/j6rAN5XDw8kPjeaXEUR2eL94ix8=
|
||||
golang.org/x/exp v0.0.0-20251002181428-27f1f14c8bb9 h1:TQwNpfvNkxAVlItJf6Cr5JTsVZoC/Sj7K3OZv2Pc14A=
|
||||
golang.org/x/exp v0.0.0-20251002181428-27f1f14c8bb9/go.mod h1:TwQYMMnGpvZyc+JpB/UAuTNIsVJifOlSkrZkhcvpVUk=
|
||||
golang.org/x/exp/typeparams v0.0.0-20251002181428-27f1f14c8bb9 h1:EvjuVHWMoRaAxH402KMgrQpGUjoBy/OWvZjLOqQnwNk=
|
||||
golang.org/x/exp/typeparams v0.0.0-20251002181428-27f1f14c8bb9/go.mod h1:4Mzdyp/6jzw9auFDJ3OMF5qksa7UvPnzKqTVGcb04ms=
|
||||
golang.org/x/lint v0.0.0-20241112194109-818c5a804067 h1:adDmSQyFTCiv19j015EGKJBoaa7ElV0Q1Wovb/4G7NA=
|
||||
golang.org/x/lint v0.0.0-20241112194109-818c5a804067/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY=
|
||||
golang.org/x/mod v0.1.1-0.20191105210325-c90efee705ee/go.mod h1:QqPTAvyqsEbceGzBzNggFXnrqF1CaUcvgkdR5Ot7KZg=
|
||||
golang.org/x/mod v0.27.0 h1:kb+q2PyFnEADO2IEF935ehFUXlWiNjJWtRNgBLSfbxQ=
|
||||
golang.org/x/mod v0.27.0/go.mod h1:rWI627Fq0DEoudcK+MBkNkCe0EetEaDSwJJkCcjpazc=
|
||||
golang.org/x/mod v0.28.0 h1:gQBtGhjxykdjY9YhZpSlZIsbnaE2+PgjfLWUQTnoZ1U=
|
||||
golang.org/x/mod v0.28.0/go.mod h1:yfB/L0NOf/kmEbXjzCPOx1iK1fRutOydrCMsqRhEBxI=
|
||||
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
|
||||
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
|
||||
golang.org/x/net v0.43.0 h1:lat02VYK2j4aLzMzecihNvTlJNQUq316m2Mr9rnM6YE=
|
||||
golang.org/x/net v0.43.0/go.mod h1:vhO1fvI4dGsIjh73sWfUVjj3N7CA9WkKJNQm2svM6Jg=
|
||||
golang.org/x/net v0.44.0 h1:evd8IRDyfNBMBTTY5XRF1vaZlD+EmWx6x8PkhR04H/I=
|
||||
golang.org/x/net v0.44.0/go.mod h1:ECOoLqd5U3Lhyeyo/QDCEVQ4sNgYsqvCZ722XogGieY=
|
||||
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
golang.org/x/sync v0.16.0 h1:ycBJEhp9p4vXvUZNszeOq0kGTPghopOL8q0fq3vstxw=
|
||||
golang.org/x/sync v0.16.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA=
|
||||
golang.org/x/sync v0.17.0 h1:l60nONMj9l5drqw6jlhIELNv9I0A4OFgRsG9k2oT9Ug=
|
||||
golang.org/x/sync v0.17.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=
|
||||
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
||||
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20211007075335-d3039528d8ac/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/sys v0.35.0 h1:vz1N37gP5bs89s7He8XuIYXpyY0+QlsKmzipCbUtyxI=
|
||||
golang.org/x/sys v0.35.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k=
|
||||
golang.org/x/sys v0.0.0-20220310020820-b874c991c1a5/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/sys v0.36.0 h1:KVRy2GtZBrk1cBYA7MKu5bEZFxQk4NIDV6RLVcC8o0k=
|
||||
golang.org/x/sys v0.36.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
|
||||
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
|
||||
golang.org/x/tools v0.0.0-20200130002326-2f3ba24bd6e7/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
|
||||
golang.org/x/tools v0.36.0 h1:kWS0uv/zsvHEle1LbV5LE8QujrxB3wfQyxHfhOk0Qkg=
|
||||
golang.org/x/tools v0.36.0/go.mod h1:WBDiHKJK8YgLHlcQPYQzNCkUxUypCaa5ZegCVutKm+s=
|
||||
golang.org/x/tools v0.37.0 h1:DVSRzp7FwePZW356yEAChSdNcQo6Nsp+fex1SUW09lE=
|
||||
golang.org/x/tools v0.37.0/go.mod h1:MBN5QPQtLMHVdvsbtarmTNukZDdgwdwlO5qGacAzF0w=
|
||||
golang.org/x/tools/go/expect v0.1.1-deprecated h1:jpBZDwmgPhXsKZC6WhL20P4b/wmnpsEAGHaNy0n/rJM=
|
||||
golang.org/x/tools/go/expect v0.1.1-deprecated/go.mod h1:eihoPOH+FgIqa3FpoTwguz/bVUSGBlGQU67vpBeOrBY=
|
||||
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
|
||||
google.golang.org/protobuf v1.36.6 h1:z1NpPI8ku2WgiWnf+t9wTPsn6eP1L7ksHUlkfLvd9xY=
|
||||
google.golang.org/protobuf v1.36.6/go.mod h1:jduwjTPXsFjZGTmRluh+L6NjiWu7pchiJ2/5YcXBHnY=
|
||||
google.golang.org/protobuf v1.36.10 h1:AYd7cD/uASjIL6Q9LiTjz8JLcrh/88q5UObnmY3aOOE=
|
||||
google.golang.org/protobuf v1.36.10/go.mod h1:HTf+CrKn2C3g5S8VImy6tdcUvCska2kB7j23XfzDpco=
|
||||
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
|
||||
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
|
||||
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=
|
||||
|
||||
117 manage-relay.sh
@@ -1,42 +1,57 @@
#!/bin/bash
# Stella's Orly Relay Management Script
# Uses docker-compose.yml directly for configuration

set -e

RELAY_SERVICE="stella-relay"
# Get script directory and project root
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_DIR="$SCRIPT_DIR"

# Configuration from docker-compose.yml
RELAY_SERVICE="orly-relay"
CONTAINER_NAME="orly-nostr-relay"
RELAY_URL="ws://127.0.0.1:7777"
HTTP_URL="http://127.0.0.1:7777"
RELAY_DATA_DIR="/home/madmin/.local/share/orly-relay"

# Change to project directory for docker-compose commands
cd "$PROJECT_DIR"

case "${1:-}" in
    "start")
        echo "🚀 Starting Stella's Orly Relay..."
        sudo systemctl start $RELAY_SERVICE
        docker compose up -d orly-relay
        echo "✅ Relay started!"
        ;;
    "stop")
        echo "⏹️ Stopping Stella's Orly Relay..."
        sudo systemctl stop $RELAY_SERVICE
        docker compose down
        echo "✅ Relay stopped!"
        ;;
    "restart")
        echo "🔄 Restarting Stella's Orly Relay..."
        sudo systemctl restart $RELAY_SERVICE
        docker compose restart orly-relay
        echo "✅ Relay restarted!"
        ;;
    "status")
        echo "📊 Stella's Orly Relay Status:"
        sudo systemctl status $RELAY_SERVICE --no-pager
        docker compose ps orly-relay
        ;;
    "logs")
        echo "📜 Stella's Orly Relay Logs:"
        sudo journalctl -u $RELAY_SERVICE -f --no-pager
        docker compose logs -f orly-relay
        ;;
    "test")
        echo "🧪 Testing relay connection..."
        if curl -s -I http://127.0.0.1:7777 | grep -q "426 Upgrade Required"; then
        if curl -s -I "$HTTP_URL" | grep -q "426 Upgrade Required"; then
            echo "✅ Relay is responding correctly!"
            echo "📡 WebSocket URL: $RELAY_URL"
            echo "🌐 HTTP URL: $HTTP_URL"
        else
            echo "❌ Relay is not responding correctly"
            echo "   Expected: 426 Upgrade Required"
            echo "   URL: $HTTP_URL"
            exit 1
        fi
        ;;
@@ -53,14 +68,53 @@ case "${1:-}" in
    "info")
        echo "📋 Stella's Orly Relay Information:"
        echo "   Service: $RELAY_SERVICE"
        echo "   Container: $CONTAINER_NAME"
        echo "   WebSocket URL: $RELAY_URL"
        echo "   HTTP URL: http://127.0.0.1:7777"
        echo "   Data Directory: /home/madmin/.local/share/orly-relay"
        echo "   Config Directory: $(pwd)"
        echo "   HTTP URL: $HTTP_URL"
        echo "   Data Directory: $RELAY_DATA_DIR"
        echo "   Config Directory: $PROJECT_DIR"
        echo ""
        echo "🔑 Admin NPubs:"
        echo "   Stella: npub1v30tsz9vw6ylpz63g0a702nj3xa26t3m7p5us8f2y2sd8v6cnsvq465zjx"
        echo "   Admin2: npub1l5sga6xg72phsz5422ykujprejwud075ggrr3z2hwyrfgr7eylqstegx9z"
        echo "🐳 Docker Information:"
        echo "   Compose File: $PROJECT_DIR/docker-compose.yml"
        echo "   Container Status:"
        docker compose ps orly-relay 2>/dev/null || echo "   Not running"
        echo ""
        echo "💡 Configuration:"
        echo "   All settings are defined in docker-compose.yml"
        echo "   Use 'docker compose config' to see parsed configuration"
        ;;
    "docker-logs")
        echo "🐳 Docker Container Logs:"
        docker compose logs -f orly-relay 2>/dev/null || echo "❌ Container not found or not running"
        ;;
    "docker-status")
        echo "🐳 Docker Container Status:"
        docker compose ps orly-relay
        ;;
    "docker-restart")
        echo "🔄 Restarting Docker Container..."
        docker compose restart orly-relay
        echo "✅ Container restarted!"
        ;;
    "docker-update")
        echo "🔄 Updating and restarting Docker Container..."
        docker compose pull orly-relay
        docker compose up -d orly-relay
        echo "✅ Container updated and restarted!"
        ;;
    "docker-build")
        echo "🔨 Building Docker Container..."
        docker compose build orly-relay
        echo "✅ Container built!"
        ;;
    "docker-down")
        echo "⏹️ Stopping Docker Container..."
        docker compose down
        echo "✅ Container stopped!"
        ;;
    "docker-config")
        echo "📋 Docker Compose Configuration:"
        docker compose config
        ;;
    *)
        echo "🌲 Stella's Orly Relay Management Script"
@@ -68,21 +122,32 @@ case "${1:-}" in
        echo "Usage: $0 [COMMAND]"
        echo ""
        echo "Commands:"
        echo "  start          Start the relay"
        echo "  stop           Stop the relay"
        echo "  restart        Restart the relay"
        echo "  status         Show relay status"
        echo "  logs           Show relay logs (follow mode)"
        echo "  test           Test relay connection"
        echo "  enable         Enable auto-start at boot"
        echo "  disable        Disable auto-start at boot"
        echo "  info           Show relay information"
        echo ""
        echo "Docker Commands:"
        echo "  docker-logs    Show Docker container logs"
        echo "  docker-status  Show Docker container status"
        echo "  docker-restart Restart Docker container only"
        echo "  docker-update  Update and restart container"
        echo "  docker-build   Build Docker container"
        echo "  docker-down    Stop Docker container"
        echo "  docker-config  Show Docker Compose configuration"
        echo ""
        echo "Examples:"
        echo "  $0 start          # Start the relay"
        echo "  $0 status         # Check if it's running"
        echo "  $0 test           # Test WebSocket connection"
        echo "  $0 logs           # Watch real-time logs"
        echo "  $0 docker-logs    # Watch Docker container logs"
        echo "  $0 docker-update  # Update and restart container"
        echo ""
        echo "🌲 Crafted in the digital forest by Stella ✨"
        ;;
@@ -3,6 +3,8 @@ package acl
import (
    "bytes"
    "context"
    "encoding/hex"
    "net/http"
    "reflect"
    "strings"
    "sync"
@@ -22,9 +24,9 @@ import (
    "next.orly.dev/pkg/encoders/envelopes/reqenvelope"
    "next.orly.dev/pkg/encoders/event"
    "next.orly.dev/pkg/encoders/filter"
    "next.orly.dev/pkg/encoders/hex"
    "next.orly.dev/pkg/encoders/kind"
    "next.orly.dev/pkg/encoders/tag"
    "next.orly.dev/pkg/encoders/timestamp"
    "next.orly.dev/pkg/protocol/publish"
    "next.orly.dev/pkg/utils"
    "next.orly.dev/pkg/utils/normalize"
@@ -108,7 +110,7 @@ func (f *Follows) Configure(cfg ...any) (err error) {
    for _, v := range ev.Tags.GetAll([]byte("p")) {
        // log.I.F("adding follow: %s", v.Value())
        var a []byte
        if b, e := hex.Dec(string(v.Value())); chk.E(e) {
        if b, e := hex.DecodeString(string(v.Value())); chk.E(e) {
            continue
        } else {
            a = b
@@ -158,6 +160,8 @@ func (f *Follows) adminRelays() (urls []string) {
    copy(admins, f.admins)
    f.followsMx.RUnlock()
    seen := make(map[string]struct{})

    // First, try to get relay URLs from admin kind 10002 events
    for _, adm := range admins {
        fl := &filter.F{
            Authors: tag.NewFromAny(adm),
@@ -194,6 +198,29 @@ func (f *Follows) adminRelays() (urls []string) {
            }
        }
    }

    // If no admin relays found, use bootstrap relays as fallback
    if len(urls) == 0 {
        log.I.F("no admin relays found in DB, checking bootstrap relays")
        if len(f.cfg.BootstrapRelays) > 0 {
            log.I.F("using bootstrap relays: %v", f.cfg.BootstrapRelays)
            for _, relay := range f.cfg.BootstrapRelays {
                n := string(normalize.URL(relay))
                if n == "" {
                    log.W.F("invalid bootstrap relay URL: %s", relay)
                    continue
                }
                if _, ok := seen[n]; ok {
                    continue
                }
                seen[n] = struct{}{}
                urls = append(urls, n)
            }
        } else {
            log.W.F("no bootstrap relays configured")
        }
    }

    return
}

@@ -211,7 +238,7 @@ func (f *Follows) startSubscriptions(ctx context.Context) {
    urls := f.adminRelays()
    log.I.S(urls)
    if len(urls) == 0 {
        log.W.F("follows syncer: no admin relays found in DB (kind 10002)")
        log.W.F("follows syncer: no admin relays found in DB (kind 10002) and no bootstrap relays configured")
        return
    }
    log.T.F(
@@ -228,18 +255,45 @@ func (f *Follows) startSubscriptions(ctx context.Context) {
        return
    default:
    }
    c, _, err := websocket.Dial(ctx, u, nil)
    // Create a timeout context for the connection
    connCtx, cancel := context.WithTimeout(ctx, 10*time.Second)

    // Create proper headers for the WebSocket connection
    headers := http.Header{}
    headers.Set("User-Agent", "ORLY-Relay/0.9.2")
    headers.Set("Origin", "https://orly.dev")

    // Use proper WebSocket dial options
    dialOptions := &websocket.DialOptions{
        HTTPHeader: headers,
    }

    c, _, err := websocket.Dial(connCtx, u, dialOptions)
    cancel()
    if err != nil {
        log.W.F("follows syncer: dial %s failed: %v", u, err)
        if strings.Contains(
            err.Error(), "response status code 101 but got 403",
        ) {
            // 403 means the relay is not accepting connections from
            // us. Forbidden is the meaning, usually used to
            // indicate either the IP or user is blocked. so stop
            // trying this one.
            return

        // Handle different types of errors
        if strings.Contains(err.Error(), "response status code 101 but got 403") {
            // 403 means the relay is not accepting connections from us
            // Forbidden is the meaning, usually used to indicate either the IP or user is blocked
            // But we should still retry after a longer delay
            log.W.F("follows syncer: relay %s returned 403, will retry after longer delay", u)
            timer := time.NewTimer(5 * time.Minute) // Wait 5 minutes before retrying 403 errors
            select {
            case <-ctx.Done():
                return
            case <-timer.C:
            }
            continue
        } else if strings.Contains(err.Error(), "timeout") || strings.Contains(err.Error(), "connection refused") {
            // Network issues, retry with normal backoff
            log.W.F("follows syncer: network issue with %s, retrying in %v", u, backoff)
        } else {
            // Other errors, retry with normal backoff
            log.W.F("follows syncer: connection error with %s, retrying in %v", u, backoff)
        }

        timer := time.NewTimer(backoff)
        select {
        case <-ctx.Done():
@@ -252,21 +306,37 @@ func (f *Follows) startSubscriptions(ctx context.Context) {
        continue
    }
    backoff = time.Second
    // send REQ
    log.I.F("follows syncer: successfully connected to %s", u)

    // send REQ for kind 3 (follow lists), kind 10002 (relay lists), and all events from follows
    ff := &filter.S{}
    f1 := &filter.F{
        Authors: tag.NewFromBytesSlice(authors...),
        Limit:   values.ToUintPointer(0),
        Kinds:   kind.NewS(kind.New(kind.FollowList.K)),
        Limit:   values.ToUintPointer(100),
    }
    *ff = append(*ff, f1)
    f2 := &filter.F{
        Authors: tag.NewFromBytesSlice(authors...),
        Kinds:   kind.NewS(kind.New(kind.RelayListMetadata.K)),
        Limit:   values.ToUintPointer(100),
    }
    // Add filter for all events from follows (last 30 days)
    oneMonthAgo := timestamp.FromUnix(time.Now().Add(-30 * 24 * time.Hour).Unix())
    f3 := &filter.F{
        Authors: tag.NewFromBytesSlice(authors...),
        Since:   oneMonthAgo,
        Limit:   values.ToUintPointer(1000),
    }
    *ff = append(*ff, f1, f2, f3)
    req := reqenvelope.NewFrom([]byte("follows-sync"), ff)
    if err = c.Write(
        ctx, websocket.MessageText, req.Marshal(nil),
    ); chk.E(err) {
        log.W.F("follows syncer: failed to send REQ to %s: %v", u, err)
        _ = c.Close(websocket.StatusInternalError, "write failed")
        continue
    }
    log.T.F("sent REQ to %s for follows subscription", u)
    log.I.F("follows syncer: sent REQ to %s for kind 3, 10002, and all events (last 30 days) from followed users", u)
    // read loop
    for {
        select {
@@ -294,6 +364,23 @@ func (f *Follows) startSubscriptions(ctx context.Context) {
    if ok, err := res.Event.Verify(); chk.T(err) || !ok {
        continue
    }

    // Process events based on kind
    switch res.Event.Kind {
    case kind.FollowList.K:
        log.I.F("follows syncer: received kind 3 (follow list) event from %s on relay %s",
            hex.EncodeToString(res.Event.Pubkey), u)
        // Extract followed pubkeys from 'p' tags in kind 3 events
        f.extractFollowedPubkeys(res.Event)
    case kind.RelayListMetadata.K:
        log.I.F("follows syncer: received kind 10002 (relay list) event from %s on relay %s",
            hex.EncodeToString(res.Event.Pubkey), u)
    default:
        // Log all other events from followed users
        log.I.F("follows syncer: received kind %d event from %s on relay %s",
            res.Event.Kind, hex.EncodeToString(res.Event.Pubkey), u)
    }

    if _, _, err = f.D.SaveEvent(
        ctx, res.Event,
    ); err != nil {
@@ -365,12 +452,26 @@ func (f *Follows) Syncer() {
func (f *Follows) GetFollowedPubkeys() [][]byte {
    f.followsMx.RLock()
    defer f.followsMx.RUnlock()

    followedPubkeys := make([][]byte, len(f.follows))
    copy(followedPubkeys, f.follows)
    return followedPubkeys
}

// extractFollowedPubkeys extracts followed pubkeys from 'p' tags in kind 3 events
func (f *Follows) extractFollowedPubkeys(event *event.E) {
    if event.Kind != kind.FollowList.K {
        return
    }

    // Extract all 'p' tags (followed pubkeys) from the kind 3 event
    for _, tag := range event.Tags.GetAll([]byte("p")) {
        if len(tag.Value()) == 32 { // Valid pubkey length
            f.AddFollow(tag.Value())
        }
    }
}

// AddFollow appends a pubkey to the in-memory follows list if not already present
// and signals the syncer to refresh subscriptions.
func (f *Follows) AddFollow(pub []byte) {
@@ -387,6 +488,7 @@ func (f *Follows) AddFollow(pub []byte) {
    b := make([]byte, len(pub))
    copy(b, pub)
    f.follows = append(f.follows, b)
    log.I.F("follows syncer: added new followed pubkey: %s", hex.EncodeToString(pub))
    // notify syncer if initialized
    if f.updated != nil {
        select {
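
The dial-and-retry logic above bundles three changes: a bounded handshake, explicit headers, and a longer backoff for 403 responses than for transient network failures. A minimal sketch of that pattern, assuming github.com/coder/websocket v1.8.x; the User-Agent string here is a hypothetical placeholder, not the relay's:

```go
package example

import (
	"context"
	"net/http"
	"strings"
	"time"

	"github.com/coder/websocket"
)

// dialRelay bounds the handshake with a 10s timeout and sends custom
// headers, mirroring the dial pattern in the hunk above.
func dialRelay(ctx context.Context, u string) (*websocket.Conn, error) {
	connCtx, cancel := context.WithTimeout(ctx, 10*time.Second)
	defer cancel()
	h := http.Header{}
	h.Set("User-Agent", "example-syncer/0.1") // hypothetical UA string
	c, _, err := websocket.Dial(connCtx, u, &websocket.DialOptions{HTTPHeader: h})
	return c, err
}

// retryDelay classifies a dial error the way the syncer does: a 403
// handshake response backs off for five minutes instead of treating the
// relay as permanently unreachable; everything else uses the caller's
// normal backoff.
func retryDelay(err error, backoff time.Duration) time.Duration {
	if strings.Contains(err.Error(), "403") {
		return 5 * time.Minute
	}
	return backoff
}
```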
44 pkg/database/count.go Normal file
@@ -0,0 +1,44 @@
package database

import (
    "context"

    "next.orly.dev/pkg/encoders/filter"
)

// CountEvents mirrors the initial selection logic of QueryEvents but stops
// once we have identified candidate event serials (id/pk/ts). It returns the
// count of those serials. The `approx` flag is always false as requested.
func (d *D) CountEvents(c context.Context, f *filter.F) (
    count int, approx bool, err error,
) {
    approx = false
    if f == nil {
        return 0, false, nil
    }

    // If explicit Ids are provided, count how many of them resolve to serials.
    if f.Ids != nil && f.Ids.Len() > 0 {
        var serials map[string]interface{}
        // Use type inference without importing extra packages by discarding the
        // concrete value type via a two-step assignment.
        if tmp, idErr := d.GetSerialsByIds(f.Ids); idErr != nil {
            return 0, false, idErr
        } else {
            // Reassign to a map with empty interface values to avoid referencing
            // the concrete Uint40 type here.
            serials = make(map[string]interface{}, len(tmp))
            for k := range tmp {
                serials[k] = struct{}{}
            }
        }
        return len(serials), false, nil
    }

    // Otherwise, query for candidate Id/Pubkey/Timestamp triplets and count them.
    if idPkTs, qErr := d.QueryForIds(c, f); qErr != nil {
        return 0, false, qErr
    } else {
        return len(idPkTs), false, nil
    }
}
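
A hedged usage sketch for the new CountEvents: counting matches without decoding any event bodies. The Counter interface stands in for *database.D so the sketch is self-contained; the method signature is taken from the file above:

```go
package example

import (
	"context"
	"fmt"

	"next.orly.dev/pkg/encoders/filter"
)

// Counter abstracts the one method this sketch needs from the database.
type Counter interface {
	CountEvents(c context.Context, f *filter.F) (count int, approx bool, err error)
}

// printCount reports how many stored events a filter would select,
// without fetching or decoding them.
func printCount(ctx context.Context, db Counter, f *filter.F) {
	count, approx, err := db.CountEvents(ctx, f)
	if err != nil {
		fmt.Println("count failed:", err)
		return
	}
	fmt.Printf("matched %d events (approx=%v)\n", count, approx)
}
```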
@@ -52,8 +52,18 @@ func New(
    }

    opts := badger.DefaultOptions(d.dataDir)
    opts.BlockCacheSize = int64(units.Gb)
    opts.BlockSize = units.Gb
    // Use sane defaults to avoid excessive memory usage during startup.
    // Badger's default BlockSize is small (e.g., 4KB). Overriding it to very large values
    // can cause massive allocations and OOM panics during deployments.
    // Set BlockCacheSize to a moderate value and keep BlockSize small.
    opts.BlockCacheSize = int64(256 * units.Mb) // 256 MB cache
    opts.BlockSize = 4 * units.Kb               // 4 KB block size
    // Prevent huge allocations during table building and memtable flush.
    // Badger's TableBuilder buffer is sized by BaseTableSize; ensure it's small.
    opts.BaseTableSize = 64 * units.Mb // 64 MB per table (default ~2MB, increased for fewer files but safe)
    opts.MemTableSize = 64 * units.Mb  // 64 MB memtable to match table size
    // Keep value log files to a moderate size as well
    opts.ValueLogFileSize = 256 * units.Mb // 256 MB value log files
    opts.CompactL0OnClose = true
    opts.LmaxCompaction = true
    opts.Compression = options.None
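
The hunk above replaces Badger's oversized gigabyte block settings with a bounded memory profile. A condensed sketch of the same profile, assuming github.com/dgraph-io/badger/v4 and written with plain bit-shift byte constants instead of the repo's units package; the directory is a placeholder:

```go
package example

import (
	"github.com/dgraph-io/badger/v4"
	"github.com/dgraph-io/badger/v4/options"
)

// openStore opens a Badger store with the memory-conscious settings
// chosen in the diff: small blocks, moderate caches, 64 MB tables and
// memtables, and 256 MB value log segments.
func openStore(dir string) (*badger.DB, error) {
	opts := badger.DefaultOptions(dir)
	opts.BlockCacheSize = 256 << 20   // 256 MB block cache
	opts.BlockSize = 4 << 10          // 4 KB blocks, Badger's default scale
	opts.BaseTableSize = 64 << 20     // 64 MB SSTables
	opts.MemTableSize = 64 << 20      // 64 MB memtables, matching table size
	opts.ValueLogFileSize = 256 << 20 // 256 MB value log segments
	opts.CompactL0OnClose = true
	opts.Compression = options.None
	return badger.Open(opts)
}
```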
@@ -153,5 +153,35 @@ func GetIndexesForEvent(ev *event.E, serial uint64) (
    if err = appendIndexBytes(&idxs, kindPubkeyIndex); chk.E(err) {
        return
    }

    // Word token indexes (from content)
    if len(ev.Content) > 0 {
        for _, h := range TokenHashes(ev.Content) {
            w := new(Word)
            w.FromWord(h) // 8-byte truncated hash
            wIdx := indexes.WordEnc(w, ser)
            if err = appendIndexBytes(&idxs, wIdx); chk.E(err) {
                return
            }
        }
    }
    // Extend full-text search to include all fields of all tags
    if ev.Tags != nil && ev.Tags.Len() > 0 {
        for _, t := range *ev.Tags {
            for _, field := range t.T { // include key and all values
                if len(field) == 0 {
                    continue
                }
                for _, h := range TokenHashes(field) {
                    w := new(Word)
                    w.FromWord(h)
                    wIdx := indexes.WordEnc(w, ser)
                    if err = appendIndexBytes(&idxs, wIdx); chk.E(err) {
                        return
                    }
                }
            }
        }
    }
    return
}
@@ -113,6 +113,27 @@ func GetIndexesFromFilter(f *filter.F) (idxs []Range, err error) {
        return
    }

    // Word search: if Search field is present, generate word index ranges
    if len(f.Search) > 0 {
        for _, h := range TokenHashes(f.Search) {
            w := new(types2.Word)
            w.FromWord(h)
            buf := new(bytes.Buffer)
            idx := indexes.WordEnc(w, nil)
            if err = idx.MarshalWrite(buf); chk.E(err) {
                return
            }
            b := buf.Bytes()
            end := make([]byte, len(b))
            copy(end, b)
            for i := 0; i < 5; i++ { // match any serial
                end = append(end, 0xff)
            }
            idxs = append(idxs, Range{b, end})
        }
        return
    }

    caStart := new(types2.Uint64)
    caEnd := new(types2.Uint64)
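
The five 0xff bytes appended above widen each word-index prefix into a range that matches every possible 5-byte serial suffix. The same trick in isolation, with illustrative byte slices:

```go
package example

// anySerialRange widens a word-index prefix into a [start, end] range
// whose end key sorts after every key formed by the prefix plus any
// 5-byte serial, so a range scan over it visits all events containing
// that word.
func anySerialRange(prefix []byte) (start, end []byte) {
	start = prefix
	end = make([]byte, len(prefix), len(prefix)+5)
	copy(end, prefix)
	for i := 0; i < 5; i++ {
		end = append(end, 0xff)
	}
	return start, end
}
```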
@@ -69,6 +69,7 @@ const (
    TagPubkeyPrefix     = I("tpc") // tag, pubkey, created at
    TagKindPubkeyPrefix = I("tkp") // tag, kind, pubkey, created at

    WordPrefix       = I("wrd") // word hash, serial
    ExpirationPrefix = I("exp") // timestamp of expiration
    VersionPrefix    = I("ver") // database version number, for triggering reindexes when new keys are added (policy is add-only).
)
@@ -106,6 +107,8 @@ func Prefix(prf int) (i I) {
        return ExpirationPrefix
    case Version:
        return VersionPrefix
    case Word:
        return WordPrefix
    }
    return
}
@@ -147,6 +150,8 @@ func Identify(r io.Reader) (i int, err error) {

    case ExpirationPrefix:
        i = Expiration
    case WordPrefix:
        i = Word
    }
    return
}
@@ -233,6 +238,21 @@ func FullIdPubkeyDec(
    return New(NewPrefix(), ser, fid, p, ca)
}

// Word index for tokenized search terms
//
// 3 prefix|8 word-hash|5 serial
var Word = next()

func WordVars() (w *types.Word, ser *types.Uint40) {
    return new(types.Word), new(types.Uint40)
}
func WordEnc(w *types.Word, ser *types.Uint40) (enc *T) {
    return New(NewPrefix(Word), w, ser)
}
func WordDec(w *types.Word, ser *types.Uint40) (enc *T) {
    return New(NewPrefix(), w, ser)
}

// CreatedAt is an index that allows search for the timestamp on the event.
//
// 3 prefix|8 timestamp|5 serial
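
A back-of-envelope sketch of the documented 16-byte word-key layout, composed by hand rather than through the repo's indexes package. SHA-256 truncated to 8 bytes is an assumption made for illustration; the repo's TokenHashes may hash differently:

```go
package example

import "crypto/sha256"

// wordKey assembles the layout documented above: 3-byte "wrd" prefix,
// 8-byte truncated word hash, 5-byte serial, 16 bytes total.
func wordKey(word []byte, serial [5]byte) []byte {
	sum := sha256.Sum256(word) // assumed hash; TokenHashes may differ
	key := make([]byte, 0, 16)
	key = append(key, 'w', 'r', 'd') // 3-byte prefix
	key = append(key, sum[:8]...)    // 8-byte truncated word hash
	key = append(key, serial[:]...)  // 5-byte serial
	return key
}
```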
@@ -14,7 +14,7 @@ import (
)

const (
    currentVersion uint32 = 1
    currentVersion uint32 = 2
)

func (d *D) RunMigrations() {
@@ -56,22 +56,8 @@ func (d *D) RunMigrations() {
    }
    if dbVersion == 0 {
        log.D.F("no version tag found, creating...")
        // write the version tag now
        if err = d.Update(
            func(txn *badger.Txn) (err error) {
                buf := new(bytes.Buffer)
                vv := new(types.Uint32)
                vv.Set(currentVersion)
                log.I.S(vv)
                if err = indexes.VersionEnc(vv).MarshalWrite(buf); chk.E(err) {
                    return
                }
                if err = txn.Set(buf.Bytes(), nil); chk.E(err) {
                    return
                }
                return
            },
        ); chk.E(err) {
        // write the version tag now (ensure any old tags are removed first)
        if err = d.writeVersionTag(currentVersion); chk.E(err) {
            return
        }
    }
@@ -79,7 +65,136 @@ func (d *D) RunMigrations() {
        log.I.F("migrating to version 1...")
        // the first migration is expiration tags
        d.UpdateExpirationTags()
        // bump to version 1
        _ = d.writeVersionTag(1)
    }
    if dbVersion < 2 {
        log.I.F("migrating to version 2...")
        // backfill word indexes
        d.UpdateWordIndexes()
        // bump to version 2
        _ = d.writeVersionTag(2)
    }
}

// writeVersionTag writes a new version tag key to the database (no value)
func (d *D) writeVersionTag(ver uint32) (err error) {
    return d.Update(
        func(txn *badger.Txn) (err error) {
            // delete any existing version keys first (there should only be one, but be safe)
            verPrf := new(bytes.Buffer)
            if _, err = indexes.VersionPrefix.Write(verPrf); chk.E(err) {
                return
            }
            it := txn.NewIterator(badger.IteratorOptions{Prefix: verPrf.Bytes()})
            defer it.Close()
            for it.Rewind(); it.Valid(); it.Next() {
                item := it.Item()
                key := item.KeyCopy(nil)
                if err = txn.Delete(key); chk.E(err) {
                    return
                }
            }

            // now write the new version key
            buf := new(bytes.Buffer)
            vv := new(types.Uint32)
            vv.Set(ver)
            if err = indexes.VersionEnc(vv).MarshalWrite(buf); chk.E(err) {
                return
            }
            return txn.Set(buf.Bytes(), nil)
        },
    )
}

func (d *D) UpdateWordIndexes() {
    log.T.F("updating word indexes...")
    var err error
    var wordIndexes [][]byte
    // iterate all events and generate word index keys from content and tags
    if err = d.View(
        func(txn *badger.Txn) (err error) {
            prf := new(bytes.Buffer)
            if err = indexes.EventEnc(nil).MarshalWrite(prf); chk.E(err) {
                return
            }
            it := txn.NewIterator(badger.IteratorOptions{Prefix: prf.Bytes()})
            defer it.Close()
            for it.Rewind(); it.Valid(); it.Next() {
                item := it.Item()
                var val []byte
                if val, err = item.ValueCopy(nil); chk.E(err) {
                    continue
                }
                // decode the event
                ev := new(event.E)
                if err = ev.UnmarshalBinary(bytes.NewBuffer(val)); chk.E(err) {
                    continue
                }
                // log.I.F("updating word indexes for event: %s", ev.Serialize())
                // read serial from key
                key := item.Key()
                ser := indexes.EventVars()
                if err = indexes.EventDec(ser).UnmarshalRead(bytes.NewBuffer(key)); chk.E(err) {
                    continue
                }
                // collect unique word hashes for this event
                seen := make(map[string]struct{})
                // from content
                if len(ev.Content) > 0 {
                    for _, h := range TokenHashes(ev.Content) {
                        seen[string(h)] = struct{}{}
                    }
                }
                // from all tag fields (key and values)
                if ev.Tags != nil && ev.Tags.Len() > 0 {
                    for _, t := range *ev.Tags {
                        for _, field := range t.T {
                            if len(field) == 0 {
                                continue
                            }
                            for _, h := range TokenHashes(field) {
                                seen[string(h)] = struct{}{}
                            }
                        }
                    }
                }
                // build keys
                for k := range seen {
                    w := new(types.Word)
                    w.FromWord([]byte(k))
                    buf := new(bytes.Buffer)
                    if err = indexes.WordEnc(
                        w, ser,
                    ).MarshalWrite(buf); chk.E(err) {
                        continue
                    }
                    wordIndexes = append(wordIndexes, buf.Bytes())
                }
            }
            return
        },
    ); chk.E(err) {
        return
    }
    // sort the indexes for ordered writes
    sort.Slice(
        wordIndexes, func(i, j int) bool {
            return bytes.Compare(
                wordIndexes[i], wordIndexes[j],
            ) < 0
        },
    )
    // write in a batch
    batch := d.NewWriteBatch()
    for _, v := range wordIndexes {
        if err = batch.Set(v, nil); chk.E(err) {
            continue
        }
    }
    _ = batch.Flush()
    log.T.F("finished updating word indexes...")
}

func (d *D) UpdateExpirationTags() {
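
RunMigrations above hand-rolls a version-gated pattern. A generic sketch of the same idea, with hypothetical types, shows why the stated policy is add-only: each backfill runs at most once before the stored version is bumped, so new index types can be appended later without re-running earlier steps:

```go
package example

// migration pairs a target version with an idempotent backfill step.
type migration struct {
	version uint32
	run     func()
}

// runMigrations executes each step whose version is ahead of the stored
// one, then persists the new version via bump, mirroring the structure
// of RunMigrations in the diff.
func runMigrations(stored uint32, steps []migration, bump func(uint32) error) error {
	for _, m := range steps {
		if stored >= m.version {
			continue // already applied on a previous run
		}
		m.run()
		if err := bump(m.version); err != nil {
			return err
		}
		stored = m.version
	}
	return nil
}
```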
194 pkg/database/query-events-search_test.go Normal file
@@ -0,0 +1,194 @@
package database

import (
    "context"
    "os"
    "testing"
    "time"

    "lol.mleku.dev/chk"
    "next.orly.dev/pkg/crypto/p256k"
    "next.orly.dev/pkg/encoders/event"
    "next.orly.dev/pkg/encoders/filter"
    "next.orly.dev/pkg/encoders/kind"
    "next.orly.dev/pkg/encoders/tag"
    "next.orly.dev/pkg/encoders/timestamp"
)

// helper to create a fresh DB
func newTestDB(t *testing.T) (*D, context.Context, context.CancelFunc, string) {
    t.Helper()
    tempDir, err := os.MkdirTemp("", "search-db-*")
    if err != nil {
        t.Fatalf("Failed to create temp dir: %v", err)
    }
    ctx, cancel := context.WithCancel(context.Background())
    db, err := New(ctx, cancel, tempDir, "error")
    if err != nil {
        cancel()
        os.RemoveAll(tempDir)
        t.Fatalf("Failed to init DB: %v", err)
    }
    return db, ctx, cancel, tempDir
}

// TestQueryEventsBySearchTerms creates a small set of events with content and tags,
// saves them, then queries using filter.Search to ensure the word index works.
func TestQueryEventsBySearchTerms(t *testing.T) {
    db, ctx, cancel, tempDir := newTestDB(t)
    defer func() {
        // cancel context first to stop background routines cleanly
        cancel()
        db.Close()
        os.RemoveAll(tempDir)
    }()

    // signer for all events
    sign := new(p256k.Signer)
    if err := sign.Generate(); chk.E(err) {
        t.Fatalf("signer generate: %v", err)
    }

    now := timestamp.Now().V

    // Events to cover tokenizer rules:
    // - regular words
    // - URLs ignored
    // - 64-char hex ignored
    // - nostr: URIs ignored
    // - #[n] mentions ignored
    // - tag fields included in search

    // 1. Contains words: "alpha beta", plus URL and hex (ignored)
    ev1 := event.New()
    ev1.Kind = kind.TextNote.K
    ev1.Pubkey = sign.Pub()
    ev1.CreatedAt = now - 5
    ev1.Content = []byte("Alpha beta visit https://example.com deadbeefdeadbeefdeadbeefdeadbeefdeadbeefdeadbeefdeadbeefdeadbeef")
    ev1.Tags = tag.NewS()
    ev1.Sign(sign)
    if _, _, err := db.SaveEvent(ctx, ev1); err != nil {
        t.Fatalf("save ev1: %v", err)
    }

    // 2. Contains overlap word "beta" and unique "gamma" and nostr: URI ignored
    ev2 := event.New()
    ev2.Kind = kind.TextNote.K
    ev2.Pubkey = sign.Pub()
    ev2.CreatedAt = now - 4
    ev2.Content = []byte("beta and GAMMA with nostr:nevent1qqqqq")
    ev2.Tags = tag.NewS()
    ev2.Sign(sign)
    if _, _, err := db.SaveEvent(ctx, ev2); err != nil {
        t.Fatalf("save ev2: %v", err)
    }

    // 3. Contains only a URL (should not create word tokens) and mention #[1] (ignored)
    ev3 := event.New()
    ev3.Kind = kind.TextNote.K
    ev3.Pubkey = sign.Pub()
    ev3.CreatedAt = now - 3
    ev3.Content = []byte("see www.example.org #[1]")
    ev3.Tags = tag.NewS()
    ev3.Sign(sign)
    if _, _, err := db.SaveEvent(ctx, ev3); err != nil {
        t.Fatalf("save ev3: %v", err)
    }

    // 4. No content words, but tag value has searchable words: "delta epsilon"
    ev4 := event.New()
    ev4.Kind = kind.TextNote.K
    ev4.Pubkey = sign.Pub()
    ev4.CreatedAt = now - 2
    ev4.Content = []byte("")
    ev4.Tags = tag.NewS()
    *ev4.Tags = append(*ev4.Tags, tag.NewFromAny("t", "delta epsilon"))
    ev4.Sign(sign)
    if _, _, err := db.SaveEvent(ctx, ev4); err != nil {
        t.Fatalf("save ev4: %v", err)
    }

    // 5. Another event with both content and tag tokens for ordering checks
    ev5 := event.New()
    ev5.Kind = kind.TextNote.K
    ev5.Pubkey = sign.Pub()
    ev5.CreatedAt = now - 1
    ev5.Content = []byte("alpha DELTA mixed-case and link http://foo.bar")
    ev5.Tags = tag.NewS()
    *ev5.Tags = append(*ev5.Tags, tag.NewFromAny("t", "zeta"))
    ev5.Sign(sign)
    if _, _, err := db.SaveEvent(ctx, ev5); err != nil {
        t.Fatalf("save ev5: %v", err)
    }

    // Small sleep to ensure created_at ordering is the only factor
    time.Sleep(5 * time.Millisecond)

    // Helper to run a search and return IDs
    run := func(q string) ([]*event.E, error) {
        f := &filter.F{Search: []byte(q)}
        return db.QueryEvents(ctx, f)
    }

    // Single-term search: alpha -> should match ev1 and ev5 ordered by created_at desc (ev5 newer)
    if evs, err := run("alpha"); err != nil {
        t.Fatalf("search alpha: %v", err)
    } else {
        if len(evs) != 2 {
            t.Fatalf("alpha expected 2 results, got %d", len(evs))
        }
        if !(evs[0].CreatedAt >= evs[1].CreatedAt) {
            t.Fatalf("results not ordered by created_at desc")
        }
    }

    // Overlap term beta -> ev1 and ev2
    if evs, err := run("beta"); err != nil {
        t.Fatalf("search beta: %v", err)
    } else if len(evs) != 2 {
        t.Fatalf("beta expected 2 results, got %d", len(evs))
    }

    // Unique term gamma -> only ev2
    if evs, err := run("gamma"); err != nil {
        t.Fatalf("search gamma: %v", err)
    } else if len(evs) != 1 {
        t.Fatalf("gamma expected 1 result, got %d", len(evs))
    }

    // URL terms should be ignored: example -> appears only as URL in ev1/ev3/ev5; tokenizer ignores URLs so expect 0
    if evs, err := run("example"); err != nil {
        t.Fatalf("search example: %v", err)
    } else if len(evs) != 0 {
        t.Fatalf("example expected 0 results (URL tokens ignored), got %d", len(evs))
    }

    // Tag words searchable: delta should match ev4 and ev5 (delta in tag for ev4, in content for ev5)
    if evs, err := run("delta"); err != nil {
        t.Fatalf("search delta: %v", err)
    } else if len(evs) != 2 {
        t.Fatalf("delta expected 2 results, got %d", len(evs))
    }

    // Very short token ignored: single-letter should yield 0
    if evs, err := run("a"); err != nil {
        t.Fatalf("search short token: %v", err)
    } else if len(evs) != 0 {
        t.Fatalf("single-letter expected 0 results, got %d", len(evs))
    }

    // 64-char hex should be ignored
    hex64 := "0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef"
    if evs, err := run(hex64); err != nil {
        t.Fatalf("search hex64: %v", err)
    } else if len(evs) != 0 {
        t.Fatalf("hex64 expected 0 results, got %d", len(evs))
    }

    // nostr: scheme ignored
    if evs, err := run("nostr:nevent1qqqqq"); err != nil {
        t.Fatalf("search nostr: %v", err)
    } else if len(evs) != 0 {
        t.Fatalf("nostr: expected 0 results, got %d", len(evs))
    }
}
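
The test above documents the tokenizer's filtering rules without showing the tokenizer itself (tokenize.go appears below, truncated). A hedged approximation of just those filtering rules; the real TokenHashes also hashes each surviving token to 8 bytes, which this sketch omits:

```go
package example

import "strings"

// tokens extracts lowercase word tokens, skipping URLs, nostr: URIs,
// #[n] mentions, bare 64-char hex strings, and one-character tokens,
// matching the behaviors the test asserts.
func tokens(s string) (out []string) {
	for _, w := range strings.Fields(strings.ToLower(s)) {
		switch {
		case strings.HasPrefix(w, "http://"),
			strings.HasPrefix(w, "https://"),
			strings.HasPrefix(w, "www."),
			strings.HasPrefix(w, "nostr:"),
			strings.HasPrefix(w, "#["):
			continue // link-like tokens carry no search value
		case len(w) == 64 && isHex(w),
			len(w) < 2:
			continue // raw ids and single letters are noise
		}
		out = append(out, w)
	}
	return
}

// isHex reports whether s consists only of lowercase hex digits.
func isHex(s string) bool {
	for i := 0; i < len(s); i++ {
		c := s[i]
		if (c < '0' || c > '9') && (c < 'a' || c > 'f') {
			return false
		}
	}
	return true
}
```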
@@ -7,13 +7,16 @@ import (

    "lol.mleku.dev/chk"
    "next.orly.dev/pkg/database/indexes/types"
    "next.orly.dev/pkg/encoders/event"
    "next.orly.dev/pkg/encoders/filter"
    "next.orly.dev/pkg/interfaces/store"
)

// QueryForIds retrieves a list of IdPkTs based on the provided filter.
// It supports filtering by ranges and tags but disallows filtering by Ids.
// Results are sorted by timestamp in reverse chronological order.
// Results are sorted by timestamp in reverse chronological order by default.
// When a search query is present, results are ranked by a 50/50 blend of
// match count (how many distinct search terms matched) and recency.
// Returns an error if the filter contains Ids or if any operation fails.
func (d *D) QueryForIds(c context.Context, f *filter.F) (
    idPkTs []*store.IdPkTs, err error,
@@ -29,6 +32,9 @@ func (d *D) QueryForIds(c context.Context, f *filter.F) (
    }
    var results []*store.IdPkTs
    var founds []*types.Uint40
    // When searching, we want to count how many index ranges (search terms)
    // matched each note. We'll track counts by serial.
    counts := make(map[uint64]int)
    for _, idx := range idxs {
        if founds, err = d.GetSerialsByRange(idx); chk.E(err) {
            return
@@ -37,6 +43,12 @@ func (d *D) QueryForIds(c context.Context, f *filter.F) (
        if tmp, err = d.GetFullIdPubkeyBySerials(founds); chk.E(err) {
            return
        }
        // If this query is driven by Search terms, increment count per serial
        if len(f.Search) > 0 {
            for _, v := range tmp {
                counts[v.Ser]++
            }
        }
        results = append(results, tmp...)
    }
    // deduplicate in case this somehow happened (such as two or more
@@ -48,12 +60,109 @@ func (d *D) QueryForIds(c context.Context, f *filter.F) (
            idPkTs = append(idPkTs, idpk)
        }
    }
    // sort results by timestamp in reverse chronological order
    sort.Slice(
        idPkTs, func(i, j int) bool {
            return idPkTs[i].Ts > idPkTs[j].Ts
        },
    )

    // If search is combined with Authors/Kinds/Tags, require events to match ALL of those present fields in addition to the word match.
    if len(f.Search) > 0 && ((f.Authors != nil && f.Authors.Len() > 0) || (f.Kinds != nil && f.Kinds.Len() > 0) || (f.Tags != nil && f.Tags.Len() > 0)) {
        // Build serial list for fetching full events
        serials := make([]*types.Uint40, 0, len(idPkTs))
        for _, v := range idPkTs {
            s := new(types.Uint40)
            s.Set(v.Ser)
            serials = append(serials, s)
        }
        var evs map[uint64]*event.E
        if evs, err = d.FetchEventsBySerials(serials); chk.E(err) {
            return
        }
        filtered := make([]*store.IdPkTs, 0, len(idPkTs))
        for _, v := range idPkTs {
            ev, ok := evs[v.Ser]
            if !ok || ev == nil {
                continue
            }
            matchesAll := true
            if f.Authors != nil && f.Authors.Len() > 0 && !f.Authors.Contains(ev.Pubkey) {
                matchesAll = false
            }
            if matchesAll && f.Kinds != nil && f.Kinds.Len() > 0 && !f.Kinds.Contains(ev.Kind) {
                matchesAll = false
            }
            if matchesAll && f.Tags != nil && f.Tags.Len() > 0 {
                // Require the event to satisfy all tag filters as in MatchesIgnoringTimestampConstraints
                tagOK := true
                for _, t := range *f.Tags {
                    if t.Len() < 2 {
                        continue
                    }
                    key := t.Key()
                    values := t.T[1:]
                    if !ev.Tags.ContainsAny(key, values) {
                        tagOK = false
                        break
                    }
                }
                if !tagOK {
                    matchesAll = false
                }
            }
            if matchesAll {
                filtered = append(filtered, v)
            }
        }
        idPkTs = filtered
    }

    if len(f.Search) == 0 {
        // No search query: sort by timestamp in reverse chronological order
        sort.Slice(
            idPkTs, func(i, j int) bool {
                return idPkTs[i].Ts > idPkTs[j].Ts
            },
        )
    } else {
        // Search query present: blend match count relevance with recency (50/50)
        // Normalize both match count and timestamp to [0,1] and compute score.
        var maxCount int
        var minTs, maxTs int64
        if len(idPkTs) > 0 {
            minTs, maxTs = idPkTs[0].Ts, idPkTs[0].Ts
        }
        for _, v := range idPkTs {
            if c := counts[v.Ser]; c > maxCount {
                maxCount = c
            }
            if v.Ts < minTs {
                minTs = v.Ts
            }
            if v.Ts > maxTs {
                maxTs = v.Ts
            }
        }
        // Precompute denominator to avoid div-by-zero
        tsSpan := maxTs - minTs
        if tsSpan <= 0 {
            tsSpan = 1
        }
        if maxCount <= 0 {
            maxCount = 1
        }
        sort.Slice(
            idPkTs, func(i, j int) bool {
                ci := float64(counts[idPkTs[i].Ser]) / float64(maxCount)
                cj := float64(counts[idPkTs[j].Ser]) / float64(maxCount)
                ai := float64(idPkTs[i].Ts-minTs) / float64(tsSpan)
                aj := float64(idPkTs[j].Ts-minTs) / float64(tsSpan)
                si := 0.5*ci + 0.5*ai
                sj := 0.5*cj + 0.5*aj
                if si == sj {
                    // tie-break by recency
                    return idPkTs[i].Ts > idPkTs[j].Ts
                }
                return si > sj
            },
        )
    }

    if f.Limit != nil && len(idPkTs) > int(*f.Limit) {
        idPkTs = idPkTs[:*f.Limit]
    }
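
The search branch above ranks by a 50/50 blend of normalized match count and normalized recency. The scoring rule in isolation, directly mirroring the arithmetic in the sort comparator:

```go
package example

// score computes the blended relevance used by the search comparator:
// matches/maxMatches and (ts-minTs)/(maxTs-minTs) are each normalized
// to [0,1] and weighted equally.
func score(matches, maxMatches int, ts, minTs, maxTs int64) float64 {
	span := maxTs - minTs
	if span <= 0 {
		span = 1 // avoid division by zero when all timestamps are equal
	}
	if maxMatches <= 0 {
		maxMatches = 1
	}
	relevance := float64(matches) / float64(maxMatches)
	recency := float64(ts-minTs) / float64(span)
	return 0.5*relevance + 0.5*recency
}
```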
@@ -9,14 +9,23 @@ import (
|
||||
|
||||
"github.com/dgraph-io/badger/v4"
|
||||
"lol.mleku.dev/chk"
|
||||
"lol.mleku.dev/log"
|
||||
"next.orly.dev/pkg/database/indexes"
|
||||
"next.orly.dev/pkg/database/indexes/types"
|
||||
"next.orly.dev/pkg/encoders/event"
|
||||
"next.orly.dev/pkg/encoders/filter"
|
||||
"next.orly.dev/pkg/encoders/hex"
|
||||
"next.orly.dev/pkg/encoders/kind"
|
||||
"next.orly.dev/pkg/encoders/tag"
|
||||
)
|
||||
|
||||
var (
|
||||
// ErrOlderThanExisting is returned when a candidate event is older than an existing replaceable/addressable event.
|
||||
ErrOlderThanExisting = errors.New("older than existing event")
|
||||
// ErrMissingDTag is returned when a parameterized replaceable event lacks the required 'd' tag.
|
||||
ErrMissingDTag = errors.New("event is missing a d tag identifier")
|
||||
)
|
||||
|
||||
func (d *D) GetSerialsFromFilter(f *filter.F) (
|
||||
sers types.Uint40s, err error,
|
||||
) {
|
||||
@@ -34,6 +43,65 @@ func (d *D) GetSerialsFromFilter(f *filter.F) (
|
||||
return
|
||||
}
|
||||
|
||||
// WouldReplaceEvent checks if the provided event would replace existing events
|
||||
// based on Nostr's replaceable or parameterized replaceable semantics. It
|
||||
// returns true along with the serials of events that should be replaced if the
|
||||
// candidate is newer-or-equal. If an existing event is newer, it returns
|
||||
// (false, serials, ErrOlderThanExisting). If no conflicts exist, it returns
|
||||
// (false, nil, nil).
|
||||
func (d *D) WouldReplaceEvent(ev *event.E) (bool, types.Uint40s, error) {
|
||||
// Only relevant for replaceable or parameterized replaceable kinds
|
||||
if !(kind.IsReplaceable(ev.Kind) || kind.IsParameterizedReplaceable(ev.Kind)) {
|
||||
return false, nil, nil
|
||||
}
|
||||
|
||||
var f *filter.F
|
||||
if kind.IsReplaceable(ev.Kind) {
|
||||
f = &filter.F{
|
||||
Authors: tag.NewFromBytesSlice(ev.Pubkey),
|
||||
Kinds: kind.NewS(kind.New(ev.Kind)),
|
||||
}
|
||||
} else {
|
||||
// parameterized replaceable requires 'd' tag
|
||||
dTag := ev.Tags.GetFirst([]byte("d"))
|
||||
if dTag == nil {
|
||||
return false, nil, ErrMissingDTag
|
||||
}
|
||||
f = &filter.F{
|
||||
Authors: tag.NewFromBytesSlice(ev.Pubkey),
|
||||
Kinds: kind.NewS(kind.New(ev.Kind)),
|
||||
Tags: tag.NewS(
|
||||
tag.NewFromAny("d", dTag.Value()),
|
||||
),
|
||||
}
|
||||
}
|
||||
|
||||
sers, err := d.GetSerialsFromFilter(f)
|
||||
if chk.E(err) {
|
||||
return false, nil, err
|
||||
}
|
||||
if len(sers) == 0 {
|
||||
return false, nil, nil
|
||||
}
|
||||
|
||||
// Determine if any existing event is newer than the candidate
|
||||
shouldReplace := true
|
||||
for _, s := range sers {
|
||||
oldEv, ferr := d.FetchEventBySerial(s)
|
||||
if chk.E(ferr) {
|
||||
continue
|
||||
}
|
||||
if ev.CreatedAt < oldEv.CreatedAt {
|
||||
shouldReplace = false
|
||||
break
|
||||
}
|
||||
}
|
||||
if shouldReplace {
|
||||
return true, sers, nil
|
||||
}
|
||||
return false, sers, ErrOlderThanExisting
|
||||
}
|
||||
|
||||
// SaveEvent saves an event to the database, generating all the necessary indexes.
|
||||
func (d *D) SaveEvent(c context.Context, ev *event.E) (kc, vc int, err error) {
|
||||
if ev == nil {
|
||||
@@ -66,117 +134,37 @@ func (d *D) SaveEvent(c context.Context, ev *event.E) (kc, vc int, err error) {
 		err = fmt.Errorf("blocked: %s", err.Error())
 		return
 	}
-	// check for replacement
-	if kind.IsReplaceable(ev.Kind) {
-		// find the events and check timestamps before deleting
-		f := &filter.F{
-			Authors: tag.NewFromBytesSlice(ev.Pubkey),
-			Kinds:   kind.NewS(kind.New(ev.Kind)),
-		}
-		var sers types.Uint40s
-		if sers, err = d.GetSerialsFromFilter(f); chk.E(err) {
-			return
-		}
-		// if found, check timestamps before deleting
-		if len(sers) > 0 {
-			var shouldReplace bool = true
-			for _, s := range sers {
-				var oldEv *event.E
-				if oldEv, err = d.FetchEventBySerial(s); chk.E(err) {
-					continue
-				}
-				// Only replace if the new event is newer or same timestamp
-				if ev.CreatedAt < oldEv.CreatedAt {
-					// log.I.F(
-					// 	"SaveEvent: rejecting older replaceable event ID=%s (created_at=%d) - existing event ID=%s (created_at=%d)",
-					// 	hex.Enc(ev.ID), ev.CreatedAt, hex.Enc(oldEv.ID),
-					// 	oldEv.CreatedAt,
-					// )
-					shouldReplace = false
-					break
-				}
-			}
-			if shouldReplace {
-				for _, s := range sers {
-					var oldEv *event.E
-					if oldEv, err = d.FetchEventBySerial(s); chk.E(err) {
-						continue
-					}
-					// log.I.F(
-					// 	"SaveEvent: replacing older replaceable event ID=%s (created_at=%d) with newer event ID=%s (created_at=%d)",
-					// 	hex.Enc(oldEv.ID), oldEv.CreatedAt, hex.Enc(ev.ID),
-					// 	ev.CreatedAt,
-					// )
-					if err = d.DeleteEventBySerial(
-						c, s, oldEv,
-					); chk.E(err) {
-						continue
-					}
-				}
-			} else {
-				// Don't save the older event - return an error
-				err = errors.New("blocked: event is older than existing replaceable event")
-				return
-			}
-		}
-	} else if kind.IsParameterizedReplaceable(ev.Kind) {
-		// find the events and check timestamps before deleting
-		dTag := ev.Tags.GetFirst([]byte("d"))
-		if dTag == nil {
-			err = errors.New("event is missing a d tag identifier")
-			return
-		}
-		f := &filter.F{
-			Authors: tag.NewFromBytesSlice(ev.Pubkey),
-			Kinds:   kind.NewS(kind.New(ev.Kind)),
-			Tags: tag.NewS(
-				tag.NewFromAny("d", dTag.Value()),
-			),
-		}
-		var sers types.Uint40s
-		if sers, err = d.GetSerialsFromFilter(f); chk.E(err) {
-			return
-		}
-		// if found, check timestamps before deleting
-		if len(sers) > 0 {
-			var shouldReplace bool = true
-			for _, s := range sers {
-				var oldEv *event.E
-				if oldEv, err = d.FetchEventBySerial(s); chk.E(err) {
-					continue
-				}
-				// Only replace if the new event is newer or same timestamp
-				if ev.CreatedAt < oldEv.CreatedAt {
-					// log.I.F(
-					// 	"SaveEvent: rejecting older addressable event ID=%s (created_at=%d) - existing event ID=%s (created_at=%d)",
-					// 	hex.Enc(ev.ID), ev.CreatedAt, hex.Enc(oldEv.ID),
-					// 	oldEv.CreatedAt,
-					// )
-					shouldReplace = false
-					break
-				}
-			}
-			if shouldReplace {
-				for _, s := range sers {
-					var oldEv *event.E
-					if oldEv, err = d.FetchEventBySerial(s); chk.E(err) {
-						continue
-					}
-					// log.I.F(
-					// 	"SaveEvent: replacing older addressable event ID=%s (created_at=%d) with newer event ID=%s (created_at=%d)",
-					// 	hex.Enc(oldEv.ID), oldEv.CreatedAt, hex.Enc(ev.ID),
-					// 	ev.CreatedAt,
-					// )
-					if err = d.DeleteEventBySerial(
-						c, s, oldEv,
-					); chk.E(err) {
-						continue
-					}
-				}
-			} else {
-				// Don't save the older event - return an error
-				err = errors.New("blocked: event is older than existing addressable event")
-				return
-			}
-		}
-	}
+	// check for replacement (separated check vs deletion)
+	if kind.IsReplaceable(ev.Kind) || kind.IsParameterizedReplaceable(ev.Kind) {
+		var wouldReplace bool
+		var sers types.Uint40s
+		var werr error
+		if wouldReplace, sers, werr = d.WouldReplaceEvent(ev); werr != nil {
+			if errors.Is(werr, ErrOlderThanExisting) {
+				if kind.IsReplaceable(ev.Kind) {
+					err = errors.New("blocked: event is older than existing replaceable event")
+				} else {
+					err = errors.New("blocked: event is older than existing addressable event")
+				}
+				return
+			}
+			if errors.Is(werr, ErrMissingDTag) {
+				// keep behavior consistent with previous implementation
+				err = ErrMissingDTag
+				return
+			}
+			// any other error
+			return
+		}
+		if wouldReplace {
+			for _, s := range sers {
+				var oldEv *event.E
+				if oldEv, err = d.FetchEventBySerial(s); chk.E(err) {
+					continue
+				}
+				if err = d.DeleteEventBySerial(c, s, oldEv); chk.E(err) {
+					continue
+				}
+			}
+		}
+	}
@@ -230,10 +218,10 @@ func (d *D) SaveEvent(c context.Context, ev *event.E) (kc, vc int, err error) {
 			return
 		},
 	)
-	// log.T.F(
-	// 	"total data written: %d bytes keys %d bytes values for event ID %s", kc,
-	// 	vc, hex.Enc(ev.ID),
-	// )
+	log.T.F(
+		"total data written: %d bytes keys %d bytes values for event ID %s", kc,
+		vc, hex.Enc(ev.ID),
+	)
 	// log.T.C(
 	// 	func() string {
 	// 		return fmt.Sprintf("event:\n%s\n", ev.Serialize())
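Since rejections now surface as `blocked:`-prefixed errors from SaveEvent, a relay can forward them to clients verbatim as NIP-01 OK messages. A hedged sketch: `jsonWriter` and `handleEvent` are stand-ins for the relay's real websocket plumbing, which this diff does not show.

```go
package example

import (
	"context"

	"next.orly.dev/pkg/database"
	"next.orly.dev/pkg/event" // assumed path; the diff only shows event.E
)

// jsonWriter is a hypothetical stand-in for the relay's websocket connection.
type jsonWriter interface{ WriteJSON(v any) error }

// handleEvent forwards SaveEvent's error text straight into the OK message,
// e.g. ["OK","<id>",false,"blocked: event is older than existing replaceable event"].
func handleEvent(ctx context.Context, db *database.D, conn jsonWriter, ev *event.E, idHex string) {
	if _, _, err := db.SaveEvent(ctx, ev); err != nil {
		_ = conn.WriteJSON([]any{"OK", idHex, false, err.Error()})
		return
	}
	_ = conn.WriteJSON([]any{"OK", idHex, true, ""})
}
```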

pkg/database/tokenize.go (new file, 178 lines)
@@ -0,0 +1,178 @@
package database

import (
	"strings"
	"unicode"
	"unicode/utf8"

	sha "next.orly.dev/pkg/crypto/sha256"
)

// TokenHashes extracts unique word hashes (8-byte truncated sha256) from content.
// Rules:
// - Unicode-aware: words are sequences of letters or numbers.
// - Lowercased using unicode case mapping.
// - Ignore URLs (starting with http://, https://, www., or containing "://").
// - Ignore nostr: URIs and #[n] mentions.
// - Ignore words shorter than 2 runes.
// - Exclude 64-character hexadecimal strings (likely IDs/pubkeys).
func TokenHashes(content []byte) [][]byte {
	s := string(content)
	var out [][]byte
	seen := make(map[string]struct{})

	i := 0
	for i < len(s) {
		r, size := rune(s[i]), 1
		if r >= 0x80 {
			r, size = utf8DecodeRuneInString(s[i:])
		}

		// Skip whitespace
		if unicode.IsSpace(r) {
			i += size
			continue
		}

		// Skip URLs and schemes
		if hasPrefixFold(s[i:], "http://") || hasPrefixFold(s[i:], "https://") ||
			hasPrefixFold(s[i:], "nostr:") || hasPrefixFold(s[i:], "www.") {
			i = skipUntilSpace(s, i)
			continue
		}
		// If token contains "://" ahead, treat as URL and skip to space
		if j := strings.Index(s[i:], "://"); j == 0 || (j > 0 && isWordStart(r)) {
			// Only if it's at start of token
			before := s[i : i+j]
			if len(before) == 0 || allAlphaNum(before) {
				i = skipUntilSpace(s, i)
				continue
			}
		}
		// Skip #[n] mentions
		if r == '#' && i+size < len(s) && s[i+size] == '[' {
			end := strings.IndexByte(s[i:], ']')
			if end >= 0 {
				i += end + 1
				continue
			}
		}

		// Collect a word
		start := i
		var runes []rune
		for i < len(s) {
			r2, size2 := rune(s[i]), 1
			if r2 >= 0x80 {
				r2, size2 = utf8DecodeRuneInString(s[i:])
			}
			if unicode.IsLetter(r2) || unicode.IsNumber(r2) {
				runes = append(runes, unicode.ToLower(r2))
				i += size2
				continue
			}
			break
		}
		// If we didn't consume any rune for a word, advance by one rune to avoid stalling
		if i == start {
			_, size2 := utf8DecodeRuneInString(s[i:])
			i += size2
			continue
		}
		if len(runes) >= 2 {
			w := string(runes)
			// Exclude 64-char hex strings
			if isHex64(w) {
				continue
			}
			if _, ok := seen[w]; !ok {
				seen[w] = struct{}{}
				h := sha.Sum256([]byte(w))
				out = append(out, h[:8])
			}
		}
	}
	return out
}

// hasPrefixFold reports whether s begins with prefix under ASCII
// case-insensitive comparison.
func hasPrefixFold(s, prefix string) bool {
	if len(s) < len(prefix) {
		return false
	}
	for i := 0; i < len(prefix); i++ {
		c := s[i]
		p := prefix[i]
		if c == p {
			continue
		}
		// ASCII case-insensitive
		if 'A' <= c && c <= 'Z' {
			c = c - 'A' + 'a'
		}
		if 'A' <= p && p <= 'Z' {
			p = p - 'A' + 'a'
		}
		if c != p {
			return false
		}
	}
	return true
}

// skipUntilSpace advances i to the next whitespace rune (or end of string).
func skipUntilSpace(s string, i int) int {
	for i < len(s) {
		r, size := rune(s[i]), 1
		if r >= 0x80 {
			r, size = utf8DecodeRuneInString(s[i:])
		}
		if unicode.IsSpace(r) {
			return i
		}
		i += size
	}
	return i
}

func allAlphaNum(s string) bool {
	for _, r := range s {
		if !(unicode.IsLetter(r) || unicode.IsNumber(r)) {
			return false
		}
	}
	return true
}

func isWordStart(r rune) bool { return unicode.IsLetter(r) || unicode.IsNumber(r) }

// utf8DecodeRuneInString decodes the first rune in s. The original hand-rolled
// decoder returned the wrong size for multi-byte runes (it converted a one-byte
// prefix and measured the replacement character), so delegate to the standard
// library's unicode/utf8, which is allocation-free and correct.
func utf8DecodeRuneInString(s string) (r rune, size int) {
	return utf8.DecodeRuneInString(s)
}

// isHex64 returns true if s is exactly 64 hex characters (0-9, a-f, A-F)
func isHex64(s string) bool {
	if len(s) != 64 {
		return false
	}
	for i := 0; i < 64; i++ {
		c := s[i]
		if c >= '0' && c <= '9' {
			continue
		}
		if c >= 'a' && c <= 'f' {
			continue
		}
		if c >= 'A' && c <= 'F' {
			continue
		}
		return false
	}
	return true
}
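To see what actually lands in the index, here is a small usage sketch. It assumes the package is importable as next.orly.dev/pkg/database, consistent with the module path in the file's own imports.

```go
package main

import (
	"fmt"

	"next.orly.dev/pkg/database"
)

func main() {
	content := []byte("GM nostr! Reading https://example.com/post again, GM")
	// The URL is skipped, "GM" is lowercased and deduplicated, and every
	// remaining word of two or more runes is hashed: gm, nostr, reading, again.
	for _, h := range database.TokenHashes(content) {
		fmt.Printf("%x\n", h) // 8-byte truncated sha256 per unique token
	}
}
```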
@@ -156,3 +156,21 @@ func (t *T) Relay() (key []byte) {
 	}
 	return
 }
+
+// ToSliceOfStrings returns the tag's bytes slices as a slice of strings. This
+// method provides a convenient way to access the tag's contents in string format.
+//
+// # Return Values
+//
+// - s ([]string): A slice containing all tag elements converted to strings.
+//
+// # Expected Behaviour
+//
+// Returns an empty slice if the tag is empty, otherwise returns a new slice with
+// each byte slice element converted to a string.
+func (t *T) ToSliceOfStrings() (s []string) {
+	for _, v := range t.T {
+		s = append(s, string(v))
+	}
+	return
+}
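A quick usage sketch for the new accessor; tag.NewFromAny is the constructor used elsewhere in this change set, while the import path and placeholder values are assumptions.

```go
package main

import (
	"fmt"

	"next.orly.dev/pkg/encoders/tag" // assumed import path; the diff shows only the package name
)

func main() {
	// Placeholder values, not a real pubkey.
	t := tag.NewFromAny("p", "<hex-pubkey>", "wss://relay.example.com")
	fmt.Println(t.ToSliceOfStrings())
	// Output: [p <hex-pubkey> wss://relay.example.com]
}
```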
@@ -4,6 +4,7 @@ import (
 	"bytes"
 
 	"lol.mleku.dev/chk"
+	"lol.mleku.dev/log"
 	"next.orly.dev/pkg/utils"
 )

@@ -83,6 +84,10 @@ func (s *S) MarshalJSON() (b []byte, err error) {
 }
 
 func (s *S) Marshal(dst []byte) (b []byte) {
+	if s == nil {
+		log.I.F("tags cannot be used without initialization")
+		return
+	}
 	b = dst
 	b = append(b, '[')
 	for i, ss := range *s {

@@ -183,3 +188,24 @@ func (s *S) GetTagElement(i int) (t *T) {
 	t = (*s)[i]
 	return
 }
+
+// ToSliceOfSliceOfStrings converts the tag collection into a two-dimensional
+// slice of strings, maintaining the structure of tags and their elements.
+//
+// # Return Values
+//
+// - ss ([][]string): A slice of string slices where each inner slice represents
+// a tag's elements converted from bytes to strings.
+//
+// # Expected Behaviour
+//
+// Iterates through each tag in the collection and converts its byte elements
+// to strings, preserving the tag structure in the resulting nested slice.
+func (s *S) ToSliceOfSliceOfStrings() (ss [][]string) {
+	for _, v := range *s {
+		ss = append(ss, v.ToSliceOfStrings())
+	}
+	return
+}
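Nostr serializes tags as arrays of string arrays, so this pairs naturally with encoding/json. Same assumed import path as the sketch above; the tag values are placeholders.

```go
package main

import (
	"encoding/json"
	"fmt"

	"next.orly.dev/pkg/encoders/tag" // assumed import path
)

func main() {
	s := tag.NewS(
		tag.NewFromAny("e", "<event-id>"),
		tag.NewFromAny("p", "<hex-pubkey>", "wss://relay.example.com"),
	)
	b, _ := json.Marshal(s.ToSliceOfSliceOfStrings())
	fmt.Println(string(b))
	// Output: [["e","<event-id>"],["p","<hex-pubkey>","wss://relay.example.com"]]
}
```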
@@ -91,12 +91,22 @@ func Validate(evt *event.E, challenge []byte, relayURL string) (
 		err = errorf.E("error parsing relay url: %s", err)
 		return
 	}
+	// Allow both ws:// and wss:// schemes when behind a reverse proxy
+	// This handles cases where the relay expects ws:// but receives wss:// from clients
+	// connecting through HTTPS proxies
 	if expected.Scheme != found.Scheme {
-		err = errorf.E(
-			"HTTP Scheme incorrect: expected '%s' got '%s",
-			expected.Scheme, found.Scheme,
-		)
-		return
+		// Check if this is a ws/wss scheme mismatch (acceptable behind proxy)
+		if (expected.Scheme == "ws" && found.Scheme == "wss") ||
+			(expected.Scheme == "wss" && found.Scheme == "ws") {
+			// This is acceptable when behind a reverse proxy
+			// The client will always send wss:// when connecting through HTTPS
+		} else {
+			err = errorf.E(
+				"HTTP Scheme incorrect: expected '%s' got '%s'",
+				expected.Scheme, found.Scheme,
+			)
+			return
+		}
 	}
 	if expected.Host != found.Host {
 		err = errorf.E(
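The relaxation amounts to treating ws and wss as interchangeable while still rejecting anything else. Below is a standalone predicate capturing that rule, handy for table-driven tests; schemeOK is a made-up name for illustration, not a function in this codebase.

```go
package main

import "fmt"

// schemeOK mirrors the tolerance added above: exact matches pass, and a
// ws/wss mismatch is accepted because TLS-terminating reverse proxies make
// clients dial wss:// while the relay itself expects ws://.
func schemeOK(expected, found string) bool {
	if expected == found {
		return true
	}
	return (expected == "ws" && found == "wss") ||
		(expected == "wss" && found == "ws")
}

func main() {
	fmt.Println(schemeOK("ws", "wss"))  // true
	fmt.Println(schemeOK("wss", "ws"))  // true
	fmt.Println(schemeOK("ws", "http")) // false
}
```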
@@ -1 +1 @@
-v0.8.6
+v0.10.5
@@ -303,11 +303,11 @@ The spider operates in two phases:
 
 === configuration
 
-Enable the spider by setting the spider mode to "follow":
+Enable the spider by setting the spider mode to "follows":
 
 [source,bash]
 ----
-export ORLY_SPIDER_MODE=follow
+export ORLY_SPIDER_MODE=follows
 export ORLY_SPIDER_FREQUENCY=1h
 ----

@@ -322,7 +322,7 @@ Configuration options:
 ----
 # Enable both follows ACL and spider sync
 export ORLY_ACL_MODE=follows
-export ORLY_SPIDER_MODE=follow
+export ORLY_SPIDER_MODE=follows
 export ORLY_SPIDER_FREQUENCY=30m
 export ORLY_ADMINS=npub1fjqqy4a93z5zsjwsfxqhc2764kvykfdyttvldkkkdera8dr78vhsmmleku
 
@@ -1,28 +1,25 @@
 [Unit]
-Description=Stella's Orly Nostr Relay
+Description=Stella's Orly Nostr Relay (Docker Compose)
 Documentation=https://github.com/Silberengel/next.orly.dev
-After=network-online.target
+After=network-online.target docker.service
 Wants=network-online.target
+Requires=docker.service
 
 [Service]
-Type=simple
+Type=oneshot
+RemainAfterExit=yes
 User=madmin
 Group=madmin
 WorkingDirectory=/home/madmin/Projects/GitCitadel/next.orly.dev
-ExecStart=docker compose up stella-relay
-ExecStop=docker compose down
-Restart=always
-RestartSec=10
-TimeoutStartSec=60
-TimeoutStopSec=30
 
-# Environment variables
-Environment=ORLY_DATA_DIR=/home/madmin/.local/share/orly-relay
-Environment=ORLY_LISTEN=127.0.0.1
-Environment=ORLY_PORT=7777
-Environment=ORLY_LOG_LEVEL=info
-Environment=ORLY_OWNERS=npub1v30tsz9vw6ylpz63g0a702nj3xa26t3m7p5us8f2y2sd8v6cnsvq465zjx
-Environment=ORLY_ADMINS=npub1v30tsz9vw6ylpz63g0a702nj3xa26t3m7p5us8f2y2sd8v6cnsvq465zjx,npub1l5sga6xg72phsz5422ykujprejwud075ggrr3z2hwyrfgr7eylqstegx9z
+# Start the relay using docker compose
+ExecStart=/usr/bin/docker compose up -d orly-relay
+
+# Stop the relay
+ExecStop=/usr/bin/docker compose down
+
+# Reload configuration (restart containers)
+ExecReload=/usr/bin/docker compose restart orly-relay
 
 # Security settings
 NoNewPrivileges=true

@@ -35,5 +32,11 @@ ReadWritePaths=/home/madmin/Projects/GitCitadel/next.orly.dev/data
 LimitNOFILE=65536
 LimitNPROC=4096
 
+# Restart policy
+Restart=on-failure
+RestartSec=10
+TimeoutStartSec=60
+TimeoutStopSec=30
+
 [Install]
 WantedBy=multi-user.target