Compare commits

17 Commits

| Author | SHA1 | Date |
|---|---|---|
| | 57eec55727 | |
| | 7abcbafaf4 | |
| | 37d4be5e93 | |
| | 91e38edd2c | |
| | cb50a9c5c4 | |
| | c5be98bcaa | |
| | 417866ebf4 | |
| | 0e87337723 | |
| | b10851c209 | |
| | e68916ca5d | |
| | 0e30f7a697 | |
| | a0af5bb45e | |
| | 9da1784b1b | |
| | 205f23fc0c | |
| | 489b9f4593 | |
| | 604d759a6a | |
| | be72b694eb | |
@@ -49,10 +49,12 @@ If no argument provided, default to `patch`.

    GIT_SSH_COMMAND="ssh -i ~/.ssh/gitmlekudev" git push ssh://mleku@git.mleku.dev:2222/mleku/next.orly.dev.git main --tags
    ```

11. **Deploy to VPS** by running:

    ```
    ssh relay.orly.dev 'cd ~/src/next.orly.dev && git stash && git pull origin main && export PATH=$PATH:~/go/bin && CGO_ENABLED=0 go build -o ~/.local/bin/next.orly.dev && sudo /usr/sbin/setcap cap_net_bind_service=+ep ~/.local/bin/next.orly.dev && sudo systemctl restart orly && ~/.local/bin/next.orly.dev version'
    ```

11. **Deploy to relay.orly.dev** (ARM64):

    Build on remote (faster than uploading a cross-compiled binary, given slow local bandwidth):

    ```bash
    ssh relay.orly.dev 'cd ~/src/next.orly.dev && git pull origin main && GOPATH=$HOME CGO_ENABLED=0 ~/go/bin/go build -o ~/.local/bin/next.orly.dev && sudo /usr/sbin/setcap cap_net_bind_service=+ep ~/.local/bin/next.orly.dev && sudo systemctl restart orly && ~/.local/bin/next.orly.dev version'
    ```

    Note: setcap must be re-applied after each binary rebuild to allow binding to ports 80/443.

12. **Report completion** with the new version and commit hash
55
CLAUDE.md
@@ -40,7 +40,7 @@ NOSTR_SECRET_KEY=nsec1... ./nurl https://relay.example.com/api/logs/clear

|----------|---------|-------------|
| `ORLY_PORT` | 3334 | Server port |
| `ORLY_LOG_LEVEL` | info | trace/debug/info/warn/error |
| `ORLY_DB_TYPE` | badger | badger/neo4j/wasmdb |
| `ORLY_DB_TYPE` | badger | badger/bbolt/neo4j/wasmdb |
| `ORLY_POLICY_ENABLED` | false | Enable policy system |
| `ORLY_ACL_MODE` | none | none/follows/managed |
| `ORLY_TLS_DOMAINS` | | Let's Encrypt domains |
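As an illustration of the variables above, a minimal environment selecting the new BBolt backend might look like this (values are examples, not recommendations):

```shell
# Example environment; defaults are listed in the table above
export ORLY_PORT=3334
export ORLY_LOG_LEVEL=info
export ORLY_DB_TYPE=bbolt
export ORLY_ACL_MODE=none
```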
@@ -67,6 +67,7 @@ app/

web/          → Svelte frontend (embedded via go:embed)
pkg/
  database/   → Database interface + Badger implementation
  bbolt/      → BBolt backend (HDD-optimized, B+tree)
  neo4j/      → Neo4j backend with WoT extensions
  wasmdb/     → WebAssembly IndexedDB backend
  protocol/   → Nostr protocol (ws/, auth/, publish/)
@@ -149,12 +150,59 @@ Before enabling auth-required on any deployment:

| Backend | Use Case | Build |
|---------|----------|-------|
| **Badger** (default) | Single-instance, embedded | Standard |
| **Badger** (default) | Single-instance, SSD, high performance | Standard |
| **BBolt** | HDD-optimized, large archives, lower memory | `ORLY_DB_TYPE=bbolt` |
| **Neo4j** | Social graph, WoT queries | `ORLY_DB_TYPE=neo4j` |
| **WasmDB** | Browser/WebAssembly | `GOOS=js GOARCH=wasm` |

All implement the `pkg/database.Database` interface.
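The shared interface is what makes the backends swappable. A minimal sketch of the selection pattern follows; the real `pkg/database.Database` interface and constructors are not shown in this diff, so all names below are invented for illustration:

```go
package main

import "fmt"

// Database is a stand-in for pkg/database.Database; the real interface
// carries event storage/query methods, not shown in this diff.
type Database interface {
	Backend() string
}

type badgerDB struct{}

func (badgerDB) Backend() string { return "badger" }

type bboltDB struct{}

func (bboltDB) Backend() string { return "bbolt" }

// openDB picks a backend the way ORLY_DB_TYPE selects one (hypothetical).
func openDB(dbType string) (Database, error) {
	switch dbType {
	case "badger":
		return badgerDB{}, nil
	case "bbolt":
		return bboltDB{}, nil
	default:
		return nil, fmt.Errorf("unsupported ORLY_DB_TYPE: %q", dbType)
	}
}

func main() {
	db, err := openDB("bbolt")
	if err != nil {
		panic(err)
	}
	fmt.Println(db.Backend()) // bbolt
}
```

Callers then depend only on the interface, so switching `ORLY_DB_TYPE` changes storage without touching query code.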
### Scaling for Large Archives

For archives with millions of events, consider:

**Option 1: Tune Badger (SSD recommended)**

```bash
# Increase caches for a larger working set (requires more RAM)
ORLY_DB_BLOCK_CACHE_MB=2048          # 2GB block cache
ORLY_DB_INDEX_CACHE_MB=1024          # 1GB index cache
ORLY_SERIAL_CACHE_PUBKEYS=500000     # 500k pubkeys
ORLY_SERIAL_CACHE_EVENT_IDS=2000000  # 2M event IDs

# Higher compression to reduce disk IO
ORLY_DB_ZSTD_LEVEL=9                 # Best compression ratio

# Enable storage GC with aggressive eviction
ORLY_GC_ENABLED=true
ORLY_GC_BATCH_SIZE=5000
ORLY_MAX_STORAGE_BYTES=107374182400  # 100GB cap
```

**Option 2: Use BBolt for HDD/Low-Memory Deployments**

```bash
ORLY_DB_TYPE=bbolt

# Tune for your HDD
ORLY_BBOLT_BATCH_MAX_EVENTS=10000    # Larger batches for HDD
ORLY_BBOLT_BATCH_MAX_MB=256          # 256MB batch buffer
ORLY_BBOLT_FLUSH_TIMEOUT_SEC=60      # Longer flush interval
ORLY_BBOLT_BLOOM_SIZE_MB=32          # Larger bloom filter
ORLY_BBOLT_MMAP_SIZE_MB=16384        # 16GB mmap (scales with DB size)
```

**Migration Between Backends**

```bash
# Migrate from Badger to BBolt
./orly migrate --from badger --to bbolt

# Migrate with a custom target path
./orly migrate --from badger --to bbolt --target-path /mnt/hdd/orly-archive
```

**BBolt vs Badger Trade-offs:**

- BBolt: lower memory, HDD-friendly, simpler (B+tree), slower random reads
- Badger: higher memory, SSD-optimized (LSM), faster concurrent access
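As a sanity check on the `ORLY_MAX_STORAGE_BYTES` value used above, the 100GB cap in bytes can be derived directly:

```shell
# 100 GiB in bytes; matches ORLY_MAX_STORAGE_BYTES=107374182400 above
echo $((100 * 1024 * 1024 * 1024))
```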
## Logging (lol.mleku.dev)

```go
@@ -221,7 +269,8 @@ if (isValidNsec(nsec)) { ... }

## Dependencies

- `github.com/dgraph-io/badger/v4` - Badger DB
- `github.com/dgraph-io/badger/v4` - Badger DB (LSM, SSD-optimized)
- `go.etcd.io/bbolt` - BBolt DB (B+tree, HDD-optimized)
- `github.com/neo4j/neo4j-go-driver/v5` - Neo4j
- `github.com/gorilla/websocket` - WebSocket
- `github.com/ebitengine/purego` - CGO-free C loading
@@ -20,8 +20,12 @@ func initializeBlossomServer(

	blossomCfg := &blossom.Config{
		BaseURL:          "",                // Will be set dynamically per request
		MaxBlobSize:      100 * 1024 * 1024, // 100MB default
		AllowedMimeTypes: nil,               // Allow all MIME types by default
		AllowedMimeTypes: nil,               // Allow all MIME types by default
		RequireAuth:      cfg.AuthRequired || cfg.AuthToWrite,
		// Rate limiting for non-followed users
		RateLimitEnabled: cfg.BlossomRateLimitEnabled,
		DailyLimitMB:     cfg.BlossomDailyLimitMB,
		BurstLimitMB:     cfg.BlossomBurstLimitMB,
	}

	// Create blossom server with relay's ACL registry

@@ -31,7 +35,12 @@ func initializeBlossomServer(

	// We'll need to modify the handler to inject the baseURL per request
	// For now, we'll use a middleware approach

	log.I.F("blossom server initialized with ACL mode: %s", cfg.ACLMode)
	if cfg.BlossomRateLimitEnabled {
		log.I.F("blossom server initialized with ACL mode: %s, rate limit: %dMB/day (burst: %dMB)",
			cfg.ACLMode, cfg.BlossomDailyLimitMB, cfg.BlossomBurstLimitMB)
	} else {
		log.I.F("blossom server initialized with ACL mode: %s", cfg.ACLMode)
	}
	return bs, nil
}
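The `DailyLimitMB`/`BurstLimitMB` pair above configures a daily quota with temporary burst headroom. A toy sketch of that semantics follows; the relay's actual limiter is not part of this diff, so all names here are invented:

```go
package main

import "fmt"

// quota models a daily upload allowance with burst headroom
// (hypothetical; not the relay's implementation).
type quota struct {
	dailyMB int // steady-state daily allowance
	burstMB int // extra headroom permitted on top of the daily limit
	usedMB  int // consumed so far today
}

// allow reports whether an upload of sizeMB fits within
// dailyMB+burstMB, and records it if so.
func (q *quota) allow(sizeMB int) bool {
	if q.usedMB+sizeMB > q.dailyMB+q.burstMB {
		return false
	}
	q.usedMB += sizeMB
	return true
}

func main() {
	q := &quota{dailyMB: 100, burstMB: 20}
	fmt.Println(q.allow(90)) // within daily limit: true
	fmt.Println(q.allow(25)) // uses burst headroom (115 <= 120): true
	fmt.Println(q.allow(10)) // exceeds daily+burst (would be 125): false
}
```

A real limiter would also reset `usedMB` on a rolling or calendar-day window; that bookkeeping is omitted here.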
341
app/branding/branding.go
Normal file
@@ -0,0 +1,341 @@
package branding

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io/fs"
	"mime"
	"os"
	"path/filepath"
	"regexp"
	"strings"

	"lol.mleku.dev/chk"
	"lol.mleku.dev/log"
)

// Manager handles loading and serving custom branding assets
type Manager struct {
	dir    string
	config Config

	// Cached assets for performance
	cachedAssets map[string][]byte
	cachedCSS    []byte
}

// New creates a new branding Manager by loading configuration from the specified directory
func New(dir string) (*Manager, error) {
	m := &Manager{
		dir:          dir,
		cachedAssets: make(map[string][]byte),
	}

	// Load branding.json
	configPath := filepath.Join(dir, "branding.json")
	data, err := os.ReadFile(configPath)
	if err != nil {
		if os.IsNotExist(err) {
			log.I.F("branding.json not found in %s, using defaults", dir)
			m.config = DefaultConfig()
		} else {
			return nil, fmt.Errorf("failed to read branding.json: %w", err)
		}
	} else {
		if err := json.Unmarshal(data, &m.config); err != nil {
			return nil, fmt.Errorf("failed to parse branding.json: %w", err)
		}
	}

	// Pre-load and cache CSS
	if err := m.loadCSS(); err != nil {
		log.W.F("failed to load custom CSS: %v", err)
	}

	return m, nil
}

// Dir returns the branding directory path
func (m *Manager) Dir() string {
	return m.dir
}

// Config returns the loaded branding configuration
func (m *Manager) Config() Config {
	return m.config
}

// GetAsset returns a custom asset by name with its MIME type.
// Returns the asset data, MIME type, and whether it was found.
func (m *Manager) GetAsset(name string) ([]byte, string, bool) {
	var assetPath string

	switch name {
	case "logo":
		assetPath = m.config.Assets.Logo
	case "favicon":
		assetPath = m.config.Assets.Favicon
	case "icon-192":
		assetPath = m.config.Assets.Icon192
	case "icon-512":
		assetPath = m.config.Assets.Icon512
	default:
		return nil, "", false
	}

	if assetPath == "" {
		return nil, "", false
	}

	// Check cache first
	if data, ok := m.cachedAssets[name]; ok {
		return data, m.getMimeType(assetPath), true
	}

	// Load from disk
	fullPath := filepath.Join(m.dir, assetPath)
	data, err := os.ReadFile(fullPath)
	if chk.D(err) {
		return nil, "", false
	}

	// Cache for next time
	m.cachedAssets[name] = data
	return data, m.getMimeType(assetPath), true
}

// GetAssetPath returns the full filesystem path for a custom asset
func (m *Manager) GetAssetPath(name string) (string, bool) {
	var assetPath string

	switch name {
	case "logo":
		assetPath = m.config.Assets.Logo
	case "favicon":
		assetPath = m.config.Assets.Favicon
	case "icon-192":
		assetPath = m.config.Assets.Icon192
	case "icon-512":
		assetPath = m.config.Assets.Icon512
	default:
		return "", false
	}

	if assetPath == "" {
		return "", false
	}

	fullPath := filepath.Join(m.dir, assetPath)
	if _, err := os.Stat(fullPath); err != nil {
		return "", false
	}

	return fullPath, true
}

// loadCSS loads and caches the custom CSS files
func (m *Manager) loadCSS() error {
	var combined bytes.Buffer

	// Load variables CSS first (if exists)
	if m.config.CSS.VariablesCSS != "" {
		varsPath := filepath.Join(m.dir, m.config.CSS.VariablesCSS)
		if data, err := os.ReadFile(varsPath); err == nil {
			combined.Write(data)
			combined.WriteString("\n")
		}
	}

	// Load custom CSS (if exists)
	if m.config.CSS.CustomCSS != "" {
		customPath := filepath.Join(m.dir, m.config.CSS.CustomCSS)
		if data, err := os.ReadFile(customPath); err == nil {
			combined.Write(data)
		}
	}

	if combined.Len() > 0 {
		m.cachedCSS = combined.Bytes()
	}

	return nil
}

// GetCustomCSS returns the combined custom CSS content
func (m *Manager) GetCustomCSS() ([]byte, error) {
	if m.cachedCSS == nil {
		return nil, fs.ErrNotExist
	}
	return m.cachedCSS, nil
}

// HasCustomCSS returns true if custom CSS is available
func (m *Manager) HasCustomCSS() bool {
	return len(m.cachedCSS) > 0
}

// GetManifest generates a customized manifest.json
func (m *Manager) GetManifest(originalManifest []byte) ([]byte, error) {
	var manifest map[string]any

	if err := json.Unmarshal(originalManifest, &manifest); err != nil {
		return nil, fmt.Errorf("failed to parse original manifest: %w", err)
	}

	// Apply customizations
	if m.config.App.Name != "" {
		manifest["name"] = m.config.App.Name
	}
	if m.config.App.ShortName != "" {
		manifest["short_name"] = m.config.App.ShortName
	}
	if m.config.App.Description != "" {
		manifest["description"] = m.config.App.Description
	}
	if m.config.Manifest.ThemeColor != "" {
		manifest["theme_color"] = m.config.Manifest.ThemeColor
	}
	if m.config.Manifest.BackgroundColor != "" {
		manifest["background_color"] = m.config.Manifest.BackgroundColor
	}

	// Update icon paths to use branding endpoints
	if icons, ok := manifest["icons"].([]any); ok {
		for i, icon := range icons {
			if iconMap, ok := icon.(map[string]any); ok {
				if src, ok := iconMap["src"].(string); ok {
					// Replace icon paths with branding paths
					if strings.Contains(src, "192") {
						iconMap["src"] = "/branding/icon-192.png"
					} else if strings.Contains(src, "512") {
						iconMap["src"] = "/branding/icon-512.png"
					}
					icons[i] = iconMap
				}
			}
		}
		manifest["icons"] = icons
	}

	return json.MarshalIndent(manifest, "", " ")
}

// ModifyIndexHTML modifies the index.html to inject custom branding
func (m *Manager) ModifyIndexHTML(original []byte) ([]byte, error) {
	html := string(original)

	// Inject custom CSS link before </head>
	if m.HasCustomCSS() {
		cssLink := `<link rel="stylesheet" href="/branding/custom.css">`
		html = strings.Replace(html, "</head>", cssLink+"\n</head>", 1)
	}

	// Inject JavaScript to change header text at runtime
	if m.config.App.Name != "" {
		// This script runs after DOM is loaded and updates the header text
		brandingScript := fmt.Sprintf(`<script>
(function() {
	var appName = %q;
	function updateBranding() {
		var titles = document.querySelectorAll('.app-title');
		titles.forEach(function(el) {
			var badge = el.querySelector('.permission-badge');
			el.childNodes.forEach(function(node) {
				if (node.nodeType === 3 && node.textContent.trim()) {
					node.textContent = appName + ' ';
				}
			});
			if (!el.textContent.includes(appName)) {
				if (badge) {
					el.innerHTML = appName + ' ' + badge.outerHTML;
				} else {
					el.textContent = appName;
				}
			}
		});
	}
	if (document.readyState === 'loading') {
		document.addEventListener('DOMContentLoaded', updateBranding);
	} else {
		updateBranding();
	}
	// Also run periodically to catch Svelte updates, stopping after 10s
	var brandingInterval = setInterval(updateBranding, 500);
	setTimeout(function() { clearInterval(brandingInterval); }, 10000);
})();
</script>`, m.config.App.Name+" dashboard")
		html = strings.Replace(html, "</head>", brandingScript+"\n</head>", 1)
	}

	// Replace title if custom title is set
	if m.config.App.Title != "" {
		titleRegex := regexp.MustCompile(`<title>[^<]*</title>`)
		html = titleRegex.ReplaceAllString(html, fmt.Sprintf("<title>%s</title>", m.config.App.Title))
	}

	// Replace logo path to use branding endpoint
	if m.config.Assets.Logo != "" {
		// Replace orly.png references with branding logo endpoint
		html = strings.ReplaceAll(html, `"/orly.png"`, `"/branding/logo.png"`)
		html = strings.ReplaceAll(html, `'/orly.png'`, `'/branding/logo.png'`)
		html = strings.ReplaceAll(html, `src="/orly.png"`, `src="/branding/logo.png"`)
	}

	// Replace favicon path
	if m.config.Assets.Favicon != "" {
		html = strings.ReplaceAll(html, `href="/favicon.png"`, `href="/branding/favicon.png"`)
		html = strings.ReplaceAll(html, `href="favicon.png"`, `href="/branding/favicon.png"`)
	}

	// Replace manifest path to use dynamic endpoint
	html = strings.ReplaceAll(html, `href="/manifest.json"`, `href="/branding/manifest.json"`)
	html = strings.ReplaceAll(html, `href="manifest.json"`, `href="/branding/manifest.json"`)

	return []byte(html), nil
}

// NIP11Config returns the NIP-11 branding configuration
func (m *Manager) NIP11Config() NIP11Config {
	return m.config.NIP11
}

// AppName returns the custom app name, or empty string if not set
func (m *Manager) AppName() string {
	return m.config.App.Name
}

// getMimeType determines the MIME type from a file path
func (m *Manager) getMimeType(path string) string {
	ext := filepath.Ext(path)
	mimeType := mime.TypeByExtension(ext)
	if mimeType == "" {
		// Default fallbacks
		switch strings.ToLower(ext) {
		case ".png":
			return "image/png"
		case ".jpg", ".jpeg":
			return "image/jpeg"
		case ".gif":
			return "image/gif"
		case ".svg":
			return "image/svg+xml"
		case ".ico":
			return "image/x-icon"
		case ".css":
			return "text/css"
		case ".js":
			return "application/javascript"
		default:
			return "application/octet-stream"
		}
	}
	return mimeType
}

// ClearCache clears all cached assets (useful for hot-reload during development)
func (m *Manager) ClearCache() {
	m.cachedAssets = make(map[string][]byte)
	m.cachedCSS = nil
	_ = m.loadCSS()
}
790
app/branding/init.go
Normal file
@@ -0,0 +1,790 @@
|
||||
package branding
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"embed"
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"image"
|
||||
"image/color"
|
||||
"image/png"
|
||||
"io/fs"
|
||||
"math"
|
||||
"os"
|
||||
"path/filepath"
|
||||
)
|
||||
|
||||
// BrandingStyle represents the type of branding kit to generate
|
||||
type BrandingStyle string
|
||||
|
||||
const (
|
||||
StyleORLY BrandingStyle = "orly" // ORLY-branded assets
|
||||
StyleGeneric BrandingStyle = "generic" // Generic/white-label assets
|
||||
)
|
||||
|
||||
// InitBrandingKit creates a branding directory with assets and configuration
|
||||
func InitBrandingKit(dir string, embeddedFS embed.FS, style BrandingStyle) error {
|
||||
// Create directory structure
|
||||
dirs := []string{
|
||||
dir,
|
||||
filepath.Join(dir, "assets"),
|
||||
filepath.Join(dir, "css"),
|
||||
}
|
||||
|
||||
for _, d := range dirs {
|
||||
if err := os.MkdirAll(d, 0755); err != nil {
|
||||
return fmt.Errorf("failed to create directory %s: %w", d, err)
|
||||
}
|
||||
}
|
||||
|
||||
// Write branding.json based on style
|
||||
var config Config
|
||||
var cssTemplate, varsTemplate string
|
||||
|
||||
switch style {
|
||||
case StyleGeneric:
|
||||
config = GenericConfig()
|
||||
cssTemplate = GenericCSSTemplate
|
||||
varsTemplate = GenericCSSVariablesTemplate
|
||||
default:
|
||||
config = DefaultConfig()
|
||||
cssTemplate = CSSTemplate
|
||||
varsTemplate = CSSVariablesTemplate
|
||||
}
|
||||
|
||||
configData, err := json.MarshalIndent(config, "", " ")
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to marshal config: %w", err)
|
||||
}
|
||||
configPath := filepath.Join(dir, "branding.json")
|
||||
if err := os.WriteFile(configPath, configData, 0644); err != nil {
|
||||
return fmt.Errorf("failed to write branding.json: %w", err)
|
||||
}
|
||||
|
||||
// Generate or extract assets based on style
|
||||
if style == StyleGeneric {
|
||||
// Generate generic placeholder images
|
||||
if err := generateGenericAssets(dir); err != nil {
|
||||
return fmt.Errorf("failed to generate generic assets: %w", err)
|
||||
}
|
||||
} else {
|
||||
// Extract ORLY embedded assets
|
||||
assetMappings := map[string]string{
|
||||
"web/dist/orly.png": filepath.Join(dir, "assets", "logo.png"),
|
||||
"web/dist/favicon.png": filepath.Join(dir, "assets", "favicon.png"),
|
||||
"web/dist/icon-192.png": filepath.Join(dir, "assets", "icon-192.png"),
|
||||
"web/dist/icon-512.png": filepath.Join(dir, "assets", "icon-512.png"),
|
||||
}
|
||||
|
||||
for src, dst := range assetMappings {
|
||||
data, err := fs.ReadFile(embeddedFS, src)
|
||||
if err != nil {
|
||||
altSrc := "web/" + filepath.Base(src)
|
||||
data, err = fs.ReadFile(embeddedFS, altSrc)
|
||||
if err != nil {
|
||||
fmt.Printf("Warning: could not extract %s: %v\n", src, err)
|
||||
continue
|
||||
}
|
||||
}
|
||||
if err := os.WriteFile(dst, data, 0644); err != nil {
|
||||
return fmt.Errorf("failed to write %s: %w", dst, err)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Write CSS template
|
||||
cssPath := filepath.Join(dir, "css", "custom.css")
|
||||
if err := os.WriteFile(cssPath, []byte(cssTemplate), 0644); err != nil {
|
||||
return fmt.Errorf("failed to write custom.css: %w", err)
|
||||
}
|
||||
|
||||
// Write variables-only CSS template
|
||||
varsPath := filepath.Join(dir, "css", "variables.css")
|
||||
if err := os.WriteFile(varsPath, []byte(varsTemplate), 0644); err != nil {
|
||||
return fmt.Errorf("failed to write variables.css: %w", err)
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// generateGenericAssets creates simple geometric placeholder images
|
||||
func generateGenericAssets(dir string) error {
|
||||
// Color scheme: neutral blue-gray
|
||||
primaryColor := color.RGBA{R: 64, G: 128, B: 192, A: 255} // #4080C0 - professional blue
|
||||
transparent := color.RGBA{R: 0, G: 0, B: 0, A: 0} // Transparent background
|
||||
|
||||
// Generate each size
|
||||
sizes := map[string]int{
|
||||
"logo.png": 256,
|
||||
"favicon.png": 64,
|
||||
"icon-192.png": 192,
|
||||
"icon-512.png": 512,
|
||||
}
|
||||
|
||||
for filename, size := range sizes {
|
||||
img := generateRoundedSquare(size, primaryColor, transparent)
|
||||
path := filepath.Join(dir, "assets", filename)
|
||||
if err := savePNG(path, img); err != nil {
|
||||
return fmt.Errorf("failed to save %s: %w", filename, err)
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// generateRoundedSquare creates a simple rounded square icon
|
||||
func generateRoundedSquare(size int, primary, bg color.RGBA) image.Image {
|
||||
img := image.NewRGBA(image.Rect(0, 0, size, size))
|
||||
|
||||
// Fill background
|
||||
for y := 0; y < size; y++ {
|
||||
for x := 0; x < size; x++ {
|
||||
img.Set(x, y, bg)
|
||||
}
|
||||
}
|
||||
|
||||
// Draw a rounded square in the center
|
||||
margin := size / 8
|
||||
cornerRadius := size / 6
|
||||
squareSize := size - (margin * 2)
|
||||
|
||||
for y := margin; y < margin+squareSize; y++ {
|
||||
for x := margin; x < margin+squareSize; x++ {
|
||||
// Check if point is inside rounded rectangle
|
||||
if isInsideRoundedRect(x-margin, y-margin, squareSize, squareSize, cornerRadius) {
|
||||
img.Set(x, y, primary)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Draw a simple inner circle (relay symbol)
|
||||
centerX := size / 2
|
||||
centerY := size / 2
|
||||
innerRadius := size / 5
|
||||
ringWidth := size / 20
|
||||
|
||||
for y := 0; y < size; y++ {
|
||||
for x := 0; x < size; x++ {
|
||||
dx := float64(x - centerX)
|
||||
dy := float64(y - centerY)
|
||||
dist := math.Sqrt(dx*dx + dy*dy)
|
||||
|
||||
// Ring (circle outline)
|
||||
if dist >= float64(innerRadius-ringWidth) && dist <= float64(innerRadius) {
|
||||
img.Set(x, y, bg)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return img
|
||||
}
|
||||
|
||||
// isInsideRoundedRect checks if a point is inside a rounded rectangle
|
||||
func isInsideRoundedRect(x, y, w, h, r int) bool {
|
||||
// Check corners
|
||||
if x < r && y < r {
|
||||
// Top-left corner
|
||||
return isInsideCircle(x, y, r, r, r)
|
||||
}
|
||||
if x >= w-r && y < r {
|
||||
// Top-right corner
|
||||
return isInsideCircle(x, y, w-r-1, r, r)
|
||||
}
|
||||
if x < r && y >= h-r {
|
||||
// Bottom-left corner
|
||||
return isInsideCircle(x, y, r, h-r-1, r)
|
||||
}
|
||||
if x >= w-r && y >= h-r {
|
||||
// Bottom-right corner
|
||||
return isInsideCircle(x, y, w-r-1, h-r-1, r)
|
||||
}
|
||||
|
||||
// Inside main rectangle
|
||||
return x >= 0 && x < w && y >= 0 && y < h
|
||||
}
|
||||
|
||||
// isInsideCircle checks if a point is inside a circle
|
||||
func isInsideCircle(x, y, cx, cy, r int) bool {
|
||||
dx := x - cx
|
||||
dy := y - cy
|
||||
return dx*dx+dy*dy <= r*r
|
||||
}
|
||||
|
||||
// savePNG saves an image as a PNG file
|
||||
func savePNG(path string, img image.Image) error {
|
||||
var buf bytes.Buffer
|
||||
if err := png.Encode(&buf, img); err != nil {
|
||||
return err
|
||||
}
|
||||
return os.WriteFile(path, buf.Bytes(), 0644)
|
||||
}
|
||||
|
||||
// GenericConfig returns a generic/white-label configuration
|
||||
func GenericConfig() Config {
|
||||
return Config{
|
||||
Version: 1,
|
||||
App: AppConfig{
|
||||
Name: "Relay",
|
||||
ShortName: "Relay",
|
||||
Title: "Relay Dashboard",
|
||||
Description: "Nostr relay service",
|
||||
},
|
||||
NIP11: NIP11Config{
|
||||
Name: "Relay",
|
||||
Description: "A Nostr relay",
|
||||
Icon: "",
|
||||
},
|
||||
Manifest: ManifestConfig{
|
||||
ThemeColor: "#4080C0",
|
||||
BackgroundColor: "#F0F4F8",
|
||||
},
|
||||
Assets: AssetsConfig{
|
||||
Logo: "assets/logo.png",
|
||||
Favicon: "assets/favicon.png",
|
||||
Icon192: "assets/icon-192.png",
|
||||
Icon512: "assets/icon-512.png",
|
||||
},
|
||||
CSS: CSSConfig{
|
||||
CustomCSS: "css/custom.css",
|
||||
VariablesCSS: "css/variables.css",
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
// CSSTemplate is the full CSS template with all variables and documentation
|
||||
const CSSTemplate = `/*
|
||||
* Custom Branding CSS for ORLY Relay
|
||||
* ==================================
|
||||
*
|
||||
* This file is loaded AFTER the default styles, so any rules here
|
||||
* will override the defaults. You can customize:
|
||||
*
|
||||
* 1. CSS Variables (colors, spacing, etc.)
|
||||
* 2. Component styles (buttons, cards, headers, etc.)
|
||||
* 3. Add completely custom styles
|
||||
*
|
||||
* Restart the relay to apply changes.
|
||||
*
|
||||
* For variable-only overrides, edit variables.css instead.
|
||||
*/
|
||||
|
||||
/* =============================================================================
|
||||
LIGHT THEME VARIABLES
|
||||
============================================================================= */
|
||||
|
||||
:root {
|
||||
/* Background colors */
|
||||
--bg-color: #ddd; /* Main page background */
|
||||
--header-bg: #eee; /* Header background */
|
||||
--sidebar-bg: #eee; /* Sidebar background */
|
||||
--card-bg: #f8f9fa; /* Card/container background */
|
||||
--panel-bg: #f8f9fa; /* Panel background */
|
||||
|
||||
/* Border colors */
|
||||
--border-color: #dee2e6; /* Default border color */
|
||||
|
||||
/* Text colors */
|
||||
--text-color: #444444; /* Primary text color */
|
||||
--text-muted: #6c757d; /* Secondary/muted text */
|
||||
|
||||
/* Input/form colors */
|
||||
--input-border: #ccc; /* Input border color */
|
||||
--input-bg: #ffffff; /* Input background */
|
||||
--input-text-color: #495057; /* Input text color */
|
||||
|
||||
/* Button colors */
|
||||
--button-bg: #ddd; /* Default button background */
|
||||
--button-hover-bg: #eee; /* Button hover background */
|
||||
--button-text: #444444; /* Button text color */
|
||||
--button-hover-border: #adb5bd; /* Button hover border */
|
||||
|
||||
/* Theme/accent colors */
|
||||
--primary: #00bcd4; /* Primary accent (cyan) */
|
||||
--primary-bg: rgba(0, 188, 212, 0.1); /* Primary background tint */
|
||||
--secondary: #6c757d; /* Secondary color */
|
||||
|
||||
/* Status colors */
|
||||
--success: #28a745; /* Success/positive */
|
||||
--success-bg: #d4edda; /* Success background */
|
||||
--success-text: #155724; /* Success text */
|
||||
--info: #17a2b8; /* Info/neutral */
|
||||
--warning: #ff3e00; /* Warning (Svelte orange) */
|
||||
--warning-bg: #fff3cd; /* Warning background */
|
||||
--danger: #dc3545; /* Danger/error */
|
||||
--danger-bg: #f8d7da; /* Danger background */
|
||||
--danger-text: #721c24; /* Danger text */
|
||||
--error-bg: #f8d7da; /* Error background */
|
||||
--error-text: #721c24; /* Error text */
|
||||
|
||||
/* Code block colors */
|
||||
--code-bg: #f8f9fa; /* Code block background */
|
||||
--code-text: #495057; /* Code text color */
|
||||
|
||||
/* Tab colors */
|
||||
--tab-inactive-bg: #bbb; /* Inactive tab background */
|
||||
|
||||
/* Link/accent colors */
|
||||
--accent-color: #007bff; /* Link color */
|
||||
--accent-hover-color: #0056b3; /* Link hover color */
|
||||
}
|
||||
|
||||
/* =============================================================================
|
||||
DARK THEME VARIABLES
|
||||
============================================================================= */
|
||||
|
||||
body.dark-theme {
|
||||
/* Background colors */
|
||||
--bg-color: #263238; /* Main page background */
|
||||
--header-bg: #1e272c; /* Header background */
|
||||
--sidebar-bg: #1e272c; /* Sidebar background */
|
||||
--card-bg: #37474f; /* Card/container background */
|
||||
--panel-bg: #37474f; /* Panel background */
|
||||
|
||||
/* Border colors */
|
||||
--border-color: #404040; /* Default border color */
|
||||
|
||||
/* Text colors */
|
||||
--text-color: #ffffff; /* Primary text color */
|
||||
--text-muted: #adb5bd; /* Secondary/muted text */
|
||||
|
||||
/* Input/form colors */
|
||||
--input-border: #555; /* Input border color */
|
||||
--input-bg: #37474f; /* Input background */
|
||||
--input-text-color: #ffffff; /* Input text color */
|
||||
|
||||
/* Button colors */
|
||||
--button-bg: #263238; /* Default button background */
|
||||
--button-hover-bg: #1e272c; /* Button hover background */
|
||||
--button-text: #ffffff; /* Button text color */
|
||||
  --button-hover-border: #6c757d; /* Button hover border */

  /* Theme/accent colors */
  --primary: #00bcd4; /* Primary accent (cyan) */
  --primary-bg: rgba(0, 188, 212, 0.2); /* Primary background tint */
  --secondary: #6c757d; /* Secondary color */

  /* Status colors */
  --success: #28a745; /* Success/positive */
  --success-bg: #1e4620; /* Success background (dark) */
  --success-text: #d4edda; /* Success text (light) */
  --info: #17a2b8; /* Info/neutral */
  --warning: #ff3e00; /* Warning (Svelte orange) */
  --warning-bg: #4d1f00; /* Warning background (dark) */
  --danger: #dc3545; /* Danger/error */
  --danger-bg: #4d1319; /* Danger background (dark) */
  --danger-text: #f8d7da; /* Danger text (light) */
  --error-bg: #4d1319; /* Error background */
  --error-text: #f8d7da; /* Error text */

  /* Code block colors */
  --code-bg: #1e272c; /* Code block background */
  --code-text: #ffffff; /* Code text color */

  /* Tab colors */
  --tab-inactive-bg: #1a1a1a; /* Inactive tab background */

  /* Link/accent colors */
  --accent-color: #007bff; /* Link color */
  --accent-hover-color: #0056b3; /* Link hover color */
}

/* =============================================================================
   CUSTOM STYLES
   Add your custom CSS rules below. These will override any default styles.
   ============================================================================= */

/* Example: Custom header styling
.header {
  box-shadow: 0 2px 10px rgba(0, 0, 0, 0.1);
}
*/

/* Example: Custom button styling
.btn {
  border-radius: 8px;
  text-transform: uppercase;
  letter-spacing: 0.5px;
}
*/

/* Example: Custom card styling
.card {
  border-radius: 12px;
  box-shadow: 0 4px 6px rgba(0, 0, 0, 0.1);
}
*/
`
// CSSVariablesTemplate contains only CSS variable definitions
const CSSVariablesTemplate = `/*
 * CSS Variables Override for ORLY Relay
 * ======================================
 *
 * This file contains only CSS variable definitions.
 * Edit values here to customize colors without touching component styles.
 *
 * For full CSS customization (including component styles),
 * edit custom.css instead.
 */

/* Light theme variables */
:root {
  --bg-color: #ddd;
  --header-bg: #eee;
  --sidebar-bg: #eee;
  --card-bg: #f8f9fa;
  --panel-bg: #f8f9fa;
  --border-color: #dee2e6;
  --text-color: #444444;
  --text-muted: #6c757d;
  --input-border: #ccc;
  --input-bg: #ffffff;
  --input-text-color: #495057;
  --button-bg: #ddd;
  --button-hover-bg: #eee;
  --button-text: #444444;
  --button-hover-border: #adb5bd;
  --primary: #00bcd4;
  --primary-bg: rgba(0, 188, 212, 0.1);
  --secondary: #6c757d;
  --success: #28a745;
  --success-bg: #d4edda;
  --success-text: #155724;
  --info: #17a2b8;
  --warning: #ff3e00;
  --warning-bg: #fff3cd;
  --danger: #dc3545;
  --danger-bg: #f8d7da;
  --danger-text: #721c24;
  --error-bg: #f8d7da;
  --error-text: #721c24;
  --code-bg: #f8f9fa;
  --code-text: #495057;
  --tab-inactive-bg: #bbb;
  --accent-color: #007bff;
  --accent-hover-color: #0056b3;
}

/* Dark theme variables */
body.dark-theme {
  --bg-color: #263238;
  --header-bg: #1e272c;
  --sidebar-bg: #1e272c;
  --card-bg: #37474f;
  --panel-bg: #37474f;
  --border-color: #404040;
  --text-color: #ffffff;
  --text-muted: #adb5bd;
  --input-border: #555;
  --input-bg: #37474f;
  --input-text-color: #ffffff;
  --button-bg: #263238;
  --button-hover-bg: #1e272c;
  --button-text: #ffffff;
  --button-hover-border: #6c757d;
  --primary: #00bcd4;
  --primary-bg: rgba(0, 188, 212, 0.2);
  --secondary: #6c757d;
  --success: #28a745;
  --success-bg: #1e4620;
  --success-text: #d4edda;
  --info: #17a2b8;
  --warning: #ff3e00;
  --warning-bg: #4d1f00;
  --danger: #dc3545;
  --danger-bg: #4d1319;
  --danger-text: #f8d7da;
  --error-bg: #4d1319;
  --error-text: #f8d7da;
  --code-bg: #1e272c;
  --code-text: #ffffff;
  --tab-inactive-bg: #1a1a1a;
  --accent-color: #007bff;
  --accent-hover-color: #0056b3;
}
`
// GenericCSSTemplate is the CSS template for generic/white-label branding
const GenericCSSTemplate = `/*
 * Custom Branding CSS - White Label Template
 * ==========================================
 *
 * This file is loaded AFTER the default styles, so any rules here
 * will override the defaults. You can customize:
 *
 * 1. CSS Variables (colors, spacing, etc.)
 * 2. Component styles (buttons, cards, headers, etc.)
 * 3. Add completely custom styles
 *
 * Restart the relay to apply changes.
 *
 * For variable-only overrides, edit variables.css instead.
 */

/* =============================================================================
   LIGHT THEME VARIABLES - Professional Blue-Gray
   ============================================================================= */

html, body {
  /* Background colors */
  --bg-color: #F0F4F8; /* Light gray-blue background */
  --header-bg: #FFFFFF; /* Clean white header */
  --sidebar-bg: #FFFFFF; /* Clean white sidebar */
  --card-bg: #FFFFFF; /* White cards */
  --panel-bg: #FFFFFF; /* White panels */

  /* Border colors */
  --border-color: #E2E8F0; /* Subtle gray border */

  /* Text colors */
  --text-color: #334155; /* Dark slate text */
  --text-muted: #64748B; /* Muted slate */

  /* Input/form colors */
  --input-border: #CBD5E1; /* Light slate border */
  --input-bg: #FFFFFF; /* White input */
  --input-text-color: #334155; /* Dark slate text */

  /* Button colors */
  --button-bg: #F1F5F9; /* Light slate button */
  --button-hover-bg: #E2E8F0; /* Slightly darker on hover */
  --button-text: #334155; /* Dark slate text */
  --button-hover-border: #94A3B8; /* Medium slate border */

  /* Theme/accent colors - Professional Blue */
  --primary: #4080C0; /* Professional blue */
  --primary-bg: rgba(64, 128, 192, 0.1); /* Light blue tint */
  --secondary: #64748B; /* Slate gray */

  /* Status colors */
  --success: #22C55E; /* Green */
  --success-bg: #DCFCE7; /* Light green */
  --success-text: #166534; /* Dark green */
  --info: #3B82F6; /* Blue */
  --warning: #F59E0B; /* Amber */
  --warning-bg: #FEF3C7; /* Light amber */
  --danger: #EF4444; /* Red */
  --danger-bg: #FEE2E2; /* Light red */
  --danger-text: #991B1B; /* Dark red */
  --error-bg: #FEE2E2; /* Light red */
  --error-text: #991B1B; /* Dark red */

  /* Code block colors */
  --code-bg: #F8FAFC; /* Very light slate */
  --code-text: #334155; /* Dark slate */

  /* Tab colors */
  --tab-inactive-bg: #E2E8F0; /* Light slate */

  /* Link/accent colors */
  --accent-color: #4080C0; /* Professional blue */
  --accent-hover-color: #2563EB; /* Darker blue */
}

/* =============================================================================
   DARK THEME VARIABLES - Professional Dark
   ============================================================================= */

body.dark-theme {
  /* Background colors */
  --bg-color: #0F172A; /* Dark navy */
  --header-bg: #1E293B; /* Slate gray */
  --sidebar-bg: #1E293B; /* Slate gray */
  --card-bg: #1E293B; /* Slate gray */
  --panel-bg: #1E293B; /* Slate gray */

  /* Border colors */
  --border-color: #334155; /* Medium slate */

  /* Text colors */
  --text-color: #F8FAFC; /* Almost white */
  --text-muted: #94A3B8; /* Muted slate */

  /* Input/form colors */
  --input-border: #475569; /* Slate border */
  --input-bg: #1E293B; /* Slate background */
  --input-text-color: #F8FAFC; /* Light text */

  /* Button colors */
  --button-bg: #334155; /* Slate button */
  --button-hover-bg: #475569; /* Lighter on hover */
  --button-text: #F8FAFC; /* Light text */
  --button-hover-border: #64748B; /* Medium slate */

  /* Theme/accent colors */
  --primary: #60A5FA; /* Lighter blue for dark mode */
  --primary-bg: rgba(96, 165, 250, 0.2); /* Blue tint */
  --secondary: #94A3B8; /* Muted slate */

  /* Status colors */
  --success: #4ADE80; /* Bright green */
  --success-bg: #166534; /* Dark green */
  --success-text: #DCFCE7; /* Light green */
  --info: #60A5FA; /* Light blue */
  --warning: #FBBF24; /* Bright amber */
  --warning-bg: #78350F; /* Dark amber */
  --danger: #F87171; /* Light red */
  --danger-bg: #7F1D1D; /* Dark red */
  --danger-text: #FEE2E2; /* Light red */
  --error-bg: #7F1D1D; /* Dark red */
  --error-text: #FEE2E2; /* Light red */

  /* Code block colors */
  --code-bg: #0F172A; /* Dark navy */
  --code-text: #F8FAFC; /* Light text */

  /* Tab colors */
  --tab-inactive-bg: #1E293B; /* Slate */

  /* Link/accent colors */
  --accent-color: #60A5FA; /* Light blue */
  --accent-hover-color: #93C5FD; /* Lighter blue */
}

/* =============================================================================
   PRIMARY BUTTON TEXT COLOR FIX
   Ensures buttons with primary background have white text for contrast
   ============================================================================= */

/* Target all common button patterns that use primary background */
button[class*="-btn"],
button[class*="submit"],
button[class*="action"],
button[class*="save"],
button[class*="add"],
button[class*="create"],
button[class*="connect"],
button[class*="refresh"],
button[class*="retry"],
button[class*="send"],
button[class*="apply"],
button[class*="execute"],
button[class*="run"],
.primary-action,
.action-button,
.permission-badge,
[class*="badge"] {
  color: #FFFFFF !important;
}

/* More specific override for any button that visually appears to have primary bg */
/* This uses a broad selector with low impact on non-primary buttons */
html:not(.dark-theme) button:not([disabled]) {
  /* Default to inherit, primary buttons will be caught above */
}

/* =============================================================================
   CUSTOM STYLES
   Add your custom CSS rules below. These will override any default styles.
   ============================================================================= */

/* Example: Custom header styling
.header {
  box-shadow: 0 1px 3px rgba(0, 0, 0, 0.1);
}
*/

/* Example: Custom button styling
.btn {
  border-radius: 6px;
  font-weight: 500;
}
*/

/* Example: Custom card styling
.card {
  border-radius: 8px;
  box-shadow: 0 1px 3px rgba(0, 0, 0, 0.1);
}
*/
`
// GenericCSSVariablesTemplate contains CSS variables for generic/white-label branding
const GenericCSSVariablesTemplate = `/*
 * CSS Variables Override - White Label Template
 * ==============================================
 *
 * This file contains only CSS variable definitions.
 * Edit values here to customize colors without touching component styles.
 *
 * For full CSS customization (including component styles),
 * edit custom.css instead.
 */

/* Light theme variables - Professional Blue-Gray */
/* Applied to both html and body for maximum compatibility */
html, body {
  --bg-color: #F0F4F8;
  --header-bg: #FFFFFF;
  --sidebar-bg: #FFFFFF;
  --card-bg: #FFFFFF;
  --panel-bg: #FFFFFF;
  --border-color: #E2E8F0;
  --text-color: #334155;
  --text-muted: #64748B;
  --input-border: #CBD5E1;
  --input-bg: #FFFFFF;
  --input-text-color: #334155;
  --button-bg: #F1F5F9;
  --button-hover-bg: #E2E8F0;
  --button-text: #334155;
  --button-hover-border: #94A3B8;
  --primary: #4080C0;
  --primary-bg: rgba(64, 128, 192, 0.1);
  --secondary: #64748B;
  --success: #22C55E;
  --success-bg: #DCFCE7;
  --success-text: #166534;
  --info: #3B82F6;
  --warning: #F59E0B;
  --warning-bg: #FEF3C7;
  --danger: #EF4444;
  --danger-bg: #FEE2E2;
  --danger-text: #991B1B;
  --error-bg: #FEE2E2;
  --error-text: #991B1B;
  --code-bg: #F8FAFC;
  --code-text: #334155;
  --tab-inactive-bg: #E2E8F0;
  --accent-color: #4080C0;
  --accent-hover-color: #2563EB;
}

/* Dark theme variables - Professional Dark */
body.dark-theme {
  --bg-color: #0F172A;
  --header-bg: #1E293B;
  --sidebar-bg: #1E293B;
  --card-bg: #1E293B;
  --panel-bg: #1E293B;
  --border-color: #334155;
  --text-color: #F8FAFC;
  --text-muted: #94A3B8;
  --input-border: #475569;
  --input-bg: #1E293B;
  --input-text-color: #F8FAFC;
  --button-bg: #334155;
  --button-hover-bg: #475569;
  --button-text: #F8FAFC;
  --button-hover-border: #64748B;
  --primary: #60A5FA;
  --primary-bg: rgba(96, 165, 250, 0.2);
  --secondary: #94A3B8;
  --success: #4ADE80;
  --success-bg: #166534;
  --success-text: #DCFCE7;
  --info: #60A5FA;
  --warning: #FBBF24;
  --warning-bg: #78350F;
  --danger: #F87171;
  --danger-bg: #7F1D1D;
  --danger-text: #FEE2E2;
  --error-bg: #7F1D1D;
  --error-text: #FEE2E2;
  --code-bg: #0F172A;
  --code-text: #F8FAFC;
  --tab-inactive-bg: #1E293B;
  --accent-color: #60A5FA;
  --accent-hover-color: #93C5FD;
}
`
81 app/branding/types.go (Normal file)
@@ -0,0 +1,81 @@
// Package branding provides white-label customization for the ORLY relay web UI.
// It allows relay operators to customize the appearance, branding, and theme
// without rebuilding the application.
package branding

// Config is the main configuration structure loaded from branding.json
type Config struct {
	Version  int            `json:"version"`
	App      AppConfig      `json:"app"`
	NIP11    NIP11Config    `json:"nip11"`
	Manifest ManifestConfig `json:"manifest"`
	Assets   AssetsConfig   `json:"assets"`
	CSS      CSSConfig      `json:"css"`
}

// AppConfig contains application-level branding settings
type AppConfig struct {
	Name        string `json:"name"`        // Display name (e.g., "My Relay")
	ShortName   string `json:"shortName"`   // Short name for PWA (e.g., "Relay")
	Title       string `json:"title"`       // Browser tab title (e.g., "My Relay Dashboard")
	Description string `json:"description"` // Brief description
}

// NIP11Config contains settings for the NIP-11 relay information document
type NIP11Config struct {
	Name        string `json:"name"`        // Relay name in NIP-11 response
	Description string `json:"description"` // Relay description in NIP-11 response
	Icon        string `json:"icon"`        // Icon URL for NIP-11 response
}

// ManifestConfig contains PWA manifest customization
type ManifestConfig struct {
	ThemeColor      string `json:"themeColor"`      // Theme color (e.g., "#1a1a2e")
	BackgroundColor string `json:"backgroundColor"` // Background color (e.g., "#16213e")
}

// AssetsConfig contains paths to custom asset files (relative to branding directory)
type AssetsConfig struct {
	Logo    string `json:"logo"`    // Header logo image (replaces orly.png)
	Favicon string `json:"favicon"` // Browser favicon
	Icon192 string `json:"icon192"` // PWA icon 192x192
	Icon512 string `json:"icon512"` // PWA icon 512x512
}

// CSSConfig contains paths to custom CSS files (relative to branding directory)
type CSSConfig struct {
	CustomCSS    string `json:"customCSS"`    // Full CSS override file
	VariablesCSS string `json:"variablesCSS"` // CSS variables override file (optional)
}

// DefaultConfig returns a default configuration with example values
func DefaultConfig() Config {
	return Config{
		Version: 1,
		App: AppConfig{
			Name:        "My Relay",
			ShortName:   "Relay",
			Title:       "My Relay Dashboard",
			Description: "A high-performance Nostr relay",
		},
		NIP11: NIP11Config{
			Name:        "My Relay",
			Description: "Custom relay description",
			Icon:        "",
		},
		Manifest: ManifestConfig{
			ThemeColor:      "#000000",
			BackgroundColor: "#000000",
		},
		Assets: AssetsConfig{
			Logo:    "assets/logo.png",
			Favicon: "assets/favicon.png",
			Icon192: "assets/icon-192.png",
			Icon512: "assets/icon-512.png",
		},
		CSS: CSSConfig{
			CustomCSS:    "css/custom.css",
			VariablesCSS: "css/variables.css",
		},
	}
}
@@ -41,9 +41,9 @@ type C struct {
 	EnableShutdown bool `env:"ORLY_ENABLE_SHUTDOWN" default:"false" usage:"if true, expose /shutdown on the health port to gracefully stop the process (for profiling)"`
 	LogLevel string `env:"ORLY_LOG_LEVEL" default:"info" usage:"relay log level: fatal error warn info debug trace"`
 	DBLogLevel string `env:"ORLY_DB_LOG_LEVEL" default:"info" usage:"database log level: fatal error warn info debug trace"`
-	DBBlockCacheMB int `env:"ORLY_DB_BLOCK_CACHE_MB" default:"512" usage:"Badger block cache size in MB (higher improves read hit ratio)"`
-	DBIndexCacheMB int `env:"ORLY_DB_INDEX_CACHE_MB" default:"256" usage:"Badger index cache size in MB (improves index lookup performance)"`
-	DBZSTDLevel int `env:"ORLY_DB_ZSTD_LEVEL" default:"1" usage:"Badger ZSTD compression level (1=fast/500MB/s, 3=default, 9=best ratio, 0=disable)"`
+	DBBlockCacheMB int `env:"ORLY_DB_BLOCK_CACHE_MB" default:"1024" usage:"Badger block cache size in MB (higher improves read hit ratio, increase for large archives)"`
+	DBIndexCacheMB int `env:"ORLY_DB_INDEX_CACHE_MB" default:"512" usage:"Badger index cache size in MB (improves index lookup performance, increase for large archives)"`
+	DBZSTDLevel int `env:"ORLY_DB_ZSTD_LEVEL" default:"3" usage:"Badger ZSTD compression level (1=fast/500MB/s, 3=balanced, 9=best ratio/slower, 0=disable)"`
 	LogToStdout bool `env:"ORLY_LOG_TO_STDOUT" default:"false" usage:"log to stdout instead of stderr"`
 	LogBufferSize int `env:"ORLY_LOG_BUFFER_SIZE" default:"10000" usage:"number of log entries to keep in memory for web UI viewing (0 disables)"`
 	Pprof string `env:"ORLY_PPROF" usage:"enable pprof in modes: cpu,memory,allocation,heap,block,goroutine,threadcreate,mutex"`
@@ -69,16 +69,26 @@ type C struct {
 
 	// Progressive throttle for follows ACL mode - allows non-followed users to write with increasing delay
 	FollowsThrottleEnabled bool `env:"ORLY_FOLLOWS_THROTTLE" default:"false" usage:"enable progressive delay for non-followed users in follows ACL mode"`
-	FollowsThrottlePerEvent time.Duration `env:"ORLY_FOLLOWS_THROTTLE_INCREMENT" default:"200ms" usage:"delay added per event for non-followed users"`
+	FollowsThrottlePerEvent time.Duration `env:"ORLY_FOLLOWS_THROTTLE_INCREMENT" default:"25ms" usage:"delay added per event for non-followed users"`
 	FollowsThrottleMaxDelay time.Duration `env:"ORLY_FOLLOWS_THROTTLE_MAX" default:"60s" usage:"maximum throttle delay cap"`
 
-	// Blossom blob storage service level settings
+	// Blossom blob storage service settings
 	BlossomEnabled bool `env:"ORLY_BLOSSOM_ENABLED" default:"true" usage:"enable Blossom blob storage server (only works with Badger backend)"`
 	BlossomServiceLevels string `env:"ORLY_BLOSSOM_SERVICE_LEVELS" usage:"comma-separated list of service levels in format: name:storage_mb_per_sat_per_month (e.g., basic:1,premium:10)"`
 
+	// Blossom upload rate limiting (for non-followed users)
+	BlossomRateLimitEnabled bool `env:"ORLY_BLOSSOM_RATE_LIMIT" default:"false" usage:"enable upload rate limiting for non-followed users"`
+	BlossomDailyLimitMB int64 `env:"ORLY_BLOSSOM_DAILY_LIMIT_MB" default:"10" usage:"daily upload limit in MB for non-followed users (EMA averaged)"`
+	BlossomBurstLimitMB int64 `env:"ORLY_BLOSSOM_BURST_LIMIT_MB" default:"50" usage:"max burst upload in MB (bucket cap)"`
+
 	// Web UI and dev mode settings
 	WebDisableEmbedded bool `env:"ORLY_WEB_DISABLE" default:"false" usage:"disable serving the embedded web UI; useful for hot-reload during development"`
 	WebDevProxyURL string `env:"ORLY_WEB_DEV_PROXY_URL" usage:"when ORLY_WEB_DISABLE is true, reverse-proxy non-API paths to this dev server URL (e.g. http://localhost:5173)"`
 
+	// Branding/white-label settings
+	BrandingDir string `env:"ORLY_BRANDING_DIR" usage:"directory containing branding assets and configuration (default: ~/.config/ORLY/branding)"`
+	BrandingEnabled bool `env:"ORLY_BRANDING_ENABLED" default:"true" usage:"enable custom branding if branding directory exists"`
+
 	// Sprocket settings
 	SprocketEnabled bool `env:"ORLY_SPROCKET_ENABLED" default:"false" usage:"enable sprocket event processing plugin system"`
 
@@ -124,9 +134,9 @@ type C struct {
 	Neo4jMaxTxRetrySeconds int `env:"ORLY_NEO4J_MAX_TX_RETRY_SEC" default:"30" usage:"max seconds for retryable transaction attempts"`
 	Neo4jQueryResultLimit int `env:"ORLY_NEO4J_QUERY_RESULT_LIMIT" default:"10000" usage:"max results returned per query (prevents unbounded memory usage, 0=unlimited)"`
 
-	// Advanced database tuning
-	SerialCachePubkeys int `env:"ORLY_SERIAL_CACHE_PUBKEYS" default:"100000" usage:"max pubkeys to cache for compact event storage (default: 100000, ~3.2MB memory)"`
-	SerialCacheEventIds int `env:"ORLY_SERIAL_CACHE_EVENT_IDS" default:"500000" usage:"max event IDs to cache for compact event storage (default: 500000, ~16MB memory)"`
+	// Advanced database tuning (increase for large archives to reduce cache misses)
+	SerialCachePubkeys int `env:"ORLY_SERIAL_CACHE_PUBKEYS" default:"250000" usage:"max pubkeys to cache for compact event storage (~8MB memory, increase for large archives)"`
+	SerialCacheEventIds int `env:"ORLY_SERIAL_CACHE_EVENT_IDS" default:"1000000" usage:"max event IDs to cache for compact event storage (~32MB memory, increase for large archives)"`
 
 	// Connection concurrency control
 	MaxHandlersPerConnection int `env:"ORLY_MAX_HANDLERS_PER_CONN" default:"100" usage:"max concurrent message handlers per WebSocket connection (limits goroutine growth under load)"`
@@ -439,6 +449,36 @@ func NRCRequested() (requested bool, subcommand string, args []string) {
 	return
 }
 
+// InitBrandingRequested checks if the first command line argument is "init-branding"
+// and returns the target directory and style if provided.
+//
+// Return Values
+//   - requested: true if the 'init-branding' subcommand was provided
+//   - targetDir: optional target directory for branding files (default: ~/.config/ORLY/branding)
+//   - style: branding style ("orly" or "generic", default: "generic")
+//
+// Usage: orly init-branding [--style orly|generic] [path]
+func InitBrandingRequested() (requested bool, targetDir, style string) {
+	style = "generic" // default to generic/white-label
+	if len(os.Args) > 1 {
+		switch strings.ToLower(os.Args[1]) {
+		case "init-branding":
+			requested = true
+			// Parse remaining arguments
+			for i := 2; i < len(os.Args); i++ {
+				arg := os.Args[i]
+				if arg == "--style" && i+1 < len(os.Args) {
+					style = strings.ToLower(os.Args[i+1])
+					i++ // skip next arg
+				} else if !strings.HasPrefix(arg, "-") {
+					targetDir = arg
+				}
+			}
+		}
+	}
+	return
+}
+
 // KV is a key/value pair.
 type KV struct{ Key, Value string }
 
@@ -570,11 +610,16 @@ func PrintHelp(cfg *C, printer io.Writer) {
 	)
 	_, _ = fmt.Fprintf(
 		printer,
-		`Usage: %s [env|help|identity|migrate|serve|version]
+		`Usage: %s [env|help|identity|init-branding|migrate|serve|version]
 
 - env: print environment variables configuring %s
 - help: print this help text
 - identity: print the relay identity secret and public key
+- init-branding: create branding directory with default assets and CSS templates
+    Example: %s init-branding [--style generic|orly] [/path/to/branding]
+    Styles: generic (default) - neutral white-label branding
+            orly - ORLY-branded assets
+    Default location: ~/.config/%s/branding
 - migrate: migrate data between database backends
     Example: %s migrate --from badger --to bbolt
 - serve: start ephemeral relay with RAM-based storage at /dev/shm/orlyserve
@@ -583,7 +628,7 @@ func PrintHelp(cfg *C, printer io.Writer) {
 - version: print version and exit (also: -v, --v, -version, --version)
 
 `,
 		cfg.AppName, cfg.AppName, cfg.AppName,
+		cfg.AppName, cfg.AppName, cfg.AppName, cfg.AppName, cfg.AppName,
 	)
 	_, _ = fmt.Fprintf(
 		printer,
@@ -23,15 +23,27 @@ type CashuMintRequest struct {
 }
 
 // CashuMintResponse is the response body for token issuance.
+// Field names match NIP-XX Cashu Access Tokens spec.
 type CashuMintResponse struct {
 	BlindedSignature string `json:"blinded_signature"` // Hex-encoded blinded signature C_
 	KeysetID string `json:"keyset_id"` // Keyset ID used
 	Expiry int64 `json:"expiry"` // Token expiration timestamp
-	MintPubkey string `json:"mint_pubkey"` // Hex-encoded mint public key
+	MintPubkey string `json:"pubkey"` // Hex-encoded mint public key (spec: "pubkey")
 }
 
 // handleCashuMint handles POST /cashu/mint - issues a new token.
 func (s *Server) handleCashuMint(w http.ResponseWriter, r *http.Request) {
+	// CORS headers for browser-based CAT token requests
+	w.Header().Set("Access-Control-Allow-Origin", "*")
+	w.Header().Set("Access-Control-Allow-Methods", "POST, OPTIONS")
+	w.Header().Set("Access-Control-Allow-Headers", "Content-Type, Accept, Authorization")
+
+	// Handle preflight
+	if r.Method == http.MethodOptions {
+		w.WriteHeader(http.StatusNoContent)
+		return
+	}
+
 	// Check if Cashu is enabled
 	if s.CashuIssuer == nil {
 		log.W.F("Cashu mint request but issuer not initialized")
@@ -107,6 +119,17 @@ func (s *Server) handleCashuMint(w http.ResponseWriter, r *http.Request) {
 
 // handleCashuKeysets handles GET /cashu/keysets - returns available keysets.
 func (s *Server) handleCashuKeysets(w http.ResponseWriter, r *http.Request) {
+	// CORS headers for browser-based CAT support
+	w.Header().Set("Access-Control-Allow-Origin", "*")
+	w.Header().Set("Access-Control-Allow-Methods", "GET, OPTIONS")
+	w.Header().Set("Access-Control-Allow-Headers", "Content-Type, Accept")
+
+	// Handle preflight
+	if r.Method == http.MethodOptions {
+		w.WriteHeader(http.StatusNoContent)
+		return
+	}
+
 	if s.CashuIssuer == nil {
 		http.Error(w, "Cashu tokens not enabled", http.StatusNotImplemented)
 		return
@@ -124,6 +147,17 @@ func (s *Server) handleCashuKeysets(w http.ResponseWriter, r *http.Request) {
 
 // handleCashuInfo handles GET /cashu/info - returns mint information.
 func (s *Server) handleCashuInfo(w http.ResponseWriter, r *http.Request) {
+	// CORS headers for browser-based CAT support detection
+	w.Header().Set("Access-Control-Allow-Origin", "*")
+	w.Header().Set("Access-Control-Allow-Methods", "GET, OPTIONS")
+	w.Header().Set("Access-Control-Allow-Headers", "Content-Type, Accept")
+
+	// Handle preflight
+	if r.Method == http.MethodOptions {
+		w.WriteHeader(http.StatusNoContent)
+		return
+	}
+
 	if s.CashuIssuer == nil {
 		http.Error(w, "Cashu tokens not enabled", http.StatusNotImplemented)
 		return
@@ -21,7 +21,7 @@ import (
 )
 
 func (l *Listener) HandleEvent(msg []byte) (err error) {
-	log.D.F("HandleEvent: START handling event: %s", msg)
+	log.I.F("HandleEvent: START handling event: %s", string(msg[:min(200, len(msg))]))
 
 	// 1. Raw JSON validation (before unmarshal) - use validation service
 	if result := l.eventValidator.ValidateRawJSON(msg); !result.Valid {
@@ -146,8 +146,9 @@ func (l *Listener) HandleEvent(msg []byte) (err error) {
 	// Require Cashu token for NIP-46 events when Cashu is enabled and ACL is active
 	const kindNIP46 = 24133
 	if env.E.Kind == kindNIP46 && l.CashuVerifier != nil && l.Config.ACLMode != "none" {
+		log.D.F("HandleEvent: NIP-46 event from %s, cashuToken=%v, ACLMode=%s", l.remote, l.cashuToken != nil, l.Config.ACLMode)
 		if l.cashuToken == nil {
-			log.W.F("HandleEvent: rejecting NIP-46 event - Cashu access token required")
+			log.W.F("HandleEvent: rejecting NIP-46 event from %s - Cashu access token required (connection has no token)", l.remote)
 			if err = Ok.Error(l, env, "restricted: NIP-46 requires Cashu access token"); chk.E(err) {
 				return
 			}
@@ -231,6 +232,11 @@ func (l *Listener) HandleEvent(msg []byte) (err error) {
 
 	// Authorization check (policy + ACL) - use authorization service
 	decision := l.eventAuthorizer.Authorize(env.E, l.authedPubkey.Load(), l.remote, env.E.Kind)
+	// Debug: log ephemeral event authorization
+	if env.E.Kind >= 20000 && env.E.Kind < 30000 {
+		log.I.F("ephemeral auth check: kind %d, allowed=%v, reason=%s",
+			env.E.Kind, decision.Allowed, decision.DenyReason)
+	}
 	if !decision.Allowed {
 		log.D.F("HandleEvent: authorization denied: %s (requireAuth=%v)", decision.DenyReason, decision.RequireAuth)
 		if decision.RequireAuth {
@@ -256,14 +262,17 @@ func (l *Listener) HandleEvent(msg []byte) (err error) {
 	log.I.F("HandleEvent: authorized with access level %s", decision.AccessLevel)
 
 	// Progressive throttle for follows ACL mode (delays non-followed users)
-	if delay := l.getFollowsThrottleDelay(env.E); delay > 0 {
-		log.D.F("HandleEvent: applying progressive throttle delay of %v for %0x from %s",
-			delay, env.E.Pubkey, l.remote)
-		select {
-		case <-l.ctx.Done():
-			return l.ctx.Err()
-		case <-time.After(delay):
-			// Delay completed, continue processing
+	// Skip throttle if a Cashu Access Token is present (authenticated via CAT)
+	if l.cashuToken == nil {
+		if delay := l.getFollowsThrottleDelay(env.E); delay > 0 {
+			log.D.F("HandleEvent: applying progressive throttle delay of %v for %0x from %s",
+				delay, env.E.Pubkey, l.remote)
+			select {
+			case <-l.ctx.Done():
+				return l.ctx.Err()
+			case <-time.After(delay):
+				// Delay completed, continue processing
+			}
+		}
 	}
 
@@ -143,6 +143,12 @@ func (s *Server) handleCuratingNIP86Method(request NIP86Request, curatingACL *ac
|
||||
return s.handleUnblockCuratingIP(request.Params, dbACL)
|
||||
case "isconfigured":
|
||||
return s.handleIsConfigured(dbACL)
|
||||
case "scanpubkeys":
|
||||
return s.handleScanPubkeys(dbACL)
|
||||
case "geteventsforpubkey":
|
||||
return s.handleGetEventsForPubkey(request.Params, dbACL)
|
||||
case "deleteeventsforpubkey":
|
||||
return s.handleDeleteEventsForPubkey(request.Params, dbACL)
|
||||
default:
|
||||
return NIP86Response{Error: "Unknown method: " + request.Method}
|
||||
}
|
||||
@@ -167,6 +173,9 @@ func (s *Server) handleCuratingSupportedMethods() NIP86Response {
|
||||
"listblockedips",
|
||||
"unblockip",
|
||||
"isconfigured",
|
||||
"scanpubkeys",
|
||||
"geteventsforpubkey",
|
||||
"deleteeventsforpubkey",
|
||||
}
|
||||
return NIP86Response{Result: methods}
|
||||
}
|
||||
@@ -444,8 +453,11 @@ func (s *Server) handleGetCuratingConfig(dbACL *database.CuratingACL) NIP86Respo

```go
		"first_ban_hours":  config.FirstBanHours,
		"second_ban_hours": config.SecondBanHours,
		"allowed_kinds":    config.AllowedKinds,
		"custom_kinds":     config.AllowedKinds, // Alias for frontend compatibility
		"allowed_ranges":   config.AllowedRanges,
		"kind_ranges":      config.AllowedRanges, // Alias for frontend compatibility
		"kind_categories":  config.KindCategories,
		"categories":       config.KindCategories, // Alias for frontend compatibility
		"config_event_id":  config.ConfigEventID,
		"config_pubkey":    config.ConfigPubkey,
		"configured_at":    config.ConfiguredAt,
```
@@ -531,11 +543,23 @@ func GetKindCategoriesInfo() []map[string]interface{} {

```diff
 			"kinds": []int{1063, 20, 21, 22},
 		},
 		{
-			"id":          "marketplace",
-			"name":        "Marketplace",
-			"description": "Product listings, stalls, auctions",
+			"id":          "marketplace_nip15",
+			"name":        "Marketplace (NIP-15)",
+			"description": "Legacy NIP-15 stalls and products",
 			"kinds": []int{30017, 30018, 30019, 30020, 1021, 1022},
 		},
+		{
+			"id":          "marketplace_nip99",
+			"name":        "Marketplace (NIP-99/Gamma)",
+			"description": "NIP-99 classified listings, collections, shipping, reviews (Plebeian Market)",
+			"kinds":       []int{30402, 30403, 30405, 30406, 31555},
+		},
+		{
+			"id":          "order_communication",
+			"name":        "Order Communication",
+			"description": "Gamma Markets order messages and payment receipts",
+			"kinds":       []int{16, 17},
+		},
 		{
 			"id":   "groups_nip29",
 			"name": "Group Messaging (NIP-29)",
```
@@ -591,3 +615,122 @@ func parseRange(s string, parts []int) (int, error) {

```go
	}
	return 0, nil
}

// handleScanPubkeys scans the database for all pubkeys and populates event counts
// This is used to retroactively populate the unclassified users list
func (s *Server) handleScanPubkeys(dbACL *database.CuratingACL) NIP86Response {
	result, err := dbACL.ScanAllPubkeys()
	if chk.E(err) {
		return NIP86Response{Error: "Failed to scan pubkeys: " + err.Error()}
	}

	return NIP86Response{Result: map[string]interface{}{
		"total_pubkeys": result.TotalPubkeys,
		"total_events":  result.TotalEvents,
		"skipped":       result.Skipped,
	}}
}

// handleGetEventsForPubkey returns events for a specific pubkey
// Params: [pubkey, limit (optional, default 100), offset (optional, default 0)]
func (s *Server) handleGetEventsForPubkey(params []interface{}, dbACL *database.CuratingACL) NIP86Response {
	if len(params) < 1 {
		return NIP86Response{Error: "Missing required parameter: pubkey"}
	}

	pubkey, ok := params[0].(string)
	if !ok {
		return NIP86Response{Error: "Invalid pubkey parameter"}
	}

	if len(pubkey) != 64 {
		return NIP86Response{Error: "Invalid pubkey format (must be 64 hex characters)"}
	}

	// Parse optional limit (default 100)
	limit := 100
	if len(params) > 1 {
		if l, ok := params[1].(float64); ok {
			limit = int(l)
			if limit > 500 {
				limit = 500 // Cap at 500
			}
			if limit < 1 {
				limit = 1
			}
		}
	}

	// Parse optional offset (default 0)
	offset := 0
	if len(params) > 2 {
		if o, ok := params[2].(float64); ok {
			offset = int(o)
			if offset < 0 {
				offset = 0
			}
		}
	}

	events, total, err := dbACL.GetEventsForPubkey(pubkey, limit, offset)
	if chk.E(err) {
		return NIP86Response{Error: "Failed to get events: " + err.Error()}
	}

	// Convert to response format
	eventList := make([]map[string]interface{}, len(events))
	for i, ev := range events {
		eventList[i] = map[string]interface{}{
			"id":         ev.ID,
			"kind":       ev.Kind,
			"content":    ev.Content,
			"created_at": ev.CreatedAt,
		}
	}

	return NIP86Response{Result: map[string]interface{}{
		"events": eventList,
		"total":  total,
		"limit":  limit,
		"offset": offset,
	}}
}

// handleDeleteEventsForPubkey deletes all events for a specific pubkey
// This is only allowed for blacklisted pubkeys as a safety measure
// Params: [pubkey]
func (s *Server) handleDeleteEventsForPubkey(params []interface{}, dbACL *database.CuratingACL) NIP86Response {
	if len(params) < 1 {
		return NIP86Response{Error: "Missing required parameter: pubkey"}
	}

	pubkey, ok := params[0].(string)
	if !ok {
		return NIP86Response{Error: "Invalid pubkey parameter"}
	}

	if len(pubkey) != 64 {
		return NIP86Response{Error: "Invalid pubkey format (must be 64 hex characters)"}
	}

	// Safety check: only allow deletion of events from blacklisted users
	isBlacklisted, err := dbACL.IsPubkeyBlacklisted(pubkey)
	if chk.E(err) {
		return NIP86Response{Error: "Failed to check blacklist status: " + err.Error()}
	}

	if !isBlacklisted {
		return NIP86Response{Error: "Can only delete events from blacklisted users. Blacklist the user first."}
	}

	// Delete all events for this pubkey
	deleted, err := dbACL.DeleteEventsForPubkey(pubkey)
	if chk.E(err) {
		return NIP86Response{Error: "Failed to delete events: " + err.Error()}
	}

	return NIP86Response{Result: map[string]interface{}{
		"deleted": deleted,
		"pubkey":  pubkey,
	}}
}
```
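The limit/offset parsing in handleGetEventsForPubkey reduces to two clamps: limit defaults to 100 and is held to the range [1, 500], offset is floored at 0. A standalone sketch; the helper names are illustrative, not from the codebase:

```go
package main

import "fmt"

// clampLimit applies the same bounds handleGetEventsForPubkey uses:
// values above 500 are capped, values below 1 are raised to 1.
func clampLimit(l int) int {
	if l > 500 {
		return 500
	}
	if l < 1 {
		return 1
	}
	return l
}

// clampOffset floors negative offsets at zero.
func clampOffset(o int) int {
	if o < 0 {
		return 0
	}
	return o
}

func main() {
	fmt.Println(clampLimit(1000), clampLimit(0), clampLimit(250)) // 500 1 250
	fmt.Println(clampOffset(-5), clampOffset(40))                 // 0 40
}
```

The cap keeps a single NIP-86 response bounded; a caller pages through larger result sets with the offset parameter.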
@@ -115,6 +115,20 @@ func (s *Server) HandleRelayInfo(w http.ResponseWriter, r *http.Request) {

```go
	description := version.Description + " dashboard: " + s.DashboardURL(r)
	icon := "https://i.nostr.build/6wGXAn7Zaw9mHxFg.png"

	// Override with branding config if available
	if s.brandingMgr != nil {
		nip11 := s.brandingMgr.NIP11Config()
		if nip11.Name != "" {
			name = nip11.Name
		}
		if nip11.Description != "" {
			description = nip11.Description
		}
		if nip11.Icon != "" {
			icon = nip11.Icon
		}
	}

	// Override with managed ACL config if in managed mode
	if s.Config.ACLMode == "managed" {
		// Get managed ACL instance
```
@@ -34,7 +34,6 @@ import (

```go
func (l *Listener) HandleReq(msg []byte) (err error) {
	log.D.F("handling REQ: %s", msg)
	log.T.F("HandleReq: START processing from %s", l.remote)
	// var rem []byte
	env := reqenvelope.New()
	if _, err = env.Unmarshal(msg); chk.E(err) {
```
|
||||
func (s *Server) extractWebSocketToken(r *http.Request, remote string) *token.Token {
|
||||
// Try query param first (WebSocket clients often can't set custom headers)
|
||||
tokenStr := r.URL.Query().Get("token")
|
||||
log.D.F("ws %s: CAT extraction - query param token: %v", remote, tokenStr != "")
|
||||
|
||||
// Try X-Cashu-Token header
|
||||
if tokenStr == "" {
|
||||
tokenStr = r.Header.Get("X-Cashu-Token")
|
||||
log.D.F("ws %s: CAT extraction - X-Cashu-Token header: %v", remote, tokenStr != "")
|
||||
}
|
||||
|
||||
// Try Authorization: Cashu scheme
|
||||
@@ -321,12 +323,15 @@ func (s *Server) extractWebSocketToken(r *http.Request, remote string) *token.To
|
||||
if strings.HasPrefix(auth, "Cashu ") {
|
||||
tokenStr = strings.TrimPrefix(auth, "Cashu ")
|
||||
}
|
||||
log.D.F("ws %s: CAT extraction - Authorization header: %v", remote, tokenStr != "")
|
||||
}
|
||||
|
||||
// No token provided - this is fine, connection proceeds without token
|
||||
if tokenStr == "" {
|
||||
log.D.F("ws %s: CAT extraction - no token found", remote)
|
||||
return nil
|
||||
}
|
||||
log.D.F("ws %s: CAT extraction - found token (len=%d)", remote, len(tokenStr))
|
||||
|
||||
// Parse the token
|
||||
tok, err := token.Parse(tokenStr)
|
||||
|
||||
app/main.go (21 changes)
@@ -10,9 +10,11 @@ import (

```go
	"sync"
	"time"

	"github.com/adrg/xdg"
	"golang.org/x/crypto/acme/autocert"
	"lol.mleku.dev/chk"
	"lol.mleku.dev/log"
	"next.orly.dev/app/branding"
	"next.orly.dev/app/config"
	"next.orly.dev/pkg/acl"
	"git.mleku.dev/mleku/nostr/crypto/keys"
```

@@ -91,6 +93,21 @@ func Run(

```go
		db: db,
	}

	// Initialize branding/white-label manager if enabled
	if cfg.BrandingEnabled {
		brandingDir := cfg.BrandingDir
		if brandingDir == "" {
			brandingDir = filepath.Join(xdg.ConfigHome, cfg.AppName, "branding")
		}
		if _, err := os.Stat(brandingDir); err == nil {
			if l.brandingMgr, err = branding.New(brandingDir); err != nil {
				log.W.F("failed to load branding from %s: %v", brandingDir, err)
			} else {
				log.I.F("custom branding loaded from %s", brandingDir)
			}
		}
	}

	// Initialize NIP-43 invite manager if enabled
	if cfg.NIP43Enabled {
		l.InviteManager = nip43.NewInviteManager(cfg.NIP43InviteExpiry)
```

@@ -435,7 +452,7 @@ func Run(

```diff
 	// Initialize Blossom blob storage server (only for Badger backend)
 	// MUST be done before UserInterface() which registers routes
-	if badgerDB, ok := db.(*database.D); ok {
+	if badgerDB, ok := db.(*database.D); ok && cfg.BlossomEnabled {
 		log.I.F("Badger backend detected, initializing Blossom server...")
 		if l.blossomServer, err = initializeBlossomServer(ctx, cfg, badgerDB); err != nil {
 			log.E.F("failed to initialize blossom server: %v", err)
```

@@ -445,6 +462,8 @@ func Run(

```go
		} else {
			log.W.F("blossom server initialization returned nil without error")
		}
	} else if !cfg.BlossomEnabled {
		log.I.F("Blossom server disabled via ORLY_BLOSSOM_ENABLED=false")
	} else {
		log.I.F("Non-Badger backend detected (type: %T), Blossom server not available", db)
	}
```
@@ -159,12 +159,26 @@ func (p *P) Deliver(ev *event.E) {

```go
		sub Subscription
	}
	var deliveries []delivery
	// Debug: log ephemeral event delivery attempts
	isEphemeral := ev.Kind >= 20000 && ev.Kind < 30000
	if isEphemeral {
		var tagInfo string
		if ev.Tags != nil {
			tagInfo = string(ev.Tags.Marshal(nil))
		}
		log.I.F("ephemeral event kind %d, id %0x, checking %d connections for matches, tags: %s",
			ev.Kind, ev.ID[:8], len(p.Map), tagInfo)
	}
	for w, subs := range p.Map {
		for id, subscriber := range subs {
			if subscriber.Match(ev) {
				deliveries = append(
					deliveries, delivery{w: w, id: id, sub: subscriber},
				)
			} else if isEphemeral {
				// Debug: log why ephemeral events don't match
				log.I.F("ephemeral event kind %d did NOT match subscription %s (filters: %s)",
					ev.Kind, id, string(subscriber.S.Marshal(nil)))
			}
		}
	}
```
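The isEphemeral check in Deliver follows NIP-01's kind ranges: kinds 20000 through 29999 are ephemeral, meaning relays forward them to live subscriptions but do not store them. As a tiny standalone predicate:

```go
package main

import "fmt"

// isEphemeral reproduces the range check used in Deliver. Per NIP-01,
// kinds in [20000, 30000) are ephemeral: delivered to matching open
// subscriptions, never persisted.
func isEphemeral(kind int) bool {
	return kind >= 20000 && kind < 30000
}

func main() {
	fmt.Println(isEphemeral(20001), isEphemeral(1), isEphemeral(30000)) // true false false
}
```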
app/server.go (167 changes)
@@ -15,6 +15,7 @@ import (

```go
	"time"

	"lol.mleku.dev/chk"
	"next.orly.dev/app/branding"
	"next.orly.dev/app/config"
	"next.orly.dev/pkg/acl"
	"next.orly.dev/pkg/blossom"
```

@@ -106,6 +107,9 @@ type Server struct {

```go
	// Tor hidden service
	torService *tor.Service

	// Branding/white-label customization
	brandingMgr *branding.Manager
}

// isIPBlacklisted checks if an IP address is blacklisted using the managed ACL system
```

@@ -302,6 +306,12 @@ func (s *Server) UserInterface() {

```go
	// Serve favicon.ico by serving favicon.png
	s.mux.HandleFunc("/favicon.ico", s.handleFavicon)

	// Branding/white-label endpoints (custom assets, CSS, manifest)
	s.mux.HandleFunc("/branding/", s.handleBrandingAsset)

	// Intercept /orly.png to serve custom logo if branding is active
	s.mux.HandleFunc("/orly.png", s.handleLogo)

	// Serve the main login interface (and static assets) or proxy in dev mode
	s.mux.HandleFunc("/", s.handleLoginInterface)
```

@@ -401,6 +411,16 @@ func (s *Server) handleFavicon(w http.ResponseWriter, r *http.Request) {

```go
		return
	}

	// Check for custom branding favicon first
	if s.brandingMgr != nil {
		if data, mimeType, ok := s.brandingMgr.GetAsset("favicon"); ok {
			w.Header().Set("Content-Type", mimeType)
			w.Header().Set("Cache-Control", "public, max-age=86400")
			w.Write(data)
			return
		}
	}

	// Serve favicon.png as favicon.ico from embedded web app
	w.Header().Set("Content-Type", "image/png")
	w.Header().Set("Cache-Control", "public, max-age=86400") // Cache for 1 day
```

@@ -413,6 +433,30 @@ func (s *Server) handleFavicon(w http.ResponseWriter, r *http.Request) {

```go
	ServeEmbeddedWeb(w, faviconReq)
}

// handleLogo serves the logo image, using custom branding if available
func (s *Server) handleLogo(w http.ResponseWriter, r *http.Request) {
	// In dev mode with proxy configured, forward to dev server
	if s.devProxy != nil {
		s.devProxy.ServeHTTP(w, r)
		return
	}

	// Check for custom branding logo first
	if s.brandingMgr != nil {
		if data, mimeType, ok := s.brandingMgr.GetAsset("logo"); ok {
			w.Header().Set("Content-Type", mimeType)
			w.Header().Set("Cache-Control", "public, max-age=86400")
			w.Write(data)
			return
		}
	}

	// Fall back to embedded orly.png
	w.Header().Set("Content-Type", "image/png")
	w.Header().Set("Cache-Control", "public, max-age=86400")
	ServeEmbeddedWeb(w, r)
}

// handleLoginInterface serves the main user interface for login
func (s *Server) handleLoginInterface(w http.ResponseWriter, r *http.Request) {
	// In dev mode with proxy configured, forward to dev server
```

@@ -427,10 +471,133 @@ func (s *Server) handleLoginInterface(w http.ResponseWriter, r *http.Request) {

```go
		return
	}

	// If branding is enabled and this is the index page, inject customizations
	if s.brandingMgr != nil && (r.URL.Path == "/" || r.URL.Path == "/index.html") {
		s.serveModifiedIndex(w, r)
		return
	}

	// Serve embedded web interface
	ServeEmbeddedWeb(w, r)
}

// serveModifiedIndex serves the index.html with branding modifications injected
func (s *Server) serveModifiedIndex(w http.ResponseWriter, r *http.Request) {
	// Read the embedded index.html
	fs := GetReactAppFS()
	file, err := fs.Open("index.html")
	if err != nil {
		// Fallback to embedded serving
		ServeEmbeddedWeb(w, r)
		return
	}
	defer file.Close()

	originalHTML, err := io.ReadAll(file)
	if err != nil {
		ServeEmbeddedWeb(w, r)
		return
	}

	// Apply branding modifications
	modifiedHTML, err := s.brandingMgr.ModifyIndexHTML(originalHTML)
	if err != nil {
		ServeEmbeddedWeb(w, r)
		return
	}

	w.Header().Set("Content-Type", "text/html; charset=utf-8")
	w.Header().Set("Cache-Control", "no-cache")
	w.Write(modifiedHTML)
}

// handleBrandingAsset serves custom branding assets (logo, icons, CSS, manifest)
func (s *Server) handleBrandingAsset(w http.ResponseWriter, r *http.Request) {
	// Extract asset name from path: /branding/logo.png -> logo.png
	path := strings.TrimPrefix(r.URL.Path, "/branding/")

	// If no branding manager, return 404
	if s.brandingMgr == nil {
		http.NotFound(w, r)
		return
	}

	switch path {
	case "custom.css":
		// Serve combined custom CSS
		css, err := s.brandingMgr.GetCustomCSS()
		if err != nil {
			http.NotFound(w, r)
			return
		}
		w.Header().Set("Content-Type", "text/css; charset=utf-8")
		w.Header().Set("Cache-Control", "public, max-age=3600")
		w.Write(css)

	case "manifest.json":
		// Serve customized manifest.json
		// First read the embedded manifest
		fs := GetReactAppFS()
		file, err := fs.Open("manifest.json")
		if err != nil {
			http.NotFound(w, r)
			return
		}
		defer file.Close()

		originalManifest, err := io.ReadAll(file)
		if err != nil {
			http.NotFound(w, r)
			return
		}

		manifest, err := s.brandingMgr.GetManifest(originalManifest)
		if err != nil {
			// Fallback to original
			w.Header().Set("Content-Type", "application/manifest+json")
			w.Write(originalManifest)
			return
		}

		w.Header().Set("Content-Type", "application/manifest+json")
		w.Header().Set("Cache-Control", "public, max-age=3600")
		w.Write(manifest)

	case "logo.png":
		s.serveBrandingAsset(w, "logo")

	case "favicon.png":
		s.serveBrandingAsset(w, "favicon")

	case "icon-192.png":
		s.serveBrandingAsset(w, "icon-192")

	case "icon-512.png":
		s.serveBrandingAsset(w, "icon-512")

	default:
		http.NotFound(w, r)
	}
}

// serveBrandingAsset serves a specific branding asset by name
func (s *Server) serveBrandingAsset(w http.ResponseWriter, name string) {
	if s.brandingMgr == nil {
		http.NotFound(w, nil)
		return
	}

	data, mimeType, ok := s.brandingMgr.GetAsset(name)
	if !ok {
		http.NotFound(w, nil)
		return
	}

	w.Header().Set("Content-Type", mimeType)
	w.Header().Set("Cache-Control", "public, max-age=86400")
	w.Write(data)
}

// handleAuthChallenge generates a new authentication challenge
func (s *Server) handleAuthChallenge(w http.ResponseWriter, r *http.Request) {
	if r.Method != http.MethodGet {
```
@@ -23,3 +23,9 @@ func ServeEmbeddedWeb(w http.ResponseWriter, r *http.Request) {

```go
	// Serve the embedded web app
	http.FileServer(GetReactAppFS()).ServeHTTP(w, r)
}

// GetEmbeddedWebFS returns the raw embedded filesystem for branding initialization.
// This is used by the init-branding command to extract default assets.
func GetEmbeddedWebFS() embed.FS {
	return reactAppFS
}
```
app/web/dist/bundle.css (vendored, 4 changes): file diff suppressed because one or more lines are too long
app/web/dist/bundle.js (vendored, 28 changes): file diff suppressed because one or more lines are too long
app/web/dist/bundle.js.map (vendored, 2 changes): file diff suppressed because one or more lines are too long
@@ -6,25 +6,35 @@

```diff
 import { fileURLToPath } from 'url';
 import { dirname, join } from 'path';
-import { writeFileSync } from 'fs';
+import { writeFileSync, existsSync } from 'fs';
 
 const __filename = fileURLToPath(import.meta.url);
 const __dirname = dirname(__filename);
 
 const KINDS_URL = 'https://git.mleku.dev/mleku/nostr/raw/branch/main/encoders/kind/kinds.json';
+const OUTPUT_PATH = join(__dirname, '..', 'src', 'eventKinds.js');
 
 async function fetchKinds() {
   console.log(`Fetching kinds from ${KINDS_URL}...`);
 
-  const response = await fetch(KINDS_URL);
-  if (!response.ok) {
-    throw new Error(`Failed to fetch kinds.json: ${response.status} ${response.statusText}`);
+  try {
+    const response = await fetch(KINDS_URL, { timeout: 10000 });
+    if (!response.ok) {
+      throw new Error(`HTTP ${response.status} ${response.statusText}`);
+    }
+
+    const data = await response.json();
+    console.log(`Fetched ${Object.keys(data.kinds).length} kinds (version: ${data.version})`);
+    return data;
+  } catch (error) {
+    // Check if we have an existing eventKinds.js we can use
+    if (existsSync(OUTPUT_PATH)) {
+      console.warn(`Warning: Could not fetch kinds.json (${error.message})`);
+      console.log(`Using existing ${OUTPUT_PATH}`);
+      return null; // Signal to skip generation
+    }
+    throw new Error(`Failed to fetch kinds.json and no existing file: ${error.message}`);
   }
-
-  const data = await response.json();
-  console.log(`Fetched ${Object.keys(data.kinds).length} kinds (version: ${data.version})`);
-
-  return data;
 }
 
 function generateEventKinds(data) {
```

@@ -202,14 +212,18 @@ export const kindCategories = [

```diff
 async function main() {
   try {
     const data = await fetchKinds();
 
+    // If fetchKinds returned null, we're using the existing file
+    if (data === null) {
+      console.log('Skipping generation, using existing eventKinds.js');
+      return;
+    }
+
     const kinds = generateEventKinds(data);
     const js = generateJS(kinds, data);
 
-    // Write to src/eventKinds.js
-    const outPath = join(__dirname, '..', 'src', 'eventKinds.js');
-
-    writeFileSync(outPath, js);
-    console.log(`Generated ${outPath} with ${kinds.length} kinds`);
+    writeFileSync(OUTPUT_PATH, js);
+    console.log(`Generated ${OUTPUT_PATH} with ${kinds.length} kinds`);
   } catch (error) {
     console.error('Error:', error.message);
     process.exit(1);
```
@@ -13,6 +13,15 @@

```svelte
  let messageType = "info";
  let isConfigured = false;

  // User detail view state
  let selectedUser = null;
  let selectedUserType = null; // "trusted", "blacklisted", or "unclassified"
  let userEvents = [];
  let userEventsTotal = 0;
  let userEventsOffset = 0;
  let loadingEvents = false;
  let expandedEvents = {}; // Track which events are expanded

  // Configuration state
  let config = {
    daily_limit: 50,
```
@@ -186,6 +195,19 @@

```svelte
    }
  }

  // Scan database for all pubkeys
  async function scanDatabase() {
    try {
      const result = await callNIP86API("scanpubkeys");
      showMessage(`Database scanned: ${result.total_pubkeys} pubkeys, ${result.total_events} events (${result.skipped} skipped)`, "success");
      // Refresh the unclassified users list
      await loadUnclassifiedUsers();
    } catch (error) {
      console.error("Failed to scan database:", error);
      showMessage("Failed to scan database: " + error.message, "error");
    }
  }

  // Load spam events
  async function loadSpamEvents() {
    try {
```
@@ -430,6 +452,176 @@

```svelte
    if (!timestamp) return "";
    return new Date(timestamp).toLocaleString();
  }

  // Show message helper
  function showMessage(msg, type = "info") {
    message = msg;
    messageType = type;
  }

  // Open user detail view
  async function openUserDetail(pubkey, type) {
    console.log("openUserDetail called:", pubkey, type);
    selectedUser = pubkey;
    selectedUserType = type;
    userEvents = [];
    userEventsTotal = 0;
    userEventsOffset = 0;
    expandedEvents = {};
    console.log("selectedUser set to:", selectedUser);
    await loadUserEvents();
  }

  // Close user detail view
  function closeUserDetail() {
    selectedUser = null;
    selectedUserType = null;
    userEvents = [];
    userEventsTotal = 0;
    userEventsOffset = 0;
    expandedEvents = {};
  }

  // Load events for selected user
  async function loadUserEvents() {
    console.log("loadUserEvents called, selectedUser:", selectedUser, "loadingEvents:", loadingEvents);
    if (!selectedUser || loadingEvents) return;

    try {
      loadingEvents = true;
      console.log("Calling geteventsforpubkey API...");
      const result = await callNIP86API("geteventsforpubkey", [selectedUser, 100, userEventsOffset]);
      console.log("API result:", result);
      if (result) {
        if (userEventsOffset === 0) {
          userEvents = result.events || [];
        } else {
          userEvents = [...userEvents, ...(result.events || [])];
        }
        userEventsTotal = result.total || 0;
      }
    } catch (error) {
      console.error("Failed to load user events:", error);
      showMessage("Failed to load events: " + error.message, "error");
    } finally {
      loadingEvents = false;
    }
  }

  // Load more events
  async function loadMoreEvents() {
    userEventsOffset = userEvents.length;
    await loadUserEvents();
  }

  // Toggle event expansion
  function toggleEventExpansion(eventId) {
    expandedEvents = {
      ...expandedEvents,
      [eventId]: !expandedEvents[eventId]
    };
  }

  // Truncate content to 6 lines (approximately 100 chars per line)
  function truncateContent(content, maxLines = 6) {
    if (!content) return "";
    const lines = content.split('\n');
    if (lines.length <= maxLines && content.length <= maxLines * 100) {
      return content;
    }
    // Truncate by lines or characters, whichever is smaller
    let truncated = lines.slice(0, maxLines).join('\n');
    if (truncated.length > maxLines * 100) {
      truncated = truncated.substring(0, maxLines * 100);
    }
    return truncated;
  }

  // Check if content is truncated
  function isContentTruncated(content, maxLines = 6) {
    if (!content) return false;
    const lines = content.split('\n');
    return lines.length > maxLines || content.length > maxLines * 100;
  }

  // Trust user from detail view and refresh
  async function trustUserFromDetail() {
    await trustPubkey(selectedUser, "");
    // Refresh list and go back
    await loadAllData();
    closeUserDetail();
  }

  // Blacklist user from detail view and refresh
  async function blacklistUserFromDetail() {
    await blacklistPubkey(selectedUser, "");
    // Refresh list and go back
    await loadAllData();
    closeUserDetail();
  }

  // Untrust user from detail view and refresh
  async function untrustUserFromDetail() {
    await untrustPubkey(selectedUser);
    await loadAllData();
    closeUserDetail();
  }

  // Unblacklist user from detail view and refresh
  async function unblacklistUserFromDetail() {
    await unblacklistPubkey(selectedUser);
    await loadAllData();
    closeUserDetail();
  }

  // Delete all events for a blacklisted user
  async function deleteAllEventsForUser() {
    if (!confirm(`Delete ALL ${userEventsTotal} events from this user? This cannot be undone.`)) {
      return;
    }

    try {
      isLoading = true;
      const result = await callNIP86API("deleteeventsforpubkey", [selectedUser]);
      showMessage(`Deleted ${result.deleted} events`, "success");
      // Refresh the events list
      userEvents = [];
      userEventsTotal = 0;
      userEventsOffset = 0;
      await loadUserEvents();
    } catch (error) {
      console.error("Failed to delete events:", error);
      showMessage("Failed to delete events: " + error.message, "error");
    } finally {
      isLoading = false;
    }
  }

  // Get kind name
  function getKindName(kind) {
    const kindNames = {
      0: "Metadata",
      1: "Text Note",
      3: "Follow List",
      4: "Encrypted DM",
      6: "Repost",
      7: "Reaction",
      14: "Chat Message",
      16: "Order Message",
      17: "Payment Receipt",
      1063: "File Metadata",
      10002: "Relay List",
      30017: "Stall",
      30018: "Product (NIP-15)",
      30023: "Long-form",
      30078: "App Data",
      30402: "Product (NIP-99)",
      30405: "Collection",
      30406: "Shipping",
      31555: "Review",
    };
    return kindNames[kind] || `Kind ${kind}`;
  }
</script>

<div class="curation-view">
```
@@ -532,29 +724,97 @@

```diff
 						</div>
 					</div>
-	{:else}
-		<!-- Active Mode -->
-		<div class="tabs">
-			<button class="tab" class:active={activeTab === "trusted"} on:click={() => activeTab = "trusted"}>
-				Trusted ({trustedPubkeys.length})
-			</button>
-			<button class="tab" class:active={activeTab === "blacklist"} on:click={() => activeTab = "blacklist"}>
-				Blacklist ({blacklistedPubkeys.length})
-			</button>
-			<button class="tab" class:active={activeTab === "unclassified"} on:click={() => activeTab = "unclassified"}>
-				Unclassified ({unclassifiedUsers.length})
-			</button>
-			<button class="tab" class:active={activeTab === "spam"} on:click={() => activeTab = "spam"}>
-				Spam ({spamEvents.length})
-			</button>
-			<button class="tab" class:active={activeTab === "ips"} on:click={() => activeTab = "ips"}>
-				Blocked IPs ({blockedIPs.length})
-			</button>
-			<button class="tab" class:active={activeTab === "settings"} on:click={() => activeTab = "settings"}>
-				Settings
-			</button>
-		</div>
+	<!-- User Detail View -->
+	{#if selectedUser}
+		<div class="user-detail-view">
+			<div class="detail-header">
+				<div class="detail-header-left">
+					<button class="back-btn" on:click={closeUserDetail}>
+						← Back
+					</button>
+					<h3>User Events</h3>
+					<span class="detail-pubkey" title={selectedUser}>{formatPubkey(selectedUser)}</span>
+					<span class="detail-count">{userEventsTotal} events</span>
+				</div>
+				<div class="detail-header-right">
+					{#if selectedUserType === "trusted"}
+						<button class="btn-danger" on:click={untrustUserFromDetail}>Remove Trust</button>
+						<button class="btn-danger" on:click={blacklistUserFromDetail}>Blacklist</button>
+					{:else if selectedUserType === "blacklisted"}
+						<button class="btn-delete-all" on:click={deleteAllEventsForUser} disabled={isLoading || userEventsTotal === 0}>
+							Delete All Events
+						</button>
+						<button class="btn-success" on:click={unblacklistUserFromDetail}>Remove from Blacklist</button>
+						<button class="btn-success" on:click={trustUserFromDetail}>Trust</button>
+					{:else}
+						<button class="btn-success" on:click={trustUserFromDetail}>Trust</button>
+						<button class="btn-danger" on:click={blacklistUserFromDetail}>Blacklist</button>
+					{/if}
+				</div>
+			</div>
+
+		<div class="tab-content">
+			<div class="events-list">
+				{#if loadingEvents && userEvents.length === 0}
+					<div class="loading">Loading events...</div>
+				{:else if userEvents.length === 0}
+					<div class="empty">No events found for this user.</div>
+				{:else}
+					{#each userEvents as event}
+						<div class="event-item">
+							<div class="event-header">
+								<span class="event-kind">{getKindName(event.kind)}</span>
+								<span class="event-id" title={event.id}>{formatPubkey(event.id)}</span>
+								<span class="event-time">{formatDate(event.created_at * 1000)}</span>
+							</div>
+							<div class="event-content" class:expanded={expandedEvents[event.id]}>
+								{#if expandedEvents[event.id] || !isContentTruncated(event.content)}
+									<pre>{event.content || "(empty)"}</pre>
+								{:else}
+									<pre>{truncateContent(event.content)}...</pre>
+								{/if}
+							</div>
+							{#if isContentTruncated(event.content)}
+								<button class="expand-btn" on:click={() => toggleEventExpansion(event.id)}>
+									{expandedEvents[event.id] ? "Show less" : "Show more"}
+								</button>
+							{/if}
+						</div>
+					{/each}
+
+					{#if userEvents.length < userEventsTotal}
+						<div class="load-more">
+							<button on:click={loadMoreEvents} disabled={loadingEvents}>
+								{loadingEvents ? "Loading..." : `Load more (${userEvents.length} of ${userEventsTotal})`}
+							</button>
+						</div>
+					{/if}
+				{/if}
+			</div>
+		</div>
+	{:else}
+		<!-- Active Mode -->
+		<div class="tabs">
+			<button class="tab" class:active={activeTab === "trusted"} on:click={() => activeTab = "trusted"}>
+				Trusted ({trustedPubkeys.length})
+			</button>
+			<button class="tab" class:active={activeTab === "blacklist"} on:click={() => activeTab = "blacklist"}>
+				Blacklist ({blacklistedPubkeys.length})
+			</button>
+			<button class="tab" class:active={activeTab === "unclassified"} on:click={() => activeTab = "unclassified"}>
+				Unclassified ({unclassifiedUsers.length})
+			</button>
+			<button class="tab" class:active={activeTab === "spam"} on:click={() => activeTab = "spam"}>
+				Spam ({spamEvents.length})
+			</button>
+			<button class="tab" class:active={activeTab === "ips"} on:click={() => activeTab = "ips"}>
+				Blocked IPs ({blockedIPs.length})
+			</button>
+			<button class="tab" class:active={activeTab === "settings"} on:click={() => activeTab = "settings"}>
+				Settings
+			</button>
+		</div>
 
 		<div class="tab-content">
 			{#if activeTab === "trusted"}
 				<div class="section">
 					<h3>Trusted Publishers</h3>
```
@@ -579,7 +839,7 @@
<div class="list">
{#if trustedPubkeys.length > 0}
{#each trustedPubkeys as item}
-<div class="list-item">
+<div class="list-item clickable" on:click={() => openUserDetail(item.pubkey, "trusted")}>
<div class="item-main">
<span class="pubkey" title={item.pubkey}>{formatPubkey(item.pubkey)}</span>
{#if item.note}
@@ -587,7 +847,7 @@
{/if}
</div>
<div class="item-actions">
-<button class="btn-danger" on:click={() => untrustPubkey(item.pubkey)}>
+<button class="btn-danger" on:click|stopPropagation={() => untrustPubkey(item.pubkey)}>
Remove
</button>
</div>
@@ -624,7 +884,7 @@
<div class="list">
{#if blacklistedPubkeys.length > 0}
{#each blacklistedPubkeys as item}
-<div class="list-item">
+<div class="list-item clickable" on:click={() => openUserDetail(item.pubkey, "blacklisted")}>
<div class="item-main">
<span class="pubkey" title={item.pubkey}>{formatPubkey(item.pubkey)}</span>
{#if item.reason}
@@ -632,7 +892,7 @@
{/if}
</div>
<div class="item-actions">
-<button class="btn-success" on:click={() => unblacklistPubkey(item.pubkey)}>
+<button class="btn-success" on:click|stopPropagation={() => unblacklistPubkey(item.pubkey)}>
Remove
</button>
</div>
@@ -650,23 +910,28 @@
<h3>Unclassified Users</h3>
<p class="help-text">Users who have posted events but haven't been classified. Sorted by event count.</p>

-<button class="refresh-btn" on:click={loadUnclassifiedUsers} disabled={isLoading}>
-Refresh
-</button>
+<div class="button-row">
+<button class="refresh-btn" on:click={loadUnclassifiedUsers} disabled={isLoading}>
+Refresh
+</button>
+<button class="scan-btn" on:click={scanDatabase} disabled={isLoading}>
+Scan Database
+</button>
+</div>

<div class="list">
{#if unclassifiedUsers.length > 0}
{#each unclassifiedUsers as user}
-<div class="list-item">
+<div class="list-item clickable" on:click={() => openUserDetail(user.pubkey, "unclassified")}>
<div class="item-main">
<span class="pubkey" title={user.pubkey}>{formatPubkey(user.pubkey)}</span>
-<span class="event-count">{user.total_events} events</span>
+<span class="event-count">{user.event_count} events</span>
</div>
<div class="item-actions">
-<button class="btn-success" on:click={() => trustPubkey(user.pubkey, "")}>
+<button class="btn-success" on:click|stopPropagation={() => trustPubkey(user.pubkey, "")}>
Trust
</button>
-<button class="btn-danger" on:click={() => blacklistPubkey(user.pubkey, "")}>
+<button class="btn-danger" on:click|stopPropagation={() => blacklistPubkey(user.pubkey, "")}>
Blacklist
</button>
</div>
@@ -840,6 +1105,7 @@
</div>
{/if}
</div>
{/if}
{/if}
</div>
@@ -1149,6 +1415,26 @@
  cursor: not-allowed;
}

.button-row {
  display: flex;
  gap: 0.5rem;
  margin-bottom: 1rem;
}

.scan-btn {
  padding: 0.5rem 1rem;
  background: var(--warning, #f0ad4e);
  color: var(--text-color);
  border: none;
  border-radius: 4px;
  cursor: pointer;
}

.scan-btn:disabled {
  opacity: 0.6;
  cursor: not-allowed;
}

.list {
  border: 1px solid var(--border-color);
  border-radius: 4px;
@@ -1222,6 +1508,26 @@
  font-size: 0.85em;
}

.btn-delete-all {
  padding: 0.35rem 0.75rem;
  background: #8B0000;
  color: white;
  border: none;
  border-radius: 4px;
  cursor: pointer;
  font-size: 0.85em;
  font-weight: 600;
}

.btn-delete-all:hover:not(:disabled) {
  background: #660000;
}

.btn-delete-all:disabled {
  opacity: 0.5;
  cursor: not-allowed;
}

.empty {
  padding: 2rem;
  text-align: center;
@@ -1229,4 +1535,187 @@
  opacity: 0.6;
  font-style: italic;
}

/* Clickable list items */
.list-item.clickable {
  cursor: pointer;
  transition: background-color 0.2s;
}

.list-item.clickable:hover {
  background-color: var(--button-hover-bg);
}

/* User Detail View */
.user-detail-view {
  background: var(--card-bg);
  border-radius: 8px;
  padding: 1.5em;
  border: 1px solid var(--border-color);
}

.detail-header {
  display: flex;
  justify-content: space-between;
  align-items: center;
  margin-bottom: 1.5rem;
  padding-bottom: 1rem;
  border-bottom: 1px solid var(--border-color);
  flex-wrap: wrap;
  gap: 1rem;
}

.detail-header-left {
  display: flex;
  align-items: center;
  gap: 1rem;
  flex-wrap: wrap;
}

.detail-header-left h3 {
  margin: 0;
  color: var(--text-color);
}

.detail-header-right {
  display: flex;
  gap: 0.5rem;
}

.back-btn {
  padding: 0.5rem 1rem;
  background: var(--bg-color);
  color: var(--text-color);
  border: 1px solid var(--border-color);
  border-radius: 4px;
  cursor: pointer;
  font-size: 0.9em;
}

.back-btn:hover {
  background: var(--button-hover-bg);
}

.detail-pubkey {
  font-family: monospace;
  font-size: 0.9em;
  color: var(--text-color);
  background: var(--bg-color);
  padding: 0.25rem 0.5rem;
  border-radius: 4px;
}

.detail-count {
  font-size: 0.85em;
  color: var(--success);
  font-weight: 500;
}

/* Events List */
.events-list {
  max-height: 600px;
  overflow-y: auto;
}

.event-item {
  background: var(--bg-color);
  border: 1px solid var(--border-color);
  border-radius: 6px;
  padding: 1rem;
  margin-bottom: 0.75rem;
}

.event-header {
  display: flex;
  gap: 1rem;
  margin-bottom: 0.5rem;
  flex-wrap: wrap;
  align-items: center;
}

.event-kind {
  background: var(--accent-color);
  color: var(--text-color);
  padding: 0.2rem 0.5rem;
  border-radius: 4px;
  font-size: 0.8em;
  font-weight: 500;
}

.event-id {
  font-family: monospace;
  font-size: 0.8em;
  color: var(--text-color);
  opacity: 0.7;
}

.event-time {
  font-size: 0.8em;
  color: var(--text-color);
  opacity: 0.6;
}

.event-content {
  background: var(--card-bg);
  border-radius: 4px;
  padding: 0.75rem;
  overflow: hidden;
}

.event-content pre {
  margin: 0;
  white-space: pre-wrap;
  word-break: break-word;
  font-family: inherit;
  font-size: 0.9em;
  color: var(--text-color);
  max-height: 150px;
  overflow: hidden;
}

.event-content.expanded pre {
  max-height: none;
}

.expand-btn {
  margin-top: 0.5rem;
  padding: 0.25rem 0.5rem;
  background: transparent;
  color: var(--accent-color);
  border: 1px solid var(--accent-color);
  border-radius: 4px;
  cursor: pointer;
  font-size: 0.8em;
}

.expand-btn:hover {
  background: var(--accent-color);
  color: var(--text-color);
}

.load-more {
  text-align: center;
  padding: 1rem;
}

.load-more button {
  padding: 0.5rem 1.5rem;
  background: var(--info);
  color: var(--text-color);
  border: none;
  border-radius: 4px;
  cursor: pointer;
}

.load-more button:disabled {
  opacity: 0.6;
  cursor: not-allowed;
}

.loading {
  padding: 2rem;
  text-align: center;
  color: var(--text-color);
  opacity: 0.6;
}
</style>
@@ -408,7 +408,7 @@

.kind-number {
  background: var(--primary);
-  color: var(--text-color);
+  color: #ffffff;
  padding: 0.1em 0.4em;
  border: 0;
  font-size: 0.7em;
@@ -455,7 +455,7 @@

.delete-target {
  background: var(--danger);
-  color: var(--text-color);
+  color: #ffffff;
  padding: 0.1em 0.3em;
  border-radius: 0.2rem;
  font-size: 0.7em;
@@ -30,11 +30,23 @@ export const curationKindCategories = [
    kinds: [1063, 20, 21, 22],
  },
  {
-    id: "marketplace",
-    name: "Marketplace",
-    description: "Product listings, stalls, and marketplace events",
+    id: "marketplace_nip15",
+    name: "Marketplace (NIP-15)",
+    description: "Legacy NIP-15 stalls and products",
    kinds: [30017, 30018, 30019, 30020],
  },
+  {
+    id: "marketplace_nip99",
+    name: "Marketplace (NIP-99/Gamma)",
+    description: "NIP-99 classified listings, collections, shipping, reviews (Plebeian Market)",
+    kinds: [30402, 30403, 30405, 30406, 31555],
+  },
+  {
+    id: "order_communication",
+    name: "Order Communication",
+    description: "Gamma Markets order messages and payment receipts (kinds 16, 17)",
+    kinds: [16, 17],
+  },
  {
    id: "groups_nip29",
    name: "Group Messaging (NIP-29)",
@@ -179,6 +179,28 @@ export class Nip07Signer {
  }
}

+// Merge two event arrays, deduplicating by event id
+// Newer events (by created_at) take precedence for same id
+function mergeAndDeduplicateEvents(cached, relay) {
+  const eventMap = new Map();
+
+  // Add cached events first
+  for (const event of cached) {
+    eventMap.set(event.id, event);
+  }
+
+  // Add/update with relay events (they may be newer)
+  for (const event of relay) {
+    const existing = eventMap.get(event.id);
+    if (!existing || event.created_at >= existing.created_at) {
+      eventMap.set(event.id, event);
+    }
+  }
+
+  // Return sorted by created_at descending (newest first)
+  return Array.from(eventMap.values()).sort((a, b) => b.created_at - a.created_at);
+}
+
// IndexedDB helpers for unified event storage
// This provides a local cache that all components can access
const DB_NAME = "nostrCache";
@@ -573,9 +595,10 @@ export async function fetchEvents(filters, options = {}) {
  } = options;

  // Try to get cached events first if requested
+  let cachedEvents = [];
  if (useCache) {
    try {
-      const cachedEvents = await queryEventsFromDB(filters);
+      cachedEvents = await queryEventsFromDB(filters);
      if (cachedEvents.length > 0) {
        console.log(`Found ${cachedEvents.length} cached events in IndexedDB`);
      }
@@ -585,17 +608,19 @@ export async function fetchEvents(filters, options = {}) {
  }

  return new Promise((resolve, reject) => {
-    const events = [];
+    const relayEvents = [];
    const timeoutId = setTimeout(() => {
-      console.log(`Timeout reached after ${timeout}ms, returning ${events.length} events`);
+      console.log(`Timeout reached after ${timeout}ms, returning ${relayEvents.length} relay events`);
      sub.close();

      // Store all received events in IndexedDB before resolving
-      if (events.length > 0) {
-        putEvents(events).catch(e => console.warn("Failed to cache events", e));
+      if (relayEvents.length > 0) {
+        putEvents(relayEvents).catch(e => console.warn("Failed to cache events", e));
      }

-      resolve(events);
+      // Merge cached events with relay events, deduplicate by id
+      const mergedEvents = mergeAndDeduplicateEvents(cachedEvents, relayEvents);
+      resolve(mergedEvents);
    }, timeout);

    try {
@@ -615,22 +640,25 @@ export async function fetchEvents(filters, options = {}) {
          created_at: event.created_at,
          content_preview: event.content?.substring(0, 50)
        });
-        events.push(event);
+        relayEvents.push(event);

+        // Store event immediately in IndexedDB
+        putEvent(event).catch(e => console.warn("Failed to cache event", e));
      },
      oneose() {
-        console.log(`✅ EOSE received for REQ [${subId}], got ${events.length} events`);
+        console.log(`✅ EOSE received for REQ [${subId}], got ${relayEvents.length} relay events`);
        clearTimeout(timeoutId);
        sub.close();

        // Store all events in IndexedDB before resolving
-        if (events.length > 0) {
-          putEvents(events).catch(e => console.warn("Failed to cache events", e));
+        if (relayEvents.length > 0) {
+          putEvents(relayEvents).catch(e => console.warn("Failed to cache events", e));
        }

-        resolve(events);
+        // Merge cached events with relay events, deduplicate by id
+        const mergedEvents = mergeAndDeduplicateEvents(cachedEvents, relayEvents);
+        console.log(`Merged ${cachedEvents.length} cached + ${relayEvents.length} relay = ${mergedEvents.length} total events`);
+        resolve(mergedEvents);
      }
    }
  );
246
docs/BRANDING_GUIDE.md
Normal file
@@ -0,0 +1,246 @@
# White-Label Branding Guide

ORLY supports full white-label branding, allowing relay operators to customize the UI appearance without rebuilding the application. All branding is loaded at runtime from a configuration directory.

## Quick Start

Generate a branding kit:

```bash
# Generic/white-label branding (recommended for customization)
./orly init-branding --style generic

# ORLY-branded template
./orly init-branding --style orly

# Custom output directory
./orly init-branding --style generic /path/to/branding
```

The branding kit is created at `~/.config/ORLY/branding/` by default.

## Directory Structure

```
~/.config/ORLY/branding/
  branding.json      # Main configuration
  assets/
    logo.png         # Header logo (replaces default)
    favicon.png      # Browser favicon
    icon-192.png     # PWA icon 192x192
    icon-512.png     # PWA icon 512x512
  css/
    custom.css       # Full CSS override
    variables.css    # CSS variable overrides only
```

## Configuration (branding.json)

```json
{
  "version": 1,
  "app": {
    "name": "My Relay",
    "shortName": "Relay",
    "title": "My Relay Dashboard",
    "description": "A high-performance Nostr relay"
  },
  "nip11": {
    "name": "My Relay",
    "description": "Custom relay description for NIP-11",
    "icon": "https://example.com/icon.png"
  },
  "manifest": {
    "themeColor": "#4080C0",
    "backgroundColor": "#F0F4F8"
  },
  "assets": {
    "logo": "assets/logo.png",
    "favicon": "assets/favicon.png",
    "icon192": "assets/icon-192.png",
    "icon512": "assets/icon-512.png"
  },
  "css": {
    "customCSS": "css/custom.css",
    "variablesCSS": "css/variables.css"
  }
}
```

### Configuration Sections

| Section | Description |
|---------|-------------|
| `app` | Application name and titles displayed in the UI |
| `nip11` | NIP-11 relay information document fields |
| `manifest` | PWA manifest colors |
| `assets` | Paths to custom images (relative to branding dir) |
| `css` | Paths to custom CSS files |

## Custom Assets

Replace the generated placeholder images with your own:

| Asset | Size | Purpose |
|-------|------|---------|
| `logo.png` | 256x256 recommended | Header logo |
| `favicon.png` | 64x64 | Browser tab icon |
| `icon-192.png` | 192x192 | PWA icon (Android) |
| `icon-512.png` | 512x512 | PWA splash screen |

**Tip**: Use PNG format with transparency for best results.

## CSS Customization

### Quick Theme Changes (variables.css)

Edit `css/variables.css` to change colors without touching component styles:

```css
/* Light theme */
html, body {
  --bg-color: #F0F4F8;
  --header-bg: #FFFFFF;
  --primary: #4080C0;
  --text-color: #334155;
  /* ... see generated file for all variables */
}

/* Dark theme */
body.dark-theme {
  --bg-color: #0F172A;
  --header-bg: #1E293B;
  --primary: #60A5FA;
  --text-color: #F8FAFC;
}
```

### Full CSS Override (custom.css)

Edit `css/custom.css` for complete control over styling:

```css
/* Custom header */
.header {
  box-shadow: 0 2px 10px rgba(0, 0, 0, 0.1);
}

/* Custom buttons */
button {
  border-radius: 8px;
  font-weight: 500;
}

/* Custom cards */
.card {
  border-radius: 12px;
  box-shadow: 0 4px 6px rgba(0, 0, 0, 0.1);
}
```

### Available CSS Variables

#### Background Colors
- `--bg-color` - Main page background
- `--header-bg` - Header background
- `--sidebar-bg` - Sidebar background
- `--card-bg` - Card/container background
- `--panel-bg` - Panel background

#### Text Colors
- `--text-color` - Primary text
- `--text-muted` - Secondary/muted text

#### Theme Colors
- `--primary` - Primary accent color
- `--primary-bg` - Primary background tint
- `--secondary` - Secondary color
- `--accent-color` - Link color
- `--accent-hover-color` - Link hover color

#### Status Colors
- `--success`, `--success-bg`, `--success-text`
- `--warning`, `--warning-bg`
- `--danger`, `--danger-bg`, `--danger-text`
- `--info`

#### Form/Input Colors
- `--input-bg` - Input background
- `--input-border` - Input border
- `--input-text-color` - Input text

#### Button Colors
- `--button-bg` - Default button background
- `--button-hover-bg` - Button hover background
- `--button-text` - Button text color
- `--button-hover-border` - Button hover border

## Environment Variables

| Variable | Default | Description |
|----------|---------|-------------|
| `ORLY_BRANDING_DIR` | `~/.config/ORLY/branding` | Branding directory path |
| `ORLY_BRANDING_ENABLED` | `true` | Enable/disable custom branding |

## Applying Changes

Restart the relay to apply branding changes:

```bash
# Stop and start the relay
pkill orly
./orly
```

Changes to CSS and assets require a restart. The relay logs will show:

```
custom branding loaded from /home/user/.config/ORLY/branding
```

## Branding Endpoints

The relay serves branding assets at these endpoints:

| Endpoint | Description |
|----------|-------------|
| `/branding/logo.png` | Custom logo |
| `/branding/favicon.png` | Custom favicon |
| `/branding/icon-192.png` | PWA icon 192x192 |
| `/branding/icon-512.png` | PWA icon 512x512 |
| `/branding/custom.css` | Combined CSS (variables + custom) |
| `/branding/manifest.json` | Customized PWA manifest |

## Disabling Branding

To use the default ORLY branding:

```bash
# Option 1: Remove branding directory
rm -rf ~/.config/ORLY/branding

# Option 2: Disable via environment
ORLY_BRANDING_ENABLED=false ./orly
```

## Troubleshooting

### Branding not loading
- Check that `~/.config/ORLY/branding/branding.json` exists
- Verify file permissions (readable by relay process)
- Check relay logs for branding load messages

### CSS changes not appearing
- Hard refresh the browser (Ctrl+Shift+R)
- Clear browser cache
- Verify CSS syntax is valid

### Logo not showing
- Ensure image path in `branding.json` is correct
- Check image file exists and is readable
- Use PNG format with appropriate dimensions

### Colors look wrong in light/dark mode
- Light theme uses `html, body` selector
- Dark theme uses `body.dark-theme` selector
- Ensure both themes are defined if customizing
290
docs/NIP-CURATION.md
Normal file
@@ -0,0 +1,290 @@
# NIP-XX: Relay Curation Mode

`draft` `optional`

This NIP defines a relay operating mode where operators can curate content through a three-tier publisher classification system (trusted, blacklisted, unclassified) with rate limiting, IP-based flood protection, and event kind filtering. Configuration and management are performed through Nostr events and a NIP-86 JSON-RPC API.

## Motivation

Public relays face challenges managing spam, abuse, and resource consumption. Traditional approaches (pay-to-relay, invite-only, WoT-based) each have limitations. Curation mode provides relay operators with fine-grained control over who can publish what, while maintaining an open-by-default stance that allows unknown users to participate within limits.

## Overview

Curation mode introduces:

1. **Publisher Classification**: Three-tier system (trusted, blacklisted, unclassified)
2. **Rate Limiting**: Per-pubkey and per-IP daily event limits
3. **Kind Filtering**: Configurable allowed event kinds
4. **Configuration Event**: Kind 30078 replaceable event for relay configuration
5. **Management API**: NIP-86 JSON-RPC endpoints for administration

## Configuration Event (Kind 30078)

The relay MUST be configured with a kind 30078 replaceable event before accepting events from non-owner/admin pubkeys. This event uses the `d` tag value `curating-config`.

### Event Structure

```json
{
  "kind": 30078,
  "tags": [
    ["d", "curating-config"],
    ["daily_limit", "<number>"],
    ["ip_daily_limit", "<number>"],
    ["first_ban_hours", "<number>"],
    ["second_ban_hours", "<number>"],
    ["kind_category", "<category_id>"],
    ["kind", "<kind_number>"],
    ["kind_range", "<start>-<end>"]
  ],
  "content": "{}",
  "pubkey": "<owner_or_admin_pubkey>",
  "created_at": <unix_timestamp>
}
```

### Configuration Tags

| Tag | Description | Default |
|-----|-------------|---------|
| `d` | MUST be `"curating-config"` | Required |
| `daily_limit` | Max events per day for unclassified users | 50 |
| `ip_daily_limit` | Max events per day from a single IP | 500 |
| `first_ban_hours` | First offense IP ban duration (hours) | 1 |
| `second_ban_hours` | Subsequent offense IP ban duration (hours) | 168 |
| `kind_category` | Predefined kind category (repeatable) | - |
| `kind` | Individual allowed kind number (repeatable) | - |
| `kind_range` | Allowed kind range as "start-end" (repeatable) | - |

### Kind Categories

Relays SHOULD support these predefined categories:

| Category ID | Kinds | Description |
|-------------|-------|-------------|
| `social` | 0, 1, 3, 6, 7, 10002 | Profiles, notes, contacts, reposts, reactions, relay lists |
| `dm` | 4, 14, 1059 | Direct messages (NIP-04, NIP-17, gift wraps) |
| `longform` | 30023, 30024 | Long-form articles and drafts |
| `media` | 1063, 20, 21, 22 | File metadata, picture, video, audio events |
| `lists` | 10000, 10001, 10003, 30000, 30001, 30003 | Mute lists, pins, bookmarks, people lists |
| `groups_nip29` | 9-12, 9000-9002, 39000-39002 | NIP-29 relay-based groups |
| `groups_nip72` | 34550, 1111, 4550 | NIP-72 moderated communities |
| `marketplace_nip15` | 30017-30020, 1021, 1022 | NIP-15 stalls and products |
| `marketplace_nip99` | 30402, 30403, 30405, 30406, 31555 | NIP-99 classified listings |
| `order_communication` | 16, 17 | Marketplace order messages |

Relays MAY define additional categories.

### Example Configuration Event

```json
{
  "kind": 30078,
  "tags": [
    ["d", "curating-config"],
    ["daily_limit", "100"],
    ["ip_daily_limit", "1000"],
    ["first_ban_hours", "2"],
    ["second_ban_hours", "336"],
    ["kind_category", "social"],
    ["kind_category", "dm"],
    ["kind", "1984"],
    ["kind_range", "30000-39999"]
  ],
  "content": "{}",
  "pubkey": "a1b2c3...",
  "created_at": 1700000000
}
```
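The `kind` and `kind_range` tags above combine into a single allow-list predicate. The following Go sketch illustrates the parsing (names are hypothetical, not the ORLY implementation; expanding `kind_category` entries into concrete kinds is omitted):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// kindFilter holds allowed kinds parsed from "kind" and "kind_range" tags.
type kindFilter struct {
	exact  map[int]bool
	ranges [][2]int
}

// parseKindTags builds a filter from tag pairs such as ["kind", "1984"]
// and ["kind_range", "30000-39999"]. Malformed tags are skipped.
func parseKindTags(tags [][]string) kindFilter {
	f := kindFilter{exact: map[int]bool{}}
	for _, t := range tags {
		if len(t) < 2 {
			continue
		}
		switch t[0] {
		case "kind":
			if k, err := strconv.Atoi(t[1]); err == nil {
				f.exact[k] = true
			}
		case "kind_range":
			parts := strings.SplitN(t[1], "-", 2)
			if len(parts) == 2 {
				lo, err1 := strconv.Atoi(parts[0])
				hi, err2 := strconv.Atoi(parts[1])
				if err1 == nil && err2 == nil && lo <= hi {
					f.ranges = append(f.ranges, [2]int{lo, hi})
				}
			}
		}
	}
	return f
}

// allowed reports whether an event kind passes the filter.
func (f kindFilter) allowed(kind int) bool {
	if f.exact[kind] {
		return true
	}
	for _, r := range f.ranges {
		if kind >= r[0] && kind <= r[1] {
			return true
		}
	}
	return false
}

func main() {
	f := parseKindTags([][]string{{"kind", "1984"}, {"kind_range", "30000-39999"}})
	fmt.Println(f.allowed(1984), f.allowed(30023), f.allowed(1)) // true true false
}
```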
## Publisher Classification

### Trusted Publishers

- Unlimited publishing rights
- Bypass rate limiting and IP flood protection
- Events visible to all users

### Blacklisted Publishers

- Cannot publish any events
- Events rejected with `"blocked: pubkey is blacklisted"` notice
- Existing events hidden from queries (visible only to admins/owners)

### Unclassified Publishers (Default)

- Subject to daily event limit
- Subject to IP flood protection
- Events visible to all users
- Can be promoted to trusted or demoted to blacklisted

## Event Processing Flow

When an event is received, the relay MUST process it as follows:

1. **Configuration Check**: Reject if relay is not configured (no kind 30078 event)
2. **Access Level Check**: Determine pubkey's access level
   - Owners and admins: always accept, bypass all limits
   - IP-blocked: reject with temporary block notice
   - Blacklisted: reject with blacklist notice
   - Trusted: accept, bypass rate limits
   - Unclassified: continue to rate limit checks
3. **Kind Filter**: Reject if event kind is not in allowed list
4. **Rate Limit Check**:
   - Check pubkey's daily event count against `daily_limit`
   - Check IP's daily event count against `ip_daily_limit`
5. **Accept or Reject**: Accept if all checks pass
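The steps above can be sketched as a single decision function. This is a hypothetical Go illustration of the ordering only, not relay code; the classification lookup, kind filter, and counters are assumed to be supplied by the caller:

```go
package main

import "fmt"

// accessLevel mirrors the classification tiers in this NIP.
type accessLevel int

const (
	levelAdmin accessLevel = iota // owner or admin pubkey
	levelTrusted
	levelUnclassified
	levelBlacklisted
)

// checkEvent applies steps 1-5 of the processing flow and returns
// (accept, rejection reason).
func checkEvent(configured bool, level accessLevel, ipBlocked, kindAllowed bool,
	pubkeyCount, dailyLimit, ipCount, ipDailyLimit int) (bool, string) {
	// Step 1: configuration check.
	if !configured {
		return false, "blocked: relay is not configured"
	}
	// Step 2: access level check, in the order given by the spec list.
	if level == levelAdmin {
		return true, "" // owners/admins bypass all limits
	}
	if ipBlocked {
		return false, "blocked: IP is temporarily blocked"
	}
	if level == levelBlacklisted {
		return false, "blocked: pubkey is blacklisted"
	}
	if level == levelTrusted {
		return true, "" // trusted publishers bypass rate limits
	}
	// Steps 3-4 apply to unclassified publishers.
	if !kindAllowed {
		return false, "blocked: event kind not accepted"
	}
	if pubkeyCount >= dailyLimit {
		return false, "rate-limited: daily event limit reached"
	}
	if ipCount >= ipDailyLimit {
		return false, "rate-limited: IP daily limit reached"
	}
	// Step 5: all checks passed.
	return true, ""
}

func main() {
	ok, reason := checkEvent(true, levelUnclassified, false, true, 49, 50, 10, 500)
	fmt.Println(ok, reason)
}
```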
### IP Flood Protection

When a pubkey exceeds `daily_limit`:

1. Record IP offense
2. If first offense: block IP for `first_ban_hours`
3. If subsequent offense: block IP for `second_ban_hours`
4. Track which pubkeys triggered the offense for admin review
## Management API (NIP-86)
|
||||
|
||||
All management endpoints require NIP-98 HTTP authentication from an owner or admin pubkey.
|
||||
|
||||
### Trust Management
|
||||
|
||||
| Method | Parameters | Description |
|
||||
|--------|------------|-------------|
|
||||
| `trustpubkey` | `[pubkey_hex, note?]` | Add pubkey to trusted list |
|
||||
| `untrustpubkey` | `[pubkey_hex]` | Remove pubkey from trusted list |
|
||||
| `listtrustedpubkeys` | `[]` | List all trusted pubkeys |
|
||||
|
||||
### Blacklist Management
|
||||
|
||||
| Method | Parameters | Description |
|
||||
|--------|------------|-------------|
|
||||
| `blacklistpubkey` | `[pubkey_hex, reason?]` | Add pubkey to blacklist |
|
||||
| `unblacklistpubkey` | `[pubkey_hex]` | Remove pubkey from blacklist |
| `listblacklistedpubkeys` | `[]` | List all blacklisted pubkeys |

### User Inspection

| Method | Parameters | Description |
|--------|------------|-------------|
| `listunclassifiedusers` | `[limit?]` | List unclassified users sorted by event count |
| `geteventsforpubkey` | `[pubkey_hex, limit?, offset?]` | Get events from a pubkey |
| `deleteeventsforpubkey` | `[pubkey_hex]` | Delete all events from a blacklisted pubkey |
| `scanpubkeys` | `[]` | Scan database to populate unclassified users list |

### Spam Management

| Method | Parameters | Description |
|--------|------------|-------------|
| `markspam` | `[event_id_hex, pubkey?, reason?]` | Flag event as spam (hides from queries) |
| `unmarkspam` | `[event_id_hex]` | Remove spam flag |
| `listspamevents` | `[]` | List spam-flagged events |
| `deleteevent` | `[event_id_hex]` | Permanently delete an event |

### IP Management

| Method | Parameters | Description |
|--------|------------|-------------|
| `listblockedips` | `[]` | List currently blocked IPs |
| `unblockip` | `[ip_address]` | Remove IP block |

### Configuration

| Method | Parameters | Description |
|--------|------------|-------------|
| `getcuratingconfig` | `[]` | Get current configuration |
| `isconfigured` | `[]` | Check if relay is configured |
| `supportedmethods` | `[]` | List available management methods |

### Example API Request

```http
POST /api HTTP/1.1
Host: relay.example.com
Authorization: Nostr <base64_nip98_event>
Content-Type: application/json
```

### Example API Response

```json
{
  "result": {
    "success": true,
    "message": "Pubkey added to trusted list"
  }
}
```

## Event Visibility

| Viewer | Sees Trusted Events | Sees Blacklisted Events | Sees Spam-Flagged Events |
|--------|---------------------|-------------------------|--------------------------|
| Owner/Admin | Yes | Yes | Yes |
| Regular User | Yes | No | No |

## Relay Information Document

Relays implementing this NIP SHOULD advertise it in their NIP-11 relay information document:

```json
{
  "supported_nips": [11, 86, "XX"],
  "limitation": {
    "curation_mode": true,
    "daily_limit": 50,
    "ip_daily_limit": 500
  }
}
```

## Implementation Notes

### Rate Limit Reset

Daily counters SHOULD reset at UTC midnight (00:00:00 UTC).

### Caching

Implementations SHOULD cache trusted/blacklisted status and allowed kinds in memory for performance, refreshing periodically (e.g., hourly).

### Database Keys

Suggested key prefixes for persistent storage:

- `CURATING_ACL_CONFIG` - Current configuration
- `CURATING_ACL_TRUSTED_PUBKEY_{pubkey}` - Trusted publishers
- `CURATING_ACL_BLACKLISTED_PUBKEY_{pubkey}` - Blacklisted publishers
- `CURATING_ACL_EVENT_COUNT_{pubkey}_{date}` - Daily event counts
- `CURATING_ACL_IP_EVENT_COUNT_{ip}_{date}` - IP daily event counts
- `CURATING_ACL_IP_OFFENSE_{ip}` - IP offense tracking
- `CURATING_ACL_BLOCKED_IP_{ip}` - Active IP blocks
- `CURATING_ACL_SPAM_EVENT_{event_id}` - Spam-flagged events
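Key construction from these templates is straightforward string formatting. A hedged sketch of two of the keys (the helper names here are illustrative, not the reference implementation's):

```go
package main

import "fmt"

// eventCountKey builds the suggested per-pubkey daily counter key.
func eventCountKey(pubkey, date string) string {
	return fmt.Sprintf("CURATING_ACL_EVENT_COUNT_%s_%s", pubkey, date)
}

// blockedIPKey builds the suggested active IP block key.
func blockedIPKey(ip string) string {
	return fmt.Sprintf("CURATING_ACL_BLOCKED_IP_%s", ip)
}

func main() {
	fmt.Println(eventCountKey("abc123", "2025-01-02"))
	fmt.Println(blockedIPKey("203.0.113.7"))
}
```

Because the date is embedded in the counter key, a prefix scan over `CURATING_ACL_EVENT_COUNT_{pubkey}_` also gives per-pubkey history for free.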

## Security Considerations

1. **NIP-98 Authentication**: All management API calls MUST require valid NIP-98 authentication from owner or admin pubkeys
2. **IP Spoofing**: Relays SHOULD use `X-Forwarded-For` or `X-Real-IP` headers carefully, only trusting them from known reverse proxies
3. **Rate Limit Bypass**: Trusted status should be granted carefully as it bypasses all rate limiting
4. **Event Deletion**: Deleted events cannot be recovered; implementations SHOULD consider soft-delete with an admin recovery option

## Compatibility

This NIP is compatible with:

- NIP-42 (Authentication): Can require auth before accepting events
- NIP-86 (Relay Management API): Uses NIP-86 for management endpoints
- NIP-98 (HTTP Auth): Uses NIP-98 for API authentication

## Reference Implementation

- ORLY Relay: https://github.com/mleku/orly

## Changelog

- Initial draft

@@ -137,7 +137,7 @@ Where `payload` is the standard Nostr message array, e.g.:

The encrypted content structure:

```json
{
-  "type": "EVENT" | "OK" | "EOSE" | "NOTICE" | "CLOSED" | "COUNT" | "AUTH",
+  "type": "EVENT" | "OK" | "EOSE" | "NOTICE" | "CLOSED" | "COUNT" | "AUTH" | "CHUNK",
  "payload": <standard_nostr_response_array>
}
```

@@ -150,6 +150,7 @@ Where `payload` is the standard Nostr response array, e.g.:

- `["CLOSED", "<sub_id>", "<message>"]`
- `["COUNT", "<sub_id>", {"count": <n>}]`
- `["AUTH", "<challenge>"]`
+- `[<chunk_object>]` (for CHUNK type, see Message Segmentation)

### Session Management

@@ -168,6 +169,85 @@ The conversation key is derived from:

- **Secret-based auth**: ECDH between client's secret key (derived from URI secret) and relay's public key
- **CAT auth**: ECDH between client's Nostr key and relay's public key

### Message Segmentation

Some Nostr events exceed the typical relay message size limits (commonly 64KB). NRC supports message segmentation to handle large payloads by splitting them into multiple chunks.

#### When to Chunk

Senders SHOULD chunk messages when the JSON-serialized response exceeds 40KB. This threshold accounts for:

- NIP-44 encryption overhead (~100 bytes)
- Base64 encoding expansion (~33%)
- Event wrapper overhead (tags, signature, etc.)

#### Chunk Message Format

When a response is too large, it is split into multiple CHUNK responses:

```json
{
  "type": "CHUNK",
  "payload": [{
    "type": "CHUNK",
    "messageId": "<uuid>",
    "index": 0,
    "total": 3,
    "data": "<base64_encoded_chunk>"
  }]
}
```

Fields:

- `messageId`: A unique identifier (UUID) for the chunked message, used to correlate chunks
- `index`: Zero-based chunk index (0, 1, 2, ...)
- `total`: Total number of chunks in this message
- `data`: Base64-encoded segment of the original message

#### Chunking Process (Sender)

1. Serialize the original response message to JSON
2. If the serialized length exceeds the threshold (40KB), proceed with chunking
3. Encode the JSON string as UTF-8, then Base64 encode it
4. Split the Base64 string into chunks of the maximum chunk size
5. Generate a unique `messageId` (UUID recommended)
6. Send each chunk as a separate CHUNK response event

Example encoding (JavaScript):

```javascript
const encoded = btoa(unescape(encodeURIComponent(jsonString)))
```

#### Reassembly Process (Receiver)

1. When receiving a CHUNK response, buffer it by `messageId`
2. Track received chunks by `index`
3. When all chunks are received (`chunks.size === total`):
   a. Concatenate chunk data in index order (0, 1, 2, ...)
   b. Base64 decode the concatenated string
   c. Parse as UTF-8 JSON to recover the original response
4. Process the reassembled response as normal
5. Clean up the chunk buffer

Example decoding (JavaScript):

```javascript
const jsonString = decodeURIComponent(escape(atob(concatenatedBase64)))
const response = JSON.parse(jsonString)
```

#### Chunk Buffer Management

Receivers MUST implement chunk buffer cleanup:

- Discard incomplete chunk buffers after 60 seconds of inactivity
- Limit the number of concurrent incomplete messages to prevent memory exhaustion
- Log warnings when discarding stale buffers for debugging

#### Ordering and Reliability

- Chunks MAY arrive out of order; receivers MUST reassemble by index
- Missing chunks result in message loss; the incomplete buffer is eventually discarded
- Duplicate chunks (same messageId + index) SHOULD be ignored
- Each chunk is sent as a separate encrypted NRC response event
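
The sender and receiver steps above can be sketched in Go. This is a minimal illustration of the split/reassemble logic only (no encryption, no 60-second timeout, no UUID generation); the struct and function names are assumptions, not the reference implementation's:

```go
package main

import (
	"encoding/base64"
	"fmt"
	"sort"
	"strings"
)

// chunk mirrors the CHUNK payload object from the spec.
type chunk struct {
	MessageID string
	Index     int
	Total     int
	Data      string
}

// splitMessage base64-encodes a serialized JSON message and splits the
// encoded string into fixed-size chunks, as in the sender steps above.
func splitMessage(jsonMsg, messageID string, maxChunk int) []chunk {
	b64 := base64.StdEncoding.EncodeToString([]byte(jsonMsg))
	var out []chunk
	for i := 0; i < len(b64); i += maxChunk {
		end := i + maxChunk
		if end > len(b64) {
			end = len(b64)
		}
		out = append(out, chunk{MessageID: messageID, Index: len(out), Data: b64[i:end]})
	}
	for i := range out {
		out[i].Total = len(out)
	}
	return out
}

// reassemble sorts chunks by index, concatenates their data, and
// base64-decodes the result back to the original JSON string.
func reassemble(chunks []chunk) (string, error) {
	sort.Slice(chunks, func(i, j int) bool { return chunks[i].Index < chunks[j].Index })
	var sb strings.Builder
	for _, c := range chunks {
		sb.WriteString(c.Data)
	}
	raw, err := base64.StdEncoding.DecodeString(sb.String())
	return string(raw), err
}

func main() {
	msg := `{"type":"EVENT","payload":["EVENT","sub1",{}]}`
	chunks := splitMessage(msg, "uuid-1", 24)
	// Deliver out of order; reassembly is index-ordered, per the spec.
	shuffled := []chunk{chunks[2], chunks[0], chunks[1]}
	got, _ := reassemble(shuffled)
	fmt.Println(got == msg, len(chunks)) // true 3
}
```

Note that splitting the base64 string (rather than the raw bytes) keeps padding only at the very end, so plain concatenation before decoding is always valid.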

### Authentication

#### Secret-Based Authentication

@@ -208,6 +288,9 @@ The conversation key is derived from:

4. Match responses using the `e` tag (references request event ID)
5. Handle EOSE by waiting for kind 24892 with type "EOSE" in content
6. For subscriptions, maintain mapping of internal sub IDs to tunnel session
+7. **Chunking**: Maintain a chunk buffer map keyed by `messageId`
+8. **Chunking**: When receiving CHUNK responses, buffer chunks and reassemble when complete
+9. **Chunking**: Implement 60-second timeout for incomplete chunk buffers

## Bridge Implementation Notes

@@ -217,10 +300,14 @@ The conversation key is derived from:

4. Capture all relay responses and wrap in kind 24892
5. Sign with relay's key and publish to rendezvous relay
6. Maintain session state for subscription mapping
+7. **Chunking**: Check response size before sending; chunk if > 40KB
+8. **Chunking**: Use consistent messageId (UUID) across all chunks of a message
+9. **Chunking**: Send chunks in order (index 0, 1, 2, ...) for optimal reassembly

## Reference Implementations

-- ORLY Relay: [https://git.mleku.dev/mleku/next.orly.dev](https://git.mleku.dev/mleku/next.orly.dev)
+- ORLY Relay (Bridge): [https://git.mleku.dev/mleku/next.orly.dev](https://git.mleku.dev/mleku/next.orly.dev)
- Smesh Client: [https://git.mleku.dev/mleku/smesh](https://git.mleku.dev/mleku/smesh)

## See Also

go.mod (8 lines changed)

@@ -3,12 +3,14 @@ module next.orly.dev

go 1.25.3

require (
-	git.mleku.dev/mleku/nostr v1.0.12
+	git.mleku.dev/mleku/nostr v1.0.13
	github.com/adrg/xdg v0.5.3
	github.com/alexflint/go-arg v1.6.1
	github.com/aperturerobotics/go-indexeddb v0.2.3
+	github.com/bits-and-blooms/bloom/v3 v3.7.1
	github.com/decred/dcrd/dcrec/secp256k1/v4 v4.4.0
	github.com/dgraph-io/badger/v4 v4.8.0
+	github.com/google/uuid v1.6.0
	github.com/gorilla/websocket v1.5.3
	github.com/hack-pad/safejs v0.1.1
	github.com/kardianos/osext v0.0.0-20190222173326-2bc1f35cddc0
@@ -22,6 +24,7 @@ require (
	github.com/stretchr/testify v1.11.1
	github.com/vertex-lab/nostr-sqlite v0.3.2
	go-simpler.org/env v0.12.0
+	go.etcd.io/bbolt v1.4.3
	go.uber.org/atomic v1.11.0
	golang.org/x/crypto v0.46.0
	golang.org/x/lint v0.0.0-20241112194109-818c5a804067
@@ -37,7 +40,6 @@ require (
	github.com/ImVexed/fasturl v0.0.0-20230304231329-4e41488060f3 // indirect
	github.com/alexflint/go-scalar v1.2.0 // indirect
	github.com/bits-and-blooms/bitset v1.24.2 // indirect
-	github.com/bits-and-blooms/bloom/v3 v3.7.1 // indirect
	github.com/btcsuite/btcd/btcec/v2 v2.3.4 // indirect
	github.com/btcsuite/btcd/chaincfg/chainhash v1.1.0 // indirect
	github.com/bytedance/sonic v1.13.1 // indirect
@@ -56,7 +58,6 @@ require (
	github.com/google/btree v1.1.2 // indirect
	github.com/google/flatbuffers v25.9.23+incompatible // indirect
	github.com/google/pprof v0.0.0-20251007162407-5df77e3f7d1d // indirect
-	github.com/google/uuid v1.6.0 // indirect
	github.com/josharian/intern v1.0.0 // indirect
	github.com/json-iterator/go v1.1.12 // indirect
	github.com/klauspost/cpuid/v2 v2.3.0 // indirect
@@ -72,7 +73,6 @@ require (
	github.com/tidwall/match v1.1.1 // indirect
	github.com/tidwall/pretty v1.2.1 // indirect
	github.com/twitchyliquid64/golang-asm v0.15.1 // indirect
-	go.etcd.io/bbolt v1.4.3 // indirect
	go.opentelemetry.io/auto/sdk v1.2.1 // indirect
	go.opentelemetry.io/otel v1.38.0 // indirect
	go.opentelemetry.io/otel/metric v1.38.0 // indirect

go.sum (5 lines changed)

@@ -1,5 +1,5 @@
-git.mleku.dev/mleku/nostr v1.0.12 h1:bjsFUh1Q3fGpU7qsqxggGgrGGUt2OBdu1w8hjDM4gJE=
-git.mleku.dev/mleku/nostr v1.0.12/go.mod h1:kJwSMmLRnAJ7QJtgXDv2wGgceFU0luwVqrgAL3MI93M=
+git.mleku.dev/mleku/nostr v1.0.13 h1:FqeOQ9ZX8AFVsAI6XisQkB6cgmhn9DNQ2a8li9gx7aY=
+git.mleku.dev/mleku/nostr v1.0.13/go.mod h1:kJwSMmLRnAJ7QJtgXDv2wGgceFU0luwVqrgAL3MI93M=
github.com/BurntSushi/toml v1.5.0 h1:W5quZX/G/csjUnuI8SUYlsHs9M38FC7znL0lIO+DvMg=
github.com/BurntSushi/toml v1.5.0/go.mod h1:ukJfTF/6rtPPRCnwkur4qwRxa8vTRFBF0uk2lLoLwho=
github.com/ImVexed/fasturl v0.0.0-20230304231329-4e41488060f3 h1:ClzzXMDDuUbWfNNZqGeYq4PnYOlwlOVIvSyNaIy0ykg=
@@ -161,6 +161,7 @@ github.com/tidwall/pretty v1.2.1 h1:qjsOFOWWQl+N3RsoF5/ssm1pHmJJwhjlSbZ51I6wMl4=
github.com/tidwall/pretty v1.2.1/go.mod h1:ITEVvHYasfjBbM0u2Pg8T2nJnzm8xPwvNhhsoaGGjNU=
github.com/twitchyliquid64/golang-asm v0.15.1 h1:SU5vSMR7hnwNxj24w34ZyCi/FmDZTkS4MhqMhdFk5YI=
github.com/twitchyliquid64/golang-asm v0.15.1/go.mod h1:a1lVb/DtPvCB8fslRZhAngC2+aY1QWCk3Cedj/Gdt08=
github.com/twmb/murmur3 v1.1.8 h1:8Yt9taO/WN3l08xErzjeschgZU2QSrwm1kclYq+0aRg=
github.com/twmb/murmur3 v1.1.8/go.mod h1:Qq/R7NUyOfr65zD+6Q5IHKsJLwP7exErjN6lyyq3OSQ=
github.com/vertex-lab/nostr-sqlite v0.3.2 h1:8nZYYIwiKnWLA446qA/wL/Gy+bU0kuaxdLfUyfeTt/E=
github.com/vertex-lab/nostr-sqlite v0.3.2/go.mod h1:5bw1wMgJhSdrumsZAWxqy+P0u1g+q02PnlGQn15dnSM=

main.go (43 lines changed)

@@ -8,6 +8,7 @@ import (
	"os"
	"os/exec"
	"os/signal"
	"path/filepath"
	"runtime"
	"runtime/debug"
	"strings"
@@ -15,16 +16,18 @@ import (
	"syscall"
	"time"

	"github.com/adrg/xdg"
	"github.com/pkg/profile"
	"golang.org/x/term"
	"lol.mleku.dev/chk"
	"lol.mleku.dev/log"
	"next.orly.dev/app"
	"next.orly.dev/app/branding"
	"next.orly.dev/app/config"
	"next.orly.dev/pkg/acl"
	"git.mleku.dev/mleku/nostr/crypto/keys"
	"git.mleku.dev/mleku/nostr/encoders/bech32encoding"
-	_ "next.orly.dev/pkg/bbolt" // Import for bbolt factory registration
+	bboltdb "next.orly.dev/pkg/bbolt" // Import for bbolt factory and type
	"next.orly.dev/pkg/database"
	neo4jdb "next.orly.dev/pkg/neo4j" // Import for neo4j factory and type
	"git.mleku.dev/mleku/nostr/encoders/hex"
@@ -49,6 +52,40 @@ func main() {
	}
	log.I.F("starting %s %s", cfg.AppName, version.V)

	// Handle 'init-branding' subcommand: create branding directory with default assets
	if requested, targetDir, style := config.InitBrandingRequested(); requested {
		if targetDir == "" {
			targetDir = filepath.Join(xdg.ConfigHome, cfg.AppName, "branding")
		}

		// Validate and convert style
		var brandingStyle branding.BrandingStyle
		switch style {
		case "orly":
			brandingStyle = branding.StyleORLY
		case "generic", "":
			brandingStyle = branding.StyleGeneric
		default:
			fmt.Fprintf(os.Stderr, "Unknown style: %s (use 'orly' or 'generic')\n", style)
			os.Exit(1)
		}

		fmt.Printf("Initializing %s branding kit at: %s\n", style, targetDir)
		if err := branding.InitBrandingKit(targetDir, app.GetEmbeddedWebFS(), brandingStyle); err != nil {
			fmt.Fprintf(os.Stderr, "Error: %v\n", err)
			os.Exit(1)
		}
		fmt.Println("\nBranding kit created successfully!")
		fmt.Println("\nFiles created:")
		fmt.Println("  branding.json - Main configuration file")
		fmt.Println("  assets/ - Logo, favicon, and PWA icons")
		fmt.Println("  css/custom.css - Full CSS override template")
		fmt.Println("  css/variables.css - CSS variables-only template")
		fmt.Println("\nEdit these files to customize your relay's appearance.")
		fmt.Println("Restart the relay to apply changes.")
		os.Exit(0)
	}

	// Handle 'identity' subcommand: print relay identity secret and pubkey and exit
	if config.IdentityRequested() {
		ctx, cancel := context.WithCancel(context.Background())
@@ -617,6 +654,10 @@ func main() {
			n4jDB.MaxConcurrentQueries(),
		)
		log.I.F("rate limiter configured for Neo4j backend (target: %dMB)", targetMB)
	} else if _, ok := db.(*bboltdb.B); ok {
		// BBolt uses memory-mapped IO, so memory-only limiter is appropriate
		limiter = ratelimit.NewMemoryOnlyLimiter(rlConfig)
		log.I.F("rate limiter configured for BBolt backend (target: %dMB)", targetMB)
	} else {
		// For other backends, create a disabled limiter
		limiter = ratelimit.NewDisabledLimiter()

@@ -138,7 +138,7 @@ func (f *Follows) Configure(cfg ...any) (err error) {
	if f.cfg.FollowsThrottleEnabled {
		perEvent := f.cfg.FollowsThrottlePerEvent
		if perEvent == 0 {
-			perEvent = 200 * time.Millisecond
+			perEvent = 25 * time.Millisecond
		}
		maxDelay := f.cfg.FollowsThrottleMaxDelay
		if maxDelay == 0 {

@@ -200,6 +200,12 @@ func (s *Server) handleUpload(w http.ResponseWriter, r *http.Request) {
		return
	}

	// Check bandwidth rate limit (non-followed users)
	if !s.checkBandwidthLimit(pubkey, remoteAddr, int64(len(body))) {
		s.setErrorResponse(w, http.StatusTooManyRequests, "upload rate limit exceeded, try again later")
		return
	}

	// Calculate SHA256 after auth check
	sha256Hash := CalculateSHA256(body)
	sha256Hex := hex.Enc(sha256Hash)
@@ -647,6 +653,12 @@ func (s *Server) handleMirror(w http.ResponseWriter, r *http.Request) {
		return
	}

	// Check bandwidth rate limit (non-followed users)
	if !s.checkBandwidthLimit(pubkey, remoteAddr, int64(len(body))) {
		s.setErrorResponse(w, http.StatusTooManyRequests, "upload rate limit exceeded, try again later")
		return
	}

	// Note: pubkey may be nil for anonymous uploads if ACL allows it

	// Detect MIME type from remote response
@@ -726,6 +738,12 @@ func (s *Server) handleMediaUpload(w http.ResponseWriter, r *http.Request) {
		return
	}

	// Check bandwidth rate limit (non-followed users)
	if !s.checkBandwidthLimit(pubkey, remoteAddr, int64(len(body))) {
		s.setErrorResponse(w, http.StatusTooManyRequests, "upload rate limit exceeded, try again later")
		return
	}

	// Note: pubkey may be nil for anonymous uploads if ACL allows it

	// Optimize media (placeholder - actual optimization would be implemented here)

pkg/blossom/ratelimit.go (new file, 131 lines)

@@ -0,0 +1,131 @@
package blossom

import (
	"sync"
	"time"
)

// BandwidthState tracks upload bandwidth for an identity
type BandwidthState struct {
	BucketBytes int64     // Current token bucket level (bytes available)
	LastUpdate  time.Time // Last time bucket was updated
}

// BandwidthLimiter implements token bucket rate limiting for uploads.
// Each identity gets a bucket that replenishes at dailyLimit/day rate.
// Uploads consume tokens from the bucket.
type BandwidthLimiter struct {
	mu         sync.Mutex
	states     map[string]*BandwidthState // keyed by pubkey hex or IP
	dailyLimit int64                      // bytes per day
	burstLimit int64                      // max bucket size (burst capacity)
	refillRate float64                    // bytes per second refill rate
}

// NewBandwidthLimiter creates a new bandwidth limiter.
// dailyLimitMB is the average daily limit in megabytes.
// burstLimitMB is the maximum burst capacity in megabytes.
func NewBandwidthLimiter(dailyLimitMB, burstLimitMB int64) *BandwidthLimiter {
	dailyBytes := dailyLimitMB * 1024 * 1024
	burstBytes := burstLimitMB * 1024 * 1024

	return &BandwidthLimiter{
		states:     make(map[string]*BandwidthState),
		dailyLimit: dailyBytes,
		burstLimit: burstBytes,
		refillRate: float64(dailyBytes) / 86400.0, // bytes per second
	}
}

// CheckAndConsume checks if an upload of the given size is allowed for the identity,
// and if so, consumes the tokens. Returns true if allowed, false if rate limited.
// The identity should be pubkey hex for authenticated users, or IP for anonymous.
func (bl *BandwidthLimiter) CheckAndConsume(identity string, sizeBytes int64) bool {
	bl.mu.Lock()
	defer bl.mu.Unlock()

	now := time.Now()
	state, exists := bl.states[identity]

	if !exists {
		// New identity starts with full burst capacity
		state = &BandwidthState{
			BucketBytes: bl.burstLimit,
			LastUpdate:  now,
		}
		bl.states[identity] = state
	} else {
		// Refill bucket based on elapsed time
		elapsed := now.Sub(state.LastUpdate).Seconds()
		refill := int64(elapsed * bl.refillRate)
		state.BucketBytes += refill
		if state.BucketBytes > bl.burstLimit {
			state.BucketBytes = bl.burstLimit
		}
		state.LastUpdate = now
	}

	// Check if upload fits in bucket
	if state.BucketBytes >= sizeBytes {
		state.BucketBytes -= sizeBytes
		return true
	}

	return false
}

// GetAvailable returns the currently available bytes for an identity.
func (bl *BandwidthLimiter) GetAvailable(identity string) int64 {
	bl.mu.Lock()
	defer bl.mu.Unlock()

	state, exists := bl.states[identity]
	if !exists {
		return bl.burstLimit // New users have full capacity
	}

	// Calculate current level with refill
	now := time.Now()
	elapsed := now.Sub(state.LastUpdate).Seconds()
	refill := int64(elapsed * bl.refillRate)
	available := state.BucketBytes + refill
	if available > bl.burstLimit {
		available = bl.burstLimit
	}

	return available
}

// GetTimeUntilAvailable returns how long until the given bytes will be available.
func (bl *BandwidthLimiter) GetTimeUntilAvailable(identity string, sizeBytes int64) time.Duration {
	available := bl.GetAvailable(identity)
	if available >= sizeBytes {
		return 0
	}

	needed := sizeBytes - available
	seconds := float64(needed) / bl.refillRate
	return time.Duration(seconds * float64(time.Second))
}

// Cleanup removes entries that have fully replenished (at burst limit).
func (bl *BandwidthLimiter) Cleanup() {
	bl.mu.Lock()
	defer bl.mu.Unlock()

	now := time.Now()
	for key, state := range bl.states {
		elapsed := now.Sub(state.LastUpdate).Seconds()
		refill := int64(elapsed * bl.refillRate)
		if state.BucketBytes+refill >= bl.burstLimit {
			delete(bl.states, key)
		}
	}
}

// Stats returns the number of tracked identities.
func (bl *BandwidthLimiter) Stats() int {
	bl.mu.Lock()
	defer bl.mu.Unlock()
	return len(bl.states)
}
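
The core arithmetic of this limiter (refill at `dailyLimit/86400` bytes per second, capped at the burst limit) can be checked in isolation. A standalone sketch of just the refill step, not the file above:

```go
package main

import "fmt"

// refill computes the replenished bucket level after elapsed seconds,
// mirroring the token-bucket arithmetic used by BandwidthLimiter:
// add elapsed*rate tokens, then clamp to the burst capacity.
func refill(bucket, burst int64, ratePerSec float64, elapsedSec float64) int64 {
	bucket += int64(elapsedSec * ratePerSec)
	if bucket > burst {
		bucket = burst
	}
	return bucket
}

func main() {
	const mb = int64(1024 * 1024)
	daily := 10 * mb
	burst := 50 * mb
	rate := float64(daily) / 86400.0 // ~121 bytes/sec for a 10MB/day limit

	// An empty bucket recovers the full daily allowance over one day...
	fmt.Println(refill(0, burst, rate, 86400) >= daily-1) // true
	// ...and a full bucket never exceeds the burst cap.
	fmt.Println(refill(burst, burst, rate, 86400) == burst) // true
}
```

The burst cap is what lets a new or idle identity upload a large blob immediately while still bounding long-run average throughput to the daily limit.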

@@ -19,6 +19,9 @@ type Server struct {
	maxBlobSize      int64
	allowedMimeTypes map[string]bool
	requireAuth      bool

	// Rate limiting for uploads
	bandwidthLimiter *BandwidthLimiter
}

// Config holds configuration for the Blossom server
@@ -27,6 +30,11 @@ type Config struct {
	MaxBlobSize      int64
	AllowedMimeTypes []string
	RequireAuth      bool

	// Rate limiting (for non-followed users)
	RateLimitEnabled bool
	DailyLimitMB     int64
	BurstLimitMB     int64
}

// NewServer creates a new Blossom server instance
@@ -48,6 +56,20 @@ func NewServer(db *database.D, aclRegistry *acl.S, cfg *Config) *Server {
		}
	}

	// Initialize bandwidth limiter if enabled
	var bwLimiter *BandwidthLimiter
	if cfg.RateLimitEnabled {
		dailyMB := cfg.DailyLimitMB
		if dailyMB <= 0 {
			dailyMB = 10 // 10MB default
		}
		burstMB := cfg.BurstLimitMB
		if burstMB <= 0 {
			burstMB = 50 // 50MB default burst
		}
		bwLimiter = NewBandwidthLimiter(dailyMB, burstMB)
	}

	return &Server{
		db:      db,
		storage: storage,
@@ -56,6 +78,7 @@ func NewServer(db *database.D, aclRegistry *acl.S, cfg *Config) *Server {
		maxBlobSize:      cfg.MaxBlobSize,
		allowedMimeTypes: allowedMap,
		requireAuth:      cfg.RequireAuth,
		bandwidthLimiter: bwLimiter,
	}
}

@@ -208,6 +231,44 @@ func (s *Server) checkACL(
	return actual >= required
}

// isRateLimitExempt returns true if the user is exempt from rate limiting.
// Users with write access or higher (followed users, admins, owners) are exempt.
func (s *Server) isRateLimitExempt(pubkey []byte, remoteAddr string) bool {
	if s.acl == nil {
		return true // No ACL configured, no rate limiting
	}

	level := s.acl.GetAccessLevel(pubkey, remoteAddr)

	// Followed users get "write" level, admins/owners get higher
	// Only "read" and "none" are rate limited
	return level == "write" || level == "admin" || level == "owner"
}

// checkBandwidthLimit checks if the upload is allowed under rate limits.
// Returns true if allowed, false if rate limited.
// Exempt users (followed, admin, owner) always return true.
func (s *Server) checkBandwidthLimit(pubkey []byte, remoteAddr string, sizeBytes int64) bool {
	if s.bandwidthLimiter == nil {
		return true // No rate limiting configured
	}

	// Check if user is exempt
	if s.isRateLimitExempt(pubkey, remoteAddr) {
		return true
	}

	// Use pubkey hex if available, otherwise IP
	var identity string
	if len(pubkey) > 0 {
		identity = string(pubkey) // Will be converted to hex in handler
	} else {
		identity = remoteAddr
	}

	return s.bandwidthLimiter.CheckAndConsume(identity, sizeBytes)
}

// BaseURLKey is the context key for the base URL (exported for use by app handler)
type BaseURLKey struct{}

@@ -3,6 +3,7 @@ package issuer

import (
	"context"
+	"encoding/hex"
	"errors"
	"fmt"
	"time"
@@ -222,22 +223,28 @@ func (i *Issuer) GetActiveKeysetID() string {

// MintInfo contains public information about the mint.
type MintInfo struct {
-	Name          string `json:"name,omitempty"`
-	Version       string `json:"version"`
-	TokenTTL      int64  `json:"token_ttl"`
-	MaxKinds      int    `json:"max_kinds,omitempty"`
-	MaxKindRanges int    `json:"max_kind_ranges,omitempty"`
+	Name            string   `json:"name,omitempty"`
+	Version         string   `json:"version"`
+	Pubkey          string   `json:"pubkey"`
+	TokenTTL        int64    `json:"token_ttl"`
+	MaxKinds        int      `json:"max_kinds,omitempty"`
+	MaxKindRanges   int      `json:"max_kind_ranges,omitempty"`
+	SupportedScopes []string `json:"supported_scopes,omitempty"`
}

// GetMintInfo returns public information about the issuer.
func (i *Issuer) GetMintInfo(name string) MintInfo {
+	var pubkeyHex string
+	if ks := i.keysets.GetSigningKeyset(); ks != nil {
+		pubkeyHex = hex.EncodeToString(ks.SerializePublicKey())
+	}
	return MintInfo{
-		Name:          name,
-		Version:       "NIP-XX/1",
-		TokenTTL:      int64(i.config.DefaultTTL.Seconds()),
-		MaxKinds:      i.config.MaxKinds,
-		MaxKindRanges: i.config.MaxKindRanges,
+		Name:            name,
+		Version:         "NIP-XX/1",
+		Pubkey:          pubkeyHex,
+		TokenTTL:        int64(i.config.DefaultTTL.Seconds()),
+		MaxKinds:        i.config.MaxKinds,
+		MaxKindRanges:   i.config.MaxKindRanges,
+		SupportedScopes: i.config.AllowedScopes,
	}
}

@@ -4,12 +4,17 @@ package database

import (
	"bytes"
	"context"
	"encoding/json"
	"fmt"
	"sort"
	"time"

	"github.com/dgraph-io/badger/v4"
	"github.com/minio/sha256-simd"
	"git.mleku.dev/mleku/nostr/encoders/filter"
	"git.mleku.dev/mleku/nostr/encoders/hex"
	"git.mleku.dev/mleku/nostr/encoders/tag"
)

// CuratingConfig represents the configuration for curating ACL mode
@@ -965,14 +970,17 @@ func kindInRange(kind int, rangeStr string) bool {
// kindInCategory checks if a kind belongs to a predefined category
func kindInCategory(kind int, category string) bool {
	categories := map[string][]int{
-		"social":       {0, 1, 3, 6, 7, 10002},
-		"dm":           {4, 14, 1059},
-		"longform":     {30023, 30024},
-		"media":        {1063, 20, 21, 22},
-		"marketplace":  {30017, 30018, 30019, 30020, 1021, 1022},
-		"groups_nip29": {9, 10, 11, 12, 9000, 9001, 9002, 39000, 39001, 39002},
-		"groups_nip72": {34550, 1111, 4550},
-		"lists":        {10000, 10001, 10003, 30000, 30001, 30003},
+		"social":              {0, 1, 3, 6, 7, 10002},
+		"dm":                  {4, 14, 1059},
+		"longform":            {30023, 30024},
+		"media":               {1063, 20, 21, 22},
+		"marketplace":         {30017, 30018, 30019, 30020, 1021, 1022}, // Legacy alias
+		"marketplace_nip15":   {30017, 30018, 30019, 30020, 1021, 1022},
+		"marketplace_nip99":   {30402, 30403, 30405, 30406, 31555}, // NIP-99/Gamma Markets (Plebeian Market)
+		"order_communication": {16, 17}, // Gamma Markets order messages
+		"groups_nip29":        {9, 10, 11, 12, 9000, 9001, 9002, 39000, 39001, 39002},
+		"groups_nip72":        {34550, 1111, 4550},
+		"lists":               {10000, 10001, 10003, 30000, 30001, 30003},
	}

	kinds, ok := categories[category]
@@ -987,3 +995,236 @@ func kindInCategory(kind int, category string) bool {
	}
	return false
}
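
Alongside category lookup, the code above also references `kindInRange(kind, rangeStr)`. A hedged sketch of how such a check could work, assuming a `"min-max"` range string format (the actual parser in the relay may differ):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// kindInRange checks a kind against a "min-max" range string, e.g. "30000-39999".
// The range format here is an assumption for illustration only.
func kindInRange(kind int, rangeStr string) bool {
	parts := strings.SplitN(rangeStr, "-", 2)
	if len(parts) != 2 {
		return false
	}
	lo, err1 := strconv.Atoi(parts[0])
	hi, err2 := strconv.Atoi(parts[1])
	if err1 != nil || err2 != nil {
		return false
	}
	return kind >= lo && kind <= hi
}

func main() {
	fmt.Println(kindInRange(30023, "30000-39999")) // true
	fmt.Println(kindInRange(1, "30000-39999"))     // false
}
```

Malformed range strings simply fail closed (return false), which is the safer default for an allow-list check.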
}

// ==================== Database Scanning ====================

// ScanResult contains the results of scanning all pubkeys in the database
type ScanResult struct {
	TotalPubkeys int `json:"total_pubkeys"`
	TotalEvents  int `json:"total_events"`
	Skipped      int `json:"skipped"` // Trusted/blacklisted users skipped
}

// ScanAllPubkeys scans the database to find all unique pubkeys and count their events.
// This populates the event count data needed for the unclassified users list.
// It uses the SerialPubkey index to find all pubkeys, then counts events for each.
func (c *CuratingACL) ScanAllPubkeys() (*ScanResult, error) {
	result := &ScanResult{}

	// First, get all trusted and blacklisted pubkeys to skip
	trusted, err := c.ListTrustedPubkeys()
	if err != nil {
		return nil, err
	}
	blacklisted, err := c.ListBlacklistedPubkeys()
	if err != nil {
		return nil, err
	}

	excludeSet := make(map[string]struct{})
	for _, t := range trusted {
		excludeSet[t.Pubkey] = struct{}{}
	}
	for _, b := range blacklisted {
		excludeSet[b.Pubkey] = struct{}{}
	}

	// Scan the SerialPubkey index to get all pubkeys
	pubkeys := make(map[string]struct{})

	err = c.View(func(txn *badger.Txn) error {
		// SerialPubkey prefix is "spk"
		prefix := []byte("spk")
		it := txn.NewIterator(badger.IteratorOptions{Prefix: prefix})
		defer it.Close()

		for it.Rewind(); it.Valid(); it.Next() {
			item := it.Item()
			// The value contains the 32-byte pubkey
			val, err := item.ValueCopy(nil)
			if err != nil {
				continue
			}
			if len(val) == 32 {
				// Convert to hex
				pubkeyHex := fmt.Sprintf("%x", val)
				pubkeys[pubkeyHex] = struct{}{}
			}
		}
		return nil
	})
	if err != nil {
		return nil, err
	}

	result.TotalPubkeys = len(pubkeys)

	// For each pubkey, count events and store the count
	today := time.Now().Format("2006-01-02")

	for pubkeyHex := range pubkeys {
		// Skip if trusted or blacklisted
		if _, excluded := excludeSet[pubkeyHex]; excluded {
			result.Skipped++
			continue
		}

		// Count events for this pubkey using the Pubkey index
		count, err := c.countEventsForPubkey(pubkeyHex)
		if err != nil {
			continue
		}

		if count > 0 {
			result.TotalEvents += count

			// Store the event count
			ec := PubkeyEventCount{
				Pubkey:    pubkeyHex,
				Date:      today,
				Count:     count,
				LastEvent: time.Now(),
			}

			err = c.Update(func(txn *badger.Txn) error {
				key := c.getEventCountKey(pubkeyHex, today)
				data, err := json.Marshal(ec)
				if err != nil {
					return err
				}
				return txn.Set(key, data)
			})
			if err != nil {
				continue
			}
		}
	}

	return result, nil
}

// EventSummary represents a simplified event for display in the UI
type EventSummary struct {
	ID        string `json:"id"`
	Kind      int    `json:"kind"`
	Content   string `json:"content"`
	CreatedAt int64  `json:"created_at"`
}

// GetEventsForPubkey fetches events for a pubkey, returning simplified event data.
// limit specifies max events to return, offset is for pagination.
func (c *CuratingACL) GetEventsForPubkey(pubkeyHex string, limit, offset int) ([]EventSummary, int, error) {
	var events []EventSummary

	// First, count total events for this pubkey
	totalCount, err := c.countEventsForPubkey(pubkeyHex)
	if err != nil {
		return nil, 0, err
	}

	// Decode the pubkey hex to bytes
	pubkeyBytes, err := hex.DecAppend(nil, []byte(pubkeyHex))
	if err != nil {
		return nil, 0, fmt.Errorf("invalid pubkey hex: %w", err)
	}

	// Create a filter to query events by author
	// Use a larger limit to account for offset, then slice
	queryLimit := uint(limit + offset)
	f := &filter.F{
		Authors: tag.NewFromBytesSlice(pubkeyBytes),
		Limit:   &queryLimit,
	}

	// Query events using the database's QueryEvents method
	ctx := context.Background()
	evs, err := c.D.QueryEvents(ctx, f)
	if err != nil {
		return nil, 0, err
	}

	// Apply offset and convert to EventSummary
	for i, ev := range evs {
		if i < offset {
			continue
		}
		if len(events) >= limit {
			break
		}
		events = append(events, EventSummary{
			ID: hex.Enc(ev.ID),
|
||||
Kind: int(ev.Kind),
|
||||
Content: string(ev.Content),
|
||||
CreatedAt: ev.CreatedAt,
|
||||
})
|
||||
}
|
||||
|
||||
return events, totalCount, nil
|
||||
}
|
||||
|
||||
// DeleteEventsForPubkey deletes all events for a given pubkey
|
||||
// Returns the number of events deleted
|
||||
func (c *CuratingACL) DeleteEventsForPubkey(pubkeyHex string) (int, error) {
|
||||
// Decode the pubkey hex to bytes
|
||||
pubkeyBytes, err := hex.DecAppend(nil, []byte(pubkeyHex))
|
||||
if err != nil {
|
||||
return 0, fmt.Errorf("invalid pubkey hex: %w", err)
|
||||
}
|
||||
|
||||
// Create a filter to find all events by this author
|
||||
f := &filter.F{
|
||||
Authors: tag.NewFromBytesSlice(pubkeyBytes),
|
||||
}
|
||||
|
||||
// Query all events for this pubkey
|
||||
ctx := context.Background()
|
||||
evs, err := c.D.QueryEvents(ctx, f)
|
||||
if err != nil {
|
||||
return 0, err
|
||||
}
|
||||
|
||||
// Delete each event
|
||||
deleted := 0
|
||||
for _, ev := range evs {
|
||||
if err := c.D.DeleteEvent(ctx, ev.ID); err != nil {
|
||||
// Log error but continue deleting
|
||||
continue
|
||||
}
|
||||
deleted++
|
||||
}
|
||||
|
||||
return deleted, nil
|
||||
}
|
||||
|
||||
// countEventsForPubkey counts events in the database for a given pubkey hex string
|
||||
func (c *CuratingACL) countEventsForPubkey(pubkeyHex string) (int, error) {
|
||||
count := 0
|
||||
|
||||
// Decode the pubkey hex to bytes
|
||||
pubkeyBytes := make([]byte, 32)
|
||||
for i := 0; i < 32 && i*2+1 < len(pubkeyHex); i++ {
|
||||
fmt.Sscanf(pubkeyHex[i*2:i*2+2], "%02x", &pubkeyBytes[i])
|
||||
}
|
||||
|
||||
// Compute the pubkey hash (SHA256 of pubkey, first 8 bytes)
|
||||
// This matches the PubHash type in indexes/types/pubhash.go
|
||||
pkh := sha256.Sum256(pubkeyBytes)
|
||||
|
||||
// Scan the Pubkey index (prefix "pc-") for this pubkey
|
||||
err := c.View(func(txn *badger.Txn) error {
|
||||
// Build prefix: "pc-" + 8-byte SHA256 hash of pubkey
|
||||
prefix := make([]byte, 3+8)
|
||||
copy(prefix[:3], []byte("pc-"))
|
||||
copy(prefix[3:], pkh[:8])
|
||||
|
||||
it := txn.NewIterator(badger.IteratorOptions{Prefix: prefix})
|
||||
defer it.Close()
|
||||
|
||||
for it.Rewind(); it.Valid(); it.Next() {
|
||||
count++
|
||||
}
|
||||
return nil
|
||||
})
|
||||
|
||||
return count, err
|
||||
}
|
||||
|
||||
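The index key that `countEventsForPubkey` scans ("pc-" followed by the first 8 bytes of SHA-256 of the pubkey) can be reproduced with the standard library alone. This is a minimal sketch for review purposes, assuming only the key layout visible in the diff; `buildPubkeyCountPrefix` is a hypothetical helper name, not part of the codebase:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// buildPubkeyCountPrefix mirrors the key construction in countEventsForPubkey:
// a 3-byte "pc-" tag followed by the first 8 bytes of SHA-256(pubkey).
func buildPubkeyCountPrefix(pubkeyHex string) ([]byte, error) {
	pubkey, err := hex.DecodeString(pubkeyHex)
	if err != nil || len(pubkey) != 32 {
		return nil, fmt.Errorf("expected 64 hex chars: %v", err)
	}
	sum := sha256.Sum256(pubkey)
	prefix := make([]byte, 3+8)
	copy(prefix[:3], "pc-")
	copy(prefix[3:], sum[:8])
	return prefix, nil
}

func main() {
	// 32 zero bytes as a placeholder pubkey (hypothetical input).
	prefix, err := buildPubkeyCountPrefix(
		"0000000000000000000000000000000000000000000000000000000000000000")
	if err != nil {
		panic(err)
	}
	fmt.Println(len(prefix))        // 11
	fmt.Println(string(prefix[:3])) // pc-
}
```

One property of this scheme worth noting in review: only 8 bytes of the hash are kept, so distinct pubkeys can in principle collide on the same prefix, making the count an upper bound rather than exact.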
@@ -6,6 +6,7 @@ import (
	"sort"

	"lol.mleku.dev/chk"
	"lol.mleku.dev/errorf"
	"lol.mleku.dev/log"
	"next.orly.dev/pkg/database/indexes"
	types2 "next.orly.dev/pkg/database/indexes/types"
@@ -44,6 +45,12 @@ func NormalizeTagValueForHash(key byte, valueBytes []byte) []byte {
func CreateIdHashFromData(data []byte) (i *types2.IdHash, err error) {
	i = new(types2.IdHash)

	// Skip empty data to avoid noisy errors
	if len(data) == 0 {
		err = errorf.E("CreateIdHashFromData: empty ID provided")
		return
	}

	// If data looks like hex string and has the right length for hex-encoded
	// sha256
	if len(data) == 64 {
@@ -95,6 +102,11 @@ func GetIndexesFromFilter(f *filter.F) (idxs []Range, err error) {
	// should be an error, but convention just ignores it.
	if f.Ids.Len() > 0 {
		for _, id := range f.Ids.T {
			// Skip empty IDs - some filters have empty ID values
			if len(id) == 0 {
				log.D.F("GetIndexesFromFilter: skipping empty ID in filter (ids=%d)", f.Ids.Len())
				continue
			}
			if err = func() (err error) {
				var i *types2.IdHash
				if i, err = CreateIdHashFromData(id); chk.E(err) {
@@ -20,6 +20,10 @@ import (

func (d *D) GetSerialById(id []byte) (ser *types.Uint40, err error) {
	// log.T.F("GetSerialById: input id=%s", hex.Enc(id))
	if len(id) == 0 {
		err = errorf.E("GetSerialById: called with empty ID")
		return
	}
	var idxs []Range
	if idxs, err = GetIndexesFromFilter(&filter.F{Ids: tag.NewFromBytesSlice(id)}); chk.E(err) {
		return
@@ -102,6 +106,10 @@ func (d *D) GetSerialsByIdsWithFilter(

	// Process each ID sequentially
	for _, id := range ids.T {
		// Skip empty IDs
		if len(id) == 0 {
			continue
		}
		// idHex := hex.Enc(id)

		// Get the index prefix for this ID
@@ -24,8 +24,8 @@ func (i *IdHash) Set(idh []byte) {
func (i *IdHash) FromId(id []byte) (err error) {
	if len(id) != sha256.Size {
		err = errorf.E(
			"FromId: invalid ID length, got %d require %d", len(id),
			sha256.Size,
			"FromId: invalid ID length, got %d require %d (data=%x)", len(id),
			sha256.Size, id,
		)
		return
	}
@@ -3,6 +3,7 @@ package routing
import (
	"git.mleku.dev/mleku/nostr/encoders/event"
	"git.mleku.dev/mleku/nostr/encoders/kind"
	"lol.mleku.dev/log"
)

// Publisher abstracts event delivery to subscribers.
@@ -22,6 +23,7 @@ func IsEphemeral(k uint16) bool {
// - Are immediately delivered to subscribers
func MakeEphemeralHandler(publisher Publisher) Handler {
	return func(ev *event.E, authedPubkey []byte) Result {
		log.I.F("ephemeral handler received event kind %d, id %0x", ev.Kind, ev.ID[:8])
		// Clone and deliver immediately without persistence
		cloned := ev.Clone()
		go publisher.Deliver(cloned)
@@ -10,12 +10,15 @@ import (
	"git.mleku.dev/mleku/nostr/encoders/filter"
	"git.mleku.dev/mleku/nostr/encoders/hex"
	"git.mleku.dev/mleku/nostr/encoders/tag"
	"lol.mleku.dev/log"
	"next.orly.dev/pkg/database/indexes/types"
	"next.orly.dev/pkg/interfaces/store"
)

// QueryEvents retrieves events matching the given filter
func (n *N) QueryEvents(c context.Context, f *filter.F) (evs event.S, err error) {
	log.T.F("Neo4j QueryEvents called with filter: kinds=%v, authors=%d, tags=%v",
		f.Kinds != nil, f.Authors != nil && len(f.Authors.T) > 0, f.Tags != nil)
	return n.QueryEventsWithOptions(c, f, false, false)
}

@@ -101,6 +104,7 @@ func (n *N) buildCypherQuery(f *filter.F, includeDeleteEvents bool) (string, map
			// Normalize to lowercase hex using our utility function.
			// This handles both binary-encoded pubkeys and hex string pubkeys (including uppercase).
			hexAuthor := NormalizePubkeyHex(author)
			log.T.F("Neo4j author filter: raw_len=%d, normalized=%q", len(author), hexAuthor)
			if hexAuthor == "" {
				continue
			}
@@ -130,30 +134,39 @@ func (n *N) buildCypherQuery(f *filter.F, includeDeleteEvents bool) (string, map
	}

	// Time range filters - for temporal queries
	if f.Since != nil {
	// Note: Check both pointer and value - a zero timestamp (Unix epoch 1970) is almost
	// certainly not a valid constraint as Nostr events didn't exist then
	if f.Since != nil && f.Since.V > 0 {
		params["since"] = f.Since.V
		whereClauses = append(whereClauses, "e.created_at >= $since")
	}
	if f.Until != nil {
	if f.Until != nil && f.Until.V > 0 {
		params["until"] = f.Until.V
		whereClauses = append(whereClauses, "e.created_at <= $until")
	}

	// Tag filters - this is where Neo4j's graph capabilities shine
	// We can efficiently traverse tag relationships
	// We use EXISTS subqueries to efficiently filter events by tags
	// This ensures events are only returned if they have matching tags
	tagIndex := 0
	if f.Tags != nil {
		for _, tagValues := range *f.Tags {
			if len(tagValues.T) > 0 {
				tagVarName := fmt.Sprintf("t%d", tagIndex)
				tagTypeParam := fmt.Sprintf("tagType_%d", tagIndex)
				tagValuesParam := fmt.Sprintf("tagValues_%d", tagIndex)

				// Add tag relationship to MATCH clause
				matchClause += fmt.Sprintf(" OPTIONAL MATCH (e)-[:TAGGED_WITH]->(%s:Tag)", tagVarName)
				// The first element is the tag type (e.g., "e", "p", "#e", "#p", etc.)
				// Filter tags may have "#" prefix (e.g., "#d" for d-tag filters)
				// Event tags are stored without prefix, so we must strip it
				tagTypeBytes := tagValues.T[0]
				var tagType string
				if len(tagTypeBytes) > 0 && tagTypeBytes[0] == '#' {
					tagType = string(tagTypeBytes[1:]) // Strip "#" prefix
				} else {
					tagType = string(tagTypeBytes)
				}

				// The first element is the tag type (e.g., "e", "p", etc.)
				tagType := string(tagValues.T[0])
				log.T.F("Neo4j tag filter: type=%q (raw=%q, len=%d)", tagType, string(tagTypeBytes), len(tagTypeBytes))

				// Convert remaining tag values to strings (skip first element which is the type)
				// For e/p tags, use NormalizePubkeyHex to handle binary encoding and uppercase hex
@@ -162,26 +175,34 @@ func (n *N) buildCypherQuery(f *filter.F, includeDeleteEvents bool) (string, map
					if tagType == "e" || tagType == "p" {
						// Normalize e/p tag values to lowercase hex (handles binary encoding)
						normalized := NormalizePubkeyHex(tv)
						log.T.F("Neo4j tag filter: %s-tag value normalized: %q (raw len=%d, binary=%v)",
							tagType, normalized, len(tv), IsBinaryEncoded(tv))
						if normalized != "" {
							tagValueStrings = append(tagValueStrings, normalized)
						}
					} else {
						// For other tags, use direct string conversion
						tagValueStrings = append(tagValueStrings, string(tv))
						val := string(tv)
						log.T.F("Neo4j tag filter: %s-tag value: %q (len=%d)", tagType, val, len(val))
						tagValueStrings = append(tagValueStrings, val)
					}
				}

				// Skip if no valid values after normalization
				if len(tagValueStrings) == 0 {
					log.W.F("Neo4j tag filter: no valid values for tag type %q, skipping", tagType)
					continue
				}

				// Add WHERE conditions for this tag
				log.T.F("Neo4j tag filter: type=%s, values=%v", tagType, tagValueStrings)

				// Use EXISTS subquery to filter events that have matching tags
				// This is more correct than OPTIONAL MATCH because it requires the tag to exist
				params[tagTypeParam] = tagType
				params[tagValuesParam] = tagValueStrings
				whereClauses = append(whereClauses,
					fmt.Sprintf("(%s.type = $%s AND %s.value IN $%s)",
						tagVarName, tagTypeParam, tagVarName, tagValuesParam))
					fmt.Sprintf("EXISTS { MATCH (e)-[:TAGGED_WITH]->(t:Tag) WHERE t.type = $%s AND t.value IN $%s }",
						tagTypeParam, tagValuesParam))

				tagIndex++
			}
@@ -248,6 +269,26 @@ RETURN e.id AS id,
	// Combine all parts
	cypher := matchClause + whereClause + returnClause + orderClause + limitClause

	// Log the generated query for debugging
	log.T.F("Neo4j query: %s", cypher)
	// Log params at trace level for debugging
	var paramSummary strings.Builder
	for k, v := range params {
		switch val := v.(type) {
		case []string:
			if len(val) <= 3 {
				paramSummary.WriteString(fmt.Sprintf("%s: %v ", k, val))
			} else {
				paramSummary.WriteString(fmt.Sprintf("%s: [%d values] ", k, len(val)))
			}
		case []int64:
			paramSummary.WriteString(fmt.Sprintf("%s: %v ", k, val))
		default:
			paramSummary.WriteString(fmt.Sprintf("%s: %v ", k, v))
		}
	}
	log.T.F("Neo4j params: %s", paramSummary.String())

	return cypher, params
}

@@ -300,19 +341,17 @@ func (n *N) parseEventsFromResult(result *CollectedResult) ([]*event.E, error) {
		_ = tags.UnmarshalJSON([]byte(tagsStr))
	}

	// Create event
	// Create event with decoded binary fields
	e := &event.E{
		ID:        id,
		Pubkey:    pubkey,
		Kind:      uint16(kind),
		CreatedAt: createdAt,
		Content:   []byte(content),
		Tags:      tags,
		Sig:       sig,
	}

	// Copy fixed-size arrays
	copy(e.ID[:], id)
	copy(e.Sig[:], sig)
	copy(e.Pubkey[:], pubkey)

	events = append(events, e)
}
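For review, the WHERE fragment emitted by the new EXISTS-based path can be rendered in isolation. This sketch only restates the `fmt.Sprintf` visible in the diff, with the `tagType_N`/`tagValues_N` parameter-naming scheme it uses; `buildTagExistsClause` is a hypothetical helper name:

```go
package main

import "fmt"

// buildTagExistsClause mirrors the Sprintf in the diff: one EXISTS subquery
// per filter tag, parameterized by the tag's index in the filter.
func buildTagExistsClause(tagIndex int) string {
	tagTypeParam := fmt.Sprintf("tagType_%d", tagIndex)
	tagValuesParam := fmt.Sprintf("tagValues_%d", tagIndex)
	return fmt.Sprintf(
		"EXISTS { MATCH (e)-[:TAGGED_WITH]->(t:Tag) WHERE t.type = $%s AND t.value IN $%s }",
		tagTypeParam, tagValuesParam)
}

func main() {
	fmt.Println(buildTagExistsClause(0))
	// EXISTS { MATCH (e)-[:TAGGED_WITH]->(t:Tag) WHERE t.type = $tagType_0 AND t.value IN $tagValues_0 }
}
```

Because each clause lands in `whereClauses` (implicitly AND-combined), an event must carry a matching tag for every filter entry, which is what the old OPTIONAL MATCH formulation failed to guarantee.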
@@ -462,3 +462,584 @@ func TestCountEvents(t *testing.T) {

	t.Logf("✓ Count events returned correct count: %d", count)
}

// TestQueryEventsByTagWithHashPrefix tests that tag filters with "#" prefix work correctly.
// This is a regression test for a bug where filter tags like "#d" were not being matched
// because the "#" prefix wasn't being stripped before comparison with stored tags.
func TestQueryEventsByTagWithHashPrefix(t *testing.T) {
	if testDB == nil {
		t.Skip("Neo4j not available")
	}

	cleanTestDatabase()

	ctx := context.Background()
	signer := createTestSignerLocal(t)
	baseTs := timestamp.Now().V

	// Create events with d-tags (parameterized replaceable kind)
	createAndSaveEventLocal(t, ctx, signer, 30382, "Event with d=id1",
		tag.NewS(tag.NewFromAny("d", "id1")), baseTs)
	createAndSaveEventLocal(t, ctx, signer, 30382, "Event with d=id2",
		tag.NewS(tag.NewFromAny("d", "id2")), baseTs+1)
	createAndSaveEventLocal(t, ctx, signer, 30382, "Event with d=id3",
		tag.NewS(tag.NewFromAny("d", "id3")), baseTs+2)
	createAndSaveEventLocal(t, ctx, signer, 30382, "Event with d=other",
		tag.NewS(tag.NewFromAny("d", "other")), baseTs+3)

	// Query with "#d" prefix (as clients send it) - should match events with d=id1
	evs, err := testDB.QueryEvents(ctx, &filter.F{
		Kinds: kind.NewS(kind.New(30382)),
		Tags:  tag.NewS(tag.NewFromAny("#d", "id1")),
	})
	if err != nil {
		t.Fatalf("Failed to query events with #d tag: %v", err)
	}

	if len(evs) != 1 {
		t.Fatalf("Expected 1 event with d=id1, got %d", len(evs))
	}

	// Verify the returned event has the correct d-tag
	dTag := evs[0].Tags.GetFirst([]byte("d"))
	if dTag == nil || string(dTag.Value()) != "id1" {
		t.Fatalf("Expected d=id1, got d=%s", dTag.Value())
	}

	t.Logf("✓ Query with #d prefix returned correct event")
}
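The "#"-stripping behaviour this regression test pins down can be sketched as a standalone function. This is a reviewer's illustration only; the relay performs the equivalent logic inline in `buildCypherQuery`, and `stripTagPrefix` is a hypothetical name:

```go
package main

import "fmt"

// stripTagPrefix normalizes a filter tag key: clients send "#d", "#e", "#p",
// while event tags are stored without the "#", so the prefix must be dropped
// before comparing filter keys against stored tag types.
func stripTagPrefix(key []byte) string {
	if len(key) > 0 && key[0] == '#' {
		return string(key[1:])
	}
	return string(key)
}

func main() {
	fmt.Println(stripTagPrefix([]byte("#d"))) // d
	fmt.Println(stripTagPrefix([]byte("e")))  // e
}
```

The empty-key guard matters: without the length check, a zero-length filter key would index past the slice bounds.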
// TestQueryEventsByTagMultipleValues tests that tag filters with multiple values
// use OR logic (match events with ANY of the values).
func TestQueryEventsByTagMultipleValues(t *testing.T) {
	if testDB == nil {
		t.Skip("Neo4j not available")
	}

	cleanTestDatabase()

	ctx := context.Background()
	signer := createTestSignerLocal(t)
	baseTs := timestamp.Now().V

	// Create events with different d-tags
	createAndSaveEventLocal(t, ctx, signer, 30382, "Event A",
		tag.NewS(tag.NewFromAny("d", "target-1")), baseTs)
	createAndSaveEventLocal(t, ctx, signer, 30382, "Event B",
		tag.NewS(tag.NewFromAny("d", "target-2")), baseTs+1)
	createAndSaveEventLocal(t, ctx, signer, 30382, "Event C",
		tag.NewS(tag.NewFromAny("d", "target-3")), baseTs+2)
	createAndSaveEventLocal(t, ctx, signer, 30382, "Event D (not target)",
		tag.NewS(tag.NewFromAny("d", "other-value")), baseTs+3)
	createAndSaveEventLocal(t, ctx, signer, 30382, "Event E (no match)",
		tag.NewS(tag.NewFromAny("d", "different")), baseTs+4)

	// Query with multiple d-tag values using "#d" prefix.
	// Should match events with d=target-1 OR d=target-2 OR d=target-3.
	evs, err := testDB.QueryEvents(ctx, &filter.F{
		Kinds: kind.NewS(kind.New(30382)),
		Tags:  tag.NewS(tag.NewFromAny("#d", "target-1", "target-2", "target-3")),
	})
	if err != nil {
		t.Fatalf("Failed to query events with multiple #d values: %v", err)
	}

	if len(evs) != 3 {
		t.Fatalf("Expected 3 events matching the d-tag values, got %d", len(evs))
	}

	// Verify returned events have correct d-tags
	validDTags := map[string]bool{"target-1": false, "target-2": false, "target-3": false}
	for _, ev := range evs {
		dTag := ev.Tags.GetFirst([]byte("d"))
		if dTag == nil {
			t.Fatalf("Event missing d-tag")
		}
		dValue := string(dTag.Value())
		if _, ok := validDTags[dValue]; !ok {
			t.Fatalf("Unexpected d-tag value: %s", dValue)
		}
		validDTags[dValue] = true
	}

	// Verify all expected d-tags were found
	for dValue, found := range validDTags {
		if !found {
			t.Fatalf("Expected to find event with d=%s", dValue)
		}
	}

	t.Logf("✓ Query with multiple #d values returned correct events")
}
// TestQueryEventsByTagNoMatch tests that tag filters correctly return no results
// when no events match the filter.
func TestQueryEventsByTagNoMatch(t *testing.T) {
	if testDB == nil {
		t.Skip("Neo4j not available")
	}

	cleanTestDatabase()

	ctx := context.Background()
	signer := createTestSignerLocal(t)
	baseTs := timestamp.Now().V

	// Create events with d-tags
	createAndSaveEventLocal(t, ctx, signer, 30382, "Event",
		tag.NewS(tag.NewFromAny("d", "existing-value")), baseTs)

	// Query for d-tag value that doesn't exist
	evs, err := testDB.QueryEvents(ctx, &filter.F{
		Kinds: kind.NewS(kind.New(30382)),
		Tags:  tag.NewS(tag.NewFromAny("#d", "non-existent-value")),
	})
	if err != nil {
		t.Fatalf("Failed to query events: %v", err)
	}

	if len(evs) != 0 {
		t.Fatalf("Expected 0 events for non-matching d-tag, got %d", len(evs))
	}

	t.Logf("✓ Query with non-matching #d value returned no events")
}
// TestQueryEventsByTagWithKindAndAuthor tests the combination of kind, author, and tag filters.
// This is the specific case reported by the user with kind 30382.
func TestQueryEventsByTagWithKindAndAuthor(t *testing.T) {
	if testDB == nil {
		t.Skip("Neo4j not available")
	}

	cleanTestDatabase()

	ctx := context.Background()
	alice := createTestSignerLocal(t)
	bob := createTestSignerLocal(t)
	baseTs := timestamp.Now().V

	// Create events from different authors with d-tags
	createAndSaveEventLocal(t, ctx, alice, 30382, "Alice target 1",
		tag.NewS(tag.NewFromAny("d", "card-1")), baseTs)
	createAndSaveEventLocal(t, ctx, alice, 30382, "Alice target 2",
		tag.NewS(tag.NewFromAny("d", "card-2")), baseTs+1)
	createAndSaveEventLocal(t, ctx, alice, 30382, "Alice other",
		tag.NewS(tag.NewFromAny("d", "other-card")), baseTs+2)
	createAndSaveEventLocal(t, ctx, bob, 30382, "Bob target 1",
		tag.NewS(tag.NewFromAny("d", "card-1")), baseTs+3) // Same d-tag as Alice but different author

	// Query for Alice's events with specific d-tags
	evs, err := testDB.QueryEvents(ctx, &filter.F{
		Kinds:   kind.NewS(kind.New(30382)),
		Authors: tag.NewFromBytesSlice(alice.Pub()),
		Tags:    tag.NewS(tag.NewFromAny("#d", "card-1", "card-2")),
	})
	if err != nil {
		t.Fatalf("Failed to query events: %v", err)
	}

	// Should only return Alice's 2 events, not Bob's even though he has card-1
	if len(evs) != 2 {
		t.Fatalf("Expected 2 events from Alice with matching d-tags, got %d", len(evs))
	}

	alicePubkey := hex.Enc(alice.Pub())
	for _, ev := range evs {
		if hex.Enc(ev.Pubkey[:]) != alicePubkey {
			t.Fatalf("Expected author %s, got %s", alicePubkey, hex.Enc(ev.Pubkey[:]))
		}
		dTag := ev.Tags.GetFirst([]byte("d"))
		dValue := string(dTag.Value())
		if dValue != "card-1" && dValue != "card-2" {
			t.Fatalf("Expected d=card-1 or card-2, got d=%s", dValue)
		}
	}

	t.Logf("✓ Query with kind, author, and #d filter returned correct events")
}
// TestBinaryTagFilterRegression tests that queries with #e and #p tags work correctly
// even when tags are stored with binary-encoded values but filters come as hex strings.
// This mirrors the Badger database test for binary tag handling.
func TestBinaryTagFilterRegression(t *testing.T) {
	if testDB == nil {
		t.Skip("Neo4j not available")
	}

	cleanTestDatabase()

	ctx := context.Background()
	author := createTestSignerLocal(t)
	referenced := createTestSignerLocal(t)
	baseTs := timestamp.Now().V

	// Create a referenced event to get a valid event ID for e-tag
	refEvent := createAndSaveEventLocal(t, ctx, referenced, 1, "Referenced event", nil, baseTs)

	// Get hex representations
	refEventIdHex := hex.Enc(refEvent.ID)
	refPubkeyHex := hex.Enc(referenced.Pub())

	// Create test event with e, p, d, and other tags
	testEvent := createAndSaveEventLocal(t, ctx, author, 30520, "Event with binary tags",
		tag.NewS(
			tag.NewFromAny("d", "test-d-value"),
			tag.NewFromAny("p", string(refPubkeyHex)),
			tag.NewFromAny("e", string(refEventIdHex)),
			tag.NewFromAny("t", "test-topic"),
		), baseTs+1)

	testEventIdHex := hex.Enc(testEvent.ID)

	// Test case 1: Query WITHOUT #e/#p tags (baseline - should work)
	t.Run("QueryWithoutEPTags", func(t *testing.T) {
		evs, err := testDB.QueryEvents(ctx, &filter.F{
			Kinds:   kind.NewS(kind.New(30520)),
			Authors: tag.NewFromBytesSlice(author.Pub()),
			Tags:    tag.NewS(tag.NewFromAny("#d", "test-d-value")),
		})
		if err != nil {
			t.Fatalf("Query without e/p tags failed: %v", err)
		}

		if len(evs) == 0 {
			t.Fatal("Expected to find event with d tag filter, got 0 results")
		}

		found := false
		for _, ev := range evs {
			if hex.Enc(ev.ID) == testEventIdHex {
				found = true
				break
			}
		}
		if !found {
			t.Errorf("Expected event ID %s not found", testEventIdHex)
		}
	})

	// Test case 2: Query WITH #p tag
	t.Run("QueryWithPTag", func(t *testing.T) {
		evs, err := testDB.QueryEvents(ctx, &filter.F{
			Kinds:   kind.NewS(kind.New(30520)),
			Authors: tag.NewFromBytesSlice(author.Pub()),
			Tags: tag.NewS(
				tag.NewFromAny("#d", "test-d-value"),
				tag.NewFromAny("#p", string(refPubkeyHex)),
			),
		})
		if err != nil {
			t.Fatalf("Query with #p tag failed: %v", err)
		}

		if len(evs) == 0 {
			t.Fatalf("REGRESSION: Expected to find event with #p tag filter, got 0 results")
		}
	})

	// Test case 3: Query WITH #e tag
	t.Run("QueryWithETag", func(t *testing.T) {
		evs, err := testDB.QueryEvents(ctx, &filter.F{
			Kinds:   kind.NewS(kind.New(30520)),
			Authors: tag.NewFromBytesSlice(author.Pub()),
			Tags: tag.NewS(
				tag.NewFromAny("#d", "test-d-value"),
				tag.NewFromAny("#e", string(refEventIdHex)),
			),
		})
		if err != nil {
			t.Fatalf("Query with #e tag failed: %v", err)
		}

		if len(evs) == 0 {
			t.Fatalf("REGRESSION: Expected to find event with #e tag filter, got 0 results")
		}
	})

	// Test case 4: Query WITH BOTH #e AND #p tags
	t.Run("QueryWithBothEAndPTags", func(t *testing.T) {
		evs, err := testDB.QueryEvents(ctx, &filter.F{
			Kinds:   kind.NewS(kind.New(30520)),
			Authors: tag.NewFromBytesSlice(author.Pub()),
			Tags: tag.NewS(
				tag.NewFromAny("#d", "test-d-value"),
				tag.NewFromAny("#e", string(refEventIdHex)),
				tag.NewFromAny("#p", string(refPubkeyHex)),
			),
		})
		if err != nil {
			t.Fatalf("Query with both #e and #p tags failed: %v", err)
		}

		if len(evs) == 0 {
			t.Fatalf("REGRESSION: Expected to find event with #e and #p tag filters, got 0 results")
		}
	})

	t.Logf("✓ Binary tag filter regression tests passed")
}
// TestParameterizedReplaceableEvents tests that parameterized replaceable events (kind 30000+)
// are handled correctly - only the newest version should be returned in queries by kind/author/d-tag.
func TestParameterizedReplaceableEvents(t *testing.T) {
	if testDB == nil {
		t.Skip("Neo4j not available")
	}

	cleanTestDatabase()

	ctx := context.Background()
	signer := createTestSignerLocal(t)
	baseTs := timestamp.Now().V

	// Create older parameterized replaceable event
	createAndSaveEventLocal(t, ctx, signer, 30000, "Original event",
		tag.NewS(tag.NewFromAny("d", "test-param")), baseTs-7200) // 2 hours ago

	// Create newer event with same kind/author/d-tag
	createAndSaveEventLocal(t, ctx, signer, 30000, "Newer event",
		tag.NewS(tag.NewFromAny("d", "test-param")), baseTs-3600) // 1 hour ago

	// Create newest event with same kind/author/d-tag
	newestEvent := createAndSaveEventLocal(t, ctx, signer, 30000, "Newest event",
		tag.NewS(tag.NewFromAny("d", "test-param")), baseTs) // Now

	// Query for events - should only return the newest one
	evs, err := testDB.QueryEvents(ctx, &filter.F{
		Kinds:   kind.NewS(kind.New(30000)),
		Authors: tag.NewFromBytesSlice(signer.Pub()),
		Tags:    tag.NewS(tag.NewFromAny("#d", "test-param")),
	})
	if err != nil {
		t.Fatalf("Failed to query parameterized replaceable events: %v", err)
	}

	// Note: Neo4j backend may or may not automatically deduplicate replaceable events
	// depending on implementation. The important thing is that the newest is returned first.
	if len(evs) == 0 {
		t.Fatal("Expected at least 1 event")
	}

	// Verify the first (most recent) event is the newest one
	if hex.Enc(evs[0].ID) != hex.Enc(newestEvent.ID) {
		t.Logf("Note: Expected newest event first, got different order")
	}

	t.Logf("✓ Parameterized replaceable events test returned %d events", len(evs))
}
// TestQueryForIds tests the QueryForIds method
func TestQueryForIds(t *testing.T) {
	if testDB == nil {
		t.Skip("Neo4j not available")
	}

	cleanTestDatabase()

	ctx := context.Background()
	signer := createTestSignerLocal(t)
	baseTs := timestamp.Now().V

	// Create test events
	ev1 := createAndSaveEventLocal(t, ctx, signer, 1, "Event 1", nil, baseTs)
	ev2 := createAndSaveEventLocal(t, ctx, signer, 1, "Event 2", nil, baseTs+1)
	createAndSaveEventLocal(t, ctx, signer, 7, "Reaction", nil, baseTs+2)

	// Query for IDs of kind 1 events
	idPkTs, err := testDB.QueryForIds(ctx, &filter.F{
		Kinds: kind.NewS(kind.New(1)),
	})
	if err != nil {
		t.Fatalf("Failed to query for IDs: %v", err)
	}

	if len(idPkTs) != 2 {
		t.Fatalf("Expected 2 IDs for kind 1 events, got %d", len(idPkTs))
	}

	// Verify IDs match our events
	foundIds := make(map[string]bool)
	for _, r := range idPkTs {
		foundIds[hex.Enc(r.Id)] = true
	}

	if !foundIds[hex.Enc(ev1.ID)] {
		t.Error("Event 1 ID not found in results")
	}
	if !foundIds[hex.Enc(ev2.ID)] {
		t.Error("Event 2 ID not found in results")
	}

	t.Logf("✓ QueryForIds returned correct IDs")
}
// TestQueryForSerials tests the QueryForSerials method
func TestQueryForSerials(t *testing.T) {
	if testDB == nil {
		t.Skip("Neo4j not available")
	}

	cleanTestDatabase()

	ctx := context.Background()
	signer := createTestSignerLocal(t)
	baseTs := timestamp.Now().V

	// Create test events
	createAndSaveEventLocal(t, ctx, signer, 1, "Event 1", nil, baseTs)
	createAndSaveEventLocal(t, ctx, signer, 1, "Event 2", nil, baseTs+1)
	createAndSaveEventLocal(t, ctx, signer, 1, "Event 3", nil, baseTs+2)

	// Query for serials
	serials, err := testDB.QueryForSerials(ctx, &filter.F{
		Kinds: kind.NewS(kind.New(1)),
	})
	if err != nil {
		t.Fatalf("Failed to query for serials: %v", err)
	}

	if len(serials) != 3 {
		t.Fatalf("Expected 3 serials, got %d", len(serials))
	}

	t.Logf("✓ QueryForSerials returned %d serials", len(serials))
}
// TestQueryEventsComplex tests complex filter combinations
func TestQueryEventsComplex(t *testing.T) {
	if testDB == nil {
		t.Skip("Neo4j not available")
	}

	cleanTestDatabase()

	ctx := context.Background()
	alice := createTestSignerLocal(t)
	bob := createTestSignerLocal(t)
	baseTs := timestamp.Now().V

	// Create diverse set of events
	createAndSaveEventLocal(t, ctx, alice, 1, "Alice note with bitcoin tag",
		tag.NewS(tag.NewFromAny("t", "bitcoin")), baseTs)
	createAndSaveEventLocal(t, ctx, alice, 1, "Alice note with nostr tag",
		tag.NewS(tag.NewFromAny("t", "nostr")), baseTs+1)
	createAndSaveEventLocal(t, ctx, alice, 7, "Alice reaction",
		nil, baseTs+2)
	createAndSaveEventLocal(t, ctx, bob, 1, "Bob note with bitcoin tag",
		tag.NewS(tag.NewFromAny("t", "bitcoin")), baseTs+3)

	// Test: kinds + tags (no authors)
	t.Run("KindsAndTags", func(t *testing.T) {
		evs, err := testDB.QueryEvents(ctx, &filter.F{
			Kinds: kind.NewS(kind.New(1)),
			Tags:  tag.NewS(tag.NewFromAny("#t", "bitcoin")),
		})
		if err != nil {
			t.Fatalf("Query failed: %v", err)
		}
		if len(evs) != 2 {
			t.Fatalf("Expected 2 events with kind=1 and #t=bitcoin, got %d", len(evs))
		}
	})

	// Test: authors + tags (no kinds)
	t.Run("AuthorsAndTags", func(t *testing.T) {
		evs, err := testDB.QueryEvents(ctx, &filter.F{
			Authors: tag.NewFromBytesSlice(alice.Pub()),
			Tags:    tag.NewS(tag.NewFromAny("#t", "bitcoin")),
		})
		if err != nil {
			t.Fatalf("Query failed: %v", err)
		}
		if len(evs) != 1 {
			t.Fatalf("Expected 1 event from Alice with #t=bitcoin, got %d", len(evs))
		}
	})

	// Test: kinds + authors (no tags)
	t.Run("KindsAndAuthors", func(t *testing.T) {
		evs, err := testDB.QueryEvents(ctx, &filter.F{
			Kinds:   kind.NewS(kind.New(1)),
			Authors: tag.NewFromBytesSlice(alice.Pub()),
		})
		if err != nil {
			t.Fatalf("Query failed: %v", err)
		}
		if len(evs) != 2 {
			t.Fatalf("Expected 2 kind=1 events from Alice, got %d", len(evs))
		}
	})

	// Test: all three filters
	t.Run("AllFilters", func(t *testing.T) {
		evs, err := testDB.QueryEvents(ctx, &filter.F{
			Kinds:   kind.NewS(kind.New(1)),
			Authors: tag.NewFromBytesSlice(alice.Pub()),
			Tags:    tag.NewS(tag.NewFromAny("#t", "nostr")),
		})
		if err != nil {
			t.Fatalf("Query failed: %v", err)
		}
		if len(evs) != 1 {
			t.Fatalf("Expected 1 event (Alice kind=1 #t=nostr), got %d", len(evs))
		}
	})

	t.Logf("✓ Complex filter combination tests passed")
}
// TestQueryEventsMultipleTagTypes tests filtering with multiple different tag types
|
||||
func TestQueryEventsMultipleTagTypes(t *testing.T) {
|
||||
if testDB == nil {
|
||||
t.Skip("Neo4j not available")
|
||||
}
|
||||
|
||||
cleanTestDatabase()
|
||||
|
||||
ctx := context.Background()
|
||||
signer := createTestSignerLocal(t)
|
||||
baseTs := timestamp.Now().V
|
||||
|
||||
// Create events with multiple tag types
|
||||
createAndSaveEventLocal(t, ctx, signer, 30382, "Event with d and client tags",
|
||||
tag.NewS(
|
||||
tag.NewFromAny("d", "user-1"),
|
||||
tag.NewFromAny("client", "app-a"),
|
||||
), baseTs)
|
||||
|
||||
createAndSaveEventLocal(t, ctx, signer, 30382, "Event with d and different client",
|
||||
tag.NewS(
|
||||
tag.NewFromAny("d", "user-2"),
|
||||
tag.NewFromAny("client", "app-b"),
|
||||
), baseTs+1)
|
||||
|
||||
createAndSaveEventLocal(t, ctx, signer, 30382, "Event with only d tag",
|
||||
tag.NewS(
|
||||
tag.NewFromAny("d", "user-3"),
|
||||
), baseTs+2)
|
||||
|
||||
// Query with multiple tag types (should AND them together)
|
||||
evs, err := testDB.QueryEvents(ctx, &filter.F{
|
||||
Kinds: kind.NewS(kind.New(30382)),
|
||||
Tags: tag.NewS(
|
||||
tag.NewFromAny("#d", "user-1", "user-2"),
|
||||
tag.NewFromAny("#client", "app-a"),
|
||||
),
|
||||
})
|
||||
if err != nil {
|
||||
t.Fatalf("Query with multiple tag types failed: %v", err)
|
||||
}
|
||||
|
||||
// Should match only the first event (user-1 with app-a)
|
||||
if len(evs) != 1 {
|
||||
t.Fatalf("Expected 1 event matching both #d and #client, got %d", len(evs))
|
||||
}
|
||||
|
||||
dTag := evs[0].Tags.GetFirst([]byte("d"))
|
||||
if string(dTag.Value()) != "user-1" {
|
||||
t.Fatalf("Expected d=user-1, got d=%s", dTag.Value())
|
||||
}
|
||||
|
||||
t.Logf("✓ Multiple tag types filter test passed")
|
||||
}
|
||||
|
||||
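The last test relies on the standard tag-filter semantics: values inside a single tag filter are OR'd, while distinct tag filters are AND'd together. A minimal standalone model of that rule (a hypothetical sketch, not the relay's actual query code; events are flattened to one value per tag name, as in the test data):

```go
package main

import "fmt"

// matchesTagFilters is a hypothetical, minimal model of the semantics the
// test above relies on: within one tag filter the listed values are OR'd,
// and distinct tag filters are AND'd together.
func matchesTagFilters(eventTags map[string]string, filters map[string][]string) bool {
	for name, wanted := range filters {
		got, ok := eventTags[name]
		if !ok {
			return false // event lacks a required tag entirely
		}
		match := false
		for _, w := range wanted {
			if got == w {
				match = true
				break
			}
		}
		if !match {
			return false // tag present but no value matched
		}
	}
	return true
}

func main() {
	filters := map[string][]string{
		"d":      {"user-1", "user-2"}, // OR within this filter
		"client": {"app-a"},            // AND'd with the filter above
	}
	events := []map[string]string{
		{"d": "user-1", "client": "app-a"},
		{"d": "user-2", "client": "app-b"},
		{"d": "user-3"},
	}
	for i, ev := range events {
		fmt.Printf("event %d matches: %v\n", i+1, matchesTagFilters(ev, filters))
	}
}
```

Only the first event matches, mirroring the test's expectation of exactly one result.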
@@ -54,3 +54,11 @@ func MonitorFromNeo4jDriver(
 ) loadmonitor.Monitor {
 	return NewNeo4jMonitor(driver, querySem, maxConcurrency, 100*time.Millisecond)
 }
+
+// NewMemoryOnlyLimiter creates a rate limiter that only monitors process memory.
+// Useful for database backends that don't have their own load metrics (e.g., BBolt).
+// Since BBolt uses memory-mapped IO, memory pressure is still relevant.
+func NewMemoryOnlyLimiter(config Config) *Limiter {
+	monitor := NewMemoryMonitor(100 * time.Millisecond)
+	return NewLimiter(config, monitor)
+}
@@ -377,29 +377,26 @@ func (l *Limiter) ComputeDelay(opType OperationType) time.Duration {
 
 	// In emergency mode, apply progressive throttling for writes
 	if inEmergency {
-		// Calculate how far above recovery threshold we are
-		// At emergency threshold, add 1x normal delay
-		// For every additional 10% above emergency, double the delay
-		excessPressure := metrics.MemoryPressure - l.config.RecoveryThreshold
-		if excessPressure > 0 {
-			// Progressive multiplier: starts at 2x, doubles every 10% excess
-			multiplier := 2.0
-			for excess := excessPressure; excess > 0.1; excess -= 0.1 {
-				multiplier *= 2
-			}
-
-			emergencyDelaySec := delaySec * multiplier
-			maxEmergencySec := float64(l.config.EmergencyMaxDelayMs) / 1000.0
-
-			if emergencyDelaySec > maxEmergencySec {
-				emergencyDelaySec = maxEmergencySec
-			}
-			// Minimum emergency delay of 100ms to allow other operations
-			if emergencyDelaySec < 0.1 {
-				emergencyDelaySec = 0.1
-			}
-			delaySec = emergencyDelaySec
-		}
+		// Calculate how far above emergency threshold we are
+		// Linear scaling: multiplier = 1 + (excess * 5)
+		// At emergency threshold: 1x, at +20% above: 2x, at +40% above: 3x
+		excessPressure := metrics.MemoryPressure - l.config.EmergencyThreshold
+		if excessPressure < 0 {
+			excessPressure = 0
+		}
+		multiplier := 1.0 + excessPressure*5.0
+
+		emergencyDelaySec := delaySec * multiplier
+		maxEmergencySec := float64(l.config.EmergencyMaxDelayMs) / 1000.0
+
+		if emergencyDelaySec > maxEmergencySec {
+			emergencyDelaySec = maxEmergencySec
+		}
+		// Minimum emergency delay of 100ms to allow other operations
+		if emergencyDelaySec < 0.1 {
+			emergencyDelaySec = 0.1
+		}
+		delaySec = emergencyDelaySec
 	}
 
 	if delaySec > 0 {
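The added branch scales the delay linearly with excess pressure instead of doubling it every 10%. That arithmetic can be sketched standalone (threshold, base delay, and cap below are illustrative values, not the relay's actual config):

```go
package main

import "fmt"

// emergencyMultiplier mirrors the linear scaling the diff introduces:
// multiplier = 1 + 5*(pressure - emergencyThreshold), floored at 1x.
func emergencyMultiplier(pressure, emergencyThreshold float64) float64 {
	excess := pressure - emergencyThreshold
	if excess < 0 {
		excess = 0
	}
	return 1.0 + excess*5.0
}

// emergencyDelaySec applies the multiplier to a base delay in seconds and
// clamps the result to [0.1s, maxSec], matching the bounds in ComputeDelay.
func emergencyDelaySec(baseSec, pressure, emergencyThreshold, maxSec float64) float64 {
	d := baseSec * emergencyMultiplier(pressure, emergencyThreshold)
	if d > maxSec {
		d = maxSec
	}
	if d < 0.1 {
		d = 0.1
	}
	return d
}

func main() {
	// 1.167 is assumed from the monitor's default threshold (target + 1/6).
	for _, p := range []float64{1.167, 1.367, 1.567} {
		fmt.Printf("pressure=%.3f -> multiplier=%.2f, delay=%.2fs\n",
			p, emergencyMultiplier(p, 1.167), emergencyDelaySec(0.5, p, 1.167, 5.0))
	}
}
```

Unlike the removed doubling loop, the multiplier grows smoothly and predictably: +20% pressure gives 2x, +40% gives 3x, until the configured cap takes over.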
pkg/ratelimit/memory_monitor.go (Normal file, 214 lines)
@@ -0,0 +1,214 @@
//go:build !(js && wasm)

package ratelimit

import (
	"sync"
	"sync/atomic"
	"time"

	"next.orly.dev/pkg/interfaces/loadmonitor"
)

// MemoryMonitor is a simple load monitor that only tracks process memory.
// Used for database backends that don't have their own load metrics (e.g., BBolt).
type MemoryMonitor struct {
	// Configuration
	pollInterval time.Duration
	targetBytes  atomic.Uint64

	// State
	running  atomic.Bool
	stopChan chan struct{}
	doneChan chan struct{}

	// Metrics (protected by mutex)
	mu             sync.RWMutex
	currentMetrics loadmonitor.Metrics

	// Latency tracking
	queryLatencies []time.Duration
	writeLatencies []time.Duration
	latencyMu      sync.Mutex

	// Emergency mode
	emergencyThreshold float64 // e.g., 1.167 (target + 1/6)
	recoveryThreshold  float64 // e.g., 0.833 (target - 1/6)
	inEmergency        atomic.Bool
}

// NewMemoryMonitor creates a memory-only load monitor.
// pollInterval controls how often memory is sampled (recommended: 100ms).
func NewMemoryMonitor(pollInterval time.Duration) *MemoryMonitor {
	m := &MemoryMonitor{
		pollInterval:       pollInterval,
		stopChan:           make(chan struct{}),
		doneChan:           make(chan struct{}),
		queryLatencies:     make([]time.Duration, 0, 100),
		writeLatencies:     make([]time.Duration, 0, 100),
		emergencyThreshold: 1.167, // Default: target + 1/6
		recoveryThreshold:  0.833, // Default: target - 1/6
	}
	return m
}

// GetMetrics returns the current load metrics.
func (m *MemoryMonitor) GetMetrics() loadmonitor.Metrics {
	m.mu.RLock()
	defer m.mu.RUnlock()
	return m.currentMetrics
}

// RecordQueryLatency records a query latency sample.
func (m *MemoryMonitor) RecordQueryLatency(latency time.Duration) {
	m.latencyMu.Lock()
	defer m.latencyMu.Unlock()

	m.queryLatencies = append(m.queryLatencies, latency)
	if len(m.queryLatencies) > 100 {
		m.queryLatencies = m.queryLatencies[1:]
	}
}

// RecordWriteLatency records a write latency sample.
func (m *MemoryMonitor) RecordWriteLatency(latency time.Duration) {
	m.latencyMu.Lock()
	defer m.latencyMu.Unlock()

	m.writeLatencies = append(m.writeLatencies, latency)
	if len(m.writeLatencies) > 100 {
		m.writeLatencies = m.writeLatencies[1:]
	}
}

// SetMemoryTarget sets the target memory limit in bytes.
func (m *MemoryMonitor) SetMemoryTarget(bytes uint64) {
	m.targetBytes.Store(bytes)
}

// SetEmergencyThreshold sets the memory threshold for emergency mode.
func (m *MemoryMonitor) SetEmergencyThreshold(threshold float64) {
	m.mu.Lock()
	defer m.mu.Unlock()
	m.emergencyThreshold = threshold
}

// GetEmergencyThreshold returns the current emergency threshold.
func (m *MemoryMonitor) GetEmergencyThreshold() float64 {
	m.mu.RLock()
	defer m.mu.RUnlock()
	return m.emergencyThreshold
}

// ForceEmergencyMode manually triggers emergency mode for a duration.
func (m *MemoryMonitor) ForceEmergencyMode(duration time.Duration) {
	m.inEmergency.Store(true)
	go func() {
		time.Sleep(duration)
		m.inEmergency.Store(false)
	}()
}

// Start begins background metric collection.
func (m *MemoryMonitor) Start() <-chan struct{} {
	if m.running.Swap(true) {
		// Already running
		return m.doneChan
	}

	go m.pollLoop()
	return m.doneChan
}

// Stop halts background metric collection.
func (m *MemoryMonitor) Stop() {
	if !m.running.Swap(false) {
		return
	}
	close(m.stopChan)
	<-m.doneChan
}

// pollLoop continuously samples memory and updates metrics.
func (m *MemoryMonitor) pollLoop() {
	defer close(m.doneChan)

	ticker := time.NewTicker(m.pollInterval)
	defer ticker.Stop()

	for {
		select {
		case <-m.stopChan:
			return
		case <-ticker.C:
			m.updateMetrics()
		}
	}
}

// updateMetrics samples current memory and updates the metrics.
func (m *MemoryMonitor) updateMetrics() {
	target := m.targetBytes.Load()
	if target == 0 {
		target = 1 // Avoid division by zero
	}

	// Get physical memory using the same method as other monitors
	procMem := ReadProcessMemoryStats()
	physicalMemBytes := procMem.PhysicalMemoryBytes()
	physicalMemMB := physicalMemBytes / (1024 * 1024)

	// Calculate memory pressure
	memPressure := float64(physicalMemBytes) / float64(target)

	// Check emergency mode thresholds
	m.mu.RLock()
	emergencyThreshold := m.emergencyThreshold
	recoveryThreshold := m.recoveryThreshold
	m.mu.RUnlock()

	wasEmergency := m.inEmergency.Load()
	if memPressure > emergencyThreshold {
		m.inEmergency.Store(true)
	} else if memPressure < recoveryThreshold && wasEmergency {
		m.inEmergency.Store(false)
	}

	// Calculate average latencies
	m.latencyMu.Lock()
	var avgQuery, avgWrite time.Duration
	if len(m.queryLatencies) > 0 {
		var total time.Duration
		for _, l := range m.queryLatencies {
			total += l
		}
		avgQuery = total / time.Duration(len(m.queryLatencies))
	}
	if len(m.writeLatencies) > 0 {
		var total time.Duration
		for _, l := range m.writeLatencies {
			total += l
		}
		avgWrite = total / time.Duration(len(m.writeLatencies))
	}
	m.latencyMu.Unlock()

	// Update metrics
	m.mu.Lock()
	m.currentMetrics = loadmonitor.Metrics{
		MemoryPressure:    memPressure,
		WriteLoad:         0, // No database-specific load metric
		ReadLoad:          0, // No database-specific load metric
		QueryLatency:      avgQuery,
		WriteLatency:      avgWrite,
		Timestamp:         time.Now(),
		InEmergencyMode:   m.inEmergency.Load(),
		CompactionPending: false, // BBolt doesn't have compaction
		PhysicalMemoryMB:  physicalMemMB,
	}
	m.mu.Unlock()
}

// Ensure MemoryMonitor implements the required interfaces
var _ loadmonitor.Monitor = (*MemoryMonitor)(nil)
var _ loadmonitor.EmergencyModeMonitor = (*MemoryMonitor)(nil)
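The enter/exit thresholds in updateMetrics form a hysteresis band: emergency mode switches on above emergencyThreshold and only switches off again below recoveryThreshold, so pressure jitter between the two bands cannot make the flag flap. A minimal standalone model of just that state machine, using the monitor's default 1.167/0.833 thresholds:

```go
package main

import "fmt"

// emergencyState models the hysteresis in MemoryMonitor.updateMetrics.
type emergencyState struct {
	emergencyThreshold float64
	recoveryThreshold  float64
	inEmergency        bool
}

// observe feeds one pressure sample and returns the resulting mode.
func (s *emergencyState) observe(pressure float64) bool {
	if pressure > s.emergencyThreshold {
		s.inEmergency = true
	} else if pressure < s.recoveryThreshold && s.inEmergency {
		s.inEmergency = false
	}
	return s.inEmergency
}

func main() {
	s := &emergencyState{emergencyThreshold: 1.167, recoveryThreshold: 0.833}
	// A pressure sample between the bands (1.0) keeps the current mode.
	for _, p := range []float64{0.9, 1.2, 1.0, 0.9, 0.8} {
		fmt.Printf("pressure=%.2f emergency=%v\n", p, s.observe(p))
	}
}
```

With a single threshold, pressure oscillating around it would toggle the mode on every sample; the 1/6-wide gap on each side of the target absorbs that noise.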
@@ -1 +1 @@
-v0.48.11
+v0.52.1