Compare commits

...

4 Commits

Author SHA1 Message Date
woikos
489b9f4593 Improve release command VPS deployment docs (v0.48.14)
- Clarify ARM64 build-on-remote approach for relay.orly.dev
- Remove unnecessary git stash from deployment command
- Add note about setcap needing reapplication after binary rebuild
- Use explicit GOPATH and go binary path for clarity

Files modified:
- .claude/commands/release.md: Improved deployment step documentation
- pkg/version/version: v0.48.13 -> v0.48.14

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-11 11:14:20 +01:00
woikos
604d759a6a Fix web UI not showing cached events and add Blossom toggle (v0.48.13)
- Fix fetchEvents() discarding IndexedDB cached events instead of merging with relay results
- Add mergeAndDeduplicateEvents() helper to combine and dedupe events by ID
- Add ORLY_BLOSSOM_ENABLED config option to disable Blossom server
- Make fetch-kinds.js fall back to existing eventKinds.js when network unavailable

Files modified:
- app/web/src/nostr.js: Fix event caching, add merge helper
- app/web/scripts/fetch-kinds.js: Add fallback for network failures
- app/config/config.go: Add BlossomEnabled config field
- app/main.go: Check BlossomEnabled before initializing Blossom server
- pkg/version/version: Bump to v0.48.13

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2026-01-11 04:55:55 +01:00
woikos
be72b694eb Add BBolt rate limiting and tune Badger defaults for large archives (v0.48.12)
- Increase Badger cache defaults: block 512→1024MB, index 256→512MB
- Increase serial cache defaults: pubkeys 100k→250k, event IDs 500k→1M
- Change ZSTD default from level 1 (fast) to level 3 (balanced)
- Add memory-only rate limiter for BBolt backend
- Add BBolt to database backend docs with scaling recommendations
- Document migration between Badger and BBolt backends

Files modified:
- app/config/config.go: Tuned defaults for large-scale deployments
- main.go: Add BBolt rate limiter support
- pkg/ratelimit/factory.go: Add NewMemoryOnlyLimiter factory
- pkg/ratelimit/memory_monitor.go: New memory-only load monitor
- CLAUDE.md: Add BBolt docs and scaling guide

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-10 11:55:07 +01:00
woikos
61f6027a64 Remove auto-profile creation and add auth config docs (v0.48.11)
- Remove createDefaultProfile() function from nostr.js that auto-created
  placeholder profiles for new users - profiles should not be auto-generated
- Add auth-required configuration caution section to CLAUDE.md documenting
  risks of enabling NIP-42 auth on production relays

Files modified:
- CLAUDE.md: Added auth-required configuration section
- app/web/src/nostr.js: Removed createDefaultProfile and auto-profile logic
- app/web/dist/bundle.js: Rebuilt with changes
- pkg/version/version: v0.48.11

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-10 10:22:56 +01:00
12 changed files with 389 additions and 108 deletions

.claude/commands/release.md

@@ -49,10 +49,12 @@ If no argument provided, default to `patch`.
GIT_SSH_COMMAND="ssh -i ~/.ssh/gitmlekudev" git push ssh://mleku@git.mleku.dev:2222/mleku/next.orly.dev.git main --tags
```
-11. **Deploy to VPS** by running:
-```
-ssh relay.orly.dev 'cd ~/src/next.orly.dev && git stash && git pull origin main && export PATH=$PATH:~/go/bin && CGO_ENABLED=0 go build -o ~/.local/bin/next.orly.dev && sudo /usr/sbin/setcap cap_net_bind_service=+ep ~/.local/bin/next.orly.dev && sudo systemctl restart orly && ~/.local/bin/next.orly.dev version'
-```
+11. **Deploy to relay.orly.dev** (ARM64):
+Build on remote (faster than uploading cross-compiled binary due to slow local bandwidth):
+```bash
+ssh relay.orly.dev 'cd ~/src/next.orly.dev && git pull origin main && GOPATH=$HOME CGO_ENABLED=0 ~/go/bin/go build -o ~/.local/bin/next.orly.dev && sudo /usr/sbin/setcap cap_net_bind_service=+ep ~/.local/bin/next.orly.dev && sudo systemctl restart orly && ~/.local/bin/next.orly.dev version'
+```
+Note: setcap must be re-applied after each binary rebuild to allow binding to ports 80/443.
12. **Report completion** with the new version and commit hash

CLAUDE.md

@@ -40,7 +40,7 @@ NOSTR_SECRET_KEY=nsec1... ./nurl https://relay.example.com/api/logs/clear
|----------|---------|-------------|
| `ORLY_PORT` | 3334 | Server port |
| `ORLY_LOG_LEVEL` | info | trace/debug/info/warn/error |
-| `ORLY_DB_TYPE` | badger | badger/neo4j/wasmdb |
+| `ORLY_DB_TYPE` | badger | badger/bbolt/neo4j/wasmdb |
| `ORLY_POLICY_ENABLED` | false | Enable policy system |
| `ORLY_ACL_MODE` | none | none/follows/managed |
| `ORLY_TLS_DOMAINS` | | Let's Encrypt domains |
@@ -67,6 +67,7 @@ app/
web/ → Svelte frontend (embedded via go:embed)
pkg/
database/ → Database interface + Badger implementation
bbolt/ → BBolt backend (HDD-optimized, B+tree)
neo4j/ → Neo4j backend with WoT extensions
wasmdb/ → WebAssembly IndexedDB backend
protocol/ → Nostr protocol (ws/, auth/, publish/)
@@ -130,16 +131,78 @@ if timeout > DefaultTimeoutSeconds {
- Provide public API methods (`IsEnabled()`, `CheckPolicy()`)
- Never change unexported→exported to fix bugs
### 6. Auth-Required Configuration (CAUTION)
**Be extremely careful when modifying auth-related settings in deployment configs.**
The `ORLY_AUTH_REQUIRED` and `ORLY_AUTH_TO_WRITE` settings control whether clients must authenticate via NIP-42 before interacting with the relay. Changing these on a production relay can:
- **Lock out all existing clients** if they don't support NIP-42 auth
- **Break automated systems** (bots, bridges, scrapers) that depend on anonymous access
- **Cause data sync issues** if upstream relays can't push events
Before enabling auth-required on any deployment:
1. Verify all expected clients support NIP-42
2. Ensure the relay identity key is properly configured
3. Test with a non-production instance first
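This checklist can also be enforced mechanically. A minimal startup-guard sketch, using only the env var names documented above (the config struct fields backing them are not shown in this changeset, so the sketch reads the environment directly):
```go
package main

import (
	"log"
	"os"
	"strconv"
)

// Sketch: surface the NIP-42 lock-out risk loudly at startup instead of
// letting an auth flag flip pass unnoticed in a deployment config.
// Only the env var names are taken from the section above.
func main() {
	authRequired, _ := strconv.ParseBool(os.Getenv("ORLY_AUTH_REQUIRED"))
	authToWrite, _ := strconv.ParseBool(os.Getenv("ORLY_AUTH_TO_WRITE"))
	if authRequired || authToWrite {
		log.Println("WARNING: NIP-42 auth enforcement enabled; clients, " +
			"bots, and bridges without NIP-42 support will be locked out")
	}
}
```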
## Database Backends
| Backend | Use Case | Build |
|---------|----------|-------|
-| **Badger** (default) | Single-instance, embedded | Standard |
+| **Badger** (default) | Single-instance, SSD, high performance | Standard |
+| **BBolt** | HDD-optimized, large archives, lower memory | `ORLY_DB_TYPE=bbolt` |
| **Neo4j** | Social graph, WoT queries | `ORLY_DB_TYPE=neo4j` |
| **WasmDB** | Browser/WebAssembly | `GOOS=js GOARCH=wasm` |
All implement `pkg/database.Database` interface.
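The `main.go` and `app/main.go` diffs below key backend-specific wiring (Blossom, rate limiting) off type assertions against this interface. A condensed sketch of that pattern, using the import paths from those diffs (the factory producing the `database.Database` value is not shown here):
```go
package main

import (
	"fmt"

	bboltdb "next.orly.dev/pkg/bbolt"
	"next.orly.dev/pkg/database"
)

// describeBackend condenses the pattern from main.go / app/main.go below:
// concrete backend types are detected behind the shared interface to
// decide which features to wire up.
func describeBackend(db database.Database) string {
	switch db.(type) {
	case *database.D: // Badger: Blossom server, Badger-aware rate limiter
		return "badger"
	case *bboltdb.B: // BBolt: memory-only rate limiter (mmap-backed IO)
		return "bbolt"
	default: // neo4j, wasmdb, ...: their own limiter or a disabled one
		return "other"
	}
}

func main() {
	var db database.Database // in practice: returned by the backend factory
	fmt.Println(describeBackend(db)) // prints "other" for this nil sketch value
}
```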
### Scaling for Large Archives
For archives with millions of events, consider:
**Option 1: Tune Badger (SSD recommended)**
```bash
# Increase caches for larger working set (requires more RAM)
ORLY_DB_BLOCK_CACHE_MB=2048 # 2GB block cache
ORLY_DB_INDEX_CACHE_MB=1024 # 1GB index cache
ORLY_SERIAL_CACHE_PUBKEYS=500000 # 500k pubkeys
ORLY_SERIAL_CACHE_EVENT_IDS=2000000 # 2M event IDs
# Higher compression to reduce disk IO
ORLY_DB_ZSTD_LEVEL=9 # Best compression ratio
# Enable storage GC with aggressive eviction
ORLY_GC_ENABLED=true
ORLY_GC_BATCH_SIZE=5000
ORLY_MAX_STORAGE_BYTES=107374182400 # 100GB cap
```
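These knobs mirror standard Badger v4 options. A sketch of how such values would plug into `github.com/dgraph-io/badger/v4` directly (illustrative only; ORLY's actual option plumbing is not part of this diff):
```go
package tuning

import badger "github.com/dgraph-io/badger/v4"

// openTunedBadger maps the env values above onto Badger v4's option API.
func openTunedBadger(path string) (*badger.DB, error) {
	opts := badger.DefaultOptions(path).
		WithBlockCacheSize(2048 << 20). // ORLY_DB_BLOCK_CACHE_MB=2048
		WithIndexCacheSize(1024 << 20). // ORLY_DB_INDEX_CACHE_MB=1024
		WithZSTDCompressionLevel(9)     // ORLY_DB_ZSTD_LEVEL=9
	return badger.Open(opts)
}
```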
**Option 2: Use BBolt for HDD/Low-Memory Deployments**
```bash
ORLY_DB_TYPE=bbolt
# Tune for your HDD
ORLY_BBOLT_BATCH_MAX_EVENTS=10000 # Larger batches for HDD
ORLY_BBOLT_BATCH_MAX_MB=256 # 256MB batch buffer
ORLY_BBOLT_FLUSH_TIMEOUT_SEC=60 # Longer flush interval
ORLY_BBOLT_BLOOM_SIZE_MB=32 # Larger bloom filter
ORLY_BBOLT_MMAP_SIZE_MB=16384 # 16GB mmap (scales with DB size)
```
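The mmap knob plausibly maps onto `go.etcd.io/bbolt`'s `InitialMmapSize` (an assumption; the BBolt open path is not shown in this diff). Pre-sizing the mmap at or above the database file avoids remap stalls as it grows:
```go
package tuning

import (
	"time"

	bolt "go.etcd.io/bbolt"
)

// openTunedBBolt pre-sizes the mmap so the file can grow without remapping.
// The InitialMmapSize mapping is an assumption; sizes >2GB need a 64-bit build.
func openTunedBBolt(path string, mmapMB int) (*bolt.DB, error) {
	return bolt.Open(path, 0o600, &bolt.Options{
		Timeout:         time.Second,  // fail fast if another process holds the lock
		InitialMmapSize: mmapMB << 20, // e.g. ORLY_BBOLT_MMAP_SIZE_MB=16384 -> 16GB
	})
}
```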
**Migration Between Backends**
```bash
# Migrate from Badger to BBolt
./orly migrate --from badger --to bbolt
# Migrate with custom target path
./orly migrate --from badger --to bbolt --target-path /mnt/hdd/orly-archive
```
**BBolt vs Badger Trade-offs:**
- BBolt: Lower memory, HDD-friendly, simpler (B+tree), slower random reads
- Badger: Higher memory, SSD-optimized (LSM), faster concurrent access
## Logging (lol.mleku.dev)
```go
@@ -206,7 +269,8 @@ if (isValidNsec(nsec)) { ... }
## Dependencies
-- `github.com/dgraph-io/badger/v4` - Badger DB
+- `github.com/dgraph-io/badger/v4` - Badger DB (LSM, SSD-optimized)
+- `go.etcd.io/bbolt` - BBolt DB (B+tree, HDD-optimized)
- `github.com/neo4j/neo4j-go-driver/v5` - Neo4j
- `github.com/gorilla/websocket` - WebSocket
- `github.com/ebitengine/purego` - CGO-free C loading

app/config/config.go

@@ -41,9 +41,9 @@ type C struct {
EnableShutdown bool `env:"ORLY_ENABLE_SHUTDOWN" default:"false" usage:"if true, expose /shutdown on the health port to gracefully stop the process (for profiling)"`
LogLevel string `env:"ORLY_LOG_LEVEL" default:"info" usage:"relay log level: fatal error warn info debug trace"`
DBLogLevel string `env:"ORLY_DB_LOG_LEVEL" default:"info" usage:"database log level: fatal error warn info debug trace"`
-DBBlockCacheMB int `env:"ORLY_DB_BLOCK_CACHE_MB" default:"512" usage:"Badger block cache size in MB (higher improves read hit ratio)"`
-DBIndexCacheMB int `env:"ORLY_DB_INDEX_CACHE_MB" default:"256" usage:"Badger index cache size in MB (improves index lookup performance)"`
-DBZSTDLevel int `env:"ORLY_DB_ZSTD_LEVEL" default:"1" usage:"Badger ZSTD compression level (1=fast/500MB/s, 3=default, 9=best ratio, 0=disable)"`
+DBBlockCacheMB int `env:"ORLY_DB_BLOCK_CACHE_MB" default:"1024" usage:"Badger block cache size in MB (higher improves read hit ratio, increase for large archives)"`
+DBIndexCacheMB int `env:"ORLY_DB_INDEX_CACHE_MB" default:"512" usage:"Badger index cache size in MB (improves index lookup performance, increase for large archives)"`
+DBZSTDLevel int `env:"ORLY_DB_ZSTD_LEVEL" default:"3" usage:"Badger ZSTD compression level (1=fast/500MB/s, 3=balanced, 9=best ratio/slower, 0=disable)"`
LogToStdout bool `env:"ORLY_LOG_TO_STDOUT" default:"false" usage:"log to stdout instead of stderr"`
LogBufferSize int `env:"ORLY_LOG_BUFFER_SIZE" default:"10000" usage:"number of log entries to keep in memory for web UI viewing (0 disables)"`
Pprof string `env:"ORLY_PPROF" usage:"enable pprof in modes: cpu,memory,allocation,heap,block,goroutine,threadcreate,mutex"`
@@ -72,7 +72,8 @@ type C struct {
FollowsThrottlePerEvent time.Duration `env:"ORLY_FOLLOWS_THROTTLE_INCREMENT" default:"200ms" usage:"delay added per event for non-followed users"`
FollowsThrottleMaxDelay time.Duration `env:"ORLY_FOLLOWS_THROTTLE_MAX" default:"60s" usage:"maximum throttle delay cap"`
-// Blossom blob storage service level settings
+// Blossom blob storage service settings
+BlossomEnabled bool `env:"ORLY_BLOSSOM_ENABLED" default:"true" usage:"enable Blossom blob storage server (only works with Badger backend)"`
BlossomServiceLevels string `env:"ORLY_BLOSSOM_SERVICE_LEVELS" usage:"comma-separated list of service levels in format: name:storage_mb_per_sat_per_month (e.g., basic:1,premium:10)"`
// Web UI and dev mode settings
@@ -124,9 +125,9 @@ type C struct {
Neo4jMaxTxRetrySeconds int `env:"ORLY_NEO4J_MAX_TX_RETRY_SEC" default:"30" usage:"max seconds for retryable transaction attempts"`
Neo4jQueryResultLimit int `env:"ORLY_NEO4J_QUERY_RESULT_LIMIT" default:"10000" usage:"max results returned per query (prevents unbounded memory usage, 0=unlimited)"`
-// Advanced database tuning
-SerialCachePubkeys int `env:"ORLY_SERIAL_CACHE_PUBKEYS" default:"100000" usage:"max pubkeys to cache for compact event storage (default: 100000, ~3.2MB memory)"`
-SerialCacheEventIds int `env:"ORLY_SERIAL_CACHE_EVENT_IDS" default:"500000" usage:"max event IDs to cache for compact event storage (default: 500000, ~16MB memory)"`
+// Advanced database tuning (increase for large archives to reduce cache misses)
+SerialCachePubkeys int `env:"ORLY_SERIAL_CACHE_PUBKEYS" default:"250000" usage:"max pubkeys to cache for compact event storage (~8MB memory, increase for large archives)"`
+SerialCacheEventIds int `env:"ORLY_SERIAL_CACHE_EVENT_IDS" default:"1000000" usage:"max event IDs to cache for compact event storage (~32MB memory, increase for large archives)"`
// Connection concurrency control
MaxHandlersPerConnection int `env:"ORLY_MAX_HANDLERS_PER_CONN" default:"100" usage:"max concurrent message handlers per WebSocket connection (limits goroutine growth under load)"`
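The memory estimates in the serial cache usage strings above follow from 32-byte entries (pubkeys and event IDs are both 32 bytes), ignoring map overhead; the old 100000 ≈ 3.2MB figure fits the same rule. A quick check:
```go
package main

import "fmt"

// Back-of-envelope check of the serial cache memory estimates:
// both pubkeys and event IDs are 32-byte values, so memory scales
// linearly with entry count.
func main() {
	const entryBytes = 32
	for _, n := range []int{100_000, 250_000, 1_000_000} {
		fmt.Printf("%d entries x %d B = %.1f MB\n",
			n, entryBytes, float64(n*entryBytes)/1e6)
	}
	// 100000 -> 3.2 MB, 250000 -> 8.0 MB, 1000000 -> 32.0 MB
}
```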

app/main.go

@@ -435,7 +435,7 @@ func Run(
// Initialize Blossom blob storage server (only for Badger backend)
// MUST be done before UserInterface() which registers routes
-if badgerDB, ok := db.(*database.D); ok {
+if badgerDB, ok := db.(*database.D); ok && cfg.BlossomEnabled {
log.I.F("Badger backend detected, initializing Blossom server...")
if l.blossomServer, err = initializeBlossomServer(ctx, cfg, badgerDB); err != nil {
log.E.F("failed to initialize blossom server: %v", err)
@@ -445,6 +445,8 @@ func Run(
} else {
log.W.F("blossom server initialization returned nil without error")
}
} else if !cfg.BlossomEnabled {
log.I.F("Blossom server disabled via ORLY_BLOSSOM_ENABLED=false")
} else {
log.I.F("Non-Badger backend detected (type: %T), Blossom server not available", db)
}

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

app/web/scripts/fetch-kinds.js

@@ -6,25 +6,35 @@
import { fileURLToPath } from 'url';
import { dirname, join } from 'path';
-import { writeFileSync } from 'fs';
+import { writeFileSync, existsSync } from 'fs';
const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);
const KINDS_URL = 'https://git.mleku.dev/mleku/nostr/raw/branch/main/encoders/kind/kinds.json';
const OUTPUT_PATH = join(__dirname, '..', 'src', 'eventKinds.js');
async function fetchKinds() {
console.log(`Fetching kinds from ${KINDS_URL}...`);
-const response = await fetch(KINDS_URL);
-if (!response.ok) {
-throw new Error(`Failed to fetch kinds.json: ${response.status} ${response.statusText}`);
+try {
+const response = await fetch(KINDS_URL, { timeout: 10000 });
+if (!response.ok) {
+throw new Error(`HTTP ${response.status} ${response.statusText}`);
+}
+const data = await response.json();
+console.log(`Fetched ${Object.keys(data.kinds).length} kinds (version: ${data.version})`);
+return data;
+} catch (error) {
+// Check if we have an existing eventKinds.js we can use
+if (existsSync(OUTPUT_PATH)) {
+console.warn(`Warning: Could not fetch kinds.json (${error.message})`);
+console.log(`Using existing ${OUTPUT_PATH}`);
+return null; // Signal to skip generation
+}
+throw new Error(`Failed to fetch kinds.json and no existing file: ${error.message}`);
}
-const data = await response.json();
-console.log(`Fetched ${Object.keys(data.kinds).length} kinds (version: ${data.version})`);
-return data;
}
function generateEventKinds(data) {
@@ -202,14 +212,18 @@ export const kindCategories = [
async function main() {
try {
const data = await fetchKinds();
// If fetchKinds returned null, we're using the existing file
if (data === null) {
console.log('Skipping generation, using existing eventKinds.js');
return;
}
const kinds = generateEventKinds(data);
const js = generateJS(kinds, data);
// Write to src/eventKinds.js
-const outPath = join(__dirname, '..', 'src', 'eventKinds.js');
-writeFileSync(outPath, js);
-console.log(`Generated ${outPath} with ${kinds.length} kinds`);
+writeFileSync(OUTPUT_PATH, js);
+console.log(`Generated ${OUTPUT_PATH} with ${kinds.length} kinds`);
} catch (error) {
console.error('Error:', error.message);
process.exit(1);

app/web/src/nostr.js

@@ -179,6 +179,28 @@ export class Nip07Signer {
}
}
// Merge two event arrays, deduplicating by event id
// Newer events (by created_at) take precedence for same id
function mergeAndDeduplicateEvents(cached, relay) {
const eventMap = new Map();
// Add cached events first
for (const event of cached) {
eventMap.set(event.id, event);
}
// Add/update with relay events (they may be newer)
for (const event of relay) {
const existing = eventMap.get(event.id);
if (!existing || event.created_at >= existing.created_at) {
eventMap.set(event.id, event);
}
}
// Return sorted by created_at descending (newest first)
return Array.from(eventMap.values()).sort((a, b) => b.created_at - a.created_at);
}
// IndexedDB helpers for unified event storage
// This provides a local cache that all components can access
const DB_NAME = "nostrCache";
@@ -480,14 +502,9 @@ export async function fetchUserProfile(pubkey) {
console.warn("Failed to fetch profile from fallback relays:", error);
}
-// 4) No profile found anywhere - create a default profile for new users
-console.log("No profile found for pubkey, creating default:", pubkey);
-try {
-return await createDefaultProfile(pubkey);
-} catch (e) {
-console.error("Failed to create default profile:", e);
-return null;
-}
+// 4) No profile found anywhere
+console.log("No profile found for pubkey:", pubkey);
+return null;
}
// Helper to fetch profile from fallback relays
@@ -561,57 +578,6 @@ async function processProfileEvent(profileEvent, pubkey) {
return profile;
}
-/**
-* Create a default profile for new users
-* @param {string} pubkey - The user's public key (hex)
-* @returns {Promise<Object>} - The created profile
-*/
-async function createDefaultProfile(pubkey) {
-// Generate name from first 6 chars of pubkey
-const shortId = pubkey.slice(0, 6);
-const defaultName = `testuser${shortId}`;
-// Get the current origin for the logo URL
-const logoUrl = `${window.location.origin}/orly.png`;
-const profileContent = {
-name: defaultName,
-display_name: defaultName,
-picture: logoUrl,
-about: "New ORLY user"
-};
-const profile = {
-name: defaultName,
-displayName: defaultName,
-picture: logoUrl,
-about: "New ORLY user",
-pubkey: pubkey
-};
-// Try to publish the profile if we have a signer
-if (nostrClient.signer) {
-try {
-const event = {
-kind: 0,
-content: JSON.stringify(profileContent),
-tags: [],
-created_at: Math.floor(Date.now() / 1000)
-};
-// Sign and publish using the websocket-auth client
-const signedEvent = await nostrClient.signer.signEvent(event);
-await nostrClient.publish(signedEvent);
-console.log("Default profile published:", signedEvent.id);
-} catch (e) {
-console.warn("Failed to publish default profile:", e);
-// Still return the profile even if publishing fails
-}
-}
-return profile;
-}
// Fetch events
export async function fetchEvents(filters, options = {}) {
console.log(`Starting event fetch with filters:`, JSON.stringify(filters, null, 2));
@@ -629,9 +595,10 @@ export async function fetchEvents(filters, options = {}) {
} = options;
// Try to get cached events first if requested
+let cachedEvents = [];
if (useCache) {
try {
-const cachedEvents = await queryEventsFromDB(filters);
+cachedEvents = await queryEventsFromDB(filters);
if (cachedEvents.length > 0) {
console.log(`Found ${cachedEvents.length} cached events in IndexedDB`);
}
@@ -641,17 +608,19 @@ export async function fetchEvents(filters, options = {}) {
}
return new Promise((resolve, reject) => {
-const events = [];
+const relayEvents = [];
const timeoutId = setTimeout(() => {
-console.log(`Timeout reached after ${timeout}ms, returning ${events.length} events`);
+console.log(`Timeout reached after ${timeout}ms, returning ${relayEvents.length} relay events`);
sub.close();
// Store all received events in IndexedDB before resolving
-if (events.length > 0) {
-putEvents(events).catch(e => console.warn("Failed to cache events", e));
+if (relayEvents.length > 0) {
+putEvents(relayEvents).catch(e => console.warn("Failed to cache events", e));
}
-resolve(events);
+// Merge cached events with relay events, deduplicate by id
+const mergedEvents = mergeAndDeduplicateEvents(cachedEvents, relayEvents);
+resolve(mergedEvents);
}, timeout);
try {
@@ -671,22 +640,25 @@ export async function fetchEvents(filters, options = {}) {
created_at: event.created_at,
content_preview: event.content?.substring(0, 50)
});
-events.push(event);
+relayEvents.push(event);
// Store event immediately in IndexedDB
putEvent(event).catch(e => console.warn("Failed to cache event", e));
},
oneose() {
-console.log(`✅ EOSE received for REQ [${subId}], got ${events.length} events`);
+console.log(`✅ EOSE received for REQ [${subId}], got ${relayEvents.length} relay events`);
clearTimeout(timeoutId);
sub.close();
// Store all events in IndexedDB before resolving
-if (events.length > 0) {
-putEvents(events).catch(e => console.warn("Failed to cache events", e));
+if (relayEvents.length > 0) {
+putEvents(relayEvents).catch(e => console.warn("Failed to cache events", e));
}
-resolve(events);
+// Merge cached events with relay events, deduplicate by id
+const mergedEvents = mergeAndDeduplicateEvents(cachedEvents, relayEvents);
+console.log(`Merged ${cachedEvents.length} cached + ${relayEvents.length} relay = ${mergedEvents.length} total events`);
+resolve(mergedEvents);
}
}
);

main.go

@@ -24,7 +24,7 @@ import (
"next.orly.dev/pkg/acl"
"git.mleku.dev/mleku/nostr/crypto/keys"
"git.mleku.dev/mleku/nostr/encoders/bech32encoding"
_ "next.orly.dev/pkg/bbolt" // Import for bbolt factory registration
bboltdb "next.orly.dev/pkg/bbolt" // Import for bbolt factory and type
"next.orly.dev/pkg/database"
neo4jdb "next.orly.dev/pkg/neo4j" // Import for neo4j factory and type
"git.mleku.dev/mleku/nostr/encoders/hex"
@@ -617,6 +617,10 @@ func main() {
n4jDB.MaxConcurrentQueries(),
)
log.I.F("rate limiter configured for Neo4j backend (target: %dMB)", targetMB)
} else if _, ok := db.(*bboltdb.B); ok {
// BBolt uses memory-mapped IO, so memory-only limiter is appropriate
limiter = ratelimit.NewMemoryOnlyLimiter(rlConfig)
log.I.F("rate limiter configured for BBolt backend (target: %dMB)", targetMB)
} else {
// For other backends, create a disabled limiter
limiter = ratelimit.NewDisabledLimiter()

pkg/ratelimit/factory.go

@@ -54,3 +54,11 @@ func MonitorFromNeo4jDriver(
) loadmonitor.Monitor {
return NewNeo4jMonitor(driver, querySem, maxConcurrency, 100*time.Millisecond)
}
// NewMemoryOnlyLimiter creates a rate limiter that only monitors process memory.
// Useful for database backends that don't have their own load metrics (e.g., BBolt).
// Since BBolt uses memory-mapped IO, memory pressure is still relevant.
func NewMemoryOnlyLimiter(config Config) *Limiter {
monitor := NewMemoryMonitor(100 * time.Millisecond)
return NewLimiter(config, monitor)
}

pkg/ratelimit/memory_monitor.go (new file)

@@ -0,0 +1,214 @@
//go:build !(js && wasm)
package ratelimit
import (
"sync"
"sync/atomic"
"time"
"next.orly.dev/pkg/interfaces/loadmonitor"
)
// MemoryMonitor is a simple load monitor that only tracks process memory.
// Used for database backends that don't have their own load metrics (e.g., BBolt).
type MemoryMonitor struct {
// Configuration
pollInterval time.Duration
targetBytes atomic.Uint64
// State
running atomic.Bool
stopChan chan struct{}
doneChan chan struct{}
// Metrics (protected by mutex)
mu sync.RWMutex
currentMetrics loadmonitor.Metrics
// Latency tracking
queryLatencies []time.Duration
writeLatencies []time.Duration
latencyMu sync.Mutex
// Emergency mode
emergencyThreshold float64 // e.g., 1.167 (target + 1/6)
recoveryThreshold float64 // e.g., 0.833 (target - 1/6)
inEmergency atomic.Bool
}
// NewMemoryMonitor creates a memory-only load monitor.
// pollInterval controls how often memory is sampled (recommended: 100ms).
func NewMemoryMonitor(pollInterval time.Duration) *MemoryMonitor {
m := &MemoryMonitor{
pollInterval: pollInterval,
stopChan: make(chan struct{}),
doneChan: make(chan struct{}),
queryLatencies: make([]time.Duration, 0, 100),
writeLatencies: make([]time.Duration, 0, 100),
emergencyThreshold: 1.167, // Default: target + 1/6
recoveryThreshold: 0.833, // Default: target - 1/6
}
return m
}
// GetMetrics returns the current load metrics.
func (m *MemoryMonitor) GetMetrics() loadmonitor.Metrics {
m.mu.RLock()
defer m.mu.RUnlock()
return m.currentMetrics
}
// RecordQueryLatency records a query latency sample.
func (m *MemoryMonitor) RecordQueryLatency(latency time.Duration) {
m.latencyMu.Lock()
defer m.latencyMu.Unlock()
m.queryLatencies = append(m.queryLatencies, latency)
if len(m.queryLatencies) > 100 {
m.queryLatencies = m.queryLatencies[1:]
}
}
// RecordWriteLatency records a write latency sample.
func (m *MemoryMonitor) RecordWriteLatency(latency time.Duration) {
m.latencyMu.Lock()
defer m.latencyMu.Unlock()
m.writeLatencies = append(m.writeLatencies, latency)
if len(m.writeLatencies) > 100 {
m.writeLatencies = m.writeLatencies[1:]
}
}
// SetMemoryTarget sets the target memory limit in bytes.
func (m *MemoryMonitor) SetMemoryTarget(bytes uint64) {
m.targetBytes.Store(bytes)
}
// SetEmergencyThreshold sets the memory threshold for emergency mode.
func (m *MemoryMonitor) SetEmergencyThreshold(threshold float64) {
m.mu.Lock()
defer m.mu.Unlock()
m.emergencyThreshold = threshold
}
// GetEmergencyThreshold returns the current emergency threshold.
func (m *MemoryMonitor) GetEmergencyThreshold() float64 {
m.mu.RLock()
defer m.mu.RUnlock()
return m.emergencyThreshold
}
// ForceEmergencyMode manually triggers emergency mode for a duration.
func (m *MemoryMonitor) ForceEmergencyMode(duration time.Duration) {
m.inEmergency.Store(true)
go func() {
time.Sleep(duration)
m.inEmergency.Store(false)
}()
}
// Start begins background metric collection.
func (m *MemoryMonitor) Start() <-chan struct{} {
if m.running.Swap(true) {
// Already running
return m.doneChan
}
go m.pollLoop()
return m.doneChan
}
// Stop halts background metric collection.
func (m *MemoryMonitor) Stop() {
if !m.running.Swap(false) {
return
}
close(m.stopChan)
<-m.doneChan
}
// pollLoop continuously samples memory and updates metrics.
func (m *MemoryMonitor) pollLoop() {
defer close(m.doneChan)
ticker := time.NewTicker(m.pollInterval)
defer ticker.Stop()
for {
select {
case <-m.stopChan:
return
case <-ticker.C:
m.updateMetrics()
}
}
}
// updateMetrics samples current memory and updates the metrics.
func (m *MemoryMonitor) updateMetrics() {
target := m.targetBytes.Load()
if target == 0 {
target = 1 // Avoid division by zero
}
// Get physical memory using the same method as other monitors
procMem := ReadProcessMemoryStats()
physicalMemBytes := procMem.PhysicalMemoryBytes()
physicalMemMB := physicalMemBytes / (1024 * 1024)
// Calculate memory pressure
memPressure := float64(physicalMemBytes) / float64(target)
// Check emergency mode thresholds
m.mu.RLock()
emergencyThreshold := m.emergencyThreshold
recoveryThreshold := m.recoveryThreshold
m.mu.RUnlock()
wasEmergency := m.inEmergency.Load()
if memPressure > emergencyThreshold {
m.inEmergency.Store(true)
} else if memPressure < recoveryThreshold && wasEmergency {
m.inEmergency.Store(false)
}
// Calculate average latencies
m.latencyMu.Lock()
var avgQuery, avgWrite time.Duration
if len(m.queryLatencies) > 0 {
var total time.Duration
for _, l := range m.queryLatencies {
total += l
}
avgQuery = total / time.Duration(len(m.queryLatencies))
}
if len(m.writeLatencies) > 0 {
var total time.Duration
for _, l := range m.writeLatencies {
total += l
}
avgWrite = total / time.Duration(len(m.writeLatencies))
}
m.latencyMu.Unlock()
// Update metrics
m.mu.Lock()
m.currentMetrics = loadmonitor.Metrics{
MemoryPressure: memPressure,
WriteLoad: 0, // No database-specific load metric
ReadLoad: 0, // No database-specific load metric
QueryLatency: avgQuery,
WriteLatency: avgWrite,
Timestamp: time.Now(),
InEmergencyMode: m.inEmergency.Load(),
CompactionPending: false, // BBolt doesn't have compaction
PhysicalMemoryMB: physicalMemMB,
}
m.mu.Unlock()
}
// Ensure MemoryMonitor implements the required interfaces
var _ loadmonitor.Monitor = (*MemoryMonitor)(nil)
var _ loadmonitor.EmergencyModeMonitor = (*MemoryMonitor)(nil)
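A minimal usage sketch for the new monitor (module path as in the file's imports above; the 1GB target is an arbitrary example value). `NewMemoryOnlyLimiter` in factory.go composes this same monitor with the shared limiter:
```go
package main

import (
	"fmt"
	"time"

	"next.orly.dev/pkg/ratelimit"
)

func main() {
	// Sample process memory every 100ms, as the factory does.
	monitor := ratelimit.NewMemoryMonitor(100 * time.Millisecond)
	monitor.SetMemoryTarget(1 << 30) // pressure = physical bytes / target

	done := monitor.Start()
	time.Sleep(300 * time.Millisecond) // allow a few polls to run

	m := monitor.GetMetrics()
	fmt.Printf("pressure=%.2f physical=%dMB emergency=%v\n",
		m.MemoryPressure, m.PhysicalMemoryMB, m.InEmergencyMode)

	monitor.Stop() // waits for the poll loop to exit
	<-done         // doneChan is already closed by Stop
}
```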

pkg/version/version

@@ -1 +1 @@
-v0.48.10
+v0.48.14