Compare commits

..

18 Commits

Author SHA1 Message Date
3c11aa6f01 refactored SaveEvent to return whether it is replacing an event
Some checks failed
Go / build (push) Has been cancelled
2025-10-10 22:18:53 +01:00
bc5177e0ec refactor save event method to expose whether it replaced an event 2025-10-10 22:16:07 +01:00
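The two SaveEvent commits above amount to a signature change, visible later in the diffs where `if _, _, err = l.SaveEvent(...)` becomes `if _, err = l.SaveEvent(...)`. A minimal sketch of the new contract, with hypothetical types standing in for the relay's real ones:

```go
package main

import "fmt"

// Store is a stand-in for the relay database; names here are assumptions.
type Store struct{ events map[string]string }

// SaveEvent now reports whether the write replaced an event already stored
// under the same ID, so callers drop one of the previous return values.
func (s *Store) SaveEvent(id, ev string) (replaced bool, err error) {
	_, replaced = s.events[id]
	s.events[id] = ev
	return replaced, nil
}

func main() {
	s := &Store{events: map[string]string{}}
	r1, _ := s.SaveEvent("abc", "v1") // new event
	r2, _ := s.SaveEvent("abc", "v2") // replaces v1
	fmt.Println(r1, r2)               // prints "false true"
}
```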
0cdf44c2c9 fully working deletes; bump to v0.14.0
2025-10-10 21:52:32 +01:00
40f3cb6f6e Enhance delete event functionality and UI updates
- Improved logging in handle-delete.go for admin and owner checks during delete operations.
- Updated handle-event.go to ensure delete events are saved before processing, with enhanced error handling.
- Added fetchDeleteEventsByTarget function in nostr.js to retrieve delete events targeting specific event IDs.
- Modified App.svelte to include delete event verification and improved event sorting by creation timestamp.
- Enhanced UI to display delete event information and added loading indicators for event refresh actions.
2025-10-10 21:23:36 +01:00
67a74980f9 Enhance delete event handling and logging
- Improved logging for delete events in handle-delete.go, including detailed information about the event and its tags.
- Added checks for admin and owner deletions, with appropriate logging for each case.
- Updated HandleEvent to process delete events more robustly, including success and error logging.
- Introduced a new fetchEventById function in nostr.js to verify event deletion.
- Updated App.svelte to handle event deletion verification and state management.
- Changed favicon references in HTML files to use the new orly-favicon.png.
- Added orly-favicon.png to the public and docs directories for consistent branding.
2025-10-10 20:36:53 +01:00
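The permission rule these two commits describe — admins and owners may delete any event, regular users only their own — reduces to the small predicate seen later in handle-delete.go. A sketch with illustrative names (not the relay's actual API):

```go
package main

import (
	"bytes"
	"fmt"
)

// canDelete reports whether a delete request signed by signer may remove an
// event authored by author. ownerDelete is true when the signer was matched
// against the relay's admin or owner lists.
func canDelete(ownerDelete bool, signer, author []byte) bool {
	return ownerDelete || bytes.Equal(signer, author)
}

func main() {
	alice, bob := []byte{0x01}, []byte{0x02}
	fmt.Println(canDelete(true, alice, bob))  // admin/owner: allowed
	fmt.Println(canDelete(false, bob, bob))   // author deleting own event: allowed
	fmt.Println(canDelete(false, alice, bob)) // other user: blocked
}
```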
dc184d7ff5 Revert ephemeral event handling changes that broke relaytester
- Remove ephemeral event handling in handle-event.go
- Remove ephemeral event rejection in save-event.go
- Remove formatTimestamp function and title attributes in App.svelte
- Remove TestEphemeralEventRejection test
- Fix slice bounds bug in get-serials-by-range.go
- Restore correct error message format for existing events
- Revert version from v0.13.2 to v0.12.3

This reverts commit 075838150d which introduced
a critical bug causing runtime panics in the relaytester.
2025-10-10 19:55:39 +01:00
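The revert mentions a slice bounds bug in get-serials-by-range.go; that file's diff is not shown here, but bugs of this class are typically prevented with a clamp of the following shape (function name and signature are assumptions, not the actual fix):

```go
package main

import "fmt"

// clampRange clamps a half-open [start, end) range so that slicing a buffer
// of the given length can never panic with "slice bounds out of range".
func clampRange(start, end, length int) (int, int) {
	if start < 0 {
		start = 0
	}
	if end > length {
		end = length
	}
	if start > end {
		start = end
	}
	return start, end
}

func main() {
	buf := []byte("serials")
	s, e := clampRange(2, 100, len(buf)) // end past the buffer is clamped
	fmt.Println(string(buf[s:e]))        // prints "rials"
}
```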
c31cada271 small cleanups 2025-10-10 17:42:53 +01:00
075dc6b545 Integrate NDK for Nostr client functionality and update dependencies
This commit introduces the Nostr Development Kit (NDK) to enhance the Nostr client functionality. Key changes include:

- Added `NDKPrivateKeySigner` for improved authentication methods in `LoginModal.svelte` and `App.svelte`.
- Refactored the Nostr client to utilize NDK for connection and event fetching, streamlining the connection process and event handling.
- Updated `go.mod` and `package.json` to include `@nostr-dev-kit/ndk` as a dependency.
- Created a new `package-lock.json` to reflect the updated dependency tree.

These changes improve the overall architecture and maintainability of the Nostr client.
2025-10-10 17:37:43 +01:00
919747c910 Remove fallback external relays from DEFAULT_RELAYS in constants.js to streamline relay connection handling. 2025-10-10 17:11:21 +01:00
0acf51baba fix favicon 2025-10-10 10:38:47 +01:00
e75d0deb7d Add orly.png image to web and docs directories 2025-10-10 10:30:48 +01:00
96276f2fc4 putting a proper favicon in
2025-10-10 10:25:50 +01:00
14a94feed6 add favicon 2025-10-10 09:44:25 +01:00
075838150d fix ephemeral handling to not save
2025-10-10 09:35:15 +01:00
2637f4b85c add simple fulltext search
2025-10-10 09:17:53 +01:00
27af174753 Implement event deletion logic with relay handling in App.svelte and add connectToRelay method in NostrClient
This commit enhances the event deletion process by introducing conditional publishing to external relays based on user roles and ownership. It also adds a new method in the NostrClient class to connect to a single relay, improving the flexibility of relay management. The version is bumped to v0.12.3 to reflect these changes.
2025-10-10 09:07:43 +01:00
cad366795a bump to v0.12.2 for sprocket failure handling fix
2025-10-09 19:56:25 +01:00
e14b89bc8b Enhance Sprocket functionality and error handling
This commit introduces significant improvements to the Sprocket system, including:

- Detailed documentation in `readme.adoc` for manual updates and failure handling.
- Implementation of automatic disablement of Sprocket on failure, with periodic checks for recovery.
- Enhanced logging for event rejection when Sprocket is disabled or not running.

These changes ensure better user guidance and system resilience during Sprocket failures.
2025-10-09 19:55:20 +01:00
50 changed files with 3737 additions and 868 deletions

View File

@@ -24,23 +24,46 @@ func (l *Listener) GetSerialsFromFilter(f *filter.F) (
}
func (l *Listener) HandleDelete(env *eventenvelope.Submission) (err error) {
// log.I.C(
// func() string {
// return fmt.Sprintf(
// "delete event\n%s", env.E.Serialize(),
// )
// },
// )
log.I.F("HandleDelete: processing delete event %0x from pubkey %0x", env.E.ID, env.E.Pubkey)
log.I.F("HandleDelete: delete event tags: %d tags", len(*env.E.Tags))
for i, t := range *env.E.Tags {
log.I.F("HandleDelete: tag %d: %s = %s", i, string(t.Key()), string(t.Value()))
}
// Debug: log admin and owner lists
log.I.F("HandleDelete: checking against %d admins and %d owners", len(l.Admins), len(l.Owners))
for i, pk := range l.Admins {
log.I.F("HandleDelete: admin[%d] = %0x (hex: %s)", i, pk, hex.Enc(pk))
}
for i, pk := range l.Owners {
log.I.F("HandleDelete: owner[%d] = %0x (hex: %s)", i, pk, hex.Enc(pk))
}
log.I.F("HandleDelete: delete event pubkey = %0x (hex: %s)", env.E.Pubkey, hex.Enc(env.E.Pubkey))
var ownerDelete bool
for _, pk := range l.Admins {
if utils.FastEqual(pk, env.E.Pubkey) {
ownerDelete = true
log.I.F("HandleDelete: delete event from admin/owner %0x", env.E.Pubkey)
break
}
}
if !ownerDelete {
for _, pk := range l.Owners {
if utils.FastEqual(pk, env.E.Pubkey) {
ownerDelete = true
log.I.F("HandleDelete: delete event from owner %0x", env.E.Pubkey)
break
}
}
}
if !ownerDelete {
log.I.F("HandleDelete: delete event from regular user %0x", env.E.Pubkey)
}
// process the tags in the delete event
var deleteErr error
var validDeletionFound bool
var deletionCount int
for _, t := range *env.E.Tags {
// first search for a tags, as these are the simplest to process
if utils.FastEqual(t.Key(), []byte("a")) {
@@ -109,8 +132,10 @@ func (l *Listener) HandleDelete(env *eventenvelope.Submission) (err error) {
if err = l.DeleteEventBySerial(
l.Ctx(), s, ev,
); chk.E(err) {
log.E.F("HandleDelete: failed to delete event %s: %v", hex.Enc(ev.ID), err)
continue
}
deletionCount++
}
}
}
@@ -121,21 +146,27 @@ func (l *Listener) HandleDelete(env *eventenvelope.Submission) (err error) {
if utils.FastEqual(t.Key(), []byte("e")) {
val := t.Value()
if len(val) == 0 {
log.W.F("HandleDelete: empty e-tag value")
continue
}
log.I.F("HandleDelete: processing e-tag with value: %s", string(val))
var dst []byte
if b, e := hex.Dec(string(val)); chk.E(e) {
log.E.F("HandleDelete: failed to decode hex event ID %s: %v", string(val), e)
continue
} else {
dst = b
log.I.F("HandleDelete: decoded event ID: %0x", dst)
}
f := &filter.F{
Ids: tag.NewFromBytesSlice(dst),
}
var sers types.Uint40s
if sers, err = l.GetSerialsFromFilter(f); chk.E(err) {
log.E.F("HandleDelete: failed to get serials from filter: %v", err)
continue
}
log.I.F("HandleDelete: found %d serials for event ID %s", len(sers), string(val))
// if found, delete them
if len(sers) > 0 {
// there should be only one event per serial, so we can just
@@ -145,8 +176,14 @@ func (l *Listener) HandleDelete(env *eventenvelope.Submission) (err error) {
if ev, err = l.FetchEventBySerial(s); chk.E(err) {
continue
}
// allow deletion if the signer is the author OR an admin/owner
if !(ownerDelete || utils.FastEqual(env.E.Pubkey, ev.Pubkey)) {
// Debug: log the comparison details
log.I.F("HandleDelete: checking deletion permission for event %s", hex.Enc(ev.ID))
log.I.F("HandleDelete: delete event pubkey = %s, target event pubkey = %s", hex.Enc(env.E.Pubkey), hex.Enc(ev.Pubkey))
log.I.F("HandleDelete: ownerDelete = %v, pubkey match = %v", ownerDelete, utils.FastEqual(env.E.Pubkey, ev.Pubkey))
// For admin/owner deletes: allow deletion regardless of pubkey match
// For regular users: allow deletion only if the signer is the author
if !ownerDelete && !utils.FastEqual(env.E.Pubkey, ev.Pubkey) {
log.W.F(
"HandleDelete: attempted deletion of event %s by unauthorized user - delete pubkey=%s, event pubkey=%s",
hex.Enc(ev.ID), hex.Enc(env.E.Pubkey),
@@ -154,6 +191,7 @@ func (l *Listener) HandleDelete(env *eventenvelope.Submission) (err error) {
)
continue
}
log.I.F("HandleDelete: deletion authorized for event %s", hex.Enc(ev.ID))
validDeletionFound = true
// exclude delete events
if ev.Kind == kind.EventDeletion.K {
@@ -164,8 +202,10 @@ func (l *Listener) HandleDelete(env *eventenvelope.Submission) (err error) {
hex.Enc(ev.ID), hex.Enc(env.E.Pubkey),
)
if err = l.DeleteEventBySerial(l.Ctx(), s, ev); chk.E(err) {
log.E.F("HandleDelete: failed to delete event %s: %v", hex.Enc(ev.ID), err)
continue
}
deletionCount++
}
continue
}
@@ -198,23 +238,32 @@ func (l *Listener) HandleDelete(env *eventenvelope.Submission) (err error) {
if ev, err = l.FetchEventBySerial(s); chk.E(err) {
continue
}
// check that the author is the same as the signer of the
// delete, for the k tag case the author is the signer of
// the event.
if !utils.FastEqual(env.E.Pubkey, ev.Pubkey) {
// For admin/owner deletes: allow deletion regardless of pubkey match
// For regular users: allow deletion only if the signer is the author
if !ownerDelete && !utils.FastEqual(env.E.Pubkey, ev.Pubkey) {
continue
}
validDeletionFound = true
log.I.F(
"HandleDelete: deleting event %s via k-tag by authorized user %s",
hex.Enc(ev.ID), hex.Enc(env.E.Pubkey),
)
if err = l.DeleteEventBySerial(l.Ctx(), s, ev); chk.E(err) {
log.E.F("HandleDelete: failed to delete event %s: %v", hex.Enc(ev.ID), err)
continue
}
deletionCount++
}
continue
}
}
continue
}
// If no valid deletions were found, return an error
if !validDeletionFound {
log.W.F("HandleDelete: no valid deletions found for event %0x", env.E.ID)
return fmt.Errorf("blocked: cannot delete events that belong to other users")
}
log.I.F("HandleDelete: successfully processed %d deletions for event %0x", deletionCount, env.E.ID)
return
}

View File

@@ -12,6 +12,7 @@ import (
"next.orly.dev/pkg/encoders/envelopes/authenvelope"
"next.orly.dev/pkg/encoders/envelopes/eventenvelope"
"next.orly.dev/pkg/encoders/envelopes/okenvelope"
"next.orly.dev/pkg/encoders/hex"
"next.orly.dev/pkg/encoders/kind"
"next.orly.dev/pkg/encoders/reason"
"next.orly.dev/pkg/utils"
@@ -21,25 +22,49 @@ func (l *Listener) HandleEvent(msg []byte) (err error) {
log.D.F("handling event: %s", msg)
// decode the envelope
env := eventenvelope.NewSubmission()
log.I.F("HandleEvent: received event message length: %d", len(msg))
if msg, err = env.Unmarshal(msg); chk.E(err) {
log.E.F("HandleEvent: failed to unmarshal event: %v", err)
return
}
log.I.F(
"HandleEvent: successfully unmarshaled event, kind: %d, pubkey: %s",
env.E.Kind, hex.Enc(env.E.Pubkey),
)
defer func() {
if env != nil && env.E != nil {
env.E.Free()
}
}()
log.I.F("HandleEvent: continuing with event processing...")
if len(msg) > 0 {
log.I.F("extra '%s'", msg)
}
// Check if sprocket is enabled and process event through it
if l.sprocketManager != nil && l.sprocketManager.IsEnabled() {
if !l.sprocketManager.IsRunning() {
// Sprocket is enabled but not running - drop all messages
log.W.F("sprocket is enabled but not running, dropping event %0x", env.E.ID)
if l.sprocketManager.IsDisabled() {
// Sprocket is disabled due to failure - reject all events
log.W.F("sprocket is disabled, rejecting event %0x", env.E.ID)
if err = Ok.Error(
l, env, "sprocket policy not available",
l, env,
"sprocket disabled - events rejected until sprocket is restored",
); chk.E(err) {
return
}
return
}
if !l.sprocketManager.IsRunning() {
// Sprocket is enabled but not running - reject all events
log.W.F(
"sprocket is enabled but not running, rejecting event %0x",
env.E.ID,
)
if err = Ok.Error(
l, env,
"sprocket not running - events rejected until sprocket starts",
); chk.E(err) {
return
}
@@ -117,45 +142,106 @@ func (l *Listener) HandleEvent(msg []byte) (err error) {
return
}
// check permissions of user
accessLevel := acl.Registry.GetAccessLevel(l.authedPubkey.Load(), l.remote)
switch accessLevel {
case "none":
log.D.F(
"handle event: sending 'OK,false,auth-required...' to %s", l.remote,
log.I.F(
"HandleEvent: checking ACL permissions for pubkey: %s",
hex.Enc(l.authedPubkey.Load()),
)
// If ACL mode is "none" and no pubkey is set, use the event's pubkey
var pubkeyForACL []byte
if len(l.authedPubkey.Load()) == 0 && acl.Registry.Active.Load() == "none" {
pubkeyForACL = env.E.Pubkey
log.I.F(
"HandleEvent: ACL mode is 'none', using event pubkey for ACL check: %s",
hex.Enc(pubkeyForACL),
)
if err = okenvelope.NewFrom(
env.Id(), false,
reason.AuthRequired.F("auth required for write access"),
).Write(l); chk.E(err) {
// return
}
log.D.F("handle event: sending challenge to %s", l.remote)
if err = authenvelope.NewChallengeWith(l.challenge.Load()).
Write(l); chk.E(err) {
return
}
return
case "read":
log.D.F(
"handle event: sending 'OK,false,auth-required:...' to %s",
l.remote,
)
if err = okenvelope.NewFrom(
env.Id(), false,
reason.AuthRequired.F("auth required for write access"),
).Write(l); chk.E(err) {
return
}
log.D.F("handle event: sending challenge to %s", l.remote)
if err = authenvelope.NewChallengeWith(l.challenge.Load()).
Write(l); chk.E(err) {
return
}
return
default:
// user has write access or better, continue
// log.D.F("user has %s access", accessLevel)
} else {
pubkeyForACL = l.authedPubkey.Load()
}
accessLevel := acl.Registry.GetAccessLevel(pubkeyForACL, l.remote)
log.I.F("HandleEvent: ACL access level: %s", accessLevel)
// Skip ACL check for admin/owner delete events
skipACLCheck := false
if env.E.Kind == kind.EventDeletion.K {
// Check if the delete event signer is admin or owner
for _, admin := range l.Admins {
if utils.FastEqual(admin, env.E.Pubkey) {
skipACLCheck = true
log.I.F("HandleEvent: admin delete event - skipping ACL check")
break
}
}
if !skipACLCheck {
for _, owner := range l.Owners {
if utils.FastEqual(owner, env.E.Pubkey) {
skipACLCheck = true
log.I.F("HandleEvent: owner delete event - skipping ACL check")
break
}
}
}
}
if !skipACLCheck {
switch accessLevel {
case "none":
log.D.F(
"handle event: sending 'OK,false,auth-required...' to %s",
l.remote,
)
if err = okenvelope.NewFrom(
env.Id(), false,
reason.AuthRequired.F("auth required for write access"),
).Write(l); chk.E(err) {
// return
}
log.D.F("handle event: sending challenge to %s", l.remote)
if err = authenvelope.NewChallengeWith(l.challenge.Load()).
Write(l); chk.E(err) {
return
}
return
case "read":
log.D.F(
"handle event: sending 'OK,false,auth-required:...' to %s",
l.remote,
)
if err = okenvelope.NewFrom(
env.Id(), false,
reason.AuthRequired.F("auth required for write access"),
).Write(l); chk.E(err) {
return
}
log.D.F("handle event: sending challenge to %s", l.remote)
if err = authenvelope.NewChallengeWith(l.challenge.Load()).
Write(l); chk.E(err) {
return
}
return
default:
// user has write access or better, continue
log.I.F("HandleEvent: user has %s access, continuing", accessLevel)
}
} else {
log.I.F("HandleEvent: skipping ACL check for admin/owner delete event")
}
// check if event is ephemeral - if so, deliver and return early
if kind.IsEphemeral(env.E.Kind) {
log.D.F("handling ephemeral event %0x (kind %d)", env.E.ID, env.E.Kind)
// Send OK response for ephemeral events
if err = Ok.Ok(l, env, ""); chk.E(err) {
return
}
// Deliver the event to subscribers immediately
clonedEvent := env.E.Clone()
go l.publishers.Deliver(clonedEvent)
log.D.F("delivered ephemeral event %0x", env.E.ID)
return
}
// check for protected tag (NIP-70)
protectedTag := env.E.Tags.GetFirst([]byte("-"))
if protectedTag != nil && acl.Registry.Active.Load() != "none" {
@@ -171,8 +257,25 @@ func (l *Listener) HandleEvent(msg []byte) (err error) {
}
}
// if the event is a delete, process the delete
log.I.F(
"HandleEvent: checking if event is delete - kind: %d, EventDeletion.K: %d",
env.E.Kind, kind.EventDeletion.K,
)
if env.E.Kind == kind.EventDeletion.K {
if err = l.HandleDelete(env); err != nil {
log.I.F("processing delete event %0x", env.E.ID)
// Store the delete event itself FIRST to ensure it's available for queries
saveCtx, cancel := context.WithTimeout(
context.Background(), 30*time.Second,
)
defer cancel()
log.I.F(
"attempting to save delete event %0x from pubkey %0x", env.E.ID,
env.E.Pubkey,
)
log.I.F("delete event pubkey hex: %s", hex.Enc(env.E.Pubkey))
if _, err = l.SaveEvent(saveCtx, env.E); err != nil {
log.E.F("failed to save delete event %0x: %v", env.E.ID, err)
if strings.HasPrefix(err.Error(), "blocked:") {
errStr := err.Error()[len("blocked: "):len(err.Error())]
if err = Ok.Error(
@@ -182,10 +285,46 @@ func (l *Listener) HandleEvent(msg []byte) (err error) {
}
return
}
chk.E(err)
return
}
log.I.F("successfully saved delete event %0x", env.E.ID)
// Now process the deletion (remove target events)
if err = l.HandleDelete(env); err != nil {
log.E.F("HandleDelete failed for event %0x: %v", env.E.ID, err)
if strings.HasPrefix(err.Error(), "blocked:") {
errStr := err.Error()[len("blocked: "):len(err.Error())]
if err = Ok.Error(
l, env, errStr,
); chk.E(err) {
return
}
return
}
// For non-blocked errors, still send OK but log the error
log.W.F("Delete processing failed but continuing: %v", err)
} else {
log.I.F(
"HandleDelete completed successfully for event %0x", env.E.ID,
)
}
// Send OK response for delete events
if err = Ok.Ok(l, env, ""); chk.E(err) {
return
}
// Deliver the delete event to subscribers
clonedEvent := env.E.Clone()
go l.publishers.Deliver(clonedEvent)
log.D.F("processed delete event %0x", env.E.ID)
return
} else {
// check if the event was deleted
if err = l.CheckForDeleted(env.E, l.Admins); err != nil {
// Combine admins and owners for deletion checking
adminOwners := append(l.Admins, l.Owners...)
if err = l.CheckForDeleted(env.E, adminOwners); err != nil {
if strings.HasPrefix(err.Error(), "blocked:") {
errStr := err.Error()[len("blocked: "):len(err.Error())]
if err = Ok.Error(
@@ -200,7 +339,7 @@ func (l *Listener) HandleEvent(msg []byte) (err error) {
saveCtx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()
// log.I.F("saving event %0x, %s", env.E.ID, env.E.Serialize())
if _, _, err = l.SaveEvent(saveCtx, env.E); err != nil {
if _, err = l.SaveEvent(saveCtx, env.E); err != nil {
if strings.HasPrefix(err.Error(), "blocked:") {
errStr := err.Error()[len("blocked: "):len(err.Error())]
if err = Ok.Error(
@@ -223,12 +362,21 @@ func (l *Listener) HandleEvent(msg []byte) (err error) {
go l.publishers.Deliver(clonedEvent)
log.D.F("saved event %0x", env.E.ID)
var isNewFromAdmin bool
// Check if event is from admin or owner
for _, admin := range l.Admins {
if utils.FastEqual(admin, env.E.Pubkey) {
isNewFromAdmin = true
break
}
}
if !isNewFromAdmin {
for _, owner := range l.Owners {
if utils.FastEqual(owner, env.E.Pubkey) {
isNewFromAdmin = true
break
}
}
}
if isNewFromAdmin {
log.I.F("new event from admin %0x", env.E.Pubkey)
// if a follow list was saved, reconfigure ACLs now that it is persisted

View File

@@ -19,11 +19,9 @@ func Run(
) (quit chan struct{}) {
// shutdown handler
go func() {
select {
case <-ctx.Done():
log.I.F("shutting down")
close(quit)
}
<-ctx.Done()
log.I.F("shutting down")
close(quit)
}()
// get the admins
var err error
@@ -38,6 +36,18 @@ func Run(
}
adminKeys = append(adminKeys, pk)
}
// get the owners
var ownerKeys [][]byte
for _, owner := range cfg.Owners {
if len(owner) == 0 {
continue
}
var pk []byte
if pk, err = bech32encoding.NpubOrHexToPublicKeyBinary(owner); chk.E(err) {
continue
}
ownerKeys = append(ownerKeys, pk)
}
// start listener
l := &Server{
Ctx: ctx,
@@ -45,6 +55,7 @@ func Run(
D: db,
publishers: publish.New(NewPublisher(ctx)),
Admins: adminKeys,
Owners: ownerKeys,
}
// Initialize sprocket manager
@@ -70,6 +81,18 @@ func Run(
cfg.Admins = append(cfg.Admins, pk)
log.I.F("added relay identity to admins for follow-list whitelisting")
}
// also ensure relay identity pubkey is considered an owner for full control
found = false
for _, o := range cfg.Owners {
if o == pk {
found = true
break
}
}
if !found {
cfg.Owners = append(cfg.Owners, pk)
log.I.F("added relay identity to owners for full control")
}
}
}

View File

@@ -166,7 +166,7 @@ func (pp *PaymentProcessor) syncFollowList() error {
}
// sign and save
ev.Sign(sign)
if _, _, err := pp.db.SaveEvent(pp.ctx, ev); err != nil {
if _, err := pp.db.SaveEvent(pp.ctx, ev); err != nil {
return err
}
log.I.F(
@@ -224,7 +224,7 @@ func (pp *PaymentProcessor) checkSubscriptionStatus() error {
key := item.KeyCopy(nil)
// key format: sub:<hexpub>
hexpub := string(key[len(prefix):])
var sub database.Subscription
if err := item.Value(
func(val []byte) error {
@@ -233,23 +233,23 @@ func (pp *PaymentProcessor) checkSubscriptionStatus() error {
); err != nil {
continue // skip invalid subscription records
}
pubkey, err := hex.Dec(hexpub)
if err != nil {
continue // skip invalid pubkey
}
// Check if paid subscription is expiring in 7 days
if !sub.PaidUntil.IsZero() {
// Format dates for comparison (ignore time component)
paidUntilDate := sub.PaidUntil.Truncate(24 * time.Hour)
sevenDaysDate := sevenDaysFromNow.Truncate(24 * time.Hour)
if paidUntilDate.Equal(sevenDaysDate) {
go pp.createExpiryWarningNote(pubkey, sub.PaidUntil)
}
}
// Check if user is on trial (no paid subscription, trial not expired)
if sub.PaidUntil.IsZero() && now.Before(sub.TrialEnd) {
go pp.createTrialReminderNote(pubkey, sub.TrialEnd)
@@ -261,7 +261,9 @@ func (pp *PaymentProcessor) checkSubscriptionStatus() error {
}
// createExpiryWarningNote creates a warning note for users whose paid subscription expires in 7 days
func (pp *PaymentProcessor) createExpiryWarningNote(userPubkey []byte, expiryTime time.Time) error {
func (pp *PaymentProcessor) createExpiryWarningNote(
userPubkey []byte, expiryTime time.Time,
) error {
// Get relay identity secret to sign the note
skb, err := pp.db.GetRelayIdentitySecret()
if err != nil || len(skb) != 32 {
@@ -286,7 +288,8 @@ func (pp *PaymentProcessor) createExpiryWarningNote(userPubkey []byte, expiryTim
}
// Create the warning note content
content := fmt.Sprintf(`⚠️ Subscription Expiring Soon ⚠️
content := fmt.Sprintf(
`⚠️ Subscription Expiring Soon ⚠️
Your paid subscription to this relay will expire in 7 days on %s.
@@ -304,8 +307,10 @@ Don't lose access to your private relay! Extend your subscription today.
Relay: nostr:%s
Log in to the relay dashboard to access your configuration at: %s`,
expiryTime.Format("2006-01-02 15:04:05 UTC"), monthlyPrice, monthlyPrice, string(relayNpubForContent), pp.getDashboardURL())
Log in to the relay dashboard to access your configuration at: %s`,
expiryTime.Format("2006-01-02 15:04:05 UTC"), monthlyPrice,
monthlyPrice, string(relayNpubForContent), pp.getDashboardURL(),
)
// Build the event
ev := event.New()
@@ -320,17 +325,20 @@ Log in to the relay dashboard to access your configuration at: %s`,
// Add expiration tag (5 days from creation)
noteExpiry := time.Now().AddDate(0, 0, 5)
*ev.Tags = append(*ev.Tags, tag.NewFromAny("expiration", fmt.Sprintf("%d", noteExpiry.Unix())))
*ev.Tags = append(
*ev.Tags,
tag.NewFromAny("expiration", fmt.Sprintf("%d", noteExpiry.Unix())),
)
// Add "private" tag with authorized npubs (user and relay)
var authorizedNpubs []string
// Add user npub
userNpub, err := bech32encoding.BinToNpub(userPubkey)
if err == nil {
authorizedNpubs = append(authorizedNpubs, string(userNpub))
}
// Add relay npub
relayNpub, err := bech32encoding.BinToNpub(sign.Pub())
if err == nil {
@@ -344,20 +352,27 @@ Log in to the relay dashboard to access your configuration at: %s`,
}
// Add a special tag to mark this as an expiry warning
*ev.Tags = append(*ev.Tags, tag.NewFromAny("warning", "subscription-expiry"))
*ev.Tags = append(
*ev.Tags, tag.NewFromAny("warning", "subscription-expiry"),
)
// Sign and save the event
ev.Sign(sign)
if _, _, err := pp.db.SaveEvent(pp.ctx, ev); err != nil {
if _, err := pp.db.SaveEvent(pp.ctx, ev); err != nil {
return fmt.Errorf("failed to save expiry warning note: %w", err)
}
log.I.F("created expiry warning note for user %s (expires %s)", hex.Enc(userPubkey), expiryTime.Format("2006-01-02"))
log.I.F(
"created expiry warning note for user %s (expires %s)",
hex.Enc(userPubkey), expiryTime.Format("2006-01-02"),
)
return nil
}
// createTrialReminderNote creates a reminder note for users on trial to support the relay
func (pp *PaymentProcessor) createTrialReminderNote(userPubkey []byte, trialEnd time.Time) error {
func (pp *PaymentProcessor) createTrialReminderNote(
userPubkey []byte, trialEnd time.Time,
) error {
// Get relay identity secret to sign the note
skb, err := pp.db.GetRelayIdentitySecret()
if err != nil || len(skb) != 32 {
@@ -385,7 +400,8 @@ func (pp *PaymentProcessor) createTrialReminderNote(userPubkey []byte, trialEnd
}
// Create the reminder note content
content := fmt.Sprintf(`🆓 Free Trial Reminder 🆓
content := fmt.Sprintf(
`🆓 Free Trial Reminder 🆓
You're currently using this relay for FREE! Your trial expires on %s.
@@ -407,8 +423,10 @@ Thank you for considering supporting decentralized communication!
Relay: nostr:%s
Log in to the relay dashboard to access your configuration at: %s`,
trialEnd.Format("2006-01-02 15:04:05 UTC"), monthlyPrice, dailyRate, monthlyPrice, string(relayNpubForContent), pp.getDashboardURL())
Log in to the relay dashboard to access your configuration at: %s`,
trialEnd.Format("2006-01-02 15:04:05 UTC"), monthlyPrice, dailyRate,
monthlyPrice, string(relayNpubForContent), pp.getDashboardURL(),
)
// Build the event
ev := event.New()
@@ -423,17 +441,20 @@ Log in to the relay dashboard to access your configuration at: %s`,
// Add expiration tag (5 days from creation)
noteExpiry := time.Now().AddDate(0, 0, 5)
*ev.Tags = append(*ev.Tags, tag.NewFromAny("expiration", fmt.Sprintf("%d", noteExpiry.Unix())))
*ev.Tags = append(
*ev.Tags,
tag.NewFromAny("expiration", fmt.Sprintf("%d", noteExpiry.Unix())),
)
// Add "private" tag with authorized npubs (user and relay)
var authorizedNpubs []string
// Add user npub
userNpub, err := bech32encoding.BinToNpub(userPubkey)
if err == nil {
authorizedNpubs = append(authorizedNpubs, string(userNpub))
}
// Add relay npub
relayNpub, err := bech32encoding.BinToNpub(sign.Pub())
if err == nil {
@@ -451,11 +472,14 @@ Log in to the relay dashboard to access your configuration at: %s`,
// Sign and save the event
ev.Sign(sign)
if _, _, err := pp.db.SaveEvent(pp.ctx, ev); err != nil {
if _, err := pp.db.SaveEvent(pp.ctx, ev); err != nil {
return fmt.Errorf("failed to save trial reminder note: %w", err)
}
log.I.F("created trial reminder note for user %s (trial ends %s)", hex.Enc(userPubkey), trialEnd.Format("2006-01-02"))
log.I.F(
"created trial reminder note for user %s (trial ends %s)",
hex.Enc(userPubkey), trialEnd.Format("2006-01-02"),
)
return nil
}
@@ -501,8 +525,13 @@ func (pp *PaymentProcessor) handleNotification(
if skb, err := pp.db.GetRelayIdentitySecret(); err == nil && len(skb) == 32 {
var signer p256k.Signer
if err := signer.InitSec(skb); err == nil {
if !strings.EqualFold(hex.Enc(rpk), hex.Enc(signer.Pub())) {
log.W.F("relay_pubkey in payment metadata does not match this relay identity: got %s want %s", hex.Enc(rpk), hex.Enc(signer.Pub()))
if !strings.EqualFold(
hex.Enc(rpk), hex.Enc(signer.Pub()),
) {
log.W.F(
"relay_pubkey in payment metadata does not match this relay identity: got %s want %s",
hex.Enc(rpk), hex.Enc(signer.Pub()),
)
}
}
}
@@ -557,9 +586,15 @@ func (pp *PaymentProcessor) handleNotification(
// Log helpful identifiers
var payerHex = hex.Enc(pubkey)
if userNpub == "" {
log.I.F("payment processed: payer %s %d sats -> %d days", payerHex, satsReceived, days)
log.I.F(
"payment processed: payer %s %d sats -> %d days", payerHex,
satsReceived, days,
)
} else {
log.I.F("payment processed: %s (%s) %d sats -> %d days", userNpub, payerHex, satsReceived, days)
log.I.F(
"payment processed: %s (%s) %d sats -> %d days", userNpub, payerHex,
satsReceived, days,
)
}
// Update ACL follows cache and relay follow list immediately
@@ -578,7 +613,9 @@ func (pp *PaymentProcessor) handleNotification(
}
// createPaymentNote creates a note recording the payment with private tag for authorization
func (pp *PaymentProcessor) createPaymentNote(payerPubkey []byte, satsReceived int64, days int) error {
func (pp *PaymentProcessor) createPaymentNote(
payerPubkey []byte, satsReceived int64, days int,
) error {
// Get relay identity secret to sign the note
skb, err := pp.db.GetRelayIdentitySecret()
if err != nil || len(skb) != 32 {
@@ -611,8 +648,11 @@ func (pp *PaymentProcessor) createPaymentNote(payerPubkey []byte, satsReceived i
}
// Create the note content with nostr:npub link and dashboard link
content := fmt.Sprintf("Payment received: %d sats for %d days. Subscription expires: %s\n\nRelay: nostr:%s\n\nLog in to the relay dashboard to access your configuration at: %s",
satsReceived, days, expiryTime.Format("2006-01-02 15:04:05 UTC"), string(relayNpubForContent), pp.getDashboardURL())
content := fmt.Sprintf(
"Payment received: %d sats for %d days. Subscription expires: %s\n\nRelay: nostr:%s\n\nLog in to the relay dashboard to access your configuration at: %s",
satsReceived, days, expiryTime.Format("2006-01-02 15:04:05 UTC"),
string(relayNpubForContent), pp.getDashboardURL(),
)
// Build the event
ev := event.New()
@@ -627,17 +667,20 @@ func (pp *PaymentProcessor) createPaymentNote(payerPubkey []byte, satsReceived i
// Add expiration tag (5 days from creation)
noteExpiry := time.Now().AddDate(0, 0, 5)
*ev.Tags = append(*ev.Tags, tag.NewFromAny("expiration", fmt.Sprintf("%d", noteExpiry.Unix())))
*ev.Tags = append(
*ev.Tags,
tag.NewFromAny("expiration", fmt.Sprintf("%d", noteExpiry.Unix())),
)
// Add "private" tag with authorized npubs (payer and relay)
var authorizedNpubs []string
// Add payer npub
payerNpub, err := bech32encoding.BinToNpub(payerPubkey)
if err == nil {
authorizedNpubs = append(authorizedNpubs, string(payerNpub))
}
// Add relay npub
relayNpub, err := bech32encoding.BinToNpub(sign.Pub())
if err == nil {
@@ -652,11 +695,14 @@ func (pp *PaymentProcessor) createPaymentNote(payerPubkey []byte, satsReceived i
// Sign and save the event
ev.Sign(sign)
if _, _, err := pp.db.SaveEvent(pp.ctx, ev); err != nil {
if _, err := pp.db.SaveEvent(pp.ctx, ev); err != nil {
return fmt.Errorf("failed to save payment note: %w", err)
}
log.I.F("created payment note for %s with private authorization", hex.Enc(payerPubkey))
log.I.F(
"created payment note for %s with private authorization",
hex.Enc(payerPubkey),
)
return nil
}
@@ -686,7 +732,8 @@ func (pp *PaymentProcessor) CreateWelcomeNote(userPubkey []byte) error {
}
// Create the welcome note content with nostr:npub link
content := fmt.Sprintf(`Welcome to the relay! 🎉
content := fmt.Sprintf(
`Welcome to the relay! 🎉
You have a FREE 30-day trial that started when you first logged in.
@@ -706,7 +753,9 @@ Relay: nostr:%s
Log in to the relay dashboard to access your configuration at: %s
Enjoy your time on the relay!`, monthlyPrice, monthlyPrice, string(relayNpubForContent), pp.getDashboardURL())
Enjoy your time on the relay!`, monthlyPrice, monthlyPrice,
string(relayNpubForContent), pp.getDashboardURL(),
)
// Build the event
ev := event.New()
@@ -721,17 +770,20 @@ Enjoy your time on the relay!`, monthlyPrice, monthlyPrice, string(relayNpubForC
// Add expiration tag (5 days from creation)
noteExpiry := time.Now().AddDate(0, 0, 5)
-*ev.Tags = append(*ev.Tags, tag.NewFromAny("expiration", fmt.Sprintf("%d", noteExpiry.Unix())))
+*ev.Tags = append(
+*ev.Tags,
+tag.NewFromAny("expiration", fmt.Sprintf("%d", noteExpiry.Unix())),
+)
// Add "private" tag with authorized npubs (user and relay)
var authorizedNpubs []string
// Add user npub
userNpub, err := bech32encoding.BinToNpub(userPubkey)
if err == nil {
authorizedNpubs = append(authorizedNpubs, string(userNpub))
}
// Add relay npub
relayNpub, err := bech32encoding.BinToNpub(sign.Pub())
if err == nil {
@@ -749,7 +801,7 @@ Enjoy your time on the relay!`, monthlyPrice, monthlyPrice, string(relayNpubForC
// Sign and save the event
ev.Sign(sign)
-if _, _, err := pp.db.SaveEvent(pp.ctx, ev); err != nil {
+if _, err := pp.db.SaveEvent(pp.ctx, ev); err != nil {
return fmt.Errorf("failed to save welcome note: %w", err)
}
@@ -846,13 +898,15 @@ func (pp *PaymentProcessor) UpdateRelayProfile() error {
relayURL := strings.Replace(pp.getDashboardURL(), "https://", "wss://", 1)
// Create profile content as JSON
-profileContent := fmt.Sprintf(`{
+profileContent := fmt.Sprintf(
+`{
"name": "Relay Bot",
"about": "This relay requires a subscription to access. Zap any of my notes to pay for access. Monthly price: %d sats (%d sats/day). Relay: %s",
"lud16": "",
"nip05": "",
"website": "%s"
-}`, monthlyPrice, dailyRate, relayURL, pp.getDashboardURL())
+}`, monthlyPrice, dailyRate, relayURL, pp.getDashboardURL(),
+)
// Build the profile event
ev := event.New()
@@ -864,7 +918,7 @@ func (pp *PaymentProcessor) UpdateRelayProfile() error {
// Sign and save the event
ev.Sign(sign)
-if _, _, err := pp.db.SaveEvent(pp.ctx, ev); err != nil {
+if _, err := pp.db.SaveEvent(pp.ctx, ev); err != nil {
return fmt.Errorf("failed to save relay profile: %w", err)
}


@@ -34,6 +34,7 @@ type Server struct {
remote string
publishers *publish.S
Admins [][]byte
Owners [][]byte
*database.D
// optional reverse proxy for dev web server
@@ -179,6 +180,9 @@ func (s *Server) UserInterface() {
s.challengeMutex.Unlock()
}
// Serve favicon.ico by serving orly-favicon.png
s.mux.HandleFunc("/favicon.ico", s.handleFavicon)
// Serve the main login interface (and static assets) or proxy in dev mode
s.mux.HandleFunc("/", s.handleLoginInterface)
@@ -203,6 +207,26 @@ func (s *Server) UserInterface() {
s.mux.HandleFunc("/api/sprocket/config", s.handleSprocketConfig)
}
// handleFavicon serves orly-favicon.png as favicon.ico
func (s *Server) handleFavicon(w http.ResponseWriter, r *http.Request) {
// In dev mode with proxy configured, forward to dev server
if s.devProxy != nil {
s.devProxy.ServeHTTP(w, r)
return
}
// Serve orly-favicon.png as favicon.ico from embedded web app
w.Header().Set("Content-Type", "image/png")
w.Header().Set("Cache-Control", "public, max-age=86400") // Cache for 1 day
// Create a request for orly-favicon.png and serve it
faviconReq := &http.Request{
Method: "GET",
URL: &url.URL{Path: "/orly-favicon.png"},
}
ServeEmbeddedWeb(w, faviconReq)
}
// handleLoginInterface serves the main user interface for login
func (s *Server) handleLoginInterface(w http.ResponseWriter, r *http.Request) {
// In dev mode with proxy configured, forward to dev server


@@ -37,6 +37,7 @@ type SprocketManager struct {
mutex sync.RWMutex
isRunning bool
enabled bool
disabled bool // true when sprocket is disabled due to failure
stdin io.WriteCloser
stdout io.ReadCloser
stderr io.ReadCloser
@@ -56,21 +57,105 @@ func NewSprocketManager(ctx context.Context, appName string, enabled bool) *Spro
configDir: configDir,
scriptPath: scriptPath,
enabled: enabled,
disabled: false,
responseChan: make(chan SprocketResponse, 100), // Buffered channel for responses
}
// Start the sprocket script if it exists and is enabled
if enabled {
go sm.startSprocketIfExists()
// Start periodic check for sprocket script availability
go sm.periodicCheck()
}
return sm
}
// disableSprocket disables sprocket due to failure
func (sm *SprocketManager) disableSprocket() {
sm.mutex.Lock()
defer sm.mutex.Unlock()
if !sm.disabled {
sm.disabled = true
log.W.F("sprocket disabled due to failure - all events will be rejected (script location: %s)", sm.scriptPath)
}
}
// enableSprocket re-enables sprocket and attempts to start it
func (sm *SprocketManager) enableSprocket() {
sm.mutex.Lock()
defer sm.mutex.Unlock()
if sm.disabled {
sm.disabled = false
log.I.F("sprocket re-enabled, attempting to start")
// Attempt to start sprocket in background
go func() {
if _, err := os.Stat(sm.scriptPath); err == nil {
if err := sm.StartSprocket(); err != nil {
log.E.F("failed to restart sprocket: %v", err)
sm.disableSprocket()
} else {
log.I.F("sprocket restarted successfully")
}
} else {
log.W.F("sprocket script still not found, keeping disabled")
sm.disableSprocket()
}
}()
}
}
// periodicCheck periodically checks if sprocket script becomes available
func (sm *SprocketManager) periodicCheck() {
ticker := time.NewTicker(30 * time.Second) // Check every 30 seconds
defer ticker.Stop()
for {
select {
case <-sm.ctx.Done():
return
case <-ticker.C:
sm.mutex.RLock()
disabled := sm.disabled
running := sm.isRunning
sm.mutex.RUnlock()
// Only check if sprocket is disabled or not running
if disabled || !running {
if _, err := os.Stat(sm.scriptPath); err == nil {
// Script is available, try to enable/restart
if disabled {
sm.enableSprocket()
} else if !running {
// Script exists but sprocket isn't running, try to start
go func() {
if err := sm.StartSprocket(); err != nil {
log.E.F("failed to restart sprocket: %v", err)
sm.disableSprocket()
} else {
log.I.F("sprocket restarted successfully")
}
}()
}
}
}
}
}
}
// startSprocketIfExists starts the sprocket script if the file exists
func (sm *SprocketManager) startSprocketIfExists() {
if _, err := os.Stat(sm.scriptPath); err == nil {
-sm.StartSprocket()
+if err := sm.StartSprocket(); err != nil {
+log.E.F("failed to start sprocket: %v", err)
+sm.disableSprocket()
+}
} else {
log.W.F("sprocket script not found at %s, disabling sprocket", sm.scriptPath)
sm.disableSprocket()
}
}
@@ -473,6 +558,13 @@ func (sm *SprocketManager) IsRunning() bool {
return sm.isRunning
}
// IsDisabled returns whether sprocket is disabled due to failure
func (sm *SprocketManager) IsDisabled() bool {
sm.mutex.RLock()
defer sm.mutex.RUnlock()
return sm.disabled
}
// monitorProcess monitors the sprocket process and cleans up when it exits
func (sm *SprocketManager) monitorProcess() {
if sm.currentCmd == nil {
@@ -504,6 +596,9 @@ func (sm *SprocketManager) monitorProcess() {
if err != nil {
log.E.F("sprocket process exited with error: %v", err)
// Auto-disable sprocket on failure
sm.disabled = true
log.W.F("sprocket disabled due to process failure - all events will be rejected (script location: %s)", sm.scriptPath)
} else {
log.I.F("sprocket process exited normally")
}


@@ -4,6 +4,7 @@
"": {
"name": "svelte-app",
"dependencies": {
"@nostr-dev-kit/ndk": "^2.17.3",
"sirv-cli": "^2.0.0",
},
"devDependencies": {
@@ -19,6 +20,10 @@
},
},
"packages": {
"@codesandbox/nodebox": ["@codesandbox/nodebox@0.1.8", "", { "dependencies": { "outvariant": "^1.4.0", "strict-event-emitter": "^0.4.3" } }, "sha512-2VRS6JDSk+M+pg56GA6CryyUSGPjBEe8Pnae0QL3jJF1mJZJVMDKr93gJRtBbLkfZN6LD/DwMtf+2L0bpWrjqg=="],
"@codesandbox/sandpack-client": ["@codesandbox/sandpack-client@2.19.8", "", { "dependencies": { "@codesandbox/nodebox": "0.1.8", "buffer": "^6.0.3", "dequal": "^2.0.2", "mime-db": "^1.52.0", "outvariant": "1.4.0", "static-browser-server": "1.0.3" } }, "sha512-CMV4nr1zgKzVpx4I3FYvGRM5YT0VaQhALMW9vy4wZRhEyWAtJITQIqZzrTGWqB1JvV7V72dVEUCUPLfYz5hgJQ=="],
"@jridgewell/gen-mapping": ["@jridgewell/gen-mapping@0.3.13", "", { "dependencies": { "@jridgewell/sourcemap-codec": "^1.5.0", "@jridgewell/trace-mapping": "^0.3.24" } }, "sha512-2kkt/7niJ6MgEPxF0bYdQ6etZaA+fQvDcLKckhy1yIQOzaoKjBBjSj63/aLVjYE3qhRt5dvM+uUyfCg6UKCBbA=="],
"@jridgewell/resolve-uri": ["@jridgewell/resolve-uri@3.1.2", "", {}, "sha512-bRISgCIjP20/tbWSPWMEi54QVPRZExkuD9lJL+UIxUKtwVJA8wW1Trb1jMs1RFXo1CBTNZ/5hpC9QvmKWdopKw=="],
@@ -29,6 +34,18 @@
"@jridgewell/trace-mapping": ["@jridgewell/trace-mapping@0.3.31", "", { "dependencies": { "@jridgewell/resolve-uri": "^3.1.0", "@jridgewell/sourcemap-codec": "^1.4.14" } }, "sha512-zzNR+SdQSDJzc8joaeP8QQoCQr8NuYx2dIIytl1QeBEZHJ9uW6hebsrYgbz8hJwUQao3TWCMtmfV8Nu1twOLAw=="],
"@noble/ciphers": ["@noble/ciphers@0.5.3", "", {}, "sha512-B0+6IIHiqEs3BPMT0hcRmHvEj2QHOLu+uwt+tqDDeVd0oyVzh7BPrDcPjRnV1PV/5LaknXJJQvOuRGR0zQJz+w=="],
"@noble/curves": ["@noble/curves@1.9.7", "", { "dependencies": { "@noble/hashes": "1.8.0" } }, "sha512-gbKGcRUYIjA3/zCCNaWDciTMFI0dCkvou3TL8Zmy5Nc7sJ47a0jtOeZoTaMxkuqRo9cRhjOdZJXegxYE5FN/xw=="],
"@noble/hashes": ["@noble/hashes@1.8.0", "", {}, "sha512-jCs9ldd7NwzpgXDIf6P3+NrHh9/sD6CQdxHyjQI+h/6rDNo88ypBxxz45UDuZHz9r3tNz7N/VInSVoVdtXEI4A=="],
"@noble/secp256k1": ["@noble/secp256k1@2.3.0", "", {}, "sha512-0TQed2gcBbIrh7Ccyw+y/uZQvbJwm7Ao4scBUxqpBCcsOlZG0O4KGfjtNAy/li4W8n1xt3dxrwJ0beZ2h2G6Kw=="],
"@nostr-dev-kit/ndk": ["@nostr-dev-kit/ndk@2.17.3", "", { "dependencies": { "@codesandbox/sandpack-client": "^2.19.8", "@noble/curves": "^1.6.0", "@noble/hashes": "^1.5.0", "@noble/secp256k1": "^2.1.0", "@scure/base": "^1.1.9", "debug": "^4.3.6", "light-bolt11-decoder": "^3.2.0", "shiki": "^3.13.0", "tseep": "^1.3.1", "typescript-lru-cache": "^2" }, "peerDependencies": { "nostr-tools": "^2" } }, "sha512-CwOTRPxyOcxg5X4VEBzI7leA/bE7t4Yv9tZ6KpG4H4fDhuI6YXRbb9oKLG9KJqVOIbRrYT27sBF82Z6dE3B1qw=="],
"@open-draft/deferred-promise": ["@open-draft/deferred-promise@2.2.0", "", {}, "sha512-CecwLWx3rhxVQF6V4bAgPS5t+So2sTbPgAzafKkVizyi7tlwpcFpdFqq+wqF2OwNBmqFuu6tOyouTuxgpMfzmA=="],
"@polka/url": ["@polka/url@1.0.0-next.29", "", {}, "sha512-wwQAWhWSuHaag8c4q/KN/vCoeOJYshAIvMQwD4GpSb3OiZklFfvAgmj0VCBBImRpuF/aFgIRzllXlVX93Jevww=="],
"@rollup/plugin-commonjs": ["@rollup/plugin-commonjs@24.1.0", "", { "dependencies": { "@rollup/pluginutils": "^5.0.1", "commondir": "^1.0.1", "estree-walker": "^2.0.2", "glob": "^8.0.3", "is-reference": "1.2.1", "magic-string": "^0.27.0" }, "peerDependencies": { "rollup": "^2.68.0||^3.0.0" }, "optionalPeers": ["rollup"] }, "sha512-eSL45hjhCWI0jCCXcNtLVqM5N1JlBGvlFfY0m6oOYnLCJ6N0qEXoZql4sY2MOUArzhH4SA/qBpTxvvZp2Sc+DQ=="],
@@ -39,34 +56,82 @@
"@rollup/pluginutils": ["@rollup/pluginutils@5.3.0", "", { "dependencies": { "@types/estree": "^1.0.0", "estree-walker": "^2.0.2", "picomatch": "^4.0.2" }, "peerDependencies": { "rollup": "^1.20.0||^2.0.0||^3.0.0||^4.0.0" }, "optionalPeers": ["rollup"] }, "sha512-5EdhGZtnu3V88ces7s53hhfK5KSASnJZv8Lulpc04cWO3REESroJXg73DFsOmgbU2BhwV0E20bu2IDZb3VKW4Q=="],
"@scure/base": ["@scure/base@1.2.6", "", {}, "sha512-g/nm5FgUa//MCj1gV09zTJTaM6KBAHqLN907YVQqf7zC49+DcO4B1so4ZX07Ef10Twr6nuqYEH9GEggFXA4Fmg=="],
"@scure/bip32": ["@scure/bip32@1.3.1", "", { "dependencies": { "@noble/curves": "~1.1.0", "@noble/hashes": "~1.3.1", "@scure/base": "~1.1.0" } }, "sha512-osvveYtyzdEVbt3OfwwXFr4P2iVBL5u1Q3q4ONBfDY/UpOuXmOlbgwc1xECEboY8wIays8Yt6onaWMUdUbfl0A=="],
"@scure/bip39": ["@scure/bip39@1.2.1", "", { "dependencies": { "@noble/hashes": "~1.3.0", "@scure/base": "~1.1.0" } }, "sha512-Z3/Fsz1yr904dduJD0NpiyRHhRYHdcnyh73FZWiV+/qhWi83wNJ3NWolYqCEN+ZWsUz2TWwajJggcRE9r1zUYg=="],
"@shikijs/core": ["@shikijs/core@3.13.0", "", { "dependencies": { "@shikijs/types": "3.13.0", "@shikijs/vscode-textmate": "^10.0.2", "@types/hast": "^3.0.4", "hast-util-to-html": "^9.0.5" } }, "sha512-3P8rGsg2Eh2qIHekwuQjzWhKI4jV97PhvYjYUzGqjvJfqdQPz+nMlfWahU24GZAyW1FxFI1sYjyhfh5CoLmIUA=="],
"@shikijs/engine-javascript": ["@shikijs/engine-javascript@3.13.0", "", { "dependencies": { "@shikijs/types": "3.13.0", "@shikijs/vscode-textmate": "^10.0.2", "oniguruma-to-es": "^4.3.3" } }, "sha512-Ty7xv32XCp8u0eQt8rItpMs6rU9Ki6LJ1dQOW3V/56PKDcpvfHPnYFbsx5FFUP2Yim34m/UkazidamMNVR4vKg=="],
"@shikijs/engine-oniguruma": ["@shikijs/engine-oniguruma@3.13.0", "", { "dependencies": { "@shikijs/types": "3.13.0", "@shikijs/vscode-textmate": "^10.0.2" } }, "sha512-O42rBGr4UDSlhT2ZFMxqM7QzIU+IcpoTMzb3W7AlziI1ZF7R8eS2M0yt5Ry35nnnTX/LTLXFPUjRFCIW+Operg=="],
"@shikijs/langs": ["@shikijs/langs@3.13.0", "", { "dependencies": { "@shikijs/types": "3.13.0" } }, "sha512-672c3WAETDYHwrRP0yLy3W1QYB89Hbpj+pO4KhxK6FzIrDI2FoEXNiNCut6BQmEApYLfuYfpgOZaqbY+E9b8wQ=="],
"@shikijs/themes": ["@shikijs/themes@3.13.0", "", { "dependencies": { "@shikijs/types": "3.13.0" } }, "sha512-Vxw1Nm1/Od8jyA7QuAenaV78BG2nSr3/gCGdBkLpfLscddCkzkL36Q5b67SrLLfvAJTOUzW39x4FHVCFriPVgg=="],
"@shikijs/types": ["@shikijs/types@3.13.0", "", { "dependencies": { "@shikijs/vscode-textmate": "^10.0.2", "@types/hast": "^3.0.4" } }, "sha512-oM9P+NCFri/mmQ8LoFGVfVyemm5Hi27330zuOBp0annwJdKH1kOLndw3zCtAVDehPLg9fKqoEx3Ht/wNZxolfw=="],
"@shikijs/vscode-textmate": ["@shikijs/vscode-textmate@10.0.2", "", {}, "sha512-83yeghZ2xxin3Nj8z1NMd/NCuca+gsYXswywDy5bHvwlWL8tpTQmzGeUuHd9FC3E/SBEMvzJRwWEOz5gGes9Qg=="],
"@types/estree": ["@types/estree@1.0.8", "", {}, "sha512-dWHzHa2WqEXI/O1E9OjrocMTKJl2mSrEolh1Iomrv6U+JuNwaHXsXx9bLu5gG7BUWFIN0skIQJQ/L1rIex4X6w=="],
"@types/hast": ["@types/hast@3.0.4", "", { "dependencies": { "@types/unist": "*" } }, "sha512-WPs+bbQw5aCj+x6laNGWLH3wviHtoCv/P3+otBhbOhJgG8qtpdAMlTCxLtsTWA7LH1Oh/bFCHsBn0TPS5m30EQ=="],
"@types/mdast": ["@types/mdast@4.0.4", "", { "dependencies": { "@types/unist": "*" } }, "sha512-kGaNbPh1k7AFzgpud/gMdvIm5xuECykRR+JnWKQno9TAXVa6WIVCGTPvYGekIDL4uwCZQSYbUxNBSb1aUo79oA=="],
"@types/resolve": ["@types/resolve@1.20.2", "", {}, "sha512-60BCwRFOZCQhDncwQdxxeOEEkbc5dIMccYLwbxsS4TUNeVECQ/pBJ0j09mrHOl/JJvpRPGwO9SvE4nR2Nb/a4Q=="],
"@types/unist": ["@types/unist@3.0.3", "", {}, "sha512-ko/gIFJRv177XgZsZcBwnqJN5x/Gien8qNOn0D5bQU/zAzVf9Zt3BlcUiLqhV9y4ARk0GbT3tnUiPNgnTXzc/Q=="],
"@ungap/structured-clone": ["@ungap/structured-clone@1.3.0", "", {}, "sha512-WmoN8qaIAo7WTYWbAZuG8PYEhn5fkz7dZrqTBZ7dtt//lL2Gwms1IcnQ5yHqjDfX8Ft5j4YzDM23f87zBfDe9g=="],
"acorn": ["acorn@8.15.0", "", { "bin": { "acorn": "bin/acorn" } }, "sha512-NZyJarBfL7nWwIq+FDL6Zp/yHEhePMNnnJ0y3qfieCrmNvYct8uvtiV41UvlSe6apAfk0fY1FbWx+NwfmpvtTg=="],
"anymatch": ["anymatch@3.1.3", "", { "dependencies": { "normalize-path": "^3.0.0", "picomatch": "^2.0.4" } }, "sha512-KMReFUr0B4t+D+OBkjR3KYqvocp2XaSzO55UcB6mgQMd3KbcE+mWTyvVV7D/zsdEbNnV6acZUutkiHQXvTr1Rw=="],
"balanced-match": ["balanced-match@1.0.2", "", {}, "sha512-3oSeUO0TMV67hN1AmbXsK4yaqU7tjiHlbxRDZOpH0KW9+CeX4bRAaX0Anxt0tx2MrpRpWwQaPwIlISEJhYU5Pw=="],
"base64-js": ["base64-js@1.5.1", "", {}, "sha512-AKpaYlHn8t4SVbOHCy+b5+KKgvR4vrsD8vbvrbiQJps7fKDTkjkDry6ji0rUJjC0kzbNePLwzxq8iypo41qeWA=="],
"binary-extensions": ["binary-extensions@2.3.0", "", {}, "sha512-Ceh+7ox5qe7LJuLHoY0feh3pHuUDHAcRUeyL2VYghZwfpkNIy/+8Ocg0a3UuSoYzavmylwuLWQOf3hl0jjMMIw=="],
"brace-expansion": ["brace-expansion@2.0.2", "", { "dependencies": { "balanced-match": "^1.0.0" } }, "sha512-Jt0vHyM+jmUBqojB7E1NIYadt0vI0Qxjxd2TErW94wDz+E2LAm5vKMXXwg6ZZBTHPuUlDgQHKXvjGBdfcF1ZDQ=="],
"braces": ["braces@3.0.3", "", { "dependencies": { "fill-range": "^7.1.1" } }, "sha512-yQbXgO/OSZVD2IsiLlro+7Hf6Q18EJrKSEsdoMzKePKXct3gvD8oLcOQdIzGupr5Fj+EDe8gO/lxc1BzfMpxvA=="],
"buffer": ["buffer@6.0.3", "", { "dependencies": { "base64-js": "^1.3.1", "ieee754": "^1.2.1" } }, "sha512-FTiCpNxtwiZZHEZbcbTIcZjERVICn9yq/pDFkTl95/AxzD1naBctN7YO68riM/gLSDY7sdrMby8hofADYuuqOA=="],
"buffer-from": ["buffer-from@1.1.2", "", {}, "sha512-E+XQCRwSbaaiChtv6k6Dwgc+bx+Bs6vuKJHHl5kox/BaKbhiXzqQOwK4cO22yElGp2OCmjwVhT3HmxgyPGnJfQ=="],
"ccount": ["ccount@2.0.1", "", {}, "sha512-eyrF0jiFpY+3drT6383f1qhkbGsLSifNAjA61IUjZjmLCWjItY6LB9ft9YhoDgwfmclB2zhu51Lc7+95b8NRAg=="],
"character-entities-html4": ["character-entities-html4@2.1.0", "", {}, "sha512-1v7fgQRj6hnSwFpq1Eu0ynr/CDEw0rXo2B61qXrLNdHZmPKgb7fqS1a2JwF0rISo9q77jDI8VMEHoApn8qDoZA=="],
"character-entities-legacy": ["character-entities-legacy@3.0.0", "", {}, "sha512-RpPp0asT/6ufRm//AJVwpViZbGM/MkjQFxJccQRHmISF/22NBtsHqAWmL+/pmkPWoIUJdWyeVleTl1wydHATVQ=="],
"chokidar": ["chokidar@3.6.0", "", { "dependencies": { "anymatch": "~3.1.2", "braces": "~3.0.2", "glob-parent": "~5.1.2", "is-binary-path": "~2.1.0", "is-glob": "~4.0.1", "normalize-path": "~3.0.0", "readdirp": "~3.6.0" }, "optionalDependencies": { "fsevents": "~2.3.2" } }, "sha512-7VT13fmjotKpGipCW9JEQAusEPE+Ei8nl6/g4FBAmIm0GOOLMua9NDDo/DWp0ZAxCr3cPq5ZpBqmPAQgDda2Pw=="],
"comma-separated-tokens": ["comma-separated-tokens@2.0.3", "", {}, "sha512-Fu4hJdvzeylCfQPp9SGWidpzrMs7tTrlu6Vb8XGaRGck8QSNZJJp538Wrb60Lax4fPwR64ViY468OIUTbRlGZg=="],
"commander": ["commander@2.20.3", "", {}, "sha512-GpVkmM8vF2vQUkj2LvZmD35JxeJOLCwJ9cUkugyk2nuhbv3+mJvpLYYt+0+USMxE+oj+ey/lJEnhZw75x/OMcQ=="],
"commondir": ["commondir@1.0.1", "", {}, "sha512-W9pAhw0ja1Edb5GVdIF1mjZw/ASI0AlShXM83UUGe2DVr5TdAPEA1OA8m/g8zWp9x6On7gqufY+FatDbC3MDQg=="],
"console-clear": ["console-clear@1.1.1", "", {}, "sha512-pMD+MVR538ipqkG5JXeOEbKWS5um1H4LUUccUQG68qpeqBYbzYy79Gh55jkd2TtPdRfUaLWdv6LPP//5Zt0aPQ=="],
"debug": ["debug@4.4.3", "", { "dependencies": { "ms": "^2.1.3" } }, "sha512-RGwwWnwQvkVfavKVt22FGLw+xYSdzARwm0ru6DhTVA3umU5hZc28V3kO4stgYryrTlLpuvgI9GiijltAjNbcqA=="],
"deepmerge": ["deepmerge@4.3.1", "", {}, "sha512-3sUqbMEc77XqpdNO7FRyRog+eW3ph+GYCbj+rK+uYyRMuwsVy0rMiVtPn+QJlKFvWP/1PYpapqYn0Me2knFn+A=="],
"dequal": ["dequal@2.0.3", "", {}, "sha512-0je+qPKHEMohvfRTCEo3CrPG6cAzAYgmzKyxRiYSSDkS6eGJdyVJm7WaYA5ECaAD9wLB2T4EEeymA5aFVcYXCA=="],
"devlop": ["devlop@1.1.0", "", { "dependencies": { "dequal": "^2.0.0" } }, "sha512-RWmIqhcFf1lRYBvNmr7qTNuyCt/7/ns2jbpp1+PalgE/rDQcBT0fioSMUpJ93irlUhC5hrg4cYqe6U+0ImW0rA=="],
"dotenv": ["dotenv@16.6.1", "", {}, "sha512-uBq4egWHTcTt33a72vpSG0z3HnPuIl6NqYcTrKEg2azoEyl2hpW0zqlxysq2pK9HlDIHyHyakeYaYnSAwd8bow=="],
"estree-walker": ["estree-walker@2.0.2", "", {}, "sha512-Rfkk/Mp/DL7JVje3u18FxFujQlTNR2q6QfMSMB7AvCBx91NGj/ba3kCfza0f6dVDbw7YlRf/nDrn7pQrCCyQ/w=="],
"fill-range": ["fill-range@7.1.1", "", { "dependencies": { "to-regex-range": "^5.0.1" } }, "sha512-YsGpe3WHLK8ZYi4tWDg2Jy3ebRz2rXowDxnld4bkQB00cc/1Zw9AWnC0i9ztDJitivtQvaI9KaLyKrc+hBW0yg=="],
@@ -85,6 +150,14 @@
"hasown": ["hasown@2.0.2", "", { "dependencies": { "function-bind": "^1.1.2" } }, "sha512-0hJU9SCPvmMzIBdZFqNPXWa6dqh7WdH0cII9y+CyS8rG3nL48Bclra9HmKhVVUHyPWNH5Y7xDwAB7bfgSjkUMQ=="],
"hast-util-to-html": ["hast-util-to-html@9.0.5", "", { "dependencies": { "@types/hast": "^3.0.0", "@types/unist": "^3.0.0", "ccount": "^2.0.0", "comma-separated-tokens": "^2.0.0", "hast-util-whitespace": "^3.0.0", "html-void-elements": "^3.0.0", "mdast-util-to-hast": "^13.0.0", "property-information": "^7.0.0", "space-separated-tokens": "^2.0.0", "stringify-entities": "^4.0.0", "zwitch": "^2.0.4" } }, "sha512-OguPdidb+fbHQSU4Q4ZiLKnzWo8Wwsf5bZfbvu7//a9oTYoqD/fWpe96NuHkoS9h0ccGOTe0C4NGXdtS0iObOw=="],
"hast-util-whitespace": ["hast-util-whitespace@3.0.0", "", { "dependencies": { "@types/hast": "^3.0.0" } }, "sha512-88JUN06ipLwsnv+dVn+OIYOvAuvBMy/Qoi6O7mQHxdPXpjy+Cd6xRkWwux7DKO+4sYILtLBRIKgsdpS2gQc7qw=="],
"html-void-elements": ["html-void-elements@3.0.0", "", {}, "sha512-bEqo66MRXsUGxWHV5IP0PUiAWwoEjba4VCzg0LjFJBpchPaTfyfCKTG6bc5F8ucKec3q5y6qOdGyYTSBEvhCrg=="],
"ieee754": ["ieee754@1.2.1", "", {}, "sha512-dcyqhDvX1C46lXZcVqCpK+FtMRQVdIMN6/Df5js2zouUsqG7I6sFxitIC+7KYK29KdXOLHdu9zL4sFnoVQnqaA=="],
"inflight": ["inflight@1.0.6", "", { "dependencies": { "once": "^1.3.0", "wrappy": "1" } }, "sha512-k92I/b08q4wvFscXCLvqfsHCrjrF7yiXsQuIVvVE7N82W3+aqpzuUdBbfhWcy/FZR3/4IgflMgKLOsvPDrGCJA=="],
"inherits": ["inherits@2.0.4", "", {}, "sha512-k/vGaX4/Yla3WzyMCvTQOXYeIHvqOKtnqBduzTHpzpQZzAskKMhZ2K+EnBiSM9zGSoIFeMpXKxa4dYeZIQqewQ=="],
@@ -105,6 +178,8 @@
"kleur": ["kleur@4.1.5", "", {}, "sha512-o+NO+8WrRiQEE4/7nwRJhN1HWpVmJm511pBHUxPLtp0BUISzlBplORYSmTclCnJvQq2tKu/sgl3xVpkc7ZWuQQ=="],
"light-bolt11-decoder": ["light-bolt11-decoder@3.2.0", "", { "dependencies": { "@scure/base": "1.1.1" } }, "sha512-3QEofgiBOP4Ehs9BI+RkZdXZNtSys0nsJ6fyGeSiAGCBsMwHGUDS/JQlY/sTnWs91A2Nh0S9XXfA8Sy9g6QpuQ=="],
"livereload": ["livereload@0.9.3", "", { "dependencies": { "chokidar": "^3.5.0", "livereload-js": "^3.3.1", "opts": ">= 1.2.0", "ws": "^7.4.3" }, "bin": { "livereload": "bin/livereload.js" } }, "sha512-q7Z71n3i4X0R9xthAryBdNGVGAO2R5X+/xXpmKeuPMrteg+W2U8VusTKV3YiJbXZwKsOlFlHe+go6uSNjfxrZw=="],
"livereload-js": ["livereload-js@3.4.1", "", {}, "sha512-5MP0uUeVCec89ZbNOT/i97Mc+q3SxXmiUGhRFOTmhrGPn//uWVQdCvcLJDy64MSBR5MidFdOR7B9viumoavy6g=="],
@@ -113,26 +188,60 @@
"magic-string": ["magic-string@0.27.0", "", { "dependencies": { "@jridgewell/sourcemap-codec": "^1.4.13" } }, "sha512-8UnnX2PeRAPZuN12svgR9j7M1uWMovg/CEnIwIG0LFkXSJJe4PdfUGiTGl8V9bsBHFUtfVINcSyYxd7q+kx9fA=="],
"mdast-util-to-hast": ["mdast-util-to-hast@13.2.0", "", { "dependencies": { "@types/hast": "^3.0.0", "@types/mdast": "^4.0.0", "@ungap/structured-clone": "^1.0.0", "devlop": "^1.0.0", "micromark-util-sanitize-uri": "^2.0.0", "trim-lines": "^3.0.0", "unist-util-position": "^5.0.0", "unist-util-visit": "^5.0.0", "vfile": "^6.0.0" } }, "sha512-QGYKEuUsYT9ykKBCMOEDLsU5JRObWQusAolFMeko/tYPufNkRffBAQjIE+99jbA87xv6FgmjLtwjh9wBWajwAA=="],
"micromark-util-character": ["micromark-util-character@2.1.1", "", { "dependencies": { "micromark-util-symbol": "^2.0.0", "micromark-util-types": "^2.0.0" } }, "sha512-wv8tdUTJ3thSFFFJKtpYKOYiGP2+v96Hvk4Tu8KpCAsTMs6yi+nVmGh1syvSCsaxz45J6Jbw+9DD6g97+NV67Q=="],
"micromark-util-encode": ["micromark-util-encode@2.0.1", "", {}, "sha512-c3cVx2y4KqUnwopcO9b/SCdo2O67LwJJ/UyqGfbigahfegL9myoEFoDYZgkT7f36T0bLrM9hZTAaAyH+PCAXjw=="],
"micromark-util-sanitize-uri": ["micromark-util-sanitize-uri@2.0.1", "", { "dependencies": { "micromark-util-character": "^2.0.0", "micromark-util-encode": "^2.0.0", "micromark-util-symbol": "^2.0.0" } }, "sha512-9N9IomZ/YuGGZZmQec1MbgxtlgougxTodVwDzzEouPKo3qFWvymFHWcnDi2vzV1ff6kas9ucW+o3yzJK9YB1AQ=="],
"micromark-util-symbol": ["micromark-util-symbol@2.0.1", "", {}, "sha512-vs5t8Apaud9N28kgCrRUdEed4UJ+wWNvicHLPxCa9ENlYuAY31M0ETy5y1vA33YoNPDFTghEbnh6efaE8h4x0Q=="],
"micromark-util-types": ["micromark-util-types@2.0.2", "", {}, "sha512-Yw0ECSpJoViF1qTU4DC6NwtC4aWGt1EkzaQB8KPPyCRR8z9TWeV0HbEFGTO+ZY1wB22zmxnJqhPyTpOVCpeHTA=="],
"mime-db": ["mime-db@1.54.0", "", {}, "sha512-aU5EJuIN2WDemCcAp2vFBfp/m4EAhWJnUNSSw0ixs7/kXbd6Pg64EmwJkNdFhB8aWt1sH2CTXrLxo/iAGV3oPQ=="],
"minimatch": ["minimatch@5.1.6", "", { "dependencies": { "brace-expansion": "^2.0.1" } }, "sha512-lKwV/1brpG6mBUFHtb7NUmtABCb2WZZmm2wNiOA5hAb8VdCS4B3dtMWyvcoViccwAW/COERjXLt0zP1zXUN26g=="],
"mri": ["mri@1.2.0", "", {}, "sha512-tzzskb3bG8LvYGFF/mDTpq3jpI6Q9wc3LEmBaghu+DdCssd1FakN7Bc0hVNmEyGq1bq3RgfkCb3cmQLpNPOroA=="],
"mrmime": ["mrmime@2.0.1", "", {}, "sha512-Y3wQdFg2Va6etvQ5I82yUhGdsKrcYox6p7FfL1LbK2J4V01F9TGlepTIhnK24t7koZibmg82KGglhA1XK5IsLQ=="],
"ms": ["ms@2.1.3", "", {}, "sha512-6FlzubTLZG3J2a/NVCAleEhjzq5oxgHyaCU9yYXvcLsvoVaHJq/s5xXI6/XXP6tz7R9xAOtHnSO/tXtF3WRTlA=="],
"normalize-path": ["normalize-path@3.0.0", "", {}, "sha512-6eZs5Ls3WtCisHWp9S2GUy8dqkpGi4BVSz3GaqiE6ezub0512ESztXUwUB6C6IKbQkY2Pnb/mD4WYojCRwcwLA=="],
"nostr-tools": ["nostr-tools@2.17.0", "", { "dependencies": { "@noble/ciphers": "^0.5.1", "@noble/curves": "1.2.0", "@noble/hashes": "1.3.1", "@scure/base": "1.1.1", "@scure/bip32": "1.3.1", "@scure/bip39": "1.2.1", "nostr-wasm": "0.1.0" }, "peerDependencies": { "typescript": ">=5.0.0" }, "optionalPeers": ["typescript"] }, "sha512-lrvHM7cSaGhz7F0YuBvgHMoU2s8/KuThihDoOYk8w5gpVHTy0DeUCAgCN8uLGeuSl5MAWekJr9Dkfo5HClqO9w=="],
"nostr-wasm": ["nostr-wasm@0.1.0", "", {}, "sha512-78BTryCLcLYv96ONU8Ws3Q1JzjlAt+43pWQhIl86xZmWeegYCNLPml7yQ+gG3vR6V5h4XGj+TxO+SS5dsThQIA=="],
"once": ["once@1.4.0", "", { "dependencies": { "wrappy": "1" } }, "sha512-lNaJgI+2Q5URQBkccEKHTQOPaXdUxnZZElQTZY0MFUAuaEqe1E+Nyvgdz/aIyNi6Z9MzO5dv1H8n58/GELp3+w=="],
"oniguruma-parser": ["oniguruma-parser@0.12.1", "", {}, "sha512-8Unqkvk1RYc6yq2WBYRj4hdnsAxVze8i7iPfQr8e4uSP3tRv0rpZcbGUDvxfQQcdwHt/e9PrMvGCsa8OqG9X3w=="],
"oniguruma-to-es": ["oniguruma-to-es@4.3.3", "", { "dependencies": { "oniguruma-parser": "^0.12.1", "regex": "^6.0.1", "regex-recursion": "^6.0.2" } }, "sha512-rPiZhzC3wXwE59YQMRDodUwwT9FZ9nNBwQQfsd1wfdtlKEyCdRV0avrTcSZ5xlIvGRVPd/cx6ZN45ECmS39xvg=="],
"opts": ["opts@2.0.2", "", {}, "sha512-k41FwbcLnlgnFh69f4qdUfvDQ+5vaSDnVPFI/y5XuhKRq97EnVVneO9F1ESVCdiVu4fCS2L8usX3mU331hB7pg=="],
"outvariant": ["outvariant@1.4.0", "", {}, "sha512-AlWY719RF02ujitly7Kk/0QlV+pXGFDHrHf9O2OKqyqgBieaPOIeuSkL8sRK6j2WK+/ZAURq2kZsY0d8JapUiw=="],
"path-parse": ["path-parse@1.0.7", "", {}, "sha512-LDJzPVEEEPR+y48z93A0Ed0yXb8pAByGWo/k5YYdYgpY2/2EsOsksJrq7lOHxryrVOn1ejG6oAp8ahvOIQD8sw=="],
"picomatch": ["picomatch@4.0.3", "", {}, "sha512-5gTmgEY/sqK6gFXLIsQNH19lWb4ebPDLA4SdLP7dsWkIXHWlG66oPuVvXSGFPppYZz8ZDZq0dYYrbHfBCVUb1Q=="],
"property-information": ["property-information@7.1.0", "", {}, "sha512-TwEZ+X+yCJmYfL7TPUOcvBZ4QfoT5YenQiJuX//0th53DE6w0xxLEtfK3iyryQFddXuvkIk51EEgrJQ0WJkOmQ=="],
"randombytes": ["randombytes@2.1.0", "", { "dependencies": { "safe-buffer": "^5.1.0" } }, "sha512-vYl3iOX+4CKUWuxGi9Ukhie6fsqXqS9FE2Zaic4tNFD2N2QQaXOMFbuKK4QmDHC0JO6B1Zp41J0LpT0oR68amQ=="],
"readdirp": ["readdirp@3.6.0", "", { "dependencies": { "picomatch": "^2.2.1" } }, "sha512-hOS089on8RduqdbhvQ5Z37A0ESjsqz6qnRcffsMU3495FuTdqSm+7bhJ29JvIOsBDEEnan5DPu9t3To9VRlMzA=="],
"regex": ["regex@6.0.1", "", { "dependencies": { "regex-utilities": "^2.3.0" } }, "sha512-uorlqlzAKjKQZ5P+kTJr3eeJGSVroLKoHmquUj4zHWuR+hEyNqlXsSKlYYF5F4NI6nl7tWCs0apKJ0lmfsXAPA=="],
"regex-recursion": ["regex-recursion@6.0.2", "", { "dependencies": { "regex-utilities": "^2.3.0" } }, "sha512-0YCaSCq2VRIebiaUviZNs0cBz1kg5kVS2UKUfNIx8YVs1cN3AV7NTctO5FOKBA+UT2BPJIWZauYHPqJODG50cg=="],
"regex-utilities": ["regex-utilities@2.3.0", "", {}, "sha512-8VhliFJAWRaUiVvREIiW2NXXTmHs4vMNnSzuJVhscgmGav3g9VDxLrQndI3dZZVVdp0ZO/5v0xmX516/7M9cng=="],
"resolve": ["resolve@1.22.10", "", { "dependencies": { "is-core-module": "^2.16.0", "path-parse": "^1.0.7", "supports-preserve-symlinks-flag": "^1.0.0" }, "bin": { "resolve": "bin/resolve" } }, "sha512-NPRy+/ncIMeDlTAsuqwKIiferiawhefFJtkNSW0qZJEqMEb+qBt/77B/jGeeek+F0uOeN05CDa6HXbbIgtVX4w=="],
"resolve.exports": ["resolve.exports@2.0.3", "", {}, "sha512-OcXjMsGdhL4XnbShKpAcSqPMzQoYkYyhbEaeSko47MjRP9NfEQMhZkXL1DoFlt9LWQn4YttrdnV6X2OiyzBi+A=="],
@@ -153,6 +262,8 @@
"serialize-javascript": ["serialize-javascript@6.0.2", "", { "dependencies": { "randombytes": "^2.1.0" } }, "sha512-Saa1xPByTTq2gdeFZYLLo+RFE35NHZkAbqZeWNd3BpzppeVisAqpDjcp8dyf6uIvEqJRd46jemmyA4iFIeVk8g=="],
"shiki": ["shiki@3.13.0", "", { "dependencies": { "@shikijs/core": "3.13.0", "@shikijs/engine-javascript": "3.13.0", "@shikijs/engine-oniguruma": "3.13.0", "@shikijs/langs": "3.13.0", "@shikijs/themes": "3.13.0", "@shikijs/types": "3.13.0", "@shikijs/vscode-textmate": "^10.0.2", "@types/hast": "^3.0.4" } }, "sha512-aZW4l8Og16CokuCLf8CF8kq+KK2yOygapU5m3+hoGw0Mdosc6fPitjM+ujYarppj5ZIKGyPDPP1vqmQhr+5/0g=="],
"sirv": ["sirv@2.0.4", "", { "dependencies": { "@polka/url": "^1.0.0-next.24", "mrmime": "^2.0.0", "totalist": "^3.0.0" } }, "sha512-94Bdh3cC2PKrbgSOUqTiGPWVZeSiXfKOVZNJniWoqrWrRkB1CJzBU3NEbiTsPcYy1lDsANA/THzS+9WBiy5nfQ=="],
"sirv-cli": ["sirv-cli@2.0.2", "", { "dependencies": { "console-clear": "^1.1.0", "get-port": "^3.2.0", "kleur": "^4.1.4", "local-access": "^1.0.1", "sade": "^1.6.0", "semiver": "^1.0.0", "sirv": "^2.0.0", "tinydate": "^1.0.0" }, "bin": { "sirv": "bin.js" } }, "sha512-OtSJDwxsF1NWHc7ps3Sa0s+dPtP15iQNJzfKVz+MxkEo3z72mCD+yu30ct79rPr0CaV1HXSOBp+MIY5uIhHZ1A=="],
@@ -163,6 +274,14 @@
"source-map-support": ["source-map-support@0.5.21", "", { "dependencies": { "buffer-from": "^1.0.0", "source-map": "^0.6.0" } }, "sha512-uBHU3L3czsIyYXKX88fdrGovxdSCoTGDRZ6SYXtSRxLZUzHg5P/66Ht6uoUlHu9EZod+inXhKo3qQgwXUT/y1w=="],
"space-separated-tokens": ["space-separated-tokens@2.0.2", "", {}, "sha512-PEGlAwrG8yXGXRjW32fGbg66JAlOAwbObuqVoJpv/mRgoWDQfgH1wDPvtzWyUSNAXBGSk8h755YDbbcEy3SH2Q=="],
"static-browser-server": ["static-browser-server@1.0.3", "", { "dependencies": { "@open-draft/deferred-promise": "^2.1.0", "dotenv": "^16.0.3", "mime-db": "^1.52.0", "outvariant": "^1.3.0" } }, "sha512-ZUyfgGDdFRbZGGJQ1YhiM930Yczz5VlbJObrQLlk24+qNHVQx4OlLcYswEUo3bIyNAbQUIUR9Yr5/Hqjzqb4zA=="],
"strict-event-emitter": ["strict-event-emitter@0.4.6", "", {}, "sha512-12KWeb+wixJohmnwNFerbyiBrAlq5qJLwIt38etRtKtmmHyDSoGlIqFE9wx+4IwG0aDjI7GV8tc8ZccjWZZtTg=="],
"stringify-entities": ["stringify-entities@4.0.4", "", { "dependencies": { "character-entities-html4": "^2.0.0", "character-entities-legacy": "^3.0.0" } }, "sha512-IwfBptatlO+QCJUo19AqvrPNqlVMpW9YEL2LIVY+Rpv2qsjCGxaDLNRgeGsQWJhfItebuJhsGSLjaBbNSQ+ieg=="],
"supports-preserve-symlinks-flag": ["supports-preserve-symlinks-flag@1.0.0", "", {}, "sha512-ot0WnXS9fgdkgIcePe6RHNk1WA8+muPa6cSjeR3V8K27q9BB1rTE3R1p7Hv0z1ZyAc8s6Vvv8DIyWf681MAt0w=="],
"svelte": ["svelte@3.59.2", "", {}, "sha512-vzSyuGr3eEoAtT/A6bmajosJZIUWySzY2CzB3w2pgPvnkUjGqlDnsNnA0PMO+mMAhuyMul6C2uuZzY6ELSkzyA=="],
@@ -175,16 +294,60 @@
"totalist": ["totalist@3.0.1", "", {}, "sha512-sf4i37nQ2LBx4m3wB74y+ubopq6W/dIzXg0FDGjsYnZHVa1Da8FH853wlL2gtUhg+xJXjfk3kUZS3BRoQeoQBQ=="],
"trim-lines": ["trim-lines@3.0.1", "", {}, "sha512-kRj8B+YHZCc9kQYdWfJB2/oUl9rA99qbowYYBtr4ui4mZyAQ2JpvVBd/6U2YloATfqBhBTSMhTpgBHtU0Mf3Rg=="],
"tseep": ["tseep@1.3.1", "", {}, "sha512-ZPtfk1tQnZVyr7BPtbJ93qaAh2lZuIOpTMjhrYa4XctT8xe7t4SAW9LIxrySDuYMsfNNayE51E/WNGrNVgVicQ=="],
"typescript-lru-cache": ["typescript-lru-cache@2.0.0", "", {}, "sha512-Jp57Qyy8wXeMkdNuZiglE6v2Cypg13eDA1chHwDG6kq51X7gk4K7P7HaDdzZKCxkegXkVHNcPD0n5aW6OZH3aA=="],
"unist-util-is": ["unist-util-is@6.0.0", "", { "dependencies": { "@types/unist": "^3.0.0" } }, "sha512-2qCTHimwdxLfz+YzdGfkqNlH0tLi9xjTnHddPmJwtIG9MGsdbutfTc4P+haPD7l7Cjxf/WZj+we5qfVPvvxfYw=="],
"unist-util-position": ["unist-util-position@5.0.0", "", { "dependencies": { "@types/unist": "^3.0.0" } }, "sha512-fucsC7HjXvkB5R3kTCO7kUjRdrS0BJt3M/FPxmHMBOm8JQi2BsHAHFsy27E0EolP8rp0NzXsJ+jNPyDWvOJZPA=="],
"unist-util-stringify-position": ["unist-util-stringify-position@4.0.0", "", { "dependencies": { "@types/unist": "^3.0.0" } }, "sha512-0ASV06AAoKCDkS2+xw5RXJywruurpbC4JZSm7nr7MOt1ojAzvyyaO+UxZf18j8FCF6kmzCZKcAgN/yu2gm2XgQ=="],
"unist-util-visit": ["unist-util-visit@5.0.0", "", { "dependencies": { "@types/unist": "^3.0.0", "unist-util-is": "^6.0.0", "unist-util-visit-parents": "^6.0.0" } }, "sha512-MR04uvD+07cwl/yhVuVWAtw+3GOR/knlL55Nd/wAdblk27GCVt3lqpTivy/tkJcZoNPzTwS1Y+KMojlLDhoTzg=="],
"unist-util-visit-parents": ["unist-util-visit-parents@6.0.1", "", { "dependencies": { "@types/unist": "^3.0.0", "unist-util-is": "^6.0.0" } }, "sha512-L/PqWzfTP9lzzEa6CKs0k2nARxTdZduw3zyh8d2NVBnsyvHjSX4TWse388YrrQKbvI8w20fGjGlhgT96WwKykw=="],
"vfile": ["vfile@6.0.3", "", { "dependencies": { "@types/unist": "^3.0.0", "vfile-message": "^4.0.0" } }, "sha512-KzIbH/9tXat2u30jf+smMwFCsno4wHVdNmzFyL+T/L3UGqqk6JKfVqOFOZEpZSHADH1k40ab6NUIXZq422ov3Q=="],
"vfile-message": ["vfile-message@4.0.3", "", { "dependencies": { "@types/unist": "^3.0.0", "unist-util-stringify-position": "^4.0.0" } }, "sha512-QTHzsGd1EhbZs4AsQ20JX1rC3cOlt/IWJruk893DfLRr57lcnOeMaWG4K0JrRta4mIJZKth2Au3mM3u03/JWKw=="],
"wrappy": ["wrappy@1.0.2", "", {}, "sha512-l4Sp/DRseor9wL6EvV2+TuQn63dMkPjZ/sp9XkghTEbV9KlPS1xUsZ3u7/IQO4wxtcFB4bgpQPRcR3QCvezPcQ=="],
"ws": ["ws@7.5.10", "", { "peerDependencies": { "bufferutil": "^4.0.1", "utf-8-validate": "^5.0.2" }, "optionalPeers": ["bufferutil", "utf-8-validate"] }, "sha512-+dbF1tHwZpXcbOJdVOkzLDxZP1ailvSxM6ZweXTegylPny803bFhA+vqBYw4s31NSAk4S2Qz+AKXK9a4wkdjcQ=="],
"zwitch": ["zwitch@2.0.4", "", {}, "sha512-bXE4cR/kVZhKZX/RjPEflHaKVhUVl85noU3v6b8apfQEc1x4A+zBxjZ4lN8LqGd6WZ3dl98pY4o717VFmoPp+A=="],
"@scure/bip32/@noble/curves": ["@noble/curves@1.1.0", "", { "dependencies": { "@noble/hashes": "1.3.1" } }, "sha512-091oBExgENk/kGj3AZmtBDMpxQPDtxQABR2B9lb1JbVTs6ytdzZNwvhxQ4MWasRNEzlbEH8jCWFCwhF/Obj5AA=="],
"@scure/bip32/@noble/hashes": ["@noble/hashes@1.3.2", "", {}, "sha512-MVC8EAQp7MvEcm30KWENFjgR+Mkmf+D189XJTkFIlwohU5hcBbn1ZkKq7KVTi2Hme3PMGF390DaL52beVrIihQ=="],
"@scure/bip32/@scure/base": ["@scure/base@1.1.1", "", {}, "sha512-ZxOhsSyxYwLJj3pLZCefNitxsj093tb2vq90mp2txoYeBqbcjDjqFhyM8eUjq/uFm6zJ+mUuqxlS2FkuSY1MTA=="],
"@scure/bip39/@noble/hashes": ["@noble/hashes@1.3.2", "", {}, "sha512-MVC8EAQp7MvEcm30KWENFjgR+Mkmf+D189XJTkFIlwohU5hcBbn1ZkKq7KVTi2Hme3PMGF390DaL52beVrIihQ=="],
"@scure/bip39/@scure/base": ["@scure/base@1.1.1", "", {}, "sha512-ZxOhsSyxYwLJj3pLZCefNitxsj093tb2vq90mp2txoYeBqbcjDjqFhyM8eUjq/uFm6zJ+mUuqxlS2FkuSY1MTA=="],
"anymatch/picomatch": ["picomatch@2.3.1", "", {}, "sha512-JU3teHTNjmE2VCGFzuY8EXzCDVwEqB2a8fsIvwaStHhAWJEeVd1o1QD80CU6+ZdEXXSLbSsuLwJjkCBWqRQUVA=="],
"light-bolt11-decoder/@scure/base": ["@scure/base@1.1.1", "", {}, "sha512-ZxOhsSyxYwLJj3pLZCefNitxsj093tb2vq90mp2txoYeBqbcjDjqFhyM8eUjq/uFm6zJ+mUuqxlS2FkuSY1MTA=="],
"nostr-tools/@noble/curves": ["@noble/curves@1.2.0", "", { "dependencies": { "@noble/hashes": "1.3.2" } }, "sha512-oYclrNgRaM9SsBUBVbb8M6DTV7ZHRTKugureoYEncY5c65HOmRzvSiTE3y5CYaPYJA/GVkrhXEoF0M3Ya9PMnw=="],
"nostr-tools/@noble/hashes": ["@noble/hashes@1.3.1", "", {}, "sha512-EbqwksQwz9xDRGfDST86whPBgM65E0OH/pCgqW0GBVzO22bNE+NuIbeTb714+IfSjU3aRk47EUvXIb5bTsenKA=="],
"nostr-tools/@scure/base": ["@scure/base@1.1.1", "", {}, "sha512-ZxOhsSyxYwLJj3pLZCefNitxsj093tb2vq90mp2txoYeBqbcjDjqFhyM8eUjq/uFm6zJ+mUuqxlS2FkuSY1MTA=="],
"readdirp/picomatch": ["picomatch@2.3.1", "", {}, "sha512-JU3teHTNjmE2VCGFzuY8EXzCDVwEqB2a8fsIvwaStHhAWJEeVd1o1QD80CU6+ZdEXXSLbSsuLwJjkCBWqRQUVA=="],
"rollup-plugin-svelte/@rollup/pluginutils": ["@rollup/pluginutils@4.2.1", "", { "dependencies": { "estree-walker": "^2.0.1", "picomatch": "^2.2.2" } }, "sha512-iKnFXr7NkdZAIHiIWE+BX5ULi/ucVFYWD6TbAV+rZctiRTY2PL6tsIKhoIOaoskiWAkgu+VsbXgUVDNLHf+InQ=="],
"@scure/bip32/@noble/curves/@noble/hashes": ["@noble/hashes@1.3.1", "", {}, "sha512-EbqwksQwz9xDRGfDST86whPBgM65E0OH/pCgqW0GBVzO22bNE+NuIbeTb714+IfSjU3aRk47EUvXIb5bTsenKA=="],
"nostr-tools/@noble/curves/@noble/hashes": ["@noble/hashes@1.3.2", "", {}, "sha512-MVC8EAQp7MvEcm30KWENFjgR+Mkmf+D189XJTkFIlwohU5hcBbn1ZkKq7KVTi2Hme3PMGF390DaL52beVrIihQ=="],
"rollup-plugin-svelte/@rollup/pluginutils/picomatch": ["picomatch@2.3.1", "", {}, "sha512-JU3teHTNjmE2VCGFzuY8EXzCDVwEqB2a8fsIvwaStHhAWJEeVd1o1QD80CU6+ZdEXXSLbSsuLwJjkCBWqRQUVA=="],
}
}


@@ -4,7 +4,7 @@
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1" />
<title>Next Orly</title>
<link rel="icon" href="/favicon.png" type="image/png" />
<link rel="icon" href="/orly-favicon.png" type="image/png" />
<link rel="stylesheet" href="/bundle.css" />
</head>
<body>

app/web/package-lock.json (generated, new file, 1796 lines): diff suppressed because it is too large.

@@ -19,6 +19,7 @@
"svelte": "^3.55.0"
},
"dependencies": {
"@nostr-dev-kit/ndk": "^2.17.3",
"sirv-cli": "^2.0.0"
}
}

Binary file not shown (image replaced). Size: 3.1 KiB before → 514 KiB after.

@@ -6,7 +6,7 @@
<title>ORLY?</title>
<link rel="icon" type="image/png" href="/orly.png" />
<link rel="icon" type="image/png" href="/orly-favicon.png" />
<link rel="stylesheet" href="/global.css" />
<link rel="stylesheet" href="/build/bundle.css" />

Binary file not shown (new image). Size: 379 KiB.

@@ -1,6 +1,7 @@
<script>
import LoginModal from './LoginModal.svelte';
import { initializeNostrClient, fetchUserProfile, fetchAllEvents, fetchUserEvents, nostrClient } from './nostr.js';
import { initializeNostrClient, fetchUserProfile, fetchAllEvents, fetchUserEvents, searchEvents, fetchEventById, fetchDeleteEventsByTarget, nostrClient, NostrClient } from './nostr.js';
import { NDKPrivateKeySigner } from '@nostr-dev-kit/ndk';
let isDarkTheme = false;
let showLoginModal = false;
@@ -24,6 +25,10 @@
let oldestEventTimestamp = null; // For timestamp-based pagination
let newestEventTimestamp = null; // For loading newer events
// Search results state
let searchResults = new Map(); // Map of searchTabId -> { events, isLoading, hasMore, oldestTimestamp }
let isLoadingSearch = false;
// Screen-filling events view state
let eventsPerScreen = 20; // Default, will be calculated based on screen size
@@ -35,6 +40,13 @@
// Events filter toggle
let showOnlyMyEvents = false;
// My Events state
let myEvents = [];
let isLoadingMyEvents = false;
let hasMoreMyEvents = true;
let oldestMyEventTimestamp = null;
let newestMyEventTimestamp = null;
// Sprocket management state
let sprocketScript = '';
@@ -135,6 +147,7 @@
}
function truncatePubkey(pubkey) {
if (!pubkey) return 'unknown';
return pubkey.slice(0, 8) + '...' + pubkey.slice(-8);
}
@@ -161,10 +174,12 @@
await loadAllEvents(true, authors);
}
// Events are filtered server-side, but add client-side filtering as backup
$: filteredEvents = showOnlyMyEvents && isLoggedIn && userPubkey
? allEvents.filter(event => event.pubkey === userPubkey)
: allEvents;
// Sort events by created_at timestamp (newest first)
$: filteredEvents = (showOnlyMyEvents && isLoggedIn && userPubkey
? allEvents.filter(event => event.pubkey && event.pubkey === userPubkey)
: allEvents).sort((a, b) => b.created_at - a.created_at);
async function deleteEvent(eventId) {
if (!isLoggedIn) {
@@ -181,7 +196,7 @@
// Check permissions: admin/owner can delete any event, write users can only delete their own events
const canDelete = (userRole === 'admin' || userRole === 'owner') ||
(userRole === 'write' && event.pubkey === userPubkey);
(userRole === 'write' && event.pubkey && event.pubkey === userPubkey);
if (!canDelete) {
alert('You do not have permission to delete this event');
@@ -203,26 +218,172 @@
kind: 5,
created_at: Math.floor(Date.now() / 1000),
tags: [['e', eventId]], // e-tag referencing the event to delete
content: '',
pubkey: userPubkey
content: ''
// Don't set pubkey - let the signer set it
};
console.log('Created delete event template:', deleteEventTemplate);
console.log('User pubkey:', userPubkey);
console.log('Target event:', event);
console.log('Target event pubkey:', event.pubkey);
// Sign the event using the signer
const signedDeleteEvent = await userSigner.signEvent(deleteEventTemplate);
console.log('Signed delete event:', signedDeleteEvent);
console.log('Signed delete event pubkey:', signedDeleteEvent.pubkey);
console.log('Delete event tags:', signedDeleteEvent.tags);
// Publish the delete event to the relay
const result = await nostrClient.publish(signedDeleteEvent);
console.log('Delete event published:', result);
// Only publish the delete event to external relays when the user is
// deleting their own event; an admin/owner deleting another user's
// event publishes to the local relay only.
const isDeletingOwnEvent = event.pubkey && event.pubkey === userPubkey;
const shouldPublishToExternalRelays = isDeletingOwnEvent;
if (result.success && result.okCount > 0) {
// Remove from local list
allEvents = allEvents.filter(event => event.id !== eventId);
alert(`Event deleted successfully (accepted by ${result.okCount} relay(s))`);
if (shouldPublishToExternalRelays) {
// Publish the delete event to all relays (including external ones)
const result = await nostrClient.publish(signedDeleteEvent);
console.log('Delete event published:', result);
if (result.success && result.okCount > 0) {
// Wait a moment for the deletion to propagate
await new Promise(resolve => setTimeout(resolve, 2000));
// Verify the event was actually deleted by trying to fetch it
try {
const deletedEvent = await fetchEventById(eventId, { timeout: 5000 });
if (deletedEvent) {
console.warn('Event still exists after deletion attempt:', deletedEvent);
alert(`Warning: Delete event was accepted by ${result.okCount} relay(s), but the event still exists on the relay. This may indicate the relay does not properly handle delete events.`);
} else {
console.log('Event successfully deleted and verified');
}
} catch (fetchError) {
console.log('Could not fetch event after deletion (likely deleted):', fetchError.message);
}
// Also verify that the delete event has been saved
try {
const deleteEvents = await fetchDeleteEventsByTarget(eventId, { timeout: 5000 });
if (deleteEvents.length > 0) {
console.log(`Delete event verification: Found ${deleteEvents.length} delete event(s) targeting ${eventId}`);
// Check if our delete event is among them
const ourDeleteEvent = deleteEvents.find(de => de.pubkey && de.pubkey === userPubkey);
if (ourDeleteEvent) {
console.log('Our delete event found in database:', ourDeleteEvent.id);
} else {
console.warn('Our delete event not found in database, but other delete events exist');
}
} else {
console.warn('No delete events found in database for target event:', eventId);
}
} catch (deleteFetchError) {
console.log('Could not verify delete event in database:', deleteFetchError.message);
}
// Remove from local lists
allEvents = allEvents.filter(event => event.id !== eventId);
myEvents = myEvents.filter(event => event.id !== eventId);
// Remove from global cache
globalEventsCache = globalEventsCache.filter(event => event.id !== eventId);
// Remove from search results cache
for (const [tabId, searchResult] of searchResults) {
if (searchResult.events) {
searchResult.events = searchResult.events.filter(event => event.id !== eventId);
searchResults.set(tabId, searchResult);
}
}
// Update persistent state
savePersistentState();
// Reload events to show the new delete event at the top
console.log('Reloading events to show delete event...');
const authors = showOnlyMyEvents && isLoggedIn && userPubkey ? [userPubkey] : null;
await loadAllEvents(true, authors);
alert(`Event deleted successfully (accepted by ${result.okCount} relay(s))`);
} else {
throw new Error('No relays accepted the delete event');
}
} else {
throw new Error('No relays accepted the delete event');
// Admin/owner deleting someone else's event - only publish to local relay
// We need to publish only to the local relay, not external ones
const localRelayUrl = `wss://${window.location.host}/`;
// Create a modified client that only connects to the local relay
const localClient = new NostrClient();
await localClient.connectToRelay(localRelayUrl);
const result = await localClient.publish(signedDeleteEvent);
console.log('Delete event published to local relay only:', result);
if (result.success && result.okCount > 0) {
// Wait a moment for the deletion to propagate
await new Promise(resolve => setTimeout(resolve, 2000));
// Verify the event was actually deleted by trying to fetch it
try {
const deletedEvent = await fetchEventById(eventId, { timeout: 5000 });
if (deletedEvent) {
console.warn('Event still exists after deletion attempt:', deletedEvent);
alert(`Warning: Delete event was accepted by ${result.okCount} relay(s), but the event still exists on the relay. This may indicate the relay does not properly handle delete events.`);
} else {
console.log('Event successfully deleted and verified');
}
} catch (fetchError) {
console.log('Could not fetch event after deletion (likely deleted):', fetchError.message);
}
// Also verify that the delete event has been saved
try {
const deleteEvents = await fetchDeleteEventsByTarget(eventId, { timeout: 5000 });
if (deleteEvents.length > 0) {
console.log(`Delete event verification: Found ${deleteEvents.length} delete event(s) targeting ${eventId}`);
// Check if our delete event is among them
const ourDeleteEvent = deleteEvents.find(de => de.pubkey && de.pubkey === userPubkey);
if (ourDeleteEvent) {
console.log('Our delete event found in database:', ourDeleteEvent.id);
} else {
console.warn('Our delete event not found in database, but other delete events exist');
}
} else {
console.warn('No delete events found in database for target event:', eventId);
}
} catch (deleteFetchError) {
console.log('Could not verify delete event in database:', deleteFetchError.message);
}
// Remove from local lists
allEvents = allEvents.filter(event => event.id !== eventId);
myEvents = myEvents.filter(event => event.id !== eventId);
// Remove from global cache
globalEventsCache = globalEventsCache.filter(event => event.id !== eventId);
// Remove from search results cache
for (const [tabId, searchResult] of searchResults) {
if (searchResult.events) {
searchResult.events = searchResult.events.filter(event => event.id !== eventId);
searchResults.set(tabId, searchResult);
}
}
// Update persistent state
savePersistentState();
// Reload events to show the new delete event at the top
console.log('Reloading events to show delete event...');
const authors = showOnlyMyEvents && isLoggedIn && userPubkey ? [userPubkey] : null;
await loadAllEvents(true, authors);
alert(`Event deleted successfully (local relay only - admin/owner deleting other user's event)`);
} else {
throw new Error('Local relay did not accept the delete event');
}
}
} catch (error) {
console.error('Failed to delete event:', error);
@@ -358,7 +519,7 @@
function updateGlobalCache(events) {
globalEventsCache = events;
globalEventsCache = events.sort((a, b) => b.created_at - a.created_at);
globalCacheTimestamp = Date.now();
savePersistentState();
}
@@ -712,6 +873,17 @@
// Initialize Nostr client and fetch profile
try {
await initializeNostrClient();
// Set up NDK signer based on authentication method
if (method === 'extension' && signer) {
// Extension signer (NIP-07 compatible)
nostrClient.setSigner(signer);
} else if (method === 'nsec' && privateKey) {
// Private key signer for nsec
const ndkSigner = new NDKPrivateKeySigner(privateKey);
nostrClient.setSigner(ndkSigner);
}
userProfile = await fetchUserProfile(pubkey);
console.log('Profile loaded:', userProfile);
} catch (error) {
@@ -780,22 +952,97 @@
const searchTabId = `search-${Date.now()}`;
const newSearchTab = {
id: searchTabId,
icon: '',
icon: '🔍',
label: query,
isSearchTab: true,
query: query
};
searchTabs = [...searchTabs, newSearchTab];
selectedTab = searchTabId;
// Initialize search results for this tab
searchResults.set(searchTabId, {
events: [],
isLoading: false,
hasMore: true,
oldestTimestamp: null
});
// Start loading search results
loadSearchResults(searchTabId, query);
}
function closeSearchTab(tabId) {
searchTabs = searchTabs.filter(tab => tab.id !== tabId);
searchResults.delete(tabId); // Clean up search results
if (selectedTab === tabId) {
selectedTab = 'export'; // Fall back to export tab
}
}
async function loadSearchResults(searchTabId, query, reset = true) {
const searchResult = searchResults.get(searchTabId);
if (!searchResult || searchResult.isLoading) return;
// Update loading state
searchResult.isLoading = true;
searchResults.set(searchTabId, searchResult);
try {
const options = {
limit: reset ? 100 : 200,
until: reset ? Math.floor(Date.now() / 1000) : searchResult.oldestTimestamp
};
console.log('Loading search results for query:', query, 'with options:', options);
const events = await searchEvents(query, options);
console.log('Received search results:', events.length, 'events');
if (reset) {
searchResult.events = events.sort((a, b) => b.created_at - a.created_at);
} else {
searchResult.events = [...searchResult.events, ...events].sort((a, b) => b.created_at - a.created_at);
}
// Update oldest timestamp for next pagination
if (events.length > 0) {
const oldestInBatch = Math.min(...events.map(e => e.created_at));
if (!searchResult.oldestTimestamp || oldestInBatch < searchResult.oldestTimestamp) {
searchResult.oldestTimestamp = oldestInBatch;
}
}
searchResult.hasMore = events.length === (reset ? 100 : 200);
searchResult.isLoading = false;
searchResults.set(searchTabId, searchResult);
} catch (error) {
console.error('Failed to load search results:', error);
searchResult.isLoading = false;
searchResults.set(searchTabId, searchResult);
alert('Failed to load search results: ' + error.message);
}
}
async function loadMoreSearchResults(searchTabId) {
const searchTab = searchTabs.find(tab => tab.id === searchTabId);
if (searchTab) {
await loadSearchResults(searchTabId, searchTab.query, false);
}
}
function handleSearchScroll(event, searchTabId) {
const { scrollTop, scrollHeight, clientHeight } = event.target;
const threshold = 100; // Load more when 100px from bottom
if (scrollHeight - scrollTop - clientHeight < threshold) {
const searchResult = searchResults.get(searchTabId);
if (searchResult && !searchResult.isLoading && searchResult.hasMore) {
loadMoreSearchResults(searchTabId);
}
}
}
$: if (typeof document !== 'undefined') {
if (isDarkTheme) {
@@ -981,13 +1228,9 @@
});
if (reset) {
myEvents = events;
// Update cache
updateCache(userPubkey, events);
myEvents = events.sort((a, b) => b.created_at - a.created_at);
} else {
myEvents = [...myEvents, ...events];
// Update cache with all events
updateCache(userPubkey, myEvents);
myEvents = [...myEvents, ...events].sort((a, b) => b.created_at - a.created_at);
}
// Update oldest timestamp for next pagination
@@ -1061,9 +1304,9 @@
}
try {
// Use WebSocket REQ to fetch events with timestamp-based pagination
// Use Nostr WebSocket to fetch events with timestamp-based pagination
// Load 100 events on initial load, otherwise use 200 for pagination
console.log('Loading events with authors filter:', authors);
console.log('Loading events with authors filter:', authors, 'including delete events');
const events = await fetchAllEvents({
limit: reset ? 100 : 200,
until: reset ? Math.floor(Date.now() / 1000) : oldestEventTimestamp,
@@ -1071,18 +1314,18 @@
});
console.log('Received events:', events.length, 'events');
if (authors && events.length > 0) {
const nonUserEvents = events.filter(event => event.pubkey !== userPubkey);
const nonUserEvents = events.filter(event => event.pubkey && event.pubkey !== userPubkey);
if (nonUserEvents.length > 0) {
console.warn('Server returned non-user events:', nonUserEvents.length, 'out of', events.length);
}
}
if (reset) {
allEvents = events;
allEvents = events.sort((a, b) => b.created_at - a.created_at);
// Update global cache
updateGlobalCache(events);
} else {
allEvents = [...allEvents, ...events];
allEvents = [...allEvents, ...events].sort((a, b) => b.created_at - a.created_at);
// Update global cache with all events
updateGlobalCache(allEvents);
}
@@ -1235,7 +1478,7 @@
<!-- Header -->
<header class="main-header" class:dark-theme={isDarkTheme}>
<div class="header-content">
<img src="/orly.png" alt="Orly Logo" class="logo"/>
<img src="/orly-favicon.png" alt="Orly Logo" class="logo"/>
{#if isSearchMode}
<div class="search-input-container">
<input
@@ -1290,12 +1533,11 @@
{#each tabs as tab}
<button class="tab" class:active={selectedTab === tab.id}
on:click={() => selectTab(tab.id)}>
{#if tab.isSearchTab}
<span class="tab-icon close-icon" on:click|stopPropagation={() => closeSearchTab(tab.id)} on:keydown={(e) => e.key === 'Enter' && closeSearchTab(tab.id)} role="button" tabindex="0">{tab.icon}</span>
{:else}
<span class="tab-icon">{tab.icon}</span>
{/if}
<span class="tab-icon">{tab.icon}</span>
<span class="tab-label">{tab.label}</span>
{#if tab.isSearchTab}
<span class="tab-close-icon" on:click|stopPropagation={() => closeSearchTab(tab.id)} on:keydown={(e) => e.key === 'Enter' && closeSearchTab(tab.id)} role="button" tabindex="0"></span>
{/if}
</button>
{/each}
</div>
@@ -1365,12 +1607,24 @@
<span class="toggle-label">Only show my events</span>
</label>
</div>
<button class="refresh-btn" on:click={() => {
const authors = showOnlyMyEvents && userPubkey ? [userPubkey] : null;
loadAllEvents(false, authors);
}} disabled={isLoadingEvents}>
🔄 Load More
</button>
<div class="events-view-buttons">
<button class="refresh-btn" on:click={() => {
const authors = showOnlyMyEvents && userPubkey ? [userPubkey] : null;
loadAllEvents(false, authors);
}} disabled={isLoadingEvents}>
🔄 Load More
</button>
<button class="reload-btn" on:click={() => {
const authors = showOnlyMyEvents && userPubkey ? [userPubkey] : null;
loadAllEvents(true, authors);
}} disabled={isLoadingEvents}>
{#if isLoadingEvents}
<div class="spinner"></div>
{:else}
🔄
{/if}
</button>
</div>
</div>
<div class="events-view-content" on:scroll={handleScroll}>
{#if filteredEvents.length > 0}
@@ -1385,14 +1639,27 @@
{truncatePubkey(event.pubkey)}
</div>
<div class="events-view-kind">
<span class="kind-number">{event.kind}</span>
<span class="kind-number" class:delete-event={event.kind === 5}>{event.kind}</span>
<span class="kind-name">{getKindName(event.kind)}</span>
</div>
</div>
<div class="events-view-content">
{truncateContent(event.content)}
{#if event.kind === 5}
<div class="delete-event-info">
<span class="delete-event-label">🗑️ Delete Event</span>
{#if event.tags && event.tags.length > 0}
<div class="delete-targets">
{#each event.tags.filter(tag => tag[0] === 'e') as eTag}
<span class="delete-target">Target: {eTag[1].slice(0, 8)}...{eTag[1].slice(-8)}</span>
{/each}
</div>
{/if}
</div>
{:else}
{truncateContent(event.content)}
{/if}
</div>
{#if (userRole === 'admin' || userRole === 'owner') || (userRole === 'write' && event.pubkey === userPubkey)}
{#if event.kind !== 5 && ((userRole === 'admin' || userRole === 'owner') || (userRole === 'write' && event.pubkey && event.pubkey === userPubkey))}
<button class="delete-btn" on:click|stopPropagation={() => deleteEvent(event.id)}>
🗑️
</button>
@@ -1561,6 +1828,71 @@
</div>
{/if}
</div>
{:else if searchTabs.some(tab => tab.id === selectedTab)}
{#each searchTabs as searchTab}
{#if searchTab.id === selectedTab}
<div class="search-results-view">
<div class="search-results-header">
<h2>🔍 Search Results: "{searchTab.query}"</h2>
<button class="refresh-btn" on:click={() => loadSearchResults(searchTab.id, searchTab.query, true)} disabled={searchResults.get(searchTab.id)?.isLoading}>
🔄 Refresh
</button>
</div>
<div class="search-results-content" on:scroll={(e) => handleSearchScroll(e, searchTab.id)}>
{#if searchResults.get(searchTab.id)?.events?.length > 0}
{#each searchResults.get(searchTab.id).events as event}
<div class="search-result-item" class:expanded={expandedEvents.has(event.id)}>
<div class="search-result-row" on:click={() => toggleEventExpansion(event.id)} on:keydown={(e) => e.key === 'Enter' && toggleEventExpansion(event.id)} role="button" tabindex="0">
<div class="search-result-avatar">
<div class="avatar-placeholder">👤</div>
</div>
<div class="search-result-info">
<div class="search-result-author">
{truncatePubkey(event.pubkey)}
</div>
<div class="search-result-kind">
<span class="kind-number">{event.kind}</span>
<span class="kind-name">{getKindName(event.kind)}</span>
</div>
</div>
<div class="search-result-content">
{truncateContent(event.content)}
</div>
{#if event.kind !== 5 && ((userRole === 'admin' || userRole === 'owner') || (userRole === 'write' && event.pubkey && event.pubkey === userPubkey))}
<button class="delete-btn" on:click|stopPropagation={() => deleteEvent(event.id)}>
🗑️
</button>
{/if}
</div>
{#if expandedEvents.has(event.id)}
<div class="search-result-details">
<pre class="event-json">{JSON.stringify(event, null, 2)}</pre>
</div>
{/if}
</div>
{/each}
{:else if !searchResults.get(searchTab.id)?.isLoading}
<div class="no-search-results">
<p>No search results found for "{searchTab.query}".</p>
</div>
{/if}
{#if searchResults.get(searchTab.id)?.isLoading}
<div class="loading-search-results">
<div class="loading-spinner"></div>
<p>Searching...</p>
</div>
{/if}
{#if !searchResults.get(searchTab.id)?.hasMore && searchResults.get(searchTab.id)?.events?.length > 0}
<div class="end-of-search-results">
<p>No more search results to load.</p>
</div>
{/if}
</div>
</div>
{/if}
{/each}
{:else}
<div class="welcome-message">
{#if isLoggedIn}
@@ -1885,13 +2217,20 @@
flex: 1;
}
.close-icon {
.tab-close-icon {
cursor: pointer;
transition: opacity 0.2s;
font-size: 0.8em;
margin-left: auto;
padding: 0.25rem;
border-radius: 0.25rem;
flex-shrink: 0;
}
.close-icon:hover {
.tab-close-icon:hover {
opacity: 0.7;
background-color: var(--warning);
color: white;
}
/* Main Content */
@@ -2543,7 +2882,13 @@
line-height: 1.5;
}
.export-btn, .import-btn, .refresh-btn {
.events-view-buttons {
display: flex;
gap: 0.5rem;
align-items: center;
}
.export-btn, .import-btn, .refresh-btn, .reload-btn {
padding: 0.5rem 1rem;
background: var(--primary);
color: white;
@@ -2559,10 +2904,29 @@
height: 2em;
}
.export-btn:hover, .import-btn:hover, .refresh-btn:hover {
.export-btn:hover, .import-btn:hover, .refresh-btn:hover, .reload-btn:hover {
background: #00ACC1;
}
.reload-btn {
min-width: 2em;
justify-content: center;
}
.spinner {
width: 1em;
height: 1em;
border: 2px solid transparent;
border-top: 2px solid currentColor;
border-radius: 50%;
animation: spin 1s linear infinite;
}
@keyframes spin {
0% { transform: rotate(0deg); }
100% { transform: rotate(360deg); }
}
.export-btn:disabled, .import-btn:disabled {
opacity: 0.5;
cursor: not-allowed;
@@ -2800,6 +3164,35 @@
color: white;
}
.kind-number.delete-event {
background: var(--warning);
}
.delete-event-info {
display: flex;
flex-direction: column;
gap: 0.25rem;
}
.delete-event-label {
font-weight: 500;
color: var(--warning);
}
.delete-targets {
display: flex;
flex-direction: column;
gap: 0.125rem;
}
.delete-target {
font-size: 0.75rem;
font-family: monospace;
color: var(--text-color);
opacity: 0.7;
}
.events-view-details {
border-top: 1px solid var(--border-color);
background: var(--header-bg);
@@ -2873,6 +3266,148 @@
margin: 0;
}
/* Search Results Styles */
.search-results-view {
position: fixed;
top: 3em;
left: 200px;
right: 0;
bottom: 0;
background: var(--bg-color);
color: var(--text-color);
display: flex;
flex-direction: column;
overflow: hidden;
}
.search-results-header {
padding: 0.5rem 1rem;
background: var(--header-bg);
border-bottom: 1px solid var(--border-color);
flex-shrink: 0;
display: flex;
justify-content: space-between;
align-items: center;
height: 2.5em;
}
.search-results-header h2 {
margin: 0;
font-size: 1rem;
font-weight: 600;
color: var(--text-color);
}
.search-results-content {
flex: 1;
overflow-y: auto;
padding: 0;
}
.search-result-item {
border-bottom: 1px solid var(--border-color);
transition: background-color 0.2s;
}
.search-result-item:hover {
background: var(--button-hover-bg);
}
.search-result-item.expanded {
background: var(--button-hover-bg);
}
.search-result-row {
display: flex;
align-items: center;
padding: 0.75rem 1rem;
cursor: pointer;
gap: 0.75rem;
min-height: 3rem;
}
.search-result-avatar {
flex-shrink: 0;
width: 2rem;
height: 2rem;
display: flex;
align-items: center;
justify-content: center;
}
.search-result-info {
flex-shrink: 0;
width: 12rem;
display: flex;
flex-direction: column;
gap: 0.25rem;
}
.search-result-author {
font-family: monospace;
font-size: 0.8rem;
color: var(--text-color);
opacity: 0.8;
}
.search-result-kind {
display: flex;
align-items: center;
gap: 0.5rem;
}
.search-result-content {
flex: 1;
color: var(--text-color);
font-size: 0.9rem;
line-height: 1.3;
word-break: break-word;
padding: 0 0.5rem;
}
.search-result-details {
border-top: 1px solid var(--border-color);
background: var(--header-bg);
padding: 1rem;
}
.no-search-results {
padding: 2rem;
text-align: center;
color: var(--text-color);
opacity: 0.7;
}
.no-search-results p {
margin: 0;
font-size: 1rem;
}
.loading-search-results {
padding: 2rem;
text-align: center;
color: var(--text-color);
opacity: 0.7;
}
.loading-search-results p {
margin: 0;
font-size: 0.9rem;
}
.end-of-search-results {
padding: 1rem;
text-align: center;
color: var(--text-color);
opacity: 0.5;
font-size: 0.8rem;
border-top: 1px solid var(--border-color);
}
.end-of-search-results p {
margin: 0;
}
@media (max-width: 640px) {
.settings-drawer {
@@ -2916,5 +3451,21 @@
.events-view-content {
font-size: 0.8rem;
}
.search-results-view {
left: 160px;
}
.search-result-info {
width: 8rem;
}
.search-result-author {
font-size: 0.7rem;
}
.search-result-content {
font-size: 0.8rem;
}
}
</style>
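The delete flow above repeats the same cache-cleanup steps in both the own-event branch and the admin/owner branch. That cleanup could be factored into a small pure helper; the sketch below is illustrative only (the state shape and names mirror the component's locals but are not part of App.svelte):

```javascript
// Remove a deleted event from every client-side collection in one pass.
// `state` mirrors the component's locals: allEvents, myEvents,
// globalEventsCache (arrays of events) and searchResults
// (a Map of tabId -> { events }).
function purgeEventFromState(state, eventId) {
  const keep = (event) => event.id !== eventId;
  state.allEvents = state.allEvents.filter(keep);
  state.myEvents = state.myEvents.filter(keep);
  state.globalEventsCache = state.globalEventsCache.filter(keep);
  for (const [tabId, result] of state.searchResults) {
    if (result.events) {
      result.events = result.events.filter(keep);
      state.searchResults.set(tabId, result);
    }
  }
  return state;
}

// Example: purge event "b" from all collections.
const state = purgeEventFromState({
  allEvents: [{ id: 'a' }, { id: 'b' }],
  myEvents: [{ id: 'b' }],
  globalEventsCache: [{ id: 'a' }, { id: 'b' }],
  searchResults: new Map([['search-1', { events: [{ id: 'b' }] }]]),
}, 'b');
```

A helper like this would let both branches of `deleteEvent` share one call instead of duplicating four filter passes each.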


@@ -1,5 +1,6 @@
<script>
import { createEventDispatcher } from 'svelte';
import { NDKPrivateKeySigner } from '@nostr-dev-kit/ndk';
const dispatch = createEventDispatcher();
@@ -103,22 +104,23 @@
throw new Error('Invalid nsec format. Must start with "nsec1"');
}
// Convert nsec to hex format (simplified for demo)
const privateKey = nsecToHex(nsecInput.trim());
// Create NDK signer from nsec
const signer = new NDKPrivateKeySigner(nsecInput.trim());
// In a real implementation, you'd derive the public key from private key
const publicKey = 'derived_' + privateKey.slice(5, 37);
// Get the public key from the signer
const publicKey = await signer.user().then(user => user.pubkey);
// Store securely (in production, consider more secure storage)
localStorage.setItem('nostr_auth_method', 'nsec');
localStorage.setItem('nostr_pubkey', publicKey);
localStorage.setItem('nostr_privkey', privateKey);
localStorage.setItem('nostr_privkey', nsecInput.trim());
successMessage = 'Successfully logged in with nsec!';
dispatch('login', {
method: 'nsec',
pubkey: publicKey,
privateKey: privateKey
privateKey: nsecInput.trim(),
signer: signer
});
setTimeout(() => {
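The LoginModal change above replaces the fake `'derived_' + …` public key with one derived from the NDK signer. The flow can be sketched with the signer injected, so the validation and storage logic is exercisable without NDK; the real code uses `new NDKPrivateKeySigner(nsec)` and awaits `signer.user()`, while the mock here returns its user synchronously for simplicity (all names besides the localStorage keys are illustrative):

```javascript
// Sketch of the nsec login flow from LoginModal.svelte, with the signer
// factory and storage injected so the logic can run without NDK.
function loginWithNsec(nsecInput, makeSigner, storage) {
  const nsec = nsecInput.trim();
  if (!nsec.startsWith('nsec1')) {
    throw new Error('Invalid nsec format. Must start with "nsec1"');
  }
  const signer = makeSigner(nsec);
  // Real code: const publicKey = await signer.user().then(user => user.pubkey);
  const publicKey = signer.user().pubkey;
  storage.setItem('nostr_auth_method', 'nsec');
  storage.setItem('nostr_pubkey', publicKey);
  storage.setItem('nostr_privkey', nsec); // stored as nsec, no hex conversion
  return { method: 'nsec', pubkey: publicKey, privateKey: nsec, signer };
}

// Usage with a mock signer standing in for NDKPrivateKeySigner:
const store = new Map();
const login = loginWithNsec(
  ' nsec1exampleonly ',
  () => ({ user: () => ({ pubkey: 'deadbeefcafe' }) }),
  { setItem: (key, value) => store.set(key, value) },
);
```

Note the key difference from the removed code: the public key comes from the signer rather than a string manipulation of the private key, and the private key is persisted in its original nsec form.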


@@ -1,14 +1,5 @@
// Default Nostr relays for searching
export const DEFAULT_RELAYS = [
// Use the local relay WebSocket endpoint
`wss://${window.location.host}/ws`,
// Fallback to external relays if local fails
"wss://relay.damus.io",
"wss://relay.nostr.band",
"wss://nos.lol",
"wss://relay.nostr.net",
"wss://relay.minibits.cash",
"wss://relay.coinos.io/",
"wss://nwc.primal.net",
"wss://relay.orly.dev",
`wss://${window.location.host}/`,
];


@@ -1,243 +1,123 @@
import NDK, { NDKPrivateKeySigner, NDKEvent } from '@nostr-dev-kit/ndk';
import { DEFAULT_RELAYS } from "./constants.js";
// Simple WebSocket relay manager
// NDK-based Nostr client wrapper
class NostrClient {
constructor() {
this.relays = new Map();
this.subscriptions = new Map();
this.ndk = new NDK({
explicitRelayUrls: DEFAULT_RELAYS
});
this.isConnected = false;
}
async connect() {
console.log("Starting connection to", DEFAULT_RELAYS.length, "relays...");
const connectionPromises = DEFAULT_RELAYS.map((relayUrl) => {
return new Promise((resolve) => {
try {
console.log(`Attempting to connect to ${relayUrl}`);
const ws = new WebSocket(relayUrl);
ws.onopen = () => {
console.log(`✓ Successfully connected to ${relayUrl}`);
resolve(true);
};
ws.onerror = (error) => {
console.error(`✗ Error connecting to ${relayUrl}:`, error);
resolve(false);
};
ws.onclose = (event) => {
console.warn(
`Connection closed to ${relayUrl}:`,
event.code,
event.reason,
);
};
ws.onmessage = (event) => {
console.log(`Message from ${relayUrl}:`, event.data);
try {
this.handleMessage(relayUrl, JSON.parse(event.data));
} catch (error) {
console.error(
`Failed to parse message from ${relayUrl}:`,
error,
event.data,
);
}
};
this.relays.set(relayUrl, ws);
// Timeout after 5 seconds
setTimeout(() => {
if (ws.readyState !== WebSocket.OPEN) {
console.warn(`Connection timeout for ${relayUrl}`);
resolve(false);
}
}, 5000);
} catch (error) {
console.error(`Failed to create WebSocket for ${relayUrl}:`, error);
resolve(false);
}
});
});
const results = await Promise.all(connectionPromises);
const successfulConnections = results.filter(Boolean).length;
console.log(
`Connected to ${successfulConnections}/${DEFAULT_RELAYS.length} relays`,
);
// Wait a bit more for connections to stabilize
await new Promise((resolve) => setTimeout(resolve, 1000));
console.log("Starting NDK connection to", DEFAULT_RELAYS.length, "relays...");
try {
await this.ndk.connect();
this.isConnected = true;
console.log("✓ NDK successfully connected to relays");
// Wait a bit for connections to stabilize
await new Promise((resolve) => setTimeout(resolve, 1000));
} catch (error) {
console.error("✗ NDK connection failed:", error);
throw error;
}
}
handleMessage(relayUrl, message) {
console.log(`Processing message from ${relayUrl}:`, message);
const [type, subscriptionId, event, ...rest] = message;
console.log(`Message type: ${type}, subscriptionId: ${subscriptionId}`);
if (type === "EVENT") {
console.log(`Received EVENT for subscription ${subscriptionId}:`, event);
if (this.subscriptions.has(subscriptionId)) {
console.log(
`Found callback for subscription ${subscriptionId}, executing...`,
);
const callback = this.subscriptions.get(subscriptionId);
callback(event);
} else {
console.warn(`No callback found for subscription ${subscriptionId}`);
}
} else if (type === "EOSE") {
console.log(
`End of stored events for subscription ${subscriptionId} from ${relayUrl}`,
);
// Dispatch EOSE event for fetchEvents function
if (this.subscriptions.has(subscriptionId)) {
window.dispatchEvent(new CustomEvent('nostr-eose', {
detail: { subscriptionId, relayUrl }
}));
}
} else if (type === "NOTICE") {
console.warn(`Notice from ${relayUrl}:`, subscriptionId); // for NOTICE, the second element is the human-readable message
} else {
console.log(`Unknown message type ${type} from ${relayUrl}:`, message);
}
}
async connectToRelay(relayUrl) {
console.log(`Adding relay to NDK: ${relayUrl}`);
try {
// For now, just update the DEFAULT_RELAYS array and reconnect
// This is a simpler approach that avoids replacing the NDK instance
DEFAULT_RELAYS.push(relayUrl);
// Reconnect with the updated relay list
await this.connect();
console.log(`✓ Successfully added relay ${relayUrl}`);
return true;
} catch (error) {
console.error(`✗ Failed to add relay ${relayUrl}:`, error);
return false;
}
}
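The envelope dispatch in handleMessage above can be summarized as a pure classifier. This is a minimal sketch assuming the NIP-01 envelope shapes; `classifyEnvelope` is a hypothetical helper name, not part of this client:

```javascript
// Hypothetical helper mirroring handleMessage's dispatch (shapes per NIP-01).
function classifyEnvelope(message) {
  const [type, first, second] = message;
  switch (type) {
    case "EVENT": // ["EVENT", subscriptionId, event]
      return { action: "dispatch-event", subscriptionId: first, event: second };
    case "EOSE": // ["EOSE", subscriptionId], end of stored events
      return { action: "eose", subscriptionId: first };
    case "NOTICE": // ["NOTICE", message]
      return { action: "notice", text: first };
    default:
      return { action: "unknown", raw: message };
  }
}
```

Keeping the classification separate from the side effects (callback lookup, CustomEvent dispatch) makes the message handling straightforward to unit-test.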
subscribe(filters, callback) {
const subscriptionId = Math.random().toString(36).substring(7);
console.log(`Creating NDK subscription ${subscriptionId} with filters:`, filters);
const subscription = this.ndk.subscribe(filters, {
closeOnEose: true
});
this.subscriptions.set(subscriptionId, callback);
subscription.on('event', (event) => {
console.log("Event received via NDK:", event);
callback(event.rawEvent());
});
subscription.on('eose', () => {
console.log("EOSE received via NDK");
window.dispatchEvent(new CustomEvent('nostr-eose', {
detail: { subscriptionId }
}));
});
return subscriptionId;
}
unsubscribe(subscriptionId) {
this.subscriptions.delete(subscriptionId);
const closeMessage = ["CLOSE", subscriptionId];
for (const [relayUrl, ws] of this.relays) {
if (ws.readyState === WebSocket.OPEN) {
ws.send(JSON.stringify(closeMessage));
}
}
console.log(`Closing NDK subscription: ${subscriptionId}`);
// NDK handles subscription cleanup automatically
}
disconnect() {
for (const [relayUrl, ws] of this.relays) {
ws.close();
}
console.log("Disconnecting NDK");
// Note: NDK doesn't have a destroy method, just disconnect
if (this.ndk && typeof this.ndk.disconnect === 'function') {
this.ndk.disconnect();
}
this.relays.clear();
this.subscriptions.clear();
this.isConnected = false;
}
// Publish an event using NDK
async publish(event) {
console.log("Publishing event via NDK:", event);
try {
const ndkEvent = new NDKEvent(this.ndk, event);
await ndkEvent.publish();
console.log("✓ Event published successfully via NDK");
return { success: true, okCount: 1, errorCount: 0 };
} catch (error) {
console.error("✗ Failed to publish event via NDK:", error);
throw error;
}
}
// Get NDK instance for advanced usage
getNDK() {
return this.ndk;
}
// Get signer from NDK
getSigner() {
return this.ndk.signer;
}
// Set signer for NDK
setSigner(signer) {
this.ndk.signer = signer;
}
}
// Create a global client instance
export const nostrClient = new NostrClient();
// Export the class for creating new instances
export { NostrClient };
// IndexedDB helpers for caching events (kind 0 profiles)
const DB_NAME = "nostrCache";
const DB_VERSION = 1;
@@ -329,224 +209,100 @@ function parseProfileFromEvent(event) {
}
}
// Fetch user profile metadata (kind 0) using NDK
export async function fetchUserProfile(pubkey) {
console.log(`Starting profile fetch for pubkey: ${pubkey}`);
// 1) Try cached profile first and return immediately if present
try {
const cachedEvent = await getLatestProfileEvent(pubkey);
if (cachedEvent) {
console.log("Using cached profile event");
return parseProfileFromEvent(cachedEvent);
}
} catch (e) {
console.warn("Failed to load cached profile", e);
}
// 2) Fetch profile using NDK
try {
const ndk = nostrClient.getNDK();
const user = ndk.getUser({ hexpubkey: pubkey });
// Fetch the latest profile event
const profileEvent = await user.fetchProfile();
if (profileEvent) {
console.log("Profile fetched via NDK:", profileEvent);
// Cache the event
await putEvent(profileEvent.rawEvent());
// Parse profile data
const profile = parseProfileFromEvent(profileEvent.rawEvent());
// Notify listeners that an updated profile is available
try {
if (typeof window !== "undefined" && window.dispatchEvent) {
window.dispatchEvent(
new CustomEvent("profile-updated", {
detail: { pubkey, profile, event: profileEvent.rawEvent() },
}),
);
}
} catch (e) {
console.warn("Failed to dispatch profile-updated event", e);
}
return profile;
} else {
throw new Error("No profile found");
}
} catch (error) {
console.error("Failed to fetch profile via NDK:", error);
throw error;
}
}
// Fetch events using NDK
export async function fetchEvents(filters, options = {}) {
console.log(`Starting event fetch with filters:`, filters);
const {
timeout = 30000,
limit = null
} = options;
try {
const ndk = nostrClient.getNDK();
// Add limit to filters if specified
const requestFilters = { ...filters };
if (limit) {
requestFilters.limit = limit;
}
console.log('Fetching events via NDK with filters:', requestFilters);
// Use NDK's fetchEvents method
const events = await ndk.fetchEvents(requestFilters, {
timeout
});
console.log(`Fetched ${events.size} events via NDK`);
// Convert NDK events to raw events
const rawEvents = Array.from(events).map(event => event.rawEvent());
return rawEvents;
} catch (error) {
console.error("Failed to fetch events via NDK:", error);
throw error;
}
}
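Callers of fetchEvents typically order the returned raw events newest-first by created_at (the UI sorts by creation timestamp after a refresh). A minimal sketch of that step; `sortByCreatedAtDesc` is a hypothetical helper name, not part of this module:

```javascript
// Hypothetical helper: order raw events newest-first by created_at,
// falling back to id comparison so ties sort deterministically.
function sortByCreatedAtDesc(events) {
  return [...events].sort((a, b) => {
    const diff = (b.created_at || 0) - (a.created_at || 0);
    if (diff !== 0) return diff;
    return (a.id || "").localeCompare(b.id || "");
  });
}
```

Copying the array before sorting keeps the caller's event list untouched, which matters when the same array backs reactive UI state.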
// Fetch all events with timestamp-based pagination using NDK (including delete events)
export async function fetchAllEvents(options = {}) {
const {
limit = 100,
@@ -561,6 +317,8 @@ export async function fetchAllEvents(options = {}) {
if (until) filters.until = until;
if (authors) filters.authors = authors;
// Don't specify kinds filter - this will include all events including delete events (kind 5)
const events = await fetchEvents(filters, {
limit: limit,
timeout: 30000
@@ -569,7 +327,7 @@ export async function fetchAllEvents(options = {}) {
return events;
}
// Fetch user's events with timestamp-based pagination using NDK
export async function fetchUserEvents(pubkey, options = {}) {
const {
limit = 100,
@@ -592,6 +350,102 @@ export async function fetchUserEvents(pubkey, options = {}) {
return events;
}
// NIP-50 search function using NDK
export async function searchEvents(searchQuery, options = {}) {
const {
limit = 100,
since = null,
until = null,
kinds = null
} = options;
const filters = {
search: searchQuery
};
if (since) filters.since = since;
if (until) filters.until = until;
if (kinds) filters.kinds = kinds;
const events = await fetchEvents(filters, {
limit: limit,
timeout: 30000
});
return events;
}
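The filter built above follows NIP-50: a `search` field sitting alongside the usual NIP-01 fields. Sketched as a standalone function for clarity; `buildSearchFilter` is a hypothetical name, not an export of this module:

```javascript
// Hypothetical helper: build the NIP-50 search filter shape used by searchEvents.
function buildSearchFilter(searchQuery, { since = null, until = null, kinds = null } = {}) {
  const filters = { search: searchQuery };
  if (since) filters.since = since;   // unix seconds, inclusive lower bound
  if (until) filters.until = until;   // unix seconds, inclusive upper bound
  if (kinds) filters.kinds = kinds;   // e.g. [1] for text notes
  return filters;
}
```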
// Fetch a specific event by ID
export async function fetchEventById(eventId, options = {}) {
const {
timeout = 10000,
relays = null
} = options;
console.log(`Fetching event by ID: ${eventId}`);
try {
const ndk = nostrClient.getNDK();
const filters = {
ids: [eventId]
};
console.log('Fetching event via NDK with filters:', filters);
// Use NDK's fetchEvents method
const events = await ndk.fetchEvents(filters, {
timeout
});
console.log(`Fetched ${events.size} events via NDK`);
// Convert NDK events to raw events
const rawEvents = Array.from(events).map(event => event.rawEvent());
// Return the first event if found, null otherwise
return rawEvents.length > 0 ? rawEvents[0] : null;
} catch (error) {
console.error("Failed to fetch event by ID via NDK:", error);
throw error;
}
}
// Fetch delete events that target a specific event ID using Nostr
export async function fetchDeleteEventsByTarget(eventId, options = {}) {
const {
timeout = 10000
} = options;
console.log(`Fetching delete events for target: ${eventId}`);
try {
const ndk = nostrClient.getNDK();
const filters = {
kinds: [5], // Kind 5 is deletion
'#e': [eventId] // e-tag referencing the target event
};
console.log('Fetching delete events via NDK with filters:', filters);
// Use NDK's fetchEvents method
const events = await ndk.fetchEvents(filters, {
timeout
});
console.log(`Fetched ${events.size} delete events via NDK`);
// Convert NDK events to raw events
const rawEvents = Array.from(events).map(event => event.rawEvent());
return rawEvents;
} catch (error) {
console.error("Failed to fetch delete events via NDK:", error);
throw error;
}
}
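Given the kind-5 events returned by fetchDeleteEventsByTarget, the delete verification the UI performs reduces to checking each event's e-tags against the target id (per NIP-09). A sketch under that assumption; `isDeletedBy` is a hypothetical helper, not part of this module:

```javascript
// Hypothetical helper: does any kind-5 event carry an "e" tag naming eventId?
function isDeletedBy(eventId, deleteEvents) {
  return deleteEvents.some(
    (ev) =>
      ev.kind === 5 &&
      (ev.tags || []).some(([name, value]) => name === "e" && value === eventId),
  );
}
```

Note this only proves a delete request exists; whether the relay honored it (owner, admin, or rejected) is decided server-side in handle-delete.go.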
// Initialize client connection
export async function initializeNostrClient() {

View File

@@ -370,7 +370,7 @@ func (b *Benchmark) RunPeakThroughputTest() {
for ev := range eventChan {
eventStart := time.Now()
_, _, err := b.db.SaveEvent(ctx, ev)
_, err := b.db.SaveEvent(ctx, ev)
latency := time.Since(eventStart)
mu.Lock()
@@ -460,7 +460,7 @@ func (b *Benchmark) RunBurstPatternTest() {
defer wg.Done()
eventStart := time.Now()
_, _, err := b.db.SaveEvent(ctx, ev)
_, err := b.db.SaveEvent(ctx, ev)
latency := time.Since(eventStart)
mu.Lock()
@@ -554,7 +554,7 @@ func (b *Benchmark) RunMixedReadWriteTest() {
if eventIndex%2 == 0 {
// Write operation
writeStart := time.Now()
_, _, err := b.db.SaveEvent(ctx, events[eventIndex])
_, err := b.db.SaveEvent(ctx, events[eventIndex])
writeLatency := time.Since(writeStart)
mu.Lock()
@@ -878,7 +878,7 @@ func (b *Benchmark) RunConcurrentQueryStoreTest() {
for time.Since(start) < b.config.TestDuration && eventIndex < len(writeEvents) {
// Write operation
writeStart := time.Now()
_, _, err := b.db.SaveEvent(ctx, writeEvents[eventIndex])
_, err := b.db.SaveEvent(ctx, writeEvents[eventIndex])
writeLatency := time.Since(writeStart)
mu.Lock()

BIN
docs/orly-favicon.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 379 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 485 KiB

After

Width:  |  Height:  |  Size: 514 KiB

2
go.mod
View File

@@ -34,8 +34,8 @@ require (
github.com/go-logr/stdr v1.2.2 // indirect
github.com/google/flatbuffers v25.9.23+incompatible // indirect
github.com/google/pprof v0.0.0-20251002213607-436353cc1ee6 // indirect
github.com/gorilla/websocket v1.5.3 // indirect
github.com/klauspost/compress v1.18.0 // indirect
github.com/nostr-dev-kit/ndk v0.0.0-20251010140307-0653d6e69923 // indirect
github.com/pmezard/go-difflib v1.0.0 // indirect
github.com/templexxx/cpu v0.1.1 // indirect
go.opentelemetry.io/auto/sdk v1.2.1 // indirect

4
go.sum
View File

@@ -45,8 +45,6 @@ github.com/google/pprof v0.0.0-20211214055906-6f57359322fd/go.mod h1:KgnwoLYCZ8I
github.com/google/pprof v0.0.0-20240227163752-401108e1b7e7/go.mod h1:czg5+yv1E0ZGTi6S6vVK1mke0fV+FaUhNGcd6VRS9Ik=
github.com/google/pprof v0.0.0-20251002213607-436353cc1ee6 h1:/WHh/1k4thM/w+PAZEIiZK9NwCMFahw5tUzKUCnUtds=
github.com/google/pprof v0.0.0-20251002213607-436353cc1ee6/go.mod h1:I6V7YzU0XDpsHqbsyrghnFZLO1gwK6NPTNvmetQIk9U=
github.com/gorilla/websocket v1.5.3 h1:saDtZ6Pbx/0u+bgYQ3q96pZgCzfhKXGPqt7kZ72aNNg=
github.com/gorilla/websocket v1.5.3/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE=
github.com/ianlancetaylor/demangle v0.0.0-20210905161508-09a460cdf81d/go.mod h1:aYm2/VgdVmcIU8iMfdMvDMsRAQjcfZSKFby6HOFvi/w=
github.com/ianlancetaylor/demangle v0.0.0-20230524184225-eabc099b10ab/go.mod h1:gx7rwoVhcfuVKG5uya9Hs3Sxj7EIvldVofAWIUtGouw=
github.com/josharian/intern v1.0.0/go.mod h1:5DoeVV0s6jJacbCEi61lwdGj/aVlrQvzHFFd8Hwg//Y=
@@ -62,6 +60,8 @@ github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/ledongthuc/pdf v0.0.0-20220302134840-0c2507a12d80/go.mod h1:imJHygn/1yfhB7XSJJKlFZKl/J+dCPAknuiaGOshXAs=
github.com/mailru/easyjson v0.7.7/go.mod h1:xzfreul335JAWq5oZzymOObrkdz5UnU4kGfJJLY9Nlc=
github.com/nostr-dev-kit/ndk v0.0.0-20251010140307-0653d6e69923 h1:N+sorUpSXhIxJeJ4A81SC3UTwo4S+BL3ECB/QSYS5qE=
github.com/nostr-dev-kit/ndk v0.0.0-20251010140307-0653d6e69923/go.mod h1:g76mM+6X3X2E9gM9VP+1I9arcSIhCLwknT1HAXJA+Z8=
github.com/orisano/pixelmatch v0.0.0-20220722002657-fb0b55479cde/go.mod h1:nZgzbfBr3hhjoZnS66nKrHmduYNpc34ny7RK4z5/HM0=
github.com/pkg/profile v1.7.0 h1:hnbDkaNWPCLMO9wGLdBFTIZvzDrDfBM2072E1S9gJkA=
github.com/pkg/profile v1.7.0/go.mod h1:8Uer0jas47ZQMJ7VD+OHknK4YDY07LPUC6dEvqDjvNo=

1
package.json Normal file
View File

@@ -0,0 +1 @@
{"dependencies": {}}

View File

@@ -422,7 +422,7 @@ func (f *Follows) startSubscriptions(ctx context.Context) {
)
}
if _, _, err = f.D.SaveEvent(
if _, err = f.D.SaveEvent(
ctx, res.Event,
); err != nil {
if !strings.HasPrefix(

View File

@@ -45,32 +45,50 @@ func (d *D) DeleteEvent(c context.Context, eid []byte) (err error) {
func (d *D) DeleteEventBySerial(
c context.Context, ser *types.Uint40, ev *event.E,
) (err error) {
d.Logger.Infof("DeleteEventBySerial: deleting event %0x (serial %d)", ev.ID, ser.Get())
// Get all indexes for the event
var idxs [][]byte
idxs, err = GetIndexesForEvent(ev, ser.Get())
if chk.E(err) {
d.Logger.Errorf("DeleteEventBySerial: failed to get indexes for event %0x: %v", ev.ID, err)
return
}
d.Logger.Infof("DeleteEventBySerial: found %d indexes for event %0x", len(idxs), ev.ID)
// Get the event key
eventKey := new(bytes.Buffer)
if err = indexes.EventEnc(ser).MarshalWrite(eventKey); chk.E(err) {
d.Logger.Errorf("DeleteEventBySerial: failed to create event key for %0x: %v", ev.ID, err)
return
}
// Delete the event and all its indexes in a transaction
err = d.Update(
func(txn *badger.Txn) (err error) {
// Delete the event
if err = txn.Delete(eventKey.Bytes()); chk.E(err) {
d.Logger.Errorf("DeleteEventBySerial: failed to delete event %0x: %v", ev.ID, err)
return
}
d.Logger.Infof("DeleteEventBySerial: deleted event %0x", ev.ID)
// Delete all indexes
for _, key := range idxs {
for i, key := range idxs {
if err = txn.Delete(key); chk.E(err) {
d.Logger.Errorf("DeleteEventBySerial: failed to delete index %d for event %0x: %v", i, ev.ID, err)
return
}
}
d.Logger.Infof("DeleteEventBySerial: deleted %d indexes for event %0x", len(idxs), ev.ID)
return
},
)
if chk.E(err) {
d.Logger.Errorf("DeleteEventBySerial: transaction failed for event %0x: %v", ev.ID, err)
return
}
d.Logger.Infof("DeleteEventBySerial: successfully deleted event %0x and all indexes", ev.ID)
return
}

View File

@@ -108,5 +108,4 @@ func (d *D) Export(c context.Context, w io.Writer, pubkeys ...[]byte) {
}
}
}
return
}

View File

@@ -55,7 +55,7 @@ func TestExport(t *testing.T) {
}
// Save the event to the database
if _, _, err = db.SaveEvent(ctx, ev); err != nil {
if _, err = db.SaveEvent(ctx, ev); err != nil {
t.Fatalf("Failed to save event: %v", err)
}

View File

@@ -58,7 +58,7 @@ func TestFetchEventBySerial(t *testing.T) {
events = append(events, ev)
// Save the event to the database
if _, _, err = db.SaveEvent(ctx, ev); err != nil {
if _, err = db.SaveEvent(ctx, ev); err != nil {
t.Fatalf("Failed to save event #%d: %v", eventCount+1, err)
}

View File

@@ -52,11 +52,11 @@ func TestGetSerialById(t *testing.T) {
t.Fatal(err)
}
ev.Free()
events = append(events, ev)
// Save the event to the database
if _, _, err = db.SaveEvent(ctx, ev); err != nil {
if _, err = db.SaveEvent(ctx, ev); err != nil {
t.Fatalf("Failed to save event #%d: %v", eventCount+1, err)
}

View File

@@ -63,7 +63,7 @@ func TestGetSerialsByRange(t *testing.T) {
events = append(events, ev)
// Save the event to the database
if _, _, err = db.SaveEvent(ctx, ev); err != nil {
if _, err = db.SaveEvent(ctx, ev); err != nil {
t.Fatalf("Failed to save event #%d: %v", eventCount+1, err)
}

View File

@@ -59,7 +59,7 @@ func (d *D) Import(rr io.Reader) {
continue
}
if _, _, err = d.SaveEvent(d.ctx, ev); err != nil {
if _, err = d.SaveEvent(d.ctx, ev); err != nil {
// return the pooled buffer on error paths too
ev.Free()
continue
@@ -83,6 +83,4 @@ func (d *D) Import(rr io.Reader) {
// Help garbage collection
tmp = nil
}()
return
}

View File

@@ -43,7 +43,7 @@ func TestMultipleParameterizedReplaceableEvents(t *testing.T) {
baseEvent.Sign(sign)
// Save the base parameterized replaceable event
if _, _, err := db.SaveEvent(ctx, baseEvent); err != nil {
if _, err := db.SaveEvent(ctx, baseEvent); err != nil {
t.Fatalf("Failed to save base parameterized replaceable event: %v", err)
}
@@ -61,7 +61,7 @@ func TestMultipleParameterizedReplaceableEvents(t *testing.T) {
newerEvent.Sign(sign)
// Save the newer parameterized replaceable event
if _, _, err := db.SaveEvent(ctx, newerEvent); err != nil {
if _, err := db.SaveEvent(ctx, newerEvent); err != nil {
t.Fatalf(
"Failed to save newer parameterized replaceable event: %v", err,
)
@@ -81,7 +81,7 @@ func TestMultipleParameterizedReplaceableEvents(t *testing.T) {
newestEvent.Sign(sign)
// Save the newest parameterized replaceable event
if _, _, err := db.SaveEvent(ctx, newestEvent); err != nil {
if _, err := db.SaveEvent(ctx, newestEvent); err != nil {
t.Fatalf(
"Failed to save newest parameterized replaceable event: %v", err,
)

View File

@@ -1,194 +1,196 @@
package database
import (
"context"
"os"
"testing"
"time"
"context"
"os"
"testing"
"time"
"lol.mleku.dev/chk"
"next.orly.dev/pkg/crypto/p256k"
"next.orly.dev/pkg/encoders/event"
"next.orly.dev/pkg/encoders/filter"
"next.orly.dev/pkg/encoders/kind"
"next.orly.dev/pkg/encoders/tag"
"next.orly.dev/pkg/encoders/timestamp"
"lol.mleku.dev/chk"
"next.orly.dev/pkg/crypto/p256k"
"next.orly.dev/pkg/encoders/event"
"next.orly.dev/pkg/encoders/filter"
"next.orly.dev/pkg/encoders/kind"
"next.orly.dev/pkg/encoders/tag"
"next.orly.dev/pkg/encoders/timestamp"
)
// helper to create a fresh DB
func newTestDB(t *testing.T) (*D, context.Context, context.CancelFunc, string) {
t.Helper()
tempDir, err := os.MkdirTemp("", "search-db-*")
if err != nil {
t.Fatalf("Failed to create temp dir: %v", err)
}
ctx, cancel := context.WithCancel(context.Background())
db, err := New(ctx, cancel, tempDir, "error")
if err != nil {
cancel()
os.RemoveAll(tempDir)
t.Fatalf("Failed to init DB: %v", err)
}
return db, ctx, cancel, tempDir
t.Helper()
tempDir, err := os.MkdirTemp("", "search-db-*")
if err != nil {
t.Fatalf("Failed to create temp dir: %v", err)
}
ctx, cancel := context.WithCancel(context.Background())
db, err := New(ctx, cancel, tempDir, "error")
if err != nil {
cancel()
os.RemoveAll(tempDir)
t.Fatalf("Failed to init DB: %v", err)
}
return db, ctx, cancel, tempDir
}
// TestQueryEventsBySearchTerms creates a small set of events with content and tags,
// saves them, then queries using filter.Search to ensure the word index works.
func TestQueryEventsBySearchTerms(t *testing.T) {
db, ctx, cancel, tempDir := newTestDB(t)
defer func() {
// cancel context first to stop background routines cleanly
cancel()
db.Close()
os.RemoveAll(tempDir)
}()
db, ctx, cancel, tempDir := newTestDB(t)
defer func() {
// cancel context first to stop background routines cleanly
cancel()
db.Close()
os.RemoveAll(tempDir)
}()
// signer for all events
sign := new(p256k.Signer)
if err := sign.Generate(); chk.E(err) {
t.Fatalf("signer generate: %v", err)
}
// signer for all events
sign := new(p256k.Signer)
if err := sign.Generate(); chk.E(err) {
t.Fatalf("signer generate: %v", err)
}
now := timestamp.Now().V
now := timestamp.Now().V
// Events to cover tokenizer rules:
// - regular words
// - URLs ignored
// - 64-char hex ignored
// - nostr: URIs ignored
// - #[n] mentions ignored
// - tag fields included in search
// Events to cover tokenizer rules:
// - regular words
// - URLs ignored
// - 64-char hex ignored
// - nostr: URIs ignored
// - #[n] mentions ignored
// - tag fields included in search
// 1. Contains words: "alpha beta", plus URL and hex (ignored)
ev1 := event.New()
ev1.Kind = kind.TextNote.K
ev1.Pubkey = sign.Pub()
ev1.CreatedAt = now - 5
ev1.Content = []byte("Alpha beta visit https://example.com deadbeefdeadbeefdeadbeefdeadbeefdeadbeefdeadbeefdeadbeefdeadbeef")
ev1.Tags = tag.NewS()
ev1.Sign(sign)
if _, _, err := db.SaveEvent(ctx, ev1); err != nil {
t.Fatalf("save ev1: %v", err)
}
// 1. Contains words: "alpha beta", plus URL and hex (ignored)
ev1 := event.New()
ev1.Kind = kind.TextNote.K
ev1.Pubkey = sign.Pub()
ev1.CreatedAt = now - 5
ev1.Content = []byte("Alpha beta visit https://example.com deadbeefdeadbeefdeadbeefdeadbeefdeadbeefdeadbeefdeadbeefdeadbeef")
ev1.Tags = tag.NewS()
ev1.Sign(sign)
if _, err := db.SaveEvent(ctx, ev1); err != nil {
t.Fatalf("save ev1: %v", err)
}
// 2. Contains overlap word "beta" and unique "gamma" and nostr: URI ignored
ev2 := event.New()
ev2.Kind = kind.TextNote.K
ev2.Pubkey = sign.Pub()
ev2.CreatedAt = now - 4
ev2.Content = []byte("beta and GAMMA with nostr:nevent1qqqqq")
ev2.Tags = tag.NewS()
ev2.Sign(sign)
if _, _, err := db.SaveEvent(ctx, ev2); err != nil {
t.Fatalf("save ev2: %v", err)
}
// 2. Contains overlap word "beta" and unique "gamma" and nostr: URI ignored
ev2 := event.New()
ev2.Kind = kind.TextNote.K
ev2.Pubkey = sign.Pub()
ev2.CreatedAt = now - 4
ev2.Content = []byte("beta and GAMMA with nostr:nevent1qqqqq")
ev2.Tags = tag.NewS()
ev2.Sign(sign)
if _, err := db.SaveEvent(ctx, ev2); err != nil {
t.Fatalf("save ev2: %v", err)
}
// 3. Contains only a URL (should not create word tokens) and mention #[1] (ignored)
ev3 := event.New()
ev3.Kind = kind.TextNote.K
ev3.Pubkey = sign.Pub()
ev3.CreatedAt = now - 3
ev3.Content = []byte("see www.example.org #[1]")
ev3.Tags = tag.NewS()
ev3.Sign(sign)
if _, _, err := db.SaveEvent(ctx, ev3); err != nil {
t.Fatalf("save ev3: %v", err)
}
// 3. Contains only a URL (should not create word tokens) and mention #[1] (ignored)
ev3 := event.New()
ev3.Kind = kind.TextNote.K
ev3.Pubkey = sign.Pub()
ev3.CreatedAt = now - 3
ev3.Content = []byte("see www.example.org #[1]")
ev3.Tags = tag.NewS()
ev3.Sign(sign)
if _, err := db.SaveEvent(ctx, ev3); err != nil {
t.Fatalf("save ev3: %v", err)
}
// 4. No content words, but tag value has searchable words: "delta epsilon"
ev4 := event.New()
ev4.Kind = kind.TextNote.K
ev4.Pubkey = sign.Pub()
ev4.CreatedAt = now - 2
ev4.Content = []byte("")
ev4.Tags = tag.NewS()
*ev4.Tags = append(*ev4.Tags, tag.NewFromAny("t", "delta epsilon"))
ev4.Sign(sign)
if _, _, err := db.SaveEvent(ctx, ev4); err != nil {
t.Fatalf("save ev4: %v", err)
}
// 4. No content words, but tag value has searchable words: "delta epsilon"
ev4 := event.New()
ev4.Kind = kind.TextNote.K
ev4.Pubkey = sign.Pub()
ev4.CreatedAt = now - 2
ev4.Content = []byte("")
ev4.Tags = tag.NewS()
*ev4.Tags = append(*ev4.Tags, tag.NewFromAny("t", "delta epsilon"))
ev4.Sign(sign)
if _, err := db.SaveEvent(ctx, ev4); err != nil {
t.Fatalf("save ev4: %v", err)
}
// 5. Another event with both content and tag tokens for ordering checks
ev5 := event.New()
ev5.Kind = kind.TextNote.K
ev5.Pubkey = sign.Pub()
ev5.CreatedAt = now - 1
ev5.Content = []byte("alpha DELTA mixed-case and link http://foo.bar")
ev5.Tags = tag.NewS()
*ev5.Tags = append(*ev5.Tags, tag.NewFromAny("t", "zeta"))
ev5.Sign(sign)
if _, _, err := db.SaveEvent(ctx, ev5); err != nil {
t.Fatalf("save ev5: %v", err)
}
// 5. Another event with both content and tag tokens for ordering checks
ev5 := event.New()
ev5.Kind = kind.TextNote.K
ev5.Pubkey = sign.Pub()
ev5.CreatedAt = now - 1
ev5.Content = []byte("alpha DELTA mixed-case and link http://foo.bar")
ev5.Tags = tag.NewS()
*ev5.Tags = append(*ev5.Tags, tag.NewFromAny("t", "zeta"))
ev5.Sign(sign)
if _, err := db.SaveEvent(ctx, ev5); err != nil {
t.Fatalf("save ev5: %v", err)
}
// Small sleep to ensure created_at ordering is the only factor
time.Sleep(5 * time.Millisecond)
// Small sleep to ensure created_at ordering is the only factor
time.Sleep(5 * time.Millisecond)
// Helper to run a search and return IDs
run := func(q string) ([]*event.E, error) {
f := &filter.F{Search: []byte(q)}
return db.QueryEvents(ctx, f)
}
// Helper to run a search and return IDs
run := func(q string) ([]*event.E, error) {
f := &filter.F{Search: []byte(q)}
return db.QueryEvents(ctx, f)
}
// Single-term search: alpha -> should match ev1 and ev5 ordered by created_at desc (ev5 newer)
if evs, err := run("alpha"); err != nil {
t.Fatalf("search alpha: %v", err)
} else {
if len(evs) != 2 {
t.Fatalf("alpha expected 2 results, got %d", len(evs))
}
if !(evs[0].CreatedAt >= evs[1].CreatedAt) {
t.Fatalf("results not ordered by created_at desc")
}
}
// Single-term search: alpha -> should match ev1 and ev5 ordered by created_at desc (ev5 newer)
if evs, err := run("alpha"); err != nil {
t.Fatalf("search alpha: %v", err)
} else {
if len(evs) != 2 {
t.Fatalf("alpha expected 2 results, got %d", len(evs))
}
if !(evs[0].CreatedAt >= evs[1].CreatedAt) {
t.Fatalf("results not ordered by created_at desc")
}
}
// Overlap term beta -> ev1 and ev2
if evs, err := run("beta"); err != nil {
t.Fatalf("search beta: %v", err)
} else if len(evs) != 2 {
t.Fatalf("beta expected 2 results, got %d", len(evs))
}
// Unique term gamma -> only ev2
if evs, err := run("gamma"); err != nil {
t.Fatalf("search gamma: %v", err)
} else if len(evs) != 1 {
t.Fatalf("gamma expected 1 result, got %d", len(evs))
}
// URL terms should be ignored: example -> appears only as URL in ev1/ev3/ev5; tokenizer ignores URLs so expect 0
if evs, err := run("example"); err != nil {
t.Fatalf("search example: %v", err)
} else if len(evs) != 0 {
t.Fatalf(
"example expected 0 results (URL tokens ignored), got %d", len(evs),
)
}
// Tag words searchable: delta should match ev4 and ev5 (delta in tag for ev4, in content for ev5)
if evs, err := run("delta"); err != nil {
t.Fatalf("search delta: %v", err)
} else if len(evs) != 2 {
t.Fatalf("delta expected 2 results, got %d", len(evs))
}
// Very short token ignored: single-letter should yield 0
if evs, err := run("a"); err != nil {
t.Fatalf("search short token: %v", err)
} else if len(evs) != 0 {
t.Fatalf("single-letter expected 0 results, got %d", len(evs))
}
// 64-char hex should be ignored
hex64 := "0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef"
if evs, err := run(hex64); err != nil {
t.Fatalf("search hex64: %v", err)
} else if len(evs) != 0 {
t.Fatalf("hex64 expected 0 results, got %d", len(evs))
}
// nostr: scheme ignored
if evs, err := run("nostr:nevent1qqqqq"); err != nil {
t.Fatalf("search nostr: %v", err)
} else if len(evs) != 0 {
t.Fatalf("nostr: expected 0 results, got %d", len(evs))
}
}

View File

@@ -37,6 +37,12 @@ func CheckExpiration(ev *event.E) (expired bool) {
func (d *D) QueryEvents(c context.Context, f *filter.F) (
evs event.S, err error,
) {
return d.QueryEventsWithOptions(c, f, true)
}
func (d *D) QueryEventsWithOptions(c context.Context, f *filter.F, includeDeleteEvents bool) (
evs event.S, err error,
) {
// if there is Ids in the query, this overrides anything else
var expDeletes types.Uint40s
@@ -195,16 +201,15 @@ func (d *D) QueryEvents(c context.Context, f *filter.F) (
// We don't need to do anything with direct event ID
// references as we will filter those out in the second pass
}
// Check for 'a' tags that reference parameterized replaceable
// events
// Check for 'a' tags that reference replaceable events
aTags := ev.Tags.GetAll([]byte("a"))
for _, aTag := range aTags {
if aTag.Len() < 2 {
continue
}
// Parse the 'a' tag value: kind:pubkey:d-tag
// Parse the 'a' tag value: kind:pubkey:d-tag (for parameterized) or kind:pubkey (for regular)
split := bytes.Split(aTag.Value(), []byte{':'})
if len(split) != 3 {
if len(split) < 2 {
continue
}
// Parse the kind
@@ -214,8 +219,8 @@ func (d *D) QueryEvents(c context.Context, f *filter.F) (
continue
}
kk := kind.New(uint16(kindInt))
// Only process parameterized replaceable events
if !kind.IsParameterizedReplaceable(kk.K) {
// Process both regular and parameterized replaceable events
if !kind.IsReplaceable(kk.K) {
continue
}
// Parse the pubkey
@@ -230,21 +235,30 @@ func (d *D) QueryEvents(c context.Context, f *filter.F) (
// Create the key for the deletion map using hex
// representation of pubkey
key := hex.Enc(pk) + ":" + strconv.Itoa(int(kk.K))
// Initialize the inner map if it doesn't exist
if _, exists := deletionsByKindPubkeyDTag[key]; !exists {
deletionsByKindPubkeyDTag[key] = make(map[string]int64)
if kind.IsParameterizedReplaceable(kk.K) {
// For parameterized replaceable events, use d-tag specific deletion
if len(split) < 3 {
continue
}
// Initialize the inner map if it doesn't exist
if _, exists := deletionsByKindPubkeyDTag[key]; !exists {
deletionsByKindPubkeyDTag[key] = make(map[string]int64)
}
// Record the newest delete timestamp for this d-tag
dValue := string(split[2])
if ts, ok := deletionsByKindPubkeyDTag[key][dValue]; !ok || ev.CreatedAt > ts {
deletionsByKindPubkeyDTag[key][dValue] = ev.CreatedAt
}
} else {
// For regular replaceable events, mark as deleted by kind/pubkey
deletionsByKindPubkey[key] = true
}
// Record the newest delete timestamp for this d-tag
dValue := string(split[2])
if ts, ok := deletionsByKindPubkeyDTag[key][dValue]; !ok || ev.CreatedAt > ts {
deletionsByKindPubkeyDTag[key][dValue] = ev.CreatedAt
}
// Debug logging
}
// For replaceable events, we need to check if there are any
// e-tags that reference events with the same kind and pubkey
for _, eTag := range eTags {
if eTag.Len() != 64 {
if len(eTag.Value()) != 64 {
continue
}
// Get the event ID from the e-tag
@@ -252,15 +266,30 @@ func (d *D) QueryEvents(c context.Context, f *filter.F) (
if _, err = hex.DecBytes(evId, eTag.Value()); err != nil {
continue
}
// Query for the event
var targetEvs event.S
targetEvs, err = d.QueryEvents(
c, &filter.F{Ids: tag.NewFromBytesSlice(evId)},
)
if err != nil || len(targetEvs) == 0 {
continue
// Look for the target event in our current batch instead of querying
var targetEv *event.E
for _, candidateEv := range allEvents {
if utils.FastEqual(candidateEv.ID, evId) {
targetEv = candidateEv
break
}
}
targetEv := targetEvs[0]
// If not found in current batch, try to fetch it directly
if targetEv == nil {
// Get serial for the event ID
ser, serErr := d.GetSerialById(evId)
if serErr != nil || ser == nil {
continue
}
// Fetch the event by serial
targetEv, serErr = d.FetchEventBySerial(ser)
if serErr != nil || targetEv == nil {
continue
}
}
// Only allow users to delete their own events
if !utils.FastEqual(targetEv.Pubkey, ev.Pubkey) {
continue
@@ -378,8 +407,8 @@ func (d *D) QueryEvents(c context.Context, f *filter.F) (
// )
}
// Skip events with kind 5 (Deletion)
if ev.Kind == kind.Deletion.K {
// Skip events with kind 5 (Deletion) unless explicitly requested
if ev.Kind == kind.Deletion.K && !includeDeleteEvents {
continue
}
// Check if this event's ID is in the filter
@@ -408,16 +437,8 @@ func (d *D) QueryEvents(c context.Context, f *filter.F) (
// kind/pubkey and is not in the filter AND there isn't a newer
// event with the same kind/pubkey
if deletionsByKindPubkey[key] && !isIdInFilter {
// Check if there's a newer event with the same kind/pubkey
// that hasn't been specifically deleted
existing, exists := replaceableEvents[key]
if !exists || ev.CreatedAt > existing.CreatedAt {
// This is the newest event so far, keep it
replaceableEvents[key] = ev
} else {
// There's a newer event, skip this one
continue
}
// This replaceable event has been deleted, skip it
continue
} else {
// Normal replaceable event handling
existing, exists := replaceableEvents[key]
@@ -501,3 +522,23 @@ func (d *D) QueryEvents(c context.Context, f *filter.F) (
}
return
}
// QueryDeleteEventsByTargetId queries for delete events that target a specific event ID
func (d *D) QueryDeleteEventsByTargetId(c context.Context, targetEventId []byte) (
evs event.S, err error,
) {
// Create a filter for deletion events with the target event ID in e-tags
f := &filter.F{
Kinds: kind.NewS(kind.Deletion),
Tags: tag.NewS(
tag.NewFromAny("#e", hex.Enc(targetEventId)),
),
}
// Query for the delete events
if evs, err = d.QueryEventsWithOptions(c, f, true); chk.E(err) {
return
}
return
}

View File

@@ -64,7 +64,7 @@ func setupTestDB(t *testing.T) (
events = append(events, ev)
// Save the event to the database
if _, _, err = db.SaveEvent(ctx, ev); err != nil {
if _, err = db.SaveEvent(ctx, ev); err != nil {
t.Fatalf("Failed to save event #%d: %v", eventCount+1, err)
}
@@ -204,7 +204,7 @@ func TestReplaceableEventsAndDeletion(t *testing.T) {
replaceableEvent.Tags = tag.NewS()
replaceableEvent.Sign(sign)
// Save the replaceable event
if _, _, err := db.SaveEvent(ctx, replaceableEvent); err != nil {
if _, err := db.SaveEvent(ctx, replaceableEvent); err != nil {
t.Errorf("Failed to save replaceable event: %v", err)
}
@@ -217,7 +217,7 @@ func TestReplaceableEventsAndDeletion(t *testing.T) {
newerEvent.Tags = tag.NewS()
newerEvent.Sign(sign)
// Save the newer event
if _, _, err := db.SaveEvent(ctx, newerEvent); err != nil {
if _, err := db.SaveEvent(ctx, newerEvent); err != nil {
t.Errorf("Failed to save newer event: %v", err)
}
@@ -286,7 +286,7 @@ func TestReplaceableEventsAndDeletion(t *testing.T) {
)
// Save the deletion event
if _, _, err = db.SaveEvent(ctx, deletionEvent); err != nil {
if _, err = db.SaveEvent(ctx, deletionEvent); err != nil {
t.Fatalf("Failed to save deletion event: %v", err)
}
@@ -371,7 +371,7 @@ func TestParameterizedReplaceableEventsAndDeletion(t *testing.T) {
paramEvent.Sign(sign)
// Save the parameterized replaceable event
if _, _, err := db.SaveEvent(ctx, paramEvent); err != nil {
if _, err := db.SaveEvent(ctx, paramEvent); err != nil {
t.Fatalf("Failed to save parameterized replaceable event: %v", err)
}
@@ -397,7 +397,7 @@ func TestParameterizedReplaceableEventsAndDeletion(t *testing.T) {
paramDeletionEvent.Sign(sign)
// Save the parameterized deletion event
if _, _, err := db.SaveEvent(ctx, paramDeletionEvent); err != nil {
if _, err := db.SaveEvent(ctx, paramDeletionEvent); err != nil {
t.Fatalf("Failed to save parameterized deletion event: %v", err)
}
@@ -430,7 +430,7 @@ func TestParameterizedReplaceableEventsAndDeletion(t *testing.T) {
paramDeletionEvent2.Sign(sign)
// Save the parameterized deletion event with e-tag
if _, _, err := db.SaveEvent(ctx, paramDeletionEvent2); err != nil {
if _, err := db.SaveEvent(ctx, paramDeletionEvent2); err != nil {
t.Fatalf(
"Failed to save parameterized deletion event with e-tag: %v", err,
)

View File

@@ -58,7 +58,7 @@ func TestQueryForAuthorsTags(t *testing.T) {
events = append(events, ev)
// Save the event to the database
if _, _, err = db.SaveEvent(ctx, ev); err != nil {
if _, err = db.SaveEvent(ctx, ev); err != nil {
t.Fatalf("Failed to save event #%d: %v", eventCount+1, err)
}

View File

@@ -58,7 +58,7 @@ func TestQueryForCreatedAt(t *testing.T) {
events = append(events, ev)
// Save the event to the database
if _, _, err = db.SaveEvent(ctx, ev); err != nil {
if _, err = db.SaveEvent(ctx, ev); err != nil {
t.Fatalf("Failed to save event #%d: %v", eventCount+1, err)
}

View File

@@ -60,7 +60,7 @@ func TestQueryForIds(t *testing.T) {
events = append(events, ev)
// Save the event to the database
if _, _, err = db.SaveEvent(ctx, ev); err != nil {
if _, err = db.SaveEvent(ctx, ev); err != nil {
t.Fatalf("Failed to save event #%d: %v", eventCount+1, err)
}

View File

@@ -59,7 +59,7 @@ func TestQueryForKindsAuthorsTags(t *testing.T) {
events = append(events, ev)
// Save the event to the database
if _, _, err = db.SaveEvent(ctx, ev); err != nil {
if _, err = db.SaveEvent(ctx, ev); err != nil {
t.Fatalf("Failed to save event #%d: %v", eventCount+1, err)
}

View File

@@ -59,7 +59,7 @@ func TestQueryForKindsAuthors(t *testing.T) {
events = append(events, ev)
// Save the event to the database
if _, _, err = db.SaveEvent(ctx, ev); err != nil {
if _, err = db.SaveEvent(ctx, ev); err != nil {
t.Fatalf("Failed to save event #%d: %v", eventCount+1, err)
}

View File

@@ -59,7 +59,7 @@ func TestQueryForKindsTags(t *testing.T) {
events = append(events, ev)
// Save the event to the database
if _, _, err = db.SaveEvent(ctx, ev); err != nil {
if _, err = db.SaveEvent(ctx, ev); err != nil {
t.Fatalf("Failed to save event #%d: %v", eventCount+1, err)
}

View File

@@ -58,7 +58,7 @@ func TestQueryForKinds(t *testing.T) {
events = append(events, ev)
// Save the event to the database
if _, _, err = db.SaveEvent(ctx, ev); err != nil {
if _, err = db.SaveEvent(ctx, ev); err != nil {
t.Fatalf("Failed to save event #%d: %v", eventCount+1, err)
}

View File

@@ -61,7 +61,7 @@ func TestQueryForSerials(t *testing.T) {
events = append(events, ev)
// Save the event to the database
if _, _, err = db.SaveEvent(ctx, ev); err != nil {
if _, err = db.SaveEvent(ctx, ev); err != nil {
t.Fatalf("Failed to save event #%d: %v", eventCount+1, err)
}

View File

@@ -58,7 +58,7 @@ func TestQueryForTags(t *testing.T) {
events = append(events, ev)
// Save the event to the database
if _, _, err = db.SaveEvent(ctx, ev); err != nil {
if _, err = db.SaveEvent(ctx, ev); err != nil {
t.Fatalf("Failed to save event #%d: %v", eventCount+1, err)
}

View File

@@ -9,7 +9,6 @@ import (
"github.com/dgraph-io/badger/v4"
"lol.mleku.dev/chk"
"lol.mleku.dev/log"
"next.orly.dev/pkg/database/indexes"
"next.orly.dev/pkg/database/indexes/types"
"next.orly.dev/pkg/encoders/event"
@@ -103,7 +102,9 @@ func (d *D) WouldReplaceEvent(ev *event.E) (bool, types.Uint40s, error) {
}
// SaveEvent saves an event to the database, generating all the necessary indexes.
func (d *D) SaveEvent(c context.Context, ev *event.E) (kc, vc int, err error) {
func (d *D) SaveEvent(c context.Context, ev *event.E) (
replaced bool, err error,
) {
if ev == nil {
err = errors.New("nil event")
return
@@ -111,7 +112,7 @@ func (d *D) SaveEvent(c context.Context, ev *event.E) (kc, vc int, err error) {
// check if the event already exists
var ser *types.Uint40
if ser, err = d.GetSerialById(ev.ID); err == nil && ser != nil {
err = errors.New("blocked: event already exists")
err = errors.New("blocked: event already exists: " + hex.Enc(ev.ID[:]))
return
}
@@ -136,10 +137,9 @@ func (d *D) SaveEvent(c context.Context, ev *event.E) (kc, vc int, err error) {
}
// check for replacement (separated check vs deletion)
if kind.IsReplaceable(ev.Kind) || kind.IsParameterizedReplaceable(ev.Kind) {
var wouldReplace bool
var sers types.Uint40s
var werr error
if wouldReplace, sers, werr = d.WouldReplaceEvent(ev); werr != nil {
if replaced, sers, werr = d.WouldReplaceEvent(ev); werr != nil {
if errors.Is(werr, ErrOlderThanExisting) {
if kind.IsReplaceable(ev.Kind) {
err = errors.New("blocked: event is older than existing replaceable event")
@@ -156,7 +156,7 @@ func (d *D) SaveEvent(c context.Context, ev *event.E) (kc, vc int, err error) {
// any other error
return
}
if wouldReplace {
if replaced {
for _, s := range sers {
var oldEv *event.E
if oldEv, err = d.FetchEventBySerial(s); chk.E(err) {
@@ -178,10 +178,6 @@ func (d *D) SaveEvent(c context.Context, ev *event.E) (kc, vc int, err error) {
if idxs, err = GetIndexesForEvent(ev, serial); chk.E(err) {
return
}
// log.I.S(idxs)
for _, k := range idxs {
kc += len(k)
}
// Start a transaction to save the event and all its indexes
err = d.Update(
func(txn *badger.Txn) (err error) {
@@ -209,23 +205,11 @@ func (d *D) SaveEvent(c context.Context, ev *event.E) (kc, vc int, err error) {
v := new(bytes.Buffer)
ev.MarshalBinary(v)
kb, vb := k.Bytes(), v.Bytes()
kc += len(kb)
vc += len(vb)
// log.I.S(kb, vb)
if err = txn.Set(kb, vb); chk.E(err) {
return
}
return
},
)
log.T.F(
"total data written: %d bytes keys %d bytes values for event ID %s", kc,
vc, hex.Enc(ev.ID),
)
// log.T.C(
// func() string {
// return fmt.Sprintf("event:\n%s\n", ev.Serialize())
// },
// )
return
}

View File

@@ -65,7 +65,7 @@ func TestSaveEvents(t *testing.T) {
// Save the event to the database
var k, v int
if k, v, err = db.SaveEvent(ctx, ev); err != nil {
if _, err = db.SaveEvent(ctx, ev); err != nil {
t.Fatalf("Failed to save event #%d: %v", eventCount+1, err)
}
kc += k
@@ -125,7 +125,7 @@ func TestDeletionEventWithETagRejection(t *testing.T) {
regularEvent.Sign(sign)
// Save the regular event
if _, _, err := db.SaveEvent(ctx, regularEvent); err != nil {
if _, err := db.SaveEvent(ctx, regularEvent); err != nil {
t.Fatalf("Failed to save regular event: %v", err)
}
@@ -151,7 +151,7 @@ func TestDeletionEventWithETagRejection(t *testing.T) {
err = errorf.E("deletion events referencing other events with 'e' tag are not allowed")
} else {
// Try to save the deletion event
_, _, err = db.SaveEvent(ctx, deletionEvent)
_, err = db.SaveEvent(ctx, deletionEvent)
}
if err == nil {
@@ -204,18 +204,18 @@ func TestSaveExistingEvent(t *testing.T) {
ev.Sign(sign)
// Save the event for the first time
if _, _, err := db.SaveEvent(ctx, ev); err != nil {
if _, err := db.SaveEvent(ctx, ev); err != nil {
t.Fatalf("Failed to save event: %v", err)
}
// Try to save the same event again, it should be rejected
_, _, err = db.SaveEvent(ctx, ev)
_, err = db.SaveEvent(ctx, ev)
if err == nil {
t.Fatal("Expected error when saving an existing event, but got nil")
}
// Verify the error message
expectedErrorPrefix := "event already exists: "
expectedErrorPrefix := "blocked: event already exists: "
if !bytes.HasPrefix([]byte(err.Error()), []byte(expectedErrorPrefix)) {
t.Fatalf(
"Expected error message to start with '%s', got '%s'",

View File

@@ -84,7 +84,7 @@ type Saver interface {
// SaveEvent is called once relay.AcceptEvent reports true. The owners
// parameter is for designating admins whose delete by e tag events apply
// the same as author's own.
SaveEvent(c context.Context, ev *event.E) (kc, vc int, err error)
SaveEvent(c context.Context, ev *event.E) (replaced bool, err error)
}
type Importer interface {

View File

@@ -293,7 +293,10 @@ func (s *Spider) calculateOptimalChunkSize() int {
chunkSize = 10
}
log.D.F("Spider: calculated optimal chunk size: %d pubkeys (max would be %d)", chunkSize, maxPubkeys)
log.D.F(
"Spider: calculated optimal chunk size: %d pubkeys (max would be %d)",
chunkSize, maxPubkeys,
)
return chunkSize
}
@@ -301,7 +304,10 @@ func (s *Spider) calculateOptimalChunkSize() int {
func (s *Spider) queryRelayForEvents(
relayURL string, followedPubkeys [][]byte, startTime, endTime time.Time,
) (int, error) {
log.T.F("Spider sync: querying relay %s with %d pubkeys", relayURL, len(followedPubkeys))
log.T.F(
"Spider sync: querying relay %s with %d pubkeys", relayURL,
len(followedPubkeys),
)
// Connect to the relay with a timeout context
ctx, cancel := context.WithTimeout(s.ctx, 30*time.Second)
@@ -324,8 +330,10 @@ func (s *Spider) queryRelayForEvents(
}
chunk := followedPubkeys[i:end]
log.T.F("Spider sync: processing chunk %d-%d (%d pubkeys) for relay %s",
i, end-1, len(chunk), relayURL)
log.T.F(
"Spider sync: processing chunk %d-%d (%d pubkeys) for relay %s",
i, end-1, len(chunk), relayURL,
)
// Create filter for this chunk of pubkeys
f := &filter.F{
@@ -338,8 +346,10 @@ func (s *Spider) queryRelayForEvents(
// Subscribe to get events for this chunk
sub, err := client.Subscribe(ctx, filter.NewS(f))
if err != nil {
log.E.F("Spider sync: failed to subscribe to chunk %d-%d for relay %s: %v",
i, end-1, relayURL, err)
log.E.F(
"Spider sync: failed to subscribe to chunk %d-%d for relay %s: %v",
i, end-1, relayURL, err,
)
continue
}
@@ -385,7 +395,7 @@ func (s *Spider) queryRelayForEvents(
}
// Save the event to the database
if _, _, err := s.db.SaveEvent(s.ctx, ev); err != nil {
if _, err := s.db.SaveEvent(s.ctx, ev); err != nil {
if !strings.HasPrefix(err.Error(), "blocked:") {
log.T.F(
"Spider sync: error saving event from relay %s: %v",
@@ -410,12 +420,16 @@ func (s *Spider) queryRelayForEvents(
sub.Unsub()
totalEventsSaved += chunkEventsSaved
log.T.F("Spider sync: completed chunk %d-%d for relay %s, saved %d events",
i, end-1, relayURL, chunkEventsSaved)
log.T.F(
"Spider sync: completed chunk %d-%d for relay %s, saved %d events",
i, end-1, relayURL, chunkEventsSaved,
)
}
log.T.F("Spider sync: completed all chunks for relay %s, total saved %d events",
relayURL, totalEventsSaved)
log.T.F(
"Spider sync: completed all chunks for relay %s, total saved %d events",
relayURL, totalEventsSaved,
)
return totalEventsSaved, nil
}

View File

@@ -1 +1 @@
v0.12.1
v0.14.0

View File

@@ -227,6 +227,67 @@ export ORLY_APP_NAME="ORLY"
The sprocket script should be placed at:
`~/.config/{ORLY_APP_NAME}/sprocket.sh`
For example, with default `ORLY_APP_NAME="ORLY"`:
`~/.config/ORLY/sprocket.sh`
Backup files are automatically created when updating sprocket scripts via the web UI, with timestamps like:
`~/.config/ORLY/sprocket.sh.20240101120000`
=== manual sprocket updates
For manual sprocket script updates, you can use the stop/write/restart method:
1. **Stop the relay**:
```bash
# Send SIGINT to gracefully stop
kill -INT <relay_pid>
```
2. **Write new sprocket script**:
```bash
# Create/update the sprocket script
cat > ~/.config/ORLY/sprocket.sh << 'EOF'
#!/bin/bash
while read -r line; do
if [[ -n "$line" ]]; then
event_id=$(echo "$line" | jq -r '.id')
echo "{\"id\":\"$event_id\",\"action\":\"accept\",\"msg\":\"\"}"
fi
done
EOF
# Make it executable
chmod +x ~/.config/ORLY/sprocket.sh
```
3. **Restart the relay**:
```bash
./orly
```
The relay will automatically detect the new sprocket script and start it. If the script fails, sprocket will be disabled and all events rejected until the script is fixed.
=== failure handling
When sprocket is enabled but fails to start or crashes:
1. **Automatic Disable**: Sprocket is automatically disabled
2. **Event Rejection**: All incoming events are rejected with an error message
3. **Periodic Recovery**: Every 30 seconds, the system checks whether the sprocket script has become available
4. **Auto-Restart**: If the script is found, sprocket is automatically re-enabled and restarted
This ensures that:
- Relay continues running even when sprocket fails
- No events are processed without proper sprocket filtering
- Sprocket automatically recovers when the script is fixed
- Clear error messages inform users about the sprocket status
- Error messages include the exact file location for easy fixes
When sprocket fails, the error message will show:
`sprocket disabled due to failure - all events will be rejected (script location: ~/.config/ORLY/sprocket.sh)`
This makes it easy to locate and fix the sprocket script file.
=== example script
Here's a Python example that implements various filtering criteria: