Compare commits
8 Commits
| Author | SHA1 | Date |
|---|---|---|
| | 0fd7151094 | |
| | 4c6e7b08ac | |
| | d36bcdb243 | |
| | ab032cd296 | |
| | 0b6772ca83 | |
| | bbf79bb91f | |
| | d7b9415037 | |
| | 80e4c54c08 | |
1 .gitignore vendored
@@ -75,6 +75,7 @@ key
!.openapi-generator-ignore
!.gitignore
!*.jsonl
!*.adoc

# ...even if they are in subdirectories
!*/
87 doc/events_queries.adoc Normal file
@@ -0,0 +1,87 @@
= REALY protocol event/query specification

JSON is awful: it is space inefficient, complex to parse due to its intolerance of terminal commas, and annoying to work with because of its multiple, inconsistent standards of string escaping.

Line structured documents are much more readily amenable to human reading and editing, and `\n`/`;`/`:` are more efficient than `","` as item separators. Data structures can be expressed much more simply, in a similar way to how they are written in programming languages.

It is one of the guiding principles of the Unix philosophy to keep data in a plain text, human readable format wherever possible; forcing the interposition of a parser just for humans to read the data adds extra brittleness to a protocol.

The REALY protocol format is extremely simple and should be trivial to parse in any programming language with basic string slicing operators.

== Events

This is how REALY events look:

----
<type name>\n // can be anything; hierarchic names like note/html or note/md are possible
<pubkey>\n // encoded in URL-base64
<unix second precision timestamp in decimal ascii>\n
key:value;extra;...\n // zero or more, line separated; fields cannot contain a semicolon; end with a newline instead of a semicolon; keys are lowercase alphanumeric, first character alphabetic, no whitespace or symbols; only the key is mandatory; the only reserved key is `content`
content:\n // literally this word on one line, *directly* after the newline of the previous line
<content>\n // any number of further line breaks; the last line is the signature, and everything before the signature line is part of the canonical hash
<ed25519 signature encoded in URL-base64>\n
----

The canonical form is exactly this, except for the signature and its following linebreak, hashed with Blake2b.
The binary data (Event Ids, Pubkeys and Signatures) is encoded in raw URL-safe base64 (without padding); Signatures are 86 characters, Ids and Pubkeys are 43 characters long.
The database stored form of this event should make use of a table mapping event ID hashes to monotonic, collision free serials, and an event table.

Event ID hashes will be encoded in URL-base64 where used in tags or mentioned in content, with the prefix `e:`. Public keys must be prefixed with `p:`. Tag keys should be intelligible words, and a specification for their structure should be defined by their users and shared with other REALY devs.

Indexing tags should be done with a Blake2b hash truncated to 8 bytes in the event store.

Submitting an event to be stored is the same as a result sent from an Event Id query, except with the type of operation intended: `store\n` to store an event, `replace:<Event Id>\n` to replace an existing event, and `relay\n` to not store but send to subscribers with open matching filters. A replace will not be accepted if the message type and pubkey differ from those of the specified original.

An event is then acknowledged as stored or rejected with a message `ok:<true/false>;<Event Id>;<reason type>:human readable part`, where the reason type is one of a set of common types indicating the reason for a `false`.
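A sketch of parsing the acknowledgement line; the `Ack` type and `parseAck` helper here are illustrative, not part of any implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// Ack is a hypothetical holder for the parts of an `ok:` acknowledgement line.
type Ack struct {
	OK      bool
	EventId string
	Reason  string // machine-readable reason type
	Detail  string // human readable part
}

func parseAck(line string) (a Ack, err error) {
	line = strings.TrimSuffix(line, "\n")
	rest, found := strings.CutPrefix(line, "ok:")
	if !found {
		return a, fmt.Errorf("not an ok message: %q", line)
	}
	parts := strings.SplitN(rest, ";", 3)
	if len(parts) != 3 {
		return a, fmt.Errorf("malformed ok message: %q", line)
	}
	a.OK = parts[0] == "true"
	a.EventId = parts[1]
	a.Reason, a.Detail, _ = strings.Cut(parts[2], ":")
	return a, nil
}

func main() {
	a, _ := parseAck("ok:false;AbCdEf;rate-limited:too many events\n")
	fmt.Println(a.OK, a.Reason) // false rate-limited
}
```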
Events that are returned have `<subscription Id>:<Event Id>\n` as their first line.

== Queries

There are three types of queries in REALY:

=== Filter

A filter has one or more of the fields listed below, and is headed with `filter`:

----
filter:<subscription Id>\n
pubkeys:<one>;<two>;...\n // these match as OR
timestamp:<since>;<until>\n // either can be empty but not both; omit the line to skip this filter; both bounds are inclusive
tags:
<key>:<value>\n // indexes are not required or used for more than the key and value
... // several matches can be present; they act as OR
----
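Assembling such a filter query is plain string concatenation; a minimal sketch (the `buildFilter` helper and its arguments are illustrative, not normative):

```go
package main

import (
	"fmt"
	"strings"
)

// buildFilter assembles a filter query message. All arguments are optional
// except the subscription Id; empty slices/strings omit the line entirely.
func buildFilter(subId string, pubkeys []string, since, until string, tags map[string]string) string {
	var b strings.Builder
	fmt.Fprintf(&b, "filter:%s\n", subId)
	if len(pubkeys) > 0 {
		fmt.Fprintf(&b, "pubkeys:%s\n", strings.Join(pubkeys, ";"))
	}
	if since != "" || until != "" {
		fmt.Fprintf(&b, "timestamp:%s;%s\n", since, until)
	}
	if len(tags) > 0 {
		b.WriteString("tags:\n")
		for k, v := range tags {
			fmt.Fprintf(&b, "%s:%s\n", k, v)
		}
	}
	return b.String()
}

func main() {
	fmt.Print(buildFilter("sub1", []string{"PK1", "PK2"}, "1700000000", "", nil))
}
```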
The result returned from this is a newline separated list of event ID hashes encoded in base64; a following Event Id search is required to retrieve them. This obviates the need for pagination, as the 45 bytes per event per result is far less than sending the whole event, and the client is then free to paginate how it likes, without an onerous implementation requirement or a nebulous result limit specification.

The results must be in reverse chronological order, so the client knows it can paginate them from newest to oldest as required by the user interface.

If, instead of `filter`, the top line is `subscribe:<subscription Id>\n`, the relay should return the Event Ids of any matching events it finds, and then subsequently forward the Event Id of any new matching event that comes in, until the client sends a `close:<subscription Id>\n` message.

Once all stored events are returned, the relay will send `end:<subscription Id>\n` to notify the client that the query is finished. If the client wants a subscription it must use `subscribe`. The client should end subscriptions with `close:<subscription Id>\n`; a subscription also ends when the socket is closed.

=== Text

A text search is just `search:<subscription Id>:` followed by a series of space separated tokens, terminated with a newline; it requires the event store to have a full text index.

=== Event Id

Event requests are as follows:

----
events:<subscription Id>\n
<one>\n
...
----

Normally clients will gather a potentially long list of events, and then send Event Id queries in segments according to the requirements of the user interface.

The results are returned as a series, as follows for each item:

----
event:<subscription Id>:<Event Id>\n
<event>
----
26 doc/relays.adoc Normal file
@@ -0,0 +1,26 @@
= relays

A key design principle employed in REALY is relay specialization.

Instead of making a relay a hybrid event store and router, in REALY a relay does only one thing. Thus there can be:

- a simple event repository that only understands queries to fetch a list of events by ID,
- a relay that only indexes and keeps a space/time limited cache of events to process filters,
- a relay that only keeps a full text search index and a query results cache,
- a relay that only accepts list change CRDT events (such as follow, join/create/delete/leave group, block, delete and report) and compiles these into single lists accessible to another relay, which can use the compiled lists to control access either via explicit lists or by matching filters,
- a relay that stores and fetches media, including converting and caching variants such as image sizes and formats,
- ...and many others are possible.

By constraining protocol interoperability compliance down to small, simple sub-protocols, keeping clients current with other clients and with relays is greatly simplified, without gatekeepers.

In addition, it should be normalized that relays can include clients that query other specialist relays, especially for things such as caching events. Thus one relay can be queried for a filter index, the list of Event Ids returned can then be fetched from another relay that specialises in storing events and returning them on request by lists of Event Ids, and still other relays could store media files and convert them on demand.

For this reason, instead of a single centralised mechanism, aside from the basic specifications found in link:./events_queries.adoc[REALY protocol event/query specification], it is possible to add more sub-protocols to this list without needing to negotiate their addition to this repository, though once one comes into use that can be done.

Along with the use of human-readable type identifiers for documents and the almost completely human-composable event encoding, the specification of REALY is not dependent on any kind of authoritative gatekeeping organisation; instead, organisations can add these to their own specification lists as they see fit, eliminating a key problem with the operation of the nostr protocol.

Sub-protocols need not have bureaucratic RFC style specifications; they can use human-readable names and be less formally described, the formality improving as others adopt, expand and refine them.

Thus it is also recommended that implementations of any REALY servers and clients keep a copy of the specification documents found in other implementations, and converge them with each other as their repositories update support for changes and new sub-protocols.

Lastly, as part of making this ecosystem as heterogeneous and decentralized as possible, relay operators can subscribe to other relay services, such as media storage/conversion specialists or event archivists. Focusing each relay service on a simple, single purpose and protocol enables a more robust, failure resistant ecosystem where multiple providers can compete for clients and to be suppliers for other providers, replicate data, and potentially enable specialisations like archival data access for providers that aggregate data from multiple other providers.
25 doc/spec.md
@@ -1,25 +0,0 @@
# realy protocol event specification

JSON is awful, and space inefficient, and complex to parse due to its intolerance of terminal commas and annoying to work with because of its retarded, multi-standards of string escaping.

Line structured documents are much more readily amenable to human reading and editing, and `\n`/`;`/`:` is more efficient than `","` as an item separator. Data structures can be much more simply expressed in a similar way as how they are in programming languages.

It is one of the guiding principles of the Unix philosophy to keep data in plain text, human readable format wherever possible, forcing the interposition of a parser just for humans to read the data adds extra brittleness to a protocol.

So, this is how realy events look:

```
<type name>\n
<pubkey>\n // encoded in URL-base64
<unix second precision timestamp in decimal ascii>\n
key:value;extra;...\n // zero or more line separated, fields cannot contain a semicolon, end with newline instead of semicolon, key lowercase alphanumeric, first alpha, only key is mandatory, only reserved is `content`
content: // literally this word on one line
<content>\n // any number of further line breaks, last line is signature
<bip-340 schnorr signature encoded in URL-base64>\n
```

The canonical form is exactly this, except for the signature and following linebreak, hashed with Blake2b

The database stored form of this event should make use of an event ID hash to monotonic collision free serial table and an event table.

Event ID hashes will be encoded in URL-base64 where used in tags or mentioned in content with the prefix `event:`. Public keys must be prefixed with `pubkey:` Tag keys should be intelligible words and a specification for their structure should be defined by users of them and shared with other REALY devs.
@@ -1,4 +1,4 @@
-# why realy?
+= why REALY?
 
 Since the introduction of the idea of a general "public square" style social network as seen with Facebook and Twitter, the whole world has been overcome by something of a plague of mind control brainwashing cults.
@@ -6,16 +6,23 @@ Worse than "Beatlemania" people are being lured into the control of various kind
 
 Nostr protocol is a super simple event bus architecture, blended with a post office protocol, and due to various reasons related to the recent buyout of Twitter by Elon Musk, who plainly wants to turn it into the Western version of Wechat, it has become plagued with bad subprotocol designs that negate the benefits of self sovereign identity (elliptic curve asymmetric cryptography) and a dominant form of client that is essentially a travesty of Twitter itself.
 
-Realy is being designed with the lessons learned from Nostr and the last 30 years of experience of internet communications protocols to aim to resist this kind of Embrace/Extend/Extinguish protocol that has repeatedly been performed on everything from email, to RSS, to threaded forums and instant messaging, by starting with the distilled essence of how these protocols should work so as to not be so easily vulnerable to being coopted by what is essentially in all but name the same centralised event bus architecture of social networks like Facebook and Twitter.
+REALY is being designed with the lessons learned from Nostr and the last 30 years of experience of internet communications protocols to aim to resist this kind of Embrace/Extend/Extinguish protocol that has repeatedly been performed on everything from email, to RSS, to threaded forums and instant messaging, by starting with the distilled essence of how these protocols should work so as to not be so easily vulnerable to being coopted by what is essentially in all but name the same centralised event bus architecture of social networks like Facebook and Twitter.
 
-The main purposes that Realy will target are:
+The main purposes that REALY will target are:
 
-- synchronous instant messaging protocols with IRC style nickserv and chanserv permissions and persistence, built from the ground up to take advantage of the cryptographic identities created by BIP-340 signatures, with an intuitive threaded structure that allows users to peruse a larger discussion without the problem of threads of discussion breaking the top level structure
-- structured document repositories primarily for text media, as a basis for collaborative documentation and literature collections, and software source code (breaking out of the filesystem tree structure to permit much more flexible ways of organising code)
-- persistent threaded discussion forums for longer form messages than the typical single sentence/paragraph of instant messaging
-- simple cross-relay data query protocol that enables minimising the data cost of traffic to clients
-- push style notification systems that can be programmed by the users' clients to respond to any kind of event breadcast to a relay
+* synchronous instant messaging protocols with IRC style nickserv and chanserv permissions and persistence, built from the ground up to take advantage of the cryptographic identities created by BIP-340 signatures, with an intuitive threaded structure that allows users to peruse a larger discussion without the problem of threads of discussion breaking the top level structure
+* structured document repositories primarily for text media, as a basis for collaborative documentation and literature collections, and software source code (breaking out of the filesystem tree structure to permit much more flexible ways of organising code)
+* persistent threaded discussion forums for longer form messages than the typical single sentence/paragraph of instant messaging
+* simple cross-relay data query protocol that enables minimising the data cost of traffic to clients
+* push style notification systems that can be programmed by the users' clients to respond to any kind of event broadcast to a relay
 
-A key concept in the R.E.A.L.Y. architecture is that of relays being a heteregenous group of data repositories and relaying systems that are built specific to purpose, such as a chat relay, which does not store any messages but merely bounces messages around ot subscribers, a document repository, which provides read access to data with full text search capability, that can ne specialised for a singular data format (eg markdown, eg mediawiki, eg code), a threaded, moderated forum, and others.
+A key concept in the REALY architecture is that of relays being a heterogeneous group of data repositories and relaying systems that are built specific to purpose, such as
 
-A second key concept in R.E.A.L.Y. is the integration of Lightning Network payments - again mostly copying what is done with Nostr but enabling both per-use, micro-accounts and long term subscription styles of access, and the promotion of a notion of user-pays - where all data writing must be charged for, and most reading must be paid for. Lightning is perfect for this because it can currently cope with enormous volumes of payments with mere seconds of delay for settlement and a granularity of denomination that lends itself to the very low cost of delivering a one-time service, or maintaining a micro-account.
+- a chat relay, which does not store any messages but merely bounces messages around to subscribers,
+- a document repository, which provides read access to data with full text search capability, and can be specialised for a singular data format (eg markdown, eg mediawiki, eg code), a threaded, moderated forum, and others,
+- a directory relay which stores and distributes user metadata such as profiles, relay lists, follows, mutes, deletes and reports,
+- an authentication relay, which can be sent messages to add or remove users from access whitelists and blacklists, and provides this state data to the relays that use it
+
+A second key concept in REALY is the integration of Lightning Network payments - again mostly copying what is done with Nostr but enabling both pseudonymous micro-accounts and long term subscription styles of access payment, and the promotion of a notion of user-pays - where all data writing must be charged for, and most reading must be paid for.
+
+Lightning is perfect for this because it can currently cope with enormous volumes of payments with mere seconds of delay for settlement and a granularity of denomination that lends itself to the very low cost of delivering a one-time service, or maintaining a micro-account.
54 pkg/content/content.go Normal file
@@ -0,0 +1,54 @@
package content

import (
	"bytes"
)

// C is raw content bytes of a message. This can contain anything but when it is
// unmarshalled it is assumed that the last line (content between the second
// last and last line break) is not part of the content, as this is where the
// signature is placed.
//
// The only guaranteed property of an encoded content.C is that it has two
// newline characters, one at the very end, and a second one before it that
// demarcates the end of the actual content. It can be entirely binary and mess
// up a terminal to render the unsanitized possible control characters.
type C struct{ Content []byte }

// Marshal just writes the provided data with a `content:\n` prefix and adds a
// terminal newline.
func (c *C) Marshal(dst []byte) (result []byte, err error) {
	result = append(append(append(dst, "content:\n"...), c.Content...), '\n')
	return
}

var Prefix = "content:\n"

// Unmarshal expects the `content:\n` prefix and stops at the second last
// newline. The data between the second last and last newline in the data is
// assumed to be a signature but it could be anything in another use case.
func (c *C) Unmarshal(data []byte) (rem []byte, err error) {
	if !bytes.HasPrefix(data, []byte(Prefix)) {
		err = errorf.E("content prefix `content:\\n' not found: '%s'", data[:len(Prefix)+1])
		return
	}
	// trim off the prefix.
	data = data[len(Prefix):]
	// check that there is a last newline.
	if data[len(data)-1] != '\n' {
		err = errorf.E("input data does not end with newline")
		return
	}
	// we start at the second last, previous to the terminal newline byte.
	lastPos := len(data) - 2
	for ; lastPos >= 0; lastPos-- {
		// the content ends at the byte before the second last newline byte.
		if data[lastPos] == '\n' {
			break
		}
	}
	if lastPos < 0 {
		err = errorf.E("no content-terminal newline found")
		return
	}
	c.Content = data[:lastPos]
	// return the remainder after the content-terminal newline byte.
	rem = data[lastPos+1:]
	return
}
39 pkg/content/content_test.go Normal file
@@ -0,0 +1,39 @@
package content

import (
	"bytes"
	"crypto/rand"
	mrand "math/rand"
	"testing"
)

func TestC_Marshal_Unmarshal(t *testing.T) {
	c := make([]byte, mrand.Intn(100)+25)
	_, err := rand.Read(c)
	if err != nil {
		t.Fatal(err)
	}
	log.I.S(c)
	c1 := new(C)
	c1.Content = c
	var res []byte
	if res, err = c1.Marshal(nil); chk.E(err) {
		t.Fatal(err)
	}
	// append a fake zero length signature
	res = append(res, '\n')
	log.I.S(res)
	c2 := new(C)
	var rem []byte
	if rem, err = c2.Unmarshal(res); chk.E(err) {
		t.Fatal(err)
	}
	if !bytes.Equal(c1.Content, c2.Content) {
		log.I.S(c1, c2)
		t.Fatal("content not equal")
	}
	if !bytes.Equal(rem, []byte{'\n'}) {
		log.I.S(rem)
		t.Fatalf("remainder not found")
	}
}
9 pkg/content/log.go Normal file
@@ -0,0 +1,9 @@
package content

import (
	"protocol.realy.lol/pkg/lol"
)

var (
	log, chk, errorf = lol.Main.Log, lol.Main.Check, lol.Main.Errorf
)
@@ -1,14 +1,17 @@
 package event
 
 import (
+	"protocol.realy.lol/pkg/content"
 	"protocol.realy.lol/pkg/event/types"
+	"protocol.realy.lol/pkg/pubkey"
+	"protocol.realy.lol/pkg/signature"
 )
 
 type Event struct {
-	Type      types.T
-	Pubkey    []byte
+	Type      *types.T
+	Pubkey    *pubkey.P
 	Timestamp int64
 	Tags      [][]byte
-	Content   []byte
-	Signature []byte
+	Content   *content.C
+	Signature *signature.S
 }
74 pkg/id/id.go Normal file
@@ -0,0 +1,74 @@
package id

import (
	"bytes"
	"crypto/ed25519"
	"encoding/base64"
	"io"
)

const Len = 43

type P struct{ b []byte }

func New(id []byte) (p *P, err error) {
	if len(id) != ed25519.PublicKeySize {
		err = errorf.E("invalid public key size: %d; require %d",
			len(id), ed25519.PublicKeySize)
		return
	}
	p = &P{id}
	return
}

func (p *P) Marshal(dst []byte) (result []byte, err error) {
	result = dst
	if p == nil || len(p.b) == 0 {
		err = errorf.E("nil/zero length pubkey")
		return
	}
	if len(p.b) != ed25519.PublicKeySize {
		err = errorf.E("invalid public key length %d; require %d '%0x'",
			len(p.b), ed25519.PublicKeySize, p.b)
		return
	}
	buf := bytes.NewBuffer(result)
	w := base64.NewEncoder(base64.RawURLEncoding, buf)
	if _, err = w.Write(p.b); chk.E(err) {
		return
	}
	if err = w.Close(); chk.E(err) {
		return
	}
	result = append(buf.Bytes(), '\n')
	return
}

func (p *P) Unmarshal(data []byte) (rem []byte, err error) {
	rem = data
	if p == nil {
		err = errorf.E("can't unmarshal into nil id.P")
		return
	}
	if len(rem) < 2 {
		err = errorf.E("can't unmarshal nothing")
		return
	}
	for i := range rem {
		if rem[i] == '\n' {
			if i != Len {
				err = errorf.E("invalid encoded pubkey length %d; require %d '%0x'",
					i, Len, rem[:i])
				return
			}
			p.b = make([]byte, ed25519.PublicKeySize)
			if _, err = base64.RawURLEncoding.Decode(p.b, rem[:i]); chk.E(err) {
				return
			}
			rem = rem[i+1:]
			return
		}
	}
	err = io.EOF
	return
}
40 pkg/id/id_test.go Normal file
@@ -0,0 +1,40 @@
package id

import (
	"bytes"
	"crypto/ed25519"
	"crypto/rand"
	"testing"
)

func TestP_Marshal_Unmarshal(t *testing.T) {
	var err error
	for range 10 {
		pk := make([]byte, ed25519.PublicKeySize)
		if _, err = rand.Read(pk); chk.E(err) {
			t.Fatal(err)
		}
		log.I.S(pk)
		var p *P
		if p, err = New(pk); chk.E(err) {
			t.Fatal(err)
		}
		var o []byte
		if o, err = p.Marshal(nil); chk.E(err) {
			t.Fatal(err)
		}
		log.I.F("%d %s", len(o), o)
		p2 := &P{}
		var rem []byte
		if rem, err = p2.Unmarshal(o); chk.E(err) {
			t.Fatal(err)
		}
		if len(rem) > 0 {
			log.I.F("%d %s", len(rem), rem)
		}
		log.I.S(p2.b)
		if !bytes.Equal(pk, p2.b) {
			t.Fatal("public key did not encode/decode faithfully")
		}
	}
}
9 pkg/id/log.go Normal file
@@ -0,0 +1,9 @@
package id

import (
	"protocol.realy.lol/pkg/lol"
)

var (
	log, chk, errorf = lol.Main.Log, lol.Main.Check, lol.Main.Errorf
)
@@ -7,7 +7,7 @@ import (
 	"io"
 )
 
-const Len = 44
+const Len = 43
 
 type P struct{ ed25519.PublicKey }
 
@@ -33,7 +33,7 @@ func (p *P) Marshal(dst []byte) (result []byte, err error) {
 		return
 	}
 	buf := bytes.NewBuffer(result)
-	w := base64.NewEncoder(base64.URLEncoding, buf)
+	w := base64.NewEncoder(base64.RawURLEncoding, buf)
 	if _, err = w.Write(p.PublicKey); chk.E(err) {
 		return
 	}
@@ -62,7 +62,7 @@ func (p *P) Unmarshal(data []byte) (rem []byte, err error) {
 		return
 	}
 	p.PublicKey = make([]byte, ed25519.PublicKeySize)
-	if _, err = base64.URLEncoding.Decode(p.PublicKey, rem[:i]); chk.E(err) {
+	if _, err = base64.RawURLEncoding.Decode(p.PublicKey, rem[:i]); chk.E(err) {
 		return
 	}
 	rem = rem[i+1:]
@@ -8,31 +8,33 @@ import (
 )
 
 func TestP_Marshal_Unmarshal(t *testing.T) {
-	pk := make([]byte, ed25519.PublicKeySize)
 	var err error
-	if _, err = rand.Read(pk); chk.E(err) {
-		t.Fatal(err)
-	}
-	log.I.S(pk)
-	var p *P
-	if p, err = New(pk); chk.E(err) {
-		t.Fatal(err)
-	}
-	var o []byte
-	if o, err = p.Marshal(nil); chk.E(err) {
-		t.Fatal(err)
-	}
-	log.I.F("%d %s", len(o), o)
-	p2 := &P{}
-	var rem []byte
-	if rem, err = p2.Unmarshal(o); chk.E(err) {
-		t.Fatal(err)
-	}
-	if len(rem) > 0 {
-		log.I.F("%d %s", len(rem), rem)
-	}
-	log.I.S(p2.PublicKey)
-	if !bytes.Equal(pk, p2.PublicKey) {
-		t.Fatal("public key did not encode/decode faithfully")
+	for range 10 {
+		pk := make([]byte, ed25519.PublicKeySize)
+		if _, err = rand.Read(pk); chk.E(err) {
+			t.Fatal(err)
+		}
+		log.I.S(pk)
+		var p *P
+		if p, err = New(pk); chk.E(err) {
+			t.Fatal(err)
+		}
+		var o []byte
+		if o, err = p.Marshal(nil); chk.E(err) {
+			t.Fatal(err)
+		}
+		log.I.F("%d %s", len(o), o)
+		p2 := &P{}
+		var rem []byte
+		if rem, err = p2.Unmarshal(o); chk.E(err) {
+			t.Fatal(err)
+		}
+		if len(rem) > 0 {
+			log.I.F("%d %s", len(rem), rem)
+		}
+		log.I.S(p2.PublicKey)
+		if !bytes.Equal(pk, p2.PublicKey) {
+			t.Fatal("public key did not encode/decode faithfully")
+		}
 	}
 }
@@ -7,7 +7,7 @@ import (
 	"io"
 )
 
-const Len = 88
+const Len = 86
 
 type S struct{ Signature []byte }
 
@@ -33,7 +33,7 @@ func (p *S) Marshal(dst []byte) (result []byte, err error) {
 		return
 	}
 	buf := bytes.NewBuffer(result)
-	w := base64.NewEncoder(base64.URLEncoding, buf)
+	w := base64.NewEncoder(base64.RawURLEncoding, buf)
 	if _, err = w.Write(p.Signature); chk.E(err) {
 		return
 	}
@@ -62,7 +62,7 @@ func (p *S) Unmarshal(data []byte) (rem []byte, err error) {
 		return
 	}
 	p.Signature = make([]byte, ed25519.SignatureSize)
-	if _, err = base64.URLEncoding.Decode(p.Signature, rem[:i]); chk.E(err) {
+	if _, err = base64.RawURLEncoding.Decode(p.Signature, rem[:i]); chk.E(err) {
 		return
 	}
 	rem = rem[i+1:]
@@ -8,31 +8,34 @@ import (
 )
 
 func TestS_Marshal_Unmarshal(t *testing.T) {
-	sig := make([]byte, ed25519.SignatureSize)
 	var err error
-	if _, err = rand.Read(sig); chk.E(err) {
-		t.Fatal(err)
-	}
-	log.I.S(sig)
-	var s *S
-	if s, err = New(sig); chk.E(err) {
-		t.Fatal(err)
-	}
-	var o []byte
-	if o, err = s.Marshal(nil); chk.E(err) {
-		t.Fatal(err)
-	}
-	log.I.F("%d %s", len(o), o)
-	p2 := &S{}
-	var rem []byte
-	if rem, err = p2.Unmarshal(o); chk.E(err) {
-		t.Fatal(err)
-	}
-	if len(rem) > 0 {
-		log.I.F("%d %s", len(rem), rem)
-	}
-	log.I.S(p2.Signature)
-	if !bytes.Equal(sig, p2.Signature) {
-		t.Fatal("signature did not encode/decode faithfully")
+	for range 10 {
+		sig := make([]byte, ed25519.SignatureSize)
+		if _, err = rand.Read(sig); chk.E(err) {
+			t.Fatal(err)
+		}
+		log.I.S(sig)
+		var s *S
+		if s, err = New(sig); chk.E(err) {
+			t.Fatal(err)
+		}
+		var o []byte
+		if o, err = s.Marshal(nil); chk.E(err) {
+			t.Fatal(err)
+		}
+		log.I.F("%d %s", len(o), o)
+		p2 := &S{}
+		var rem []byte
+		if rem, err = p2.Unmarshal(o); chk.E(err) {
+			t.Fatal(err)
+		}
+		if len(rem) > 0 {
+			log.I.F("%d %s", len(rem), rem)
+		}
+		log.I.S(p2.Signature)
+		if !bytes.Equal(sig, p2.Signature) {
+			t.Fatal("signature did not encode/decode faithfully")
+		}
+
 	}
 }
9 pkg/tag/log.go Normal file
@@ -0,0 +1,9 @@
package tag

import (
	"protocol.realy.lol/pkg/lol"
)

var (
	log, chk, errorf = lol.Main.Log, lol.Main.Check, lol.Main.Errorf
)
135 pkg/tag/tag.go Normal file
@@ -0,0 +1,135 @@
// Package tag defines a format for event tags that follows these rules:
//
// The first field is the key; this is hashed using Blake2b and truncated to 8 bytes for indexing. These keys should
// not be long, and thus will not have any collisions as a truncated hash. The terminal byte of a key is the colon ':'.
//
// Subsequent fields are separated by semicolon ';' and can contain any data except a semicolon or newline.
//
// The tag is terminated by a newline.
package tag

import (
	"bytes"
)

type fields [][]byte

type T struct{ fields }

func New[V ~[]byte | ~string](v ...V) (t *T, err error) {
	if len(v) == 0 {
		err = errorf.E("tag requires at least a key")
		return
	}
	t = new(T)
	var k []byte
	if k, err = ValidateKey([]byte(v[0])); chk.E(err) {
		return
	}
	v = v[1:]
	t.fields = append(t.fields, k)
	for i, val := range v {
		var b []byte
		if b, err = ValidateField(val, i); chk.E(err) {
			return
		}
		t.fields = append(t.fields, b)
	}
	return
}

// ValidateKey checks that the key is valid. Keys follow the same rules as identifiers in most languages:
//
// - the first character is alphabetic [a-zA-Z]
// - subsequent characters can be alphanumeric or underscore [a-zA-Z0-9_]
//
// If the key is not valid this function returns a nil value.
func ValidateKey[V ~[]byte | ~string](key V) (k []byte, err error) {
	if len(key) < 1 {
		err = errorf.E("empty tag key")
		return
	}
	kb := []byte(key)
	if !(kb[0] >= 'a' && kb[0] <= 'z' || kb[0] >= 'A' && kb[0] <= 'Z') {
		err = errorf.E("tag key must begin with an alphabetic character: \"%s\"", kb)
		return
	}
	for i, b := range kb[1:] {
		switch {
		case b >= 'a' && b <= 'z', b >= 'A' && b <= 'Z', b >= '0' && b <= '9', b == '_':
		default:
			err = errorf.E("invalid character in tag key at index %d '%c': \"%s\"", i+1, b, kb)
			return
		}
	}
	// if we got to here, the whole string is compliant
	k = kb
	return
}

func ValidateField[V ~[]byte | ~string](f V, i int) (k []byte, err error) {
	b := []byte(f)
	if bytes.Contains(b, []byte(";")) {
		err = errorf.E("field %d cannot contain ';': '%s'", i, b)
		return
	}
	if bytes.Contains(b, []byte("\n")) {
		err = errorf.E("field %d cannot contain '\\n': '%s'", i, b)
		return
	}
	// if we got to here, the whole string is compliant
	k = b
	return
}

func (t *T) Marshal(dst []byte) (result []byte, err error) {
	result = dst
	if len(t.fields) == 0 {
		return
	}
	for i, field := range t.fields {
		result = append(result, field...)
		if i == 0 {
			result = append(result, ':')
		} else if i < len(t.fields)-1 {
			result = append(result, ';')
		}
	}
	// the terminal field ends with a newline rather than a separator
	result = append(result, '\n')
	return
}

func (t *T) Unmarshal(data []byte) (rem []byte, err error) {
	var i int
	var v byte
	var dat []byte
	// first find the end of the tag line
	for i, v = range data {
		if v == '\n' {
			dat, rem = data[:i], data[i+1:]
			break
		}
	}
	if len(dat) == 0 {
		err = errorf.E("invalid empty tag")
		return
	}
	// split off the key at the first colon
	var found bool
	for i, v = range dat {
		if v == ':' {
			t.fields = append(t.fields, dat[:i])
			dat = dat[i+1:]
			found = true
			break
		}
	}
	if !found {
		err = errorf.E("tag missing key terminator ':': \"%s\"", dat)
		return
	}
	// the remaining fields are separated by semicolons
	for len(dat) > 0 {
		for i, v = range dat {
			if v == ';' {
				t.fields = append(t.fields, dat[:i])
				dat = dat[i+1:]
				break
			}
			if i == len(dat)-1 {
				t.fields = append(t.fields, dat)
				return
			}
		}
	}
	return
}
pkg/tag/tag_test.go (new file)
@@ -0,0 +1,29 @@
package tag

import (
	"testing"
)

func TestT_Marshal_Unmarshal(t *testing.T) {
	var err error
	var t1 *T
	if t1, err = New("reply", "e:l_T9Of4ru-PLGUxxvw3SfZH0e6XW11VYy8ZSgbcsD9Y",
		"realy.example.com/repo"); chk.E(err) {
		t.Fatal(err)
	}
	var tb []byte
	if tb, err = t1.Marshal(nil); chk.E(err) {
		t.Fatal(err)
	}
	log.I.S(tb)
	t2 := new(T)
	var rem []byte
	if rem, err = t2.Unmarshal(tb); chk.E(err) {
		t.Fatal(err)
	}
	if len(rem) > 0 {
		log.I.F("%s", rem)
		t.Fatal("remainder after tag should have been nothing")
	}
	log.I.S(t2)
}
pkg/tags/log.go (new file)
@@ -0,0 +1,9 @@
package tags

import (
	"protocol.realy.lol/pkg/lol"
)

var (
	log, chk, errorf = lol.Main.Log, lol.Main.Check, lol.Main.Errorf
)
pkg/tags/tags.go (new file)
@@ -0,0 +1,56 @@
package tags

import (
	"bytes"

	"protocol.realy.lol/pkg/tag"
)

const Sentinel = "tags:\n"

var SentinelBytes = []byte(Sentinel)

type tags []*tag.T

type T struct{ tags }

func New(v ...*tag.T) *T { return &T{tags: v} }

func (t *T) Marshal(dst []byte) (result []byte, err error) {
	result = dst
	result = append(result, Sentinel...)
	for _, tt := range t.tags {
		if result, err = tt.Marshal(result); chk.E(err) {
			return
		}
	}
	// a blank line terminates the tags section
	result = append(result, '\n')
	return
}

func (t *T) Unmarshal(data []byte) (rem []byte, err error) {
	if len(data) < len(Sentinel) {
		err = errorf.E("bytes too short to contain tags")
		return
	}
	if !bytes.Equal(data[:len(Sentinel)], SentinelBytes) {
		err = errorf.E("tags section missing '%s' sentinel", Sentinel[:len(Sentinel)-1])
		return
	}
	dat := data[len(Sentinel):]
	for len(dat) > 0 {
		if dat[0] == '\n' {
			// the terminal blank line ends the tags section; everything
			// after it is the remainder
			rem = dat[1:]
			break
		}
		tt := new(tag.T)
		if dat, err = tt.Unmarshal(dat); chk.E(err) {
			return
		}
		t.tags = append(t.tags, tt)
	}
	return
}
pkg/tags/tags_test.go (new file)
@@ -0,0 +1,45 @@
package tags

import (
	"bytes"
	"testing"

	"protocol.realy.lol/pkg/tag"
)

func TestT_Marshal_Unmarshal(t *testing.T) {
	var tegs = [][]string{
		{"reply", "e:l_T9Of4ru-PLGUxxvw3SfZH0e6XW11VYy8ZSgbcsD9Y", "realy.example.com/repo1"},
		{"root", "e:l_T9Of4ru-PLGUxxvw3SfZH0e6XW11VYy8ZSgbcsD9Y", "realy.example.com/repo2"},
		{"mention", "p:JMkZVnu9QFplR4F_KrWX-3chQsklXZq_5I6eYcXfz1Q", "realy.example.com/repo3"},
	}
	var err error
	var tgs []*tag.T
	for _, teg := range tegs {
		var tg *tag.T
		if tg, err = tag.New(teg...); chk.E(err) {
			t.Fatal(err)
		}
		tgs = append(tgs, tg)
	}
	t1 := New(tgs...)
	var m1 []byte
	if m1, err = t1.Marshal(nil); chk.E(err) {
		t.Fatal(err)
	}
	t2 := new(T)
	var rem []byte
	if rem, err = t2.Unmarshal(m1); chk.E(err) {
		t.Fatal(err)
	}
	if len(rem) > 0 {
		t.Fatalf("%s", rem)
	}
	var m2 []byte
	if m2, err = t2.Marshal(nil); chk.E(err) {
		t.Fatal(err)
	}
	if !bytes.Equal(m1, m2) {
		t.Fatalf("not equal:\n%s\n%s", m1, m2)
	}
}
readme.adoc (new file)
@@ -0,0 +1,22 @@
= REALY Protocol

____
relay events and like… yeah
____

image:https://img.shields.io/badge/godoc-documentation-blue.svg[Documentation,link=https://pkg.go.dev/protocol.realy.lol]
image:https://img.shields.io/badge/matrix-chat-green.svg[matrix chat,link=https://matrix.to/#/#realy-general:matrix.org]

zap mleku: ⚡️mleku@getalby.com

Inspired by the event bus architecture of https://github.com/nostr-protocol[nostr] but redesigned to avoid the serious deficiencies of that protocol for both developers and users.

* link:./doc/why.md[why]
* link:./doc/events_queries.adoc[events and queries]
* link:./doc/relays.md[relays]
* link:./relays/readme.md[reference relays]
* link:./clients/readme.md[reference clients]
* link:./pkg/readme.md[GO libraries]
readme.md (deleted)
@@ -1,17 +0,0 @@
# R.E.A.L.Y. Protocol

> relay events and like... yeah

[Documentation](https://pkg.go.dev/protocol.realy.lol)
[matrix chat](https://matrix.to/#/#realy-general:matrix.org)

zap mleku: ⚡️mleku@getalby.com

Inspired by the event bus architecture of [nostr](https://github.com/nostr-protocol) but redesigned to avoid the serious deficiencies of that protocol for both developers and users.

- [why](./doc/why.md)
- [event spec](./doc/spec.md)
- [reference relays](./relays/readme.md)
- [reference clients](./clients/readme.md)
- [GO libraries](./pkg/readme.md)