Compare commits

10 Commits

| SHA1 |
|---|
| 5cde988f66 |
| d29ea74783 |
| 76297bf73e |
| 786cc0108c |
| 7eba0a1ddb |
| a3b07cf68a |
| 4983d6095f |
| eaefa6c0bc |
| 0fd7151094 |
| 4c6e7b08ac |
@@ -8,35 +8,93 @@ It is one of the guiding principles of the Unix philosophy to keep data in plain

The REALY protocol format is extremely simple and should be trivial to parse in any programming language with basic string slicing operators.
---

== Base64 Encoding

To save space and eliminate the need for ugly `=` padding characters, we invoke link:https://datatracker.ietf.org/doc/html/rfc4648#section-3.2[RFC 4648 section 3.2], which permits base64 URL encoding without padding when the data length is known. Here it is used for IDs and pubkeys (32 byte payload each, 43 characters in raw URL-safe base64) and signatures (64 byte payload, 86 characters in raw URL-safe base64). A further benefit is that the exact same string can be used in HTTP GET parameter (`?key=value&...`) contexts, where the standard `=` padding would break this usage.
For ease of human use, it is also recommended that when the value is printed in plain text it be placed on its own line, so that a triple click selects all of it, including the `-` hyphen/minus character that word-wise selection would normally split on, as follows:

CF4I5dXYPZ_lu2pYRjey1QMDmgNJEyT-MM8Vvj6EnZM
For those who cannot find a "raw" codec for base64: the 32 byte payload has one `=` pad suffix and the 64 byte payload has two (`==`); these can be trimmed off and added back to conform to this requirement. Because there can potentially be hundreds if not thousands of these values in event content and tag fields, the space saving can be considerable, in addition to the benefit of being able to use these codes in URL parameter values.
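As an illustration (a sketch, not code from this repository), the trim-and-restore round trip looks like this in Go using the standard `encoding/base64` package:

[source,go]
----
package main

import (
	"encoding/base64"
	"fmt"
	"strings"
)

func main() {
	payload := make([]byte, 32) // e.g. an event ID or pubkey payload
	// The raw URL codec produces 43 characters for 32 bytes, with no padding.
	raw := base64.RawURLEncoding.EncodeToString(payload)
	// With only a padded codec available, encode and trim the `=` suffix...
	padded := base64.URLEncoding.EncodeToString(payload) // 44 characters, one '='
	trimmed := strings.TrimRight(padded, "=")
	fmt.Println(len(raw), len(padded), raw == trimmed) // 43 44 true
	// ...and add the padding back before decoding with the padded codec.
	restored := trimmed + strings.Repeat("=", (4-len(trimmed)%4)%4)
	if _, err := base64.URLEncoding.DecodeString(restored); err != nil {
		panic(err)
	}
}
----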
== Sockets and HTTP

Only subscriptions require a server push messaging pattern; all other queries in REALY can be done with simple HTTP POST requests.
A relay should respond to a `subscribe` request by upgrading from HTTP to a websocket.

Using websockets for queries that fit the HTTP request/response pattern adds unnecessary messages and work; by requiring sockets only for APIs that actually need server-initiated messaging, the complexity of the relay is greatly reduced.

There can also be two kinds of subscription: one delivering only the event IDs, and one forwarding the whole event.
=== HTTP Authentication

For the most part, all queries and submissions must be authenticated so that a REALY relay can decide whether to allow access.

To enable this, a suffix is added to messages with the following format:
`<message payload>\n` // all messages must be terminated with a newline

`<request URL>\n` // because we aren't also signing over the http headers

`<unix timestamp in decimal ascii>:<public key of signer>:<signature>\n`
For reasons of security, a relay should not allow a time skew in the timestamp of more than 15 seconds.

The signature is over the Blake2b message hash of everything up to the colon preceding it, and relates only to the HTTP POST payload, not the headers.

Even subscription messages should be signed the same way, to avoid needing a secondary protocol. "Open" relays that have no access control (not recommended, but included for completeness) must still require this authentication message; the client can simply sign with one-shot keys, since the signature also serves as an HMAC-like integrity check on the request data, being based on its hash.
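As an illustration only, a client-side sketch of appending this suffix might look as follows in Go; the 256 bit Blake2b digest width and the inclusion of the final colon in the hashed bytes are assumptions here, not normative:

[source,go]
----
package main

import (
	"crypto/ed25519"
	"encoding/base64"
	"fmt"
	"time"

	"golang.org/x/crypto/blake2b"
)

// appendAuth appends `<timestamp>:<pubkey>:<signature>\n` to a message that
// already ends with `<payload>\n<request URL>\n` (helper name is illustrative).
func appendAuth(msg []byte, pub ed25519.PublicKey, priv ed25519.PrivateKey) []byte {
	enc := base64.RawURLEncoding
	msg = append(msg, fmt.Sprintf("%d:", time.Now().Unix())...)
	msg = append(msg, enc.EncodeToString(pub)...)
	msg = append(msg, ':')
	// sign the Blake2b hash of everything up to the colon preceding the signature
	h := blake2b.Sum256(msg)
	sig := ed25519.Sign(priv, h[:])
	msg = append(msg, enc.EncodeToString(sig)...)
	return append(msg, '\n')
}

func main() {
	pub, priv, _ := ed25519.GenerateKey(nil)
	out := appendAuth([]byte("payload\nhttps://relay.example.com/query\n"), pub, priv)
	fmt.Printf("%s", out)
}
----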
== Events

The format of events is as follows - the monospace segments are the exact text, including the necessary linebreak characters; the rest is descriptive.
`<type name>\n` // can be anything, hierarchic names like note/html note/md are possible, or type.subtype or whatever

`<pubkey>\n` // encoded in URL-base64 with the padding `=` elided

`<unix second precision timestamp in decimal ascii>\n`

`tags:\n`

`key:value;extra;...\n` // zero or more line separated, fields cannot contain a semicolon, end with newline instead of semicolon, key lowercase alphanumeric, first alpha, no whitespace or symbols, only key and following `:` are mandatory

`\n` // tags end with a double linebreak

`content:\n` // literally this word on one line *directly* after the newline of the previous

`<content>\n` // any number of further line breaks, last line is signature, everything before the signature line is part of the canonical hash

-> The canonical form is the above, excluding the signature line and its following linebreak; it is hashed with Blake2b to create the message hash <-

---

`<ed25519 signature encoded in URL-base64>\n` // this field would have two padding chars `==`, these should be elided

---

The binary data - Event Ids, Pubkeys and Signatures - are encoded in raw base64 URL encoding (without padding): Signatures are 86 characters long, with the two padding characters (`==`) elided; Ids and Pubkeys are 43 characters long, with a single padding character (`=`) elided.

The database stored form of this event should make use of an event ID hash mapped to a monotonic serial ID number as the key for associating the filter indexes of an event store.
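A sketch of deriving the event ID from a serialized event follows; the canonical form is everything up to and including the newline that precedes the signature line, and the 256 bit Blake2b output size is an assumption here (consistent with the 32 byte, 43 character IDs described above):

[source,go]
----
package main

import (
	"bytes"
	"encoding/base64"
	"fmt"

	"golang.org/x/crypto/blake2b"
)

// eventID hashes the canonical form: everything up to and including the
// newline that precedes the final (signature) line.
func eventID(raw []byte) (string, error) {
	if len(raw) == 0 || raw[len(raw)-1] != '\n' {
		return "", fmt.Errorf("event must end with a newline")
	}
	// locate the newline before the signature line (the second last newline)
	i := bytes.LastIndexByte(raw[:len(raw)-1], '\n')
	if i < 0 {
		return "", fmt.Errorf("no signature line found")
	}
	h := blake2b.Sum256(raw[:i+1])
	return base64.RawURLEncoding.EncodeToString(h[:]), nil
}

func main() {
	ev := []byte("note/md\n<pubkey>\n1700000000\ntags:\n\ncontent:\nhello\n<signature>\n")
	id, err := eventID(ev)
	fmt.Println(id, err)
}
----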
Event ID hashes are encoded in URL-base64 where used in tags or mentioned in content, with the prefix `e:`. Public keys must be prefixed with `p:`. Tag keys should be intelligible words, and a specification for their structure should be defined by their users and shared with other REALY devs.

Indexing tag keys should be done with a truncated Blake2b hash cut at 8 bytes in the event store; keys should be short, and thus the chances of collisions are practically zero.
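A sketch of deriving such an index key (the helper name is illustrative, and the real event store layout may differ):

[source,go]
----
package main

import (
	"encoding/hex"
	"fmt"

	"golang.org/x/crypto/blake2b"
)

// tagKeyIndex returns the 8 byte truncated Blake2b hash used to index a tag
// key in the event store.
func tagKeyIndex(key []byte) []byte {
	h := blake2b.Sum256(key)
	return h[:8]
}

func main() {
	fmt.Println(hex.EncodeToString(tagKeyIndex([]byte("reply"))))
}
----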
== Publishing

Submitting an event to be stored is the same as a result sent from an Event Id query, except with the type of operation intended: `store\n` to store an event, `replace:<Event Id>\n` to replace an existing event, and `relay\n` to not store but send to subscribers with open matching filters. A replace will not be accepted if the message type and pubkey differ from those of the original specified.
The use of specific, distinct types of store request eliminates the complexity of defining event types as replaceable, by making this intent explicit. A relay can also allow only one kind - for example a pure relay, which accepts `relay` requests but neither `store` nor `replace`.

An event is then acknowledged as stored or rejected with a message `ok:<true/false>;<Event Id>;<reason type>:human readable part`, where the reason type is one of a set of common types indicating the reason for a false result.
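A minimal client-side sketch of parsing such an acknowledgement (the `invalid` reason type in the example is hypothetical, as the set of common reason types is not enumerated here):

[source,go]
----
package main

import (
	"fmt"
	"strings"
)

// Ack is a parsed `ok:` acknowledgement line (field names are illustrative).
type Ack struct {
	OK      bool
	EventID string
	Reason  string // `<reason type>:human readable part`
}

// parseAck splits `ok:<true/false>;<Event Id>;<reason type>:human readable part`.
func parseAck(line string) (a Ack, err error) {
	rest, found := strings.CutPrefix(strings.TrimSuffix(line, "\n"), "ok:")
	if !found {
		return a, fmt.Errorf("not an ok message: %q", line)
	}
	parts := strings.SplitN(rest, ";", 3)
	if len(parts) < 3 {
		return a, fmt.Errorf("malformed ok message: %q", line)
	}
	a.OK = parts[0] == "true"
	a.EventID = parts[1]
	a.Reason = parts[2]
	return a, nil
}

func main() {
	a, err := parseAck("ok:false;CF4I5dXYPZ_lu2pYRjey1QMDmgNJEyT-MM8Vvj6EnZM;invalid:bad signature\n")
	fmt.Println(a, err)
}
----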
Events that are returned have `<subscription Id>:<Event Id>\n` as the first line, and then the event in the format described above afterwards.
== Queries

@@ -61,7 +119,11 @@ The results must be in reverse chronological order so the client knows it can pa
If instead of `filter\n` at the top there is `subscribe:<subscription Id>\n`, the relay should return any events it finds the Id for, and then subsequently forward the Event Id of any new matching event that comes in, until the client sends a `close:<subscription Id>\n` message.

Once all stored events are returned, the relay will send `end:<subscription Id>\n` to notify the client that the stored results are finished; for a subscription, hereafter only newly arrived events will follow. If the client wants a subscription it must use `subscribe`, and should end it with `close:<subscription Id>\n` or by closing the socket.
`subscribe_full:<subscription Id>` should be used to request that the events themselves be delivered directly, instead of just the event IDs matching the subscription filter.

In the case of events that are published via the `relay` command, there must therefore be one or more "chanserv" style relays also connected to the relay, from whom clients know they can request such events; and a "nickserv" type of specialized relay would also need to exist for creating access whitelists - compiling singular edits to these lists and using a subscription mechanism to notify such clients of the need to update their ACL.
=== Text

@@ -73,15 +135,18 @@ Event requests are as follows:
----
events:<subscription Id>\n
<event ID one>\n
...
----

Unlike in event tags and content, the `e:` prefix is unnecessary here. The previous two query types return only lists of event IDs; to fetch the events themselves, a client must then send an `events` request.
Normally clients will gather a potentially longer list of events and then send Event Id queries in segments, according to the requirements of the user interface.

The results are returned as a series, as follows, for each item returned:
----
event:<subscription Id>:<Event Id>\n
<event>\n
...
----
pkg/content/content.go (new file, 54 lines)

@@ -0,0 +1,54 @@
package content

import (
	"bytes"
)

// C is raw content bytes of a message. This can contain anything but when it is
// unmarshalled it is assumed that the last line (content between the second
// last and last line break) is not part of the content, as this is where the
// signature is placed.
//
// The only guaranteed property of an encoded content.C is that it has two
// newline characters, one at the very end, and a second one before it that
// demarcates the end of the actual content. It can be entirely binary and mess
// up a terminal to render the unsanitized possible control characters.
type C struct{ Content []byte }

// Marshal just writes the provided data with a `content:\n` prefix and adds a
// terminal newline.
func (c *C) Marshal(dst []byte) (result []byte, err error) {
	result = append(append(append(dst, "content:\n"...), c.Content...), '\n')
	return
}

var Prefix = "content:\n"

// Unmarshal expects the `content:\n` prefix and stops at the second last
// newline. The data between the second last and last newline in the data is
// assumed to be a signature but it could be anything in another use case.
func (c *C) Unmarshal(data []byte) (rem []byte, err error) {
	if !bytes.HasPrefix(data, []byte("content:\n")) {
		err = errorf.E("content prefix `content:\\n' not found: '%s'", data[:len(Prefix)+1])
		return
	}
	// trim off the prefix.
	data = data[len(Prefix):]
	// check that there is a last newline.
	if data[len(data)-1] != '\n' {
		err = errorf.E("input data does not end with newline")
		return
	}
	// we start at the second last, previous to the terminal newline byte.
	lastPos := len(data) - 2
	for ; lastPos >= len(Prefix); lastPos-- {
		// the content ends at the byte before the second last newline byte.
		if data[lastPos] == '\n' {
			break
		}
	}
	c.Content = data[:lastPos]
	// return the remainder after the content-terminal newline byte.
	rem = data[lastPos+1:]
	return
}
pkg/content/content_test.go (new file, 37 lines)

@@ -0,0 +1,37 @@
package content

import (
	"bytes"
	"crypto/rand"
	mrand "math/rand"
	"testing"
)

func TestC_Marshal_Unmarshal(t *testing.T) {
	c := make([]byte, mrand.Intn(100)+25)
	_, err := rand.Read(c)
	if err != nil {
		t.Fatal(err)
	}
	c1 := new(C)
	c1.Content = c
	var res []byte
	if res, err = c1.Marshal(nil); chk.E(err) {
		t.Fatal(err)
	}
	// append a fake zero length signature
	res = append(res, '\n')
	c2 := new(C)
	var rem []byte
	if rem, err = c2.Unmarshal(res); chk.E(err) {
		t.Fatal(err)
	}
	if !bytes.Equal(c1.Content, c2.Content) {
		log.I.S(c1, c2)
		t.Fatal("content not equal")
	}
	if !bytes.Equal(rem, []byte{'\n'}) {
		log.I.S(rem)
		t.Fatalf("remainder not found")
	}
}
pkg/content/log.go (new file, 9 lines)

@@ -0,0 +1,9 @@
package content

import (
	"protocol.realy.lol/pkg/lol"
)

var (
	log, chk, errorf = lol.Main.Log, lol.Main.Check, lol.Main.Errorf
)
@@ -1,16 +1,19 @@
package event

import (
	"protocol.realy.lol/pkg/content"
	"protocol.realy.lol/pkg/event/types"
	"protocol.realy.lol/pkg/pubkey"
	"protocol.realy.lol/pkg/signature"
	"protocol.realy.lol/pkg/tags"
	"protocol.realy.lol/pkg/timestamp"
)

type Event struct {
	Type      *types.T
	Pubkey    *pubkey.P
	Timestamp *timestamp.T
	Tags      *tags.T
	Content   *content.C
	Signature *signature.S
}
@@ -12,7 +12,6 @@ func TestT_Marshal_Unmarshal(t *testing.T) {
	if res, err = typ.Marshal(nil); chk.E(err) {
		t.Fatal(err)
	}
	log.I.S(res)
	t2 := new(T)
	var rem []byte
	if rem, err = t2.Unmarshal(res); chk.E(err) {
@@ -21,7 +20,6 @@ func TestT_Marshal_Unmarshal(t *testing.T) {
	if len(rem) > 0 {
		log.I.S(rem)
	}
	log.I.S(t2)
	if !bytes.Equal(typ, *t2) {
		t.Fatal("types.T did not encode/decode faithfully")
	}
pkg/id/id.go (new file, 74 lines)

@@ -0,0 +1,74 @@
package id

import (
	"bytes"
	"crypto/ed25519"
	"encoding/base64"
	"io"
)

const Len = 43

type P struct{ b []byte }

func New(id []byte) (p *P, err error) {
	if len(id) != ed25519.PublicKeySize {
		err = errorf.E("invalid public key size: %d; require %d",
			len(id), ed25519.PublicKeySize)
		return
	}
	p = &P{id}
	return
}

func (p *P) Marshal(dst []byte) (result []byte, err error) {
	result = dst
	if p == nil || p.b == nil || len(p.b) == 0 {
		err = errorf.E("nil/zero length pubkey")
		return
	}
	if len(p.b) != ed25519.PublicKeySize {
		err = errorf.E("invalid public key length %d; require %d '%0x'",
			len(p.b), ed25519.PublicKeySize, p.b)
		return
	}
	buf := bytes.NewBuffer(result)
	w := base64.NewEncoder(base64.RawURLEncoding, buf)
	if _, err = w.Write(p.b); chk.E(err) {
		return
	}
	if err = w.Close(); chk.E(err) {
		return
	}
	result = append(buf.Bytes(), '\n')
	return
}

func (p *P) Unmarshal(data []byte) (rem []byte, err error) {
	rem = data
	if p == nil {
		err = errorf.E("can't unmarshal into nil types.T")
		return
	}
	if len(rem) < 2 {
		err = errorf.E("can't unmarshal nothing")
		return
	}
	for i := range rem {
		if rem[i] == '\n' {
			if i != Len {
				err = errorf.E("invalid encoded pubkey length %d; require %d '%0x'",
					i, Len, rem[:i])
				return
			}
			p.b = make([]byte, ed25519.PublicKeySize)
			if _, err = base64.RawURLEncoding.Decode(p.b, rem[:i]); chk.E(err) {
				return
			}
			rem = rem[i+1:]
			return
		}
	}
	err = io.EOF
	return
}
pkg/id/id_test.go (new file, 37 lines)

@@ -0,0 +1,37 @@
package id

import (
	"bytes"
	"crypto/ed25519"
	"crypto/rand"
	"testing"
)

func TestT_Marshal_Unmarshal(t *testing.T) {
	var err error
	for range 10 {
		pk := make([]byte, ed25519.PublicKeySize)
		if _, err = rand.Read(pk); chk.E(err) {
			t.Fatal(err)
		}
		var p *P
		if p, err = New(pk); chk.E(err) {
			t.Fatal(err)
		}
		var o []byte
		if o, err = p.Marshal(nil); chk.E(err) {
			t.Fatal(err)
		}
		p2 := &P{}
		var rem []byte
		if rem, err = p2.Unmarshal(o); chk.E(err) {
			t.Fatal(err)
		}
		if len(rem) > 0 {
			log.I.F("%d %s", len(rem), rem)
		}
		if !bytes.Equal(pk, p2.b) {
			t.Fatal("public key did not encode/decode faithfully")
		}
	}
}
pkg/id/log.go (new file, 9 lines)

@@ -0,0 +1,9 @@
package id

import (
	"protocol.realy.lol/pkg/lol"
)

var (
	log, chk, errorf = lol.Main.Log, lol.Main.Check, lol.Main.Errorf
)
@@ -14,7 +14,6 @@ func TestP_Marshal_Unmarshal(t *testing.T) {
	if _, err = rand.Read(pk); chk.E(err) {
		t.Fatal(err)
	}
	log.I.S(pk)
	var p *P
	if p, err = New(pk); chk.E(err) {
		t.Fatal(err)
@@ -23,7 +22,6 @@ func TestP_Marshal_Unmarshal(t *testing.T) {
	if o, err = p.Marshal(nil); chk.E(err) {
		t.Fatal(err)
	}
	log.I.F("%d %s", len(o), o)
	p2 := &P{}
	var rem []byte
	if rem, err = p2.Unmarshal(o); chk.E(err) {
@@ -32,7 +30,6 @@ func TestP_Marshal_Unmarshal(t *testing.T) {
	if len(rem) > 0 {
		log.I.F("%d %s", len(rem), rem)
	}
	log.I.S(p2.PublicKey)
	if !bytes.Equal(pk, p2.PublicKey) {
		t.Fatal("public key did not encode/decode faithfully")
	}
@@ -14,7 +14,6 @@ func TestS_Marshal_Unmarshal(t *testing.T) {
	if _, err = rand.Read(sig); chk.E(err) {
		t.Fatal(err)
	}
	log.I.S(sig)
	var s *S
	if s, err = New(sig); chk.E(err) {
		t.Fatal(err)
@@ -23,7 +22,6 @@ func TestS_Marshal_Unmarshal(t *testing.T) {
	if o, err = s.Marshal(nil); chk.E(err) {
		t.Fatal(err)
	}
	log.I.F("%d %s", len(o), o)
	p2 := &S{}
	var rem []byte
	if rem, err = p2.Unmarshal(o); chk.E(err) {
@@ -32,7 +30,6 @@ func TestS_Marshal_Unmarshal(t *testing.T) {
	if len(rem) > 0 {
		log.I.F("%d %s", len(rem), rem)
	}
	log.I.S(p2.Signature)
	if !bytes.Equal(sig, p2.Signature) {
		t.Fatal("signature did not encode/decode faithfully")
	}
pkg/tag/log.go (new file, 9 lines)

@@ -0,0 +1,9 @@
package tag

import (
	"protocol.realy.lol/pkg/lol"
)

var (
	log, chk, errorf = lol.Main.Log, lol.Main.Check, lol.Main.Errorf
)
pkg/tag/tag.go (new file, 135 lines)

@@ -0,0 +1,135 @@
// Package tag defines a format for event tags that follows these rules:
//
// The first field is the key; it is to be hashed using Blake2b and truncated to 8 bytes for indexing. These keys should
// not be long, and thus will not have any collisions as a truncated hash. The terminal byte of a key is the colon `:`
//
// Subsequent fields are separated by semicolon ';' and they can contain any data except a semicolon or newline.
//
// The tag is terminated by a newline.
package tag

import (
	"bytes"
)

type fields [][]byte

type T struct{ fields }

func New[V ~[]byte | ~string](v ...V) (t *T, err error) {
	t = new(T)
	var k []byte
	if k, err = ValidateKey([]byte(v[0])); err != nil {
		return
	}
	v = v[1:]
	t.fields = append(t.fields, k)
	for i, val := range v {
		var b []byte
		if b, err = ValidateField(val, i); chk.E(err) {
			return
		}
		t.fields = append(t.fields, b)
	}
	return
}
// ValidateKey checks that the key is valid. Keys must follow the same rules as identifiers in most languages:
//
// - first character is alphabetic [a-zA-Z]
// - subsequent characters can be alphanumeric and underscore [a-zA-Z0-9_]
//
// If the key is not valid this function returns a nil value and an error.
func ValidateKey[V ~[]byte | ~string](key V) (k []byte, err error) {
	if len(key) < 1 {
		return
	}
	kb := []byte(key)
	// the first character must be alphabetic
	if !(kb[0] >= 'a' && kb[0] <= 'z' || kb[0] >= 'A' && kb[0] <= 'Z') {
		err = errorf.E("invalid first character in tag key '%c': \"%s\"", kb[0], kb)
		return
	}
	for i, b := range kb[1:] {
		switch {
		case b >= 'a' && b <= 'z', b >= 'A' && b <= 'Z', b >= '0' && b <= '9', b == '_':
		default:
			err = errorf.E("invalid character in tag key at index %d '%c': \"%s\"", i+1, b, kb)
			return
		}
	}
	// if we got to here, the whole string is compliant
	k = kb
	return
}
func ValidateField[V ~[]byte | ~string](f V, i int) (k []byte, err error) {
	b := []byte(f)
	if bytes.Contains(b, []byte(";")) {
		err = errorf.E("field %d cannot contain ';': '%s'", i, b)
		return
	}
	if bytes.Contains(b, []byte("\n")) {
		err = errorf.E("field %d cannot contain '\\n': '%s'", i, b)
		return
	}
	// if we got to here, the whole string is compliant
	k = b
	return
}
func (t *T) Marshal(dst []byte) (result []byte, err error) {
	result = dst
	if len(t.fields) == 0 {
		return
	}
	for i, field := range t.fields {
		result = append(result, field...)
		if i == 0 {
			result = append(result, ':')
		} else if i == len(t.fields)-1 {
			result = append(result, '\n')
		} else {
			result = append(result, ';')
		}
	}
	return
}
func (t *T) Unmarshal(data []byte) (rem []byte, err error) {
	var i int
	var v byte
	var dat []byte
	// first find the end
	for i, v = range data {
		if v == '\n' {
			dat, rem = data[:i], data[i+1:]
			break
		}
	}
	if len(dat) == 0 {
		err = errorf.E("invalid empty tag")
		return
	}
	for i, v = range dat {
		if v == ':' {
			f := dat[:i]
			dat = dat[i+1:]
			t.fields = append(t.fields, f)
			break
		}
	}
	for len(dat) > 0 {
		for i, v = range dat {
			if v == ';' {
				t.fields = append(t.fields, dat[:i])
				dat = dat[i+1:]
				break
			}
			if i == len(dat)-1 {
				t.fields = append(t.fields, dat)
				return
			}
		}
	}
	return
}
pkg/tag/tag_test.go (new file, 27 lines)

@@ -0,0 +1,27 @@
package tag

import (
	"testing"
)

func TestT_Marshal_Unmarshal(t *testing.T) {
	var err error
	var t1 *T
	if t1, err = New("reply", "e:l_T9Of4ru-PLGUxxvw3SfZH0e6XW11VYy8ZSgbcsD9Y",
		"realy.example.com/repo"); chk.E(err) {
		t.Fatal(err)
	}
	var tb []byte
	if tb, err = t1.Marshal(nil); chk.E(err) {
		t.Fatal(err)
	}
	t2 := new(T)
	var rem []byte
	if rem, err = t2.Unmarshal(tb); chk.E(err) {
		t.Fatal(err)
	}
	if len(rem) > 0 {
		log.I.F("%s", rem)
		t.Fatal("remainder after tag should have been nothing")
	}
}
pkg/tags/log.go (new file, 9 lines)

@@ -0,0 +1,9 @@
package tags

import (
	"protocol.realy.lol/pkg/lol"
)

var (
	log, chk, errorf = lol.Main.Log, lol.Main.Check, lol.Main.Errorf
)
pkg/tags/tags.go (new file, 56 lines)

@@ -0,0 +1,56 @@
package tags

import (
	"bytes"
	"fmt"

	"protocol.realy.lol/pkg/tag"
)

const Sentinel = "tags:\n"

var SentinelBytes = []byte(Sentinel)

type tags []*tag.T

type T struct{ tags }

func New(v ...*tag.T) *T { return &T{tags: v} }

func (t *T) Marshal(dst []byte) (result []byte, err error) {
	result = dst
	result = append(result, Sentinel...)
	for _, tt := range t.tags {
		if result, err = tt.Marshal(result); chk.E(err) {
			return
		}
	}
	result = append(result, '\n')
	return
}

func (t *T) Unmarshal(data []byte) (rem []byte, err error) {
	if len(data) < len(Sentinel) {
		err = fmt.Errorf("bytes too short to contain tags")
		return
	}
	var dat []byte
	if bytes.Equal(data[:len(Sentinel)], SentinelBytes) {
		dat = data[len(Sentinel):]
	}
	if len(dat) < 1 {
		return
	}
	for len(dat) > 0 {
		if len(dat) == 1 && dat[0] == '\n' {
			break
		}
		// log.I.S(dat)
		tt := new(tag.T)
		if dat, err = tt.Unmarshal(dat); chk.E(err) {
			return
		}
		t.tags = append(t.tags, tt)
	}
	return
}
pkg/tags/tags_test.go (new file, 45 lines)

@@ -0,0 +1,45 @@
package tags

import (
	"bytes"
	"testing"

	"protocol.realy.lol/pkg/tag"
)

func TestT_Marshal_Unmarshal(t *testing.T) {
	var tegs = [][]string{
		{"reply", "e:l_T9Of4ru-PLGUxxvw3SfZH0e6XW11VYy8ZSgbcsD9Y", "realy.example.com/repo1"},
		{"root", "e:l_T9Of4ru-PLGUxxvw3SfZH0e6XW11VYy8ZSgbcsD9Y", "realy.example.com/repo2"},
		{"mention", "p:JMkZVnu9QFplR4F_KrWX-3chQsklXZq_5I6eYcXfz1Q", "realy.example.com/repo3"},
	}
	var err error
	var tgs []*tag.T
	for _, teg := range tegs {
		var tg *tag.T
		if tg, err = tag.New(teg...); chk.E(err) {
			t.Fatal(err)
		}
		tgs = append(tgs, tg)
	}
	t1 := New(tgs...)
	var m1 []byte
	if m1, err = t1.Marshal(nil); chk.E(err) {
		t.Fatal(err)
	}
	t2 := new(T)
	var rem []byte
	if rem, err = t2.Unmarshal(m1); chk.E(err) {
		t.Fatal(err)
	}
	if len(rem) > 0 {
		t.Fatalf("%s", rem)
	}
	var m2 []byte
	if m2, err = t2.Marshal(nil); chk.E(err) {
		t.Fatal(err)
	}
	if !bytes.Equal(m1, m2) {
		t.Fatalf("not equal:\n%s\n%s", m1, m2)
	}
}
pkg/timestamp/base10k.txt (new file, 1 line)

File diff suppressed because one or more lines are too long
pkg/timestamp/gen/log.go (new file, 9 lines)

@@ -0,0 +1,9 @@
package main

import (
	"protocol.realy.lol/pkg/lol"
)

var (
	log, chk, errorf = lol.Main.Log, lol.Main.Check, lol.Main.Errorf
)
pkg/timestamp/gen/pregen.go (new file, 17 lines)

@@ -0,0 +1,17 @@
package main

import (
	"fmt"
	"os"
)

func main() {
	var err error
	var fh *os.File
	if fh, err = os.Create("base10k.txt"); chk.E(err) {
		panic(err)
	}
	for i := range 10000 {
		fmt.Fprintf(fh, "%04d", i)
	}
}
pkg/timestamp/log.go (new file, 9 lines)

@@ -0,0 +1,9 @@
package timestamp

import (
	"protocol.realy.lol/pkg/lol"
)

var (
	log, chk, errorf = lol.Main.Log, lol.Main.Check, lol.Main.Errorf
)
pkg/timestamp/timestamp.go (new file, 109 lines)

@@ -0,0 +1,109 @@
package timestamp

import (
	_ "embed"

	"golang.org/x/exp/constraints"
)

// run this to regenerate (pointlessly) the base 10 array of 4 places per entry
//go:generate go run ./gen/.

//go:embed base10k.txt
var base10k []byte

const base = 10000

type T struct {
	N uint64
}

func New[V constraints.Integer](n V) *T { return &T{uint64(n)} }

func (n *T) Uint64() uint64 { return n.N }
func (n *T) Int64() int64   { return int64(n.N) }
func (n *T) Uint16() uint16 { return uint16(n.N) }

var powers = []*T{
	{1},
	{1_0000},
	{1_0000_0000},
	{1_0000_0000_0000},
	{1_0000_0000_0000_0000},
}

const zero = '0'
const nine = '9'

func (n *T) Marshal(dst []byte) (b []byte) {
	nn := n.N
	b = dst
	if n.N == 0 {
		b = append(b, '0')
		return
	}
	var i int
	var trimmed bool
	k := len(powers)
	for k > 0 {
		k--
		q := n.N / powers[k].N
		if !trimmed && q == 0 {
			continue
		}
		offset := q * 4
		bb := base10k[offset : offset+4]
		if !trimmed {
			for i = range bb {
				if bb[i] != '0' {
					bb = bb[i:]
					trimmed = true
					break
				}
			}
		}
		b = append(b, bb...)
		n.N = n.N - q*powers[k].N
	}
	n.N = nn
	return
}

// Unmarshal reads a string, which must be a positive integer no larger than math.MaxUint64,
// stopping at the first non-numeric character.
//
// Note that leading zeros are not considered valid, but there is basically no such thing as machine
// generated JSON integers with leading zeroes. Until this is disproven, this is the fastest way
// to read a positive json integer; a leading zero is decoded as a zero, and the remainder
// returned.
func (n *T) Unmarshal(b []byte) (r []byte, err error) {
	if len(b) < 1 {
		err = errorf.E("zero length number")
		return
	}
	var sLen int
	if b[0] == zero {
		r = b[1:]
		n.N = 0
		return
	}
	// count the digits
	for ; sLen < len(b) && b[sLen] >= zero && b[sLen] <= nine && b[sLen] != ','; sLen++ {
	}
	if sLen == 0 {
		err = errorf.E("zero length number")
		return
	}
	if sLen > 20 {
		err = errorf.E("too big number for uint64")
		return
	}
	// the length of the string found
	r = b[sLen:]
	b = b[:sLen]
	for _, ch := range b {
		ch -= zero
		n.N = n.N*10 + uint64(ch)
	}
	return
}
pkg/timestamp/timestamp_test.go (new file, 77 lines)

@@ -0,0 +1,77 @@
package timestamp

import (
	"math"
	"strconv"
	"testing"

	"lukechampine.com/frand"
)

func TestMarshalUnmarshal(t *testing.T) {
	b := make([]byte, 0, 8)
	var rem []byte
	var n *T
	var err error
	for _ = range 10000000 {
		n = New(uint64(frand.Intn(math.MaxInt64)))
		b = n.Marshal(b)
		m := New(0)
		if rem, err = m.Unmarshal(b); chk.E(err) {
			t.Fatal(err)
		}
		if n.N != m.N {
			t.Fatalf("failed to convert to int64 at %d %s %d", n.N, b, m.N)
		}
		if len(rem) > 0 {
			t.Fatalf("leftover bytes after converting back: '%s'", rem)
		}
		b = b[:0]
	}
}

func BenchmarkByteStringToInt64(bb *testing.B) {
	b := make([]byte, 0, 19)
	var i int
	const nTests = 10000000
	testInts := make([]*T, nTests)
	for i = range nTests {
		testInts[i] = New(frand.Intn(math.MaxInt64))
	}
	bb.Run("Marshal", func(bb *testing.B) {
		bb.ReportAllocs()
		for i = 0; i < bb.N; i++ {
			n := testInts[i%10000]
			b = n.Marshal(b)
			b = b[:0]
		}
	})
	bb.Run("Itoa", func(bb *testing.B) {
		bb.ReportAllocs()
		var s string
		for i = 0; i < bb.N; i++ {
			n := testInts[i%10000]
			s = strconv.Itoa(int(n.N))
			_ = s
		}
	})
	bb.Run("MarshalUnmarshal", func(bb *testing.B) {
		bb.ReportAllocs()
		m := New(0)
		for i = 0; i < bb.N; i++ {
			n := testInts[i%10000]
			b = m.Marshal(b)
			_, _ = n.Unmarshal(b)
			b = b[:0]
		}
	})
	bb.Run("ItoaAtoi", func(bb *testing.B) {
		bb.ReportAllocs()
		var s string
		for i = 0; i < bb.N; i++ {
			n := testInts[i%10000]
			s = strconv.Itoa(int(n.N))
			_, _ = strconv.Atoi(s)
		}
	})
}
@@ -14,9 +14,9 @@ zap mleku: ⚡️mleku@getalby.com

Inspired by the event bus architecture of https://github.com/nostr-protocol[nostr] but redesigned to avoid the
serious deficiencies of that protocol for both developers and users.

* link:./doc/why.adoc[why]
* link:./doc/events_queries.adoc[events and queries]
* link:./doc/relays.adoc[relays]
* link:./relays/readme.md[reference relays]
* link:./clients/readme.md[reference clients]
* link:./pkg/readme.md[GO libraries]