Compare commits
22 Commits
| Author | SHA1 | Date | |
|---|---|---|---|
| 4862dcf898 | |||
| 64bd999cdd | |||
| ec19703727 | |||
| 91447950c1 | |||
| f166402702 | |||
| e639428ab7 | |||
| 87775b6324 | |||
| 1ffa18099a | |||
| 58920da99b | |||
| 7505a378ff | |||
| 29465ff180 | |||
| 895bdb6928 | |||
| ebad892ece | |||
| d2e4a75eb3 | |||
| 2f665fae3e | |||
| 03d1529651 | |||
| 59b7c81ed6 | |||
| 60d3805150 | |||
| e0bc7fd82a | |||
| abf02e9e0d | |||
| 9914469cc2 | |||
| 1c4c4e1d27 |
3
clients/readme.adoc
Normal file
3
clients/readme.adoc
Normal file
@@ -0,0 +1,3 @@
|
||||
= clients
|
||||
|
||||
here be some reference clients for various sub-protocols
|
||||
@@ -1,3 +0,0 @@
|
||||
# clients
|
||||
|
||||
here be some reference clients for various sub-protocols
|
||||
3
directories/readme.adoc
Normal file
3
directories/readme.adoc
Normal file
@@ -0,0 +1,3 @@
|
||||
= directories
|
||||
|
||||
directories are relays that store data about users, including access privilege lists for other relays
|
||||
@@ -1,152 +0,0 @@
|
||||
= REALY protocol event/query specification
|
||||
|
||||
JSON is awful, and space inefficient, and complex to parse due to its intolerance of terminal commas and annoying to work with because of its retarded, multi-standards of string escaping.
|
||||
|
||||
Line structured documents are much more readily amenable to human reading and editing, and `\n`/`;`/`:` is more efficient than `","` as an item separator. Data structures can be much more simply expressed in a similar way as how they are in programming languages.
|
||||
|
||||
It is one of the guiding principles of the Unix philosophy to keep data in plain text, human readable format wherever possible, forcing the interposition of a parser just for humans to read the data adds extra brittleness to a protocol.
|
||||
|
||||
REALY protocol format is extremely simple and should be trivial to parse in any programming language with basic string slicing operators.
|
||||
|
||||
---
|
||||
|
||||
== Base64 Encoding
|
||||
|
||||
To save space and eliminate the need for ugly `=` padding characters, we invoke link:https://datatracker.ietf.org/doc/html/rfc4648#section-3.2[RFC 4648 section 3.2] for the case of using base64 URL encoding without padding because we know the data length. In this case, it is used for IDs and pubkeys (32 bytes payload each, 43 characters base64 raw URL encoded) and signatures (64 bytes payload, 86 characters base64 raw URL encoded) - the further benefit here is the exact same string can be used in HTTP GET parameters `?key=value&...` context. The standard `=` padding would break this usage as well.
|
||||
|
||||
For ease of human usage, also, it is recommended when the value is printed in plain text that it be on its own line so triple click catches all of it including the normally word-wise separated `-` hyphen/minus character, as follows:
|
||||
|
||||
CF4I5dXYPZ_lu2pYRjey1QMDmgNJEyT-MM8Vvj6EnZM
|
||||
|
||||
For those who can't find a "raw" codec for base64, the 32 byte length has 1`=` pad suffix and the 64 byte length has 2: `==` and this can be trimmed off and added back to conform to this requirement. Due to the fact that potentially there can be hundreds if not thousands of these in event content and tag fields the benefit can be quite great, as well as the benefit of being able to use these codes also in URL parameter values.
|
||||
|
||||
== Sockets and HTTP
|
||||
|
||||
Only subscriptions require server push messaging pattern, thus all other queries in REALY can be done with simple HTTP POST requests.
|
||||
|
||||
A relay should respond to a `subscribe` request by upgrading from http to a websocket.
|
||||
|
||||
It is unnecessary messages and work to use websockets for queries that match the HTTP request/response pattern, and by only requiring sockets for APIs that actually need server initiated messaging, the complexity of the relay is greatly reduced.
|
||||
|
||||
There can be a separate subscription type also, where there is delivering the IDs only, or forwarding the whole event.
|
||||
|
||||
=== HTTP Authentication
|
||||
|
||||
For the most part, all queries and submissions must be authenticated in order to enable a REALY relay to allow access.
|
||||
|
||||
To enable this, a suffix is added to messages with the following format:
|
||||
|
||||
`<message payload>\n` // all messages must be terminated with a newline
|
||||
|
||||
`<request URL>\n` // because we aren't signing also on the http header
|
||||
|
||||
`<unix timestamp in decimal ascii>:<public key of signer>:<signature>\n`
|
||||
|
||||
For reasons of security, a relay should not allow a time skew in the timestamp of more than 15 seconds.
|
||||
|
||||
The signature is upon the Blake 2b message hash of everything up to the semicolon preceding it, and only relates to the HTTP POST payload, not including the header.
|
||||
|
||||
Even subscription messages should be signed the same way, to avoid needing a secondary protocol. "open" relays that have no access control (which is retarded, but just to be complete) must still require this authentication message, but simply the client can use one-shot keys to sign with, as it also serves as a HMAC to validate the consistency of the request data, since it is based on the hash.
|
||||
|
||||
== Events
|
||||
|
||||
The format of events is as follows - the monospace segments are the exact text, including the necessary linebreak characters, the rest is descriptive.
|
||||
|
||||
---
|
||||
|
||||
`<type name>\n` // can be anything, hierarchic names like note/html note/md are possible, or type.subtype or whatever
|
||||
|
||||
`<pubkey>\n` // encoded in URL-base64 with the padding `=` elided
|
||||
|
||||
`<unix second precision timestamp in decimal ascii>\n`
|
||||
|
||||
`tags:\n`
|
||||
|
||||
`key:value;extra;...\n` // zero or more line separated, fields cannot contain a semicolon, end with newline instead of semicolon, key lowercase alphanumeric, first alpha, no whitespace or symbols, only key and following `:` are mandatory
|
||||
|
||||
`\n` // tags end with a double linebreak
|
||||
|
||||
`content:\n` // literally this word on one line *directly* after the newline of the previous
|
||||
|
||||
`<content>\n` // any number of further line breaks, last line is signature, everything before signature line is part of the canonical hash
|
||||
|
||||
-> The canonical form is the above, creating the message hash that is generated with Blake 2b <-
|
||||
|
||||
---
|
||||
|
||||
`<ed25519 signature encoded in URL-base64>\n` // this field would have two padding chars `==`, these should be elided
|
||||
|
||||
---
|
||||
|
||||
The binary data - Event Ids, Pubkeys and Signatures are encoded in raw base64 URL encoding (without padding), Signatures are 86 characters long, with the two padding characters elided `==`, Ids and Pubkeys are 43 characters long, with a single padding character elided `=`.
|
||||
|
||||
The database stored form of this event should make use of an event ID hash to monotonic serial ID number as the key to associating the filter indexes of an event store.
|
||||
|
||||
Event ID hashes will be encoded in URL-base64 where used in tags or mentioned in content with the prefix `e:`. Public keys must be prefixed with `p:` Tag keys should be intelligible words and a specification for their structure should be defined by users of them and shared with other REALY devs.
|
||||
|
||||
Indexing tag keys should be done with a truncated Blake2b hash cut at 8 bytes in the event store, keys should be short and thus the chances of collisions are practically zero.
|
||||
|
||||
== Publishing
|
||||
|
||||
Submitting an event to be stored is the same as a result sent from an Event Id query except with the type of operation inteded: `store\n` to store an event, `replace:<Event Id>\n` to replace an existing event and `relay\n` to not store but send to subscribers with open matching filters. Replace will not be accepted if the message type and pubkey are different to the original that is specified.
|
||||
|
||||
The use of specific different types of store requests eliminates the complexity of defining event types as replaceable, by making this intent explicit. A relay can also only allow one kind, such as a pure relay, which only accepts `relay` requests but neither `store` nor `replace`.
|
||||
|
||||
An event is then acknowledged to be stored or rejected with a message `ok:<true/false>;<Event Id>;<reason type>:human readable part` where the reason type is one of a set of common types to indicate the reason for the false
|
||||
|
||||
Events that are returned have the `<subscription Id>:<Event Id>\n` as the first line, and then the event in the format described above afterwards.
|
||||
|
||||
== Queries
|
||||
|
||||
There is three types of queries in REALY:
|
||||
|
||||
=== Filter
|
||||
|
||||
A filter has one or more of the fields listed below, and headed with `filter`:
|
||||
|
||||
----
|
||||
filter:<subscription Id>\n
|
||||
pubkeys:<one>;<two>;...\n // these match as OR
|
||||
timestamp:<since>;<until\n // either can be empty but not both, omit line for this, both are inclusive
|
||||
tags:
|
||||
<key>:<value>\n // indexes are not required or used for more than the key and value
|
||||
... // several matches can be present, they will act as OR
|
||||
----
|
||||
|
||||
The result returned from this is a newline separated list of event ID hashes encoded in base64, a following Event Id search is required to retrieve them. This obviates the need for pagination as the 45 bytes per event per result is far less than sending the whole event and the client is then free to paginate how they like without making for an onerous implementation requirement or nebulous result limit specification.
|
||||
|
||||
The results must be in reverse chronological order so the client knows it can paginate them from newest to oldest as required by the user interface.
|
||||
|
||||
If instead of `filter\n` at the top there is `subscribe:<subscription Id>\n` the relay should return any events it finds the Id for and then subsequently will forward the Event Id of any new matching event that comes in until the client sends a `close:<subscription Id>\n` message.
|
||||
|
||||
Once all stored events are returned, the relay will send `end:<subscription Id>\n` to notify the client that here after will only be events that just arrived.
|
||||
|
||||
`subscribe_full:<subscription Id>` should be used to request the events be directly delivered instead of just the event IDs associated with the subscription filter.
|
||||
|
||||
In the case of events that are published via the `relay` command, it is necessary that therefore there must be one or more "chanserv" style relays also connected to the relay to whom the clients know they can request such events, and a "nickserv" type specialized relay would need to exist also for creating access whitelists - by compiling singular edits to these lists and using a subscription mechanism to notify such clients of the need to update their ACL.
|
||||
|
||||
=== Text
|
||||
|
||||
A text search is just `search:<subscription Id>:` followed by a series of space separated tokens if the event store has a full text index, terminated with a newline.
|
||||
|
||||
=== Event Id
|
||||
|
||||
Event requests are as follows:
|
||||
|
||||
----
|
||||
events:<subscription Id>\n
|
||||
<event ID one>\n
|
||||
...
|
||||
----
|
||||
|
||||
Unlike in event tags and content, the `e:` prefix is unnecessary. The previous two query types only have lists of events in return, and to fetch the event a client then must send an `events` request.
|
||||
|
||||
Normally clients will gather a potentially longer list of events and then send Event Id queries in segments according to the requirements of the user interface.
|
||||
|
||||
The results are returned as a series as follows, for each item returned:
|
||||
|
||||
----
|
||||
event:<subscription Id>:<Event Id>\n
|
||||
<event>\n
|
||||
...
|
||||
----
|
||||
@@ -1,26 +0,0 @@
|
||||
= relays
|
||||
|
||||
A key design principle employed in REALY is that of relay specialization.
|
||||
|
||||
Instead of making a relay a hybrid event store and router, in REALY a relay does only one thing. Thus there can be
|
||||
|
||||
- a simple event repository that only understands queries to fetch a list of events by ID,
|
||||
- a relay that only indexes and keeps a space/time limited cache of events to process filters
|
||||
- a relay that only keeps a full text search index and a query results cache
|
||||
- a relay that only accepts list change CRDT events such as follow, join/create/delete/leave group, block, delete, report and compiles these events into single lists that are accessible to another relay that can use these compiled lists to control access either via explicit lists or by matching filters
|
||||
- a relay that stores and fetches media, including being able to convert and cache such as image size and formats
|
||||
- ...and many others are possible
|
||||
|
||||
By constraining the protocol interoperability compliance down to small simple sub-protocols the ability for clients to maintain currency with other clients and with relays is greatly simplified, without gatekeepers.
|
||||
|
||||
In addition, it should be normalized that relays can include clients that query other specialist relays, especially for such things as caching events. Thus one relay can be queried for a filter index, and the list of Event Ids returned can then be fetched from another relay that specialises in storing events and returning them on request by lists of Event Ids, and still other relays could store media files and be able to convert them on demand.
|
||||
|
||||
For this reason, instead of a single centralised mechanism, aside from the basic specifications as you can see in link:./events_queries.adoc[REALY protocol event/query specification] it is possible to add more to this list without needing to negotiate to have this specification added to this repository, though once it comes into use it can be done.
|
||||
|
||||
Along with the use of human-readable type identifiers for documents and the almost completely human-composable event encoding, the specification of REALY is not dependent on any kind of authoritative gatekeeping organisation, but instead organisations can add these to their own specifications lists as they see fit, eliminating a key problem with the operation of the nostr protocol.
|
||||
|
||||
There need not be bureaucratic RFC style specifications, but instead use human-readable names and be less formally described, the formality improving as others adopt it and expand or refine it.
|
||||
|
||||
Thus also it is recommended that implementations of any or all REALY servers and clients should keep a copy of the specification documents found in other implementations and converge them to each other as required when their repositories update support to changes and new sub-protocols.
|
||||
|
||||
Lastly, as part of making this ecosystem as heterogeneous and decentralized as possible, the notion of relay operators subscribing to other relay services such as media storage/conversion specialists or event archivists and focusing each relay service on simple, single purposes and protocols enables a more robust and failure resistant ecosystem where multiple providers can compete for clients and to be suppliers for other providers and replicate data and potentially enable specialisations like archival data access for providers that aggregate data from multiple other providers.
|
||||
28
doc/why.adoc
28
doc/why.adoc
@@ -1,28 +0,0 @@
|
||||
= why REALY?
|
||||
|
||||
Since the introduction of the idea of a general "public square" style social network as seen with Facebook and Twitter, the whole world has been overcome by something of a plague of mind control brainwashing cults.
|
||||
|
||||
Worse than "Beatlemania" people are being lured into the control of various kinds of "influencers" and adopting in-group words and "challenges" that are more often harmful to the people than actually beneficial like an exercise challenge might be.
|
||||
|
||||
Nostr protocol is a super simple event bus architecture, blended with a post office protocol, and due to various reasons related to the recent buyout of Twitter by Elon Musk, who plainly wants to turn it into the Western version of Wechat, it has become plagued with bad subprotocol designs that negate the benefits of self sovereign identity (elliptic curve asymmetric cryptography) and a dominant form of client that is essentially a travesty of Twitter itself.
|
||||
|
||||
REALY is being designed with the lessons learned from Nostr and the last 30 years of experience of internet communications protocols to aim to resist this kind of Embrace/Extend/Extinguish protocol that has repeatedly been performed on everything from email, to RSS, to threaded forums and instant messaging, by starting with the distilled essence of how these protocols should work so as to not be so easily vulnerable to being coopted by what is essentially in all but name the same centralised event bus architecture of social networks like Facebook and Twitter.
|
||||
|
||||
The main purposes that REALY will target are:
|
||||
|
||||
* synchronous instant messaging protocols with IRC style nickserv and chanserv permissions and persistence, built from the ground up to take advantage of the cryptographic identities created by BIP-340 signatures, with an intuitive threaded structure that allows users to peruse a larger discussion without the problem of threads of discussion breaking the top level structure
|
||||
* structured document repositories primarily for text media, as a basis for collaborative documentation and literature collections, and software source code (breaking out of the filesystem tree structure to permit much more flexible ways of organising code)
|
||||
* persistent threaded discussion forums for longer form messages than the typical single sentence/paragraph of instant messaging
|
||||
* simple cross-relay data query protocol that enables minimising the data cost of traffic to clients
|
||||
* push style notification systems that can be programmed by the users' clients to respond to any kind of event breadcast to a relay
|
||||
|
||||
A key concept in the REALY architecture is that of relays being a heteregenous group of data repositories and relaying systems that are built specific to purpose, such as
|
||||
|
||||
- a chat relay, which does not store any messages but merely bounces messages around ot subscribers,
|
||||
- a document repository, which provides read access to data with full text search capability, that can ne specialised for a singular data format (eg markdown, eg mediawiki, eg code), a threaded, moderated forum, and others,
|
||||
- a directory relay which stores and distributes user metadata such as profiles, relay lists, follows, mutes, deletes and reports
|
||||
- an authentication relay, which can be sent messages to add or remove users from access whitelists and blacklists, that provides this state data to relays it is used by
|
||||
|
||||
A second key concept in REALY is the integration of Lightning Network payments - again mostly copying what is done with Nostr but enabling both pseudonymous micro-accounts and long term subscription styles of access payment, and the promotion of a notion of user-pays - where all data writing must be charged for, and most reading must be paid for.
|
||||
|
||||
Lightning is perfect for this because it can currently cope with enormous volumes of payments with mere seconds of delay for settlement and a granularity of denomination that lends itself to the very low cost of delivering a one-time service, or maintaining a micro-account.
|
||||
9
go.mod
9
go.mod
@@ -6,11 +6,16 @@ require (
|
||||
github.com/davecgh/go-spew v1.1.1
|
||||
github.com/fatih/color v1.18.0
|
||||
go.uber.org/atomic v1.11.0
|
||||
golang.org/x/exp v0.0.0-20250207012021-f9890c6ad9f3
|
||||
lukechampine.com/frand v1.5.1
|
||||
realy.lol v1.7.13
|
||||
)
|
||||
|
||||
require (
|
||||
github.com/klauspost/cpuid/v2 v2.2.9 // indirect
|
||||
github.com/mattn/go-colorable v0.1.14 // indirect
|
||||
github.com/mattn/go-isatty v0.0.20 // indirect
|
||||
github.com/stretchr/testify v1.10.0 // indirect
|
||||
golang.org/x/sys v0.29.0 // indirect
|
||||
github.com/templexxx/cpu v0.1.1 // indirect
|
||||
github.com/templexxx/xhex v0.0.0-20200614015412-aed53437177b // indirect
|
||||
golang.org/x/sys v0.30.0 // indirect
|
||||
)
|
||||
|
||||
17
go.sum
17
go.sum
@@ -2,6 +2,8 @@ github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c
|
||||
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
|
||||
github.com/fatih/color v1.18.0 h1:S8gINlzdQ840/4pfAwic/ZE0djQEH3wM94VfqLTZcOM=
|
||||
github.com/fatih/color v1.18.0/go.mod h1:4FelSpRwEGDpQ12mAdzqdOukCy4u8WUtOY6lkT/6HfU=
|
||||
github.com/klauspost/cpuid/v2 v2.2.9 h1:66ze0taIn2H33fBvCkXuv9BmCwDfafmiIVpKV9kKGuY=
|
||||
github.com/klauspost/cpuid/v2 v2.2.9/go.mod h1:rqkxqrZ1EhYM9G+hXH7YdowN5R5RGN6NK4QwQ3WMXF8=
|
||||
github.com/mattn/go-colorable v0.1.14 h1:9A9LHSqF/7dyVVX6g0U9cwm9pG3kP9gSzcuIPHPsaIE=
|
||||
github.com/mattn/go-colorable v0.1.14/go.mod h1:6LmQG8QLFO4G5z1gPvYEzlUgJ2wF+stgPZH1UqBm1s8=
|
||||
github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY=
|
||||
@@ -10,10 +12,21 @@ github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZb
|
||||
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
|
||||
github.com/stretchr/testify v1.10.0 h1:Xv5erBjTwe/5IxqUQTdXv5kgmIvbHo3QQyRwhJsOfJA=
|
||||
github.com/stretchr/testify v1.10.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
|
||||
github.com/templexxx/cpu v0.0.1/go.mod h1:w7Tb+7qgcAlIyX4NhLuDKt78AHA5SzPmq0Wj6HiEnnk=
|
||||
github.com/templexxx/cpu v0.1.1 h1:isxHaxBXpYFWnk2DReuKkigaZyrjs2+9ypIdGP4h+HI=
|
||||
github.com/templexxx/cpu v0.1.1/go.mod h1:w7Tb+7qgcAlIyX4NhLuDKt78AHA5SzPmq0Wj6HiEnnk=
|
||||
github.com/templexxx/xhex v0.0.0-20200614015412-aed53437177b h1:XeDLE6c9mzHpdv3Wb1+pWBaWv/BlHK0ZYIu/KaL6eHg=
|
||||
github.com/templexxx/xhex v0.0.0-20200614015412-aed53437177b/go.mod h1:7rwmCH0wC2fQvNEvPZ3sKXukhyCTyiaZ5VTZMQYpZKQ=
|
||||
go.uber.org/atomic v1.11.0 h1:ZvwS0R+56ePWxUNi+Atn9dWONBPp/AUETXlHW0DxSjE=
|
||||
go.uber.org/atomic v1.11.0/go.mod h1:LUxbIzbOniOlMKjJjyPfpl4v+PKK2cNJn91OQbhoJI0=
|
||||
golang.org/x/exp v0.0.0-20250207012021-f9890c6ad9f3 h1:qNgPs5exUA+G0C96DrPwNrvLSj7GT/9D+3WMWUcUg34=
|
||||
golang.org/x/exp v0.0.0-20250207012021-f9890c6ad9f3/go.mod h1:tujkw807nyEEAamNbDrEGzRav+ilXA7PCRAd6xsmwiU=
|
||||
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/sys v0.29.0 h1:TPYlXGxvx1MGTn2GiZDhnjPA9wZzZeGKHHmKhHYvgaU=
|
||||
golang.org/x/sys v0.29.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
|
||||
golang.org/x/sys v0.30.0 h1:QjkSwP/36a20jFYWkSue1YwXzLmsV5Gfq7Eiy72C1uc=
|
||||
golang.org/x/sys v0.30.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
|
||||
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
|
||||
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
|
||||
lukechampine.com/frand v1.5.1 h1:fg0eRtdmGFIxhP5zQJzM1lFDbD6CUfu/f+7WgAZd5/w=
|
||||
lukechampine.com/frand v1.5.1/go.mod h1:4VstaWc2plN4Mjr10chUD46RAVGWhpkZ5Nja8+Azp0Q=
|
||||
realy.lol v1.7.13 h1:+7kIa+RFmvdP23DRjj1GEe7+F7cmyl/xuII8QMwe7nM=
|
||||
realy.lol v1.7.13/go.mod h1:qtk9aklmo7dpX+uSj20ol4utmh3ldWXQOpyzH4dcRG8=
|
||||
|
||||
47
pkg/auth/auth.go
Normal file
47
pkg/auth/auth.go
Normal file
@@ -0,0 +1,47 @@
|
||||
package auth
|
||||
|
||||
import (
|
||||
"protocol.realy.lol/pkg/codec"
|
||||
"protocol.realy.lol/pkg/decimal"
|
||||
"protocol.realy.lol/pkg/pubkey"
|
||||
"protocol.realy.lol/pkg/signature"
|
||||
"protocol.realy.lol/pkg/url"
|
||||
)
|
||||
|
||||
type Message struct {
|
||||
Payload codec.C
|
||||
RequestURL *url.U
|
||||
Timestamp *decimal.T
|
||||
PubKey *pubkey.P
|
||||
Signature *signature.S
|
||||
}
|
||||
|
||||
func SignMessage(msg *Message) (m *Message, err error) {
|
||||
|
||||
return
|
||||
}
|
||||
|
||||
func (m *Message) Marshal(d []byte) (r []byte, err error) {
|
||||
r = d
|
||||
if r, err = m.Payload.Marshal(d); chk.E(err) {
|
||||
return
|
||||
}
|
||||
if r, err = m.RequestURL.Marshal(d); chk.E(err) {
|
||||
return
|
||||
}
|
||||
if r, err = m.Timestamp.Marshal(d); chk.E(err) {
|
||||
return
|
||||
}
|
||||
if r, err = m.PubKey.Marshal(d); chk.E(err) {
|
||||
return
|
||||
}
|
||||
if r, err = m.Signature.Marshal(d); chk.E(err) {
|
||||
return
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
func (m *Message) Unmarshal(d []byte) (r []byte, err error) {
|
||||
|
||||
return
|
||||
}
|
||||
1
pkg/auth/auth_test.go
Normal file
1
pkg/auth/auth_test.go
Normal file
@@ -0,0 +1 @@
|
||||
package auth
|
||||
@@ -1,4 +1,4 @@
|
||||
package timestamp
|
||||
package auth
|
||||
|
||||
import (
|
||||
"protocol.realy.lol/pkg/lol"
|
||||
@@ -6,8 +6,8 @@ package codec
|
||||
type C interface {
|
||||
// Marshal data by appending it to the provided destination, and return the
|
||||
// resultant slice.
|
||||
Marshal(dst []byte) (result []byte, err error)
|
||||
Marshal(dst []byte) (r []byte, err error)
|
||||
// Unmarshal the next expected data element from the provided slice and return
|
||||
// the remainder after the expected separator.
|
||||
Unmarshal(data []byte) (rem []byte, err error)
|
||||
Unmarshal(data []byte) (r []byte, err error)
|
||||
}
|
||||
|
||||
@@ -2,53 +2,65 @@ package content
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"io"
|
||||
|
||||
"protocol.realy.lol/pkg/decimal"
|
||||
)
|
||||
|
||||
// C is raw content bytes of a message. This can contain anything but when it is
|
||||
// unmarshalled it is assumed that the last line (content between the second
|
||||
// last and last line break) is not part of the content, as this is where the
|
||||
// signature is placed.
|
||||
//
|
||||
// The only guaranteed property of an encoded content.C is that it has two
|
||||
// newline characters, one at the very end, and a second one before it that
|
||||
// demarcates the end of the actual content. It can be entirely binary and mess
|
||||
// up a terminal to render the unsanitized possible control characters.
|
||||
// C is raw content bytes of a message.
|
||||
type C struct{ Content []byte }
|
||||
|
||||
// Marshal just writes the provided data with a `content:\n` prefix and adds a
|
||||
// terminal newline.
|
||||
func (c *C) Marshal(dst []byte) (result []byte, err error) {
|
||||
result = append(append(append(dst, "content:\n"...), c.Content...), '\n')
|
||||
func (c *C) Marshal(d []byte) (r []byte, err error) {
|
||||
r = append(d, "content:"...)
|
||||
if r, err = decimal.New(len(c.Content)).Marshal(r); chk.E(err) {
|
||||
return
|
||||
}
|
||||
r = append(r, '\n')
|
||||
// log.I.S(r)
|
||||
r = append(r, c.Content...)
|
||||
// r = append(r, '\n')
|
||||
// log.I.S(r)
|
||||
return
|
||||
}
|
||||
|
||||
var Prefix = "content:\n"
|
||||
var Prefix = "content:"
|
||||
|
||||
// Unmarshal expects the `content:\n` prefix and stops at the second last
|
||||
// Unmarshal expects the `content:<length>\n` prefix and stops at the second last
|
||||
// newline. The data between the second last and last newline in the data is
|
||||
// assumed to be a signature but it could be anything in another use case.
|
||||
func (c *C) Unmarshal(data []byte) (rem []byte, err error) {
|
||||
if !bytes.HasPrefix(data, []byte("content:\n")) {
|
||||
err = errorf.E("content prefix `content:\\n' not found: '%s'", data[:len(Prefix)+1])
|
||||
// assumed to be a signature, but it could be anything in another use case.
|
||||
//
|
||||
// It is necessary that any non-content elements after the content must be
|
||||
// parsed before returning to the content, because this is a
|
||||
func (c *C) Unmarshal(d []byte) (r []byte, err error) {
|
||||
if !bytes.HasPrefix(d, []byte(Prefix)) {
|
||||
err = errorf.E("content prefix `content:' not found: '%s'", d[:len(Prefix)])
|
||||
return
|
||||
}
|
||||
// trim off the prefix.
|
||||
data = data[len(Prefix):]
|
||||
// check that there is a last newline.
|
||||
if data[len(data)-1] != '\n' {
|
||||
err = errorf.E("input data does not end with newline")
|
||||
d = d[len(Prefix):]
|
||||
l := decimal.New(0)
|
||||
if d, err = l.Unmarshal(d); chk.E(err) {
|
||||
return
|
||||
}
|
||||
// we start at the second last, previous to the terminal newline byte.
|
||||
lastPos := len(data) - 2
|
||||
for ; lastPos >= len(Prefix); lastPos-- {
|
||||
// the content ends at the byte before the second last newline byte.
|
||||
if data[lastPos] == '\n' {
|
||||
break
|
||||
}
|
||||
// and then there must be a newline
|
||||
if d[0] != '\n' {
|
||||
err = errorf.E("must be newline after content:<length>:\n%n", d)
|
||||
return
|
||||
}
|
||||
c.Content = data[:lastPos]
|
||||
// return the remainder after the content-terminal newline byte.
|
||||
rem = data[lastPos+1:]
|
||||
d = d[1:]
|
||||
// log.I.S(l.Uint64(), d)
|
||||
if len(d) < int(l.N) {
|
||||
err = io.EOF
|
||||
return
|
||||
}
|
||||
c.Content = d[:l.N]
|
||||
r = d[l.N:]
|
||||
if r[0] != '\n' {
|
||||
err = errorf.E("must be newline after content:<length>\\n, got '%s' %x", c.Content[len(c.Content)-1])
|
||||
return
|
||||
}
|
||||
r = r[1:]
|
||||
return
|
||||
}
|
||||
}
|
||||
@@ -5,6 +5,8 @@ import (
|
||||
"crypto/rand"
|
||||
mrand "math/rand"
|
||||
"testing"
|
||||
|
||||
"protocol.realy.lol/pkg/separator"
|
||||
)
|
||||
|
||||
func TestC_Marshal_Unmarshal(t *testing.T) {
|
||||
@@ -19,19 +21,18 @@ func TestC_Marshal_Unmarshal(t *testing.T) {
|
||||
if res, err = c1.Marshal(nil); chk.E(err) {
|
||||
t.Fatal(err)
|
||||
}
|
||||
// append a fake zero length signature
|
||||
res = append(res, '\n')
|
||||
res = separator.Add(res)
|
||||
c2 := new(C)
|
||||
var rem []byte
|
||||
if rem, err = c2.Unmarshal(res); chk.E(err) {
|
||||
t.Fatal(err)
|
||||
}
|
||||
if !bytes.Equal(c1.Content, c2.Content) {
|
||||
log.I.S(c1, c2)
|
||||
log.I.S(c1.Content, c2.Content)
|
||||
t.Fatal("content not equal")
|
||||
}
|
||||
if !bytes.Equal(rem, []byte{'\n'}) {
|
||||
if len(rem) > 0 {
|
||||
log.I.S(rem)
|
||||
t.Fatalf("remainder not found")
|
||||
t.Fatalf("unexpected remaining bytes: '%0x'", rem)
|
||||
}
|
||||
}
|
||||
}
|
||||
@@ -1,12 +1,13 @@
|
||||
package timestamp
|
||||
package decimal
|
||||
|
||||
import (
|
||||
_ "embed"
|
||||
"time"
|
||||
|
||||
"golang.org/x/exp/constraints"
|
||||
)
|
||||
|
||||
// run this to regenerate (pointlessly) the base 10 array of 4 places per entry
|
||||
// run this to regenerate (pointlessly) the base 10 array of 0 to 9999
|
||||
//go:generate go run ./gen/.
|
||||
|
||||
//go:embed base10k.txt
|
||||
@@ -14,32 +15,36 @@ var base10k []byte
|
||||
|
||||
const base = 10000
|
||||
|
||||
type T struct {
|
||||
N uint64
|
||||
}
|
||||
type T struct{ N uint64 }
|
||||
|
||||
func New[V constraints.Integer](n V) *T { return &T{uint64(n)} }
|
||||
|
||||
func Now() *T { return New(time.Now().Unix()) }
|
||||
|
||||
func (n *T) Uint64() uint64 { return n.N }
|
||||
func (n *T) Int64() int64 { return int64(n.N) }
|
||||
func (n *T) Uint16() uint16 { return uint16(n.N) }
|
||||
|
||||
var powers = []*T{
|
||||
{1},
|
||||
{1_0000},
|
||||
{1_0000_0000},
|
||||
{1_0000_0000_0000},
|
||||
{1_0000_0000_0000_0000},
|
||||
{base / base},
|
||||
{base},
|
||||
{base * base},
|
||||
{base * base * base},
|
||||
{base * base * base * base},
|
||||
}
|
||||
|
||||
const zero = '0'
|
||||
const nine = '9'
|
||||
|
||||
func (n *T) Marshal(dst []byte) (b []byte) {
|
||||
func (n *T) Marshal(d []byte) (r []byte, err error) {
|
||||
if n == nil {
|
||||
err = errorf.E("cannot marshal nil timestamp")
|
||||
return
|
||||
}
|
||||
nn := n.N
|
||||
b = dst
|
||||
r = d
|
||||
if n.N == 0 {
|
||||
b = append(b, '0')
|
||||
r = append(r, '0')
|
||||
return
|
||||
}
|
||||
var i int
|
||||
@@ -62,33 +67,34 @@ func (n *T) Marshal(dst []byte) (b []byte) {
|
||||
}
|
||||
}
|
||||
}
|
||||
b = append(b, bb...)
|
||||
r = append(r, bb...)
|
||||
n.N = n.N - q*powers[k].N
|
||||
}
|
||||
// r = append(r, '\n')
|
||||
n.N = nn
|
||||
return
|
||||
}
|
||||
|
||||
// Unmarshal reads a string, which must be a positive integer int larger than math.MaxUint64,
|
||||
// Unmarshal reads a string, which must be a positive integer no larger than math.MaxUint64,
|
||||
// skipping any non-numeric content before it.
|
||||
//
|
||||
// Note that leading zeros are not considered valid, but basically int such thing as machine
|
||||
// Note that leading zeros are not considered valid, but basically no such thing as machine
|
||||
// generated JSON integers with leading zeroes. Until this is disproven, this is the fastest way
|
||||
// to read a positive json integer, and a leading zero is decoded as a zero, and the remainder
|
||||
// returned.
|
||||
func (n *T) Unmarshal(b []byte) (r []byte, err error) {
|
||||
if len(b) < 1 {
|
||||
func (n *T) Unmarshal(d []byte) (r []byte, err error) {
|
||||
if len(d) < 1 {
|
||||
err = errorf.E("zero length number")
|
||||
return
|
||||
}
|
||||
var sLen int
|
||||
if b[0] == zero {
|
||||
r = b[1:]
|
||||
if d[0] == zero {
|
||||
r = d[1:]
|
||||
n.N = 0
|
||||
return
|
||||
}
|
||||
// count the digits
|
||||
for ; sLen < len(b) && b[sLen] >= zero && b[sLen] <= nine && b[sLen] != ','; sLen++ {
|
||||
for ; sLen < len(d) && d[sLen] >= zero && d[sLen] <= nine && d[sLen] != ','; sLen++ {
|
||||
}
|
||||
if sLen == 0 {
|
||||
err = errorf.E("zero length number")
|
||||
@@ -99,11 +105,11 @@ func (n *T) Unmarshal(b []byte) (r []byte, err error) {
|
||||
return
|
||||
}
|
||||
// the length of the string found
|
||||
r = b[sLen:]
|
||||
b = b[:sLen]
|
||||
for _, ch := range b {
|
||||
r = d[sLen:]
|
||||
d = d[:sLen]
|
||||
for _, ch := range d {
|
||||
ch -= zero
|
||||
n.N = n.N*10 + uint64(ch)
|
||||
}
|
||||
return
|
||||
}
|
||||
}
|
||||
@@ -1,4 +1,4 @@
|
||||
package timestamp
|
||||
package decimal
|
||||
|
||||
import (
|
||||
"math"
|
||||
@@ -15,7 +15,9 @@ func TestMarshalUnmarshal(t *testing.T) {
|
||||
var err error
|
||||
for _ = range 10000000 {
|
||||
n = New(uint64(frand.Intn(math.MaxInt64)))
|
||||
b = n.Marshal(b)
|
||||
if b, err = n.Marshal(b); chk.E(err) {
|
||||
t.Fatal(err)
|
||||
}
|
||||
m := New(0)
|
||||
if rem, err = m.Unmarshal(b); chk.E(err) {
|
||||
t.Fatal(err)
|
||||
@@ -42,7 +44,7 @@ func BenchmarkByteStringToInt64(bb *testing.B) {
|
||||
bb.ReportAllocs()
|
||||
for i = 0; i < bb.N; i++ {
|
||||
n := testInts[i%10000]
|
||||
b = n.Marshal(b)
|
||||
b, _ = n.Marshal(b)
|
||||
b = b[:0]
|
||||
}
|
||||
})
|
||||
@@ -60,7 +62,7 @@ func BenchmarkByteStringToInt64(bb *testing.B) {
|
||||
m := New(0)
|
||||
for i = 0; i < bb.N; i++ {
|
||||
n := testInts[i%10000]
|
||||
b = m.Marshal(b)
|
||||
b, _ = m.Marshal(b)
|
||||
_, _ = n.Unmarshal(b)
|
||||
b = b[:0]
|
||||
}
|
||||
9
pkg/decimal/log.go
Normal file
9
pkg/decimal/log.go
Normal file
@@ -0,0 +1,9 @@
|
||||
package decimal
|
||||
|
||||
import (
|
||||
"protocol.realy.lol/pkg/lol"
|
||||
)
|
||||
|
||||
var (
|
||||
log, chk, errorf = lol.Main.Log, lol.Main.Check, lol.Main.Errorf
|
||||
)
|
||||
@@ -1,19 +1,140 @@
|
||||
package event
|
||||
|
||||
import (
|
||||
"realy.lol/sha256"
|
||||
"realy.lol/signer"
|
||||
|
||||
"protocol.realy.lol/pkg/content"
|
||||
"protocol.realy.lol/pkg/event/types"
|
||||
"protocol.realy.lol/pkg/decimal"
|
||||
"protocol.realy.lol/pkg/pubkey"
|
||||
"protocol.realy.lol/pkg/separator"
|
||||
"protocol.realy.lol/pkg/signature"
|
||||
"protocol.realy.lol/pkg/tags"
|
||||
"protocol.realy.lol/pkg/timestamp"
|
||||
"protocol.realy.lol/pkg/types"
|
||||
)
|
||||
|
||||
type Event struct {
|
||||
type E struct {
|
||||
id []byte
|
||||
Type *types.T
|
||||
Pubkey *pubkey.P
|
||||
Timestamp *timestamp.T
|
||||
Timestamp *decimal.T
|
||||
Tags *tags.T
|
||||
Content *content.C
|
||||
Signature *signature.S
|
||||
encoded []byte
|
||||
}
|
||||
|
||||
// New creates a new event with some typical data already filled. This should be
|
||||
// populated by some kind of editor.
|
||||
//
|
||||
// Simplest form of this would be to create a temporary file, open user's
|
||||
// default editor with the event already populated, they enter the content
|
||||
// field's message, and then after closing the editor it scans the text for e:
|
||||
// and p: event and pubkey references and maybe #hashtags, updates the
|
||||
// timestamp, and then signs it with a signing key, wraps in an event publish
|
||||
// request, stamps and signs it and then pushes it to a configured relay
|
||||
// address.
|
||||
//
|
||||
// Other more complex edit flows could be created but this one is for a simple
|
||||
// flow as described. REALY events are text, and it is simple to make them
|
||||
// literally edit as simple text files. REALY is natively text files, and the
|
||||
// first composition client should just be a text editor.
|
||||
func New(pk []byte, typ string) (ev *E, err error) {
|
||||
var p *pubkey.P
|
||||
p, err = pubkey.New(pk)
|
||||
ev = &E{
|
||||
Type: types.New(typ),
|
||||
Pubkey: p,
|
||||
Timestamp: decimal.Now(),
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
// Invalidate empties the existing encoded cache of the event. This needs to be
|
||||
// called in case of mutating its fields. It also nils the signature.
|
||||
func (e *E) Invalidate() { e.encoded = e.encoded[:0]; e.Signature = nil; e.id = nil }
|
||||
|
||||
func (e *E) Sign(s signer.I) (err error) {
|
||||
var h []byte
|
||||
if h, err = e.Hash(); chk.E(err) {
|
||||
return
|
||||
}
|
||||
var sig []byte
|
||||
if sig, err = s.Sign(h); chk.E(err) {
|
||||
return
|
||||
}
|
||||
if e.Signature, err = signature.New(sig); chk.E(err) {
|
||||
return
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
func (e *E) Encode(d []byte) (r []byte, err error) {
|
||||
r = d
|
||||
if e.Type == nil {
|
||||
err = errorf.E("type is not defined for event")
|
||||
return
|
||||
}
|
||||
if r, err = e.Type.Marshal(r); chk.E(err) {
|
||||
return
|
||||
}
|
||||
r = separator.Add(r, ':')
|
||||
if e.Pubkey == nil {
|
||||
err = errorf.E("pubkey is not defined for event")
|
||||
return
|
||||
}
|
||||
// log.I.S(r)
|
||||
if r, err = e.Pubkey.Marshal(r); chk.E(err) {
|
||||
return
|
||||
}
|
||||
r = separator.Add(r, ';')
|
||||
if e.Timestamp == nil {
|
||||
err = errorf.E("timestamp is not defined for event")
|
||||
return
|
||||
}
|
||||
if r, err = e.Timestamp.Marshal(r); chk.E(err) {
|
||||
return
|
||||
}
|
||||
r = separator.Add(r)
|
||||
if r, err = e.Tags.Marshal(r); chk.E(err) {
|
||||
return
|
||||
}
|
||||
if e.Content != nil {
|
||||
if r, err = e.Content.Marshal(r); chk.E(err) {
|
||||
return
|
||||
}
|
||||
r = separator.Add(r)
|
||||
}
|
||||
e.encoded = r
|
||||
return
|
||||
}
|
||||
|
||||
func (e *E) Hash() (h []byte, err error) {
|
||||
var b []byte
|
||||
if e.encoded == nil {
|
||||
if e.encoded, err = e.Encode(nil); chk.E(err) {
|
||||
return
|
||||
}
|
||||
b = e.encoded
|
||||
}
|
||||
hh := sha256.Sum256(b)
|
||||
h = hh[:]
|
||||
e.id = h
|
||||
return
|
||||
}
|
||||
|
||||
func (e *E) Marshal(d []byte) (r []byte, err error) {
|
||||
if r, err = e.Encode(d); chk.E(err) {
|
||||
return
|
||||
}
|
||||
if r, err = e.Signature.Marshal(r); chk.E(err) {
|
||||
return
|
||||
}
|
||||
r = separator.Add(r)
|
||||
return
|
||||
}
|
||||
|
||||
func (e *E) Unmarshal(data []byte) (r []byte, err error) {
|
||||
|
||||
return
|
||||
}
|
||||
|
||||
205
pkg/event/event_test.go
Normal file
205
pkg/event/event_test.go
Normal file
@@ -0,0 +1,205 @@
|
||||
package event
|
||||
|
||||
import (
|
||||
"encoding/binary"
|
||||
mrand "math/rand"
|
||||
"math/rand/v2"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"lukechampine.com/frand"
|
||||
"realy.lol/signer"
|
||||
|
||||
"protocol.realy.lol/pkg/content"
|
||||
"protocol.realy.lol/pkg/decimal"
|
||||
"protocol.realy.lol/pkg/id"
|
||||
"protocol.realy.lol/pkg/pubkey"
|
||||
"protocol.realy.lol/pkg/tag"
|
||||
"protocol.realy.lol/pkg/tags"
|
||||
"protocol.realy.lol/pkg/types"
|
||||
|
||||
"realy.lol/p256k"
|
||||
)
|
||||
|
||||
const seed = 0
|
||||
|
||||
func GenerateFake32Bytes(rng *rand.Rand) (fake []byte) {
|
||||
fake = make([]byte, 32)
|
||||
for i := range 4 {
|
||||
n := rng.Uint64()
|
||||
binary.LittleEndian.PutUint64(fake[i*8:i*8+8], n)
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
var Hashtags, _ = tag.New(
|
||||
"halsey",
|
||||
"$DIAM",
|
||||
"Trevor Lawrence",
|
||||
"#AEWCEO",
|
||||
"Reuters",
|
||||
"Linda McMahon",
|
||||
"Bolton",
|
||||
"Raining in Houston",
|
||||
"#SwiftDay",
|
||||
"Munich",
|
||||
"NATO",
|
||||
"#thursdayvibes",
|
||||
"Good Thursday",
|
||||
"$SEA",
|
||||
"#AEWGrandSlam",
|
||||
"Brian Steele",
|
||||
"#GalentinesDay",
|
||||
"Bregman",
|
||||
"Afghan",
|
||||
"The Accountant 2",
|
||||
"Happy Friday Eve",
|
||||
"TLaw",
|
||||
"Red Sox",
|
||||
"Large Scale Social Deception",
|
||||
"2024 BMW",
|
||||
"Onew",
|
||||
"Secretary of Education",
|
||||
"$HIMS",
|
||||
"Core PPI",
|
||||
"Avowed",
|
||||
"Kemp",
|
||||
"Angel's Venture",
|
||||
"YouTube TV",
|
||||
"Bri Bri",
|
||||
"Teslas",
|
||||
"Thirsty Thursday",
|
||||
"matz",
|
||||
"Jack the Ripper",
|
||||
"Paramount",
|
||||
"Megan Boswell",
|
||||
"Zeldin",
|
||||
"Zelensky",
|
||||
"Censure",
|
||||
"Sheldon Whitehouse",
|
||||
"Arenado",
|
||||
"Parasite Class",
|
||||
"Kennedy Center",
|
||||
"I Love Jesus",
|
||||
"James Cook",
|
||||
)
|
||||
|
||||
func GenerateContent(rng *rand.Rand, l int) (c *content.C) {
|
||||
c = &content.C{}
|
||||
|
||||
return
|
||||
}
|
||||
|
||||
const lorem = `Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.
|
||||
|
||||
Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.
|
||||
|
||||
Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur.
|
||||
|
||||
Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.`
|
||||
|
||||
func GenerateTags(rng *rand.Rand, n int) (t *tags.T, err error) {
|
||||
nE, nP, nH := rng.IntN(n)+1, rng.IntN(n)+1, rng.IntN(n)+1
|
||||
var tt []*tag.T
|
||||
k := tag.List.GetElementBytes(tag.KeyEvent)
|
||||
for range nE {
|
||||
var tg *tag.T
|
||||
v := GenerateFake32Bytes(rng)
|
||||
var e *id.T
|
||||
if e, err = id.New(v); chk.E(err) {
|
||||
return
|
||||
}
|
||||
var b []byte
|
||||
if b, err = e.Marshal(b); chk.E(err) {
|
||||
return
|
||||
}
|
||||
if tg, err = tag.New(k, b, []byte("root")); chk.E(err) {
|
||||
return
|
||||
}
|
||||
tt = append(tt, tg)
|
||||
}
|
||||
k = tag.List.GetElementBytes(tag.KeyPubkey)
|
||||
for range nP {
|
||||
var tg *tag.T
|
||||
v := GenerateFake32Bytes(rng)
|
||||
var p *pubkey.P
|
||||
if p, err = pubkey.New(v); chk.E(err) {
|
||||
return
|
||||
}
|
||||
var b []byte
|
||||
if b, err = p.Marshal(b); chk.E(err) {
|
||||
return
|
||||
}
|
||||
if tg, err = tag.New(k, b); chk.E(err) {
|
||||
return
|
||||
}
|
||||
tt = append(tt, tg)
|
||||
}
|
||||
k = tag.List.GetElementBytes(tag.KeyHashtag)
|
||||
for range nH {
|
||||
var tg *tag.T
|
||||
v := Hashtags.GetElementBytes(rng.IntN(Hashtags.Len() - 1))
|
||||
// v = bytes.ReplaceAll(v, []byte{';'}, []byte{'_'})
|
||||
// v = bytes.ReplaceAll(v, []byte{':'}, []byte{'-'})
|
||||
// log.I.S(v)
|
||||
if tg, err = tag.New(k, v); chk.E(err) {
|
||||
return
|
||||
}
|
||||
tt = append(tt, tg)
|
||||
}
|
||||
t = tags.New(tt...)
|
||||
return
|
||||
}
|
||||
|
||||
func GenerateEvent(sign signer.I) (ev *E, err error) {
|
||||
s2 := rand.NewPCG(seed, seed)
|
||||
rng := rand.New(s2)
|
||||
sign = new(p256k.Signer)
|
||||
if err = sign.Generate(); chk.E(err) {
|
||||
return
|
||||
}
|
||||
var pk *pubkey.P
|
||||
if pk, err = pubkey.New(sign.Pub()); chk.E(err) {
|
||||
return
|
||||
}
|
||||
var t *tags.T
|
||||
if t, err = GenerateTags(rng, 3+1); chk.E(err) {
|
||||
return
|
||||
}
|
||||
cont := make([]byte, mrand.Intn(100)+25)
|
||||
_, err = frand.Read(cont)
|
||||
|
||||
ev = &E{
|
||||
Type: types.New("note/adoc"),
|
||||
Pubkey: pk,
|
||||
Timestamp: decimal.New(time.Now().Unix()),
|
||||
Tags: t,
|
||||
Content: &content.C{Content: []byte(lorem)},
|
||||
}
|
||||
if err = ev.Sign(sign); chk.E(err) {
|
||||
return
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
func TestE_Marshal_Unmarshal(t *testing.T) {
|
||||
var ev *E
|
||||
var err error
|
||||
var b1, b2 []byte
|
||||
sign := &p256k.Signer{}
|
||||
if err = sign.Generate(); chk.E(err) {
|
||||
t.Fatal(err)
|
||||
}
|
||||
for range 10 {
|
||||
if ev, err = GenerateEvent(sign); chk.E(err) {
|
||||
t.Fatal(err)
|
||||
}
|
||||
if b1, err = ev.Marshal(b1); chk.E(err) {
|
||||
t.Fatal(err)
|
||||
}
|
||||
log.I.F("\n```\n%s```\n", b1)
|
||||
// log.I.S(ev)
|
||||
b1 = b1[:0]
|
||||
_ = b2
|
||||
}
|
||||
}
|
||||
33
pkg/id/id.go
33
pkg/id/id.go
@@ -9,20 +9,20 @@ import (
|
||||
|
||||
const Len = 43
|
||||
|
||||
type P struct{ b []byte }
|
||||
type T struct{ b []byte }
|
||||
|
||||
func New(id []byte) (p *P, err error) {
|
||||
func New(id []byte) (p *T, err error) {
|
||||
if len(id) != ed25519.PublicKeySize {
|
||||
err = errorf.E("invalid public key size: %d; require %d",
|
||||
len(id), ed25519.PublicKeySize)
|
||||
return
|
||||
}
|
||||
p = &P{id}
|
||||
p = &T{id}
|
||||
return
|
||||
}
|
||||
|
||||
func (p *P) Marshal(dst []byte) (result []byte, err error) {
|
||||
result = dst
|
||||
func (p *T) Marshal(d []byte) (r []byte, err error) {
|
||||
r = d
|
||||
if p == nil || p.b == nil || len(p.b) == 0 {
|
||||
err = errorf.E("nil/zero length pubkey")
|
||||
return
|
||||
@@ -32,7 +32,7 @@ func (p *P) Marshal(dst []byte) (result []byte, err error) {
|
||||
len(p.b), ed25519.PublicKeySize, p.b)
|
||||
return
|
||||
}
|
||||
buf := bytes.NewBuffer(result)
|
||||
buf := bytes.NewBuffer(r)
|
||||
w := base64.NewEncoder(base64.RawURLEncoding, buf)
|
||||
if _, err = w.Write(p.b); chk.E(err) {
|
||||
return
|
||||
@@ -40,35 +40,36 @@ func (p *P) Marshal(dst []byte) (result []byte, err error) {
|
||||
if err = w.Close(); chk.E(err) {
|
||||
return
|
||||
}
|
||||
result = append(buf.Bytes(), '\n')
|
||||
r = append(r, buf.Bytes()...)
|
||||
// r = append(buf.Bytes(), '\n')
|
||||
return
|
||||
}
|
||||
|
||||
func (p *P) Unmarshal(data []byte) (rem []byte, err error) {
|
||||
rem = data
|
||||
func (p *T) Unmarshal(data []byte) (r []byte, err error) {
|
||||
r = data
|
||||
if p == nil {
|
||||
err = errorf.E("can't unmarshal into nil types.T")
|
||||
return
|
||||
}
|
||||
if len(rem) < 2 {
|
||||
if len(r) < 2 {
|
||||
err = errorf.E("can't unmarshal nothing")
|
||||
return
|
||||
}
|
||||
for i := range rem {
|
||||
if rem[i] == '\n' {
|
||||
for i := range r {
|
||||
if r[i] == '\n' {
|
||||
if i != Len {
|
||||
err = errorf.E("invalid encoded pubkey length %d; require %d '%0x'",
|
||||
i, Len, rem[:i])
|
||||
i, Len, r[:i])
|
||||
return
|
||||
}
|
||||
p.b = make([]byte, ed25519.PublicKeySize)
|
||||
if _, err = base64.RawURLEncoding.Decode(p.b, rem[:i]); chk.E(err) {
|
||||
if _, err = base64.RawURLEncoding.Decode(p.b, r[:i]); chk.E(err) {
|
||||
return
|
||||
}
|
||||
rem = rem[i+1:]
|
||||
r = r[i+1:]
|
||||
return
|
||||
}
|
||||
}
|
||||
err = io.EOF
|
||||
return
|
||||
}
|
||||
}
|
||||
@@ -5,6 +5,8 @@ import (
|
||||
"crypto/ed25519"
|
||||
"crypto/rand"
|
||||
"testing"
|
||||
|
||||
"protocol.realy.lol/pkg/separator"
|
||||
)
|
||||
|
||||
func TestT_Marshal_Unmarshal(t *testing.T) {
|
||||
@@ -14,7 +16,7 @@ func TestT_Marshal_Unmarshal(t *testing.T) {
|
||||
if _, err = rand.Read(pk); chk.E(err) {
|
||||
t.Fatal(err)
|
||||
}
|
||||
var p *P
|
||||
var p *T
|
||||
if p, err = New(pk); chk.E(err) {
|
||||
t.Fatal(err)
|
||||
}
|
||||
@@ -22,7 +24,8 @@ func TestT_Marshal_Unmarshal(t *testing.T) {
|
||||
if o, err = p.Marshal(nil); chk.E(err) {
|
||||
t.Fatal(err)
|
||||
}
|
||||
p2 := &P{}
|
||||
o = separator.Add(o)
|
||||
p2 := &T{}
|
||||
var rem []byte
|
||||
if rem, err = p2.Unmarshal(o); chk.E(err) {
|
||||
t.Fatal(err)
|
||||
@@ -34,4 +37,4 @@ func TestT_Marshal_Unmarshal(t *testing.T) {
|
||||
t.Fatal("public key did not encode/decode faithfully")
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
@@ -1,35 +1,37 @@
|
||||
# lol
|
||||
= lol
|
||||
|
||||
location of log
|
||||
|
||||
This is a very simple, but practical library for logging in applications. Its
|
||||
main feature is printing source code locations to make debugging easier.
|
||||
|
||||
## usage
|
||||
== usage
|
||||
|
||||
put this somewhere in your package:
|
||||
|
||||
```go
|
||||
[source,go]
|
||||
----
|
||||
var log, chk, errorf = lol.Main.Log, lol.Main.Check, lol.Main.Errorf
|
||||
```
|
||||
----
|
||||
|
||||
then you can invoke like this:
|
||||
|
||||
```go
|
||||
[source,go]
|
||||
----
|
||||
log.I.S(spew.this.thing)
|
||||
errorf.E("print and return this error")
|
||||
if err = bogus; chk.E(err) { return bogus.response } // print an error at the site and return the error
|
||||
```
|
||||
----
|
||||
|
||||
## terminals
|
||||
== terminals
|
||||
|
||||
Due to how so few terminals actually support source location hyperlinks, pretty much tilix and intellij terminal are
|
||||
the only two that really provide adequate functionality, this logging library defaults to output format that works
|
||||
best with intellij. As such, the terminal is aware of the CWD and the code locations printed are relative, as
|
||||
required to get the hyperlinkization from this terminal. Handling support for Tilix requires more complications and
|
||||
due to advances with IntelliJ's handling it is not practical to support any other for this purpose. Users of this
|
||||
Due to how so few terminals actually support source location hyperlinks, pretty much tilix and intellij terminal are
|
||||
the only two that really provide adequate functionality, this logging library defaults to output format that works
|
||||
best with intellij. As such, the terminal is aware of the CWD and the code locations printed are relative, as
|
||||
required to get the hyperlinkization from this terminal. Handling support for Tilix requires more complications and
|
||||
due to advances with IntelliJ's handling it is not practical to support any other for this purpose. Users of this
|
||||
library can always fall back to manually interpreting and accessing the relative file path to find the source of a log.
|
||||
|
||||
In addition, due to this terminal's slow rendering of long lines, long log strings are automatically broken into 80
|
||||
character lines, and if there is comma separators in the line, the line is broken at the comma instead of at
|
||||
In addition, due to this terminal's slow rendering of long lines, long log strings are automatically broken into 80
|
||||
character lines, and if there is comma separators in the line, the line is broken at the comma instead of at
|
||||
column80. This works perfectly for this purpose.
|
||||
@@ -21,8 +21,8 @@ func New(pk []byte) (p *P, err error) {
|
||||
return
|
||||
}
|
||||
|
||||
func (p *P) Marshal(dst []byte) (result []byte, err error) {
|
||||
result = dst
|
||||
func (p *P) Marshal(d []byte) (r []byte, err error) {
|
||||
r = d
|
||||
if p == nil || p.PublicKey == nil || len(p.PublicKey) == 0 {
|
||||
err = errorf.E("nil/zero length pubkey")
|
||||
return
|
||||
@@ -32,7 +32,7 @@ func (p *P) Marshal(dst []byte) (result []byte, err error) {
|
||||
len(p.PublicKey), ed25519.PublicKeySize, p.PublicKey)
|
||||
return
|
||||
}
|
||||
buf := bytes.NewBuffer(result)
|
||||
buf := new(bytes.Buffer)
|
||||
w := base64.NewEncoder(base64.RawURLEncoding, buf)
|
||||
if _, err = w.Write(p.PublicKey); chk.E(err) {
|
||||
return
|
||||
@@ -40,35 +40,37 @@ func (p *P) Marshal(dst []byte) (result []byte, err error) {
|
||||
if err = w.Close(); chk.E(err) {
|
||||
return
|
||||
}
|
||||
result = append(buf.Bytes(), '\n')
|
||||
// log.I.S(buf.Bytes())
|
||||
r = append(r, buf.Bytes()...)
|
||||
// r = append(buf.Bytes(), '\n')
|
||||
return
|
||||
}
|
||||
|
||||
func (p *P) Unmarshal(data []byte) (rem []byte, err error) {
|
||||
rem = data
|
||||
func (p *P) Unmarshal(d []byte) (r []byte, err error) {
|
||||
r = d
|
||||
if p == nil {
|
||||
err = errorf.E("can't unmarshal into nil types.T")
|
||||
return
|
||||
}
|
||||
if len(rem) < 2 {
|
||||
if len(r) < 2 {
|
||||
err = errorf.E("can't unmarshal nothing")
|
||||
return
|
||||
}
|
||||
for i := range rem {
|
||||
if rem[i] == '\n' {
|
||||
for i := range r {
|
||||
if r[i] == '\n' {
|
||||
if i != Len {
|
||||
err = errorf.E("invalid encoded pubkey length %d; require %d '%0x'",
|
||||
i, Len, rem[:i])
|
||||
i, Len, r[:i])
|
||||
return
|
||||
}
|
||||
p.PublicKey = make([]byte, ed25519.PublicKeySize)
|
||||
if _, err = base64.RawURLEncoding.Decode(p.PublicKey, rem[:i]); chk.E(err) {
|
||||
if _, err = base64.RawURLEncoding.Decode(p.PublicKey, r[:i]); chk.E(err) {
|
||||
return
|
||||
}
|
||||
rem = rem[i+1:]
|
||||
r = r[i+1:]
|
||||
return
|
||||
}
|
||||
}
|
||||
err = io.EOF
|
||||
return
|
||||
}
|
||||
}
|
||||
@@ -5,6 +5,8 @@ import (
|
||||
"crypto/ed25519"
|
||||
"crypto/rand"
|
||||
"testing"
|
||||
|
||||
"protocol.realy.lol/pkg/separator"
|
||||
)
|
||||
|
||||
func TestP_Marshal_Unmarshal(t *testing.T) {
|
||||
@@ -22,6 +24,8 @@ func TestP_Marshal_Unmarshal(t *testing.T) {
|
||||
if o, err = p.Marshal(nil); chk.E(err) {
|
||||
t.Fatal(err)
|
||||
}
|
||||
o = separator.Add(o)
|
||||
log.I.S(o)
|
||||
p2 := &P{}
|
||||
var rem []byte
|
||||
if rem, err = p2.Unmarshal(o); chk.E(err) {
|
||||
@@ -34,4 +38,4 @@ func TestP_Marshal_Unmarshal(t *testing.T) {
|
||||
t.Fatal("public key did not encode/decode faithfully")
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
3
pkg/readme.adoc
Normal file
3
pkg/readme.adoc
Normal file
@@ -0,0 +1,3 @@
|
||||
= pkg
|
||||
|
||||
go libraries shared by clients and relays
|
||||
@@ -1,3 +0,0 @@
|
||||
# pkg
|
||||
|
||||
go libraries shared by clients and relays
|
||||
10
pkg/separator/separator.go
Normal file
10
pkg/separator/separator.go
Normal file
@@ -0,0 +1,10 @@
|
||||
package separator
|
||||
|
||||
func Add(dst []byte, custom ...byte) (r []byte) {
|
||||
sep := byte('\n')
|
||||
if len(custom) > 0 {
|
||||
sep = custom[0]
|
||||
}
|
||||
r = append(dst, sep)
|
||||
return
|
||||
}
|
||||
@@ -8,6 +8,7 @@ import (
|
||||
)
|
||||
|
||||
const Len = 86
|
||||
const Sentinel = "sig:"
|
||||
|
||||
type S struct{ Signature []byte }
|
||||
|
||||
@@ -21,8 +22,16 @@ func New(sig []byte) (p *S, err error) {
|
||||
return
|
||||
}
|
||||
|
||||
func (p *S) Marshal(dst []byte) (result []byte, err error) {
|
||||
result = dst
|
||||
func Sign(msg []byte, sec ed25519.PrivateKey) (sig []byte, err error) {
|
||||
return sec.Sign(nil, msg, nil)
|
||||
}
|
||||
|
||||
func Verify(msg []byte, pub ed25519.PublicKey, sig []byte) (ok bool) {
|
||||
return ed25519.Verify(pub, msg, sig)
|
||||
}
|
||||
|
||||
func (p *S) Marshal(d []byte) (r []byte, err error) {
|
||||
r = d
|
||||
if p == nil || p.Signature == nil || len(p.Signature) == 0 {
|
||||
err = errorf.E("nil/zero length signature")
|
||||
return
|
||||
@@ -32,7 +41,7 @@ func (p *S) Marshal(dst []byte) (result []byte, err error) {
|
||||
len(p.Signature), ed25519.SignatureSize, p.Signature)
|
||||
return
|
||||
}
|
||||
buf := bytes.NewBuffer(result)
|
||||
buf := new(bytes.Buffer)
|
||||
w := base64.NewEncoder(base64.RawURLEncoding, buf)
|
||||
if _, err = w.Write(p.Signature); chk.E(err) {
|
||||
return
|
||||
@@ -40,32 +49,36 @@ func (p *S) Marshal(dst []byte) (result []byte, err error) {
|
||||
if err = w.Close(); chk.E(err) {
|
||||
return
|
||||
}
|
||||
result = append(buf.Bytes(), '\n')
|
||||
r = append(r, Sentinel...)
|
||||
r = append(r, buf.Bytes()...)
|
||||
// r = append(buf.Bytes(), '\n')
|
||||
return
|
||||
}
|
||||
|
||||
func (p *S) Unmarshal(data []byte) (rem []byte, err error) {
|
||||
rem = data
|
||||
func (p *S) Unmarshal(d []byte) (r []byte, err error) {
|
||||
r = d
|
||||
if p == nil {
|
||||
err = errorf.E("can't unmarshal into nil types.T")
|
||||
return
|
||||
}
|
||||
if len(rem) < 2 {
|
||||
if len(r) < 2 {
|
||||
err = errorf.E("can't unmarshal nothing")
|
||||
return
|
||||
}
|
||||
for i := range rem {
|
||||
if rem[i] == '\n' {
|
||||
if i != Len {
|
||||
for i := range r {
|
||||
if r[i] == '\n' {
|
||||
if i != Len+len(Sentinel) {
|
||||
err = errorf.E("invalid encoded signature length %d; require %d '%0x'",
|
||||
i, Len, rem[:i])
|
||||
i, Len, r[:i])
|
||||
return
|
||||
}
|
||||
// discard the sentinel
|
||||
r = r[len(Sentinel):]
|
||||
p.Signature = make([]byte, ed25519.SignatureSize)
|
||||
if _, err = base64.RawURLEncoding.Decode(p.Signature, rem[:i]); chk.E(err) {
|
||||
if _, err = base64.RawURLEncoding.Decode(p.Signature, r[:Len]); chk.E(err) {
|
||||
return
|
||||
}
|
||||
rem = rem[i+1:]
|
||||
r = r[Len:]
|
||||
return
|
||||
}
|
||||
}
|
||||
|
||||
@@ -5,6 +5,8 @@ import (
|
||||
"crypto/ed25519"
|
||||
"crypto/rand"
|
||||
"testing"
|
||||
|
||||
"protocol.realy.lol/pkg/separator"
|
||||
)
|
||||
|
||||
func TestS_Marshal_Unmarshal(t *testing.T) {
|
||||
@@ -14,23 +16,24 @@ func TestS_Marshal_Unmarshal(t *testing.T) {
|
||||
if _, err = rand.Read(sig); chk.E(err) {
|
||||
t.Fatal(err)
|
||||
}
|
||||
var s *S
|
||||
if s, err = New(sig); chk.E(err) {
|
||||
var s1 *S
|
||||
if s1, err = New(sig); chk.E(err) {
|
||||
t.Fatal(err)
|
||||
}
|
||||
var o []byte
|
||||
if o, err = s.Marshal(nil); chk.E(err) {
|
||||
if o, err = s1.Marshal(nil); chk.E(err) {
|
||||
t.Fatal(err)
|
||||
}
|
||||
p2 := &S{}
|
||||
o = separator.Add(o)
|
||||
s2 := &S{}
|
||||
var rem []byte
|
||||
if rem, err = p2.Unmarshal(o); chk.E(err) {
|
||||
if rem, err = s2.Unmarshal(o); chk.E(err) {
|
||||
t.Fatal(err)
|
||||
}
|
||||
if len(rem) > 0 {
|
||||
log.I.F("%d %s", len(rem), rem)
|
||||
}
|
||||
if !bytes.Equal(sig, p2.Signature) {
|
||||
if !bytes.Equal(sig, s2.Signature) {
|
||||
t.Fatal("signature did not encode/decode faithfully")
|
||||
}
|
||||
|
||||
|
||||
17
pkg/tag/common.go
Normal file
17
pkg/tag/common.go
Normal file
@@ -0,0 +1,17 @@
|
||||
package tag
|
||||
|
||||
const (
|
||||
KeyEvent = iota
|
||||
KeyPubkey
|
||||
KeyHashtag
|
||||
)
|
||||
|
||||
var List, _ = New(
|
||||
// event is a reference to an event, the value is an Event Id
|
||||
"event",
|
||||
// pubkey is a reference to a public key, the value is a pubkey.P
|
||||
"pubkey",
|
||||
// hashtag is a string that can be searched by a hashtag filter tag
|
||||
"hashtag",
|
||||
// ... can many more things be in here for purposes
|
||||
)
|
||||
@@ -27,6 +27,7 @@ func New[V ~[]byte | ~string](v ...V) (t *T, err error) {
|
||||
t.fields = append(t.fields, k)
|
||||
for i, val := range v {
|
||||
var b []byte
|
||||
// log.I.S(val)
|
||||
if b, err = ValidateField(val, i); chk.E(err) {
|
||||
return
|
||||
}
|
||||
@@ -35,6 +36,29 @@ func New[V ~[]byte | ~string](v ...V) (t *T, err error) {
|
||||
return
|
||||
}
|
||||
|
||||
func (t *T) Len() int { return len(t.fields) }
|
||||
func (t *T) Less(i, j int) bool { return bytes.Compare(t.fields[i], t.fields[j]) < 0 }
|
||||
func (t *T) Swap(i, j int) { t.fields[i], t.fields[j] = t.fields[j], t.fields[i] }
|
||||
|
||||
func (t *T) GetElementBytes(i int) (s []byte) {
|
||||
if i >= len(t.fields) {
|
||||
// return empty string if not found
|
||||
return
|
||||
}
|
||||
return t.fields[i]
|
||||
}
|
||||
|
||||
func (t *T) GetElementString(i int) (s string) {
|
||||
return string(t.GetElementBytes(i))
|
||||
}
|
||||
|
||||
func (t *T) GetStringSlice() (s []string) {
|
||||
for _, v := range t.fields {
|
||||
s = append(s, string(v))
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
// ValidateKey checks that the key is valid. Keys must be the same most language symbols:
|
||||
//
|
||||
// - first character is alphabetic [a-zA-Z]
|
||||
@@ -77,32 +101,32 @@ func ValidateField[V ~[]byte | ~string](f V, i int) (k []byte, err error) {
|
||||
return
|
||||
}
|
||||
|
||||
func (t *T) Marshal(dst []byte) (result []byte, err error) {
|
||||
result = dst
|
||||
func (t *T) Marshal(d []byte) (r []byte, err error) {
|
||||
r = d
|
||||
if len(t.fields) == 0 {
|
||||
return
|
||||
}
|
||||
for i, field := range t.fields {
|
||||
result = append(result, field...)
|
||||
r = append(r, field...)
|
||||
if i == 0 {
|
||||
result = append(result, ':')
|
||||
r = append(r, ':')
|
||||
} else if i == len(t.fields)-1 {
|
||||
result = append(result, '\n')
|
||||
r = append(r, '\n')
|
||||
} else {
|
||||
result = append(result, ';')
|
||||
r = append(r, ';')
|
||||
}
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
func (t *T) Unmarshal(data []byte) (rem []byte, err error) {
|
||||
func (t *T) Unmarshal(d []byte) (r []byte, err error) {
|
||||
var i int
|
||||
var v byte
|
||||
var dat []byte
|
||||
// first find the end
|
||||
for i, v = range data {
|
||||
for i, v = range d {
|
||||
if v == '\n' {
|
||||
dat, rem = data[:i], data[i+1:]
|
||||
dat, r = d[:i], d[i+1:]
|
||||
break
|
||||
}
|
||||
}
|
||||
@@ -132,4 +156,4 @@ func (t *T) Unmarshal(data []byte) (rem []byte, err error) {
|
||||
}
|
||||
}
|
||||
return
|
||||
}
|
||||
}
|
||||
@@ -1,56 +1,53 @@
|
||||
package tags
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"fmt"
|
||||
|
||||
"protocol.realy.lol/pkg/tag"
|
||||
)
|
||||
|
||||
const Sentinel = "tags:\n"
|
||||
|
||||
var SentinelBytes = []byte(Sentinel)
|
||||
|
||||
type tags []*tag.T
|
||||
|
||||
type T struct{ tags }
|
||||
|
||||
func New(v ...*tag.T) *T { return &T{tags: v} }
|
||||
|
||||
func (t *T) Marshal(dst []byte) (result []byte, err error) {
|
||||
result = dst
|
||||
result = append(result, Sentinel...)
|
||||
for _, tt := range t.tags {
|
||||
if result, err = tt.Marshal(result); chk.E(err) {
|
||||
return
|
||||
func (t *T) Marshal(dst []byte) (r []byte, err error) {
|
||||
r = dst
|
||||
if t != nil {
|
||||
for _, tt := range t.tags {
|
||||
if r, err = tt.Marshal(r); chk.E(err) {
|
||||
return
|
||||
}
|
||||
}
|
||||
}
|
||||
result = append(result, '\n')
|
||||
return
|
||||
}
|
||||
|
||||
func (t *T) Unmarshal(data []byte) (rem []byte, err error) {
|
||||
if len(data) < len(Sentinel) {
|
||||
err = fmt.Errorf("bytes too short to contain tags")
|
||||
return
|
||||
}
|
||||
var dat []byte
|
||||
if bytes.Equal(data[:len(Sentinel)], SentinelBytes) {
|
||||
dat = data[len(Sentinel):]
|
||||
}
|
||||
if len(dat) < 1 {
|
||||
return
|
||||
}
|
||||
for len(dat) > 0 {
|
||||
if len(dat) == 1 && dat[0] == '\n' {
|
||||
break
|
||||
}
|
||||
// log.I.S(dat)
|
||||
tt := new(tag.T)
|
||||
if dat, err = tt.Unmarshal(dat); chk.E(err) {
|
||||
return
|
||||
}
|
||||
t.tags = append(t.tags, tt)
|
||||
}
|
||||
// todo: update for the lack of start/end markers
|
||||
// if len(data) < len(Sentinel) {
|
||||
// err = fmt.Errorf("bytes too short to contain tags")
|
||||
// return
|
||||
// }
|
||||
// var d []byte
|
||||
// if bytes.Equal(data[:len(Sentinel)], SentinelBytes) {
|
||||
// d = data[len(Sentinel):]
|
||||
// }
|
||||
// l := decimal.New(0)
|
||||
// if d, err = l.Unmarshal(d); chk.E(err) {
|
||||
// return
|
||||
// }
|
||||
// // and then there must be a newline
|
||||
// if d[0] != '\n' {
|
||||
// err = errorf.E("must be newline after content:<length>:\n%n", d)
|
||||
// return
|
||||
// }
|
||||
// d = d[1:]
|
||||
// for range l.N {
|
||||
// tt := new(tag.T)
|
||||
// if d, err = tt.Unmarshal(d); chk.E(err) {
|
||||
// return
|
||||
// }
|
||||
// t.tags = append(t.tags, tt)
|
||||
// }
|
||||
return
|
||||
}
|
||||
|
||||
@@ -1,7 +1,6 @@
|
||||
package tags
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"testing"
|
||||
|
||||
"protocol.realy.lol/pkg/tag"
|
||||
@@ -9,9 +8,9 @@ import (
|
||||
|
||||
func TestT_Marshal_Unmarshal(t *testing.T) {
|
||||
var tegs = [][]string{
|
||||
{"reply", "e:l_T9Of4ru-PLGUxxvw3SfZH0e6XW11VYy8ZSgbcsD9Y", "realy.example.com/repo1"},
|
||||
{"root", "e:l_T9Of4ru-PLGUxxvw3SfZH0e6XW11VYy8ZSgbcsD9Y", "realy.example.com/repo2"},
|
||||
{"mention", "p:JMkZVnu9QFplR4F_KrWX-3chQsklXZq_5I6eYcXfz1Q", "realy.example.com/repo3"},
|
||||
{"reply", "l_T9Of4ru-PLGUxxvw3SfZH0e6XW11VYy8ZSgbcsD9Y", "realy.example.com/repo1"},
|
||||
{"root", "l_T9Of4ru-PLGUxxvw3SfZH0e6XW11VYy8ZSgbcsD9Y", "realy.example.com/repo2"},
|
||||
{"mention", "JMkZVnu9QFplR4F_KrWX-3chQsklXZq_5I6eYcXfz1Q", "realy.example.com/repo3"},
|
||||
}
|
||||
var err error
|
||||
var tgs []*tag.T
|
||||
@@ -27,19 +26,22 @@ func TestT_Marshal_Unmarshal(t *testing.T) {
|
||||
if m1, err = t1.Marshal(nil); chk.E(err) {
|
||||
t.Fatal(err)
|
||||
}
|
||||
t2 := new(T)
|
||||
var rem []byte
|
||||
if rem, err = t2.Unmarshal(m1); chk.E(err) {
|
||||
t.Fatal(err)
|
||||
}
|
||||
if len(rem) > 0 {
|
||||
t.Fatalf("%s", rem)
|
||||
}
|
||||
var m2 []byte
|
||||
if m2, err = t2.Marshal(nil); chk.E(err) {
|
||||
t.Fatal(err)
|
||||
}
|
||||
if !bytes.Equal(m1, m2) {
|
||||
t.Fatalf("not equal:\n%s\n%s", m1, m2)
|
||||
}
|
||||
_ = m1
|
||||
// todo: unmarshal not currently working
|
||||
// t2 := new(T)
|
||||
// var rem []byte
|
||||
// if rem, err = t2.Unmarshal(m1); chk.E(err) {
|
||||
// t.Fatal(err)
|
||||
// }
|
||||
// if len(rem) > 0 {
|
||||
// t.Fatalf("%s", rem)
|
||||
// }
|
||||
// var m2 []byte
|
||||
// if m2, err = t2.Marshal(nil); chk.E(err) {
|
||||
// t.Fatal(err)
|
||||
// }
|
||||
// if !bytes.Equal(m1, m2) {
|
||||
// log.I.S(m1, m2)
|
||||
// t.Fatalf("not equal:\n%s\n%s", m1, m2)
|
||||
// }
|
||||
}
|
||||
|
||||
@@ -1,44 +1,49 @@
|
||||
package types
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"io"
|
||||
)
|
||||
|
||||
// A T is a type descriptor, that is terminated by a newline.
|
||||
type T []byte
|
||||
type T struct{ t []byte }
|
||||
|
||||
func New[V ~[]byte | ~string](t V) *T { return &T{[]byte(t)} }
|
||||
|
||||
func (t *T) Equal(t2 *T) bool { return bytes.Equal(t.t, t2.t) }
|
||||
|
||||
// Marshal append the T to a slice and appends a terminal newline, and returns
|
||||
// the result.
|
||||
func (t *T) Marshal(dst []byte) (result []byte, err error) {
|
||||
func (t *T) Marshal(d []byte) (r []byte, err error) {
|
||||
if t == nil {
|
||||
return
|
||||
}
|
||||
result = append(append(dst, []byte(*t)...), '\n')
|
||||
r = append(d, t.t...)
|
||||
return
|
||||
}
|
||||
|
||||
// Unmarshal expects an identifier followed by a newline. If the buffer ends
|
||||
// without a newline an EOF is returned.
|
||||
func (t *T) Unmarshal(data []byte) (rem []byte, err error) {
|
||||
rem = data
|
||||
func (t *T) Unmarshal(d []byte) (r []byte, err error) {
|
||||
r = d
|
||||
if t == nil {
|
||||
err = errorf.E("can't unmarshal into nil types.T")
|
||||
return
|
||||
}
|
||||
if len(rem) < 2 {
|
||||
if len(r) < 2 {
|
||||
err = errorf.E("can't unmarshal nothing")
|
||||
return
|
||||
}
|
||||
for i := range rem {
|
||||
if rem[i] == '\n' {
|
||||
for i := range r {
|
||||
if r[i] == '\n' {
|
||||
// write read data up to the newline and return the remainder after
|
||||
// the newline.
|
||||
*t = rem[:i]
|
||||
rem = rem[i+1:]
|
||||
t.t = r[:i]
|
||||
r = r[i:]
|
||||
return
|
||||
}
|
||||
}
|
||||
// a T must end with a newline or an io.EOF is returned.
|
||||
err = io.EOF
|
||||
return
|
||||
}
|
||||
}
|
||||
@@ -1,17 +1,20 @@
|
||||
package types
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"testing"
|
||||
|
||||
"protocol.realy.lol/pkg/separator"
|
||||
)
|
||||
|
||||
func TestT_Marshal_Unmarshal(t *testing.T) {
|
||||
typ := T("note")
|
||||
typ := New("note")
|
||||
var err error
|
||||
var res []byte
|
||||
if res, err = typ.Marshal(nil); chk.E(err) {
|
||||
t.Fatal(err)
|
||||
}
|
||||
res = separator.Add(res)
|
||||
log.I.S(res)
|
||||
t2 := new(T)
|
||||
var rem []byte
|
||||
if rem, err = t2.Unmarshal(res); chk.E(err) {
|
||||
@@ -20,7 +23,7 @@ func TestT_Marshal_Unmarshal(t *testing.T) {
|
||||
if len(rem) > 0 {
|
||||
log.I.S(rem)
|
||||
}
|
||||
if !bytes.Equal(typ, *t2) {
|
||||
if !typ.Equal(t2) {
|
||||
t.Fatal("types.T did not encode/decode faithfully")
|
||||
}
|
||||
}
|
||||
}
|
||||
9
pkg/url/log.go
Normal file
9
pkg/url/log.go
Normal file
@@ -0,0 +1,9 @@
|
||||
package url
|
||||
|
||||
import (
|
||||
"protocol.realy.lol/pkg/lol"
|
||||
)
|
||||
|
||||
var (
|
||||
log, chk, errorf = lol.Main.Log, lol.Main.Check, lol.Main.Errorf
|
||||
)
|
||||
49
pkg/url/url.go
Normal file
49
pkg/url/url.go
Normal file
@@ -0,0 +1,49 @@
|
||||
package url
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"net/url"
|
||||
)
|
||||
|
||||
type U struct{ uu []byte }
|
||||
|
||||
// New creates a new URL codec.C from the provided URL, and validates it.
|
||||
func New[V ~string | []byte](ur V) (uu *U, err error) {
|
||||
uu = new(U)
|
||||
var UU *url.URL
|
||||
if UU, err = url.Parse(string(ur)); chk.E(err) {
|
||||
return
|
||||
} else {
|
||||
// if it is valid, store it
|
||||
uu.uu = []byte(UU.String())
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
func (u *U) String() string { return string(u.uu) }
|
||||
func (u *U) Bytes() []byte { return u.uu }
|
||||
func (u *U) Equal(u2 *U) bool { return bytes.Equal(u.uu, u2.uu) }
|
||||
|
||||
// Marshal a URL, use New to ensure it is valid beforehand. Appends a terminal
|
||||
// newline.
|
||||
func (u *U) Marshal(dst []byte) (result []byte, err error) {
|
||||
result = append(dst, u.uu...)
|
||||
return
|
||||
}
|
||||
|
||||
// Unmarshal decodes a URL and validates it is a proper URL.
|
||||
func (u *U) Unmarshal(data []byte) (rem []byte, err error) {
|
||||
rem = data
|
||||
for i, v := range rem {
|
||||
if v == '\n' {
|
||||
u.uu = rem[:i]
|
||||
rem = rem[i+1:]
|
||||
break
|
||||
}
|
||||
}
|
||||
// validate the URL and return error if not valid.
|
||||
if _, err = url.Parse(string(u.uu)); chk.E(err) {
|
||||
return
|
||||
}
|
||||
return
|
||||
}
|
||||
32
pkg/url/url_test.go
Normal file
32
pkg/url/url_test.go
Normal file
@@ -0,0 +1,32 @@
|
||||
package url
|
||||
|
||||
import (
|
||||
"testing"
|
||||
|
||||
"protocol.realy.lol/pkg/separator"
|
||||
)
|
||||
|
||||
func TestU_Marshal_Unmarshal(t *testing.T) {
|
||||
u := "https://example.com/path/to/resource"
|
||||
var err error
|
||||
var u1 *U
|
||||
if u1, err = New(u); chk.E(err) {
|
||||
t.Fatal(err)
|
||||
}
|
||||
var m1 []byte
|
||||
if m1, err = u1.Marshal(nil); chk.E(err) {
|
||||
t.Fatal(err)
|
||||
}
|
||||
m1 = separator.Add(m1)
|
||||
u2 := new(U)
|
||||
var rem []byte
|
||||
if rem, err = u2.Unmarshal(m1); chk.E(err) {
|
||||
t.Fatal(err)
|
||||
}
|
||||
if len(rem) > 0 {
|
||||
t.Fatalf("'%s' should be empty", string(rem))
|
||||
}
|
||||
if !u2.Equal(u1) {
|
||||
t.Fatalf("u1 should be equal to u2: '%s' != '%s'", u1, u2)
|
||||
}
|
||||
}
|
||||
457
readme.adoc
457
readme.adoc
@@ -1,22 +1,449 @@
|
||||
= REALY Protocol
|
||||
|
||||
____
|
||||
|
||||
relay events and like… yeah
|
||||
|
||||
____
|
||||
:toc:
|
||||
:important-caption: 🔥
|
||||
:note-caption: 🗩
|
||||
:tip-caption: 💡
|
||||
:caution-caption: ⚠
|
||||
:table-caption: 🔍
|
||||
:example-caption: 🥚
|
||||
|
||||
image:https://img.shields.io/badge/godoc-documentation-blue.svg[Documentation,link=https://pkg.go.dev/protocol.realy.lol]
|
||||
image:https://img.shields.io/badge/matrix-chat-green.svg[matrix chat,link=https://matrix.to/#/#realy-general:matrix.org]
|
||||
|
||||
zap mleku: ⚡️mleku@getalby.com
|
||||
|
||||
Inspired by the event bus architecture of https://github.com/nostr-protocol[nostr] but redesigned to avoid the
|
||||
serious deficiencies of that protocol for both developers and users.
|
||||
== about
|
||||
|
||||
* link:./doc/why.adoc[why]
|
||||
* link:./doc/events_queries.adoc[events and queries]
|
||||
* link:./doc/relays.adoc[relays]
|
||||
* link:./relays/readme.md[reference relays]
|
||||
* link:./clients/readme.md[reference clients]
|
||||
* link:./pkg/readme.md[GO libraries]
|
||||
Inspired by the event bus architecture of link:https://github.com/nostr-protocol[nostr] but redesigned to avoid the serious deficiencies of that protocol for both developers and users.
|
||||
|
||||
* link:./relays/readme.adoc[reference relays]
|
||||
* link:./repos/readme.adoc[reference repos]
|
||||
* link:./clients/readme.adoc[reference clients]
|
||||
* link:./pkg/readme.adoc[_GO⌯_ libraries]
|
||||
|
||||
=== why REALY?
|
||||
|
||||
Since the introduction of the idea of a general "public square" style social network as seen with Facebook and Twitter, the whole world has been overcome by something of a plague of mind control brainwashing cults.
|
||||
|
||||
Worse than "Beatlemania" people are being lured into the control of various kinds of "influencers" and adopting in-group words and "challenges" that are more often harmful to the people than actually beneficial like an exercise challenge might be.
|
||||
|
||||
Nostr protocol is a super simple event bus architecture, blended with a post office protocol, and due to various reasons related to the recent buyout of Twitter by Elon Musk, who plainly wants to turn it into the Western version of Wechat, it has become plagued with bad subprotocol designs that negate the benefits of self sovereign identity (elliptic curve asymmetric cryptography) and a dominant form of client that is essentially a travesty of Twitter itself.
|
||||
|
||||
REALY is being designed with the lessons learned from Nostr and the last 30 years of experience of internet communications protocols to aim to resist this kind of Embrace/Extend/Extinguish protocol that has repeatedly been performed on everything from email, to RSS, to threaded forums and instant messaging, by starting with the distilled essence of how these protocols should work so as to not be so easily vulnerable to being coopted by what is essentially in all but name the same centralised event bus architecture of social networks like Facebook and Twitter.
|
||||
|
||||
=== Use Cases
|
||||
|
||||
The main purposes that REALY will target are:
|
||||
|
||||
* synchronous instant messaging protocols with IRC style nickserv and chanserv permissions and persistence, built from the ground up to take advantage of the cryptographic identities, with an intuitive threaded structure that allows users to peruse a larger discussion without the problem of threads of discussion breaking the top level structure
|
||||
* structured document repositories primarily for text media, as a basis for collaborative documentation and literature collections, and software source code (breaking out of the filesystem tree structure to permit much more flexible ways of organising code)
|
||||
* persistent threaded discussion forums for longer form messages than the typical single sentence/paragraph of instant messaging
|
||||
* simple cross-relay data query protocol that enables minimising the data cost of traffic to clients
|
||||
* push style notification systems that can be programmed by the users' clients to respond to any kind of event broadcast to a relay
|
||||
|
||||
=== Architectural Philosophy
|
||||
|
||||
A key concept in the REALY architecture is that of relays being a heterogeneous group of data repositories and relaying systems that are built specific to purpose, such as
|
||||
|
||||
- a chat relay, which does not store any messages but merely bounces messages around ot subscribers,
|
||||
- a document repository, which provides read access to data with full text search capability, that can ne specialised for a singular data format (eg markdown, eg mediawiki, eg code), a threaded, moderated forum, and others,
|
||||
- a directory relay which stores and distributes user metadata such as profiles, relay lists, follows, mutes, deletes and reports
|
||||
- an authentication relay, which can be sent messages to add or remove users from access whitelists and blacklists, that provides this state data to relays it is used by
|
||||
|
||||
A second key concept in REALY is the integration of Lightning Network payments - again mostly copying what is done with Nostr but enabling both pseudonymous micro-accounts and long term subscription styles of access payment, and the promotion of a notion of user-pays - where all data writing must be charged for, and most reading must be paid for.
|
||||
|
||||
Lightning is perfect for this because it can currently cope with enormous volumes of payments with mere seconds of delay for settlement and a granularity of denomination that lends itself to the very low cost of delivering a one-time service, or maintaining a micro-account.
|
||||
|
||||
== Network Protocol
|
||||
|
||||
Following are several important specifications and rationales for the way the messages are encoded and handled.
|
||||
|
||||
=== Simple Plaintext Message Codec
|
||||
|
||||
Features are the equivalent of volume in construction and building architecture.
|
||||
They have an exponential time cost.
|
||||
Most API wire codecs make assumptions about data structures that do not hold for all applications, and it is simpler to make one that fits.
|
||||
Protobuf, for example, does not have any constraints for lengths of binary digits.
|
||||
This can be quite a problem for cryptographic data protocols, which then need to write extra validation code in addition to integrating the generated API message codec.
|
||||
|
||||
The existing `nostr` protocol uses JSON, which is awful, and space inefficient, and complex to parse due to its intolerance of terminal commas and annoying to work with because of its retarded, multi-standards of string escaping.
|
||||
|
||||
Thus instead of giving options for no reason, to developers, we are going to dictate that a plain text based protocol be used, in place of any other option.
|
||||
The performance difference is very minimal and a well designed plaintext message encoding is nearly as efficient as binary, and anyway, decent GZIP compression can also be applied to messages, flattening especially textual content.
|
||||
|
||||
Line structured documents are much more readily amenable to human reading and editing, and `\n`/`;`/`:` is more efficient than `","` as an item separator.
|
||||
Data structures can be much more simply expressed in a similar way as how they are in programming languages.
|
||||
|
||||
It is one of the guiding principles of the Unix philosophy to keep data in plain text, human readable format wherever possible, forcing the interposition of a parser just for humans to read the data adds extra brittleness to a protocol.
|
||||
|
||||
REALY protocol format is extremely simple and should be trivial to parse in any programming language with basic string slicing operators.
|
||||
|
||||
=== Unpadded Base64 Encoding for Fixed Length Binary Fields
|
||||
|
||||
To save space and eliminate the need for ugly `=` padding characters, we invoke link:https://datatracker.ietf.org/doc/html/rfc4648#section-3.2[RFC 4648 section 3.2] for the case of using base64 URL encoding without padding because we know the data length.
|
||||
In this case, it is used for IDs and pubkeys (32 bytes payload each, 43 characters base64 raw URL encoded) and signatures (64 bytes payload, 86 characters base64 raw URL encoded) - the further benefit here is the exact same string can be used in HTTP GET parameters `?key=value&...` context.
|
||||
The standard `=` padding would break this usage as well.
|
||||
|
||||
For those who can't find a "raw" codec for base64, the 32 byte length has 1 `=` pad suffix and the 64 byte length has 2: `==` and this can be trimmed off and added back to conform to this requirement.
|
||||
|
||||
Due to the fact that potentially there can be hundreds if not thousands of these in event content and tag fields the benefit can be quite great, as well as the benefit of being able to use these codes also in URL parameter values - 43 bytes is not so much more than 32 binary bytes and because it is an even number base it is also cheaper to decode.
|
||||
|
||||
=== HTTP for Request/Response, Websockets for Push and Subscriptions
|
||||
|
||||
Only subscriptions require server push messaging pattern, thus all other queries in REALY can be done with simple HTTP POST requests.
|
||||
|
||||
A relay should respond to a `subscribe` request by upgrading from http to a websocket.
|
||||
The client should send this in the header also.
|
||||
|
||||
It is unnecessary messages and work to use websockets for queries that match the HTTP request/response pattern, and by only requiring sockets for APIs that actually need server initiated messaging, the complexity of the relay is greatly reduced.
|
||||
|
||||
There can be a separate subscription type also, where there is delivering the IDs only, or forwarding the whole event.
|
||||
|
||||
HTTP with upgrades to websockets, and in the future HTTP/3 (QUIC) will be possible, have a big advantage of being generic, having a built in protocol for metadata, and are universally supported.
|
||||
|
||||
Socket protocols have a higher overhead in processing, memory and bandwidth compared to simple request/response messages so it is more efficient to be able to support both models, as many times there is one or two subscriptions that might be opened, these can live on one socket per client, but the other requests are momentary so they have no state management cost.
|
||||
If the message type is this type, it makes no sense to do it over transports with a higher cost per byte and per user.
|
||||
A subscription is longer lasting, so it is ok that it takes a little longer to negotiate.
|
||||
|
||||
== Relays
|
||||
|
||||
=== Modular Architecture
|
||||
|
||||
A key design principle employed in REALY is that of relay specialization.
|
||||
|
||||
Instead of making a relay a hybrid event store and router, in REALY a relay does only one thing.
|
||||
Thus there can be
|
||||
|
||||
- a simple event repository that only understands queries to fetch a list of events by ID
|
||||
- a relay that only indexes and keeps a space/time limited cache of events to process filters
|
||||
- a relay that only keeps a full text search index and a query results cache
|
||||
- a relay that only accepts list change CRDT events such as follow, join/create/delete/leave group, block, delete, report and compiles these events into single lists that are accessible to another relay that can use these compiled lists to control access either via explicit lists or by matching filters
|
||||
- a relay that stores and fetches media, including being able to convert and cache such as image size and formats
|
||||
- ...and many others are possible
|
||||
|
||||
By constraining the protocol interoperability compliance down to small simple sub-protocols the ability for clients to maintain currency with other clients and with relays is greatly simplified, without gatekeepers.
|
||||
|
||||
=== The Continuum between Client and Server
|
||||
|
||||
It should be normalized that relays can include clients that query other specialist relays, especially for such things as caching results fetched from other relays.
|
||||
|
||||
Thus one relay can be queried for a filter index, and the list of Event Ids returned can then be fetched from another relay that specialises in storing events and returning them on request by lists of Event Ids, and still other relays could store media files and be able to convert them on demand.
|
||||
|
||||
=== Specifications
|
||||
|
||||
==== Replication Instead of Arbitration
|
||||
|
||||
Along with the use of human-readable type identifiers for documents and the almost completely human-composable event encoding, the specification of REALY is not dependent on any kind of authoritative gatekeeping organisation, but instead organisations can add these to their own specifications lists as they see fit, eliminating a key problem with the operation of the nostr protocol.
|
||||
|
||||
There need not be bureaucratic RFC style specifications, but instead use human-readable names and be less formally described, the formality improving as others adopt it and expand or refine it.
|
||||
|
||||
==== Keeping Specifications With Implementations
|
||||
|
||||
Thus also it is recommended that implementations of any or all REALY servers and clients should keep a copy of the specification documents found in other implementations and converge them to each other as required when their repositories update support to changes and new sub-protocols.
|
||||
|
||||
== Client Message Authentication and Integrity
|
||||
|
||||
All queries and submissions must be authenticated in order to enable a REALY relay to allow access.
|
||||
The signing key does not have to be identifying, but it serves as a HMAC for the messages, as implementations can in fact expose parts of the path to plaintext and at least same-process possible interception.
|
||||
|
||||
Thus access control becomes simple, and privacy also equally simple if the relay is public access to read, the client should default to one-shot keys for each request.
|
||||
|
||||
=== Authentication Message Format
|
||||
|
||||
Authenticating messages, for simplicity, is a simple message suffix.
|
||||
|
||||
.Authenticated Message Encoding
|
||||
[options="header,footer"]
|
||||
|====
|
||||
| Message | Description
|
||||
|`<message payload>\n` | all messages must be terminated with a newline
|
||||
|`<request URL>\n` |
|
||||
|`<unix timestamp in decimal ascii>\n` |
|
||||
|`<public key of signer>\n` |
|
||||
|`<signature>\n` |
|
||||
|====
|
||||
|
||||
For simplicity, the signature is on a separate line, just as it is in the event format, this avoids needing to have a separate codec, and for the same reason the timestamp and public key.
|
||||
|
||||
For reasons of security, a relay should not allow a time skew in the timestamp of more than 15 seconds.
|
||||
|
||||
The signature is upon the Blake 2b message hash of everything up to and including the newline preceding it, and only relates to the HTTP POST payload, not including the header.
|
||||
|
||||
Even subscription messages should be signed the same way, to avoid needing a secondary protocol. "open" relays that have no access control (which is retarded, but just to be complete) must still require this authentication message, but simply the client can use one-shot keys to sign with, as it also serves as a HMAC to validate the consistency of the request data, since it is based on the hash.
|
||||
|
||||
IMPORTANT: One shot keys for requests and publications is recommended especially for the case of users of Tor accessing relays, as this ensures traffic that emerges from the same exit or comes to the same hidden service looks the same. However, it should be also pointed out that a client is likely to query for one specific pubkey on a fairly regular basis which should be considered with respect to triggering the use of a new path in the tor connection (or other anonymizing protocol).
|
||||
|
||||
== RESTful APIs
|
||||
|
||||
HTTP conveniently allows for the use of paths, and a list of key/values for parameters where necessary, to enable a query to stay entirely within the context of a HTTP GET request.
|
||||
|
||||
As such, most queries can be identified simply by the path they refer to, instead of the messaging needing to additionally conatin this context.
|
||||
|
||||
== Capability Messages
|
||||
|
||||
Capabilities are an important concept for an open, extensible network protocol.
|
||||
It is also very important to narrow down the surface of each API in the protocol in order to make it more efficient to deploy.
|
||||
|
||||
One of the biggest mistakes in the design of `nostr` is precisely in the blurring of APIs and even message types together with ambiguous elements to their structure.
|
||||
|
||||
The `COUNT` and `AUTH` protocol method types have this property.
|
||||
Their structure is defined by an implicit data point - the sender of the message, which means parsing the message isn't just identifying it but also reading context.
|
||||
|
||||
Capability *must* be provided at the `/capability` path of the relay's web server path scheme.
|
||||
|
||||
=== Capability Response
|
||||
|
||||
The message that is sent back from a GET request at `/capability` should be as follows:
|
||||
|
||||
.Capability Response
|
||||
[Options="header"]
|
||||
|====
|
||||
| Message | Description
|
||||
| `<protocol name>:<URL of protocol spec>;vX.X.X;` | Protocol name and version, the protocol spec URL.
|
||||
|
||||
_The protocol name must be identical to the message header used in the protocol._
|
||||
|
||||
The version number should be a tag on the commit at the URL that matches the version specified.
|
||||
|
||||
| `<flag>[=<value>];...>` | `flag,...` for relevant flags on the protocol, for example `whitelisted`, so for a `filter` this means "authenticate to read as whitelisted user". All messages must be authenticated, but without this flag any user can use this protocol on this relay. The last flag ends with a newline, not a semicolon.
|
||||
|
||||
By maintaining a very small, method-based definition of protocols, complex flags are not required, in many cases, unnecessary
|
||||
|
||||
| `\n` | Each protocol spec ends with a newline.
|
||||
|====
|
||||
|
||||
NOTE: Because lists of event Ids are relatively small, there should be no need for a limit on a filter with at least one parameter, even if it may yield a > 500kb message this is trivial considering the client can keep this and use it for a long time without needing to do that query again. _This is the reason for separating the filter and fulltext-search from the event retrieval syntax._
|
||||
|
||||
Protocol names should be defined in the same sense as a set of API calls - the details of how to write that exactly differs somewhat for different languages (and may involve checks not native to the language) but they should map to something along similar lines as a link:https://go.dev[_Go⌯_] `interface{}`
|
||||
|
||||
The protocol name is a shortcut and convenience, but should make automatic decisions by clients regarding a capability set simple.
|
||||
|
||||
As per implementation, each capability should be part of a registered list of message types that will match the message sentinel that is also the protocol name, using a registry of available functions.
|
||||
|
||||
== Events
|
||||
|
||||
=== Message Format
|
||||
|
||||
.Event Encoding
|
||||
[options="header,footer"]
|
||||
|====
|
||||
| Message | Description
|
||||
| `<type name>:` | can be anything, hierarchic names like `note/html` `note/md` are possible, or `type.subtype` or whatever
|
||||
| `<pubkey>;` | encoded in URL-base64 with the padding single `=` elided
|
||||
| `<unix second precision timestamp in decimal ascii>\n` | this ends the first line of the event format
|
||||
2+^| tags follow, they end with `\ncontent:<length>`; the end of tags and beginning of content
|
||||
| `key:value;extra;...\n` | zero or more line separated, fields cannot contain a semicolon, end with newline instead of semicolon, key lowercase alphanumeric, first alpha, no whitespace or symbols, only key and following `:` are mandatory
|
||||
| `content:<length>\n` | literally this word on one line *directly* after the newline of the previous, the length value refers to *after* the newline and the end of it MUST be a newline and then the signature
|
||||
| `<content>\n` | any number of further line breaks, last line is signature, everything before signature line is part of the canonical hash
|
||||
2+^| The canonical form is the above, creating the message hash that is generated with SHA256
|
||||
| `sig:<BIP-340 secp256k1 schnorr signature encoded in unpadded URL-base64>\n` | this field would have two padding chars `==`, these should be elided before generating the encoding. The length is always 86 characters/bytes.
|
||||
|====
|
||||
|
||||
==== Example
|
||||
|
||||
```
|
||||
note/adoc:6iiJMRHgRA4SZcc7Jg-k8kD81tJQYpM1saUykC5YCDs;1740226569
|
||||
event:V6zWuopmz3D7pWZyqTZOZtIHlq8LrLAToWNZ9wBbnLo;root
|
||||
event:4g6hb5mpNXupjigkdYU_vim9rnmUhR_mibfkpPs5d2A;root
|
||||
event:jjBUzkXZkD9vwmHqwsCzQP07o-npo-4F-ciA0pWrJr8;root
|
||||
pubkey:DLAJqN-E2n1OLP1gXDnMk2lgra6qYGTULuIJk4KriCk
|
||||
pubkey:j5L8SIYV3yQPhHkp4vbFSTUh4kEbeL9SfZM8CGk5lMs
|
||||
hashtag:Megan Boswell
|
||||
hashtag:#AEWGrandSlam
|
||||
hashtag:2024 BMW
|
||||
hashtag:Censure
|
||||
content:449
|
||||
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.
|
||||
|
||||
Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.
|
||||
|
||||
Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur.
|
||||
|
||||
Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.
|
||||
sig:OX5r0PdsC6p1dXf1Jr225O5bDupLGA-ZGKKxC59GOtqMPXfW9HZQPhURMe24WdrciWwJ0j7R7WWnuS32xFUyjA
|
||||
```
|
||||
|
||||
=== Use in Data Storage
|
||||
|
||||
The encoding is already suitable for encoding to a database, it is optional to use a somewhat more compact binary encoding, especially if the database has good compression like ZST, which will flatten tables of these values quite effectively, with little overhead cost for lowered complexity.
|
||||
|
||||
=== Tags
|
||||
|
||||
Tags are simply a list of `<key>:<field>[;<field>]...\n` and the terminator is the sentinel `\ncontent:<length>\n`
|
||||
|
||||
Common tags would include `event` and `pubkey` and `hashtag` - these are guidelines, the specifics of tag content and syntax is tied to the type, the first string at the top of the event, as described above.
|
||||
|
||||
== Protocols
|
||||
|
||||
Every REALY protocol should be simple and precise, and use HTTP for request/response pattern and only use websocket upgrades for publish/subscribe pattern.
|
||||
|
||||
The list of protocols below can be expanded to add new categories. The design should be as general as possible for each to isolate the application features from the relay processing cleanly.
|
||||
|
||||
== Publication Protocols
|
||||
|
||||
=== store, update and relay
|
||||
|
||||
store\n
|
||||
<event>
|
||||
|
||||
update:<event id>\n
|
||||
<event>
|
||||
|
||||
relay:\n
|
||||
<event>
|
||||
|
||||
Submitting an event to be stored is the same as a result sent from an Event Id query except with the type of operation intended: `store\n` to store an event, `replace:<Event Id>\n` to replace an existing event and `relay\n` to not store but send to subscribers with open matching filters.
|
||||
|
||||
NOTE: Replace will not be accepted if the message type and pubkey are different to the original that is specified.
|
||||
|
||||
The use of specific different types of store requests eliminates the complexity of defining event types as replaceable, by making this intent explicit.
|
||||
A relay can also only allow one of these, such as a pure relay, which only accepts `relay` requests but neither `store` nor `replace`, or any combination of these.
|
||||
The available API calls should be listed in the `capability` response
|
||||
|
||||
An event is then acknowledged to be stored or rejected with a message `ok:<true/false>;<Event Id>;<reason type>:human readable part` where the reason type is one of a set of common types to indicate the reason for the false.
|
||||
|
||||
Events that are returned have the `<subscription Id>:<Event Id>\n` as the first line, and then the event in the format described above afterwards.
|
||||
|
||||
|
||||
There is four basic types of queries in REALY, derived from the `nostr` design, but refined and separated into distinct, small API calls.
|
||||
|
||||
== Query Protocols
|
||||
|
||||
=== events
|
||||
|
||||
A key concept in REALY protocol is minimising the footprint of each API call.
|
||||
Thus, a primary query type is the simple request for a list of events by their ID hash:
|
||||
|
||||
==== Request
|
||||
|
||||
.events request
|
||||
[options="header"]
|
||||
|====
|
||||
| Message | Description
|
||||
|`events:\n` | message header
|
||||
|`<event ID one>\n` | one or more event ID to be returned in the response
|
||||
|====
|
||||
|
||||
Unlike in event tags and content, the `e:` prefix is unnecessary.
|
||||
The previous two query types only have lists of events in return, and to fetch the event a client then must send an `events` request.
|
||||
|
||||
Normally clients will gather a potentially longer list of events and then send Event Id queries in segments according to the requirements of the user interface.
|
||||
|
||||
The results are returned as a series as follows, for each item returned:
|
||||
|
||||
==== Response
|
||||
|
||||
.events response
|
||||
[options="header"]
|
||||
|====
|
||||
| Message | Description
|
||||
|`event:<Event Id>\n`| each event is marked with his header, so `\nevent:` serves as a section marker
|
||||
|`<event>\n`| the full event text as described previously
|
||||
|====
|
||||
|
||||
=== filter
|
||||
|
||||
A filter has one or more of the fields listed below, and headed with `filter`:
|
||||
|
||||
==== Request
|
||||
|
||||
.filter request
|
||||
[options="header"]
|
||||
|====
|
||||
| Message | Description
|
||||
|`filter:\n` | message type header
|
||||
|`types:<one>;<two>;...\n` | these should be the same as the ones that appear in events, and match on the prefix so subtypes, eg `note/text` and `note/html` will both match on `note`.
|
||||
|`pubkeys:<one>;<two>;...\n` | list of pubkeys to only return results from
|
||||
|`timestamp:<since>;<until\n` | either can be empty but not both, omit line for this, both are inclusive
|
||||
|`tags:\n` | these end with a second newline
|
||||
|`<key>:<value>[;...]\n` | only the value can be searched for, and must be semicolon separated for multiple
|
||||
|`...` | several tags can be present, they will act as OR
|
||||
|`\n` | tags end with a second newline
|
||||
|====
|
||||
|
||||
The response message is simply a list of the matching events IDs, which are expected to be in reverse chronological order:
|
||||
|
||||
==== Response
|
||||
|
||||
.filter response
|
||||
[options="header"]
|
||||
|====
|
||||
| Message | Description
|
||||
|`response:filter\n` | message type header, all use `response:` for HTTP style request/response
|
||||
|`<event id>\n` | each event id is separated by a newline
|
||||
|`...` | ...any number of events further.
|
||||
|====
|
||||
|
||||
=== subscribe
|
||||
|
||||
`subscribe` means to request to be sent events that match a filter, from the moment the request is received. Mixing queries and subscriptions is a bad idea because it makes it difficult to specify the expected behaviour from a relay, or client. Thus, a subset of the `filter` is used. The subscription ends when the client sends `unsubscribe` message.
|
||||
|
||||
.subscribe request
|
||||
[options="header"]
|
||||
|====
|
||||
| Message | Description
|
||||
|`subscribe:<subscription id>\n` | the ID is for the use of the client to distinguish between multiple subscriptions on one socket, there can be more than one.
|
||||
|`types:<one>;<two>;...\n` | these should be the same as the ones that appear in events, and match on the prefix so subtypes, eg `note/text` and `note/html` will both match on `note`.
|
||||
|`pubkeys:<one>;<two>;...\n` | list of pubkeys to only return results from
|
||||
|`tags:\n` | these end with a second newline
|
||||
|`<key>:<value>[;...]\n` | only the value can be searched for, and must be semicolon separated for multiple matches
|
||||
|`...` | several tags can be present, they will act as OR
|
||||
|`\n` | tags end with a second newline
|
||||
|====
|
||||
|
||||
NOTE: **There is no timestamp field in a `subscribe`.**
|
||||
|
||||
After a subscribe request the relay will send an acknowledgement:
|
||||
|
||||
.subscribed response
|
||||
[options="header"]
|
||||
|====
|
||||
| Message | Description
|
||||
|`subscribed:<subscription id>\n` |
|
||||
|====
|
||||
|
||||
To close a subscription the client sends an `unsubscribe`:
|
||||
|
||||
.unsubscribe request
|
||||
[options="header"]
|
||||
|====
|
||||
| Message | Description
|
||||
|`unsubscribe:<subscription id>\n` |
|
||||
|====
|
||||
|
||||
|
||||
IMPORTANT: Direct messages, for example, are privileged and can only be sent in response to a query or subscription signed with one of the keys appearing in the message (author or recipient/s)
|
||||
|
||||
The `subscribe` query streams back results containing just the event ID hash, in the following message:
|
||||
|
||||
.subscription response
|
||||
[options="header"]
|
||||
|====
|
||||
| Message | Description
|
||||
|`subscription:<subscription id>:<event id>\n` |
|
||||
|====
|
||||
|
||||
|
||||
The client can then send an `events` query to actually fetch the data.
|
||||
This enables collecting a list and indicating the count without consuming the bandwidth for it until the view is opened.
|
||||
|
||||
=== fulltext
|
||||
|
||||
A fulltext query is just `fulltext:` followed by a series of space separated tokens if the event store has a full text index, terminated with a newline.
|
||||
|
||||
.fulltext request
|
||||
[options="header"]
|
||||
|====
|
||||
| Message | Description
|
||||
|`fulltext:text to do full text search with\n`| search terms are space separated, terminated by newline
|
||||
|====
|
||||
|
||||
The response message is like as the `filter`, the actual fetching of events is a separate operation.
|
||||
|
||||
.fulltext response
|
||||
[options="header"]
|
||||
|====
|
||||
| Message | Description
|
||||
|`response:fulltext\n`| each event is marked with his header, so `\nevent:` serves as a section marker
|
||||
|`<event id>\n`| event id that matches the search terms
|
||||
|`...` | any number of events further, sorted by relevance.
|
||||
|====
|
||||
3
relays/readme.adoc
Normal file
3
relays/readme.adoc
Normal file
@@ -0,0 +1,3 @@
|
||||
= relays
|
||||
|
||||
relays specialise in subscription and relaying, and do not store events
|
||||
@@ -1,3 +0,0 @@
|
||||
# relays
|
||||
|
||||
relay implementations for various subprotocols
|
||||
3
repos/readme.adoc
Normal file
3
repos/readme.adoc
Normal file
@@ -0,0 +1,3 @@
|
||||
= repos
|
||||
|
||||
repos are relays that store data
|
||||
@@ -1,3 +0,0 @@
|
||||
# relays
|
||||
|
||||
relay implementations for various subprotocols
|
||||
Reference in New Issue
Block a user