Jericho Consensus

Overview

Jericho is a Byzantine fault-tolerant consensus mechanism combining CPU-equivalent proof of work with ephemeral bonding and democratic validation. It aims to eliminate monetary privilege (Cantillon effect) while maintaining security through distributed vigilance rather than capital hostage-taking.

Core Components

Block Production

Miners compete using a proof-of-work hash function based on large integer long division—an operation that cannot be meaningfully accelerated by specialized hardware. This ensures fair token issuance where acquisition cost is equivalent regardless of participant scale.

Proof of Work Function

The proof of work uses iterative squaring with long division, expanding the working number until it exceeds 128 MB. This creates memory-hard, CPU-bound computation that resists ASIC acceleration.

Algorithm

function proof_of_work(block_header, nonce) -> hash:
    // Initialize from block header
    seed = SHA3-256(block_header || nonce)
    N = big_int_from_bytes(seed)  // 256-bit initial value
    D = extract_divisor(seed)      // 128-bit divisor from least significant bits

    // Expansion loop: square and divide until size exceeds 128 MB
    while byte_length(N) < 134,217,728:  // 2^27 bytes = 128 MB
        N = N * N                         // Square (doubles bit length)
        D = extract_divisor(N)            // Update divisor from LSB 128 bits
        (Q, R) = divmod(N, D)             // Long division
        N = Q ^ R                         // XOR quotient and remainder

    // Final hash of expanded state
    return SHA3-256(N.to_bytes())
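
For reference, a direct Python transcription of the loop (Python's built-in big integers make it near-verbatim). The target size here is cut from 2^27 bytes to 2^12 so the sketch runs in milliseconds, and the initial divisor extraction from the seed is omitted because the loop recomputes the divisor before its first use:

import hashlib

TARGET_BYTES = 1 << 12   # demo value only; consensus uses 1 << 27 (128 MB)

def extract_divisor(n: int) -> int:
    d = n & ((1 << 128) - 1)   # extract LSB 128 bits
    d |= 1 << 127              # set high bit: D >= 2^127
    d |= 1                     # set low bit: D is odd
    return d

def proof_of_work(block_header: bytes, nonce: int) -> bytes:
    seed = hashlib.sha3_256(block_header + nonce.to_bytes(8, "big")).digest()
    n = int.from_bytes(seed, "big")           # 256-bit initial value
    while (n.bit_length() + 7) // 8 < TARGET_BYTES:
        n = n * n                             # square: roughly doubles bit length
        q, r = divmod(n, extract_divisor(n))  # long division by LSB-derived divisor
        n = q ^ r                             # XOR quotient and remainder
    return hashlib.sha3_256(n.to_bytes((n.bit_length() + 7) // 8, "big")).digest()

print(proof_of_work(b"example header", 0).hex())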

Divisor Extraction

The divisor is derived from the least significant 128 bits of the current number, with safeguards against trivial division:

function extract_divisor(N) -> D:
    raw = N & ((1 << 128) - 1)           // Extract LSB 128 bits
    D = raw | (1 << 127)                  // Set high bit (ensures D >= 2^127)
    D = D | 1                             // Set low bit (ensures D is odd)
    return D

Setting the high bit guarantees the divisor is always at least 2^127, preventing degenerate cases where division barely reduces the number. Setting the low bit ensures the divisor is odd, maximizing the information mixed by the remainder.

Expansion Dynamics

Each iteration approximately doubles the bit length through squaring (the 128-bit division then trims about 128 bits), while the XOR of quotient and remainder preserves most of the entropy:

Iteration | Approximate Size | Operations
----------|------------------|---------------------------
0         | 256 bits         | Initial seed
1         | ~512 bits        | Square → divide → XOR
2         | ~1 Kbit          | Square → divide → XOR
3         | ~2 Kbit          | Square → divide → XOR
...       | ...              | ...
20        | ~32 MB           | Square → divide → XOR
21        | ~64 MB           | Square → divide → XOR
22        | ~128 MB          | Square → divide → XOR (terminal)

The algorithm completes in approximately 22-23 iterations, each requiring:

  • One large integer multiplication (squaring)
  • One large integer division
  • One XOR operation

Memory Hardness

The working number N must be held entirely in memory during computation:

  • Cannot be streamed or computed in chunks (squaring requires full number)
  • Division requires random access across the full dividend
  • Peak memory: ~128 MB per proof attempt

This memory requirement makes the algorithm hostile to:

  • ASICs: Would need 128 MB on-chip memory per parallel unit
  • GPUs: Limited by memory bandwidth for large integer operations
  • FPGAs: Same memory constraints as ASICs

Standard CPUs with large L3 caches and system RAM handle this workload efficiently, equalizing hardware advantage.

CPU Equivalence

Long division of large integers is inherently sequential:

  • Each digit of the quotient depends on the running remainder
  • Cannot be parallelized beyond word-level operations
  • Modern CPUs are highly optimized for this pattern (DIV instructions, branch prediction)

Custom hardware gains minimal advantage because:

  1. Memory bandwidth dominates (128 MB working set)
  2. Division is already near-optimal on CPUs
  3. Squaring uses standard multiplication circuits

Difficulty Adjustment

The proof is valid if the final hash meets the difficulty target:

valid = (SHA3-256(N.to_bytes()) < difficulty_target)

Difficulty target encoding:

The difficulty target is encoded as a variable-length integer using a floating-point-like representation that provides uniform relative precision across the entire difficulty range:

encoded_difficulty = (exponent << mantissa_bits) | mantissa

where:
    mantissa_bits = 24
    exponent:  8 bits (0-255)
    mantissa: 24 bits (0 to 2^24 - 1)

decoded_target = (2^24 + mantissa) × 2^(exponent - 24 - 24)
               = (2^24 + mantissa) × 2^(exponent - 48)

Encoded size: 32 bits (4 bytes) fixed

This encoding provides:

  • Uniform relative error: Each mantissa increment changes the target by ~0.000006% (1/2^24)
  • Full range coverage: from 2^-24 up to nearly 2^232 (exponent 0-255)
  • No wasted precision: Unlike fixed-point, precision scales with magnitude

Encoding/decoding functions:

function encode_target(target) -> uint32:
    if target == 0:
        return 0

    // Find exponent: position of highest bit
    exp = floor(log2(target))

    // Extract 24-bit mantissa (excluding implicit leading 1)
    if exp >= 24:
        mantissa = (target >> (exp - 24)) & 0xFFFFFF
    else:
        mantissa = (target << (24 - exp)) & 0xFFFFFF

    // Adjust exponent for encoding offset
    encoded_exp = exp + 48 - 24

    // Clamp to valid range
    if encoded_exp < 0:
        return 0
    if encoded_exp > 255:
        return 0xFFFFFFFF

    return (encoded_exp << 24) | (mantissa & 0xFFFFFF)


function decode_target(encoded) -> uint256:
    if encoded == 0:
        return 0

    exp = encoded >> 24
    mantissa = encoded & 0xFFFFFF

    // Reconstruct: (2^24 + mantissa) × 2^(exp - 48)
    significand = (1 << 24) | mantissa

    shift = exp - 48
    if shift >= 0:
        return significand << shift
    else:
        return significand >> (-shift)
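
The two functions above are close to executable already; the following Python rendering adds a round-trip check. The sample target is arbitrary:

def encode_target(target: int) -> int:
    if target == 0:
        return 0
    exp = target.bit_length() - 1                # floor(log2(target))
    if exp >= 24:
        mantissa = (target >> (exp - 24)) & 0xFFFFFF
    else:
        mantissa = (target << (24 - exp)) & 0xFFFFFF
    encoded_exp = exp + 24                       # exp + 48 - 24
    if encoded_exp > 255:
        return 0xFFFFFFFF                        # clamp to maximum
    return (encoded_exp << 24) | mantissa

def decode_target(encoded: int) -> int:
    if encoded == 0:
        return 0
    significand = (1 << 24) | (encoded & 0xFFFFFF)
    shift = (encoded >> 24) - 48
    return significand << shift if shift >= 0 else significand >> -shift

target = 0xFFFF << 180                                  # arbitrary test target
assert decode_target(encode_target(target)) == target   # exact: < 25 significant bits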

Adjustment with asymptotic damping:

The proportional difficulty adjustment uses asymptotic damping rather than hard clamping. Large deviations produce diminishing adjustments, preventing both liveness failures and mercenary mining exploits:

function adjust_difficulty(current_encoded, actual_time, expected_time) -> uint32:
    // Compute raw ratio
    raw_ratio = expected_time / actual_time

    // Apply asymptotic damping: large ratios are compressed
    // Uses tanh-like function that approaches ±1 asymptotically
    damped_ratio = dampen(raw_ratio)

    // Decode current target
    current_target = decode_target(current_encoded)

    // Apply adjustment: damped_ratio scales difficulty, so the target divides
    new_target = current_target / damped_ratio

    // Re-encode
    return encode_target(new_target)


function dampen(ratio) -> float:
    // Convert ratio to log scale for symmetric handling
    // ratio > 1 means blocks too fast (decrease target)
    // ratio < 1 means blocks too slow (increase target)

    log_ratio = ln(ratio)

    // Asymptotic compression using scaled tanh
    // k controls the "knee" where damping begins
    // max_log controls the asymptotic limit
    k = 1.0          // damping begins around 2.7× or 0.37×
    max_log = 1.5    // asymptotic limit ~4.5× or ~0.22×

    damped_log = max_log × tanh(log_ratio / k)

    return e^damped_log

Damping curve properties:

Raw Ratio | Log Ratio | Damped Log | Damped Ratio | Effective Adjustment
----------|-----------|------------|--------------|-----------------------------------
0.10      | -2.30     | -1.47      | 0.23         | difficulty ×0.23 (undamped ×0.10)
0.25      | -1.39     | -1.32      | 0.27         | difficulty ×0.27 (undamped ×0.25)
0.50      | -0.69     | -0.90      | 0.41         | difficulty ×0.41 (undamped ×0.50)
0.75      | -0.29     | -0.42      | 0.66         | difficulty ×0.66 (undamped ×0.75)
1.00      |  0.00     |  0.00      | 1.00         | No change
1.33      |  0.29     |  0.42      | 1.52         | difficulty ×1.52 (undamped ×1.33)
2.00      |  0.69     |  0.90      | 2.46         | difficulty ×2.46 (undamped ×2.00)
4.00      |  1.39     |  1.32      | 3.76         | difficulty ×3.76 (undamped ×4.00)
10.00     |  2.30     |  1.47      | 4.35         | difficulty ×4.35 (undamped ×10.0)
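
The table can be regenerated with a few lines of Python using the k = 1.0 and max_log = 1.5 parameters above:

import math

def dampen(ratio: float, k: float = 1.0, max_log: float = 1.5) -> float:
    return math.exp(max_log * math.tanh(math.log(ratio) / k))

for raw in (0.10, 0.25, 0.50, 0.75, 1.00, 1.33, 2.00, 4.00, 10.00):
    print(f"raw {raw:5.2f} -> damped {dampen(raw):.2f}")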

Symmetric protection:

The damping function is symmetric in log-space, providing balanced protection against both failure modes:

Threat                                        | Without Damping                                                        | With Damping
----------------------------------------------|------------------------------------------------------------------------|----------------------------------------------------------
Mercenary spike (10× hashrate for one period) | Difficulty increases 10×, then 90% of honest miners can't find blocks | Difficulty increases ~4.3×, honest miners remain viable
Hashrate collapse (90% miners leave)          | Difficulty stays high, blocks take ~100 minutes                        | Difficulty decreases ~4.3×, blocks recover to ~23 minutes
Sustained 2× hashrate                         | Full 2× adjustment per period                                          | ~2.5× adjustment, slightly faster convergence
Sustained 0.5× hashrate                       | Full 0.5× adjustment per period                                        | ~0.41× adjustment, slightly faster recovery

Liveness guarantee:

The asymptotic limit ensures that even with complete hashrate collapse, difficulty eventually decreases enough for any remaining miners to produce blocks:

Worst case: 99.9% hashrate disappears instantly

Without damping (4× clamp):
    Period 1: 4× easier, blocks still 250× too slow
    Period 2: 4× easier, blocks still 62× too slow
    Period 3: 4× easier, blocks still 15× too slow
    Period 4: 4× easier, blocks still 4× too slow
    Period 5: 4× easier, blocks approach normal
    Recovery: 5 periods × 2016 blocks × (avg ~25 min) = ~175 days

With asymptotic damping:
    Period 1: 4.3× easier, blocks ~230× too slow
    Period 2: 4.3× easier, blocks ~53× too slow
    Period 3: 4.3× easier, blocks ~12× too slow
    Period 4: 4.3× easier, blocks ~3× too slow
    Period 5: ~3× easier, blocks approach normal
    Recovery: Similar duration but smoother, no discontinuities

    Crucially: damping applies per-period, so recovery accelerates
    as the ratio shrinks back toward 1.0

Anti-mercenary properties:

Mercenary miners who briefly contribute massive hashrate to manipulate difficulty are countered:

Attack: Mercenary adds 9× network hashrate for one period

Without damping:
    Period N:   Blocks 10× faster, difficulty increases 10×
    Period N+1: Mercenary leaves, honest miners face 10× difficulty
    Period N+2: Blocks 10× slower, difficulty decreases (clamped to 4×)
    Period N+3: Still 2.5× too hard
    Period N+4: Still too hard...
    Result: Extended period of slow blocks, chain vulnerable

With asymptotic damping:
    Period N:   Blocks 10× faster, difficulty increases ~4.3×
    Period N+1: Mercenary leaves, honest miners face 4.3× difficulty
    Period N+2: Blocks 4.3× slower, difficulty decreases ~3.6×
    Period N+3: Blocks ~1.2× slower, difficulty decreases ~1.15×
    Period N+4: Near equilibrium
    Result: Faster recovery, reduced attack profitability

Fixed sampling window:

The adjustment uses a fixed 2016-block sampling window (not dynamic):

adjustment_trigger = (block_height % 2016 == 0)

actual_time = timestamp[block_height] - timestamp[block_height - 2016]
expected_time = 2016 × 600 seconds

Control properties:
    - Fixed window: predictable, not gameable by timestamp manipulation
    - No integral term: prevents windup from sustained deviations
    - No derivative term: avoids oscillation from noise
    - Asymptotic damping: prevents both liveness failure and mercenary exploitation

Adjustment error analysis:

Adjustment Size | Encoding Error | Damping Distortion | Combined Error
----------------|----------------|--------------------|---------------
1% change       | ±0.000006%     | < 0.1%             | < 0.1%
10% change      | ±0.000006%     | < 1%               | < 1%
50% change      | ±0.000006%     | ~8%                | ~8%
100% change     | ±0.000006%     | ~20%               | ~20%
400% change     | ±0.000006%     | ~40%               | ~40%

The damping distortion is intentional—it prevents extreme adjustments while still responding proportionally to moderate deviations. The 24-bit mantissa ensures encoding error remains negligible at all scales.

Comparison with Bitcoin's encoding:

Property           | Bitcoin (nBits)      | Jericho
-------------------|----------------------|------------------
Mantissa bits      | 23 (with sign)       | 24
Exponent bits      | 8                    | 8
Relative precision | ~0.00001%            | ~0.000006%
Negative values    | Possible (invalid)   | Not representable
Zero handling      | Special case         | Clean encoding
Byte order         | Little-endian quirks | Big-endian, clean

The Jericho encoding eliminates Bitcoin's historical quirks while providing slightly better precision.

Verification

Verifying a proof requires re-executing the expansion loop—there is no shortcut. However, verification is deterministic and completes in the same time as mining one attempt. Since valid blocks are rare (one per 10 minutes network-wide), verification cost is negligible compared to mining cost.

function verify_proof(block_header, nonce, claimed_hash) -> bool:
    computed_hash = proof_of_work(block_header, nonce)
    return computed_hash == claimed_hash AND computed_hash < difficulty_target

Security Properties

Property       | Guarantee
---------------|------------------------------------------
Progress-free  | Each nonce attempt is independent
Memory-hard    | 128 MB working set required
CPU-equivalent | No significant ASIC advantage
Deterministic  | Same inputs always produce same output
Non-reversible | Cannot work backwards from hash to nonce

Token Denomination

Tokens are represented as 128-bit unsigned fixed-point integers:

Token = uint128

Layout:
    [  32 bits whole  |  96 bits fractional  ]

    whole part:      2^32 maximum whole tokens (4,294,967,296)
    fractional part: 2^96 subdivisions per token (~28.9 decimal digits precision)

Terminal supply: 2^32 whole tokens (4,294,967,296 tokens)

This provides:

  • Exact binary arithmetic (no floating-point rounding)
  • Extreme precision for micro-transactions (1/2^96 ≈ 1.26 × 10^-29 tokens)
  • A power-of-two total supply equal to the maximum representable whole value
  • Simple overflow check: any value ≥ 2^128 is invalid

Conversion:

raw_value = whole_tokens × 2^96 + fractional_part
whole_tokens = raw_value >> 96
fractional = raw_value & ((1 << 96) - 1)

Maximum raw value: 2^128 - 1 (represents 2^32 - 2^-96 tokens, just under total supply)
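
A minimal sketch of these conversion rules in Python (the helper names are illustrative, not part of the protocol):

FRAC_BITS = 96
ONE = 1 << FRAC_BITS                    # raw value of one whole token
MAX_RAW = (1 << 128) - 1                # 2^32 - 2^-96 tokens

def to_raw(whole: int, frac: int = 0) -> int:
    raw = (whole << FRAC_BITS) | frac
    assert raw <= MAX_RAW, "overflow: value exceeds 128 bits"
    return raw

def split(raw: int) -> tuple[int, int]:
    return raw >> FRAC_BITS, raw & (ONE - 1)

raw = to_raw(5) + to_raw(0, ONE // 4)   # 5.25 tokens, exact binary arithmetic
assert split(raw) == (5, ONE // 4)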

Token Emission

Block rewards follow a smooth exponential decay equivalent to Bitcoin's halving schedule. Rather than discrete halvings every 210,000 blocks, the reward decreases continuously:

block_reward = base_reward × e^(-λ × block_height)

where:
    base_reward = 2^25 tokens = 33,554,432 tokens
    λ = ln(2) / 210,000 ≈ 0.0000033 (decay constant)
    terminal_supply = base_reward / λ = 2^32 tokens

This produces halving-equivalent checkpoints:
    At block 210,000:  reward = 2^24 tokens (16,777,216)
    At block 420,000:  reward = 2^23 tokens (8,388,608)
    At block 630,000:  reward = 2^22 tokens (4,194,304)
Block Height | Reward (tokens) | Reward (2^n) | Cumulative Supply
-------------|-----------------|--------------|------------------
0            | 33,554,432      | 2^25         | 0
105,000      | 23,726,566      | ~2^24.5      | ~2^30.2
210,000      | 16,777,216      | 2^24         | 2^31
420,000      | 8,388,608       | 2^23         | ~2^31.6
630,000      | 4,194,304       | 2^22         | ~2^31.8
...          | ...             | ...          | ...
→ ∞          | → 0             | → 0          | 2^32

Cumulative supply formula:

total_supply(h) = terminal_supply × (1 - e^(-λ × h))
                = 2^32 × (1 - e^(-0.0000033 × h))

In fixed-point representation:
    raw_supply(h) = 2^128 × (1 - e^(-λ × h))

Implementation note: The exponential can be computed via fixed-point arithmetic using Taylor series or lookup tables with interpolation to avoid floating-point operations in consensus-critical code.
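
As a non-consensus reference, the two curves can be checked directly in floating point (a consensus implementation would use the fixed-point methods noted above):

import math

BASE_REWARD = 1 << 25                     # 33,554,432 tokens
DECAY = math.log(2) / 210_000             # λ, the decay constant

def block_reward(height: int) -> float:
    return BASE_REWARD * math.exp(-DECAY * height)

def total_supply(height: int) -> float:
    return (1 << 32) * (1 - math.exp(-DECAY * height))

print(block_reward(210_000))              # ~16,777,216 (2^24): first halving point
print(total_supply(210_000) / (1 << 32))  # 0.5: half of terminal supply emitted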

Target block time: 10 minutes, adjusted via difficulty retargeting every 2016 blocks.

Block Size

Block size follows an inverse exponential growth pattern—the mirror image of reward decay. As block rewards decrease, block capacity increases, reaching a terminal maximum that can ossify without constraining state channel operations.

block_size_limit(h) = terminal_size × (1 - e^(-λ × h))

where:
    terminal_size = 2^23 bytes (8 MB)
    λ = ln(2) / 210,000 (same decay constant as emission)
    h = block height
Block Height | Size Limit | Equivalent Era
-------------|------------|---------------
0            | 0 bytes    | Genesis
1,000        | ~26 KB     | Bootstrap
10,000       | ~260 KB    | Early
105,000      | ~2.4 MB    | Mid-cycle 1
210,000      | ~4 MB      | Halving 1
420,000      | ~6 MB      | Halving 2
630,000      | ~7 MB      | Halving 3
→ ∞          | 8 MB       | Terminal

Genesis bootstrap: At block 0 the formula yields 0 bytes, so a minimum floor applies:

effective_limit(h) = max(block_size_limit(h), 2^16 bytes)

minimum_floor = 65,536 bytes (64 KB)
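
A small sketch combining the growth formula with the genesis floor:

import math

TERMINAL_SIZE = 1 << 23                  # 8 MB
MIN_FLOOR = 1 << 16                      # 64 KB genesis floor
DECAY = math.log(2) / 210_000

def effective_limit(height: int) -> int:
    grown = TERMINAL_SIZE * (1 - math.exp(-DECAY * height))
    return max(int(grown), MIN_FLOOR)

print(effective_limit(0))                # 65536: the floor applies at genesis
print(effective_limit(210_000))          # ~4194304: half of terminal at first halving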

Rationale for terminal size:

The 8 MB terminal limit is designed to support permanent ossification while preserving state channel capacity:

  1. Channel operations are compact:

    • Channel open: ~500 bytes (funding tx + announcement)
    • Channel close: ~300 bytes (settlement tx)
    • On-chain HTLC resolution: ~200 bytes per HTLC
  2. Terminal throughput calculation:

    At 8 MB blocks, 10-minute intervals:
        ~16,000 channel opens per block
        ~96,000 channel opens per hour
        ~2.3M channel opens per day
    
    With triangular channels (3 parties each):
        ~6.9M new channel participants per day capacity
    
  3. State channel efficiency:

    • Once opened, channels transact off-chain indefinitely
    • On-chain footprint only at open/close/dispute
    • 8 MB supports network of billions of channels with normal churn

Growth-to-ossification path:

Phase     | Blocks    | Size Growth    | Network State
----------|-----------|----------------|------------------------------------
Bootstrap | 0-10K     | 64 KB → 260 KB | Initial validator set, few channels
Growth    | 10K-210K  | 260 KB → 4 MB  | Channel network forming
Maturity  | 210K-630K | 4 MB → 7 MB    | Dense channel graph
Terminal  | 630K+     | 7 MB → 8 MB    | Ossification-ready

After ~12 years (630,000 blocks), block size reaches 87.5% of terminal. The protocol can ossify at any point after sufficient channel density, with 8 MB providing permanent headroom for:

  • Channel churn (open/close cycles)
  • Dispute resolution spikes
  • New user onboarding via channel creation

Interaction with emission:

Block size and emission are inverse curves crossing at the halving points:

emission(h) + normalized_size(h) = 1

where:
    emission(h) = e^(-λh)           // decaying
    normalized_size(h) = 1 - e^(-λh) // growing

At h = 210,000 (first halving):
    emission = 0.5 (50% of initial reward)
    size = 0.5 (50% of terminal size = 4 MB)

This creates natural economic balance: early blocks have high rewards but low capacity; mature blocks have low rewards but high capacity. Validators are compensated primarily through fees as block space becomes the scarce resource.

Block rewards are distributed:

  • 70% to the block producer (miner)
  • 20% split among endorsing validators (proportional to trust weight)
  • 10% to the challenge reserve pool (funds successful fraud proof rewards)

Ephemeral Bonding

To propose or endorse a block, validators must post a temporary bond locked for a fixed challenge period. Bonds release automatically if unchallenged, creating rolling capital commitment without permanent lock-up. Validators who stop bonding automatically lose consensus influence—no explicit exit required.

Bond Parameters:

minimum_proposer_bond = 10 tokens
minimum_endorser_bond = 1 token
challenge_period = 100 blocks (~16.7 hours)
challenger_bond = 0.1 × proposer_bond

Democratic Endorsement

Blocks require endorsement from validators whose trust weight exceeds a threshold. Trust flows through a web of attestations between participants. Attestation weight decays with bonding inactivity, causing the trust graph to self-prune inactive or abandoned nodes.

Endorsement threshold: A block is considered endorsed when validators representing ≥67% of active trust weight sign the block within the endorsement window (10 blocks after proposal).

Adversarial Challenge Layer

During the challenge period, any participant may submit a fraud proof disputing block validity. Challengers post a smaller bond; the dispute resolves on-chain. Losers forfeit their bond to winners. This creates 1-of-N security: a single honest challenger defeats fraud.

Fraudulent Block Definition

A block is fraudulent if it violates any of the following validity rules:

Transaction Validity Fraud

Fraud Type           | Description                                   | Proof Required
---------------------|-----------------------------------------------|--------------------------------------------------
Double spend         | Transaction spends already-spent output       | Both conflicting transactions with merkle proofs
Invalid signature    | Transaction signature fails verification      | The invalid transaction + public key
Insufficient balance | Spend exceeds available balance               | Account state proof showing balance < spend
Invalid script       | Script execution fails                        | Transaction + execution trace
Replay attack        | Transaction replayed from different context   | Original transaction + replay context

Block Structure Fraud

Fraud Type          | Description                                       | Proof Required
--------------------|---------------------------------------------------|-----------------------------------
Invalid PoW         | Block hash does not meet difficulty target        | Block header
Invalid merkle root | Declared root mismatches transaction set          | Transaction list + computed root
Invalid parent      | Parent hash references non-existent/invalid block | Block header + chain state
Timestamp violation | Timestamp outside acceptable window               | Block header + median time past
Oversized block     | Block exceeds maximum size limit                  | Block header + size proof

Consensus Rule Fraud

Fraud Type           | Description                              | Proof Required
---------------------|------------------------------------------|------------------------------------------------
Invalid coinbase     | Block reward exceeds allowed emission    | Coinbase transaction + block height
Missing endorsements | Insufficient trust weight endorsed       | Block header + endorsement set
Bond violation       | Proposer bond insufficient or expired    | Proposer identity + bond state
Vesting violation    | Tokens spent before vesting release      | Transaction + vesting schedule proof
Invalid state root   | Post-execution state root incorrect      | Pre-state + transactions + computed post-state

Fraud Proof Structure

fraud_proof = {
    block_hash:     hash of disputed block
    fraud_type:     enumerated fraud category
    evidence:       type-specific proof data
    witness:        merkle proofs for state access
    challenger:     challenger public key
    bond_tx:        challenger bond transaction
    signature:      proof of challenger key ownership
}

Fraud Resolution

  1. Submission: Challenger submits fraud proof with bond (0.1 × proposer bond)
  2. Verification window: 288 blocks (~48 hours) for on-chain verification
  3. Adjudication: Deterministic verification of proof validity
  4. Settlement:
    • If fraud proven: proposer loses bond, challenger receives proposer bond + own bond return + challenge reserve bonus
    • If fraud not proven: challenger loses bond to proposer

fraud_payout = {
    to_challenger: proposer_bond + challenger_bond + reserve_bonus
    reserve_bonus: min(proposer_bond × 0.5, challenge_reserve_balance × 0.01)
}

Trust Attestation System

Trust attestations form the foundation of validator influence in the consensus. Unlike stake-weighted systems, trust is earned through explicit peer endorsement and sustained by ongoing participation.

Attestation Structure

attestation = {
    attester:       attester public key
    attestee:       attestee public key
    weight:         integer weight (1-100)
    expiry:         block height when attestation expires (0 = no expiry)
    conditions:     optional behavioral conditions
    signature:      attester signature over attestation
}

Trust Weight Calculation

Each validator's effective trust weight is computed as:

trust_weight(V) = Σ(attestation_weight(A, V) × attester_effectiveness(A))
                  for all attesters A who attested to V

attestation_weight(A, V) = base_weight(A, V) × time_decay(A, V) × activity_factor(A)

where:
    base_weight(A, V) = weight declared in attestation (1-100)
    time_decay(A, V) = 0.99^(blocks_since_attestation / 1000)
    activity_factor(A) = min(1, bonding_blocks_last_10000 / 5000)

Attester Effectiveness

An attester's own effectiveness determines how much weight their attestations carry:

attester_effectiveness(A) = trust_weight(A) × activity_factor(A) × reputation_factor(A)

reputation_factor(A) = 1 - (slashing_events(A) × 0.1)
                       clamped to [0, 1]

This creates recursive trust: validators who are themselves trusted carry more attestation weight.
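
An illustrative computation of a single validator's trust weight from these formulas; the attestation records are hypothetical, and the recursion bottoms out by taking each attester's own trust weight as an input:

def time_decay(blocks_since: int) -> float:
    return 0.99 ** (blocks_since / 1000)

def activity_factor(bonding_blocks_last_10000: int) -> float:
    return min(1.0, bonding_blocks_last_10000 / 5000)

def reputation_factor(slashing_events: int) -> float:
    return max(0.0, min(1.0, 1 - slashing_events * 0.1))

# Hypothetical attesters for validator V:
# (declared weight, blocks since signed, attester bonding blocks in the
#  last 10,000, attester's own trust weight, attester slashing events)
attestations = [
    (80, 2_000, 9_000, 150.0, 0),
    (40, 8_000, 3_000, 60.0, 1),
]

trust_weight_v = sum(
    weight * time_decay(age) * activity_factor(bonding)                   # attestation_weight
    * own_trust * activity_factor(bonding) * reputation_factor(slashes)   # attester_effectiveness
    for weight, age, bonding, own_trust, slashes in attestations
)
print(trust_weight_v)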

Trust Graph Properties

Transitivity limits: Trust does not flow transitively beyond direct attestations. If A trusts B and B trusts C, A does not automatically trust C. This prevents trust inflation.

Decay mechanics: Attestations decay over time unless refreshed. An attestation loses ~1% of its weight every 1000 blocks (~1 week). Attesters must periodically re-sign to maintain trust relationships.

Activity requirement: Attesters who stop bonding see their attestation effectiveness drop. After 10,000 blocks (~10 weeks) of inactivity, attestations from that identity carry zero weight.

Attestation Operations

Operation           | Bond Required | Effect
--------------------|---------------|-----------------------------------
Create attestation  | 0.1 tokens    | New trust edge in graph
Update weight       | 0.05 tokens   | Modify existing attestation weight
Revoke attestation  | 0 tokens      | Remove trust edge immediately
Refresh attestation | 0.01 tokens   | Reset decay timer

Trust Threshold for Consensus

endorsement_threshold = 0.67 × total_active_trust_weight

total_active_trust_weight = Σ(trust_weight(V) × activity_factor(V))
                            for all validators V with activity_factor > 0.1

A block achieves consensus when:

Σ(trust_weight(E)) ≥ endorsement_threshold
for all endorsers E who signed the block

Sybil Resistance in Trust

The trust system resists Sybil attacks through:

  1. Identity cost: Creating validators costs ~12 weeks CPU work
  2. Probation period: New identities have 10% attestation effectiveness for 1000 blocks
  3. No self-attestation: Validators cannot attest to themselves
  4. Attestation limits: Each validator can create max 100 active attestations
  5. Collusion detection: Highly interconnected clusters with no external attestations receive reduced weight

cluster_penalty(C) = 1 - (internal_edges(C) / (internal_edges(C) + external_edges(C)))^2

where cluster C is detected via graph community analysis
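
A worked example, reading the penalty as the fraction of weight a cluster retains (the interpretation implied by "reduced weight" above):

def cluster_penalty(internal_edges: int, external_edges: int) -> float:
    frac = internal_edges / (internal_edges + external_edges)
    return 1 - frac ** 2

print(cluster_penalty(90, 10))   # 0.19: a tightly-knit cartel cluster keeps ~19% of its weight
print(cluster_penalty(10, 90))   # 0.99: a well-connected cluster is barely penalized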

Trust Bootstrap at Genesis

At network genesis, a predefined set of bootstrap attestations creates initial trust:

genesis_attestations = [
    {attester: genesis_key_1, attestee: genesis_key_2, weight: 50},
    {attester: genesis_key_2, attestee: genesis_key_1, weight: 50},
    ...
]

These bootstrap identities must have completed the work-bond registration. After 10,000 blocks, genesis attestations decay like any other, and the network trust graph becomes purely organic.

Identity Bootstrap and Sybil Resistance

New validator identities must satisfy a work-bond requirement before participating in consensus. This mechanism serves dual purpose: bootstrapping the network without privileged token distribution, and creating ongoing Sybil resistance through identity cost.

Work-Bond Mechanism

Every new public key entering the validator set must submit a proof-of-work demonstrating sustained computational commitment:

identity_proof = {
    pubkey:     new validator public key
    work:       PoW solution (long division hash)
    timestamp:  block height at submission
    signature:  proof of key ownership
}

The work requirement targets approximately twelve weeks of continuous single-thread CPU computation at genesis, establishing genuine cost for identity creation.

Adaptive Identity Difficulty

As network token supply matures, identity cost transitions from pure computation to capital:

identity_cost = max(work_requirement, token_bond_equivalent)

where:
    work_requirement = base_difficulty × decay_factor(supply)
    token_bond_equivalent = minimum_bond × time_weight_factor
    decay_factor(supply) = 2^(-supply / halving_constant)
Network Maturity   | Work Requirement | Token Alternative
-------------------|------------------|------------------------------
Genesis            | ~12 weeks CPU    | Not available
Early (low supply) | ~6 weeks CPU     | Expensive (scarce tokens)
Growth             | ~2 weeks CPU     | Moderate token bond
Mature             | ~1 week CPU      | Standard token bond preferred

This creates equivalence: early participants pay in computation what later participants pay in capital. Neither enjoys privilege over the other.

Sybil Economics

Creating validator identities always costs real resources:

sybil_attack_cost = N × identity_cost

where N = number of fake identities needed

At any network stage, identity cost equals approximately 12 weeks CPU work OR equivalent token bond.

To achieve 33% Byzantine threshold with 100 honest validators:

  • Need ~50 Sybil identities
  • Cost: 600 weeks CPU time OR 50 × token_bond

Identity cost scales with ambition. Casual participation is accessible; capture attempts are expensive.

Identity Lifecycle

┌─────────────────────────────────────────────────────────┐
│ 1. REGISTRATION                                         │
│    Submit identity_proof with PoW or token bond         │
│    Identity enters probationary state                   │
└────────────────────────┬────────────────────────────────┘
                         │
                         ▼
┌─────────────────────────────────────────────────────────┐
│ 2. PROBATION (1000 blocks)                              │
│    Can bond and validate                                │
│    Attestation weight: 10% of normal                    │
│    Cannot attest to other new identities                │
└────────────────────────┬────────────────────────────────┘
                         │
                         ▼
┌─────────────────────────────────────────────────────────┐
│ 3. ACTIVE                                               │
│    Full participation rights                            │
│    Attestation weight grows with bonding activity       │
│    Can attest to new identities                         │
└────────────────────────┬────────────────────────────────┘
                         │
                         ▼
┌─────────────────────────────────────────────────────────┐
│ 4. INACTIVE (no bonding for N blocks)                   │
│    Attestation weight decays                            │
│    Eventually: identity suspended                       │
│    Reactivation: reduced work-bond (not full)           │
└─────────────────────────────────────────────────────────┘

Reward Vesting Schedule

Block rewards vest over one year following a halving pattern with progressively doubling periods. Each of the first six periods releases double the previous amount over double the time, holding the release rate constant while extending commitment; the remaining half of the reward vests at the one-year mark.

Schedule Structure

Period | Duration   | Cumulative Time | Release        | Cumulative Release
-------|------------|-----------------|----------------|-------------------
1      | 5.7 days   | 5.7 days        | 1/128 (0.78%)  | 0.78%
2      | 11.4 days  | 17.1 days       | 1/64 (1.56%)   | 2.34%
3      | 22.8 days  | 39.9 days       | 1/32 (3.13%)   | 5.47%
4      | 45.6 days  | 85.5 days       | 1/16 (6.25%)   | 11.72%
5      | 91.3 days  | 176.8 days      | 1/8 (12.5%)    | 24.22%
6      | 182.5 days | 359.3 days      | 1/4 (25%)      | 49.22%
7      | 5.7 days   | 365 days        | ~1/2 (50.78%)  | 100%
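
A sketch of the vested fraction as a function of elapsed days, assuming linear release within each period (the schedule above only pins down period boundaries):

PERIODS = [  # (duration in days, fraction released over that period)
    (5.7, 1 / 128), (11.4, 1 / 64), (22.8, 1 / 32),
    (45.6, 1 / 16), (91.3, 1 / 8), (182.5, 1 / 4),
]

def vested_fraction(day: float) -> float:
    vested, elapsed = 0.0, 0.0
    for duration, release in PERIODS:
        if day >= elapsed + duration:
            vested += release
            elapsed += duration
        else:  # pro-rata within the current period (linear assumption)
            return vested + release * (day - elapsed) / duration
    return 1.0 if day >= 365 else vested  # remaining ~50.78% vests at day 365

print(f"{vested_fraction(176.8):.4f}")  # ~0.2422 after period 5
print(f"{vested_fraction(365):.4f}")    # 1.0000 at the one-year mark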

Game-Theoretic Effect

Exit Timing           | Reward Forfeited
----------------------|-----------------
Early exit (1 month)  | ~95%
Early exit (3 months) | ~88%
Early exit (6 months) | ~75%
Full year commitment  | 0%

This creates:

  • Continuous incentive: Every additional day of honest participation unlocks more value
  • Compounding commitment: Later periods contain majority of reward, binding validators to long-term network health
  • Attack deterrent: Misbehavior forfeits substantial unvested rewards beyond immediate bond loss
  • Anti-mercenary pressure: Quick profit-taking captures minimal value; patience rewarded exponentially

Forfeiture Mechanics

Unvested rewards from slashed or departed validators return to the block reward pool, benefiting remaining honest participants. This creates collective incentive to identify and challenge bad actors early.

Countermeasures

Random Audits (Anti-Apathy)

Each block, a deterministic random selection obliges specific validators to demonstrate liveness and state awareness within a response window. Failure results in attestation weight reduction and partial bond slashing. Consecutive failures trigger temporary exclusion. This unpredictable obligation prevents passive participation.

Time-Weighted Bonding (Anti-Concentration)

Newly bonded capital enters at reduced effectiveness (10%), scaling to full weight over thousands of blocks. This prevents flash concentration attacks and rewards patient commitment over mercenary capital deployment. Combined with sublinear influence curves, small long-term participants can match large short-term holders.

Security Model

Threat                | Countermeasure
----------------------|-------------------------------------------------------
Hardware privilege    | CPU-equivalent PoW (long division)
Validation fraud      | Economic bonds + adversarial challenges
Voter apathy          | Random audits with slashing
Capital concentration | Time-weighted bonding + sublinear returns
Mercenary behavior    | Progressive vesting over one year
Zombie validators     | Automatic decay via bonding inactivity
Sybil attacks         | Work-bond identity cost (~12 weeks CPU per identity)
Bootstrap privilege   | Work-to-capital transition preserves cost equivalence

Properties

Aspect             | Mechanism
-------------------|------------------------------------------------------------
Sybil resistance   | Work-bond identity registration (~12 weeks CPU)
Zombie elimination | Automatic via bonding inactivity decay
Attack cost        | Bond loss + challenge certainty + vested reward forfeiture
Entry barrier      | Moderate work OR equivalent capital (never free)
Governance         | Democratic endorsement with activity-weighted voting
Commitment horizon | One year through vesting schedule
Bootstrap fairness | Work-capital equivalence across network maturity

Trade-offs

Jericho sacrifices the passive security of capital hostage-taking (Bitcoin's ASIC lock-in) for active security through distributed vigilance. It requires ongoing participation but offers fair issuance, accessible entry, and resistance to monetary privilege accumulation. The vesting schedule creates soft lock-in through future rewards rather than stranded hardware, aligning incentives without creating permanent barriers to exit.

The work-bond identity mechanism ensures that validator creation always costs real resources—computation early, capital later—eliminating both Sybil attacks and privileged genesis distribution. Early and late participants face equivalent barriers in different currencies.

Cryptographic Primitives and Address Model

Signature Scheme

The protocol uses secp256k1 Schnorr signatures for all authentication:

Key generation:
    private_key = random 256-bit integer mod n
    public_key = private_key × G (33 bytes compressed, or 32 bytes x-only)

Signature:
    sig = schnorr_sign(private_key, message)
    sig_size = 64 bytes (r: 32 bytes, s: 32 bytes)

Verification:
    valid = schnorr_verify(public_key, message, sig)

Schnorr advantages over ECDSA:

  • Provable security under standard assumptions
  • Linear signature aggregation (n signatures → 1 signature for n-of-n multisig)
  • Batch verification (~2× faster for multiple signatures)
  • Simpler, non-malleable signatures

Note on quantum resistance: secp256k1 Schnorr signatures are not quantum-resistant. Shor's algorithm can solve the discrete logarithm problem, enabling key recovery from exposed public keys. The single-use address model (below) limits exposure, but does not provide full quantum security. Post-quantum migration may be considered before protocol ossification.

Address Model

Addresses are SHA-256 hashes of public keys, not raw public keys:

address = SHA256(public_key)

address_size = 32 bytes

Public key hiding: The public key is not revealed until spending. This provides:

  • Partial quantum protection (public key unknown until spend)
  • Smaller on-chain footprint for unspent outputs
  • Address commitments possible before key generation

Single-Use Address Enforcement

The protocol rejects any transaction that spends to an address that has previously appeared as an output.

consensus_rule:
    for each output in transaction.outputs:
        if output.address ∈ historical_output_addresses:
            REJECT transaction as invalid

Rationale:

Problem                    | Single-Use Solution
---------------------------|------------------------------------------
Public key exposure        | Key revealed only once, at spend time
Address clustering         | No repeated addresses to link
Quantum harvest attacks    | Exposed keys cannot receive future funds
Change address correlation | Forced fresh addresses break heuristics

Implementation:

Nodes maintain a bloom filter or set of all historical output addresses:

historical_addresses: Set<Address>

on_new_block(block):
    for tx in block.transactions:
        for output in tx.outputs:
            if output.address in historical_addresses:
                reject_block("address reuse detected")
            historical_addresses.add(output.address)

The storage overhead is ~32 bytes per historical output. A probabilistic structure (such as a bloom filter) can serve as a fast negative check, but positive matches must be confirmed against exact storage, since consensus validation cannot tolerate false positives.
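
A minimal sketch of the consensus check using an exact set:

historical_addresses: set[bytes] = set()

def validate_block_outputs(output_addresses: list[bytes]) -> bool:
    """Reject any output paying to a previously seen address."""
    for address in output_addresses:
        if address in historical_addresses:
            return False                        # address reuse detected
    historical_addresses.update(output_addresses)
    return True

assert validate_block_outputs([b"addr1", b"addr2"])
assert not validate_block_outputs([b"addr2"])   # reuse is rejected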

Wallets should implement automated UTXO management to minimize address linkage while maintaining efficient coin selection. The recommended strategy uses interleaved change addresses that distribute change outputs across the wallet's value range.

UTXO Set Structure

Wallets maintain UTXOs sorted by value:

wallet_utxos = [
    {address: A1, value: 0.5},
    {address: A2, value: 2.3},
    {address: A3, value: 5.0},
    {address: A4, value: 12.7},
    {address: A5, value: 50.0},
]

Interleaved Change Strategy

When spending requires change, generate 2-3 change outputs that interleave with existing UTXOs by value:

function create_change_outputs(change_amount, wallet_utxos) -> [output]:
    // Determine number of change outputs (2-3)
    num_changes = 2 + (random() % 2)

    // Find value gaps in existing UTXO set
    gaps = find_value_gaps(wallet_utxos)

    // Distribute change across gaps
    change_outputs = []
    remaining = change_amount

    for i in 0..(num_changes - 1):
        // Target a value that falls within a gap
        target_value = select_gap_value(gaps[i % len(gaps)])

        // Allocate portion of change (with randomization)
        if i == num_changes - 1:
            portion = remaining
        else:
            portion = remaining × (0.3 + random() × 0.4)  // 30-70% of remaining

        change_outputs.append({
            address: generate_fresh_address(),
            value: portion
        })
        remaining -= portion

    return change_outputs


function find_value_gaps(utxos) -> [gap]:
    gaps = []
    sorted_utxos = sort_by_value(utxos)

    for i in 0..(len(sorted_utxos) - 1):
        gap = {
            lower: sorted_utxos[i].value,
            upper: sorted_utxos[i + 1].value,
            midpoint: (sorted_utxos[i].value + sorted_utxos[i + 1].value) / 2
        }
        gaps.append(gap)

    // Add gaps at extremes
    if len(sorted_utxos) > 0:
        gaps.append({lower: 0, upper: sorted_utxos[0].value})
        gaps.append({lower: sorted_utxos[-1].value, upper: ∞})

    return gaps

Example: Interleaved Change

Before transaction:
    UTXOs: [0.5, 2.3, 5.0, 12.7, 50.0]
    Gaps:  [0-0.5, 0.5-2.3, 2.3-5.0, 5.0-12.7, 12.7-50.0, 50.0+]

Payment: 8.0 tokens from the 12.7 UTXO
Change:  4.7 tokens to distribute

Interleaved change outputs (example):
    Change 1: 1.8 tokens (fills gap 0.5-2.3)
    Change 2: 2.9 tokens (fills gap 2.3-5.0)

After transaction:
    UTXOs: [0.5, 1.8, 2.3, 2.9, 5.0, 50.0]

Result: Change is distributed, not concentrated in one traceable output

Privacy Properties

Heuristic Attack                   | Mitigation
-----------------------------------|--------------------------------------------------
Largest output is payment          | Multiple change outputs obscure which is payment
Change returns to same value range | Interleaving distributes across ranges
Round number detection             | Randomized splits avoid round numbers
UTXO count fingerprinting          | Gradual UTXO count changes, not dramatic
Value clustering                   | Values spread across wallet's range

Coin Selection Algorithm

When selecting inputs for a payment:

function select_inputs(payment_amount, wallet_utxos) -> [utxo]:
    // Prefer single UTXO if possible (minimizes linkage)
    for utxo in wallet_utxos:
        if utxo.value >= payment_amount AND utxo.value < payment_amount × 1.5:
            return [utxo]

    // Otherwise, select minimal set with preference for:
    // 1. UTXOs that will produce change fitting existing gaps
    // 2. Older UTXOs (reduce UTXO set age variance)
    // 3. UTXOs from different historical transactions (if metadata available)

    selected = []
    total = 0

    candidates = sort_by_selection_score(wallet_utxos, payment_amount)

    for utxo in candidates:
        selected.append(utxo)
        total += utxo.value
        if total >= payment_amount + min_change:
            break

    return selected


function selection_score(utxo, payment_amount, wallet_utxos) -> float:
    // Score based on how well resulting change fits gaps
    potential_change = utxo.value - payment_amount
    gap_fit = score_gap_fit(potential_change, wallet_utxos)

    // Prefer older UTXOs
    age_score = utxo.confirmation_height / current_height

    // Combine scores
    return gap_fit × 0.6 + age_score × 0.4

UTXO Consolidation

Periodically consolidate UTXOs during low-fee periods:

function consolidation_candidates(wallet_utxos) -> [utxo]:
    // Identify dust or near-dust UTXOs
    dust_threshold = median(wallet_utxos.values) × 0.01

    candidates = [u for u in wallet_utxos if u.value < dust_threshold]

    // Only consolidate if it improves UTXO distribution
    if len(candidates) >= 3:
        return candidates
    else:
        return []

Consolidation creates a single new UTXO at a value that fills
the largest gap in the remaining UTXO distribution.

Wallet Implementation Requirements

Compliant wallets MUST:

  1. Generate fresh addresses for all outputs (enforced by consensus)
  2. Implement interleaved change (RECOMMENDED for privacy)
  3. Never expose public keys until spending
  4. Maintain UTXO metadata for intelligent selection

Compliant wallets SHOULD:

  1. Use 2-3 change outputs for transactions with significant change
  2. Randomize change value distribution (not exact splits)
  3. Consolidate dust during low-fee periods
  4. Avoid creating outputs smaller than median fee × 100

Collusion Resistance and Protocol Ossification

The protocol is designed to resist capture by cartels seeking to modify consensus rules, inject arbitrary data, or expand scripting capabilities. These defenses promote eventual ossification—a stable end-state where the protocol becomes immutable.

No On-Chain Governance

Protocol changes cannot be enacted through on-chain voting or stake-weighted governance:

  • No upgrade mechanism: The protocol has no built-in facility for rule changes
  • No signaling bits: Block headers contain no version bits or feature flags
  • No admin keys: No privileged keys can modify consensus parameters

Any protocol change requires a hard fork—a new software release that honest nodes must independently choose to adopt. Cartels cannot use captured trust weight or bonded capital to amend rules; they can only propose forks that the network may reject.

Minimal Transaction Format

Transactions follow a fixed schema with no extension points:

transaction = {
    version:     1 (fixed, no upgrade path)
    inputs:      [input, ...]
    outputs:     [output, ...]
    lock_time:   absolute block height or timestamp
    witnesses:   [witness, ...]
}

input = {
    prev_tx:     transaction hash
    prev_index:  output index
    sequence:    relative timelock (if enabled)
}

output = {
    amount:      token amount (128-bit fixed-point)
    script:      locking script (restricted grammar)
}

witness = {
    signatures:  [signature, ...]
    preimages:   [hash preimage, ...] (for HTLCs)
}

No arbitrary data fields: There is no OP_RETURN equivalent, no memo field, no metadata extension. Every byte must conform to the transaction schema.

Restricted Script Grammar

Locking scripts use a minimal, non-Turing-complete grammar supporting only payment operations:

script := pubkey_lock
        | multisig_lock
        | timelock_wrapper(script)
        | hashlock_wrapper(script)

pubkey_lock     := CHECK_SIG(pubkey)
multisig_lock   := CHECK_MULTISIG(threshold, [pubkey, ...])
timelock_wrapper := IF block_height >= N THEN script
hashlock_wrapper := IF SHA256(witness) == hash THEN script

Supported operations (exhaustive list):

Operation          | Function
-------------------|-----------------------------------------------
CHECK_SIG          | Verify single signature against pubkey
CHECK_MULTISIG     | Verify n-of-m signatures
CHECK_TIMELOCK_ABS | Require minimum block height
CHECK_TIMELOCK_REL | Require minimum blocks since input confirmed
CHECK_HASHLOCK     | Require preimage of specified hash

Explicitly excluded:

  • Loops or recursion
  • Arithmetic beyond comparison
  • String manipulation
  • External data references (oracles)
  • Dynamic script generation
  • Stack manipulation beyond witness consumption

This grammar supports all necessary payment patterns:

  • Simple transfers (single signature)
  • Joint custody (multisig)
  • Vesting/escrow (timelocks)
  • Atomic swaps and HTLCs (hashlocks)
  • State channel operations (multisig + timelocks + hashlocks)

Complex smart contracts are not possible and cannot be added without a hard fork that redefines the script grammar.
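
To illustrate how small the grammar is, a toy Python evaluator covering three of the node types (CHECK_MULTISIG and relative timelocks are omitted for brevity, and signature verification is stubbed):

import hashlib
from dataclasses import dataclass

@dataclass
class PubkeyLock:
    pubkey: bytes

@dataclass
class Timelock:
    min_height: int
    inner: "Script"

@dataclass
class Hashlock:
    digest: bytes
    inner: "Script"

Script = PubkeyLock | Timelock | Hashlock

def check_sig(pubkey: bytes, signature: bytes | None) -> bool:
    return signature is not None     # stub: a real node verifies Schnorr here

def evaluate(script: Script, witness: dict, height: int) -> bool:
    if isinstance(script, PubkeyLock):
        return check_sig(script.pubkey, witness.get("signature"))
    if isinstance(script, Timelock):
        return height >= script.min_height and evaluate(script.inner, witness, height)
    if isinstance(script, Hashlock):
        preimage = witness.get("preimage", b"")
        return (hashlib.sha256(preimage).digest() == script.digest
                and evaluate(script.inner, witness, height))
    return False

# An HTLC-style lock: hash-locked, time-wrapped single-signature spend.
htlc = Timelock(100, Hashlock(hashlib.sha256(b"secret").digest(), PubkeyLock(b"pk")))
assert evaluate(htlc, {"signature": b"sig", "preimage": b"secret"}, height=150)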

Content-Addressed Block Structure

Every byte in a block is accounted for:

block = {
    header:      block_header
    transactions: [transaction, ...]
}

block_header = {
    version:          1 (fixed)
    prev_block:       hash of previous block header
    merkle_root:      merkle root of transaction hashes
    timestamp:        unix timestamp
    difficulty:       current difficulty target
    nonce:            proof of work nonce
}

Validity requires:
    merkle_root == compute_merkle_root(transactions)

    No additional space exists outside header + merkle-committed transactions

Miners cannot include arbitrary data without either:

  1. Creating invalid transactions (rejected by all nodes)
  2. Wasting transaction outputs (economically costly, limited by restricted script)

Strict Standardness Rules

Nodes enforce standardness at the mempool level, not just block validity:

standardness_rules = {
    // Transaction structure
    max_transaction_size:    100 KB
    max_inputs:              1000
    max_outputs:             1000

    // Script restrictions
    max_script_size:         520 bytes
    max_multisig_keys:       20
    max_timelock_height:     500,000,000 (far future)

    // Witness restrictions
    max_signature_size:      65 bytes (64-byte Schnorr + sighash flag)
    max_preimage_size:       32 bytes

    // Economic minimums
    min_output_amount:       1 token (prevents dust)
    min_fee_rate:            1 token per KB
}

Non-standard transactions:

  • Are not relayed between nodes
  • Are not included in mempools
  • May be valid if mined, but cannot reach miners through normal relay

Even a cartel-controlled miner cannot populate blocks with non-standard transactions because they cannot propagate through the network.

Economic Deterrents

Block space is scarce and fee-driven:

data_storage_cost = size × fee_rate × (1 / utility)

For a 1 KB arbitrary data payload at mature network:
    - Competes with ~3 actual payment transactions
    - Pays same fee as those transactions
    - Provides no utility (cannot be spent or referenced)
    - Must be wrapped in valid transaction structure (overhead)

The restricted script grammar means "data storage" outputs are unspendable, permanently consuming UTXO set space. Rational miners prefer fee-paying transactions that will eventually be spent.

Trust Attestation Decay

Cartel formation requires accumulating trust weight, but attestation effectiveness decays:

Cartel accumulation timeline:
    Week 0:    Cartel begins acquiring identities
    Week 12:   First identity completes work-bond
    Week 12+:  Cartel members must continuously bond to maintain weight

    If cartel stops bonding to coordinate attack:
        activity_factor drops
        attestation_weight of cartel members decays
        honest validators' relative weight increases

Cartels cannot accumulate dormant power. Trust weight requires ongoing capital commitment, creating continuous cost for maintaining attack capability.

Identity Cost Barrier

Capturing 33% of network trust requires:

With N honest validators:
    cartel_identities_needed = N × 0.5 (to reach 33% of total)
    cartel_cost = cartel_identities_needed × identity_cost

    identity_cost = ~12 weeks CPU OR equivalent token bond

Example with 1000 honest validators:
    Need ~500 cartel identities
    Cost: 6000 weeks CPU time (~115 years single-threaded)
         OR 500 × token_bond

The identity cost creates a slow, expensive barrier. Cartels cannot rapidly acquire influence; they must either:

  • Spend years computing identity proofs
  • Acquire massive token holdings (visible, market-moving)

Both approaches are detectable long before reaching capture threshold.

Ossification Path

The protocol is designed to reach a stable end-state:

Component             | Ossification Trigger     | Final State
----------------------|--------------------------|----------------------
Block size            | Approaches 8 MB terminal | Fixed at 2^23 bytes
Emission              | Approaches zero          | No new issuance
Script grammar        | Already minimal          | No expansion possible
Transaction format    | Already fixed            | No versioning
Difficulty adjustment | Continues indefinitely   | Self-regulating

Post-ossification operation:

Once the network reaches sufficient channel density and emission becomes negligible:

  1. Base layer: Processes only channel opens, closes, and disputes
  2. Fee market: Validators compensated entirely by transaction fees
  3. State channels: Handle all regular payment volume off-chain
  4. Protocol changes: Effectively impossible (no governance, no upgrade path)

The protocol becomes a minimal, stable settlement layer—resistant to feature creep, regulatory capture, and cartel manipulation.

Soft Fork Resistance

Even "soft" forks (tightening rules) face resistance:

soft_fork_attempt:
    1. Cartel tightens rules in their blocks
    2. Honest nodes accept these blocks (valid under old rules)
    3. But honest miners produce blocks cartel rejects
    4. Network splits along cartel/honest boundary
    5. Cartel must maintain >50% hash power indefinitely
    6. Any lapse allows honest chain to overtake

Without a coordination mechanism (upgrade signaling, governance), even soft forks require permanent majority control—economically unsustainable.

Collusion Detection

The trust attestation system includes collusion detection:

cluster_penalty(C) = 1 - (internal_edges(C) / (internal_edges(C) + external_edges(C)))^2

Highly interconnected groups with few external attestations receive reduced weight. This makes cartel formation visible and penalized:

  • Cartels naturally attest heavily to each other (coordination requirement)
  • External attestations require trust from honest validators (who won't attest to suspicious clusters)
  • Cluster penalty reduces cartel's effective trust weight

Cartels face a dilemma: tight coordination (detectable, penalized) or loose coordination (ineffective).

Triangular State Channels

Jericho implements off-chain payment channels using hash time-locked contracts (HTLCs) with a three-party triangular geometry that improves routing efficiency over traditional two-party channels.

Channel Structure

A triangular channel connects three parties (A, B, C) with a single on-chain funding transaction:

funding_tx = {
    inputs:     [A_contribution, B_contribution, C_contribution]
    output:     3-of-3 multisig(A, B, C)
    capacity:   sum of contributions
    channel_id: hash(funding_tx)
}

State representation:

channel_state = {
    channel_id:  funding transaction hash
    sequence:    monotonically increasing state number
    balances:    {A: amount, B: amount, C: amount}
    htlcs:       [pending HTLC commitments]
    signatures:  {A: sig, B: sig, C: sig}
}

All three parties must sign each state update. Any party can broadcast the latest mutually-signed state to close the channel.

Triangular Geometry Advantages

Traditional two-party channels form a graph where routing requires finding paths through sequential hops. Triangular channels create a hypergraph where each channel connects three nodes:

Two-party topology:          Triangular topology:
    A───B───C───D               A───B
    │   │   │   │                ╲ ╱
    E───F───G───H                 C
                                 ╱ ╲
                                D───E

Pathfinding benefits:

Metric                   | Two-party           | Triangular
-------------------------|---------------------|----------------------
Average hops for N nodes | O(√N)               | O(∛N)
Routing table size       | O(N²)               | O(N^1.5)
Failure recovery options | 1 alternate per hop | 2 alternates per hop
Liquidity utilization    | Sequential          | Parallel rebalancing

Within a triangle, any party can route to either other party in one hop. Payments between triangles require fewer intermediate hops because each triangle provides two potential next-hops rather than one.

HTLC Protocol

Hash time-locked contracts enable atomic multi-hop payments:

htlc = {
    payment_hash:  H(preimage)
    amount:        payment amount
    sender:        originating party within channel
    receiver:      destination party within channel
    expiry:        block height deadline
    direction:     which balance decrements/increments on resolution
}

HTLC lifecycle:

  1. Offer: Sender proposes HTLC, all three parties sign new state with HTLC pending
  2. Forward: Receiver creates corresponding HTLC in adjacent channel (with shorter expiry)
  3. Fulfill: Final recipient reveals preimage, HTLCs resolve backwards
  4. Timeout: If expiry passes without preimage, HTLC cancels and funds return to sender

Triangular HTLC routing:

Payment A→D through triangle (A,B,C) and triangle (C,D,E):

    A offers HTLC to C (via triangle ABC)
    C offers HTLC to D (via triangle CDE)
    D reveals preimage to C
    C reveals preimage to A

    Total hops: 2 (vs 3+ in two-party topology)

State Update Protocol

Channel state advances through signed commitment transactions:

commitment = {
    channel_id:     channel identifier
    sequence:       new state number (must exceed previous)
    new_balances:   {A: amount, B: amount, C: amount}
    htlc_updates:   [add/remove HTLC operations]
    fee_allocation: how on-chain fees are split if broadcast
}

update_message = {
    commitment:  proposed new state
    signature:   proposer's signature
    revocation:  revocation key for proposer's previous state
}

Three-phase update:

  1. Propose: One party broadcasts proposed state with their signature
  2. Countersign: Other two parties each respond with their signature + revocation
  3. Finalize: All parties now hold fully-signed new state

Any state update requires 3 messages minimum (propose + 2 countersigns). Revocation keys ensure that broadcasting an old state results in complete fund loss to the other parties.

Revocation Mechanism

To prevent broadcast of superseded states, each party provides revocation secrets:

revocation_key(party, sequence) = hash(party_secret || sequence)

penalty_clause = {
    condition:  "if state sequence < current AND revocation_key revealed"
    action:     "all channel funds go to non-broadcasting parties"
}

When signing state N+1, each party reveals their revocation key for state N. This makes broadcasting state N provably punishable—the other parties can claim all funds using the revealed revocation key.

Triangular penalty distribution:

If party A broadcasts revoked state, parties B and C split A's balance:

penalty_distribution = {
    B: A_balance × (B_balance / (B_balance + C_balance))
    C: A_balance × (C_balance / (B_balance + C_balance))
}
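
A one-function sketch of the pro-rata split:

def penalty_split(a_balance: float, b_balance: float, c_balance: float) -> tuple[float, float]:
    total = b_balance + c_balance
    return a_balance * b_balance / total, a_balance * c_balance / total

print(penalty_split(30.0, 10.0, 40.0))   # (6.0, 24.0): B and C split A's 30 by balance weight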

Channel Closure

Cooperative close (all parties online and agreeing):

closing_tx = {
    input:   funding_tx output
    outputs: [A_final_balance, B_final_balance, C_final_balance]
    signatures: [A_sig, B_sig, C_sig]
}

Immediate finality, minimal fees.

Unilateral close (one party broadcasts latest state):

force_close_tx = {
    input:   funding_tx output
    outputs: [
        A_balance (timelocked: 288 blocks if A broadcasted, else immediate),
        B_balance (timelocked: 288 blocks if B broadcasted, else immediate),
        C_balance (timelocked: 288 blocks if C broadcasted, else immediate),
        htlc_outputs...
    ]
    signatures: [A_sig, B_sig, C_sig] (from last valid state)
}

The broadcasting party's funds are timelocked to allow fraud proof submission if the state was revoked.

HTLC resolution on-chain:

Pending HTLCs become separate outputs that resolve via:

  • Preimage reveal (receiver claims)
  • Timeout expiry (sender reclaims)

Channel Capacity Gossip Protocol

Efficient routing requires near-real-time knowledge of channel capacities across the network. The gossip protocol propagates capacity updates with minimal latency.

Gossip Message Types

channel_announcement = {
    channel_id:      funding tx hash
    participants:    [pubkey_A, pubkey_B, pubkey_C]
    capacity:        total channel capacity
    funding_proof:   merkle proof of funding tx in block
    signatures:      [A_sig, B_sig, C_sig]
}

capacity_update = {
    channel_id:      channel identifier
    timestamp:       unix timestamp (monotonic per channel)
    available:       {
        A_to_B: amount,  A_to_C: amount,
        B_to_A: amount,  B_to_C: amount,
        C_to_A: amount,  C_to_B: amount
    }
    fee_rates:       {A: rate, B: rate, C: rate}
    signature:       any one participant's signature
}

capacity_digest = {
    channels:        [channel_id, ...]
    checksums:       [hash(latest_update), ...]
    timestamp:       digest generation time
}

Propagation Strategy

Immediate propagation for significant changes:

significant_change = (
    |new_capacity - old_capacity| > threshold OR
    direction_became_unavailable OR
    direction_became_available
)

threshold = max(0.1 × channel_capacity, 1000 tokens)

Batched propagation for minor fluctuations:

  • Accumulate minor updates for 10 seconds
  • Broadcast single aggregate update
  • Reduces message volume by ~90% during high activity

Protocol flow:

1. State change occurs in channel ABC
2. Any participant (say A) generates capacity_update
3. A sends to all connected peers
4. Peers validate signature and timestamp monotonicity
5. Peers update local routing table
6. Peers forward to their peers (with deduplication)
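
A sketch of steps 4-6 in Python; verify_sig and the peer objects are assumed interfaces, and timestamp monotonicity doubles as deduplication:

seen_timestamps = {}   # channel_id -> latest accepted timestamp
routing_table = {}     # channel_id -> {available, fee_rates, ...}

def handle_capacity_update(update, from_peer, peers, verify_sig):
    cid = update["channel_id"]
    # Step 4: reject stale or duplicate updates and bad signatures
    if update["timestamp"] <= seen_timestamps.get(cid, 0):
        return
    if not verify_sig(update):
        return
    # Step 5: update the local routing table
    seen_timestamps[cid] = update["timestamp"]
    routing_table.setdefault(cid, {}).update(
        available=update["available"], fee_rates=update["fee_rates"])
    # Step 6: forward to every peer except the sender
    for peer in peers:
        if peer is not from_peer:
            peer.send(update)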

Anti-Spam Measures

rate_limits = {
    updates_per_channel_per_hour:  60
    updates_per_node_per_minute:   100
    minimum_update_interval:       1 second
}

spam_penalty = {
    condition:  exceeding rate limits
    action:     temporary ignore (exponential backoff)
}

Nodes track update frequency per channel and per source node. Excessive updates trigger temporary filtering.
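
A sketch of the per-channel limit with exponential backoff, using a sliding one-hour window (the per-node limits would be enforced identically):

from collections import defaultdict, deque

UPDATES_PER_CHANNEL_PER_HOUR = 60

history = defaultdict(deque)   # channel_id -> recent update timestamps
ignored_until = {}             # channel_id -> filtering deadline
strikes = defaultdict(int)     # channel_id -> consecutive violations

def admit(channel_id, now):
    if now < ignored_until.get(channel_id, 0.0):
        return False                        # currently filtered
    ts = history[channel_id]
    while ts and now - ts[0] > 3600:
        ts.popleft()                        # slide the one-hour window
    if len(ts) >= UPDATES_PER_CHANNEL_PER_HOUR:
        strikes[channel_id] += 1
        # Exponential backoff: 2, 4, 8, ... minutes of filtering
        ignored_until[channel_id] = now + 60 * (2 ** strikes[channel_id])
        return False
    ts.append(now)
    return True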

Routing Table Structure

Each node maintains:

routing_table = {
    channels: {
        channel_id: {
            participants:  [A, B, C]
            capacity:      total
            available:     directional capacities
            fee_rates:     per-participant fees
            last_update:   timestamp
            reliability:   success rate estimate
        }
    },

    node_index: {
        pubkey: [channel_ids...]  // channels involving this node
    },

    distance_cache: {
        (source, dest): {
            paths:      [precomputed paths]
            expires:    cache expiry time
        }
    }
}

Synchronization

New nodes or nodes recovering from disconnection:

sync_request = {
    known_channels:   [channel_id, ...]
    known_checksums:  [hash(latest_update), ...]
}

sync_response = {
    missing_announcements:  [channel_announcement, ...]
    stale_updates:          [capacity_update, ...]
}

Nodes exchange capacity_digest messages periodically to detect desynchronization. Missing or stale entries trigger targeted sync requests.
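
A sketch of the desynchronization check: compare a peer's capacity_digest against the local table and request anything unknown or mismatched (field names follow the structures above):

def detect_desync(digest, local):
    # Channels whose latest update differs from, or is missing in, ours
    return [cid for cid, chk in zip(digest["channels"], digest["checksums"])
            if local.get(cid, {}).get("checksum") != chk]

def build_sync_request(local):
    return {
        "known_channels":  list(local),
        "known_checksums": [local[cid]["checksum"] for cid in local],
    }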

Latency Targets

Metric                                 | Target
---------------------------------------|---------------
Update propagation (95th percentile)   | < 500 ms
Full network sync (1,000 channels)     | < 30 seconds
Routing table memory (10,000 channels) | < 50 MB
Path computation (10 hops max)         | < 10 ms

Payment Failure Prevention

The gossip protocol's low latency enables proactive failure prevention:

Pre-flight capacity check:

Before initiating payment:
    1. Query local routing table for path capacities
    2. Add safety margin (10%) for in-flight updates
    3. If insufficient capacity, wait for fresh gossip or try alternate path
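
A sketch of the check, assuming the candidate path is a list of directional hop entries drawn from the local routing table (hop fees omitted for brevity):

SAFETY_MARGIN = 1.10   # 10% headroom for in-flight updates

def preflight_ok(path, amount):
    # Every directional hop must cover the amount plus margin
    needed = amount * SAFETY_MARGIN
    return all(hop["available"] >= needed for hop in path)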

Speculative path reservation:

For large payments:
    1. Send capacity_probe along intended path
    2. Each hop tentatively reserves capacity (60 second hold)
    3. If all hops confirm, proceed with HTLC
    4. If any hop fails, release reservations and reroute

Failure feedback:

payment_failure = {
    failed_channel:  channel_id where failure occurred
    failure_reason:  insufficient_capacity | node_offline | htlc_timeout
    suggested_wait:  seconds until retry recommended
}

Failed payments immediately update the local routing table and optionally broadcast capacity corrections if the failure reveals stale gossip data.

DAG-Structured Onion Routing

Payments use onion-encrypted routing with a directed acyclic graph (DAG) structure rather than a single linear path. The onion format supports:

  • Multiple parallel paths for redundancy (first-wins settlement)
  • Multiple destinations for split payments (multi-party settlement)
  • Embedded branch and join instructions within uniform-size onion packets

Uniform Onion Packet Size

All onion packets use a fixed 8 KB (8192 bytes) size regardless of path complexity:

ONION_PACKET_SIZE = 8192 bytes

onion_packet = {
    version:          1 byte
    ephemeral_pubkey: 33 bytes
    encrypted_payload: 8126 bytes
    hmac:             32 bytes
}

The large uniform size accommodates:

  • Up to 32 hops in a linear path
  • Embedded branch instructions (split to multiple next-hops)
  • Embedded join instructions (reconvergence points)
  • Multiple terminal destinations with separate amounts
  • Random padding indistinguishable from real routing data

Size rationale:

Routing Complexity                          | Approximate Payload Required
--------------------------------------------|------------------------------
Simple 10-hop linear                        | ~2 KB
3-path DAG, 10 hops each                    | ~4 KB
Multi-destination (5 recipients)            | ~3 KB
Complex DAG with 3 destinations             | ~6 KB
Maximum (32 hops, 8 destinations, full DAG) | ~8 KB

The 8 KB size provides headroom for complex payment structures while remaining efficient for simple payments (excess is random padding).
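
The fixed field sizes account for every byte of the packet:

VERSION, EPHEMERAL_PUBKEY, HMAC = 1, 33, 32
ENCRYPTED_PAYLOAD = 8192 - VERSION - EPHEMERAL_PUBKEY - HMAC
assert ENCRYPTED_PAYLOAD == 8126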

Onion Layer Structure

Each layer contains routing instructions that may specify linear forwarding, branching, or termination:

onion_layer = {
    layer_type:     enum { FORWARD, BRANCH, JOIN, TERMINAL }
    payload:        type-specific data
    inner_onion:    encrypted remainder (variable size within packet)
    padding:        random bytes to maintain fixed packet size
}

// FORWARD: Simple single next-hop
forward_payload = {
    next_hop:       pubkey of next node
    forward_amount: amount to pass
    expiry:         block height deadline
    path_id:        DAG path identifier
}

// BRANCH: Split to multiple next-hops (DAG divergence)
branch_payload = {
    branches: [
        {
            next_hop:       pubkey of branch destination
            branch_amount:  amount for this branch
            expiry:         block height deadline
            branch_onion:   encrypted onion for this branch (embedded)
            branch_id:      unique identifier for this branch
        },
        ...
    ]
    branch_mode:    enum { ALL_REQUIRED, FIRST_WINS, THRESHOLD(n) }
}

// JOIN: Reconvergence point expecting multiple incoming branches
join_payload = {
    expected_branches: [branch_id, ...]
    forward_amount:    total to forward after join
    next_hop:          pubkey of next node after join
    timeout:           blocks to wait for all branches
}

// TERMINAL: Final destination
terminal_payload = {
    destination:    pubkey of recipient
    amount:         payment amount for this destination
    payment_hash:   H(preimage) for this destination's HTLC
    payment_data:   encrypted memo/invoice data (optional)
}

Multi-Destination Payments

A single payment can terminate at multiple destinations on different DAG branches:

Multi-destination payment structure:

    Sender A pays:
        - 50 tokens to Destination G
        - 30 tokens to Destination H
        - 20 tokens to Destination I

                    ┌───B───────────→ G (50 tokens)
                    │
    A ──── BRANCH ──┼───C───D───────→ H (30 tokens)
                    │
                    └───E───F───────→ I (20 tokens)

Multi-destination onion construction:

multi_dest_onion = {
    payment_id:     unique payment identifier
    total_amount:   sum of all destination amounts
    destinations: [
        {dest: G, amount: 50, payment_hash: H(preimage_G)},
        {dest: H, amount: 30, payment_hash: H(preimage_H)},
        {dest: I, amount: 20, payment_hash: H(preimage_I)},
    ]
    settlement_mode: "all_required" | "threshold(n)" | "any"
}

Each destination receives a separate HTLC with its own payment_hash. Settlement modes:

Mode         | Behavior
-------------|-------------------------------------------------------------------
all_required | All destinations must receive and settle; any failure cancels all
threshold(n) | At least n destinations must settle; others may fail
any          | Each destination settles independently
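
A sketch of evaluating the overall outcome from per-destination results (under "any" each destination stands alone, so any single success counts):

def settlement_ok(mode, results):
    # results: destination -> settled (bool)
    settled = sum(results.values())
    if mode == "all_required":
        return settled == len(results)
    if mode.startswith("threshold("):
        n = int(mode[len("threshold("):-1])   # e.g. "threshold(3)"
        return settled >= n
    return settled > 0                        # "any"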

Branch Embedding

Branch instructions are embedded within the onion layer, with each branch containing its own encrypted sub-onion:

function construct_branch_layer(branches, shared_secret):
    branch_payloads = []

    for branch in branches:
        // Construct sub-onion for this branch
        sub_onion = construct_onion(branch.path, branch.ephemeral)

        // Embed sub-onion in branch payload
        branch_payloads.append({
            next_hop: branch.first_hop,
            branch_amount: branch.amount,
            expiry: branch.expiry,
            branch_onion: sub_onion,  // Embedded at variable size, padded to 8 KB when forwarded
            branch_id: branch.id
        })

    // Encrypt branch layer
    layer = {
        layer_type: BRANCH,
        payload: {
            branches: branch_payloads,
            branch_mode: payment.branch_mode
        }
    }

    return encrypt(shared_secret, layer)

Branch processing by intermediate node:

function process_branch_layer(onion_packet, node_private_key):
    // Decrypt layer using the packet's ephemeral public key
    shared_secret = ECDH(node_private_key, onion_packet.ephemeral_pubkey)
    layer = decrypt(shared_secret, onion_packet.encrypted_payload)

    if layer.layer_type != BRANCH:
        return process_other_layer_type(layer)

    // Process each branch
    for branch in layer.payload.branches:
        // Extract embedded sub-onion
        sub_onion = branch.branch_onion

        // Create HTLC for this branch
        create_htlc(
            next_hop: branch.next_hop,
            amount: branch.branch_amount,
            expiry: branch.expiry,
            onion: sub_onion
        )

    // Track branches for potential join
    register_pending_branches(layer.payload.branches)

Join Points (Reconvergence)

Join points aggregate multiple incoming branches before forwarding:

Join point processing:

                B ──┐
                    │
    A ── BRANCH ────┼──→ F (JOIN) ──→ G (destination)
                    │
                D ──┘

Node F receives:
    - HTLC from B with branch_id=0, amount=60
    - HTLC from D with branch_id=1, amount=40

F's join layer specifies:
    expected_branches: [0, 1]
    forward_amount: 100
    next_hop: G

Join processing:

function process_join(join_layer, incoming_htlcs):
    expected = set(join_layer.expected_branches)
    received = set(htlc.branch_id for htlc in incoming_htlcs)

    if received != expected:
        if timeout_reached:
            // Cancel all received HTLCs
            for htlc in incoming_htlcs:
                cancel_htlc(htlc, reason="incomplete_join")
            return FAIL

        // Wait for remaining branches
        return PENDING

    // All branches received, forward aggregated payment
    total = sum(htlc.amount for htlc in incoming_htlcs)

    if total < join_layer.forward_amount:
        // Insufficient funds across branches
        cancel_all(incoming_htlcs, reason="insufficient_join_amount")
        return FAIL

    // Create forwarding HTLC
    create_htlc(
        next_hop: join_layer.next_hop,
        amount: join_layer.forward_amount,
        expiry: min(htlc.expiry for htlc in incoming_htlcs) - safety_margin,
        onion: join_layer.inner_onion
    )

    return SUCCESS

Multi-Path Redundancy with Multi-Destination

Combining redundant paths with multiple destinations:

Payment: A pays 100 to G and 50 to H

    Path redundancy for G:
        ┌───B───┐
        │       │
    A───┼───C───┼───F───→ G (100 tokens, first-wins)
        │       │
        │   JOIN
        │
    A───┴───D───E───────→ H (50 tokens)

Construction:
    - Paths to G (via B→F and C→F) use same payment_hash, first-wins
    - Path to H uses different payment_hash
    - All embedded in single 8KB onion from A

Onion Padding Strategy

Random padding is cryptographically indistinguishable from encrypted routing data:

function pad_onion(payload, target_size):
    current_size = len(payload)
    padding_needed = target_size - current_size

    // Generate deterministic-looking random padding
    // (seeded from shared secret so it appears encrypted)
    padding = generate_pseudo_random_bytes(
        seed: hash(shared_secret || "padding"),
        length: padding_needed
    )

    return payload || padding


function generate_pseudo_random_bytes(seed, length):
    // Use ChaCha20 in counter mode, keyed by the seed
    output = empty byte string
    for i in 0..ceil(length / 64):        // each block yields 64 bytes
        block = ChaCha20(key=seed[0:32], nonce=i, input=zeros(64))
        output = output || block
    return output[0:length]               // truncate to exact length
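
A runnable version, assuming the pycryptodome package for ChaCha20 (a zero nonce is safe here because each seed is unique per hop and used once):

import hashlib
from Crypto.Cipher import ChaCha20   # pycryptodome

def generate_padding(shared_secret, length):
    # Keystream seeded from the shared secret, so padding is
    # indistinguishable from encrypted routing data
    key = hashlib.sha3_256(shared_secret + b"padding").digest()
    return ChaCha20.new(key=key, nonce=b"\x00" * 8).encrypt(b"\x00" * length)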

Indistinguishability properties:

Property             | Guarantee
---------------------|------------------------------------------------------------
Padding vs payload   | Encrypted padding indistinguishable from encrypted routing
Simple vs complex    | 10-hop linear looks identical to 32-hop DAG
Single vs multi-dest | Cannot determine number of destinations
Branch points        | Cannot identify which nodes perform branching

Path-Specific Ephemeral Keys

Each branch in the DAG uses independent ephemeral key derivation:

// Root ephemeral for payment
root_ephemeral = random_scalar()

// Derive branch-specific ephemeral keys
branch_ephemeral[branch_id] = root_ephemeral × hash(
    "branch" || payment_id || branch_id || root_ephemeral × G
)

// Each branch's path uses its ephemeral independently
for hop in branch.path:
    ephemeral_pubkey[hop] = derive_hop_ephemeral(branch_ephemeral[branch_id], hop)

This ensures:

  • Branches cannot be correlated by ephemeral key analysis
  • Compromise of one branch reveals nothing about others
  • Join points cannot link incoming branches except by branch_id
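
A sketch of the scalar derivation, assuming a secp256k1-style prime-order group and treating the hash output as a scalar mod the group order (the root public key stands in for root_ephemeral × G):

import hashlib

# secp256k1 group order (any prime-order group works)
GROUP_ORDER = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def branch_ephemeral(root_scalar, root_pubkey, payment_id, branch_id):
    # hash("branch" || payment_id || branch_id || root_pubkey) as a scalar
    h = hashlib.sha3_256(b"branch" + payment_id +
                         branch_id.to_bytes(4, "big") + root_pubkey).digest()
    tweak = int.from_bytes(h, "big") % GROUP_ORDER
    return (root_scalar * tweak) % GROUP_ORDER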

Settlement Modes

First-wins (redundant paths to single destination):

first_wins_settlement = {
    paths: [path_0, path_1, path_2]
    destination: G
    amount: 100
    payment_hash: H(preimage)

    Settlement:
        - First path to deliver HTLC wins
        - Destination reveals preimage to winning path only
        - Other paths timeout and cancel
}

All-required (multi-destination atomic):

all_required_settlement = {
    destinations: [
        {dest: G, amount: 50, hash: H(preimage_G)},
        {dest: H, amount: 30, hash: H(preimage_H)},
    ]

    Settlement:
        - All destinations must receive their HTLCs
        - Preimages revealed only if all destinations ready
        - Any failure cancels entire payment
        - Uses hash chain: preimage_G = hash(preimage_H || secret)
}

Threshold (partial success acceptable):

threshold_settlement = {
    destinations: [G, H, I, J]
    threshold: 3
    amounts: [25, 25, 25, 25]

    Settlement:
        - At least 3 of 4 destinations must succeed
        - Successful destinations settle independently
        - Failed destinations timeout
        - Sender accepts partial delivery
}

Cancellation Propagation

cancel_signal = {
    payment_id:    payment being cancelled
    branch_id:     specific branch to cancel (or ALL)
    reason:        enum { PATH_SETTLED, TIMEOUT, INSUFFICIENT_CAPACITY, JOIN_FAILED }
    signature:     proof of authority to cancel
}

Propagation:
    1. Cancel signal sent backwards along branch
    2. Each hop releases locked capacity
    3. At branch points, cancel propagates to all sub-branches
    4. At join points, cancel waits for all incoming branches

Cancellation is an optimization—HTLCs will timeout regardless, but prompt cancellation improves capital efficiency.
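
A sketch of the backward walk; the node objects and their methods (release_capacity, sub_branches, upstream_hop) are hypothetical interfaces:

def propagate_cancel(node, signal):
    # Step 2: release capacity locked for this payment at this node
    node.release_capacity(signal["payment_id"], signal["branch_id"])
    # Step 3: at a branch point, fan the cancel out to every sub-branch
    for branch in node.sub_branches(signal["payment_id"]):
        propagate_cancel(branch.next_node, {**signal, "branch_id": branch.id})
    # Step 1: continue backwards toward the sender
    prev = node.upstream_hop(signal["payment_id"])
    if prev is not None:
        propagate_cancel(prev, signal)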

Path Selection Algorithm

Sender constructs DAG paths to maximize success probability:

select_dag_paths(sender, destination, amount, max_paths=3):
    candidates = []

    # Find diverse paths using modified Dijkstra
    for i in 0..max_paths:
        path = shortest_path(
            sender, destination,
            edge_weight = fee + (1/reliability) + (1/capacity),
            excluded_edges = edges_in(candidates)  # force diversity
        )
        if path exists:
            candidates.append(path)

    # Optimize candidates into a DAG, scoring paths by independence:
    #   - Merge common suffixes (reconvergence points)
    #   - Remove redundant intermediate hops
    #   - Balance: more paths = higher success, but more locked liquidity
    dag = optimize_dag(candidates)

    return dag

Independence scoring:

independence(path_a, path_b) = 1 - (shared_hops / total_hops)

dag_score = Σ(independence(path_i, path_j)) for all pairs
          + path_count × success_bonus
          - path_count × liquidity_penalty
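
A runnable sketch of the diversity loop: repeated Dijkstra over an edge weight assumed to already combine fee, 1/reliability and 1/capacity, excluding previously used edges so successive candidates are edge-disjoint:

import heapq

def shortest_path(graph, src, dst, excluded):
    # graph: node -> list of (neighbor, weight)
    dist, queue = {src: 0.0}, [(0.0, src, [src])]
    while queue:
        d, node, path = heapq.heappop(queue)
        if node == dst:
            return path
        if d > dist.get(node, float("inf")):
            continue                         # stale queue entry
        for nbr, w in graph.get(node, []):
            if (node, nbr) in excluded:
                continue                     # force diversity
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(queue, (nd, nbr, path + [nbr]))
    return None

def select_paths(graph, src, dst, max_paths=3):
    paths, excluded = [], set()
    for _ in range(max_paths):
        path = shortest_path(graph, src, dst, excluded)
        if path is None:
            break
        paths.append(path)
        excluded |= {(a, b) for a, b in zip(path, path[1:])}
    return paths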

Locked Liquidity Management

Multi-path payments temporarily lock more liquidity than single-path:

total_locked = Σ(amount × path_i.hop_count) for all paths

vs single path:
single_locked = amount × best_path.hop_count

Mitigation strategies:

  1. Short timeouts on redundant paths: Non-primary paths use 50% shorter HTLC expiry
  2. Fast cancellation: Cancel signals release capacity immediately upon settlement
  3. Adaptive path count: More paths for larger payments, fewer for small

recommended_paths(amount) = {
    amount < 1 token:        1 path (single)
    amount < 100 tokens:     2 paths
    amount < 10000 tokens:   3 paths
    amount >= 10000 tokens:  3 paths + proportional split
}

Proportional Split Mode

For very large payments, instead of first-wins settlement, the payment splits across paths:

split_payment = {
    payment_hash:    H(preimage)
    total_amount:    10000 tokens
    paths: [
        {path_id: 0, amount: 4000, encrypted_route: onion_0},
        {path_id: 1, amount: 3500, encrypted_route: onion_1},
        {path_id: 2, amount: 2500, encrypted_route: onion_2},
    ]
    settlement_mode: "proportional"
    preimage_shares: [share_0, share_1, share_2]  // Shamir secret sharing
}

Atomic multi-path settlement:

Using verifiable secret sharing, the preimage is split such that the destination can only reconstruct it after receiving all path amounts:

preimage = reconstruct(share_0, share_1, share_2)

Each path's HTLC uses:
    payment_hash = H(preimage)
    partial_preimage = share_i

Settlement requires destination to:
    1. Receive all shares via successful path HTLCs
    2. Reconstruct full preimage
    3. Reveal full preimage to settle all paths atomically

If any path fails, the destination cannot reconstruct the preimage, and all paths timeout—ensuring atomicity.
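
The spec calls for verifiable secret sharing; for the all-paths (n-of-n) case described here, plain XOR splitting already illustrates the atomicity property, since any missing share leaves the preimage unrecoverable:

import secrets

def xor_all(chunks):
    out = bytes(len(chunks[0]))
    for c in chunks:
        out = bytes(a ^ b for a, b in zip(out, c))
    return out

def split_preimage(preimage, n):
    shares = [secrets.token_bytes(len(preimage)) for _ in range(n - 1)]
    shares.append(xor_all(shares + [preimage]))   # final share completes the XOR
    return shares

preimage = secrets.token_bytes(32)
shares = split_preimage(preimage, 3)
assert xor_all(shares) == preimage   # all three shares reconstruct it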

Privacy Properties

Property                 | Guarantee
-------------------------|------------------------------------------------------------------
Sender anonymity         | Intermediate hops cannot identify sender
Receiver anonymity       | Intermediate hops cannot identify receiver (even with multi-dest)
Path length              | Hidden by 8 KB uniform onion size
Path complexity          | Simple linear indistinguishable from complex DAG
Multi-path correlation   | Branches use independent ephemeral keys
Multi-destination hiding | Cannot determine number of destinations
Branch point hiding      | Cannot identify which nodes perform branching
Amount per destination   | Only final destination sees its amount
Payment linkage          | Different payment_hash per attempt prevents linking retries