Block Structure: Building the Chain

Over the past two weeks, ZooBC has finally started doing the thing a blockchain is supposed to do.

Blocks are being produced.
They are validated.
They are added to the chain.

After weeks spent on foundations, abstractions, and invisible guarantees, this is the moment where everything begins to converge. Transactions, cryptography, database state, and networking all meet here. If any one of those layers lies, blocks will expose it immediately.

Watching the first valid blocks being assembled in my lab feels strangely calm. No fireworks. No celebration. Just a quiet confirmation that the pieces are starting to fit together.

What a Block Really Contains

A ZooBC block is more than a container for transactions. It is a snapshot of protocol state at a specific point in time, signed and anchored to everything that came before it.

In C++, the structure looks like this:

#include <cstdint>
#include <string>
#include <vector>

struct Transaction;        // defined elsewhere in the codebase
struct PublishedReceipt;   // defined elsewhere in the codebase

struct Block {
    uint32_t height;                              // position in the chain
    int64_t id;                                   // unique block identifier
    int64_t timestamp;
    std::vector<uint8_t> block_hash;
    std::vector<uint8_t> previous_block_hash;     // anchor to the parent block
    std::vector<uint8_t> block_seed;              // verified during consensus validation
    std::vector<uint8_t> block_signature;         // signed by the blocksmith
    std::vector<uint8_t> blocksmith_public_key;
    std::string cumulative_difficulty;
    int64_t total_amount;                         // sum of transaction amounts
    int64_t total_fee;                            // sum of transaction fees
    int64_t total_coinbase;
    uint32_t version;
    std::vector<uint8_t> merkle_root;             // rebuilt from the transaction list during validation
    std::vector<uint8_t> merkle_tree;             // full tree, stored alongside the root
    std::vector<Transaction> transactions;
    std::vector<PublishedReceipt> published_receipts;
};

Every field exists for a reason. Some support consensus. Others support verification, accounting, or fast synchronization. Together, they form the minimum set of data required to independently verify the chain from genesis to the current height.

Nothing here is decorative.

Validation as a Process, Not a Check

Block validation is deliberately multi-stage. This is not about being defensive; it is about being precise. Each stage answers a different question, and failures are explicit.

Structural validation comes first. Are all required fields present? Are sizes and formats correct? Is anything obviously malformed? This stage is fast and rejects garbage early.
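
As a rough sketch, the structural stage reduces to checks like these. The 32-byte digest size and the version floor are assumptions of mine, not protocol constants:

bool validate_structure(const Block& block) {
    constexpr std::size_t kHashSize = 32;   // assumed digest length, not a protocol constant
    if (block.block_hash.size() != kHashSize) return false;
    if (block.previous_block_hash.size() != kHashSize) return false;
    if (block.blocksmith_public_key.empty()) return false;
    if (block.block_signature.empty()) return false;
    if (block.version == 0) return false;   // assuming versions start at 1
    return true;   // cheap checks only; nothing cryptographic happens at this stage
}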

Header validation follows. Does previous_block_hash actually match the last known block? Is the height sequential? Is the timestamp within acceptable bounds? This ensures the block makes sense in the context of the chain.
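
In code, that amounts to something like the following sketch; validate_header and the drift constant are placeholder names and values of mine, not the protocol's:

bool validate_header(const Block& candidate, const Block& last_block, int64_t now) {
    // The block must point at the chain tip we actually hold.
    if (candidate.previous_block_hash != last_block.block_hash) return false;

    // Heights advance by exactly one.
    if (candidate.height != last_block.height + 1) return false;

    // Timestamps must move forward and stay within an acceptable window.
    // The exact bounds are protocol-defined; these are stand-ins.
    constexpr int64_t kMaxFutureDrift = 15;   // seconds, illustrative only
    if (candidate.timestamp <= last_block.timestamp) return false;
    if (candidate.timestamp > now + kMaxFutureDrift) return false;

    return true;
}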

Then transaction validation kicks in. Every transaction inside the block is revalidated. Signatures are verified again. Balances are checked again. Nothing is trusted just because it passed through the mempool.

Merkle validation comes next. The merkle tree is rebuilt from the transaction list, and the resulting root must match merkle_root. If even one byte differs, the block is invalid.
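
A sketch of that rebuild, assuming a sha3_256 helper and a Transaction::hash() accessor (both stand-ins for the real digest code), and duplicating the last leaf on odd levels, which is one common convention rather than necessarily the protocol's:

std::vector<uint8_t> sha3_256(const std::vector<uint8_t>& data);   // provided elsewhere

bool validate_merkle(const Block& block) {
    // Leaf hashes, one per transaction, in block order.
    std::vector<std::vector<uint8_t>> level;
    for (const auto& tx : block.transactions) level.push_back(tx.hash());
    if (level.empty()) return block.merkle_root.empty();   // empty-block handling is an assumption

    // Combine pairs level by level until a single root remains.
    while (level.size() > 1) {
        std::vector<std::vector<uint8_t>> next;
        for (std::size_t i = 0; i < level.size(); i += 2) {
            std::vector<uint8_t> combined = level[i];
            const auto& right = (i + 1 < level.size()) ? level[i + 1] : level[i];
            combined.insert(combined.end(), right.begin(), right.end());
            next.push_back(sha3_256(combined));
        }
        level = std::move(next);
    }

    // A single differing byte in any transaction changes the root.
    return level.front() == block.merkle_root;
}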

Finally, consensus validation answers the hardest question: was this block produced by an authorized blocksmith? Proof of Participation rules are applied, participation scores are checked, and the block seed is verified.

Only if all stages pass does the block earn its place in the chain.
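
Put together, the pipeline reads roughly like this. The result enum and the revalidate_transaction / validate_consensus placeholders are mine, standing in for the transaction and Proof of Participation code:

enum class BlockValidation {
    Ok, BadStructure, BadHeader, BadTransaction, BadMerkleRoot, BadConsensus
};

BlockValidation validate_block(const Block& candidate, const Block& last_block, int64_t now) {
    if (!validate_structure(candidate))                return BlockValidation::BadStructure;
    if (!validate_header(candidate, last_block, now))  return BlockValidation::BadHeader;

    // Nothing is trusted from the mempool: signatures and balances are checked again.
    for (const auto& tx : candidate.transactions)
        if (!revalidate_transaction(tx))               return BlockValidation::BadTransaction;

    if (!validate_merkle(candidate))                   return BlockValidation::BadMerkleRoot;

    // Proof of Participation: authorized blocksmith, participation score, block seed.
    if (!validate_consensus(candidate, last_block))    return BlockValidation::BadConsensus;

    return BlockValidation::Ok;   // only now does the block earn its place in the chain
}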

Byte-for-Byte Compatibility

One of the least visible but most critical aspects of block handling is serialization.

Serialization must match the Go implementation byte-for-byte. Not approximately. Not logically. Exactly.

Fields are encoded in little-endian byte order. The field order is fixed and unforgiving:

  • version
  • timestamp
  • total_amount
  • total_fee
  • total_coinbase
  • payload_length
  • payload_hash
  • blocksmith_public_key
  • block_seed
  • previous_block_hash
  • block_signature

Any deviation here breaks compatibility. Hashes change. Signatures fail. Consensus collapses.

I validate this against test vectors generated by the Go implementation, comparing serialized output byte by byte until they match perfectly. This is tedious work, but necessary. Protocols do not forgive interpretation.
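
Here is a sketch of that header layout, assuming the Block struct above. append_le and serialize_header are my names, and payload_length / payload_hash are passed in because they are computed from the block body rather than stored on the struct shown earlier; the authoritative layout is whatever the Go implementation emits:

#include <type_traits>

template <typename Int>
void append_le(std::vector<uint8_t>& out, Int value) {
    // Little-endian: least significant byte first.
    auto v = static_cast<std::make_unsigned_t<Int>>(value);
    for (std::size_t i = 0; i < sizeof(Int); ++i)
        out.push_back(static_cast<uint8_t>(v >> (8 * i)));
}

std::vector<uint8_t> serialize_header(const Block& b,
                                      uint32_t payload_length,
                                      const std::vector<uint8_t>& payload_hash) {
    std::vector<uint8_t> out;
    append_le(out, b.version);
    append_le(out, b.timestamp);
    append_le(out, b.total_amount);
    append_le(out, b.total_fee);
    append_le(out, b.total_coinbase);
    append_le(out, payload_length);
    out.insert(out.end(), payload_hash.begin(), payload_hash.end());
    out.insert(out.end(), b.blocksmith_public_key.begin(), b.blocksmith_public_key.end());
    out.insert(out.end(), b.block_seed.begin(), b.block_seed.end());
    out.insert(out.end(), b.previous_block_hash.begin(), b.previous_block_hash.end());
    out.insert(out.end(), b.block_signature.begin(), b.block_signature.end());
    return out;   // must match the Go output byte for byte
}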

Keeping the Chain Fast

On the storage side, the blockchain class maintains a cache of the latest block. This avoids unnecessary database reads for the most common operations and keeps block production and validation fast.
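
As a sketch of that idea (the class and member names here are mine, not the actual ones):

class Blockchain {
public:
    const Block& last_block() const { return last_block_; }   // served from memory, no database read

    void push_block(const Block& block) {
        // After the block has been persisted (omitted here), the cache is refreshed
        // so the next production or validation step sees the new tip immediately.
        last_block_ = block;
    }

private:
    Block last_block_;   // cache of the chain tip, the hottest read path
};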

When deeper chain queries are needed, they go through prepared statements. No dynamic SQL. No ad-hoc queries. Performance stays predictable, and behavior stays consistent.
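
A minimal sketch of the prepared-statement pattern, using SQLite's C API as a stand-in for whatever storage layer is actually in place; the table and column names are illustrative, not the real schema:

#include <sqlite3.h>
#include <cstdint>

class BlockQuery {
public:
    explicit BlockQuery(sqlite3* db) {
        // Compiled once and reused; no dynamic SQL is ever built at query time.
        // (Error handling omitted for brevity.)
        sqlite3_prepare_v2(db,
            "SELECT id, timestamp FROM blocks WHERE height = ?;",
            -1, &by_height_, nullptr);
    }
    ~BlockQuery() { sqlite3_finalize(by_height_); }

    bool find_by_height(uint32_t height, int64_t& id, int64_t& timestamp) {
        sqlite3_reset(by_height_);
        sqlite3_bind_int64(by_height_, 1, height);
        if (sqlite3_step(by_height_) != SQLITE_ROW) return false;
        id        = sqlite3_column_int64(by_height_, 0);
        timestamp = sqlite3_column_int64(by_height_, 1);
        return true;
    }

private:
    sqlite3_stmt* by_height_ = nullptr;
};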

Again, boring choices.

Deliberate ones.

Watching the Chain Grow

At this point, the chain is short. A handful of blocks. Controlled inputs. No adversarial conditions yet.

But blocks are real now. They link. They validate. They reject invalid data exactly when they should.

Late at night, sitting in my man cave, I sometimes scroll back through the logs and watch the chain grow one block at a time. Each block is quiet proof that the system is holding together under its own rules.

ZooBC is no longer just moving data.

It is building history.