P2P Networking: Connecting the Nodes
Over these past two weeks, ZooBC has finally started reaching outside of itself.
Until now, everything has been internal. Cryptography, transactions, state, all moving cleanly inside a single process. Important work, but isolated. Now, that changes. The network layer comes alive, and nodes begin to discover each other, connect, and exchange data.
The first time I see two nodes handshake successfully from my lab in Bali, running on different machines, it feels like a quiet milestone. Nothing flashy happens. Just logs scrolling by. But that moment matters. A blockchain without networking is just a database with opinions.
Choosing Compatibility Over Elegance
The P2P layer uses gRPC with Protocol Buffers, exactly matching the message format used by the original Go implementation. This is a deliberate constraint.
Interoperability matters. During this transition period, C++ nodes must be able to communicate seamlessly with any remaining Go nodes. No forks based on implementation. No protocol drift. Same messages, same fields, same behavior.
This is one of those moments where purity gives way to pragmatism. The goal is not to design a better protocol, but to faithfully speak the existing one.
Learning From the Old Network
While implementing block propagation, I spend a lot of time re-reading the Go networking code. There is a subtle but important optimization there that is easy to miss if you only skim it.
Blocks are not broadcast with full transaction bodies.
Instead, only transaction IDs are sent. The receiving node checks its mempool and determines which transactions it already has. Only the missing ones are requested explicitly.
This approach dramatically reduces bandwidth usage, especially for well-connected nodes that already share most transactions. It also shifts complexity away from the broadcast path, which is exactly where you want things to be fast and simple.
That behavior is preserved exactly:
// GO COMPATIBILITY: Set TransactionIDs instead of full Transactions
for (const auto& tx : block.transactions) {
    block_msg->add_transactionids(tx.id);
}
// NOTE: We deliberately do NOT add full transactions
No shortcuts here. If a C++ node behaves differently, it becomes a second-class citizen on the network.
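The receiving side of this optimization is not shown above, but the idea can be sketched roughly as follows. Everything here is illustrative, assuming plain `uint64_t` transaction IDs and a hypothetical helper name (`MissingTransactionIds`), not the actual ZooBC API:

```cpp
#include <cstdint>
#include <unordered_set>
#include <vector>

// Hypothetical receiver-side helper: given the transaction IDs announced in a
// block message and the IDs already sitting in the local mempool, return only
// the IDs that must be requested explicitly from the sending peer.
std::vector<uint64_t> MissingTransactionIds(
    const std::vector<uint64_t>& announced_ids,
    const std::unordered_set<uint64_t>& mempool_ids) {
    std::vector<uint64_t> missing;
    for (uint64_t id : announced_ids) {
        if (mempool_ids.find(id) == mempool_ids.end()) {
            missing.push_back(id);  // not in mempool: fetch from the peer
        }
    }
    return missing;
}
```

For a well-connected node, most announced IDs are already in the mempool, so the follow-up request is small or empty, which is exactly where the bandwidth savings come from.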
Parallelism Without Blocking
Broadcasting blocks is inherently parallel. One slow peer should never slow down everyone else.
Just like the Go version uses goroutines, the C++ implementation uses asynchronous tasks. Each peer gets its own execution path. If a peer is slow, unresponsive, or temporarily unreachable, it does not block the rest of the network.
The logic is straightforward:
std::vector<std::future<void>> futures;
for (const auto& peer : peers) {
    futures.push_back(std::async(std::launch::async, [...]() {
        auto stub = peer_manager->GetPeerStub(peer.public_key);
        stub->SendBlock(...);
    }));
}
This is not about squeezing every last bit of performance. It is about preserving liveness. The network must keep moving even when parts of it misbehave.
Managing Peers Like Adults
Peer management is another area where past experience heavily influences current decisions.
Connections are monitored continuously. Peers that consistently fail or time out are not immediately banned, but they are deprioritized. Healthy peers naturally rise to the top. Unreliable ones fade into the background.
New peers are discovered through GetMorePeers, but discovery alone is not enough. Each peer is validated before joining the active set. Identity, responsiveness, and protocol correctness all matter.
This avoids a common trap: treating every discovered peer as equally trustworthy. The network earns trust through behavior, not presence.
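The deprioritization idea can be sketched with a small health record per peer. The names here (`PeerHealth`, `RecordFailure`, `Prioritize`) are illustrative assumptions, not the actual ZooBC peer manager:

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Hypothetical peer-health bookkeeping: failures deprioritize a peer rather
// than ban it outright, and a single success resets its standing.
struct PeerHealth {
    std::string address;
    int consecutive_failures = 0;
};

void RecordFailure(PeerHealth& peer) { ++peer.consecutive_failures; }
void RecordSuccess(PeerHealth& peer) { peer.consecutive_failures = 0; }

// Stable sort: healthy peers rise to the front, unreliable ones fade toward
// the back, and peers with equal records keep their existing order.
void Prioritize(std::vector<PeerHealth>& peers) {
    std::stable_sort(peers.begin(), peers.end(),
                     [](const PeerHealth& a, const PeerHealth& b) {
                         return a.consecutive_failures < b.consecutive_failures;
                     });
}
```

Because nothing is permanently banned, a flaky peer that recovers earns its way back up simply by responding again.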
Watching the Network Breathe
At this point, the network is still small. A handful of nodes, controlled environments, predictable behavior.
But even so, something changes when networking is in place. Transactions no longer feel local. Blocks propagate. Peers react. Delays appear. Failures happen. Recovery happens too.
Late at night, sitting in my man cave, I watch logs from multiple nodes side by side and realize something important: ZooBC is no longer a single program.
It is becoming a system.
And systems behave very differently once they start talking to themselves.