
How Trust Works in Shared Computation

Enjoy this recap of our recent Multiplayer Privacy X Space on cryptography and shared computation with Enrico Bottazzi, Auryn Macmillan & Giacomo. Catch the full recording here on X.

The work coming out of the Interfold sits at the intersection of some of the most ambitious ideas in modern cryptography: multi-party computation (MPC), threshold encryption, verifiable computation, and emerging primitives like indistinguishability obfuscation (iO). Privacy is no longer just about hiding data, but about coordinating computation across many parties without collapsing into centralized trust.

This shift is subtle but foundational. Traditional cryptographic systems assume a clear boundary: one party encrypts, another computes, and results return to a single owner. The Interfold instead explores what happens when computation itself becomes shared infrastructure, where no single party should control inputs, execution, or decryption. In that setting, cryptography stops being a protective wrapper and becomes the mechanism that defines how cooperation is even possible.

That is why the conversation with cryptographer Enrico Bottazzi was so important to us. Enrico’s work, particularly around primitives like GRECO and correctness guarantees for encrypted inputs, directly addresses one of the hardest problems in this space: how to ensure that shared computation remains both secure and meaningful when participants may act adversarially or inputs may be malformed. His perspective helped clarify that the core challenge is not eliminating trust, but redistributing it across protocols, committees, and proofs.


As part of our Multiplayer Privacy X Space series, Auryn Macmillan and Giacomo from the Interfold were joined by cryptographer Enrico Bottazzi to discuss what changes when privacy becomes a property of shared computation rather than an individual safeguard. The discussion kept returning to one question: when many parties compute together, where does trust go?

“You need to accept the trade-off between liveness and security.”

Shared computation gets difficult when decryption becomes shared

Enrico framed the issue through the difference between client-server and multiplayer systems. In the first, one party encrypts, a server computes, and that same party decrypts. In the second, many parties need to compute under the same public key. That is where the trust problem appears: someone, or some committee, has to hold decryption authority.

The trade-off he described runs underneath the whole conversation:

  • centralize decryption, and the system is easier to keep live
  • distribute decryption, and the system is harder to abuse, but liveness, coordination, and key management all become harder

Trust does not vanish. It moves.
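One way to see where it moves: the distributed side of the trade-off is typically built on threshold secret sharing, where decryption authority is split across a committee so that any t of n members can act but fewer than t learn nothing. A minimal sketch of that idea using Shamir secret sharing over a prime field; the parameters and field size are illustrative, not those of any specific protocol:

```python
# Toy Shamir secret sharing: a decryption key is split into n shares
# so that any t of them reconstruct it, while fewer than t reveal
# nothing about it. Parameters here are illustrative only.
import random

P = 2**61 - 1  # a prime large enough for this sketch

def share(secret, t, n):
    """Split `secret` into n shares; any t suffice to reconstruct."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation of the polynomial at x = 0."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

key = 123456789
shares = share(key, t=3, n=5)
assert reconstruct(shares[:3]) == key   # any 3 of 5 shares suffice
assert reconstruct(shares[1:4]) == key
```

The costs Enrico describes are visible even here: the committee must be selected, the shares distributed and kept live, and t members must show up for every decryption.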

That point carried into the discussion of Indistinguishability Obfuscation (iO) and verification. Auryn pushed back on the idea that iO resolves the problem on its own:

“I don’t think that iO is the silver bullet that a lot of folks hope it is.”

His concern was that plaintext release can become a leakage channel. If a program can be rerun on altered subsets of inputs, even sparse outputs can reveal more than intended, which pushes the system back toward constrained release and delegated decryption authority.
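The differencing attack behind that concern is easy to make concrete. A toy sketch, with invented names and values, of what rerunning an aggregate-releasing program on altered subsets of inputs reveals:

```python
# The rerun-on-subsets leak: a program that only ever releases an
# aggregate still leaks individual inputs if it can be re-evaluated
# on subsets that differ by one participant. Values are invented.
inputs = {"alice": 40, "bob": 55, "carol": 63}

def released_sum(subset):
    """Stand-in for a program whose only output is a sum."""
    return sum(inputs[name] for name in subset)

with_bob = released_sum({"alice", "bob", "carol"})
without_bob = released_sum({"alice", "carol"})

# Comparing the two "sparse" outputs recovers bob's input exactly.
assert with_bob - without_bob == 55
```

This is why unconstrained plaintext release pushes designs back toward constrained release and delegated decryption authority: someone, or some committee, has to decide which subsets may ever be evaluated.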

Giacomo named the broader shift:

“There’s more than an elimination of trust, just a shift of trust.”

That is also where GRECO entered the conversation. Enrico described it as a proof of correct encryption for multiplayer settings where malformed inputs can distort a shared computation, ideas explored in more detail in his GRECO paper. Auryn described work at the Interfold that builds outward from GRECO toward end-to-end verifiability across committee selection, key generation, input validity, execution, and decryption.
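The failure mode such proofs target can be shown with a plaintext stand-in for an encrypted tally. The values are invented, and a real system would check validity with a zero-knowledge proof of correct encryption rather than by inspecting plaintexts, which is exactly what it cannot do:

```python
# Why per-input validity matters: in an encrypted tally each vote
# should be 0 or 1, but the computing parties only ever see the sum.
# One malformed input silently distorts the result — the failure
# that GRECO-style proofs of correct encryption are meant to rule
# out before an input enters the computation.
honest = [1, 0, 1, 1, 0]
tally = sum(honest)
assert tally == 3                    # 3 of 5 in favour

malformed = honest + [1000]          # an out-of-range "vote"
bad_tally = sum(malformed)
assert bad_tally == 1003             # the tally no longer means anything

# The fix is a validity check at submission time, which in the
# encrypted setting must itself be a proof, not an inspection:
valid = [v for v in malformed if v in (0, 1)]
assert sum(valid) == 3
```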

Privacy matters most when it changes the outcome

Toward the end, the conversation moved from trust assumptions to applications. Auryn pointed to voting, auctions, analytics, and AI as cases where many parties want a shared result without exposing their inputs. Enrico’s example of multilateral trade credit set-off gave the clearest sense of why this matters:

“Privacy actually makes the outcome better for all the participants.”

His point was that privacy is not always just protective. In some systems, it changes participation itself. More participants make the network more complete, and a more complete network produces better coordination outcomes. That is why the example lands: privacy is not outside the system. It changes what the system can do.
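A toy version of multilateral set-off shows the network effect: obligations that look irreducible pairwise can cancel entirely once a whole cycle of debts is visible. The names and figures below are invented, and in practice the netting would run under MPC so that no party reveals its books:

```python
# Toy multilateral set-off: compute each party's net position (what
# it owes minus what it is owed). Obligations can then be reduced so
# only net positions are settled, shrinking total exposure.
from collections import defaultdict

def net_positions(debts):
    """debts: list of (debtor, creditor, amount) triples."""
    net = defaultdict(int)
    for debtor, creditor, amount in debts:
        net[debtor] += amount     # debtor owes more
        net[creditor] -= amount   # creditor is owed more
    return dict(net)

# A three-party debt cycle: gross exposure is 30 ...
debts = [("A", "B", 10), ("B", "C", 10), ("C", "A", 10)]
positions = net_positions(debts)

# ... but every net position is zero: the cycle cancels entirely.
assert all(v == 0 for v in positions.values())
assert sum(amount for _, _, amount in debts) == 30
```

No pair here can cancel anything on its own; the cycle only closes with all three parties in. That is the sense in which more participants make the network more complete, and privacy is what makes full participation acceptable.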

Multiplayer privacy is not just about keeping secrets

The conversation did not treat shared computation as a solved problem but as a design problem with stubborn constraints: decryption authority, liveness, committees, verification, and the limits of current primitives. What emerged from that discussion, and from Interfold’s broader research direction, is a more precise framing of the field: multiplayer privacy is not just about keeping secrets. It is about designing cryptographic systems where many parties can jointly produce outcomes they trust, without granting any single actor the power to control or corrupt the result.


To access the whole conversation, head to our X account here, and stay tuned for upcoming Multiplayer Privacy spaces.
