Spartan offers the first zkSNARKs for NP without a trusted setup in which verifying a proof incurs sub-linear cost, without requiring uniformity in the NP statement's structure.
The core insight is that the sum-check protocol, when applied to a suitably
constructed low-degree polynomial, yields a powerful—but highly inefficient—interactive proof protocol.
However, the inefficiency can be tamed with new techniques.
Sum-check protocol
Suppose there is a function $f(x_1, \dots, x_n)$, and a prover wants to prove that
$\sum_{x_1, \dots, x_n \in \{0,1\}} f(x_1, \dots, x_n) = H$
for some value $H$.
The sum-check protocol enables proving such a statement.
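To make the mechanics concrete, here is a minimal Python sketch of the sum-check protocol, under the simplifying assumptions that the polynomial is multilinear and given as its table of evaluations on the hypercube, and that the prover is honest; the field modulus and all names are illustrative and not taken from the Spartan codebase.

```python
import random

P = 2**61 - 1  # toy prime modulus (an assumption; any large prime field works)

def mle_eval(table, point):
    """Evaluate the multilinear extension of `table` at `point` in F^n."""
    evals = [v % P for v in table]
    for r in point:                       # fold out one variable per coordinate
        half = len(evals) // 2
        evals = [(evals[i] + r * (evals[half + i] - evals[i])) % P
                 for i in range(half)]
    return evals[0]

def sum_check(table):
    """Prove/verify H = sum of `table` entries (honest prover, multilinear f)."""
    n = len(table).bit_length() - 1       # number of variables
    claim = sum(table) % P                # H, the claimed sum
    evals = [v % P for v in table]        # prover's working copy of f
    challenges = []
    for _ in range(n):
        half = len(evals) // 2
        # The round polynomial g_i is linear, so sending g_i(0), g_i(1) suffices.
        g0 = sum(evals[:half]) % P        # contribution with the current bit = 0
        g1 = sum(evals[half:]) % P        # contribution with the current bit = 1
        assert (g0 + g1) % P == claim     # verifier's per-round consistency check
        r = random.randrange(P)           # verifier's random challenge
        challenges.append(r)
        claim = (g0 + r * (g1 - g0)) % P  # new claim: g_i(r)
        # Prover fixes the current variable to r and continues with the rest.
        evals = [(evals[i] + r * (evals[half + i] - evals[i])) % P
                 for i in range(half)]
    # Final check: a single evaluation of f at the random point (r_1, ..., r_n).
    assert mle_eval(table, challenges) == claim
    return True

# Example: v has 8 coefficients and we prove their sum equals H.
v = [3, 1, 4, 1, 5, 9, 2, 6]
assert sum_check(v)
```

In each round the prover sends a univariate (here linear) polynomial, the verifier checks it against the running claim and answers with a random challenge; after $n$ rounds the verifier is left with a single evaluation of $f$ at a random point.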
When we want to prove, for example, that the vector $v$ has the sum of its coefficients equal to $H$, we view $v$ as a function. Let's say $v$ has 8 coefficients; then we can see it as a function $v\colon \{0,1\}^3 \to \mathbb{F}$ that maps the bits of an index $i$ to the coefficient $v_i$.
By using the sum-check protocol, we can prove $v_0 + v_1 + \dots + v_7 = H$.
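Spelled out for this 3-variable case (assuming the usual big-endian mapping between indices and bit strings; any fixed bijection works), the claim proven by the sum-check protocol reads:
$\sum_{x \in \{0,1\}^3} v(x) = v(0,0,0) + v(0,0,1) + \dots + v(1,1,1) = v_0 + v_1 + \dots + v_7 = H$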
In fact, we often need to prove that each coefficient is equal to zero:
$v_0 = 0, \; \dots, \; v_7 = 0$
This can be achieved by a zero-check protocol, which is an extension of the sum-check protocol.
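One standard way to build the zero-check from the sum-check (and, roughly, the one Spartan uses below) is to have the verifier pick a random $\tau \in \mathbb{F}^s$ (with $2^s$ the number of coefficients, here $s = 3$) and run the sum-check protocol for the claim
$\sum_{x \in \{0,1\}^s} v(x) \cdot \mathrm{eq}(\tau, x) = 0$
where $\mathrm{eq}$ is the multilinear polynomial that, on Boolean inputs, equals 1 exactly when its two arguments are equal (its formula is given below). If $v$ is nonzero somewhere on the hypercube, this sum is a nonzero polynomial in $\tau$, so with high probability it does not vanish at a random $\tau$.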
So, how does Spartan use the sum-check protocol?
Encoding of R1CS instances as low-degree polynomials
Theorem 4.1 from the paper roughly says that for any R1CS instance, there exists
a degree-3, $\log m$-variate polynomial $G$
such that $\sum_{x \in \{0,1\}^{\log m}} G(x) = 0$ if and only if there exists a witness $w$ such that
$\mathrm{Sat}_{\mathrm{R1CS}}(io, w) = 1$ (i.e., the witness $w$ satisfies the constraint system for the public input/output $io$).
Let’s say we have the following constraint system:
How do we transform this system into an equation $\sum_{x \in \{0,1\}^{\log m}} G(x) = 0$?
We can view the matrices $A, B, C \in \mathbb{F}^{m \times m}$ as functions with the following signature:
$\{0,1\}^s \times \{0,1\}^s \to \mathbb{F}$, where $s = \log m$.
In other words, we view $A, B, C$ as functions on the Boolean hypercube of dimension $2s$ (vectors of $2^{2s} = m^2$ coefficients).
The first $s$ coordinates denote the row, and the second $s$ coordinates denote the column.
Let's write $z = (io, 1, w)$, where $io$ is the public input/output and $w$ is the witness.
We view $z$ as a function on the Boolean hypercube of dimension $s$, and we take its multilinear extension (MLE): $\tilde{z}$.
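As an illustration of this encoding, here is a small Python sketch (toy prime field, a made-up $4 \times 4$ matrix and length-4 vector, naive $O(2^s)$ evaluation; none of this is taken from the Spartan implementation) showing a matrix viewed as a function of row/column bits and an MLE evaluated via the $\mathrm{eq}$ basis:

```python
P = 2**61 - 1              # toy prime modulus (assumption)
s = 2                      # m = 2^s = 4 in this toy example

def bits(i, s):
    """Big-endian bit decomposition of index i into s bits."""
    return [(i >> (s - 1 - k)) & 1 for k in range(s)]

def eq(t, x):
    """eq(t, x) = prod_i (t_i*x_i + (1 - t_i)*(1 - x_i)), the MLE Lagrange basis."""
    out = 1
    for ti, xi in zip(t, x):
        out = out * ((ti * xi + (1 - ti) * (1 - xi)) % P) % P
    return out

# A matrix as a function {0,1}^s x {0,1}^s -> F: the first s bits pick the row,
# the last s bits pick the column.
A = [[0, 1, 0, 0],
     [0, 0, 2, 0],
     [0, 0, 0, 3],
     [5, 0, 0, 0]]

def A_func(row_bits, col_bits):
    row = int("".join(map(str, row_bits)), 2)
    col = int("".join(map(str, col_bits)), 2)
    return A[row][col] % P

# z viewed as a function {0,1}^s -> F; its MLE can be evaluated at any point
# r in F^s by weighting the table entries with the eq basis.
z = [7, 1, 2, 4]

def z_tilde(r):
    return sum(z[i] * eq(r, bits(i, s)) for i in range(2**s)) % P

# On Boolean inputs, the functions agree with the original tables.
assert z_tilde(bits(3, s)) == z[3] % P
assert A_func(bits(2, s), bits(3, s)) == A[2][3] % P
```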
Let's consider the following function for $x \in \{0,1\}^s$:
$f(x) = \left(\sum_{y \in \{0,1\}^s} \tilde{A}(x, y) \cdot \tilde{z}(y)\right) \cdot \left(\sum_{y \in \{0,1\}^s} \tilde{B}(x, y) \cdot \tilde{z}(y)\right) - \sum_{y \in \{0,1\}^s} \tilde{C}(x, y) \cdot \tilde{z}(y)$
Running the sum-check protocol on $f$ directly would only prove something about the sum $\sum_{x \in \{0,1\}^s} f(x)$, and this is not what we need: individual terms could cancel each other out. However, our R1CS can be rewritten as:
$f(x) = 0$ for each $x \in \{0,1\}^s$
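A minimal sanity check of this reformulation in Python (toy prime field, a made-up single-constraint R1CS, the $(io, 1, w)$ structure of $z$ ignored for brevity; on Boolean points the MLEs $\tilde{A}, \tilde{B}, \tilde{C}, \tilde{z}$ agree with the original tables, so no interpolation is needed here):

```python
P = 2**61 - 1                            # toy prime modulus (assumption)
s = 1                                    # m = 2^s = 2 rows in this toy example

# Toy R1CS with a single real constraint z0 * z0 = z1 (second row is all-zero).
A = [[1, 0], [0, 0]]
B = [[1, 0], [0, 0]]
C = [[0, 1], [0, 0]]
z = [3, 9]                               # 3 * 3 = 9, so z satisfies the system

def row(M, x_bits):
    """M viewed as a function of x on the hypercube: the row selected by x."""
    return M[int("".join(map(str, x_bits)), 2)]

def f(x_bits):
    """f(x) = (sum_y A(x,y) z(y)) * (sum_y B(x,y) z(y)) - sum_y C(x,y) z(y)."""
    a = sum(row(A, x_bits)[j] * z[j] for j in range(len(z))) % P
    b = sum(row(B, x_bits)[j] * z[j] for j in range(len(z))) % P
    c = sum(row(C, x_bits)[j] * z[j] for j in range(len(z))) % P
    return (a * b - c) % P

# z satisfies the R1CS iff f vanishes on the entire Boolean hypercube.
assert all(f([b]) == 0 for b in (0, 1))
```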
This per-point condition can be proven by the zero-check protocol, which
means running the sum-check protocol for the following polynomial:
$g(t) = \sum_{x \in \{0,1\}^s} f(x) \cdot \mathrm{eq}(t, x)$
where
$\mathrm{eq}(t, x) = \prod_{i=1}^{s} \left(t_i x_i + (1 - t_i)(1 - x_i)\right)$
The function $f$ vanishes on the whole hypercube if and only if $g$ is the zero polynomial, so it suffices for the verifier to pick a random $\tau \in \mathbb{F}^s$ and for both parties to run the sum-check protocol for the claim $g(\tau) = \sum_{x \in \{0,1\}^s} f(x) \cdot \mathrm{eq}(\tau, x) = 0$.
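A quick sanity check in Python (toy field, illustrative only) that on Boolean inputs $\mathrm{eq}$ is the equality indicator, which is why it acts as the Lagrange basis over the hypercube:

```python
from itertools import product

P = 2**61 - 1                            # toy prime modulus (assumption)

def eq(t, x):
    out = 1
    for ti, xi in zip(t, x):
        out = out * ((ti * xi + (1 - ti) * (1 - xi)) % P) % P
    return out

# On Boolean inputs, eq(t, x) is 1 exactly when t == x and 0 otherwise.
for t in product([0, 1], repeat=3):
    for x in product([0, 1], repeat=3):
        assert eq(t, x) == (1 if t == x else 0)
```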
In the sum-check protocol, the prover does most of the work; at the end, the verifier is left with a claim about one evaluation of the summand at a random point:
$e_x = f(r_x) \cdot \mathrm{eq}(\tau, r_x)$
where $r_x \in \mathbb{F}^s$ is the vector of random challenges chosen by the verifier during the sum-check rounds.
So, the verifier needs to efficiently compute $f(r_x) \cdot \mathrm{eq}(\tau, r_x)$: the factor $\mathrm{eq}(\tau, r_x)$ takes only $O(s)$ field operations, but evaluating $f(r_x)$ involves the sums over $y$ from the definition of $f$.
The paper proposes that the prover compute:
$v_A = \sum_{y \in \{0,1\}^s} \tilde{A}(r_x, y) \cdot \tilde{z}(y)$
$v_B = \sum_{y \in \{0,1\}^s} \tilde{B}(r_x, y) \cdot \tilde{z}(y)$
$v_C = \sum_{y \in \{0,1\}^s} \tilde{C}(r_x, y) \cdot \tilde{z}(y)$
The verifier then needs to check:
$e_x = (v_A \cdot v_B - v_C) \cdot \mathrm{eq}(\tau, r_x)$
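The following Python sketch (toy field, the same made-up single-constraint instance as above, naive summations, illustrative names) shows how the claimed values $v_A, v_B, v_C$ can be computed and how they enter the right-hand side of the verifier's check; the value $e_x$ itself would come out of the sum-check transcript:

```python
import random

P = 2**61 - 1                            # toy prime modulus (assumption)
s = 1

# Same toy R1CS as before: single constraint z0 * z0 = z1.
A = [[1, 0], [0, 0]]
B = [[1, 0], [0, 0]]
C = [[0, 1], [0, 0]]
z = [3, 9]

def bits(i, s):
    return [(i >> (s - 1 - k)) & 1 for k in range(s)]

def eq(t, x):
    out = 1
    for ti, xi in zip(t, x):
        out = out * ((ti * xi + (1 - ti) * (1 - xi)) % P) % P
    return out

def v_claim(M, r_x):
    """v_M = sum over Boolean y of M~(r_x, y) * z~(y), computed naively."""
    total = 0
    for y in range(2**s):
        # M~(r_x, y) for Boolean y: interpolate M over its row index only.
        M_rx_y = sum(M[x][y] * eq(r_x, bits(x, s)) for x in range(2**s)) % P
        total = (total + M_rx_y * z[y]) % P        # z~(y) = z[y] on the cube
    return total

tau = [random.randrange(P) for _ in range(s)]      # verifier's zero-check point
r_x = [random.randrange(P) for _ in range(s)]      # sum-check challenge point

v_A, v_B, v_C = v_claim(A, r_x), v_claim(B, r_x), v_claim(C, r_x)
# Right-hand side of the verifier's check, compared against e_x from the sum-check:
rhs = ((v_A * v_B - v_C) % P) * eq(tau, r_x) % P
```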
However, the prover needs to prove that $v_A, v_B, v_C$ have been computed correctly, and for this,
the sum-check protocol can be used again (either three separate sum-checks or, better, a single sum-check
over a random linear combination of the three claims). The claims to be checked are: