Let’s have the polynomial

$$p(x) = a_0 + a_1 x + \dots + a_{n-1} x^{n-1}$$

and let $n = 2k$. Also, let’s have the field $\mathbb{F}_p$ and in it the multiplicative subgroup

$$H = \{\omega^1, \omega^2, \dots, \omega^n = 1\}.$$

We would like to have an efficient evaluation of the polynomial $p$ at all points of $H$ (this is, for example, needed in PLONK). The FFT enables us to do exactly this.
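To make this concrete, here is a minimal Python sketch (my own illustration, not from the post) that sets up such a field and subgroup. The prime $17$, the generator $3$, and the subgroup order $8$ are illustrative assumptions.

```python
# Illustrative setup (assumed values): F_17 contains a multiplicative subgroup
# of order 8, because 8 divides p - 1 = 16.
p = 17
n = 8
g = 3                              # 3 generates the full multiplicative group F_17*
omega = pow(g, (p - 1) // n, p)    # element of order exactly n

H = [pow(omega, i, p) for i in range(1, n + 1)]   # {omega^1, ..., omega^n = 1}
assert H[-1] == 1 and len(set(H)) == n
print(H)                           # [9, 13, 15, 16, 8, 4, 2, 1]
```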
The first half of the vector uses the same arguments for $p_{even}$ and $p_{odd}$ as the second half, so we only need to compute $p_{even}$ and $p_{odd}$ at $k$ values. This way, instead of evaluating the polynomial $p$ at $2k$ values, we evaluate two polynomials at $k$ values each. Hey, but isn’t this the same number of operations? Yeah, it’s actually even more operations, because we also need the multiplications by the powers of $\omega$ and some additions.
However, if we recursively split the polynomials in this way (which works as long as $n$ is a power of two), we end up with $\log n$ steps. And $n \cdot \log n$ is much less than $n^2$, which is the number of operations needed if we evaluate the polynomial the naive way (like in the first matrix expression at the top of the page).
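Here is a minimal sketch of this recursion in Python (my own illustration, assuming coefficients in a prime field $\mathbb{F}_p$ and an $\omega$ of order $n$, with $n$ a power of two); it returns the evaluations of $p$ at all powers of $\omega$ in $O(n \log n)$ field operations.

```python
def fft(coeffs, omega, p):
    """Evaluate sum_j coeffs[j] * x^j at x = omega^0, ..., omega^(n-1) in F_p.

    coeffs has length n (a power of two) and omega has order n in F_p.
    """
    n = len(coeffs)
    if n == 1:
        return coeffs[:]                               # constant polynomial
    even = fft(coeffs[0::2], omega * omega % p, p)     # p_even at the squares
    odd  = fft(coeffs[1::2], omega * omega % p, p)     # p_odd  at the squares
    out = [0] * n
    x = 1                                              # current power omega^i
    for i in range(n // 2):
        t = x * odd[i] % p
        out[i]          = (even[i] + t) % p            # p(omega^i)
        out[i + n // 2] = (even[i] - t) % p            # p(omega^(i + n/2)), since omega^(n/2) = -1
        x = x * omega % p
    return out

# Spot check against the definition p(omega^i) = sum_j a_j * omega^(i*j),
# using the illustrative p = 17, omega = 9 from the earlier sketch.
p, omega = 17, 9
coeffs = [1, 2, 3, 4, 5, 6, 7, 8]                      # a_0, ..., a_7 (arbitrary)
evals = fft(coeffs, omega, p)
assert all(evals[i] == sum(a * pow(omega, i * j, p) for j, a in enumerate(coeffs)) % p
           for i in range(8))
```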
The polynomial has $n$ coefficients (its degree is $n - 1$), because that is exactly what we need to describe an arbitrary assignment of values on the domain of size $n$: $\{1, \omega, \omega^2, \dots, \omega^{n-1}\}$. In our case $n = 8$.
Let’s split $p$ into $p_{even}$ and $p_{odd}$:

$$p_{even}(x) = a_6 x^3 + a_4 x^2 + a_2 x + a_0$$

$$p_{odd}(x) = a_7 x^3 + a_5 x^2 + a_3 x + a_1$$

$$p(x) = p_{even}(x^2) + x \cdot p_{odd}(x^2)$$
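Just to see that this identity holds, here is a quick check in Python (illustrative coefficients and modulus, not from the post):

```python
a = [1, 2, 3, 4, 5, 6, 7, 8]       # a_0, ..., a_7, arbitrary illustrative coefficients
p = 17                             # illustrative prime from the earlier sketch

def ev(coeffs, x, p):
    """Plain Horner evaluation of sum_i coeffs[i] * x^i in F_p."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % p
    return acc

for x in range(p):
    even = ev(a[0::2], x * x % p, p)    # p_even(x^2), coefficients a_0, a_2, a_4, a_6
    odd  = ev(a[1::2], x * x % p, p)    # p_odd(x^2),  coefficients a_1, a_3, a_5, a_7
    assert ev(a, x, p) == (even + x * odd) % p   # p(x) = p_even(x^2) + x * p_odd(x^2)
```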
We would need to evaluate both polynomials at $\{1, \omega^2, \omega^4, \omega^6\}$. To compute $p_{even}(x)$ we just need $x^2$ (which is precomputed, because $x$ is some power of $\omega$) and the evaluations of two further polynomials, $p_{even,even}$ and $p_{even,odd}$. But we need to evaluate these two polynomials only at $\{1, \omega^4\}$. We do the same for $p_{odd}$. And to compute $p(x)$, we again just need to evaluate $p_{even}(x^2) + x \cdot p_{odd}(x^2)$; note that we already have $p_{even}(x^2)$ and $p_{odd}(x^2)$.
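The reason the evaluation sets keep shrinking is that squaring maps the subgroup of order $8$ onto the subgroup of order $4$, that one onto the subgroup of order $2$, and so on. A tiny check of this, reusing the illustrative $p = 17$, $\omega = 9$ from before:

```python
p, omega = 17, 9                                   # illustrative values from the earlier sketch
domain = [pow(omega, i, p) for i in range(8)]      # {1, omega, ..., omega^7}
while len(domain) > 1:
    print(sorted(domain))
    domain = list({x * x % p for x in domain})     # {1, w^2, w^4, w^6} -> {1, w^4} -> {1}
print(domain)                                      # [1]
```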
Now to the inverse operation: the inverse transformation in our case is polynomial interpolation. That is, we have the evaluations of the polynomial at a set of points and need to find the coefficients of the polynomial.
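One standard way to do this interpolation over $H$ is to run the same FFT with $\omega^{-1}$ in place of $\omega$ and then divide every result by $n$. A sketch of the round trip, again my own illustration reusing `fft`, `p` and `omega` from above:

```python
def ifft(evals, omega, p):
    """Recover the coefficients of p from its evaluations at the powers of omega."""
    n = len(evals)
    omega_inv = pow(omega, -1, p)          # modular inverse (Python 3.8+)
    n_inv = pow(n, -1, p)
    return [c * n_inv % p for c in fft(evals, omega_inv, p)]

# Round trip: coefficients -> evaluations over H -> coefficients.
coeffs = [1, 2, 3, 4, 5, 6, 7, 8]
assert ifft(fft(coeffs, omega, p), omega, p) == coeffs
```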