What’s the difference between Learning With Errors (LWE) and
Ring Learning With Errors (RLWE)?
Well, for this, one first needs to know what ideal lattices are.
Ideal lattices are a generalization of cyclic lattices. And what are cyclic lattices?
First, we define the rotational shift operator $rot$ on $\mathbb{R}^n$ ($n \geq 2$):
$$rot(x_1, x_2, \dots, x_n) = (x_n, x_1, \dots, x_{n-1}).$$
A cyclic lattice is a lattice that is closed under the rotational shift operator.
For example, $\mathbb{Z}^n$ is a cyclic lattice.
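To make this concrete, here is a minimal Python sketch (the helper name `rot` is my own) of the rotational shift operator; applying it to an integer vector always yields another integer vector, which is exactly why $\mathbb{Z}^n$ is closed under it.

```python
def rot(v):
    """Rotational shift: (x1, x2, ..., xn) -> (xn, x1, ..., x_{n-1})."""
    return (v[-1],) + tuple(v[:-1])

v = (1, 2, 3, 4)
print(rot(v))        # (4, 1, 2, 3)
print(rot(rot(v)))   # (3, 4, 1, 2) -- repeated applications cycle the coordinates
```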
Let's now observe the ring $\mathbb{Z}[x]/(x^n - 1)$.
First of all, if we take the polynomials in $\mathbb{Z}[x]$ of degree smaller than some $n$, we can view these
polynomials as points in $\mathbb{Z}^n$, simply by viewing, for example, $a_0 + a_1 x + a_2 x^2$ as $(a_0, a_1, a_2)$.
Now, the problem is that when we multiply two polynomials, we can end up with a polynomial whose degree is $n$ or greater, which no longer fits in $n$ coordinates.
For example, let's have $n = 2$ and polynomials $a = a_0 + a_1 x$, $b = b_0 + b_1 x$. We can see
$a$ as $(a_0, a_1)$ and $b$ as $(b_0, b_1)$.
We would need three coordinates for $ab = (a_0 b_0, a_1 b_0 + a_0 b_1, a_1 b_1)$.
It would be nice if we could stay within two coordinates.
So, we take $R = \mathbb{Z}[x]/(x^2 - 1)$.
Now, in $R$, we have $x^2 - 1 = 0$, which means $x^2 = 1$.
This allows us to always replace $x^2$ by $1$, ensuring that we never get a polynomial of degree greater than $1$.
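As a small sanity check, here is a Python sketch (the helper name is mine) that multiplies two degree-1 polynomials and reduces the result using $x^2 = 1$, so we indeed stay within two coordinates.

```python
def mul_mod_x2_minus_1(a, b):
    """Multiply a = a0 + a1*x and b = b0 + b1*x in Z[x]/(x^2 - 1).

    The full product is a0*b0 + (a0*b1 + a1*b0)*x + a1*b1*x^2;
    replacing x^2 by 1 folds the last coefficient back into the constant term.
    """
    a0, a1 = a
    b0, b1 = b
    return (a0 * b0 + a1 * b1, a0 * b1 + a1 * b0)

# (2 + 3x)(4 + 5x) = 8 + 22x + 15x^2 -> (8 + 15) + 22x = (23, 22)
print(mul_mod_x2_minus_1((2, 3), (4, 5)))
```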
In lattice-based cryptography, we operate in the ring $R = \mathbb{Z}[x]/(x^n - 1)$.
Let's consider an ideal $I \subset R$. The lattice that corresponds to $I$ (by viewing the elements of $R$ as points
in $\mathbb{Z}^n$, as described earlier) is called an ideal lattice.
It is a sublattice of $\mathbb{Z}^n$.
It holds:
Theorem:
The lattice $L \subset \mathbb{Z}^n$ is cyclic if and only if it corresponds to some ideal $I$
in the quotient polynomial ring $R = \mathbb{Z}[x]/(x^n - 1)$.
The theorem can be quickly proved (see Wikipedia) by observing that the rotational shift operator is the same as multiplying by $x$ in $\mathbb{Z}[x]/(x^n - 1)$.
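Here is a quick numerical check of that observation, as a Python sketch (function names are mine): multiplying a polynomial by $x$ modulo $x^n - 1$ produces exactly the rotated coefficient vector.

```python
def rot(v):
    """Rotational shift: (x1, ..., xn) -> (xn, x1, ..., x_{n-1})."""
    return [v[-1]] + list(v[:-1])

def mul_by_x_mod_xn_minus_1(coeffs):
    """Coefficients (a0, ..., a_{n-1}) of a(x); return the coefficients of
    x * a(x) mod (x^n - 1): the term a_{n-1} * x^n wraps around to a_{n-1} * 1."""
    return [coeffs[-1]] + list(coeffs[:-1])

a = [5, 7, 11, 13]  # 5 + 7x + 11x^2 + 13x^3 in Z[x]/(x^4 - 1)
print(mul_by_x_mod_xn_minus_1(a))  # [13, 5, 7, 11]
print(rot(a))                      # [13, 5, 7, 11] -- the same vector
```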
The SIS (Short Integer Solution) problem
Let's have (I am abusing the equal sign operator a bit):
$$Ax = b \bmod q$$
A system like this, where $A \in \mathbb{Z}_q^{n \times m}$ and $b \in \mathbb{Z}_q^n$, can be solved
in polynomial time with Gaussian elimination.
However, variations of this problem can be hard. For example, when we require that
the norm of $x$ is smaller than some bound $B$. Note that the bound needs to be smaller than $q$;
otherwise (in the homogeneous case $b = 0$, which is the SIS problem below), $(q, 0, \dots, 0)$ is a trivial solution.
The SIS problem $SIS(m, n, q, B)$ is the problem of finding a nonzero $x = (x_1, \dots, x_m)$
such that $Ax = 0 \bmod q$ and $|x_i| \leq B$.
MIT notes
say the solutions are very likely to exist if $(2B+1)^m \gg q^n$.
If not, a random instance might have no short solution at all, so we first pick a $B$-bounded vector $e$ and compute $b = Ae \bmod q$.
We "plant" the solution $e$ inside $b$ (this is called the planted regime
or the LWE regime).
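A minimal Python sketch of the planted regime (all names and the concrete parameter values are my own illustrative choices): sample a random $A$, pick a $B$-bounded $e$, and publish $(A, b = Ae \bmod q)$.

```python
import random

n, m, q, B = 4, 8, 97, 1   # illustrative parameters, not secure ones

# random A in Z_q^{n x m}
A = [[random.randrange(q) for _ in range(m)] for _ in range(n)]

# planted B-bounded solution e in {-B, ..., B}^m
e = [random.randint(-B, B) for _ in range(m)]

# b = A e mod q -- the published instance (A, b) hides the short solution e
b = [sum(A[i][j] * e[j] for j in range(m)) % q for i in range(n)]

print("planted e:", e)
print("instance b:", b)
```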
The SIS and LWE problems are related.
Let's say we have a matrix $C$ such that $CA = 0 \bmod q$.
We are solving
$$As + e = y \bmod q.$$
Multiplying both sides by $C$ gives
$$C(As + e) = Cy \bmod q,$$
and since $CA = 0 \bmod q$, this becomes
$$Ce = Cy \bmod q,$$
which is a SIS problem (we are looking for the short vector $e$).
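To see the reduction concretely, here is a Python sketch (the parameter values, helper names, and the particular way of building $C$ are my own choices): we take $C = (-A_2 A_1^{-1} \mid I)$, where $A_1$ is the (assumed invertible) top block of $A$, so that $CA = 0 \bmod q$; multiplying the LWE equation by $C$ then leaves an equation only in the short error $e$.

```python
import random

n, m, q = 2, 4, 97        # q prime; illustrative (insecure) parameters

def inv2x2_mod(M, q):
    """Inverse of a 2x2 matrix modulo a prime q (assumes the determinant is invertible;
    re-run with a fresh A in the unlikely case it is not)."""
    (a, b), (c, d) = M
    det_inv = pow((a * d - b * c) % q, -1, q)
    return [[(d * det_inv) % q, (-b * det_inv) % q],
            [(-c * det_inv) % q, (a * det_inv) % q]]

def matmul_mod(X, Y, q):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y))) % q
             for j in range(len(Y[0]))] for i in range(len(X))]

# LWE instance: y = A s + e mod q, with A in Z_q^{m x n}
A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]
s = [random.randrange(q) for _ in range(n)]
e = [random.randint(-1, 1) for _ in range(m)]
y = [(sum(A[i][j] * s[j] for j in range(n)) + e[i]) % q for i in range(m)]

# C = (-A2 * A1^{-1} | I) so that C A = 0 mod q  (A1 = top n rows, A2 = the rest)
A1, A2 = A[:n], A[n:]
left = matmul_mod([[-x % q for x in row] for row in A2], inv2x2_mod(A1, q), q)
C = [left[i] + [1 if j == i else 0 for j in range(m - n)] for i in range(m - n)]

# Check C A = 0 and that C y = C e mod q: the secret s has dropped out,
# leaving a SIS-style equation in the short vector e.
print(matmul_mod(C, A, q))                                            # all zeros
Cy = [sum(C[i][j] * y[j] for j in range(m)) % q for i in range(m - n)]
Ce = [sum(C[i][j] * e[j] for j in range(m)) % q for i in range(m - n)]
print(Cy == Ce)                                                       # True
```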
Example of a public-key cryptosystem based on LWE
Let's have a secret key $s \in \mathbb{Z}_q^n$ and a public key $(a_i, b_i = \langle a_i, s \rangle + e_i \bmod q)_{i=1}^m$.
The encryption of a bit $x$ goes as follows. Let's choose a subset $S \subset [m]$
and compute the ciphertext:
$$c = (a, b) = \left(\sum_{i \in S} a_i, \; \frac{q}{2} x + \sum_{i \in S} b_i\right)$$
The decryption of $c$ is $0$ if $b - \langle a, s \rangle \bmod q$ is closer to $0$ than to $\frac{q}{2}$,
and $1$ otherwise.
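Here is a minimal Python sketch of this scheme (the parameter values, error distribution, and names are my own illustrative choices, not secure ones); the bit is encoded as a multiple of roughly $q/2$, and decryption works as long as the accumulated error stays below $q/4$.

```python
import random

n, m, q = 8, 20, 8193      # illustrative (insecure) parameters; errors stay far below q/4

def keygen():
    s = [random.randrange(q) for _ in range(n)]                      # secret key
    a = [[random.randrange(q) for _ in range(n)] for _ in range(m)]  # public a_i
    e = [random.randint(-1, 1) for _ in range(m)]                    # small errors
    b = [(sum(a[i][j] * s[j] for j in range(n)) + e[i]) % q for i in range(m)]
    return s, list(zip(a, b))                                        # pk = (a_i, b_i)_{i=1..m}

def encrypt(pk, x):
    """Encrypt a bit x: pick a random subset S of [m] and sum the chosen samples."""
    S = [i for i in range(m) if random.random() < 0.5]
    a = [sum(pk[i][0][j] for i in S) % q for j in range(n)]
    b = ((q // 2) * x + sum(pk[i][1] for i in S)) % q
    return a, b

def decrypt(s, ct):
    a, b = ct
    d = (b - sum(a[j] * s[j] for j in range(n))) % q
    # d = (q/2)*x + (sum of a few small errors); decide whether it is closer to 0 or to q/2
    return 0 if min(d, q - d) < abs(d - q // 2) else 1

s, pk = keygen()
for bit in (0, 1):
    assert decrypt(s, encrypt(pk, bit)) == bit
print("decryption works on both bits")
```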
A hash function based on SIS
The SIS problem gives us a simple hash function: $h_A(e) = Ae \bmod q$, where the input $e$ is a short vector, say $e \in \{0, \dots, B\}^m$. A collision $h_A(e_1) = h_A(e_2)$ with $e_1 \neq e_2$ would give $A(e_1 - e_2) = 0 \bmod q$, and because $(e_1 - e_2) \in [-B, B]^m$,
this means a solution to the SIS problem.
However, as simple as this hash function is, it is also inefficient: just reading
the matrix $A$ already requires roughly $n^2$ time.
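For toy parameters we can even find a collision by brute force, which makes the argument above tangible. This Python sketch (names and parameters are mine) hashes all inputs in $\{0,1\}^m$ and exhibits two that collide; their difference is a short SIS solution.

```python
import itertools, random

n, m, q = 2, 6, 3          # tiny toy parameters: 2^m = 64 inputs, only q^n = 9 possible digests

A = [[random.randrange(q) for _ in range(m)] for _ in range(n)]

def h(e):
    return tuple(sum(A[i][j] * e[j] for j in range(m)) % q for i in range(n))

# hash every input in {0,1}^m; by pigeonhole a collision must occur
seen = {}
for e in itertools.product((0, 1), repeat=m):
    d = h(e)
    if d in seen:
        e1, e2 = seen[d], e
        x = [e1[j] - e2[j] for j in range(m)]      # nonzero short vector in {-1,0,1}^m
        print("collision:", e1, e2)
        print("SIS solution x:", x, "A x mod q =", h(x))
        break
    seen[d] = e
```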
How to make the hash function faster
This is where ideal lattices come into play. The idea is to use only matrices that allow
very fast multiplication.
So let's have $A \in \mathbb{Z}_q^{n \times m}$ where $m > n$ (to have a compression function),
and let's have $n \mid m$; let's denote $l = \frac{m}{n}$.
Let's see $A$ as a tuple of matrices: $A = (A_1, \dots, A_l)$, where each $A_i \in \mathbb{Z}_q^{n \times n}$.
Let's have $e \in \{-1, 0, 1\}^m$. Let's see $e$ as an $l$-tuple $(e_1, \dots, e_l)$, where each $e_i \in \{-1, 0, 1\}^n$.
The operation $Ae$ is now:
$$h_A(e) = (A_1, \dots, A_l)(e_1, \dots, e_l)^T = A_1 e_1 + \dots + A_l e_l$$
We will take $A_i$ of a special form to get fast multiplication.
Let's have vectors $a_1, \dots, a_l$, where $a_i = (a_{i,1}, \dots, a_{i,n})$, and let's take $A_i = Rot(a_i)$, where
$$Rot(a) = a_1 I_n + a_2 X + \dots + a_n X^{n-1}$$
and $X$ is the matrix of the rotational shift operator. Multiplying a vector by $Rot(a)$ is the same as multiplying the corresponding polynomials in $\mathbb{Z}[x]/(x^n - 1)$ (multiplication by $X$ is multiplication by $x$, as we saw above), and such products can be computed much faster than a generic matrix-vector product.
Note also that $Rot(x)$ commutes with $X$. Why is this so?
Let's have a vector $x = (x_1, \dots, x_n)$ and an arbitrary vector $e$.
It's enough to show:
$$X \, Rot(x) \, e = Rot(x) \, X \, e$$
This is clear from below, since powers of $X$ commute with each other:
$$X \, Rot(x) \, e = X(x_1 I_n + x_2 X + \dots + x_n X^{n-1})e$$
$$Rot(x) \, X e = (x_1 I_n + x_2 X + \dots + x_n X^{n-1})Xe$$
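The following Python sketch (helper names are mine) builds $Rot(a)$ as a polynomial in the shift matrix $X$ and checks both facts numerically: $Rot(a)\,e$ equals the coefficient vector of the product of the polynomials $a$ and $e$ in $\mathbb{Z}[x]/(x^n - 1)$, and $X$ commutes with $Rot(a)$.

```python
import random

n = 4

def shift_matrix(n):
    """Matrix X of the rotational shift: X (v1, ..., vn) = (vn, v1, ..., v_{n-1})."""
    return [[1 if j == (i - 1) % n else 0 for j in range(n)] for i in range(n)]

def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(len(Q)))
             for j in range(len(Q[0]))] for i in range(len(P))]

def matvec(P, v):
    return [sum(P[i][j] * v[j] for j in range(len(v))) for i in range(len(P))]

def Rot(a):
    """Rot(a) = a_1*I + a_2*X + ... + a_n*X^(n-1)."""
    X = shift_matrix(n)
    M = [[0] * n for _ in range(n)]
    P = [[1 if i == j else 0 for j in range(n)] for i in range(n)]   # X^0 = I
    for k in range(n):
        M = [[M[i][j] + a[k] * P[i][j] for j in range(n)] for i in range(n)]
        P = matmul(X, P)                                             # next power of X
    return M

def poly_mul_mod(a, e):
    """Coefficients of a(x)*e(x) mod (x^n - 1): indices wrap around modulo n."""
    c = [0] * n
    for i in range(n):
        for j in range(n):
            c[(i + j) % n] += a[i] * e[j]
    return c

a = [random.randint(-5, 5) for _ in range(n)]
e = [random.randint(-5, 5) for _ in range(n)]
X = shift_matrix(n)

print(matvec(Rot(a), e) == poly_mul_mod(a, e))   # True: Rot(a) e is the ring product
print(matmul(X, Rot(a)) == matmul(Rot(a), X))    # True: X commutes with Rot(a)
```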
I haven't read the paper, but the MIT lecture
says Micciancio showed in 2007 that such a hash function is a secure one-way function (difficult to invert).
However, it is not a collision-resistant hash function.
Before having a look at why, let's see the definition of the Ring-SIS problem.
Definition:
For a ring $R$, integer modulus $q \geq 2$, and integer $l \geq 1$,
the (average-case) Ring-SIS problem is defined as follows. The input is
$a_1, \dots, a_l \in R_q$ sampled independently and uniformly at random.
The goal is to output $e_1, \dots, e_l \in R$ with coefficients in $\{-1, 0, 1\}$, not all zero, such
that $a_1 e_1 + \dots + a_l e_l = 0 \bmod qR$.
However, Ring-SIS over $\mathbb{Z}[x]/(x^n - 1)$ is not hard.
It holds:
$$x^n - 1 = (x - 1)(x^{n-1} + \dots + x + 1)$$
Let's denote $\tilde{e} = x^{n-1} + \dots + x + 1$.
We can see that $(\tilde{e}, 0, \dots, 0)$ is a solution to the Ring-SIS problem
whenever $a_1$ is divisible by $x - 1$: if $a_1 = (x - 1)g$, then $a_1 \tilde{e} = g(x - 1)\tilde{e} = g(x^n - 1) = 0$ in $R$.
The probability that $a_1$ is divisible by $x - 1$ is $\frac{1}{q}$ ($a_1$ is divisible by $x - 1$ exactly when $a_1(1) = 0 \bmod q$).
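This Python sketch (names and parameters are mine) demonstrates the attack: we sample $a_1$ with coefficient sum $0 \bmod q$ (so $a_1(1) = 0 \bmod q$, i.e., $a_1$ is divisible by $x - 1$) and check that $a_1 \tilde{e} = 0$ in $\mathbb{Z}_q[x]/(x^n - 1)$, so $(\tilde{e}, 0, \dots, 0)$ solves the instance regardless of the other $a_i$.

```python
import random

n, q = 8, 97

def ring_mul(a, b):
    """Multiply two elements of Z_q[x]/(x^n - 1), given as coefficient lists."""
    c = [0] * n
    for i in range(n):
        for j in range(n):
            c[(i + j) % n] = (c[(i + j) % n] + a[i] * b[j]) % q
    return c

# sample a1 so that a1(1) = sum of coefficients = 0 mod q,
# i.e. a1 is divisible by (x - 1); this happens with probability 1/q for random a1
a1 = [random.randrange(q) for _ in range(n - 1)]
a1.append((-sum(a1)) % q)

e_tilde = [1] * n            # 1 + x + ... + x^(n-1), coefficients in {-1, 0, 1}

print(ring_mul(a1, e_tilde)) # [0, 0, ..., 0]: (e_tilde, 0, ..., 0) is a Ring-SIS solution
```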