# Univariate Coppersmith algorithm

## Problem to be solved

We aim to find a small root of a polynomial modulo some integer $N$.

More precisely, consider a monic polynomial $p(x)$ of degree $k$. If there is an $x_0$ such that

$$|x_0| < \frac{1}{2}N^{\frac{1}{k} - \epsilon}$$

and

$$p(x_0) = 0 \bmod N$$

then the Coppersmith algorithm can find $x_0$ in time polynomial in $\log N$, $k$, and $\frac{1}{\epsilon}$.

By setting $\epsilon = \frac{1}{\log N}$ and exhaustively searching the few unknown high bits of $x_0$, we can extend the range to $x_0 < N^{\frac{1}{k}}$.
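To get a feel for the numbers, here is a small illustrative computation (the modulus, the degree, and the use of the natural logarithm are assumptions made for this sketch, not values fixed by the text):

```python
import math

# Toy parameters, chosen only for illustration.
N = 2**512 + 3075   # placeholder modulus
k = 3               # degree of p(x)

eps = 1 / math.log(N)                 # epsilon = 1 / log N
X = 0.5 * N ** (1 / k - eps)          # root bound guaranteed by the theorem
full = N ** (1 / k)                   # bound we actually want to reach

# Number of high bits of x_0 left to exhaustive search: log2(full / X) = log2(2e) ~ 2.4,
# i.e. only a handful of candidates, independent of the size of N.
print(math.log2(full / X))
```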

## Coppersmith algorithm

Let’s define, for each $0 \leq i < k$ and $1 \leq j < h$ (where $h$ is an integer parameter we will choose later):

$$q_{ij}(x) = x^i p(x)^j$$

Let’s define

$$X = \frac{1}{2}N^{\frac{1}{k} - \epsilon}$$

We have $(h - 1)k = hk - k$ of these polynomials.
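As a small illustration, here are these shift polynomials for the example used below ($h = 3$, $k = 2$), computed with sympy; the symbols are just placeholders:

```python
import sympy as sp

x, a, b = sp.symbols('x a b')
h, k = 3, 2
p = x**2 + a*x + b    # a generic monic polynomial of degree k = 2

# The (h-1)*k shift polynomials q_ij(x) = x^i * p(x)^j, for 0 <= i < k, 1 <= j < h.
q = {(i, j): sp.expand(x**i * p**j) for j in range(1, h) for i in range(k)}
for (i, j), poly in sorted(q.items()):
    print(f"q_{i}{j}(x) = {poly}")
```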

We build a matrix of size $(2hk - k) \times (2hk - k)$. In the top left $hk \times hk$ submatrix, we have the following elements on the diagonal:

$$\frac{1}{\sqrt{hk}}, \frac{1}{\sqrt{hk}} X^{-1}, ..., \frac{1}{\sqrt{hk}} X^{-(hk-1)}$$

Let’s consider the example where $h = 3$, $k = 2$, and $p(x) = x^2 + ax + b$.

We have $p(x)^2 = x^4 + cx^3 + dx^2 + ex + f$. Let’s denote the following matrix by $M$:

$$\begin{pmatrix} \frac{1}{\sqrt{hk}} & & & & & & b & 0 & f & 0 \\ & \frac{1}{\sqrt{hk}} X^{-1} & & & & & a & b & e & f \\ & & \frac{1}{\sqrt{hk}} X^{-2} & & & & 1 & a & d & e \\ & & & \frac{1}{\sqrt{hk}} X^{-3} & & & 0 & 1 & c & d \\ & & & & \frac{1}{\sqrt{hk}} X^{-4} & & 0 & 0 & 1 & c \\ & & & & & \frac{1}{\sqrt{hk}} X^{-5} & 0 & 0 & 0 & 1 \\ & & & & & & N & & & \\ & & & & & & & N & & \\ & & & & & & & & N^2 & \\ & & & & & & & & & N^2 \end{pmatrix}$$

The upper left submatrix is diagonal and has dimensions $hk \times hk$. The upper right submatrix has dimensions $hk \times (hk - k)$; its columns hold the coefficients of the polynomials $q_{ij}$. Note that the number of rows is $hk$ because the biggest degree of the polynomials is $(k-1) + (h-1)k = k - 1 + hk - k = hk - 1$.

The lower left submatrix is all $0$ and has dimensions $(hk - k) \times hk$. The bottom right submatrix is diagonal, with an entry $N^j$ for each $q_{ij}$, and has dimensions $(hk - k) \times (hk - k)$.
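Here is a minimal sketch of how such a matrix can be assembled for the toy parameters above; the concrete values of $N$, $a$, $b$, and $X$ are placeholders, and the $\frac{1}{\sqrt{hk}}$ scaling is done in floating point:

```python
import math

# Toy example: h = 3, k = 2, p(x) = x^2 + a x + b (monic); all numbers are placeholders.
h, k = 3, 2
hk, dim = h * k, 2 * h * k - k         # lattice dimension 2hk - k = 10
N, a, b = 10007, 1, 9995
X = 4                                   # stand-in for the root bound

def poly_mul(p, q):
    """Multiply two coefficient lists (lowest degree first)."""
    r = [0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[i + j] += pi * qj
    return r

def build_M():
    p = [b, a, 1]                       # coefficients of p(x)
    M = [[0.0] * dim for _ in range(dim)]
    for i in range(hk):                 # top-left diagonal block: (1/sqrt(hk)) * X^(-i)
        M[i][i] = 1 / (math.sqrt(hk) * X**i)
    col, pj = hk, [1]
    for j in range(1, h):
        pj = poly_mul(pj, p)            # p(x)^j
        for i in range(k):
            for row, cf in enumerate([0] * i + pj):   # coefficients of q_ij = x^i * p(x)^j
                M[row][col] = cf
            M[col][col] = N**j          # bottom-right diagonal entry for q_ij
            col += 1
    return M

M = build_M()
print([M[r][hk] for r in range(dim)])   # first column of the right block: b, a, 1, 0, 0, 0, N, 0, 0, 0
```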

Note that

$$det(M) = N^{\frac{(h-1)hk}{2}} X^{\frac{-hk(hk-1)}{2}} \left(\frac{1}{\sqrt{hk}}\right)^{hk}$$
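Since $M$ is upper triangular, its determinant is just the product of the diagonal entries, so the closed form above can be checked symbolically, here for the example parameters $h = 3$, $k = 2$:

```python
import sympy as sp

h, k = 3, 2
hk = h * k
N, X = sp.symbols('N X', positive=True)

# Diagonal of M: (1/sqrt(hk)) * X^(-i) for i = 0..hk-1, then N^j repeated k times for j = 1..h-1.
diag = [X**(-i) / sp.sqrt(hk) for i in range(hk)]
diag += [N**j for j in range(1, h) for _ in range(k)]
det_product = sp.Mul(*diag)

# Closed form from the text.
det_formula = (N**sp.Rational((h - 1) * h * k, 2)
               * X**sp.Rational(-hk * (hk - 1), 2)
               * (1 / sp.sqrt(hk))**hk)

print(sp.simplify(det_product - det_formula))   # 0
```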

We have $X = \frac{1}{2} N^{\frac{1}{k} - \epsilon}$ and we choose $h$ such that:

$$h - 1 \geq (hk - 1)\left(\frac{1}{k} - \epsilon\right)$$

$$hk \geq 7$$

Then, it can be shown that $hk < 2^{\frac{hk-1}{2}}$. Since $det(M) = \left(N^{h-1} X^{-(hk-1)} (hk)^{-1}\right)^{\frac{hk}{2}}$, this gives:

$$det(M) > \left(N^{h-1} X^{-(hk-1)} 2^{\frac{-(hk-1)}{2}}\right)^{\frac{hk}{2}}$$

From our choice of $X$:

$$det(M) > \left(N^{(h-1)-(hk-1)(\frac{1}{k} - \epsilon)} 2^{\frac{hk-1}{2}}\right)^{\frac{hk}{2}}$$

The condition $h - 1 \geq (hk - 1)(\frac{1}{k} - \epsilon)$ gives us:

$$det(M) > 2^{\frac{hk(hk-1)}{4}}$$

Let $x_0$ with $|x_0| < X$ be a modular root of $p(x)$. Then:

$$p(x_0) = y_0 N$$

for some integer $y_0$. Let’s define a vector

$$v = (1, x_0, x_0^2, x_0^3, x_0^4, x_0^5, -y_0, -y_0 x_0, -y_0^2, -y_0^2 x_0)$$

Let

$$s = vM$$

We can see that the last $hk - k$ coordinates of $s$ are $0$:

$$s = \left(\frac{1}{\sqrt{hk}}, \frac{x_0}{X \sqrt{hk}}, \frac{x_0^2}{X^2 \sqrt{hk}}, ..., \frac{x_0^5}{X^5 \sqrt{hk}}, 0, 0, 0, 0\right)$$

We can also see:

$$||s|| = \left(\frac{1}{hk} + \frac{x_0^2}{X^2 hk} + ... + \frac{x_0^{10}}{X^{10} hk}\right)^{\frac{1}{2}}$$

and, since $|x_0| < X$:

$$||s|| < \left(\frac{1}{hk} + \frac{1}{hk} + ... + \frac{1}{hk}\right)^{\frac{1}{2}} = 1$$
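These two claims can be checked numerically with the toy matrix sketch from above (here $p(x) = x^2 + x + 9995$ and $N = 10007$, so $x_0 = 3$ is a root $\bmod N$ with $y_0 = 1$, and $X = 4 > |x_0|$; all concrete values are assumptions for illustration):

```python
import math

# Same toy setup as in the matrix sketch: p(x) = x^2 + x + 9995, N = 10007,
# p(3) = 10007 = N, so x_0 = 3 and y_0 = 1; X = 4 > |x_0|.
h, k = 3, 2
hk, dim = h * k, 2 * h * k - k
N, a, b = 10007, 1, 9995
X, x0, y0 = 4, 3, 1

def poly_mul(p, q):
    r = [0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[i + j] += pi * qj
    return r

# Rebuild M exactly as before.
M = [[0.0] * dim for _ in range(dim)]
for i in range(hk):
    M[i][i] = 1 / (math.sqrt(hk) * X**i)
col, pj = hk, [1]
for j in range(1, h):
    pj = poly_mul(pj, [b, a, 1])
    for i in range(k):
        for row, cf in enumerate([0] * i + pj):
            M[row][col] = cf
        M[col][col] = N**j
        col += 1

# v = (1, x0, ..., x0^5, -y0, -y0*x0, -y0^2, -y0^2*x0) and s = v M.
v = [x0**i for i in range(hk)] + [-y0, -y0 * x0, -y0**2, -y0**2 * x0]
s = [sum(v[i] * M[i][j] for i in range(dim)) for j in range(dim)]

print(s[hk:])                                  # last hk - k coordinates: all zero
print(math.sqrt(sum(t * t for t in s)) < 1)    # ||s|| < 1, since |x0| < X
```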

Let’s observe the right $(2hk - k) \times (hk - k)$ submatrix. There is a $1$ in each of its columns (coming from the leading coefficient of the corresponding monic $q_{ij}$), so by performing elementary row operations (including row exchanges) we can transform this submatrix into:

$$\begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$$

Let’s go through some observations made by Charanjit S. Jutla. $M$ can be written as follows:

$$M = \begin{pmatrix} A_1 & B_1 \\ A_2 & B_2 \\ 0 & C \end{pmatrix}$$

where $B_2$ is an upper triangular matrix of size $(hk - k) \times (hk - k)$ with all diagonal entries equal to $1$ (in our example, the first $k$ rows form $(A_1 \; B_1)$, the next $hk - k$ rows form $(A_2 \; B_2)$, and the last $hk - k$ rows form $(0 \; C)$). Using elementary row operations we can transform $M$ to $M'$:

$$M' = \begin{pmatrix} A_1' & 0 \\ A_2' & I \\ A_3' & 0 \end{pmatrix}$$

where $I$ is the identity matrix of size $(hk - k) \times (hk - k)$. The row operations we use are subtractions of (multiples of) the rows of

$$\begin{pmatrix} A_2 & B_2 \end{pmatrix}$$

First, we use this type of subtraction to get

$$\begin{pmatrix} A_2' & I \end{pmatrix}$$

We take the first row and subtract the second row multiplied by $a$, and continue this process in a similar manner. We observe that $A_2'$ is also upper triangular. Let’s denote

$$\hat M = \begin{pmatrix} A_1' \\ A_3' \end{pmatrix}$$

$\hat M$ is upper triangular too. By using row exchanges, we can obtain:

$$\tilde{M} = \begin{pmatrix} A_1' & 0 \\ A_3' & 0 \\ A_2' & I \end{pmatrix}$$

It holds:

$$det(\hat{M}) = det(\tilde{M}) = det(M)$$
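Indeed, the subtraction operations do not change the determinant, the row exchanges change at most its sign (here they swap two blocks of $hk - k$ rows each, which in our example is an even permutation), and grouping the rows of $\tilde{M}$ as in the identity used further below gives a block-triangular matrix:

$$det(\tilde{M}) = det \begin{pmatrix} \hat M & 0 \\ A_2' & I \end{pmatrix} = det(\hat M) \cdot det(I) = det(\hat M)$$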

So, there exists a matrix $H_1$ of dimension $(2hk - k) \times (2hk - k)$, with integer entries and determinant $\pm 1$ (it is the inverse of the product of the elementary operations), such that:

$$H_1 \tilde{M} = M$$

We then apply the LLL algorithm to $\hat M$ (viewed as a lattice basis, one basis vector per row). We get a matrix $\hat L$, whose rows form the reduced basis, together with a unimodular integer matrix $\hat U$ such that:

$$\hat U \hat L = \hat M$$

It holds:

$$\tilde{M} = \begin{pmatrix} A_1' & 0 \\ A_3' & 0 \\ A_2' & I \end{pmatrix}$$

$$= \begin{pmatrix} \hat M & 0 \\ A_2' & I \end{pmatrix}$$

$$= \begin{pmatrix} \hat U \hat L & 0 \\ A_2' & I \end{pmatrix}$$

$$= \begin{pmatrix} \hat U & 0 \\ 0 & I \end{pmatrix} \begin{pmatrix} \hat L & 0 \\ A_2' & I \end{pmatrix}$$

So we have:

$$H_1 \begin{pmatrix} \hat U & 0 \\ 0 & I \end{pmatrix} \begin{pmatrix} \hat L & 0 \\ A_2' & I \end{pmatrix} = M$$

$$v H_1 \begin{pmatrix} \hat U & 0 \\ 0 & I \end{pmatrix} \begin{pmatrix} \hat L & 0 \\ A_2' & I \end{pmatrix} = v M = s$$

We have a vector

$$c = (c_1, ..., c_{2hk - k}) = v H_1 \begin{pmatrix} \hat U & 0 \\ 0 & I \end{pmatrix}$$

We know that the vector $s$ has its last $hk - k$ elements all equal to $0$. In the product below, those last $hk - k$ coordinates of $s$ are exactly $c_{hk+1}, ..., c_{2hk-k}$, so these must all be $0$. Therefore, instead of

$$s = c \begin{pmatrix} \hat L & 0 \\ A_2' & I \end{pmatrix}$$

we can write

$$s = (c_1, ..., c_{hk}) \begin{pmatrix} \hat L & 0 \end{pmatrix}$$

Let’s denote the rows of $\hat L$ by $b_i$:

$$s = c_1 b_1 + ... + c_{hk} b_{hk}$$

Let’s recall how the Gram-Schmidt basis is constructed:

$$b_i^* = b_i - \sum_{j=1}^{i-1} \mu_{ij} b_j^*$$

So:

$$b_i = b_i^* + \sum_{j=1}^{i-1} \mu_{ij} b_j^*$$
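For reference, here is a minimal Gram-Schmidt orthogonalization in plain Python (no external dependencies; an illustrative sketch, not an optimized implementation):

```python
def gram_schmidt(basis):
    """Return (b_star, mu) with b_i^* = b_i - sum_{j<i} mu[i][j] * b_j^*."""
    def dot(u, v):
        return sum(x * y for x, y in zip(u, v))

    b_star, mu = [], []
    for i, b in enumerate(basis):
        mu.append([dot(b, b_star[j]) / dot(b_star[j], b_star[j]) for j in range(i)])
        star = [x - sum(mu[i][j] * b_star[j][t] for j in range(i)) for t, x in enumerate(b)]
        b_star.append(star)
    return b_star, mu

# Tiny example: the resulting b_i^* are pairwise orthogonal.
b_star, mu = gram_schmidt([[3.0, 1.0], [2.0, 2.0]])
print(b_star)   # approximately [[3.0, 1.0], [-0.4, 1.2]]
```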

And we have

$$s = \sum_{i=1}^{hk} c_i \left(b_i^* + \sum_{j=1}^{i-1} \mu_{ij} b_j^*\right) = c_{hk} b_{hk}^* + f_1 b_1^* + ... + f_{hk-1} b_{hk-1}^*$$

for some $f_1, ..., f_{hk-1}$.

That means

$$||s|| \geq |c_{hk}| \cdot ||b_{hk}^*||$$

because all the $b_i^*$ with $i < hk$ are orthogonal to $b_{hk}^*$, so $||s||^2 = c_{hk}^2 ||b_{hk}^*||^2 + \sum_{i=1}^{hk-1} f_i^2 ||b_i^*||^2 \geq c_{hk}^2 ||b_{hk}^*||^2$.

For an LLL-reduced basis it holds:

$$||b_{hk}^*|| \geq det(M)^{\frac{1}{hk}} \, 2^{\frac{-(hk-1)}{4}}$$
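This is a standard property of an LLL-reduced basis; a quick sketch of why it holds (with the usual reduction parameter $\delta = \frac{3}{4}$, which gives $||b_i^*||^2 \leq 2 \, ||b_{i+1}^*||^2$):

$$det(M) = \prod_{i=1}^{hk} ||b_i^*|| \leq \prod_{i=1}^{hk} 2^{\frac{hk - i}{2}} ||b_{hk}^*|| = 2^{\frac{hk(hk-1)}{4}} ||b_{hk}^*||^{hk}$$

Taking $hk$-th roots gives the bound above.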

And from above we know:

$$det(M) > 2^{\frac{hk(hk-1)}{4}}$$

Thus:

$$||b_{hk}^*|| \geq 1$$

But we saw above that:

$$||s|| < 1$$

Since $c_{hk}$ is an integer ($v$, $H_1$, and $\hat U$ all have integer entries), that means:

$$c_{hk} = 0$$

But $c_{hk}$ is a polynomial expression in $x_0$ with known coefficients (coming from $H_1$ and $\hat U$; the entries of $v$ are powers of $x_0$ and of $y_0 = p(x_0)/N$). Thus, we have obtained another polynomial that has a root at $x_0$.

This is a polynomial equation in $x_0$ which holds over the integers, not just $\bmod N$, so we can solve it for $x_0$ in polynomial time using ordinary (exact) root finding.
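As a final illustration of this last step (the polynomial below is made up, standing in for the integer polynomial obtained from $c_{hk} = 0$), recovering an integer root of a polynomial over $\mathbb{Z}$ is routine, for example with sympy:

```python
import sympy as sp

x = sp.symbols('x')

# Hypothetical stand-in for the polynomial derived from c_hk = 0;
# it was constructed to have the integer root x_0 = 12.
poly = 2*x**3 - 21*x**2 - 31*x - 60

# Exact root finding over the rationals, no modulus involved.
integer_roots = [r for r in sp.roots(poly, x) if r.is_integer]
print(integer_roots)   # [12]
```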