Univariate Coppersmith algorithm

Problem to be solved

We aim to find a root of a polynomial modulo some integer $N$.
More precisely, consider a polynomial $p(x)$ of degree $k$. If there is $x_0$ such that

$$|x_0| < \frac{1}{2}N^{\frac{1}{k} - \epsilon}$$

and

$$p(x_0) = 0 \bmod N,$$
then the Coppersmith algorithm can find $x_0$ in time polynomial in $\log N$, $k$, and $\frac{1}{\epsilon}$. By setting $\epsilon = \frac{1}{\log N}$ and exhaustively searching the few unknown high bits of $x_0$, we can extend the range to $|x_0| < N^{\frac{1}{k}}$.
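To make the statement concrete, here is a tiny toy instance (the modulus, coefficients, and root below are made up for illustration; a real attack would use a cryptographic-size $N$):

```python
# Toy instance: p(x) = x^2 + 4984x + 4992 was constructed (hypothetically)
# so that it has the small root x0 = 17 modulo N = 10001.
N = 10001
k = 2

def p(x):
    return x * x + 4984 * x + 4992

x0 = 17
assert p(x0) % N == 0          # p(x0) = 0 (mod N)
assert abs(x0) < N ** (1 / k)  # |x0| < N^(1/k): small enough for Coppersmith
```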
Coppersmith algorithm

Let's define, for each $0 \leq i < k$, $1 \leq j < h$:

$$q_{ij}(x) = x^i p(x)^j$$

Let's also define

$$X = \frac{1}{2}N^{\frac{1}{k} - \epsilon}$$

We have $(h - 1) k = hk - k$ of these polynomials.
We build a matrix of size $(2hk - k) \times (2hk - k)$. In the top-left $hk \times hk$ submatrix, the diagonal entries are:

$$\frac{1}{\sqrt{hk}},\; \frac{1}{\sqrt{hk}} X^{-1},\; \ldots,\; \frac{1}{\sqrt{hk}} X^{-(hk-1)}$$
Let's observe the example where $h = 3$, $k = 2$, and $p(x) = x^2 + ax + b$. We have $p(x)^2 = x^4 + cx^3 + dx^2 + ex + f$. Let's denote the following matrix by $M$:

$$\begin{pmatrix} \frac{1}{\sqrt{hk}} & & & & & & b & 0 & f & 0 \\ & \frac{1}{\sqrt{hk}} X^{-1} & & & & & a & b & e & f \\ & & \frac{1}{\sqrt{hk}} X^{-2} & & & & 1 & a & d & e \\ & & & \frac{1}{\sqrt{hk}} X^{-3} & & & 0 & 1 & c & d \\ & & & & \frac{1}{\sqrt{hk}} X^{-4} & & 0 & 0 & 1 & c \\ & & & & & \frac{1}{\sqrt{hk}} X^{-5} & 0 & 0 & 0 & 1 \\ & & & & & & N & & & \\ & & & & & & & N & & \\ & & & & & & & & N^2 & \\ & & & & & & & & & N^2 \end{pmatrix}$$
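The four right-hand columns of $M$ are just the coefficient vectors of $q_{01} = p$, $q_{11} = xp$, $q_{02} = p^2$, $q_{12} = xp^2$. A quick sanity check, with hypothetical small values substituted for $a$ and $b$:

```python
# Verify the right-hand columns of the example M against the coefficients of
# q_01 = p, q_11 = xp, q_02 = p^2, q_12 = xp^2 (a, b are arbitrary here).
a, b = 5, 7
c, d, e, f = 2 * a, a * a + 2 * b, 2 * a * b, b * b  # p(x)^2 = x^4+cx^3+dx^2+ex+f

def polymul(g, h):
    # multiply two coefficient lists, lowest degree first
    out = [0] * (len(g) + len(h) - 1)
    for i, gi in enumerate(g):
        for j, hj in enumerate(h):
            out[i + j] += gi * hj
    return out

p = [b, a, 1]            # p(x) = x^2 + ax + b, lowest degree first
p2 = polymul(p, p)       # p(x)^2
pad = lambda coeffs: coeffs + [0] * (6 - len(coeffs))  # hk = 6 rows

assert pad(p)        == [b, a, 1, 0, 0, 0]   # column of q_01
assert pad([0] + p)  == [0, b, a, 1, 0, 0]   # column of q_11 (shift = times x)
assert pad(p2)       == [f, e, d, c, 1, 0]   # column of q_02
assert pad([0] + p2) == [0, f, e, d, c, 1]   # column of q_12
```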
The upper-left submatrix is diagonal and has dimensions $hk \times hk$. The upper-right submatrix has dimensions $hk \times (hk - k)$. Note that the number of rows is $hk$ because the biggest degree of the polynomials is $(k-1) + (h-1)k = k - 1 + hk - k = hk - 1$. The lower-left submatrix is all zeros and has dimensions $(hk - k) \times hk$. The bottom-right submatrix is diagonal and has dimensions $(hk - k) \times (hk - k)$.
Note that

$$det(M) = N^{\frac{(h-1)hk}{2}} X^{\frac{-hk(hk-1)}{2}} \left(\frac{1}{\sqrt{hk}}\right)^{hk}$$
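Since $M$ is block upper triangular with diagonal blocks, $det(M)$ is simply the product of its diagonal entries. A quick numerical check of the formula (toy values, with $\epsilon$ chosen arbitrarily):

```python
import math

# det(M) is the product of the diagonal entries (M is block upper triangular).
# Compare it numerically against the closed-form formula. Toy values only.
N, h, k = 10001, 3, 2
hk = h * k
eps = 0.05                          # arbitrary small epsilon
X = 0.5 * N ** (1 / k - eps)

diagonal = [X ** (-i) / math.sqrt(hk) for i in range(hk)] \
         + [N ** j for j in range(1, h) for _ in range(k)]  # N, N, N^2, N^2
det = math.prod(diagonal)

formula = N ** ((h - 1) * h * k / 2) \
        * X ** (-hk * (hk - 1) / 2) \
        * (1 / math.sqrt(hk)) ** hk
assert abs(det / formula - 1) < 1e-9
```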
We have $X = \frac{1}{2} N^{\frac{1}{k} - \epsilon}$ and we choose $h$ such that:

$$h - 1 \geq (hk - 1)\left(\frac{1}{k} - \epsilon\right)$$

$$hk \geq 7$$

Then it can be shown that $hk < 2^{\frac{hk-1}{2}}$.
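This inequality is easy to verify numerically, and the check is consistent with the requirement $hk \geq 7$, since it fails at $hk = 6$:

```python
# hk < 2^((hk-1)/2) holds for every hk >= 7, but not for hk = 6.
assert all(m < 2 ** ((m - 1) / 2) for m in range(7, 1000))
assert not (6 < 2 ** ((6 - 1) / 2))   # 2^2.5 is about 5.66, which is < 6
```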
Thus:

$$det(M) > \left(N^{h-1} X^{-(hk-1)} 2^{\frac{-(hk-1)}{2}}\right)^{\frac{hk}{2}}$$
From our choice of $X$:

$$det(M) > \left(N^{(h-1)-(hk-1)(\frac{1}{k} - \epsilon)} 2^{\frac{(hk-1)}{2}}\right)^{\frac{hk}{2}}$$
The condition $h - 1 \geq (hk - 1)(\frac{1}{k} - \epsilon)$ gives us:

$$det(M) > 2^{\frac{hk(hk-1)}{4}}$$
Let $|x_0| < X$ be a modular root of $p(x)$. Then:

$$p(x_0) = y_0 N$$

for some integer $y_0$, and consequently $p(x_0)^j = y_0^j N^j$. Let's define a vector

$$v = (1, x_0, x_0^2, x_0^3, x_0^4, x_0^5, -y_0, -y_0 x_0, -y_0^2, -y_0^2 x_0)$$
Let

$$s = vM$$

We can see that the last $hk - k$ coordinates of $s$ are $0$:

$$s = \left(\frac{1}{\sqrt{hk}}, \frac{x_0}{X \sqrt{hk}}, \frac{x_0^2}{X^2 \sqrt{hk}}, \ldots, \frac{x_0^5}{X^5 \sqrt{hk}}, 0, 0, 0, 0\right)$$
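The cancellation in the last $hk - k$ coordinates only involves the integer part of $M$, so we can check it exactly with the hypothetical toy numbers from before:

```python
# Check that v.M has zeros in its last hk - k = 4 coordinates, using only
# the integer columns of M (the scaled diagonal block never touches them).
# Toy values: p(x) = x^2 + ax + b with root x0 = 17 modulo N.
N, a, b = 10001, 4984, 4992
x0 = 17
y0 = (x0 * x0 + a * x0 + b) // N                     # p(x0) = y0 * N
c, d, e, f = 2 * a, a * a + 2 * b, 2 * a * b, b * b  # p(x)^2 coefficients

v = [1, x0, x0**2, x0**3, x0**4, x0**5,
     -y0, -y0 * x0, -y0**2, -y0**2 * x0]

# The four right-hand columns of M (q_ij coefficients on top, N-powers below):
columns = [[b, a, 1, 0, 0, 0, N, 0, 0, 0],
           [0, b, a, 1, 0, 0, 0, N, 0, 0],
           [f, e, d, c, 1, 0, 0, 0, N * N, 0],
           [0, f, e, d, c, 1, 0, 0, 0, N * N]]
for col in columns:
    assert sum(vi * ci for vi, ci in zip(v, col)) == 0
```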
We can also see:

$$||s|| = \left(\frac{1}{hk} + \frac{x_0^2}{X^2 hk} + \ldots + \frac{x_0^{10}}{X^{10} hk}\right)^{\frac{1}{2}}$$

and, since $|x_0| < X$, each term is at most $\frac{1}{hk}$ (and the terms involving $x_0$ are strictly smaller), so:

$$||s|| < \left(\frac{1}{hk} + \frac{1}{hk} + \ldots + \frac{1}{hk}\right)^{\frac{1}{2}} = 1$$
Let's observe the right $(2hk - k) \times (hk - k)$ submatrix. There is a $1$ in each column, so by performing elementary row operations we can transform this submatrix into:

$$\begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$$
Let's check some observations made by Charanjit S. Jutla. $M$ can be written as:

$$M = \begin{pmatrix} A_1 & B_1 \\ A_2 & B_2 \\ 0 & C \end{pmatrix}$$
where $B_2$ is an upper triangular matrix of size $(hk - k) \times (hk - k)$ with all diagonal entries equal to $1$. Using elementary row operations we can transform $M$ into $M'$:

$$M' = \begin{pmatrix} A_1' & 0 \\ A_2' & I \\ A_3' & 0 \end{pmatrix}$$
where $I$ is an identity matrix of size $(hk - k) \times (hk - k)$. The row operations we use are subtractions of the rows of

$$\begin{pmatrix} A_2 & B_2 \end{pmatrix}$$

First, we use subtractions of this type to get

$$\begin{pmatrix} A_2' & I \end{pmatrix}$$

We take the first row and subtract the second row multiplied by $a$, and continue this process in a similar manner.
We observe that $A_2'$ is also upper triangular. Let's denote

$$\hat M = \begin{pmatrix} A_1' \\ A_3' \end{pmatrix}$$

$\hat M$ is upper triangular too.
By using row exchanges, we can obtain:

$$\tilde{M} = \begin{pmatrix} A_1' & 0 \\ A_3' & 0 \\ A_2' & I \end{pmatrix}$$

It holds that:

$$det(\hat{M}) = det(\tilde{M}) = det(M)$$
So there exists a matrix $H_1$ of dimension $(2hk - k) \times (2hk - k)$ such that:

$$H_1 \tilde{M} = M$$
We then apply the LLL algorithm to $\hat M$. We get a reduced basis matrix $\hat L$ and a unimodular matrix $\hat U$ such that:

$$\hat U \hat L = \hat M$$
It holds that:

$$\tilde{M} = \begin{pmatrix} A_1' & 0 \\ A_3' & 0 \\ A_2' & I \end{pmatrix} = \begin{pmatrix} \hat M & 0 \\ A_2' & I \end{pmatrix} = \begin{pmatrix} \hat U \hat L & 0 \\ A_2' & I \end{pmatrix} = \begin{pmatrix} \hat U & 0 \\ 0 & I \end{pmatrix} \begin{pmatrix} \hat L & 0 \\ A_2' & I \end{pmatrix}$$
So we have:

$$H_1 \begin{pmatrix} \hat U & 0 \\ 0 & I \end{pmatrix} \begin{pmatrix} \hat L & 0 \\ A_2' & I \end{pmatrix} = M$$

$$v H_1 \begin{pmatrix} \hat U & 0 \\ 0 & I \end{pmatrix} \begin{pmatrix} \hat L & 0 \\ A_2' & I \end{pmatrix} = v M = s$$
We have a vector

$$c = (c_1, \ldots, c_{2hk - k}) = v H_1 \begin{pmatrix} \hat U & 0 \\ 0 & I \end{pmatrix}$$
We know that the vector $s$ has its last $hk - k$ elements equal to $0$. Since the last $hk - k$ coordinates of the product below are exactly $(c_{hk+1}, \ldots, c_{2hk-k})$, those entries of $c$ are $0$. Therefore, instead of

$$s = c \begin{pmatrix} \hat L & 0 \\ A_2' & I \end{pmatrix}$$

we can write

$$s = (c_1, \ldots, c_{hk}) \begin{pmatrix} \hat L & 0 \end{pmatrix}$$
Let's denote the rows of $\hat L$ by $b_i$:

$$s = c_1 b_1 + \ldots + c_{hk} b_{hk}$$
Let's recall how the Gram-Schmidt basis is constructed:

$$b_i^* = b_i - \sum_{j=1}^{i-1} \mu_{ij} b_j^*$$

So:

$$b_i = b_i^* + \sum_{j=1}^{i-1} \mu_{ij} b_j^*$$

And we have:

$$s = \sum_{i=1}^{hk} c_i \left(b_i^* + \sum_{j=1}^{i-1} \mu_{ij} b_j^*\right) = c_{hk} b_{hk}^* + f_1 b_1^* + \ldots + f_{hk-1} b_{hk-1}^*$$

for some $f_1, \ldots, f_{hk-1}$.
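The decomposition above relies on the Gram-Schmidt orthogonalization. A minimal exact implementation of it (a sketch, using `fractions` to avoid rounding):

```python
from fractions import Fraction

# Minimal Gram-Schmidt orthogonalization: b_i^* = b_i - sum_{j<i} mu_ij b_j^*.
def gram_schmidt(basis):
    ortho = []
    for b in basis:
        b = [Fraction(x) for x in b]
        for bstar in ortho:
            mu = sum(x * y for x, y in zip(b, bstar)) / sum(y * y for y in bstar)
            b = [x - mu * y for x, y in zip(b, bstar)]
        ortho.append(b)
    return ortho

ortho = gram_schmidt([[3, 1], [2, 2]])
# b_2^* is orthogonal to b_1^*, which is what the norm argument above uses
assert sum(x * y for x, y in zip(ortho[0], ortho[1])) == 0
```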
That means

$$||s|| \geq |c_{hk}| \cdot ||b_{hk}^*||$$

because all $b_i^*$ with $i < hk$ are orthogonal to $b_{hk}^*$. For an LLL-reduced basis it holds that:

$$||b_{hk}^*|| \geq det(M)^{\frac{1}{hk}} \, 2^{\frac{-(hk-1)}{4}}$$
And from above we know:

$$det(M) > 2^{\frac{hk(hk-1)}{4}}$$

Thus:

$$||b_{hk}^*|| > 1$$
But we saw above that:

$$||s|| < 1$$

Since $c_{hk}$ is an integer (as $v$, $H_1$, and $\hat U$ all have integer entries), that means:

$$c_{hk} = 0$$
But $c_{hk}$ is a polynomial in $x_0$. Thus, we have obtained another polynomial that has $x_0$ as a root, and this time the equation holds over $\mathbb{Z}$, not just $\bmod\; N$. We can solve it for $x_0$ in polynomial time.
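Recovering an integer root of a polynomial over $\mathbb{Z}$ is the easy final step. A brute-force sketch via the rational root theorem on a made-up toy polynomial (real implementations use faster root-finding, but the idea is the same):

```python
def eval_poly(coeffs, x):
    # coefficients lowest degree first
    return sum(ci * x**i for i, ci in enumerate(coeffs))

def integer_roots(coeffs):
    # An integer root of a polynomial with integer coefficients must divide
    # the (nonzero) constant term -- the rational root theorem.
    c0 = coeffs[0]
    assert c0 != 0
    roots = []
    for d in range(1, abs(c0) + 1):
        if c0 % d == 0:
            for r in (d, -d):
                if eval_poly(coeffs, r) == 0:
                    roots.append(r)
    return roots

# (x - 17)(x + 3) = x^2 - 14x - 51: the small root 17 is recovered
assert sorted(integer_roots([-51, -14, 1])) == [-3, 17]
```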