Bivariate Coppersmith algorithm

The bivariate Coppersmith algorithm is a technique to solve polynomial equations of the form $p(x, y) = 0$ over the integers $\mathbb{Z}$, given that the solutions are within specific bounds: $|x| < X$ and $|y| < Y$.

Suppose $p(x, y) = \sum_{i,j} p_{ij} x^i y^j$ has degree $\delta$ in each variable. Define $D = \max_{i,j} |p_{ij}| X^i Y^j$. The algorithm finds a solution $(x, y)$ (if it exists) provided that:

$$XY < D^{\frac{2}{3\delta}}$$
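To make the bound concrete, here is a small numeric check on a toy instance (the polynomial, root, and bounds below are invented for illustration):

```python
# Toy instance: p(x, y) = -6 + y + 2x + 3xy has the root (1, 1),
# degree delta = 1 in each variable; pick bounds X = Y = 2.
coeffs = {(0, 0): -6, (0, 1): 1, (1, 0): 2, (1, 1): 3}  # (x-deg, y-deg) -> p_ij
X, Y, delta = 2, 2, 1

# D = max_{i,j} |p_ij| X^i Y^j
D = max(abs(c) * X**i * Y**j for (i, j), c in coeffs.items())

# Coppersmith's condition XY < D^(2/(3*delta)) holds: 4 < 12^(2/3) ~ 5.24
print(D, X * Y, D ** (2 / (3 * delta)))
```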

The definitions of $X, Y, D$ appear circular, but this is not a problem: raising the condition to the power $\frac{3\delta}{2}$ gives $(XY)^{\frac{3\delta}{2}} < D$, so there is some coefficient $p_{ij}$ such that:

$$X^{\frac{3\delta}{2}} Y^{\frac{3\delta}{2}} < |p_{ij}| X^i Y^j$$

$$X^{\frac{3\delta}{2} - i} Y^{\frac{3\delta}{2} - j} < |p_{ij}|$$

where $\frac{3\delta}{2} - i$ and $\frac{3\delta}{2} - j$ are strictly positive (since $i, j \leq \delta$, both exponents are at least $\frac{\delta}{2}$).

We select an integer $k > \frac{1}{4\epsilon}$, where $\epsilon > 0$ is a small slack parameter. For each pair of integers $(i, j)$ with $0 \leq i < k$ and $0 \leq j < k$, we form the polynomial $q_{ij} = x^i y^j p(x, y)$.

Let’s say we have

$$p(x, y) = a_{00} + a_{10} y + a_{01} x + a_{11} xy$$

Our polynomial has degree $\delta = 1$ in each variable separately.

Let’s take $k = 2$. Our shifted polynomials are then: $p(x, y)$, $y\,p(x, y)$, $x\,p(x, y)$, $xy\,p(x, y)$.
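The shift step is mechanical: multiplying by $x^i y^j$ just moves every coefficient to a higher exponent pair. A small sketch, representing polynomials as dicts from exponent pairs to coefficients (the strings stand in for the symbolic $a_{ij}$):

```python
# Shifted polynomials q_ij = x^i y^j * p(x, y) for 0 <= i, j < k.
# A polynomial is a dict mapping (x-degree, y-degree) to a coefficient.
def shift(p, i, j):
    return {(a + i, b + j): c for (a, b), c in p.items()}

k = 2
p = {(0, 0): 'a00', (0, 1): 'a10', (1, 0): 'a01', (1, 1): 'a11'}
q = {(i, j): shift(p, i, j) for i in range(k) for j in range(k)}
print(q[(1, 1)])  # x*y*p: every exponent pair bumped by (1, 1)
```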

We construct the matrix $M_1$, whose right-hand block holds the coefficient vectors of these polynomials as columns:

$$
M_1 = \begin{pmatrix}
1 & & & & & & & & & a_{00} & 0 & 0 & 0 \\
& Y^{-1} & & & & & & & & a_{10} & a_{00} & 0 & 0 \\
& & Y^{-2} & & & & & & & 0 & a_{10} & 0 & 0 \\
& & & X^{-1} & & & & & & a_{01} & 0 & a_{00} & 0 \\
& & & & X^{-1} Y^{-1} & & & & & a_{11} & a_{01} & a_{10} & a_{00} \\
& & & & & X^{-1} Y^{-2} & & & & 0 & a_{11} & 0 & a_{10} \\
& & & & & & X^{-2} & & & 0 & 0 & a_{01} & 0 \\
& & & & & & & X^{-2} Y^{-1} & & 0 & 0 & a_{11} & a_{01} \\
& & & & & & & & X^{-2} Y^{-2} & 0 & 0 & 0 & a_{11}
\end{pmatrix}
$$

The left-hand block of $M_1$ is of dimension $(\delta + k)^2 \times (\delta + k)^2$. The right-hand block of $M_1$ is of dimension $(\delta + k)^2 \times k^2$. If the coefficients of $p$ share no nontrivial common factor, we can use elementary row operations to transform the right-hand block into an identity matrix at the bottom and zeros at the top (let’s denote the matrix that performs this transformation by $T$):

$$
M_2 = \begin{pmatrix}
& & & & & & & & & 0 & 0 & 0 & 0 \\
& & & & & & & & & 0 & 0 & 0 & 0 \\
& & & & & & & & & 0 & 0 & 0 & 0 \\
& & & & & & & & & 0 & 0 & 0 & 0 \\
& & & & & & & & & 0 & 0 & 0 & 0 \\
& & & & & & & & & 1 & 0 & 0 & 0 \\
& & & & & & & & & 0 & 1 & 0 & 0 \\
& & & & & & & & & 0 & 0 & 1 & 0 \\
& & & & & & & & & 0 & 0 & 0 & 1
\end{pmatrix}
$$

$$M_1 = T M_2$$

Let’s denote the top block of $M_2$ (the rows whose last $k^2$ entries are all zero) by $M_3$.

Let’s define

$$r = (1, y, y^2, x, xy, xy^2, x^2, x^2 y, x^2 y^2)$$

$$s = r M_1 = r T M_2$$

Note that $\det(T) = 1$.

We can see that the entries of $s$ corresponding to the left-hand block of $M_1$ are at most $1$ in absolute value, since each has the form $x^i y^j X^{-i} Y^{-j}$ with $|x| < X$ and $|y| < Y$. The last $k^2$ entries of $s$ are $0$, because they are the values $q_{ij}(x, y) = x^i y^j p(x, y) = 0$ at the root.

Thus, in our case, where the left-hand block of $M_1$ has $(\delta + k)^2$ columns:

$$\|s\|^2 \leq (\delta + k)^2$$

$$\|s\| \leq \delta + k$$
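Both claims about $s$ can be verified numerically on a toy instance (the polynomial, root, and bounds below are invented for illustration): the last $k^2$ entries of $s = r M_1$ vanish at the root, and $\|s\| \leq \delta + k$.

```python
from fractions import Fraction
import math

# Toy instance: p(x, y) = -6 + y + 2x + 3xy has the small root (1, 1);
# delta = 1, X = Y = 2, k = 2.
a = {(0, 0): -6, (0, 1): 1, (1, 0): 2, (1, 1): 3}  # (x-deg, y-deg) -> coefficient
X, Y, delta, k = 2, 2, 1, 2
x0, y0 = 1, 1

monos = [(i, j) for i in range(delta + k) for j in range(delta + k)]
shifts = [(u, v) for u in range(k) for v in range(k)]
n_rows = len(monos)  # (delta + k)^2 = 9

# Build M1: diagonal left block with X^{-i} Y^{-j}, right block holding the
# coefficients of the shifted polynomials q_uv = x^u y^v * p.
M1 = [[Fraction(0)] * (n_rows + len(shifts)) for _ in range(n_rows)]
for t, (i, j) in enumerate(monos):
    M1[t][t] = Fraction(1, X**i * Y**j)
    for c, (u, v) in enumerate(shifts):
        M1[t][n_rows + c] = Fraction(a.get((i - u, j - v), 0))

# r = (1, y, y^2, x, xy, xy^2, x^2, x^2 y, x^2 y^2) evaluated at the root.
r = [Fraction(x0**i * y0**j) for (i, j) in monos]
s = [sum(r[t] * M1[t][c] for t in range(n_rows))
     for c in range(n_rows + len(shifts))]

print(s[n_rows:])                       # last k^2 entries: all zero
norm = math.sqrt(sum(float(v) ** 2 for v in s))
print(norm, delta + k)                  # the norm stays below delta + k
```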

The vector $s$ lies in the lattice spanned by the rows of $M_3$.

We apply the LLL algorithm to $M_3$, followed by Gram-Schmidt orthogonalization on the LLL-reduced matrix. This process yields the matrix $L$. Let’s denote the rows of $L$ by $b_i$.
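In practice the LLL step is done with an existing implementation (e.g. `LLL.reduction` from fpylll, or Sage’s `.LLL()` matrix method). The subsequent Gram-Schmidt orthogonalization without normalization is simple enough to sketch in exact arithmetic:

```python
from fractions import Fraction

# Gram-Schmidt WITHOUT normalization: b*_i = b_i - sum_{j<i} mu_ij b*_j,
# with mu_ij = <b_i, b*_j> / <b*_j, b*_j>. Exact arithmetic via Fraction.
def gram_schmidt(rows):
    ortho = []
    for b in rows:
        b_star = list(b)
        for g in ortho:
            mu = sum(x * y for x, y in zip(b, g)) / sum(x * x for x in g)
            b_star = [x - mu * y for x, y in zip(b_star, g)]
        ortho.append(b_star)
    return ortho

# Tiny 2x3 example (non-square, like M3):
B = [[Fraction(v) for v in row] for row in [[1, 1, 0], [2, 0, 1]]]
G = gram_schmidt(B)
dot = sum(x * y for x, y in zip(G[0], G[1]))
print(G, dot)  # the rows of G are orthogonal, so dot == 0
```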

$$
L = \begin{pmatrix} b_1 \\ b_2 \\ b_3 \\ b_4 \\ b_5 \end{pmatrix}
$$

However, let’s consider the transformation of the entire matrix $M_2$. Let’s say

$$M_2 = T_1 L'$$

where $L'$ contains the submatrix $L$:

$$
L' = \begin{pmatrix}
b_1 & 0 & 0 & 0 & 0 \\
b_2 & 0 & 0 & 0 & 0 \\
b_3 & 0 & 0 & 0 & 0 \\
b_4 & 0 & 0 & 0 & 0 \\
b_5 & 0 & 0 & 0 & 0 \\
 & 1 & 0 & 0 & 0 \\
 & 0 & 1 & 0 & 0 \\
 & 0 & 0 & 1 & 0 \\
 & 0 & 0 & 0 & 1
\end{pmatrix}
$$

We have:

$$s = r M_1 = r T M_2 = r T T_1 L'$$

We can see that $r T T_1$ is a vector with the last $k^2$ entries all zero (because this holds for the vector $s$). Thus, $s$ lies in the space spanned by the rows of $L$:

$$s = \sum_{i=1}^{n} c_i b_i$$

Note that Gram-Schmidt orthogonalization (without normalization) preserves the determinant (meaning $\det(T_1) = 1$).

The vectors $b_i$ are orthogonal. So if

$$\|s\| < \|b_n\|$$

then we know

$$s = \sum_{i=1}^{n-1} c_i b_i$$

meaning that $c_n = 0$. This also means that $s$ is orthogonal to $b_n$, i.e. the inner product of $s$ and $b_n$ is zero. Expanding this inner product gives a new polynomial relation that holds at the same $(x, y)$ as our original polynomial $p$ does; together with $p$ itself, we now have two polynomials vanishing at the root.
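To illustrate the final step, suppose we have obtained a second polynomial $h(x, y)$ vanishing at the same root (the $h$ below is a made-up example, not one produced by the lattice step). Since both $p$ and $h$ here are linear in $y$, writing $p = A(x) + B(x)\,y$ and $h = C(x) + D(x)\,y$, the resultant eliminating $y$ is simply $A D - B C$, a univariate polynomial in $x$:

```python
from fractions import Fraction

def poly_mul(f, g):  # coefficient lists, lowest degree first
    out = [Fraction(0)] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] += a * b
    return out

def poly_sub(f, g):
    n = max(len(f), len(g))
    f = f + [Fraction(0)] * (n - len(f))
    g = g + [Fraction(0)] * (n - len(g))
    return [a - b for a, b in zip(f, g)]

F = Fraction
# p(x, y) = -6 + y + 2x + 3xy = (-6 + 2x) + (1 + 3x) y, root (1, 1)
A, B = [F(-6), F(2)], [F(1), F(3)]
# h(x, y) = (x - 2) + y: a hypothetical second polynomial with the same root
C, D = [F(-2), F(1)], [F(1)]

res = poly_sub(poly_mul(A, D), poly_mul(B, C))       # -4 + 7x - 3x^2
print(res)
print(sum(c * F(1)**i for i, c in enumerate(res)))   # vanishes at x = 1
```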

However, it needs to hold:

$$\|s\| < \|b_n\|$$

From the LLL paper, we know that if $M_3$ is square, the following holds:

$$\|b_n\| \geq |\det(M_3)|^{\frac{1}{n}} \, 2^{-\frac{n-1}{4}}$$
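Here is a sketch of where this bound comes from: the rows $b_i$ are the Gram-Schmidt vectors of an LLL-reduced basis, so by the standard LLL inequalities $\|b_i\|^2 \leq 2 \|b_{i+1}\|^2$, hence $\|b_i\| \leq 2^{\frac{n-i}{2}} \|b_n\|$, and the determinant is the product of the Gram-Schmidt norms:

$$|\det(M_3)| = \prod_{i=1}^{n} \|b_i\| \leq \prod_{i=1}^{n} 2^{\frac{n-i}{2}} \|b_n\| = 2^{\frac{n(n-1)}{4}} \|b_n\|^n$$

Taking $n$-th roots gives the bound above.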

In our case, $M_3$ is not square.

To estimate $\|b_n\|$, Coppersmith defined the matrix

$$
W = \begin{pmatrix}
1 & & & & & & & & \\
& Y & & & & & & & \\
& & Y^2 & & & & & & \\
& & & X & & & & & \\
& & & & XY & & & & \\
& & & & & XY^2 & & & \\
& & & & & & X^2 & & \\
& & & & & & & X^2 Y & \\
& & & & & & & & X^2 Y^2
\end{pmatrix}
$$

The left-hand block of $W M_1$ is the identity. Let $M_4$ be the right-hand block of $W M_1$.

Coppersmith proved that there exists a $k^2 \times k^2$ submatrix of the right-hand block of $W M_1$ whose determinant is at least $D'$ (let’s leave the details of what $D'$ is for now):

$$
\begin{pmatrix}
1 & & & & & & & & & & & & \\
& 1 & & & & & & & & & & & \\
& & 1 & & & & & & & & & & \\
& & & 1 & & & & & & & & & \\
& & & & 1 & & & & & \ast & \ast & \ast & \ast \\
& & & & & 1 & & & & \ast & \ast & \ast & \ast \\
& & & & & & 1 & & & \ast & \ast & \ast & \ast \\
& & & & & & & 1 & & \ast & \ast & \ast & \ast \\
& & & & & & & & 1 & & & &
\end{pmatrix}
$$

We remove the corresponding columns in the left-hand block. We obtain:

$$
A = \begin{pmatrix}
1 & & & & & & & & \\
& 1 & & & & & & & \\
& & 1 & & & & & & \\
& & & 1 & & & & & \\
& & & & & \ast & \ast & \ast & \ast \\
& & & & & \ast & \ast & \ast & \ast \\
& & & & & \ast & \ast & \ast & \ast \\
& & & & & \ast & \ast & \ast & \ast \\
& & & & 1 & & & &
\end{pmatrix}
$$

The matrix $A$ has the same determinant as the $k^2 \times k^2$ submatrix, which is at least $D'$.

Let’s take the same columns (as we did above) from the matrix $M_1$ and denote the resulting matrix by $\hat{M_1}$:

$$W \hat{M_1} = A$$

$$\det(A) = \det(W \hat{M_1}) = \det(W) \det(\hat{M_1}) \geq D'$$

We know:

$$\det(W) = (XY)^{\frac{(\delta + k)^2 (\delta + k - 1)}{2}}$$
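As a quick sanity check: the diagonal of $W$ consists of $X^i Y^j$ for $0 \leq i, j \leq \delta + k - 1$, so the exponent of $XY$ in $\det(W)$ works out to $(\delta + k)^2 (\delta + k - 1)/2$, which is $9$ for our example $W$ with $\delta = 1$, $k = 2$:

```python
from math import prod

# det(W) for diagonal W with entries X^i Y^j, 0 <= i, j <= delta + k - 1.
X, Y, delta, k = 2, 3, 1, 2
m = delta + k
detW = prod(X**i * Y**j for i in range(m) for j in range(m))
exponent = m * m * (m - 1) // 2
print(detW == (X * Y) ** exponent)  # True
```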

So we have:

$$\det(\hat{M_1}) \geq D' (XY)^{-\frac{(\delta + k)^2 (\delta + k - 1)}{2}}$$

We have:

$$\det(\hat{M_1}) = \det(\hat{M_2}) = \det(\hat{M_3})$$

so we can now use $\|b_n\| \geq |\det(\hat{M_3})|^{\frac{1}{n}} \, 2^{-\frac{n-1}{4}}$ to estimate $\|b_n\|$.