The Bauer-Joux algorithm
is a technique for solving polynomial equations of the form $p(x, y, z) = 0$ over the integers $\mathbb{Z}$, given that the solutions lie within specific bounds: $|x| < X$, $|y| < Y$, and $|z| < Z$.
Consider the polynomial $p(x, y, z) = 1 + axy + byz$.
We choose a set of monomials $S$ by which we will shift the polynomial $p$. Let's say $S = \{1, x\}$. We denote the size of $S$ by $s$. We define the set $M$ as the collection of all monomials of $p$ and all monomials of $s_m \cdot p$ for each $s_m \in S$. In our case:
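The set $M$ for this example can be enumerated mechanically. A minimal sympy sketch (the shift set $S = \{1, x\}$ as above; the coefficients $a$ and $b$ are kept symbolic):

```python
from sympy import symbols, expand, Poly

x, y, z, a, b = symbols('x y z a b')
p = 1 + a*x*y + b*y*z

S = [1, x]  # the chosen shift set

# M = monomials of p together with monomials of s_m * p for each s_m in S
M = set()
for q in [p] + [expand(s_m * p) for s_m in S]:
    for exps in Poly(q, x, y, z).monoms():
        M.add(x**exps[0] * y**exps[1] * z**exps[2])

print(len(M))  # 6 monomials: 1, x, x*y, y*z, x**2*y, x*y*z
```

So $|M| = 6$ here, matching the six coordinates that appear in the vectors below.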
We can see that the last $s$ coordinates of $r$ are $0$.
Let $L_1$ denote the lattice generated by the rows of $M_1$. By applying elementary row operations, we can transform $M_1$ into a matrix of the following form:

where $b_1, b_2, b_3, b_4$ are obtained by first applying the LLL algorithm to $L'$ and then performing Gram-Schmidt orthogonalization on the resulting vectors.
Since $s_0$ has its last $s$ coordinates equal to zero, the vector $r_0 U U_1$ must have its first $s$ coordinates equal to zero. For $n = 4$ and $s = 2$, this implies that $r_0 U U_1 = (0, 0, c_1, c_2, c_3, c_4)$ and:

$$s_0 = \sum_{i=1}^{n} c_i b_i$$
Because of the Gram-Schmidt orthogonalization, the vectors $b_1, \ldots, b_n$ are pairwise orthogonal. We observe that if $\|s_0\| < \|b_n\|$, then $c_n = 0$ and $s_0$ is orthogonal to $b_n$: the coefficient $c_n$ is an integer (it is a coordinate of $r_0 U U_1$), and $|c_n| = |\langle s_0, b_n \rangle| / \|b_n\|^2 \le \|s_0\| / \|b_n\| < 1$.
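The projection step behind this observation, $c_i = \langle s_0, b_i \rangle / \|b_i\|^2$, can be illustrated with a small numpy sketch. The basis here is made-up toy data standing in for an LLL-reduced basis, not output of an actual reduction:

```python
import numpy as np

def gram_schmidt(B):
    """Row-wise Gram-Schmidt (no normalization): returns pairwise
    orthogonal vectors spanning the same space as the rows of B."""
    ortho = []
    for v in B.astype(float):
        for u in ortho:
            v = v - (v @ u) / (u @ u) * u
        ortho.append(v)
    return np.array(ortho)

# toy stand-in for an LLL-reduced basis (made-up numbers, n = 3)
B = np.array([[3, 1, 0],
              [1, 4, 1],
              [0, 1, 5]])
G = gram_schmidt(B)

# s0 lies in the span of the first two basis vectors, so its last
# Gram-Schmidt coefficient must come out as 0
s0 = 2 * B[0] - B[1]
c = np.array([(s0 @ g) / (g @ g) for g in G])
print(c[-1])  # ~0 up to floating-point error
```

A vanishing last coefficient is exactly the situation exploited above: it makes $s_0$ orthogonal to $b_n$.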
In this case, we compute the inner product of $s$ and $b_n$, which yields a new polynomial $p_2$ that has the same root as the polynomial $p$.
Let's assume that the polynomial $p_2$ is not algebraically independent from $p$. By construction, this would imply that $p_2$ is a linear combination of the columns on the right-hand side of the matrix $M_1$ (consisting of the polynomial $p$ and its shifts). However, since $p_2$ is also orthogonal to these polynomials, this assumption leads to a contradiction. Therefore, $p$ and $p_2$ are algebraically independent.
How do we get a third independent polynomial? We have an ideal $I = (p, p_2)$. Bauer and Joux proved that if the ideal $I$ is prime and $p_3 \notin I$, then $p, p_2, p_3$ are algebraically independent.
Let $I_M$ be the set of polynomials from $I$ that are defined over $M$ (i.e., consisting only of monomials from $M$). We compute the Gröbner basis of $I$ and select only the polynomials that are defined over $M$. This gives us a set $F = \{r_1, \ldots, r_n\}$.
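This selection step can be sketched with sympy's `groebner`. The concrete coefficients and the polynomial $p_2$ below are hypothetical placeholders; the point is only the mechanics of keeping the basis elements whose monomials all lie in a given set $M$:

```python
from sympy import symbols, groebner, Poly

x, y, z = symbols('x y z')
p  = 1 + 3*x*y + 5*y*z   # hypothetical instance with a = 3, b = 5
p2 = x + 2*x*y*z         # hypothetical second polynomial

# Groebner basis of the ideal I = (p, p2)
G = groebner([p, p2], x, y, z, order='grevlex')

# keep only basis polynomials defined over M (exponent tuples of monomials)
M = {(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 1), (2, 1, 0), (1, 1, 1)}
F = [g for g in G.exprs if set(Poly(g, x, y, z).monoms()) <= M]
```

The filter compares each basis polynomial's exponent tuples against $M$, discarding anything that uses a monomial outside it.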
We define a similar matrix as in the first step above, but this time the polynomials $r_1, \ldots, r_n$ are on the right-hand side of the matrix:
In the same way that we constructed $p_2$ above, we construct a new polynomial $p_3$. By construction, the polynomial $p_3$ is orthogonal to all the polynomials in the set $F$. If $p_3 \in I$, then, since $p_3$ is defined over $M$, it must be a linear combination of the polynomials from $F$. However, it is also orthogonal to all these polynomials, which implies $p_3 = 0$. Since this is not the case, we conclude that $p_3 \notin I$.
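The membership test $p_3 \in I$ is itself a standard Gröbner-basis computation: a polynomial lies in the ideal iff its remainder modulo a Gröbner basis of the ideal is zero. A sympy sketch, again with hypothetical polynomials:

```python
from sympy import symbols, groebner, reduced

x, y, z = symbols('x y z')
p  = 1 + 3*x*y + 5*y*z   # hypothetical a = 3, b = 5
p2 = x + 2*x*y*z         # hypothetical second polynomial

G = groebner([p, p2], x, y, z, order='grevlex')

def in_ideal(f):
    # f is in the ideal iff its normal form w.r.t. the Groebner basis is 0
    _, r = reduced(f, list(G.exprs), x, y, z, order='grevlex')
    return r == 0

print(in_ideal(p * p2))         # True: a product of generators is in I
print(in_ideal(1 + x + y + z))  # False: does not vanish on the variety of I
```

Note that the reduction must use the same monomial order as the Gröbner basis computation.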
Therefore, if $I$ is prime, the polynomials $p$, $p_2$, and $p_3$ are algebraically independent. That means if we obtain $p_3$ through the above construction, it is algebraically independent from $p$ and $p_2$. But when does this construction succeed?
Remember, we get $p_2$ when $\|s_0\| < \|b_n\|$. Bauer and Joux use a similar estimation of $b_n$ as Coppersmith does in the bivariate case. They use shifts of $p$ and $p_2$ as columns in the right-hand side matrix (as the set $F$). Similar to the bivariate case, they select the submatrix with the largest determinant, but here, the $p_2$ column must also be taken into account: