On a conjecture of Goppa on codes and counting points on hyperelliptic curves

What is the best code of a given length? This natural, but very hard, question motivates the following definition.

Definition: Let B_q(n,d) denote the largest k such that there exists an [n,k,d]-code in GF(q)^n.

Determining B_q(n,d) is one of the main problems in the theory of error-correcting codes.

Let C_i be a sequence of linear codes with parameters [n_i,k_i,d_i] such that n_i\to \infty as i \to \infty, and such that both limits R=\lim_{i\to \infty}\frac{k_i}{n_i} and \delta=\lim_{i\to \infty}\frac{d_i}{n_i} exist. The Singleton (upper) bound d_i\leq n_i-k_i+1 gives, upon dividing by n_i and letting i\to\infty, R+\delta\leq 1.

Let \Sigma_q denote the set of all (\delta,R)\in [0,1]^2 such that there exists a sequence C_i, i=1,2,..., of [n_i,k_i,d_i]-codes for which \lim_{i\to \infty} d_i/n_i=\delta and \lim_{i\to \infty} k_i/n_i=R.

Theorem (Manin [SS]): There exists a continuous decreasing function \alpha_q:[0,1]\to [0,1], such that

  1. \alpha_q is strictly decreasing on [0,\frac{q-1}{q}],
  2. \alpha_q(0)=1,
  3. if {\frac{q-1}{q}}\leq x\leq 1 then \alpha_q(x)=0,
  4. \Sigma_q=\{(\delta,R)\in [0,1]^2\ |\ 0\leq R\leq \alpha_q(\delta)\}.

Theorem (Gilbert-Varshamov): We have \alpha_q(x)\geq 1-H_q(x), where H_q(x)=x\log_q(q-1)-x\log_q(x)-(1-x)\log_q(1-x) is the entropy function (for a q-ary channel). In other words, for each fixed \epsilon>0, there exists an [n,k,d]-code C (which may depend on \epsilon) with R(C)\geq 1-H_q(\delta(C))-\epsilon. In particular, in the case q=2, \alpha_2(x)\geq 1-H_2(x)=1+x\log_2(x)+(1-x)\log_2(1-x).
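For concreteness, here is a small Python sketch of the binary entropy function and the resulting lower bound (the function names are my own; Sage's built-in gv_bound_asymp, used in the plot code below, computes the same curve):

from math import log

def H2(x):
    """Binary entropy function H_2(x), for 0 < x < 1."""
    return -x*log(x, 2) - (1 - x)*log(1 - x, 2)

def gv_lower_bound(x):
    """The Gilbert-Varshamov lower bound 1 - H_2(x) for alpha_2(x)."""
    return 1 - H2(x)

print(gv_lower_bound(0.1))  # approximately 0.5310
print(gv_lower_bound(0.5))  # 0.0: the bound vanishes at x = (q-1)/q = 1/2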

Plot of the Gilbert-Varshamov (dotted), Elias (red), Plotkin (dashed), Singleton (dash-dotted), Hamming (green), and MRRW (stepped) curves using SAGE:

[Plot: asymptotic coding theory bounds]
Not a single value of \alpha_q(x) is known for 0<x<{\frac{q-1}{q}}!

Goppa’s conjecture: \alpha_2(x) = 1-H_2(x). (In other words, the Gilbert-Varshamov lower bound is actually attained in the case q=2.)

Though my guess is that Goppa’s conjecture is true, it is known (thanks to work of Tsfasman, Vladut, and Zink on algebraic-geometry codes) that the q-ary analog of Goppa’s conjecture is false when q\geq 49 is a square. On the other hand, we have the following theorem [J].

Theorem: If B(1.77,p) is true for infinitely many primes p with p\equiv 1\pmod 4, then Goppa’s conjecture is false.

The rest of this post will be devoted to explaining the notation in this Theorem. For each non-empty subset S\subset GF(p), define the hyperelliptic curve X_S by y^2=f_S(x), where f_S(x) = \prod_{a\in S}(x-a). For example, with p=5 and S=\{0,1,2\}, X_S is the curve y^2=x(x-1)(x-2). The notation B(c,p) then means: “For all non-empty subsets S\subset GF(p), |X_S(GF(p))| \leq c\cdot p holds.”

It may seem strange that a statement about asymptotic bounds for error-correcting codes is related to the number of points on hyperelliptic curves, but for details see [J]. Unfortunately, it appears much more needs to be known about hyperelliptic curves over finite fields if one is to use this connection to prove or disprove Goppa’s conjecture.

References:
[J] D. Joyner, On quadratic residue codes and hyperelliptic curves, Discrete Mathematics & Theoretical Computer Science, Vol. 10, No. 1 (2008).
[SS] S. Shokranian and M. A. Shokrollahi, Coding Theory and Bilinear Complexity, Scientific Series of the International Bureau, KFA Jülich, Vol. 21, 1994.


Here’s how the above plot is constructed using SAGE:


sage: f1 = lambda x: gv_bound_asymp(x,2)
sage: P1 = plot(f1,0,1/2,linestyle=":")
sage: f2 = lambda x: plotkin_bound_asymp(x,2)
sage: P2 = plot(f2,0,1/2,linestyle="--")
sage: f3 = lambda x: elias_bound_asymp(x,2)
sage: P3 = plot(f3,0,1/2,rgbcolor=(1,0,0))
sage: f4 = lambda x: singleton_bound_asymp(x,2)
sage: P4 = plot(f4,0,1/2,linestyle="-.")
sage: f5 = lambda x: mrrw1_bound_asymp(x,2)
sage: P5 = plot(f5,0,1/2,linestyle="steps")
sage: f6 = lambda x: hamming_bound_asymp(x,2)
sage: P6 = plot(f6,0,1/2,rgbcolor=(0,1,0))
sage: show(P1+P2+P3+P4+P5+P6)


If you are interested in computing what c might be in the above statement B(c,p), the following SAGE code may help.
First, the Python function below returns a random list of elements (without repetition, if desired) from an object.

def random_elements(F, n, allow_repetition=True):
    """
    F is a SAGE object for which random_element is a method
    and which has more than n elements (for example, a finite
    field). Here n>1 is an integer. The output has no
    repetitions if allow_repetition=False.
    EXAMPLES:
        sage: p = nth_prime(100)
        sage: K = GF(p)
        sage: random_elements(K, 2, allow_repetition=False)
        [336, 186]
    The command random_elements(K, p+1, allow_repetition=False)
    triggers a ValueError.
    """
    if n > F.order() - 2 and not allow_repetition:
        raise ValueError("Parameters are not well-chosen.")
    L = []
    while len(L) < n:
        c = F.random_element()
        # skip duplicates when repetitions are not allowed
        if allow_repetition or c not in L:
            L.append(c)
    return L

Next, the SAGE session below computes |X_S(GF(p))|/p for a particular choice of the parameters; this gives a lower bound for c.


sage: p = nth_prime(100)
sage: K = GF(p)
sage: R.<x> = PolynomialRing(K)
sage: S = random_elements(K,ZZ((p+1)/2),allow_repetition=False)
sage: fS = prod([x - a for a in S])
sage: XS = HyperellipticCurve(fS)
sage: RR(len(XS.points())/p)
0.975970425138632

Based on experiments with this code, it appears that for “random” choices of S and “large” p, the quotient |X_S(GF(p))|/p is very close to 1.
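To get a better empirical lower bound for c, one can repeat this experiment and record the largest ratio observed. Here is a sketch (the number of trials is arbitrary, and random_elements is the function defined above):

sage: p = nth_prime(100)
sage: K = GF(p)
sage: R.<x> = PolynomialRing(K)
sage: best = 0
sage: for trial in range(20):
....:     S = random_elements(K, ZZ((p+1)/2), allow_repetition=False)
....:     XS = HyperellipticCurve(prod([x - a for a in S]))
....:     best = max(best, RR(len(XS.points())/p))
sage: best  # the largest observed quotient |X_S(GF(p))|/p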

“Circle decoding” of the [7,4,3] Hamming code

This is a well-known trick but I’m posting it here since I often have a hard time tracking it down when I need it for an introductory coding theory talk or something. Hopefully someone else will find it amusing too!

Let F = GF(2) and let C be the set of all vectors in the third column below (for simplicity, we omit commas and parentheses, so 0000000 is written instead of (0,0,0,0,0,0,0), for example).

The [7,4,3] Hamming code

  decimal   binary   Hamming [7,4] codeword
     0       0000         0000000
     1       0001         0001110
     2       0010         0010101
     3       0011         0011011
     4       0100         0100011
     5       0101         0101101
     6       0110         0110110
     7       0111         0111000
     8       1000         1000111
     9       1001         1001001
    10       1010         1010010
    11       1011         1011100
    12       1100         1100100
    13       1101         1101010
    14       1110         1110001
    15       1111         1111111

This is a linear code of length 7, dimension 4, and minimum distance 3. It is called the Hamming [7,4,3]-code.
In fact, there is a mapping \phi from F^4 onto C given by \phi(x_1,x_2,x_3,x_4)={\bf y}, where

{\bf y}= \left( \begin{array}{c} y_1 \\ y_2 \\ y_3 \\ y_4 \\ y_5 \\ y_6 \\ y_7 \end{array} \right) = \left( \begin{array}{cccc} 1&0&0&0 \\ 0&1&0&0 \\ 0&0&1&0 \\ 0&0&0&1 \\ 1&0&1&1 \\ 1&1&0&1 \\ 1&1&1&0 \end{array} \right) \left( \begin{array}{c} x_1 \\ x_2 \\ x_3 \\ x_4 \end{array} \right) \ \pmod{2}.

Moreover, the matrix

G=\left(\begin{array}{ccccccc} 1&0&0&0&1&1&1 \\ 0&1&0&0&0&1&1 \\ 0&0&1&0&1&0&1 \\ 0&0&0&1&1&1&0 \end{array}\right)

is a generator matrix: the codewords are exactly the vectors {\bf x}G for {\bf x}\in F^4.
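As a quick sanity check, the claimed parameters (and any row of the table above) can be verified in Sage using only the standard LinearCode constructor:

sage: MS = MatrixSpace(GF(2), 4, 7)
sage: G = MS([[1,0,0,0,1,1,1],[0,1,0,0,0,1,1],[0,0,1,0,1,0,1],[0,0,0,1,1,1,0]])
sage: C = LinearCode(G)
sage: C.dimension(), C.minimum_distance()
(4, 3)
sage: vector(GF(2), [0,0,0,1])*G  # decimal 1, binary 0001
(0, 0, 0, 1, 1, 1, 0)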

Now, suppose a codeword c \in C is sent over a noisy channel and denote the received word by {\bf w}=(w_1,w_2,w_3,w_4,w_5,w_6,w_7). We assume at most 1 error was made.

  1. Put w_i in the region of the three-circle Venn diagram associated to coordinate i, for i=1,2,...,7: coordinate 1 lies in all three circles A, B, and C; coordinates 2, 3, 4 each lie in exactly two; and coordinates 5, 6, 7 each lie in exactly one, as recorded in the decoding table below.
  2. Do parity checks on each of the circles A, B, and C; the pattern of parity failures locates a single error via the decoding table below (see the sketch after the table).

Decoding table

  parity failure on circle(s)   error position
  none                          none
  A, B, and C                   1
  B and C                       2
  A and C                       3
  A and B                       4
  A only                        5
  B only                        6
  C only                        7
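Here is a minimal Python sketch of this decoding procedure (the names circle_decode, CIRCLES, and ERROR_POSITION are mine; the circle memberships are read directly off the decoding table above):

# Circle memberships: coordinate 1 lies in A, B, and C; coordinate 5
# lies in A only; and so on, matching the decoding table above.
CIRCLES = {"A": [1, 3, 4, 5], "B": [1, 2, 4, 6], "C": [1, 2, 3, 7]}
ERROR_POSITION = {
    frozenset("ABC"): 1, frozenset("BC"): 2, frozenset("AC"): 3,
    frozenset("AB"): 4, frozenset("A"): 5, frozenset("B"): 6,
    frozenset("C"): 7,
}

def circle_decode(w):
    """Correct at most one error in the received 7-bit word w."""
    failures = frozenset(name for name, coords in CIRCLES.items()
                         if sum(w[i - 1] for i in coords) % 2 == 1)
    c = list(w)
    if failures:
        c[ERROR_POSITION[failures] - 1] ^= 1  # flip the erroneous bit
    return c

For instance, circle_decode([1, 1, 1, 1, 1, 1, 0]) finds a parity failure on circle C only, flips bit 7, and returns the all-ones codeword.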

Exercise: Decode {\bf w} = (0,1,0,0,0,0,1) in the binary Hamming code using this algorithm.
(This is solved at the bottom of this post.)

In his 1976 autobiography Adventures of a Mathematician, Stanislaw Ulam [U] poses the following problem:

Someone thinks of a number between one and one million (which is just less than 2^{20}). Another person is allowed to ask up to twenty questions, to which the first person is supposed to answer only yes or no. Obviously, the number can be guessed by asking first: Is the number in the first half-million? and then again reduce the reservoir of numbers in the next question by one-half, and so on. Finally, the number is obtained in less than \log_2(1000000). Now suppose one were allowed to lie once or twice, then how many questions would one need to get the right answer?

We define Ulam’s Problem as follows: Fix integers M\geq 1 and e\geq 0. Player 1 chooses a number from the set of integers \{0,1,\dots,M\} and Player 2 asks yes/no questions to deduce the number, to which Player 1 is allowed to lie at most e times. Player 2 must discern which number Player 1 picked using a minimum number of questions, with no feedback (in other words, all the questions must be provided at once).

As far as I know, this version of Ulam’s problem without feedback is still unsolved if e=2. For the case e=1, see [N] and [M]. (Also, [H] is a good survey.)

The Hamming [7,4,3] code solves the case M=15, e=1 with seven questions, as follows. Player 1 converts his number to binary and encodes it as a Hamming [7,4,3] codeword; a table containing all 16 codewords is included above for reference. Player 2 then asks seven questions of the form, “Is the i-th bit of your 7-tuple a 1?” Player 1’s answers form a new 7-tuple (c_1,c_2,...,c_7), and each coordinate is placed in the corresponding region of the circles. If Player 1 was completely truthful, the parity conditions hold, so the 7-tuple generated by Player 2’s questions matches a 7-tuple from the codeword table above; Player 2 just has to decode it into an integer using that table to win the game. A single lie, however, yields a 7-tuple unlike any in the codeword table. In this case, Player 2 error-checks the received word using the three circle parity checks and the decoding table above: once Player 2 determines the position of the erroneous bit, he corrects it by flipping that bit, and decoding the corrected codeword yields Player 1’s original number.
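For example, using the generator matrix G from above in Sage: if Player 1 picks 4 (binary 0100), his truthful answers form the codeword

sage: MS = MatrixSpace(GF(2), 4, 7)
sage: G = MS([[1,0,0,0,1,1,1],[0,1,0,0,0,1,1],[0,0,1,0,1,0,1],[0,0,0,1,1,1,0]])
sage: vector(GF(2), [0,1,0,0])*G  # 4 in binary is 0100
(0, 1, 0, 0, 0, 1, 1)

and if he lies on the sixth question, Player 2 receives (0,1,0,0,0,0,1), which is precisely the exercise solved below.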

References

[H] R. Hill, “Searching with lies,” in Surveys in Combinatorics, ed. P. Rowlinson, London Math. Soc. Lecture Note Series 218.

[M] J. Montague, “A Solution to Ulam’s Problem with Error-correcting Codes.”

[N] I. Niven, “Coding theory applied to a problem of Ulam,” Math. Mag. 61 (1988), 275-281.

[U] S. Ulam, Adventures of a Mathematician, Charles Scribner’s Sons, New York, 1976.


Solution to exercise using SAGE:


sage: MS = MatrixSpace(GF(2), 4, 7)
sage: G = MS([[1,0,0,0,1,1,1],[0,1,0,0,0,1,1],[0,0,1,0,1,0,1],[0,0,0,1,1,1,0]])
sage: C = LinearCode(G)
sage: V = VectorSpace(GF(2), 7)
sage: w = V([0,1,0,0,0,0,1])
sage: C.decode(w)
(0, 1, 0, 0, 0, 1, 1)

In other words, Player 1 picked 4 and lied on the 6th question.

Here is a Sage subtlety:

sage: C = HammingCode(3, GF(2))
sage: C
Linear code of length 7, dimension 4 over Finite Field of size 2
sage: V = VectorSpace(GF(2), 7)
sage: w = V([0,1,0,0,0,0,1])
sage: C.decode(w)
(1, 1, 0, 0, 0, 0, 1)

Why is this decoded word different? Because the Sage Hamming code is distinct from (though “equivalent” to) the Hamming code we used in the example above.

Unfortunately, SAGE does not yet handle code isomorphisms (at least not as easily as GUAVA), so we use GUAVA’s CodeIsomorphism function, which calls the desauto program in J. Leon’s GPL’d C library:


gap> C1 := GeneratorMatCode([[1,0,0,1,0,1,0],[0,1,0,1,0,1,1],[0,0,1,1,0,0,1],[0,0,0,0,1,1,1]],GF(2));
a linear [7,4,1..3]1 code defined by generator matrix over GF(2)
gap> C2 := GeneratorMatCode([[1,0,0,0,1,1,1],[0,1,0,0,0,1,1],[0,0,1,0,1,0,1],[0,0,0,1,1,1,0]],GF(2));
a linear [7,4,1..3]1 code defined by generator matrix over GF(2)
gap> CodeIsomorphism(C1,C2);
(2,6,7,3)
gap> Display(GeneratorMat(C1));
1 . . 1 . 1 .
. 1 . 1 . 1 1
. . 1 1 . . 1
. . . . 1 1 1
gap> Display(GeneratorMat(C2));
1 . . . 1 1 1
. 1 . . . 1 1
. . 1 . 1 . 1
. . . 1 1 1 .

Now we see that the permutation (2,6,7,3) sends the “SAGE Hamming code” to the “circle Hamming code”:


sage: H = HammingCode(3, GF(2))
sage: MS = MatrixSpace(GF(2), 4, 7)
sage: G = MS([[1,0,0,0,1,1,1],[0,1,0,0,0,1,1],[0,0,1,0,1,0,1],[0,0,0,1,1,1,0]])
sage: C = LinearCode(G)
sage: C.gen_mat()
[1 0 0 0 1 1 1]
[0 1 0 0 0 1 1]
[0 0 1 0 1 0 1]
[0 0 0 1 1 1 0]
sage: S7 = SymmetricGroup(7)
sage: g = S7([(2,6,7,3)])
sage: H.permuted_code(g) == C
True

Some favorite quotes on math, science, learning

Here is a collection of some favorite quotes from scientists and writers. For more, see this post.

There are some things which cannot be learned quickly,
and time, which is all we have,
must be paid heavily for their acquiring.
They are the very simplest things,
and because it takes a man’s life to know them
the little new that each man gets from life
is very costly and the only heritage he has to leave.
Ernest Hemingway
(From A. E. Hotchner, Papa Hemingway, Random House, NY, 1966)

I believe that a scientist looking at nonscientific problems is just as dumb as the next guy.
Richard Feynman

The best thing for being sad is to learn something. That is the only thing that never fails. You may grow old and trembling in your anatomies, you may lie awake at night listening to the disorder of your veins, you may miss your only love, you may see the world about you devastated by evil lunatics, or know your honor trampled in the sewers of baser minds. There is only one thing for it then: to learn. Learn why the world wags and what wags it. That is the only thing which the mind can never exhaust, never alienate, never be tortured by, never fear or distrust, and never dream of regretting.
T. H. White in The Once and Future King

Truth, like gold, is to be obtained not by its growth, but by washing away from it all that is not gold. Leo Tolstoy

Education is what survives when what has been learnt has been forgotten.
B. F. Skinner

The advantage is that mathematics is a field in which one’s blunders tend to show very clearly and can be corrected or erased with a stroke of the pencil. It is a field which has often been compared with chess, but differs from the latter in that it is only one’s best moments that count and not one’s worst. A single inattention may lose a chess game, whereas a single successful approach to a problem, among many which have been relegated to the wastebasket, will make a mathematician’s reputation.
Norbert Wiener, in Ex-Prodigy: My Childhood and Youth

Science is a differential equation. Religion is a boundary condition.
Alan Turing

Everything is vague to a degree you do not realize till you have tried to make it precise.
Bertrand Russell

For every complicated problem there is a solution that is simple, direct, understandable, and wrong.
H. L. Mencken

If people do not believe that mathematics is simple, it is only because they do not realize how complicated life is.
John Louis von Neumann

To be what we are, and to become what we are capable of becoming, is the only end in life.
Baruch Spinoza

Math blogs, SAGE, and latex

I created this WordPress site only because I thought I could not type LaTeX on my blogspot page (wdjoyner.blogspot.com). Since I have now figured out how, following http://wolverinex02.googlepages.com/emoticonsforblogger2, maybe this blog will stay dormant for now.

A latex test: the formula

\frac{\pi}{4}=\int_0^1 \frac{dx}{1+x^2}

rendered at five sizes, from smallest to biggest.

Here is a SAGE plot of the integrand:

[Plot: the integrand 1/(1+x^2) on the interval [0,1]]

Here is a SAGE session:


sage: x = var("x")
sage: integral(1/(1+x^2),x,0,1)
pi/4
sage: plot(1/(1+x^2),x,0,1)

Actually, I like this much better than blogspot, so I might switch after all!