We say that divisors D and D′ are linearly equivalent, and write D ∼ D′, if D − D′ is a principal divisor, i.e., if D − D′ = div(f) for some function f. Note that if D and D′ are linearly equivalent, they must have the same degree, since the degree of every principal divisor is 0. Divisors of degree 0 are linearly equivalent if and only if they determine the same element of the Jacobian. If D is a divisor of degree 0, we denote by [D] the element of the Jacobian determined by D. A divisor D is said to be effective if D(v) ≥ 0 for all vertices v. We write D ≥ 0 to mean that D is effective. The linear system associated to a divisor D is the set
|D| = {E : E ≥ 0, E ∼ D},
i.e., |D| is the set of all effective divisors linearly equivalent to D. Note that if D ∼ D′, then |D| = |D′|. We note also that if deg(D) < 0, then |D| must be empty.
Sage can be used to compute the linear system of any divisor on a graph.
def linear_system(D, Gamma):
    """
    Returns the linear system attached to the divisor D.

    EXAMPLES:
        sage: Gamma2 = graphs.CubeGraph(2)
        sage: Gamma1 = Gamma2.subgraph(vertices = ['00', '01'], edges = [('00', '01')])
        sage: f = [['00', '01', '10', '11'], ['00', '01', '00', '01']]
        sage: is_harmonic_graph_morphism(Gamma1, Gamma2, f)
        True
        sage: PhiV = matrix_of_graph_morphism_vertices(Gamma1, Gamma2, f); PhiV
        [1 0 1 0]
        [0 1 0 1]
        sage: D = vector([1,0,0,1])
        sage: PhiV*D
        (1, 1)
        sage: linear_system(PhiV*D, Gamma1)
        [(2, 0), (1, 1), (0, 2)]
        sage: linear_system(D, Gamma2)
        [(0, 2, 0, 0), (0, 0, 2, 0), (1, 0, 0, 1)]
        sage: [PhiV*x for x in linear_system(D, Gamma2)]
        [(0, 2), (2, 0), (1, 1)]
    """
    Q = Gamma.laplacian_matrix()
    CS = Q.column_space()
    N = len(D.list())
    d = sum(D.list())
    lin_sys = []
    if d < 0:
        return lin_sys
    if (d == 0) and (D in CS):
        lin_sys = [CS(0)]
        return lin_sys
    elif d == 0:
        return lin_sys
    S = IntegerModRing(d+1)^N
    V = QQ^N
    for v in S:
        v = V(v)
        if D - v in CS:
            lin_sys.append(v)
    return lin_sys
For the adjacency relation to be symmetric, −1 must be a square in GF(q), which holds if and only if q ≡ 1 (mod 4). By definition, the graph with vertex set GF(q), in which x and y are adjacent if and only if x − y is a nonzero square, is the Paley graph of order q.
Paley was a brilliant mathematician who died tragically at the age of 26. Paley graphs are one of the many spin-offs of his work. The following facts are known about them.
Here is Sage code for the Paley graph (thanks to Chris Godsil, see [GB]):
def Paley(q):
    K = GF(q)
    return Graph([K, lambda i,j: i != j and (i-j).is_square()])
Below is an example.
sage: X = Paley(13)
sage: X.vertices()
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]
sage: X.is_vertex_transitive()
True
sage: X.degree_sequence()
[6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6]
sage: X.spectrum()
[6, 1.302775637731995?, 1.302775637731995?, 1.302775637731995?,
 1.302775637731995?, 1.302775637731995?, 1.302775637731995?,
 -2.302775637731995?, -2.302775637731995?, -2.302775637731995?,
 -2.302775637731995?, -2.302775637731995?, -2.302775637731995?]
sage: G = X.automorphism_group()
sage: G.cardinality()
78
We see that this Paley graph is regular of degree 6, it has only three distinct eigenvalues, and its automorphism group has order 78.
Here is an animation of this Paley graph:
The frames in this animation were constructed one at a time by deleting an edge and plotting the new graph.
Here is an animation of the Paley graph of order 17:
The frames in this animation were constructed using a Python script:
X = Paley(17)
E = X.edges()
N = len(E)
EC = X.eulerian_circuit()
for i in range(N):
    X.plot(layout="circular", graph_border=True, dpi=150).save(filename="paley-graph_"+str(1000+i)+".png")
    X.delete_edge(EC[i])
X.plot(layout="circular", graph_border=True, dpi=150).save(filename="paley-graph_"+str(1000+N)+".png")
Instead of choosing the edges to remove “by hand,” they are removed according to their occurrence in an Eulerian circuit of the graph.
Here is an animation of another Paley graph:
[GB] Chris Godsil and Rob Beezer, Explorations in Algebraic Graph Theory with Sage, 2012, in preparation.
Let f : GF(2)^n → GF(2) be a Boolean function. (We identify the values 0, 1 with either real numbers or elements of the binary field GF(2). Hopefully, the context makes it clear which one is used.)
Define a partial order ≤ on GF(2)^n as follows: for v, w ∈ GF(2)^n, we say v ≤ w whenever we have v_1 ≤ w_1, v_2 ≤ w_2, …, v_n ≤ w_n. A Boolean function f is called monotone (increasing) if whenever we have v ≤ w, then we also have f(v) ≤ f(w).
Note that if f and g are monotone then (a) the product fg is also monotone, and (b) the function x ↦ f(x̄)‾ is monotone. (Overline means bit-wise complement.)
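For small n these definitions are easy to experiment with by brute force. Here is a minimal pure-Python sketch; the helper names leq and is_monotone below are my own, not the afsr.sage functions used later in this post:

```python
from itertools import product

def leq(v, w):
    # partial order on GF(2)^n: v <= w iff v_i <= w_i in every coordinate
    return all(a <= b for a, b in zip(v, w))

def is_monotone(f, n):
    # brute-force check of monotonicity over all comparable pairs in GF(2)^n
    pts = list(product([0, 1], repeat=n))
    return all(f(v) <= f(w) for v in pts for w in pts if leq(v, w))

# example: AND is monotone, XOR is not
f_and = lambda x: x[0] & x[1]
f_xor = lambda x: x[0] ^ x[1]
```

For instance, is_monotone(f_and, 2) is True while is_monotone(f_xor, 2) is False, since (0,1) ≤ (1,1) but XOR sends them to 1 and 0.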
Definition:
Let f be any monotone function. We say that Γ_f is the least support of f if Γ_f consists of all vectors in the support of f which are smallest in the partial ordering on GF(2)^n. We say a subset of GF(2)^n is admissible if its vectors are pairwise incomparable in this partial ordering (i.e., it is an antichain).
Monotone functions on GF(2)^n are in a natural one-to-one correspondence with the collection of admissible sets.
Example:
Here is an example of a monotone function whose least support vectors are illustrated in the figure below.
The algebraic normal form can be computed from these least support vectors.
The following result is due to Charles Celerier.
Theorem:
Let f be a monotone function whose least support vectors are given by Γ_f. Then the algebraic normal form of f can be written down directly from Γ_f.
The following example is due to Caroline Melles.
Example:
Let f be the monotone function whose least support is
{(1,1,0,0,1,1), (0,0,1,1,1,1), (1,1,1,1,0,0)}.
Using the Theorem above, we obtain the compact algebraic form f = x0*x1*x2*x3 + x0*x1*x4*x5 + x2*x3*x4*x5.
This function is monotone yet has no vanishing Walsh-Hadamard transform values. To see this, we attach the file afsr.sage available from Celerier (https://github.com/celerier/oslo/), then run the following commands.
sage: V = GF(2)^6
sage: L = [V([1,1,0,0,1,1]), V([0,0,1,1,1,1]), V([1,1,1,1,0,0])]
sage: f = monotone_from_support(L)
sage: is_monotone(f)
True
These commands simply construct a Boolean function whose least support vectors are those in L. Next, we compute the Walsh-Hadamard transform of this function using both the method built into Sage’s sage.crypto module and the walsh_transform function in afsr.sage.
sage: f.walsh_hadamard_transform()
(-44, -12, -12, 12, -12, 4, 4, -4, -12, 4, 4, -4, 12, -4, -4, 4,
 -12, 4, 4, -4, 4, 4, 4, -4, 4, 4, 4, -4, -4, -4, -4, 4,
 -12, 4, 4, -4, 4, 4, 4, -4, 4, 4, 4, -4, -4, -4, -4, 4,
 12, -4, -4, 4, -4, -4, -4, 4, -4, -4, -4, 4, 4, 4, 4, 4)
sage: f.algebraic_normal_form()
x0*x1*x2*x3 + x0*x1*x4*x5 + x2*x3*x4*x5
sage: x0,x1,x2,x3,x4,x5 = var("x0,x1,x2,x3,x4,x5")
sage: g = x0*x1*x2*x3 + x0*x1*x4*x5 + x2*x3*x4*x5
sage: Omega = [v for v in V if g(x0=v[0], x1=v[1], x2=v[2], x3=v[3], x4=v[4], x5=v[5]) != 0]
sage: len(Omega)
10
sage: g = lambda x: x[0]*x[1]*x[2]*x[3] + x[0]*x[1]*x[4]*x[5] + x[2]*x[3]*x[4]*x[5]
sage: [walsh_transform(g,a) for a in V]
[44, 12, 12, -12, 12, -4, -4, 4, 12, -4, -4, 4, -12, 4, 4, -4,
 12, -4, -4, 4, -4, -4, -4, 4, -4, -4, -4, 4, 4, 4, 4, -4,
 12, -4, -4, 4, -4, -4, -4, 4, -4, -4, -4, 4, 4, 4, 4, -4,
 -12, 4, 4, -4, 4, 4, 4, -4, 4, 4, 4, -4, -4, -4, -4, 4]
(Note: the Walsh transform method in the BooleanFunction class in Sage differs by a sign from the standard definition.) This verifies that there are no values of the Walsh-Hadamard transform which are 0.
More details can be found in the paper listed here. I end with two questions (which I think should be answered “no”).
Question: Are there any monotone functions of variables having no free variables of degree ?
Question: Are there any monotone functions of more than two variables which are bent?
Speaking of material, what does this book actually cover? The chapters are:
There are also several appendices (for example, one on Sage, a free and open source mathematics software program with very strong linear algebra computational capabilities [S]).
Note the unusual labeling of the chapters – no numbering is used for the chapters; instead, capital Roman letters are used (an acronym, E for Eigenvalues, for example). The same indexing method is used for the definitions, examples, and theorems as well. How do you refer to a result with this method of labeling? The way cross-referencing works in this book is that each reference is followed by the page number on which the reference can be found. For example, the theorem that says the adjoint of a matrix respects matrix addition would be referred to as “Theorem AMA [175],” since it can be found on page 175 and is written there as
Theorem AMA (Adjoint and Matrix Addition):
Suppose A and B are matrices of the same size. Then (A + B)* = A* + B*.
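The theorem, which says that (A + B)* = A* + B* where * denotes the conjugate transpose, is easy to confirm on a small example. Here is a quick hand-rolled Python check (a sketch of my own, not code from the book):

```python
def adjoint(M):
    # conjugate transpose of a matrix given as a list of rows:
    # adjoint(M)[j][i] == conjugate(M[i][j])
    return [[M[i][j].conjugate() for i in range(len(M))]
            for j in range(len(M[0]))]

def add(M, N):
    # entry-wise sum of two matrices of the same size
    return [[a + b for a, b in zip(r, s)] for r, s in zip(M, N)]

A = [[1+2j, 3j], [0j, 4-1j]]
B = [[2-1j, 1+0j], [5j, 2+2j]]

# Theorem AMA: the adjoint of a sum equals the sum of the adjoints
assert adjoint(add(A, B)) == add(adjoint(A), adjoint(B))
```

Of course a numerical check is not a proof, but it is a handy sanity test when reading (or editing) a chapter.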
One advantage to this method of indexing is that if, for example, you replace the chapter on Vectors with your own chapter and then recompile the LaTeX, the theorem on adjoint and matrix addition is still “Theorem AMA,” even though its page number has probably changed. Although this indexing method with page numbers does make the various results pretty easy to find, all the labeled results are also conveniently listed in the index. Moreover, in the electronic versions (PDF, XML, jsMath) of the book, all of this cross-referencing is available as hyperlinks.
Each chapter of course is composed of many sections. Each section not only has plentiful detailed examples and its own list of “reading questions” to see if you have understood the material, but also a list of exercises with solutions. More precisely, some exercises are marked as computational (e.g., solve the following system of linear equations), some as theoretical (e.g., prove that if a vector v is in the kernel of a matrix then any scalar times v is also in the kernel), and some as “mixed” (often of the form: find an example of a matrix with property X). As far as I could see, every computational exercise was accompanied by a more-or-less complete solution. However, the other types of problems usually (not always) either had no solution provided or only a sketch of one. A rough count would be about 500 exercises and 350 of them have solutions.
The book is very well-written and at a level similar to Anton’s popular book [A]. What topics are actually covered by this book? There is a detailed table of contents of (a) the chapters, sections, and subsections, (b) the definitions, (c) the theorems, (d) the examples, and (e) a 30-page index.
The first chapter on systems of linear equations discusses solving systems of linear equations, as well as classifying them (consistent, homogeneous, and so on), row reduction, and the matrix form of a system of linear equations. This chapter, in my opinion, approaches things in the right way by stressing the importance of the method of row reduction/Gauss elimination. Very detailed proofs are also presented. For most students, the more details the better, so this is also a strong plus.
The second chapter on vectors starts with the basics on vector operations. This is followed by four sections on linear spans and linear independence. This is a challenging topic for students, but the book does a very thorough job. The final section discusses orthogonality, inner products, and the Gram-Schmidt procedure.
The chapter on matrices comprises six sections. After several sections explaining the usual details on matrix operations (such as transposes and adjoints) and matrix arithmetic (such as addition and multiplication), the discussion turns to matrix inverses. Two sections are spent on this topic. The last two sections deal with row and column spaces and their basic properties.
The chapter on vector spaces begins with two sections on the basic definitions and properties of vector spaces and subspaces. The next two sections are on linear independence (again), spanning sets (again), and vector space bases. With the difficulty that linear independence and spanning sets present to many students, this reappearance of these topics provides excellent reinforcement. The last sections of the chapter are on the dimension properties of the row-span and column span of a matrix. For example, what Gilbert Strang calls the Fundamental Theorem of Linear Algebra, the so-called “rank plus nullity theorem,” is in one of these sections (see Theorem RPNC in section RNM on page 323).
After a relatively short 24-page chapter on the determinant and its properties, the book continues with the Eigenvalues chapter. This chapter discusses eigenvalues, eigenvectors, diagonalization, and similarity. Topics such as the Jordan canonical form are discussed in a later chapter.
The chapter on linear transformations has separate sections on injective maps, surjective maps, and invertible maps, respectively. This chapter has a wealth of examples to help guide the student through these relatively abstract concepts.
The final chapter is also the most difficult for the student. This is the chapter titled Representations, concerning topics such as the matrix representation of a linear transformation and the properties of this representation. Other topics, such as the Cayley-Hamilton Theorem, the Jordan canonical form, and diagonalization of orthogonal matrices appear in this chapter as well. In version 2.0, some sections of this chapter were less complete than those in other chapters.
Finally, I would also like to mention that the author has done a considerable amount of work on the use of Sage in the classroom. For example, he has added to the Sage code and documentation and has written the following “quick reference sheets” summarizing the most useful linear algebra commands in Sage. For more information, see his website http://linear.ups.edu/sage-fcla.html.
An excellent book written by an experienced teacher. Highly recommended.
Bibliography:
[A] H. Anton, Elementary linear algebra, 8th edition, Wiley, 2000.
[B] Robert Beezer’s Linear Algebra website, http://linear.ups.edu/.
[S] W. Stein, Sage: Open Source Mathematical Software (Version 5.2), The Sage Group, 2012, http://www.sagemath.org
Define f(k) = 2k if k <= n/2, and f(k) = 2n + 1 - 2k if n/2 < k <= n. Let p ∈ S_n be the permutation sending k to f(k), where S_n is the symmetric group on n letters. From [2], we have the following result.
Theorem: If p is an n-cycle then 2n+1 is a prime.
Call such a prime a “sestina prime”. Which primes are sestina primes?
Here is Python/Sage code for this permutation:
def sestina(n):
    """
    Computes the element of the symmetric group S_n associated to the shuffle above.

    EXAMPLES:
        sage: sestina(4)
        (1,2,4)
        sage: sestina(6)
        (1,2,4,5,3,6)
        sage: sestina(8)
        (1,2,4,8)(3,6,5,7)
        sage: sestina(10)
        (1,2,4,8,5,10,9,7,3,6)
        sage: sestina(12)
        (1,2,4,8,9,7,11,3,6,12)(5,10)
        sage: sestina(14)
        (1,2,4,8,13,3,6,12,5,10,9,11,7,14)
        sage: sestina(16)
        (1,2,4,8,16)(3,6,12,9,15)(5,10,13,7,14)
        sage: sestina(18)
        (1,2,4,8,16,5,10,17,3,6,12,13,11,15,7,14,9,18)
        sage: sestina(20)
        (1,2,4,8,16,9,18,5,10,20)(3,6,12,17,7,14,13,15,11,19)
        sage: sestina(22)
        (1,2,4,8,16,13,19,7,14,17,11,22)(3,6,12,21)(5,10,20)(9,18)
    """
    def fcn(k, n):
        if k <= int(n/2):
            return 2*k
        else:
            return 2*n+1-2*k
    L = [fcn(k,n) for k in range(1,n+1)]
    G = SymmetricGroup(n)
    return G(L)
And here is an example due to Ezra Pound [3]:
I
Damn it all! all this our South stinks peace.
You whoreson dog, Papiols, come! Let’s to music!
I have no life save when the swords clash.
But ah! when I see the standards gold, vair, purple, opposing
And the broad fields beneath them turn crimson,
Then howl I my heart nigh mad with rejoicing.

II
In hot summer have I great rejoicing
When the tempests kill the earth’s foul peace,
And the light’nings from black heav’n flash crimson,
And the fierce thunders roar me their music
And the winds shriek through the clouds mad, opposing,
And through all the riven skies God’s swords clash.

III
Hell grant soon we hear again the swords clash!
And the shrill neighs of destriers in battle rejoicing,
Spiked breast to spiked breast opposing!
Better one hour’s stour than a year’s peace
With fat boards, bawds, wine and frail music!
Bah! there’s no wine like the blood’s crimson!

IV
And I love to see the sun rise blood-crimson.
And I watch his spears through the dark clash
And it fills all my heart with rejoicing
And prys wide my mouth with fast music
When I see him so scorn and defy peace,
His lone might ’gainst all darkness opposing.

V
The man who fears war and squats opposing
My words for stour, hath no blood of crimson
But is fit only to rot in womanish peace
Far from where worth’s won and the swords clash
For the death of such sluts I go rejoicing;
Yea, I fill all the air with my music.

VI
Papiols, Papiols, to the music!
There’s no sound like to swords swords opposing,
No cry like the battle’s rejoicing
When our elbows and swords drip the crimson
And our charges ’gainst “The Leopard’s” rush clash.
May God damn for ever all who cry “Peace!”

VII
And let the music of the swords make them crimson
Hell grant soon we hear again the swords clash!
Hell blot black for always the thought “Peace”!
References:
[1] http://en.wikipedia.org/wiki/Sestina
[2] Richard Dore and Anton Geraschenko, “Sestinas and Primes”, posted to http://stacky.net/wiki/index.php?title=Course_notes and http://math.berkeley.edu/~anton/written/sestina.pdf
[3] Ezra Pound, “Sestina: Altaforte” (1909), originally published in the English Review, 1909.
[4] John Bullitt, N. J. A. Sloane and J. H. Conway, http://oeis.org/A019567
This cipher has been re-invented by several authors, such as the author and mathematician Charles Lutwidge Dodgson (Lewis Carroll), who claimed his 1868 “The Alphabet Cipher” was unbreakable. Several others claimed the so-called Vigenère cipher was unbreakable (e.g., Scientific American magazine in 1917). However, Friedrich Kasiski and Charles Babbage broke the cipher in the 1800s [1]. The cipher saw long practical use; during the American Civil War, for example, the Confederacy used a brass cipher disk to implement the Vigenère cipher (now on display in the NSA Museum in Fort Meade) [1].
The so-called Vigenère cipher is a generalization of the Caesar shift cipher. Whereas the shift cipher shifts each letter by the same amount (that amount being the key of the shift cipher), the so-called Vigenère cipher shifts a letter by an amount determined by the key, which is a word or phrase known only to the sender and receiver.
For example, if the key was a single letter, such as “C”, then the so-called Vigenère cipher is actually a shift cipher with a shift of 2 (since “C” is the 2-nd letter of the alphabet, if you start counting at 0). If the key was a word with two letters, such as “CA”, then the so-called Vigenère cipher will shift letters in even positions by 2 and letters in odd positions are left alone (or shifted by 0, since “A” is the 0-th letter, if you start counting at 0).
REFERENCES:
[1] Wikipedia article on the Vigenere cipher.
Using Sage, let’s look at a message (a chant at football games between rivals USNA and West Point):
sage: AS = AlphabeticStrings()
sage: A = AS.alphabet()
sage: VC = VigenereCryptosystem(AS, 1)  # sets the key length to be 1
sage: m = VC.encoding("Beat Army!"); m  # trivial example
BEATARMY
Now, we choose for illustration a simple key of length 1, and encipher this message:
sage: key = AS("D")
sage: c = VC.enciphering(key, m)
sage: c  # a trivial example
EHDWDUPB
You see here that in this case the cipher boils down to the Caesar/shift cipher (shifting by 3).
Deciphering is easy:
sage: VC.deciphering(key, c)
BEATARMY
Next, we choose for illustration a simple key of length 2, and encipher the same message:
sage: VC = VigenereCryptosystem(AS, 2)
sage: key = AS("CA")
sage: m = VC.encoding("Beat Army!"); m
BEATARMY
sage: c = VC.enciphering(key, m); c
DECTCROY
Since one of the key letters is “A” (which shifts by 0), half the plaintext is unchanged in going to the ciphertext.
Here is the algorithmic description of the above (so-called) Vigenère cipher:
ALGORITHM:

INPUT:
    key - a string of upper-case letters (the secret "key")
    m   - a string of upper-case letters (the "plaintext" message)
OUTPUT:
    c   - a string of upper-case letters (the "ciphertext" message)

Identify the alphabet A, ..., Z with the integers 0, ..., 25.
Step 1: Compute from the string key a list L1 of corresponding integers. Let n1 = len(L1).
Step 2: Compute from the string m a list L2 of corresponding integers. Let n2 = len(L2).
Step 3: Break L2 up sequentially into sublists of size n1, with one sublist at the end of size <= n1.
Step 4: For each of these sublists L of L2, compute a new list C given by C[i] = L[i] + L1[i] (mod 26), for each i.
Step 5: Assemble these lists C by concatenation into a new list of length n2.
Step 6: Compute from the new list a string c of corresponding letters.
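The steps above can be sketched in a few lines of plain Python (a hypothetical helper of my own, not Sage’s VigenereCryptosystem; it collapses Steps 3–5 into a single loop using the index modulo the key length):

```python
def vigenere_encipher(key, m):
    # Steps 1-2: map letters A..Z to integers 0..25
    L1 = [ord(k) - ord('A') for k in key]
    L2 = [ord(x) - ord('A') for x in m]
    # Steps 3-5: shift the i-th plaintext letter by the (i mod n1)-th key letter
    C = [(x + L1[i % len(L1)]) % 26 for i, x in enumerate(L2)]
    # Step 6: map back to letters
    return "".join(chr(c + ord('A')) for c in C)
```

This reproduces the Sage sessions above: vigenere_encipher("D", "BEATARMY") gives "EHDWDUPB" and vigenere_encipher("CA", "BEATARMY") gives "DECTCROY".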
Once it is known that the key is, say, n characters long, frequency analysis can be applied to every n-th letter of the ciphertext to determine the plaintext. This method is called “Kasiski examination” or the “Kasiski attack” (although it was first discovered by Charles Babbage).
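The first step of such an attack, splitting the ciphertext into the n streams that were enciphered with the same key letter, is one line of Python (a sketch assuming the key length n has already been determined; the helper name is my own):

```python
def kasiski_streams(ct, n):
    # the i-th stream collects every n-th letter starting at position i;
    # each stream is a simple shift cipher, so letter-frequency analysis applies
    return [ct[i::n] for i in range(n)]
```

For example, kasiski_streams("DECTCROY", 2) returns ["DCCO", "ETRY"]; the first stream was shifted by “C” and the second by “A”.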
The cipher Vigenère actually discovered is an “auto-key cipher” described as follows.
ALGORITHM:

INPUT:
    key - a string of upper-case letters (the secret "key")
    m   - a string of upper-case letters (the "plaintext" message)
OUTPUT:
    c   - a string of upper-case letters (the "ciphertext" message)

Identify the alphabet A, ..., Z with the integers 0, ..., 25.
Step 1: Compute from the string m a list L2 of corresponding integers. Let n2 = len(L2).
Step 2: Let n1 be the length of the key. Concatenate the string key with the first n2 - n1 characters of the plaintext message. Compute from this string of length n2 a list L1 of corresponding integers. Note n2 = len(L1).
Step 3: Compute a new list C given by C[i] = L1[i] + L2[i] (mod 26), for each i. Note n2 = len(C).
Step 4: Compute from the new list a string c of corresponding letters.
Note how the key is mixed with the plaintext to create the enciphering of the plaintext to ciphertext in Steps 2 and 3.
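A minimal Python sketch of this autokey scheme (again a helper name of my own, not a Sage method):

```python
def autokey_encipher(key, m):
    # Step 2: extend the key with the plaintext itself, truncated to len(m)
    stream = (key + m)[:len(m)]
    # Step 3: add the keystream to the plaintext mod 26 (A..Z <-> 0..25)
    return "".join(chr((ord(x) - 65 + ord(k) - 65) % 26 + 65)
                   for x, k in zip(m, stream))
```

With key "D" and plaintext "BEATARMY", the keystream is "DBEATARM" and the ciphertext is "EFETTRDK"; note how each letter after the first is shifted by the previous plaintext letter.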
A screencast describing this has been posted to vimeo.
The Bifid cipher is discussed in Alasdair McAndrew’s book on cryptography and Sage, and we shall follow his discussion. As an example of a Polybius square for the Bifid cipher, pick the key to be “encrypt” (as Alasdair does); the resulting 6x6 square is produced by the bifid_square function below.
Here is Sage code to produce the 6x6 case (the 5x5 case is in Alasdair’s book):
def bifid(pt, key):
    """
    INPUT:
        pt  - plaintext string (digits okay)
        key - short string for key (no repetitions, digits okay)
    OUTPUT:
        ciphertext from the Bifid cipher (all caps, no spaces)

    This is the version of the Bifid cipher that uses the 6x6 Polybius square.

    AUTHOR: Alasdair McAndrew

    EXAMPLES:
        sage: key = "encrypt"
        sage: pt = "meet me on monday at 8am"
        sage: bifid(pt, key)
        [[2, 5], [0, 0], [0, 0], [1, 0], [2, 5], [0, 0], [3, 0], [0, 1], [2, 5],
         [3, 0], [0, 1], [1, 3], [1, 1], [0, 4], [1, 1], [1, 0], [5, 4], [1, 1], [2, 5]]
        'HNHOKNTA5MEPEGNQZYG'
    """
    AS = AlphabeticStrings()
    A = list(AS.alphabet()) + [str(x) for x in range(10)]
    # first make sure the letters are capitalized and the text has no spaces
    key0 = [x.capitalize() for x in key if not x.isspace()]
    pt0 = [x.capitalize() for x in pt if not x.isspace()]
    # create the long key
    long_key = key0 + [x for x in A if not (x in key0)]
    n = len(pt0)
    # the fractionalization
    pairs = [[long_key.index(x)//6, long_key.index(x)%6] for x in pt0]
    print(pairs)
    tmp_cipher = flatten([x[0] for x in pairs] + [x[1] for x in pairs])
    ct = "".join([long_key[6*tmp_cipher[2*i] + tmp_cipher[2*i+1]] for i in range(n)])
    return ct

def bifid_square(key):
    """
    Produces the Polybius square for the 6x6 Bifid cipher.

    EXAMPLE:
        sage: key = "encrypt"
        sage: bifid_square(key)
        [E N C R Y P]
        [T A B C D E]
        [F G H I J K]
        [L M N O P Q]
        [R S T U V W]
        [X Y Z 0 1 2]
    """
    AS = AlphabeticStrings()
    A = list(AS.alphabet()) + [str(x) for x in range(10)]
    # first make sure the letters are capitalized and the text has no spaces
    key0 = [SR(x.capitalize()) for x in key if not x.isspace()]
    # create the long key
    long_key = key0 + [SR(x) for x in A if not (x in key0)]
    # the fractionalization
    pairs = [[long_key.index(SR(x))//6, long_key.index(SR(x))%6] for x in A]
    f = lambda i, j: long_key[6*i + j]
    M = matrix(SR, 6, 6, f)
    return M
Have fun!
This algorithm is an example of dynamic programming, which allows one to break the computation down into simpler steps using some sort of recursive procedure. If n is the number of vertices then this is an O(n^3)-time algorithm. For comparison, the Bellman-Ford algorithm has complexity O(|V|·|E|), which is O(n^3)-time for “dense” graphs. However, Bellman-Ford only yields the shortest paths emanating from a single vertex. To achieve comparable output, we would need to iterate Bellman-Ford over all vertices, which would be an O(n^4)-time algorithm for “dense” graphs. Except for “sparse” graphs, Floyd-Warshall-Roy is better than an iterated implementation of Bellman-Ford.
Here is a Sage implementation:
def floyd_warshall_roy(A):
    """
    Shortest paths.

    INPUT:
        A - weighted adjacency matrix
    OUTPUT:
        dist  - a matrix of distances of shortest paths
        paths - a matrix of shortest paths

    EXAMPLES:
        sage: A = matrix([[0,1,2,3],[0,0,2,1],[20,10,0,3],[11,12,13,0]]); A
        sage: floyd_warshall_roy(A)
        ([[0, 1, 2, 2], [12, 0, 2, 1], [14, 10, 0, 3], [11, 12, 13, 0]],
        [-1  1  2  1]
        [ 3 -1  2  3]
        [ 3 -1 -1  3]
        [-1 -1 -1 -1])
        sage: A = matrix([[0,1,2,4],[0,0,2,1],[0,0,0,5],[0,0,0,0]])
        sage: floyd_warshall_roy(A)
        ([[0, 1, 2, 2], [+Infinity, 0, 2, 1], [+Infinity, +Infinity, 0, 5],
          [+Infinity, +Infinity, +Infinity, 0]],
        [-1  1  2  1]
        [-1 -1  2  3]
        [-1 -1 -1  3]
        [-1 -1 -1 -1])
        sage: A = matrix([[0,1,2,3],[0,0,2,1],[-5,0,0,3],[1,0,1,0]]); A
        sage: floyd_warshall_roy(A)
        Traceback (click to the left of this block for traceback)
        ...
        ValueError: A negative edge weight cycle exists.
        sage: A = matrix([[0,1,2,3],[0,0,2,1],[-1/2,0,0,3],[1,0,1,0]]); A
        sage: floyd_warshall_roy(A)
        ([[0, 1, 2, 2], [3/2, 0, 2, 1], [-1/2, 1/2, 0, 3/2], [1/2, 3/2, 1, 0]],
        [-1  1  2  1]
        [ 2 -1  2  3]
        [-1  0 -1  1]
        [ 2  2 -1 -1])
    """
    G = Graph(A, format="weighted_adjacency_matrix")
    V = G.vertices()
    E = [(e[0], e[1]) for e in G.edges()]
    n = len(V)
    dist = [[0]*n for i in range(n)]
    paths = [[-1]*n for i in range(n)]
    # initialization step
    for i in range(n):
        for j in range(n):
            if (i, j) in E:
                paths[i][j] = j
            if i == j:
                dist[i][j] = 0
            elif A[i][j] != 0:
                dist[i][j] = A[i][j]
            else:
                dist[i][j] = infinity
    # iteratively finding the shortest path
    for j in range(n):
        for i in range(n):
            if i != j:
                for k in range(n):
                    if k != j:
                        if dist[i][k] > dist[i][j] + dist[j][k]:
                            paths[i][k] = V[j]
                        dist[i][k] = min(dist[i][k], dist[i][j] + dist[j][k])
    for i in range(n):
        if dist[i][i] < 0:
            raise ValueError("A negative edge weight cycle exists.")
    return dist, matrix(paths)
More and more people are arguing, correctly in my opinion, that in the computational realm, in particular in mathematical research which uses computational experiments, much higher standards are needed. The Ars Technica article linked above suggests that “it’s time for a major revision of the scientific method.” Also, Victoria Stodden argues one must “facilitate reproducibility. Without this data may be open, but will remain de facto in the ivory tower.” The argument, basically, is that reproducing computational mathematical experiments is unreasonably difficult, requiring more than reasonable expertise. These days, it may in fact (unfortunately) require purchasing very expensive software, possessing very sophisticated programming skills, access to special hardware, or (worse) guessing parameters and programming procedures only hinted at by the researcher.
Hopefully, Sage can play the role of a standard bearer for such computational reproducibility. Sage is free, open source, and there is a publicly available server it runs on (sagenb.org).
What government agencies should require such reproducibility? In my opinion, all scientific funding agencies (NSF, etc) should follow these higher standards of computational accountability.
Panel discussion:
Q1: What are the grand challenges of symbolic computing?
Is the term “symbolic computation” too broad? (Hybrid symbolic/numerical, algebraic geometric computation, algebraic combinatorial/group-theoretical, computer proofs, tensor calculations, differential, mathematical knowledge/database research, user interfaces, …)
General answer: No. Hoon Hong points out that user interfaces are lower level but belong to the same group.
Q2: How can the latest algorithmic advances be made readily available: google analog of problem formulation? (Idea: suppose someone has a clever idea for a good algorithm but not enough discipline to implement it …)
One answer: Sage can put software together – is this the right way? Analog of stackoverflow.com?
Q3: What is the next killer app for symbolic computation? (Oil application of Groebner bases, cell phone app, robotics, …)
Q4: Can academically produced software such as LAPACK, LINBOX, SAGE compete with commercial software?
Hoon Hong’s answer: Yes, but why? Why not cooperate? He supports Sage very much, but suggests more research on interfaces and integration of different systems could lead to cooperation between the commercial systems and Sage.
Another panel:
Q: What are the spectacular successes and failures of computer algebra?
Failures:
(a) Small number of researchers.
(b) Sage could fail from lack of ties with the symbolic/numerical community (as Maxima/Axiom did). Matlab may fail due to an uphill battle to integrate MuPAD into a good symbolic toolbox. (Many voiced the view, on the panel and privately, that Matlab is strong because of its many toolboxes.)
(c) Education at the High School level using CA.
(d) Presenting output of CA intelligently and in a standard format.
(e) Failure to teach people how to properly use a CA system.
Successes:
(a) Sage – interesting new effort (with caveat above)
(b) Groebner bases, LLL.
My talk on Sage raised a lot of questions. There is both strong support for Sage and some questions about its design philosophy. Page 6 of my slides, at http://sage.math.washington.edu/home/wdj/expository/nsf-eccad2009/, was a source of lots of questions.
At ECCAD, http://www.cs.uri.edu/eccad2009/Welcome.html, Sage was mentioned a few times in talks as well as in some posters. The “main” Sage event was a laptop software demo which Karl-Dieter Crisman set up for Sage.
Overall, a good experience for Sage.