Sports ranking methods, 4

This is the fourth of a series of expository posts on matrix-theoretic sports ranking methods. This post discusses the Elo rating.

This system was originally developed by Arpad Elo (1903-1992), a physics professor at Marquette University in Milwaukee and a chess master, eight-time winner of the Wisconsin State Chess Championship. He developed the system in the 1950s and 1960s for rating chess players. It is now also used for table tennis, basketball, and other sports.

We use the following version of his rating system.

As above, assume all $n$ teams play each other (ties allowed),
and let $r_i$ denote the rating of Team $i$, $i=1,2,\dots,n$.

Let $A=(A_{ij})$ denote an $n\times n$ matrix of score results:

$A_{ij}= \left\{ \begin{array}{rr} -1,& {\rm if\ team\ } i {\rm \ lost\ to\ team\ } j,\\ +1,& {\rm if\ team\ } i {\rm\ beat\ team\ } j,\\ 0, & {\rm if}\ i=j. \end{array} \right.$

Let $S_{ij}=(A_{ij}+1)/2$.

As in the previous post, the matrix $A$ associated to the example of the Patriot league is the adjacency matrix of a digraph.

1. Initialize all the ratings to be $100$: ${\bf r}=(r_1,\dots,r_n) = (100,\dots,100)$.
2. After Team $i$ plays Team $j$, update Team $i$'s rating using the formula

$r_i = r_i+K(S_{ij}-\mu_{ij}),$

where $K=10$ and

$\mu_{ij} = (1+e^{-(r_i-r_j)/400})^{-1}.$

In the example of the Patriot league, the ratings vector is

${\bf r}=(85.124, 104.79, 104.88, 85.032, 94.876, 124.53).$

This gives the ranking

Lafayette $<$ Army $<$ Lehigh $<$ Bucknell $<$ Holy Cross $<$ Navy.

This gives a prediction failure rate of $13.3\%$.

Some SageMath code for this:

def elo_rating(A):
    """
    A is a signed adjacency matrix for a directed graph.

    Returns elo ratings of the vertices of Gamma = Graph(A)

    EXAMPLES:
        sage: A = matrix(QQ,[
        ....: [ 0, -1,  1, -1, -1, -1],
        ....: [ 1,  0, -1,  1,  1, -1],
        ....: [-1,  1,  0,  1,  1, -1],
        ....: [ 1, -1, -1,  0, -1, -1],
        ....: [ 1, -1, -1,  1,  0, -1],
        ....: [ 1,  1,  1,  1,  1,  0]])
        sage: elo_rating(A)
        (85.124, 104.79, 104.88, 85.032, 94.876, 124.53)
    """
    n = len(A.rows())
    RR = RealField(prec=20)
    V = RR^n
    K = 10
    r0 = 100  # initial rating
    r = n*[r0]
    for i in range(n):
        for j in range(n):
            if i != j and A[i][j] == 1:    # team i beat team j
                S = 1
            elif i != j and A[i][j] == -1: # team i lost to team j
                S = 0
            else:                          # diagonal term, treated as a draw
                S = 1/2
            mu = 1/(1 + e^(-(r[i] - r[j])/400))
            r[i] = r[i] + K*(S - mu)
    return V(r)
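For readers without Sage handy, the same update loop can be written in plain Python (a sketch; the function name `elo_ratings` is mine, and the matrix is just the example above re-entered as a list of lists):

```python
import math

def elo_ratings(A, K=10, r0=100):
    """Sequential Elo updates over a signed win/loss matrix A.

    A[i][j] is +1 if team i beat team j, -1 if i lost to j, 0 if i == j.
    Mirrors the update r_i <- r_i + K*(S_ij - mu_ij) from the post.
    """
    n = len(A)
    r = [float(r0)] * n
    for i in range(n):
        for j in range(n):
            if i != j and A[i][j] == 1:
                S = 1.0          # win
            elif i != j and A[i][j] == -1:
                S = 0.0          # loss
            else:
                S = 0.5          # diagonal entry, treated as a draw
            mu = 1 / (1 + math.exp(-(r[i] - r[j]) / 400))
            r[i] = r[i] + K * (S - mu)
    return r

A = [[ 0, -1,  1, -1, -1, -1],
     [ 1,  0, -1,  1,  1, -1],
     [-1,  1,  0,  1,  1, -1],
     [ 1, -1, -1,  0, -1, -1],
     [ 1, -1, -1,  1,  0, -1],
     [ 1,  1,  1,  1,  1,  0]]

print(elo_ratings(A))  # approximately (85.124, 104.79, 104.88, 85.032, 94.876, 124.53)
```

Note that the sequential order of the updates matters: later teams are rated against already-updated opponents, exactly as in the Sage loop above.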


How do I construct … in GAP?

This page is devoted to answering some basic questions along the lines of “How do I construct … in GAP?” You may view the HTML source code of this page to get the GAP commands without the output or the GAP prompt.

Questions

• How do I construct a … group? (permutation, dihedral, cyclic, finitely presented)
• How do I compute the conjugacy classes of a group?
• How do I … a polynomial? (factor, find roots of, evaluate)
• How do I compute a Groebner basis of an ideal?
• How do I compute the Brauer characters of a group representation?
• How do I row reduce a matrix, or compute its kernel?
• How do I compute $A^n$ mod $m$, where $A$ is an integer, a matrix, or a polynomial?
• Given a group $G$, how do I compute its normal subgroups, abelian subgroups, homomorphisms, semi-direct products, or cohomology?

• permutation:
To construct a permutation group, write down generators in disjoint cycle notation and hand them to the “Group” command. For example, the permutation group G generated by the cycles (1,2)(3,4) and (1,2,3) is constructed as follows:
gap> G:=Group((1,2)(3,4),(1,2,3));

Group([ (1,2)(3,4), (1,2,3) ])


Since the permutations generating G only move the integers 1 through 4, G is a subgroup of the symmetric group S4 on 4 letters. In fact, this G is the alternating group on four letters, A4. Some permutation groups have special constructions:

gap> S4:=SymmetricGroup(4);
Sym( [ 1 .. 4 ] )
gap> A4:=AlternatingGroup(4);
Alt( [ 1 .. 4 ] )
gap> IsSubgroup(S4,G);
true
gap> IsSubgroup(A4,G);
true
gap> S3:=SymmetricGroup(3);
Sym( [ 1 .. 3 ] )
gap> IsSubgroup(S3,G);
false
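For comparison, the closure of these two generators can be computed in plain Python with a breadth-first search (a sketch, not GAP; `compose` and `generated_group` are my own helper names, and permutations are written 0-based as tuples of images):

```python
def compose(p, q):
    # (p*q)(x) = p(q(x)), permutations as tuples of images
    return tuple(p[q[x]] for x in range(len(p)))

def generated_group(gens):
    """Close a set of permutations under composition by BFS."""
    identity = tuple(range(len(gens[0])))
    seen = {identity}
    frontier = [identity]
    while frontier:
        nxt = []
        for p in frontier:
            for g in gens:
                r = compose(g, p)
                if r not in seen:
                    seen.add(r)
                    nxt.append(r)
        frontier = nxt
    return seen

# the generators (1,2)(3,4) and (1,2,3), written 0-based
a = (1, 0, 3, 2)   # (1,2)(3,4)
b = (1, 2, 0, 3)   # (1,2,3)
G = generated_group([a, b])
print(len(G))      # 12, the order of the alternating group A4
```

Since both generators are even and the closure has order 12, this reproduces the GAP session's conclusion that G = A4.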



• dihedral
To construct a dihedral group, use the special “DihedralGroup” command:
gap> G:=DihedralGroup(6);
gap> Size(G);
6
gap> f:=GeneratorsOfGroup( G );
[ f1, f2 ]
gap> f[1]^2; f[2]^3;
<identity> of ...
<identity> of ...
gap> f[1]^2= f[2]^3;
true



• cyclic group
To construct a cyclic group, you may construct integers mod n:

gap> R:=ZmodnZ( 12);
(Integers mod 12)
gap> a:=Random(R);
ZmodnZObj( 11, 12 )
gap> 4*a;
ZmodnZObj( 8, 12 )
gap> b:=Random(R);
ZmodnZObj( 9, 12 )
gap> a+b;
ZmodnZObj( 8, 12 )


or use the special “CyclicGroup” command

gap> G:=CyclicGroup(12);
<pc group of size 12 with 3 generators>
gap> a:=Random(G);
f3^2
gap> f:=GeneratorsOfGroup( G );
[ f1, f2, f3 ]
gap> f[1]^4;
f3
gap> f[1]^12;
<identity> of ...



• conjugacy:
The conjugacy classes of a group G are computed using the “ConjugacyClasses” command. This is a list of classes {x^-1*g*x | x in G}.

gap> G:=SL(2,7);
SL(2,7)
gap> CG:=ConjugacyClasses(G);
[ [ [ Z(7)^0, 0*Z(7) ], [ 0*Z(7), Z(7)^0 ] ]^G,
[ [ 0*Z(7), Z(7)^3 ], [ Z(7)^0, Z(7)^5 ] ]^G,
[ [ 0*Z(7), Z(7)^4 ], [ Z(7)^5, Z(7)^5 ] ]^G,
[ [ Z(7)^3, 0*Z(7) ], [ 0*Z(7), Z(7)^3 ] ]^G,
[ [ 0*Z(7), Z(7)^3 ], [ Z(7)^0, Z(7)^2 ] ]^G,
[ [ 0*Z(7), Z(7)^4 ], [ Z(7)^5, Z(7)^2 ] ]^G,
[ [ 0*Z(7), Z(7)^3 ], [ Z(7)^0, 0*Z(7) ] ]^G,
[ [ 0*Z(7), Z(7)^3 ], [ Z(7)^0, Z(7)^4 ] ]^G,
[ [ 0*Z(7), Z(7)^3 ], [ Z(7)^0, Z(7) ] ]^G,
[ [ Z(7)^4, 0*Z(7) ], [ 0*Z(7), Z(7)^2 ] ]^G,
[ [ Z(7)^5, 0*Z(7) ], [ 0*Z(7), Z(7) ] ]^G ]
gap> g:=Representative(CG[3]); Order(g);
[ [ 0*Z(7), Z(7)^4 ], [ Z(7)^5, Z(7)^5 ] ]
14
gap> g:=Representative(CG[4]); Order(g);
[ [ Z(7)^3, 0*Z(7) ], [ 0*Z(7), Z(7)^3 ] ]
2
gap> g:=Representative(CG[5]); Order(g);
[ [ 0*Z(7), Z(7)^3 ], [ Z(7)^0, Z(7)^2 ] ]
7
gap> g:=Representative(CG[6]); Order(g);
[ [ 0*Z(7), Z(7)^4 ], [ Z(7)^5, Z(7)^2 ] ]
7
gap>


• presented
To construct a finitely presented group in GAP, use the “FreeGroup” and “FpGroupPresentation” commands. Here is one example.

gap> M12 := MathieuGroup( 12 );
Group([ (1,2,3,4,5,6,7,8,9,10,11), (3,7,11,8)(4,10,5,6), (1,12)(2,11)(3,6)(4,8)(5,9)(7,10) ])
gap> F := FreeGroup( "a", "b", "c" );
<free group on the generators [ a, b, c ]>
gap> words := [ F.1, F.2 ];
[ a, b ]
gap> P := PresentationViaCosetTable( M12, F, words );
<presentation with 3 gens and 10 rels of total length 97>
gap> TzPrintRelators( P );
#I  1. c^2
#I  2. b^4
#I  3. a*c*a*c*a*c
#I  4. a*b^2*a*b^-2*a*b^-2
#I  5. a^11
#I  6. a^2*b*a^-2*b^2*a*b^-1*a^2*b^-1
#I  7. a*b*a^-1*b*a^-1*b^-1*a*b*a^-1*b*a^-1*b^-1
#I  8. a^2*b*a^2*b^2*a^-1*b*a^-1*b^-1*a^-1*b^-1
#I  9. a*b*a*b*a^2*b^-1*a^-1*b^-1*a*c*b*c
#I  10. a^4*b*a^2*b*a^-2*c*a*b*a^-1*c
gap> G := FpGroupPresentation( P );
<fp group on the generators [ a, b, c ]>
gap> RelatorsOfFpGroup( G );
[ c^2, b^4, a*c*a*c*a*c, a*b^-2*a*b^-2*a*b^-2, a^11, a^2*b*a^-2*b^-2*a*b^-1*a^2*b^-1, a*b*a^-1*b*a^-1*b^-1*a*b*a^-1*b*a^-1*b^-1,
a^2*b*a^2*b^-2*a^-1*b*a^-1*b^-1*a^-1*b^-1, a*b*a*b*a^2*b^-1*a^-1*b^-1*a*c*b*c, a^4*b*a^2*b*a^-2*c*a*b*a^-1*c ]
gap> Size(M12);
95040
gap> Size(G);
95040
gap> IsomorphismGroups(G,M12);
????????


The last command is computationally intensive and requires more than the default memory allocation of 256M of RAM.

Here is another example.

gap> F := FreeGroup( "a", "b");
<free group on the generators [ a, b ]>
gap> G:=F/[F.1^2,F.2^3,F.1*F.2*F.1^(-1)*F.2^(-1)];
<fp group on the generators [ a, b ]>
gap> Size(G);
6



• rref
The key command for row reduction is “TriangulizeMat”. The following example illustrates the syntax.

gap> M:=[[1,2,3,4,5],[1,2,1,2,1],[1,1,0,0,0]];
[ [ 1, 2, 3, 4, 5 ], [ 1, 2, 1, 2, 1 ], [ 1, 1, 0, 0, 0 ] ]
gap> TriangulizeMat(M);
gap> M;
[ [ 1, 0, 0, -1, 1 ], [ 0, 1, 0, 1, -1 ], [ 0, 0, 1, 1, 2 ] ]
gap> Display(M);
[ [   1,   0,   0,  -1,   1 ],
[   0,   1,   0,   1,  -1 ],
[   0,   0,   1,   1,   2 ] ]
gap> M:=Z(3)^0*[[1,2,3,4,5],[1,2,1,2,1],[1,1,0,0,0]];
[ [ Z(3)^0, Z(3), 0*Z(3), Z(3)^0, Z(3) ],
[ Z(3)^0, Z(3), Z(3)^0, Z(3), Z(3)^0 ],
[ Z(3)^0, Z(3)^0, 0*Z(3), 0*Z(3), 0*Z(3) ] ]
gap> TriangulizeMat(M);
gap> Display(M);
1 . . 2 1
. 1 . 1 2
. . 1 1 2
gap>
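The effect of “TriangulizeMat” over the rationals can be reproduced in plain Python using exact arithmetic with `fractions.Fraction` (a sketch; the function name `rref` is my own):

```python
from fractions import Fraction

def rref(M):
    """Return the reduced row echelon form of M over the rationals."""
    M = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(M), len(M[0])
    lead = 0
    for r in range(rows):
        if lead >= cols:
            return M
        i = r
        while M[i][lead] == 0:       # find a pivot in column `lead`
            i += 1
            if i == rows:
                i = r
                lead += 1
                if lead == cols:
                    return M
        M[i], M[r] = M[r], M[i]      # swap the pivot row into place
        lv = M[r][lead]
        M[r] = [x / lv for x in M[r]]            # scale pivot to 1
        for i in range(rows):
            if i != r:
                lv = M[i][lead]
                M[i] = [iv - lv * rv for rv, iv in zip(M[r], M[i])]
        lead += 1
    return M

M = [[1, 2, 3, 4, 5], [1, 2, 1, 2, 1], [1, 1, 0, 0, 0]]
print(rref(M))  # [[1,0,0,-1,1],[0,1,0,1,-1],[0,0,1,1,2]], as in the GAP session
```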


• kernel:
There are different methods for matrices over the integers and matrices over a field. For integer entries, related commands include “NullspaceIntMat” and “SolutionNullspaceIntMat” in section 25.1 “Linear equations over the integers and Integral Matrices” of the reference manual.

gap> M:=[[1,2,3],[4,5,6],[7,8,9]];
[ [ 1, 2, 3 ], [ 4, 5, 6 ], [ 7, 8, 9 ] ]
gap> NullspaceIntMat(M);
[ [ 1, -2, 1 ] ]
gap> SolutionNullspaceIntMat(M,[0,0,1]);
[ fail, [ [ 1, -2, 1 ] ] ]
gap> SolutionNullspaceIntMat(M,[0,0,0]);
[ [ 0, 0, 0 ], [ [ 1, -2, 1 ] ] ]
gap> SolutionNullspaceIntMat(M,[1,2,3]);
[ [ 1, 0, 0 ], [ [ 1, -2, 1 ] ] ]



Here (0,0,1) is not in the image of M
(under v-> v*M) but (0,0,0) and (1,2,3) are.
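These facts are easy to check directly with row-vector-times-matrix products, GAP's $v\mapsto vM$ convention (a plain-Python sketch; `vec_mat` is my own helper name):

```python
def vec_mat(v, M):
    # row vector times matrix, GAP's v*M convention
    return [sum(v[i] * M[i][j] for i in range(len(v))) for j in range(len(M[0]))]

M = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
print(vec_mat([1, -2, 1], M))   # [0, 0, 0]: a nullspace vector, as GAP found
print(vec_mat([1, 0, 0], M))    # [1, 2, 3]: so (1,2,3) is in the image
```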

For field entries, related commands include “NullspaceMat” and “TriangulizedNullspaceMat” in section 24.6 “Matrices Representing Linear Equations and the Gaussian Algorithm”
of the reference manual.

gap> M:=[[1,2,3],[4,5,6],[7,8,9]];
[ [ 1, 2, 3 ], [ 4, 5, 6 ], [ 7, 8, 9 ] ]
gap> NullspaceMat(M);
[ [ 1, -2, 1 ] ]
gap> TriangulizedNullspaceMat(M);
[ [ 1, -2, 1 ] ]
gap> M:=[[1,2,3,1,1],[4,5,6,1,1],[7,8,9,1,1],[1,2,3,1,1]];
[ [ 1, 2, 3, 1, 1 ], [ 4, 5, 6, 1, 1 ], [ 7, 8, 9, 1, 1 ],
[ 1, 2, 3, 1, 1 ] ]
gap> NullspaceMat(M);
[ [ 1, -2, 1, 0 ], [ -1, 0, 0, 1 ] ]
gap> TriangulizedNullspaceMat(M);
[ [ 1, 0, 0, -1 ], [ 0, 1, -1/2, -1/2 ] ]



• characteristic polynomial:
Please see section 24.12.1 of the GAP reference manual for examples of characteristic polynomial of a square matrix (“CharacteristicPolynomial”) and section 56.3 for examples of the “characteristic polynomial” (called a “TracePolynomial”) of an element of a field extension.

• character:
GAP contains very extensive character-theoretic functions and data libraries (including an interface to the character tables in the Atlas). Here is just one simple example.

gap> G:=Group((1,2)(3,4),(1,2,3));
Group([ (1,2)(3,4), (1,2,3) ])
gap> T:=CharacterTable(G);
CharacterTable( Alt( [ 1 .. 4 ] ) )
gap> Display(T);
CT1

2  2  2  .  .
3  1  .  1  1

1a 2a 3a 3b
2P 1a 1a 3b 3a
3P 1a 2a 1a 1a

X.1     1  1  1  1
X.2     1  1  A /A
X.3     1  1 /A  A
X.4     3 -1  .  .

A = E(3)^2
= (-1-ER(-3))/2 = -1-b3
gap> irr:=Irr(G);
[ Character( CharacterTable( Alt( [ 1 .. 4 ] ) ), [ 1, 1, 1, 1 ] ),
Character( CharacterTable( Alt( [ 1 .. 4 ] ) ), [ 1, 1, E(3)^2, E(3) ] ),
Character( CharacterTable( Alt( [ 1 .. 4 ] ) ), [ 1, 1, E(3), E(3)^2 ] ),
Character( CharacterTable( Alt( [ 1 .. 4 ] ) ), [ 3, -1, 0, 0 ] ) ]
gap> Display(irr);
[ [       1,       1,       1,       1 ],
[       1,       1,  E(3)^2,    E(3) ],
[       1,       1,    E(3),  E(3)^2 ],
[       3,      -1,       0,       0 ] ]
gap> CG:=ConjugacyClasses(G);;
gap> chi:=irr[2]; gamma:=CG[3]; g:=Representative(gamma); g^chi;
Character( CharacterTable( Alt( [ 1 .. 4 ] ) ), [ 1, 1, E(3)^2, E(3) ] )
(1,2,3)^G
(1,2,3)
E(3)^2



For further details and examples, see chapters 69-72 of the GAP reference manual.

• brauer:
Just a simple example of what GAP can do here. To construct a Brauer character table:

gap> G:=Group((1,2)(3,4),(1,2,3));
Group([ (1,2)(3,4), (1,2,3) ])
gap> irr:=IrreducibleRepresentations(G,GF(7));
[ [ (1,2)(3,4), (1,2,3) ] -> [ [ [ Z(7)^0 ] ], [ [ Z(7)^0 ] ] ],
[ (1,2)(3,4), (1,2,3) ] -> [ [ [ Z(7)^0 ] ], [ [ Z(7)^4 ] ] ],
[ (1,2)(3,4), (1,2,3) ] -> [ [ [ Z(7)^0 ] ], [ [ Z(7)^2 ] ] ],
[ (1,2)(3,4), (1,2,3) ] -> [
[ [ 0*Z(7), Z(7)^3, Z(7)^0 ], [ 0*Z(7), Z(7)^3, 0*Z(7) ],
[ Z(7)^0, Z(7)^3, 0*Z(7) ] ],
[ [ 0*Z(7), Z(7)^0, 0*Z(7) ],
[ 0*Z(7), 0*Z(7), Z(7)^0 ], [ Z(7)^0, 0*Z(7), 0*Z(7) ] ]
] ]
gap> brvals := List(irr,chi-> List(ConjugacyClasses(G),c->
BrauerCharacterValue(Image(chi, Representative(c)))));
[ [ 1, 1, 1, 1 ], [ 1, 1, E(3)^2, E(3) ], [ 1, 1, E(3), E(3)^2 ],
[ 3, -1, 0, 0 ] ]
gap> Display(brvals);
[ [       1,       1,       1,       1 ],
[       1,       1,  E(3)^2,    E(3) ],
[       1,       1,    E(3),  E(3)^2 ],
[       3,      -1,       0,       0 ] ]
gap>



• polynomial
There are various ways to construct a polynomial in GAP.

gap> Pts:=Z(7)^0*[1,2,3];
[ Z(7)^0, Z(7)^2, Z(7) ]
gap> Vals:=Z(7)^0*[1,2,6];
[ Z(7)^0, Z(7)^2, Z(7)^3 ]
gap> g:=InterpolatedPolynomial(GF(7),Pts,Vals);
Z(7)^5*x_1^2+Z(7)


Or:

gap> p:=3;; F:=GF(p);;
gap> R:=PolynomialRing(F,["x1","x2"]);
PolynomialRing(..., [ x1, x2 ])
gap> vars:=IndeterminatesOfPolynomialRing(R);;
gap> x1:=vars[1]; x2:=vars[2];
x1
x2
gap> p:=x1^5-x2^5;
x1^5-x2^5
gap> DivisorsMultivariatePolynomial(p,R);
[ x1^4+x1^3*x2+x1^2*x2^2+x1*x2^3+x2^4, x1-x2 ]


Or:

gap> x:=X(Rationals);
x_1
gap> f:=x+x^2+1;
x_1^2+x_1+1
gap> Value(f,[x],[1]);
3


• factor
To factor a polynomial in GAP, there is one command for univariate polynomials (“Factors”) and another for multivariate polynomials (“DivisorsMultivariatePolynomial”). For factoring a univariate polynomial, GAP provides only methods over finite fields and over subfields of cyclotomic fields. Please see the examples given in section 64.10 “Polynomial Factorization” for more details. For multivariate polynomials, a very slow algorithm has been implemented in GAP, and an interface to a very fast algorithm in Singular has been implemented for those who have both Singular and the GAP Singular package installed. The former was illustrated in “polynomial” above. (Again, the ground field must be a finite field or a subfield of a cyclotomic field.) For the latter, please see the example for FactorsUsingSingularNC in the (GAP-)Singular manual.

• roots
There are some situations where GAP can find the roots of a polynomial, but GAP does not do this in general. (The roots must generate either a finite field or a subfield of a cyclotomic field.) However, there is a package called RadiRoot which, once installed, helps do this for polynomials with rational coefficients (RadiRoot itself requires other packages to be installed; please see its webpage for more details). The “Factors” command actually has an option which allows you to increase the ground field so that a factorization actually returns the roots. Please see the examples given in section 64.10 “Polynomial Factorization” for more details. Here is a second approach.

gap> p:=3; n:=4; F:=GF(p^n); c:=Random(F); r:=2;
3
4
GF(3^4)
Z(3^4)^79
2
gap>  x:=X(F,1); f:=x^r-c*x+c-1;
x_1
x_1^2+Z(3^4)^39*x_1+Z(3^4)^36
gap>  F_f:=FieldExtension( F, f );
AsField( GF(3^4), GF(3^8) )
gap>  alpha:=RootOfDefiningPolynomial(F_f);
Z(3^4)^36
gap> Value(f,[x],[alpha]);
0*Z(3)



Here is a third. First, enter the following program

RootOfPolynomial:=function(f,R)
local F0,Ff,a;
F0:=CoefficientsRing(R);
Ff:=FieldExtension(F0,f);
a:=RootOfDefiningPolynomial(Ff);
return a;
end;


Here’s how this can be used to find a root:

gap> F:=Rationals;
Rationals
gap> x:=X(F,1); f:=x^2+x+1;
x_1
x_1^2+x_1+1
gap> R:=PolynomialRing( F, [ x ]);
PolynomialRing(..., [ x_1 ])
gap> a:=RootOfPolynomial(f,R);
E(3)
gap> # check:
gap> Value(f,[x],[a]);
0


1. In the GAP Forum: Hensel lifting discussion.
2. In the manual, Galois groups.

• evaluate:
The relevant command is “Value”. There are several examples already on this page. For others, please see the examples given in section 64.7 “Multivariate Polynomials” of the manual. For sparse univariate polynomials, there is also the command “ValuePol” in section 23.6 of the manual.

• integer power
This is easy and intuitive:

gap> a:=1000; n:=100000; m:=123;
1000
100000
123
gap> a^n mod m;
1
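In Python the same computation is the built-in three-argument `pow`, which does modular exponentiation without ever forming $a^n$:

```python
a, n, m = 1000, 100000, 123
print(pow(a, n, m))  # 1, matching the GAP output
```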



• matrix power:
This too is easy and intuitive:

gap> A:=[[1,2],[3,4]]; n:=100000; m:=123;
[ [ 1, 2 ], [ 3, 4 ] ]
100000
123
gap> A^n mod m;
[ [ 1, 41 ], [ 0, 1 ] ]
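If your language has no built-in matrix power, the standard square-and-multiply trick works, reducing mod $m$ at every step so the entries stay small (a plain-Python sketch; `mat_mul` and `mat_pow` are my own names):

```python
def mat_mul(A, B, m):
    """Integer matrix product with entries reduced mod m."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) % m
             for j in range(n)] for i in range(n)]

def mat_pow(A, n, m):
    """Square-and-multiply: A^n with all arithmetic done mod m."""
    size = len(A)
    R = [[int(i == j) for j in range(size)] for i in range(size)]  # identity
    A = [[x % m for x in row] for row in A]
    while n > 0:
        if n & 1:
            R = mat_mul(R, A, m)
        A = mat_mul(A, A, m)
        n >>= 1
    return R

print(mat_pow([[1, 2], [3, 4]], 100000, 123))  # [[1, 41], [0, 1]], as in GAP
```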


• polynomial power
GAP allows you to do arithmetic over the polynomial ring R[x], where R = Z/nZ (where n is a positive integer). Here’s an example.

gap> Z4:=ZmodnZ(4);
(Integers mod 4)
gap> R:=UnivariatePolynomialRing(Z4,1);
PolynomialRing(..., [ x ])
gap> x:=IndeterminatesOfPolynomialRing(R)[1];
x
gap> I:=TwoSidedIdealByGenerators( R,[x^8-x^0]);
<two-sided ideal in PolynomialRing(..., [ x ]), (1 generators)>
gap> gen:=x^8-x^0;
x^8-ZmodnZObj(1,4)
gap> QuotientRemainder(R,x^8,gen);
[ ZmodnZObj(1,4), ZmodnZObj(1,4) ]
gap> QuotientRemainder(R,x^15,gen);
[ x^7, x^7 ]
gap> QuotientRemainder(R,x^15+x^8,gen);
[ x^7+ZmodnZObj(1,4), x^7+ZmodnZObj(1,4) ]
gap> PowerMod( R, x+x^0, 15, gen );
ZmodnZObj(0,4)
gap> PowerMod( R, x, 15, gen );
x^7
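The same “PowerMod” arithmetic can be sketched in plain Python: reducing modulo $x^8-1$ just means wrapping exponents around mod 8, and the coefficients live in Z/4 (the helper names `polymulmod` and `polypowmod` are mine):

```python
def polymulmod(f, g, n, k):
    """Multiply f, g in (Z/nZ)[x] / (x^k - 1); polys are coefficient lists of length k."""
    h = [0] * k
    for i, a in enumerate(f):
        if a:
            for j, b in enumerate(g):
                h[(i + j) % k] = (h[(i + j) % k] + a * b) % n
    return h

def polypowmod(f, e, n, k):
    """f^e in (Z/nZ)[x] / (x^k - 1), by repeated squaring."""
    r = [1] + [0] * (k - 1)          # the constant polynomial 1
    f = [c % n for c in f]
    while e:
        if e & 1:
            r = polymulmod(r, f, n, k)
        f = polymulmod(f, f, n, k)
        e >>= 1
    return r

x = [0, 1, 0, 0, 0, 0, 0, 0]                 # the polynomial x
print(polypowmod(x, 15, 4, 8))               # x^15 reduces to x^7, as in GAP
print(polypowmod([1, 1, 0, 0, 0, 0, 0, 0], 15, 4, 8))  # (1+x)^15 reduces to 0
```

The second result reflects the fact that every binomial coefficient C(15,j) is odd, so the pairs C(15,j)+C(15,j+8) are all divisible by 4.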



• Groebner basis
GAP’s Groebner bases algorithms are relatively slow and are included mostly for simple examples and for teaching purposes. However, a GAP interface to a very fast algorithm in Singular has been implemented for those who have both Singular and the GAP Singular package installed. The former of these is illustrated in section 64.17 Groebner bases of the GAP manual. For the latter, please see the example in the (GAP-)Singular manual GroebnerBasis.

• normal subgroup:
Here is an example:

gap> G := AlternatingGroup( 5 );
Group( (1,2,5), (2,3,5), (3,4,5) )
gap> normal := NormalSubgroups( G );
[ Subgroup( Group( (1,2,5), (2,3,5), (3,4,5) ), [  ] ),
Subgroup( Group( (1,2,5), (2,3,5), (3,4,5) ),
[ (1,2)(3,4), (1,3)(4,5), (1,4)(2,3) ] ) ]


1. Please see Volkmar Felsch’s GAP Forum response to a related question.
2. The xgap package (or, on a mac, Gap.app) displays subgroup lattices graphically.

• abelian subgroup
One idea to compute all the abelian subgroups is to compute all the subgroups and then “filter” out the abelian ones. Here is an illustration, taken from a GAP Forum response by Volkmar Felsch.

gap> G := AlternatingGroup( 5 );
Group( (1,2,5), (2,3,5), (3,4,5) )
gap> classes := ConjugacyClassesSubgroups( G );
[ ConjugacyClassSubgroups( Group( (1,2,5), (2,3,5),
(3,4,5) ), Subgroup( Group( (1,2,5), (2,3,5), (3,4,5) ), [  ] ) ),
ConjugacyClassSubgroups( Group( (1,2,5), (2,3,5),
(3,4,5) ), Subgroup( Group( (1,2,5), (2,3,5), (3,4,5) ),
[ (2,3)(4,5) ] ) ), ConjugacyClassSubgroups( Group( (1,2,5),
(2,3,5), (3,4,5) ), Subgroup( Group( (1,2,5), (2,3,5), (3,4,5) ),
[ (3,4,5) ] ) ), ConjugacyClassSubgroups( Group( (1,2,5),
(2,3,5), (3,4,5) ), Subgroup( Group( (1,2,5), (2,3,5), (3,4,5) ),
[ (2,3)(4,5), (2,4)(3,5) ] ) ), ConjugacyClassSubgroups( Group(
(1,2,5), (2,3,5), (3,4,5) ), Subgroup( Group( (1,2,5), (2,3,5),
(3,4,5) ), [ (1,2,3,4,5) ] ) ), ConjugacyClassSubgroups( Group(
(1,2,5), (2,3,5), (3,4,5) ), Subgroup( Group( (1,2,5), (2,3,5),
(3,4,5) ), [ (3,4,5), (1,2)(4,5) ] ) ),
ConjugacyClassSubgroups( Group( (1,2,5), (2,3,5),
(3,4,5) ), Subgroup( Group( (1,2,5), (2,3,5), (3,4,5) ),
[ (1,2,3,4,5), (2,5)(3,4) ] ) ), ConjugacyClassSubgroups( Group(
(1,2,5), (2,3,5), (3,4,5) ), Subgroup( Group( (1,2,5), (2,3,5),
(3,4,5) ), [ (2,3)(4,5), (2,4)(3,5), (3,4,5) ] ) ),
ConjugacyClassSubgroups( Group( (1,2,5), (2,3,5), (3,4,5) ), Group(
(1,2,5), (2,3,5), (3,4,5) ) ) ]
gap> cl := classes[4];
ConjugacyClassSubgroups( Group( (1,2,5), (2,3,5),
(3,4,5) ), Subgroup( Group( (1,2,5), (2,3,5), (3,4,5) ),
[ (2,3)(4,5), (2,4)(3,5) ] ) )
gap> length := Size( cl );
5
gap> rep := Representative( cl );
Subgroup( Group( (1,2,5), (2,3,5), (3,4,5) ),
[ (2,3)(4,5), (2,4)(3,5) ] )
gap> order := Size( rep );
4
gap> IsAbelian( rep );
true
gap> abel := Filtered( classes, cl -> IsAbelian( Representative( cl ) ) );
[ ConjugacyClassSubgroups( Group( (1,2,5), (2,3,5),
(3,4,5) ), Subgroup( Group( (1,2,5), (2,3,5), (3,4,5) ), [  ] ) ),
ConjugacyClassSubgroups( Group( (1,2,5), (2,3,5),
(3,4,5) ), Subgroup( Group( (1,2,5), (2,3,5), (3,4,5) ),
[ (2,3)(4,5) ] ) ), ConjugacyClassSubgroups( Group( (1,2,5),
(2,3,5), (3,4,5) ), Subgroup( Group( (1,2,5), (2,3,5), (3,4,5) ),
[ (3,4,5) ] ) ), ConjugacyClassSubgroups( Group( (1,2,5),
(2,3,5), (3,4,5) ), Subgroup( Group( (1,2,5), (2,3,5), (3,4,5) ),
[ (2,3)(4,5), (2,4)(3,5) ] ) ), ConjugacyClassSubgroups( Group(
(1,2,5), (2,3,5), (3,4,5) ), Subgroup( Group( (1,2,5), (2,3,5),
(3,4,5) ), [ (1,2,3,4,5) ] ) ) ]


• homomorphism
This depends on how the groups are given. For example, suppose that G is a permutation group with generators genG and H is a permutation group with generators genH. To find a homomorphism from G to H, one may use the “GroupHomomorphismByImages” or “GroupHomomorphismByImagesNC” commands. For examples of the syntax, please see section 38.1 “Creating Group Homomorphisms” of the reference manual. Here’s an illustration of how to convert a finitely presented group into a permutation group.

gap> p:=7;
7
gap> G:=PSL(2,p);
Group([ (3,7,5)(4,8,6), (1,2,6)(3,4,8) ])
gap> H:=SchurCover(G);
<fp group of size 336 on the generators [ f1, f2, f3 ]>
gap> iso:=IsomorphismPermGroup(H);
[ f1, f2, f3 ] -> [ (1,2,4,3)(5,9,7,10)(6,11,8,12)(13,14,15,16),
(2,5,6)(3,7,8)(11,13,14)(12,15,16), (1,4)(2,3)(5,7)(6,8)(9,10)(11,12)(13,
15)(14,16) ]
gap> H0:=Image(iso);                       # 2-cover of PSL2
Group([ (1,2,4,3)(5,9,7,10)(6,11,8,12)(13,14,15,16),
(2,5,6)(3,7,8)(11,13,14)(12,15,16), (1,4)(2,3)(5,7)(6,8)(9,10)(11,12)(13,
15)(14,16) ])
gap> IdGroup(H0);
[ 336, 114 ]
gap> IdGroup(SL(2,7));
[ 336, 114 ]
gap>


• semi-direct product(Contributed by Nilo de Roock):
As you can easily verify, D8 is isomorphic to C2:C4. Or in GAP…

N:=CyclicGroup(IsPermGroup,4);
G:=CyclicGroup(IsPermGroup,2);
AutN:=AutomorphismGroup(N);
f:=GroupHomomorphismByImages(G,AutN,GeneratorsOfGroup(G),[Elements(AutN)[2]]);
NG:=SemidirectProduct(G,f,N);


Verify with

StructureDescription(NG);


• semi-direct products(Contributed by Nilo de Roock):
The following shows how to construct all non-abelian groups of order 12 as semi-direct products. These products are not trivial yet small enough to verify by hand.

#D12 = (C2 x C2) : C3
G1:=CyclicGroup(IsPermGroup,2);
G2:=CyclicGroup(IsPermGroup,2);
G:=DirectProduct(G1,G2);
N:=CyclicGroup(IsPermGroup,3);
AutN:=AutomorphismGroup(N);
f:=GroupHomomorphismByImages(G,AutN,[Elements(G)[1],Elements(G)[2],Elements(G)[3],Elements(G)[4]],[Elements(AutN)[1],Elements(AutN)[2],Elements(AutN)[1],Elements(AutN)[2]]);
NG:=SemidirectProduct(G,f,N);
Print(StructureDescription(NG));
Print("\n");

#T = C4 : C3
G:=CyclicGroup(IsPermGroup,4);
N:=CyclicGroup(IsPermGroup,3);
AutN:=AutomorphismGroup(N);
f:=GroupHomomorphismByImages(G,AutN,[Elements(G)[1],Elements(G)[2],Elements(G)[3],Elements(G)[4]],[Elements(AutN)[1],Elements(AutN)[2],Elements(AutN)[1],Elements(AutN)[2]]);
NG:=SemidirectProduct(G,f,N);
Print(StructureDescription(NG));
Print("\n");

#A4 = C3 : (C2 x C2)
G:=CyclicGroup(IsPermGroup,3);
N1:=CyclicGroup(IsPermGroup,2);
N2:=CyclicGroup(IsPermGroup,2);
N:=DirectProduct(N1,N2);
AutN:=AutomorphismGroup(N);
f:=GroupHomomorphismByImages(G,AutN,[Elements(G)[1],Elements(G)[2],Elements(G)[3]],[Elements(AutN)[1],Elements(AutN)[4],Elements(AutN)[5]]);
NG:=SemidirectProduct(G,f,N);
Print(StructureDescription(NG));
Print("\n");


• cohomology
GAP will compute the Schur multiplier H2(G,C) using the
“AbelianInvariantsMultiplier” command. Here is an example showing how to find H2(A5,C), where A5 is the alternating group on 5 letters.

gap> A5:=AlternatingGroup(5);
Alt( [ 1 .. 5 ] )
gap> AbelianInvariantsMultiplier(A5);
[ 2 ]


So, H2(A5,C) is Z/2Z.

1. See section 37.23 and section 37.24 of the GAP manual.
2. See D. Holt’s GAP package cohomolo.

Sports ranking methods, 3

This is the third of a series of expository posts on matrix-theoretic sports ranking methods. This post discusses the random walker ranking.

We follow the presentation in the paper by Govan and Meyer (Ranking National Football League teams using Google’s PageRank). The table of “score differentials” based on the table in a previous post is:

$\begin{tabular}{c|cccccc} \verb+x\y+ & Army & Bucknell & Holy Cross & Lafayette & Lehigh & Navy \\ \hline Army & 0 & 0 & 1 & 0 & 0 & 0 \\ Bucknell & 2 & 0 & 0 & 2 & 3 & 0 \\ Holy Cross & 0 & 3 & 0 & 4 & 14 & 0 \\ Lafayette & 10 & 0 & 0 & 0 & 0 & 0 \\ Lehigh & 2 & 0 & 0 & 11 & 0 & 0 \\ Navy & 11 & 14 & 8 & 22 & 6 & 0 \\ \end{tabular}$
This leads to the following matrix:

$M_0=\left(\begin{array}{cccccc} 0 & 0 & 1 & 0 & 0 & 0 \\ 2 & 0 & 0 & 2 & 3 & 0 \\ 0 & 3 & 0 & 4 & 14 & 0 \\ 10 & 0 & 0 & 0 & 0 & 0 \\ 2 & 0 & 0 & 11 & 0 & 0 \\ 11 & 14 & 8 & 22 & 6 & 0 \\ \end{array}\right) .$

The edge-weighted score-differential graph associated to $M_0$ (regarded as a weighted adjacency matrix) is in the figure below.

This matrix $M_0$ must be normalized to create a (row) stochastic matrix:

$M = \left(\begin{array}{cccccc} 0 & 0 & 1 & 0 & 0 & 0 \\ {2}/{7} & 0 & 0 & {2}/{7} & {3}/{7} & 0 \\ 0 & {1}/{7} & 0 & {4}/{21} & {2}/{3} & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 \\ {2}/{13} & 0 & 0 & {11}/{13} & 0 & 0 \\ {11}/{61} & {14}/{61} & {8}/{61} & {22}/{61} & {6}/{61} & 0 \\ \end{array}\right) .$

Next, to ensure it is irreducible, we replace $M$ by $A=(M+J)/2$, where $J$ is the $6\times 6$ doubly stochastic matrix with every entry equal to $1/6$:

$A=\left(\begin{array}{cccccc} {1}/{12} & 1/{12} & 7/{12} & 1/{12} & 1/{12} & 1/{12} \\ {19}/{84} & 1/{12} & 1/{12} & 19/{84} & 25/{84} & 1/{12} \\ {1}/{12} & 13/{84} & 1/{12} & 5/{28} & 5/{12} & 1/{12} \\ {7}/{12} & 1/{12} & 1/{12} & 1/{12} & 1/{12} & 1/{12} \\ {25}/{156} & 1/{12} & 1/{12} & 79/{156} & 1/{12} & 1/{12} \\ {127}/{732} & 145/{732} & 109/{732} & 193/{732} & 97/{732} & 1/{12} \end{array}\right).$

Let

${\bf v}_0 = \left( \frac{1}{6} , \frac{1}{6} , \frac{1}{6} , \frac{1}{6} , \frac{1}{6} , \frac{1}{6}\right).$

The ranking determined by the random walker method is read off from the left eigenvector of $A$ associated to the largest eigenvalue $\lambda_{max}=1$, taken in reverse (by reverse, I mean that the vector ranks the teams from worst to best, rather than from best to worst as in the previous ranking methods).
In other words, we compute the vector

${\bf r}^*=\lim_{n\to \infty}{\bf v}_0A^n.$

This is approximately

${\bf r}^* \cong \left(0.2237\dots ,\,0.1072\dots ,\,0.2006\dots ,\,0.2077\dots ,\,0.1772\dots ,\,0.0833\dots \right).$

Its reverse gives the ranking:

Army $<$ Lafayette $<$ Bucknell $<$ Lehigh $<$ Holy Cross $<$ Navy.

This gives a prediction failure rate of $13.3\%$.
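The limit ${\bf r}^*$ can be approximated numerically by simply iterating ${\bf v}\mapsto {\bf v}A$. Here is a plain-Python sketch that rebuilds $A=(M+J)/2$ from the score-differential matrix $M_0$ above:

```python
M0 = [[0, 0, 1, 0, 0, 0],
      [2, 0, 0, 2, 3, 0],
      [0, 3, 0, 4, 14, 0],
      [10, 0, 0, 0, 0, 0],
      [2, 0, 0, 11, 0, 0],
      [11, 14, 8, 22, 6, 0]]
n = len(M0)
# row-normalize M0, then average with the all-1/6 matrix J to get A = (M + J)/2
M = [[x / sum(row) for x in row] for row in M0]
A = [[(M[i][j] + 1 / n) / 2 for j in range(n)] for i in range(n)]

v = [1 / n] * n                       # v_0
for _ in range(1000):                 # v_{k+1} = v_k A
    v = [sum(v[i] * A[i][j] for i in range(n)) for j in range(n)]
print([round(x, 4) for x in v])       # ≈ (0.2237, 0.1072, 0.2006, 0.2077, 0.1772, 0.0833)
```

Since $A$ is a positive stochastic matrix, the iteration converges to the unique stationary vector regardless of the starting distribution.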

Sports ranking methods, 2

This is the second of a series of expository posts on matrix-theoretic sports ranking methods. This post discusses Keener’s method (see J.P. Keener, The Perron-Frobenius theorem and the ranking of football teams, SIAM Review 35 (1993), 80-93, for details).

See the first post in the series for a discussion of the data we’re using to explain this method. We recall the table of results:

$\begin{tabular}{c|cccccc} \verb+X\Y+ & Army & Bucknell & Holy Cross & Lafayette & Lehigh & Navy \\ \hline Army & x & 14-16 & 14-13 & 14-24 & 10-12 & 8-19 \\ Bucknell & 16-14 & x & 27-30 & 18-16 & 23-20 & 10-22 \\ Holy Cross & 13-14 & 30-27 & x & 19-15 & 17-13 & 9-16 \\ Lafayette & 24-14 & 16-18 & 15-19 & x & 12-23 & 17-39 \\ Lehigh & 12-10 & 20-23 & 13-17 & 23-12 & x & 12-18 \\ Navy & 19-8 & 22-10 & 16-9 & 39-17 & 18-12 & x \\ \end{tabular}$

Win-loss digraph of the Patriot league men’s baseball teams from 2015

Suppose T teams play each other. Let $A=(a_{ij})_{1\leq i,j\leq T}$ be a non-negative square matrix determined by the results of their games, called the preference matrix. In his 1993 paper, Keener defined the score of the $i$th team to be given by

$s_i = \frac{1}{n_i}\sum_{j=1}^T a_{ij}r_j,$

where $n_i$ denotes the total number of games played by team $i$ and ${\bf r}=(r_1,r_2,\dots ,r_T)$ is the rating vector (where $r_i\geq 0$ denotes the rating of team $i$).

One possible preference matrix is the matrix $A$ of total scores obtained from the pre-tournament table above:

$A = \left(\begin{array}{rrrrrr} 0 & 14 & 14 & 14 & 10 & 8 \\ 16 & 0 & 27 & 18 & 23 & 28 \\ 13 & 30 & 0 & 19 & 27 & 43 \\ 24 & 16 & 15 & 0 & 12 & 17 \\ 12 & 20 & 43 & 23 & 0 & 12 \\ 19 & 42 & 30 & 39 & 18 & 0 \end{array}\right),$

(In this case, the factor $1/n_i$ is the same for each team, so we ignore it.)

In his paper, Keener proposed a ranking method where the ranking vector ${\bf r}$ is proportional to its score. The score is expressed as a matrix product $A{\bf r}$, where $A$ is a square preference matrix. In other words, there is a constant $\rho>0$ such that $s_i=\rho r_i$, for each $i$. This is the same as saying $A {\bf r} = \rho {\bf r}$.

The Perron-Frobenius theorem implies that $A$ has an eigenvector ${\bf r}=(r_1,r_2,r_3,r_4,r_5,r_6)$ having positive entries associated to the largest eigenvalue $\lambda_{max}$ of $A$, which has (geometric) multiplicity $1$. Indeed, $A$ has maximum eigenvalue $\lambda_{max}= 110.0385...$, of multiplicity $1$, with eigenvector

${\bf r}=(1, 1.8313\dots , 2.1548\dots , 1.3177\dots , 1.8015\dots , 2.2208\dots ).$

Therefore the teams, according to Keener’s method, are ranked as follows:

Army $<$ Lafayette $<$ Lehigh $<$ Bucknell $<$ Holy Cross $<$ Navy.

This gives a prediction failure rate of just $6.7\%$.
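The Perron eigenvector quoted above can be approximated by power iteration (a plain-Python sketch; the final rescaling so that $r_1=1$ matches the normalization of the vector as printed):

```python
A = [[0, 14, 14, 14, 10, 8],
     [16, 0, 27, 18, 23, 28],
     [13, 30, 0, 19, 27, 43],
     [24, 16, 15, 0, 12, 17],
     [12, 20, 43, 23, 0, 12],
     [19, 42, 30, 39, 18, 0]]
n = len(A)
r = [1.0] * n
for _ in range(500):
    s = [sum(A[i][j] * r[j] for j in range(n)) for i in range(n)]  # s = A r
    top = max(s)
    r = [x / top for x in s]          # renormalize to avoid overflow
lam = sum(A[0][j] * r[j] for j in range(n)) / r[0]  # eigenvalue estimate
r = [x / r[0] for x in r]                           # rescale so r_1 = 1
print(round(lam, 4), [round(x, 4) for x in r])
# ≈ 110.0385 and (1, 1.8313, 2.1548, 1.3177, 1.8015, 2.2208)
```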

Memories of TS Michael, by Thomas Quint

TS Michael passed away on November 22, 2016, from cancer. I will miss him as a colleague and a kind, wise soul. Tom Quint has kindly allowed me to post these reminiscences that he wrote up.

Well, I guess I could start with the reason TS and I met in the first place. I was a postdoc at USNA in about 1991 and pretty impressed with myself. So when USNA offered to continue my postdoc for two more years (rather than give me a tenure track position), I turned it down. Smartest move I ever made, because TS got the position and so we got to know each other.

We started working with each other one day when we both attended a talk on “sphere of influence graphs”. I found the subject moderately interesting, but he came into my office all excited, and I couldn’t get rid of him — wouldn’t leave until we had developed a few research ideas.

Interestingly, his position at USNA turned into a tenure track, while mine didn’t. It wasn’t until 1996 that I got my position at U of Nevada.

Work sessions with him always followed the same pattern. As you may or may not know, T.S. a) refused to fly in airplanes, and b) didn’t drive. Living across the country from each other, the only way we could work together face-to-face was: once each summer I would fly out to the east coast to visit my parents, borrow one of their cars for a week, and drive down to Annapolis. First thing we’d do is go to Whole Foods, where he would load up my car with food and other supplies, enough to last at least a few months. My reward was that he always bought me the biggest package of nigiri sushi we could find — not cheap at Whole Foods!

It was fun, even though I had to suffer through eight million stories about the USNA Water Polo Team.

Oh yes, and he used to encourage me to sneak into one of the USNA gyms to work out. We figured that no one would notice if I wore my Nevada sweats (our color is also dark blue, and the pants also had a big “N” on them). It worked.

Truth be told, TS didn’t really have a family of his own, so I think he considered the mids as his family. He cared deeply about them (with bonus points if you were a math major or a water polo player :-).

One more TS anecdote, complete with photo.  Specifically, TS was especially thrilled to find out that we had named our firstborn son Theodore Saul Quint.  Naturally, TS took to calling him “Little TS”.  At any rate, the photo below is of “Big TS” holding “Little TS”, some time in the Fall of 2000.

TS Michael in 2000.

Simple unsolved math problem, 7

Everyone’s heard of the number $\pi =$ 3.141592…, right?

Robert Couse-Baker / CC BY 2.0 / Flickr: 29233640@N07

And you probably know that $\pi$ is not a rational number (i.e., a quotient of two integers, like 7/3). Unlike a rational number, whose decimal expansion is eventually periodic, if you look at the digits of $\pi$ they seem random,

3.1415926535897932384626433832795028841971693993751058209749445923078164062862089986280348253421170679821480865132823066470938446095505822317253594081284811174502841027019385211055596446229489549303819644288109756659334461284756482…

But are they really? No one knows. There is a paper that explores the statistics of these digits using the first 22.4 trillion digits of $\pi$. Does any finite sequence of $k$ digits (say, for example, the 4-digit sequence 2016) occur just as often as any other sequence of the same length (say, 1492), for each $k$? When the answer is yes, the number is called ‘normal.’ That is, a normal number is a real number whose infinite sequence of digits is distributed uniformly in the sense that each digit has the same natural density $1/10$, and all possible $k$-tuples of digits are equally likely, with density $10^{-k}$, for every integer $k>1$.
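As a (very weak) illustration of the definition, one can count digit frequencies in the first 100 decimal digits of $\pi$; of course, 100 digits prove nothing about normality (a plain-Python sketch):

```python
from collections import Counter

# the first 100 decimal digits of pi, from the expansion printed above
digits = ("14159265358979323846264338327950288419716939937510"
          "58209749445923078164062862089986280348253421170679")
counts = Counter(digits)
total = len(digits)
for d in "0123456789":
    # for a normal number, each frequency would tend to 1/10 as total -> infinity
    print(d, counts[d] / total)
```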

The following simple problem is unsolved:

Conjecture: $\pi$ is normal.

Sports ranking methods, 1

This is the first of a series of expository posts on matrix-theoretic sports ranking methods. This post, which owes much to discussions with TS Michael, discusses Massey’s method.

Massey’s method, currently in use by the NCAA (for football, where teams typically play each other once), was developed by Kenneth P. Massey while an undergraduate math major in the late 1990s. We present a possible variation of Massey’s method adapted to baseball, where teams typically play each other multiple times.

There are exactly $\binom{6}{2}=15$ pairings between the six teams of the Patriot league. These pairs are sorted lexicographically, as follows:

(1,2),(1,3),(1,4), …, (5,6).

In other words, sorted as

Army vs Bucknell, Army vs Holy Cross, Army vs Lafayette, …, Lehigh vs Navy.

The cumulative results of the 2016 regular season are given in the table below. We count only the games played in the Patriot league, not including the Patriot league post-season tournament (see, e.g., the Patriot League site for details). In the table, the total score (since the teams play multiple games against each other) of the team in the vertical column on the left is listed first. In other words, “a – b” in row $i$ and column $j$ means that the total runs scored by team $i$ against team $j$ is $a$, and the total runs allowed by team $i$ against team $j$ is $b$. Here, we order the six teams as above (team $1$ is Army (USMA at West Point), team $2$ is Bucknell, and so on). For instance, if X played Y six times with scores $10-0$, $0-1$, $0-1$, $0-1$, $0-1$, $0-1$, then the table would read $10-5$ in the position of row X and column Y.

 X\Y         Army    Bucknell  Holy Cross  Lafayette  Lehigh  Navy
 Army         x       14-16     14-13       14-24      10-12    8-19
 Bucknell    16-14     x        27-30       18-16      23-20   10-22
 Holy Cross  13-14    30-27      x          19-15      17-13    9-16
 Lafayette   24-14    16-18     15-19        x         12-23   17-39
 Lehigh      12-10    20-23     13-17       23-12       x      12-18
 Navy        19-8     22-10     16-9        39-17      18-12    x

Win-loss digraph of the Patriot league men’s baseball from 2015

In this ordering, we record the result of each pairing (a $+1$ in the winning team’s column, a $-1$ in the losing team’s column) in a $15\times 6$ matrix:

$M = \left(\begin{array}{cccccc} -1 & 1 & 0 & 0 & 0 & 0 \\ 1 & 0 & -1 & 0 & 0 & 0 \\ -1 & 0 & 0 & 1 & 0 & 0 \\ -1 & 0 & 0 & 0 & 1 & 0 \\ -1 & 0 & 0 & 0 & 0 & 1 \\ 0 & -1 & 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & -1 & 0 & 0 \\ 0 & 1 & 0 & 0 & -1 & 0 \\ 0 & -1 & 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & -1 & 0 & 0 \\ 0 & 0 & 1 & 0 & -1 & 0 \\ 0 & 0 & -1 & 0 & 0 & 1 \\ 0 & 0 & 0 & -1 & 1 & 0 \\ 0 & 0 & 0 & -1 & 0 & 1 \\ 0 & 0 & 0 & 0 & -1 & 1 \end{array}\right).$

We also record the winner’s total margin of victory for each pairing in a column vector:

${\bf b}= \left(\begin{array}{c} 2 \\ 1 \\ 10 \\ 2 \\ 11 \\ 3 \\ 2 \\ 3 \\ 14 \\ 4 \\ 14 \\ 10 \\ 11 \\ 22 \\ 6 \\ \end{array}\right).$
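To make the bookkeeping concrete, here is a plain-Python sketch (the code elsewhere in this series uses SageMath, but nothing here requires it) that rebuilds $M$ and ${\bf b}$ from the winner and margin of each pairing and then forms $M^tM$ and $M^t{\bf b}$. The winner/margin pairs are read off the rows of $M$ and ${\bf b}$ above.

```python
from itertools import combinations

# (winner, margin) for each pairing (i, j), read off the rows of M and b;
# teams are numbered 1..6 as Army, Bucknell, Holy Cross, Lafayette,
# Lehigh, Navy.
results = {(1, 2): (2, 2),  (1, 3): (1, 1),  (1, 4): (4, 10), (1, 5): (5, 2),
           (1, 6): (6, 11), (2, 3): (3, 3),  (2, 4): (2, 2),  (2, 5): (2, 3),
           (2, 6): (6, 14), (3, 4): (3, 4),  (3, 5): (3, 14), (3, 6): (6, 10),
           (4, 5): (5, 11), (4, 6): (6, 22), (5, 6): (6, 6)}

pairs = list(combinations(range(1, 7), 2))     # the 15 pairs, lexicographic order
M, b = [], []
for i, j in pairs:
    winner, margin = results[(i, j)]
    loser = i + j - winner
    row = [0] * 6
    row[winner - 1], row[loser - 1] = 1, -1    # +1 for the winner, -1 for the loser
    M.append(row)
    b.append(margin)

# data for the normal equations: M^t M and M^t b
MtM = [[sum(M[k][i] * M[k][j] for k in range(15)) for j in range(6)]
       for i in range(6)]
Mtb = [sum(M[k][i] * b[k] for k in range(15)) for i in range(6)]
```

The computed `MtM` and `Mtb` agree with the matrices displayed below.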

The Massey ranking of these teams is a vector ${\bf r}$ which best fits the equation

$M{\bf r}={\bf b}.$

While the corresponding linear system is over-determined, we can look for a best (in the least squares sense) approximate solution using the orthogonal projection formula

$P_V = B(B^tB)^{-1}B^t,$

valid for matrices $B$ with linearly independent columns. Unfortunately, in this case $B=M$ does not have linearly independent columns, so the formula doesn’t apply.
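The dependence is easy to see directly: every row of $M$ contains exactly one $+1$ and one $-1$, so $M{\bf 1}={\bf 0}$, and consequently $M^tM$ kills the all-ones vector as well. A two-line check on the matrix $M^tM$ of this example (diagonal $5$, off-diagonal $-1$):

```python
# M^t M for the Patriot league example: 5 on the diagonal, -1 elsewhere.
MtM = [[5 if i == j else -1 for j in range(6)] for i in range(6)]
ones = [1] * 6

# every row sums to zero, so the all-ones vector lies in the null space
image = [sum(row[j] * ones[j] for j in range(6)) for row in MtM]
```

Since `image` is the zero vector, $M^tM$ is singular, confirming that the projection formula cannot be applied directly.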

Massey’s clever idea is to solve

$M^tM{\bf r}=M^t{\bf b}$

by row-reduction and determine the rankings from the parameterized form of the solution. To this end, we compute

$M^tM= \left(\begin{array}{rrrrrr} 5 & -1 & -1 & -1 & -1 & -1 \\ -1 & 5 & -1 & -1 & -1 & -1 \\ -1 & -1 & 5 & -1 & -1 & -1 \\ -1 & -1 & -1 & 5 & -1 & -1 \\ -1 & -1 & -1 & -1 & 5 & -1 \\ -1 & -1 & -1 & -1 & -1 & 5 \end{array}\right)$

and

$M^t{\bf b}= \left(\begin{array}{r} -24 \\ -10 \\ 10 \\ -29 \\ -10 \\ 63 \\ \end{array}\right).$

Then we compute the rref of

$A= (M^tM,M^t{\bf b}) = \left(\begin{array}{rrrrrr|r} 5 & -1 & -1 & -1 & -1 & -1 & -24 \\ -1 & 5 & -1 & -1 & -1 & -1 & -10 \\ -1 & -1 & 5 & -1 & -1 & -1 & 10 \\ -1 & -1 & -1 & 5 & -1 & -1 & -29 \\ -1 & -1 & -1 & -1 & 5 & -1 & -10 \\ -1 & -1 & -1 & -1 & -1 & 5 & 63 \end{array}\right),$

which is

$rref(M^tM,M^t{\bf b})= \left(\begin{array}{rrrrrr|r} 1 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; -1 &amp; -\frac{87}{6} \\ 0 &amp; 1 &amp; 0 &amp; 0 &amp; 0 &amp; -1 &amp; -\frac{73}{6} \\ 0 &amp; 0 &amp; 1 &amp; 0 &amp; 0 &amp; -1 &amp; -\frac{53}{6} \\ 0 &amp; 0 &amp; 0 &amp; 1 &amp; 0 &amp; -1 &amp; -\frac{92}{6} \\ 0 &amp; 0 &amp; 0 &amp; 0 &amp; 1 &amp; -1 &amp; -\frac{73}{6} \\ 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 \end{array}\right).$

If ${\bf r}=(r_1,r_2,r_3,r_4,r_5,r_6)$ denotes the rankings of Army, Bucknell, Holy Cross, Lafayette, Lehigh, Navy, in that order, then

$r_1=r_6-\frac{87}{6},\ \ r_2=r_6-\frac{73}{6},\ \ r_3=r_6-\frac{53}{6},\ \ r_4=r_6-\frac{92}{6},\ \ r_5=r_6-\frac{73}{6}.$
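These values can also be recovered numerically. Since only rating differences are pinned down, one may set $r_6=0$ and solve the first five normal equations exactly; here is a plain-Python sketch using the standard-library `fractions` module (rather than the post's SageMath):

```python
from fractions import Fraction

# Normal equations M^t M r = M^t b with r_6 pinned to 0: the first five
# equations form a 5x5 system in r_1..r_5 (diagonal 5, off-diagonal -1).
A = [[Fraction(5) if i == j else Fraction(-1) for j in range(5)] for i in range(5)]
rhs = [Fraction(v) for v in (-24, -10, 10, -29, -10)]

# Gauss-Jordan elimination in exact rational arithmetic
for col in range(5):
    piv = next(k for k in range(col, 5) if A[k][col] != 0)
    A[col], A[piv] = A[piv], A[col]
    rhs[col], rhs[piv] = rhs[piv], rhs[col]
    for k in range(5):
        if k != col and A[k][col] != 0:
            f = A[k][col] / A[col][col]
            A[k] = [a - f * p for a, p in zip(A[k], A[col])]
            rhs[k] -= f * rhs[col]

r = [rhs[i] / A[i][i] for i in range(5)] + [Fraction(0)]   # r_1, ..., r_6
```

Sorting `r` puts Lafayette lowest, then Army, then Bucknell and Lehigh tied, then Holy Cross, then Navy.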

Therefore

Lafayette $<$ Army $<$ Bucknell = Lehigh $<$ Holy Cross $<$ Navy.

If we use this ranking to predict wins and losses over the season, it fails to correctly predict Army vs Holy Cross (Army won), Bucknell vs Lehigh (the ratings are tied), and Lafayette vs Army. This gives a prediction failure rate of $20\%$.
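The $20\%$ figure can be checked mechanically: count the pairings whose winner does not have the strictly larger Massey rating (a rating tie cannot be called correctly either). A small sketch, using the ratings with $r_6$ pinned to $0$ and the winner of each pairing:

```python
from fractions import Fraction

# Massey ratings (r_6 pinned to 0) and the winner of each of the 15 pairings
ratings = [Fraction(-87, 6), Fraction(-73, 6), Fraction(-53, 6),
           Fraction(-92, 6), Fraction(-73, 6), Fraction(0)]
winners = {(1, 2): 2, (1, 3): 1, (1, 4): 4, (1, 5): 5, (1, 6): 6,
           (2, 3): 3, (2, 4): 2, (2, 5): 2, (2, 6): 6, (3, 4): 3,
           (3, 5): 3, (3, 6): 6, (4, 5): 5, (4, 6): 6, (5, 6): 6}

# a pairing is mispredicted when the winner's rating is not strictly higher
misses = [(i, j) for (i, j), w in winners.items()
          if ratings[w - 1] <= ratings[(i + j - w) - 1]]
failure_rate = len(misses) / len(winners)   # 3 misses out of 15 pairings
```

The three misses are exactly the pairings named above: Army vs Holy Cross, Lafayette vs Army, and Bucknell vs Lehigh.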

Simple unsolved math problem, 6

If you know a little point-set topology, below is an unsolved math problem whose statement is relatively simple.

Probably everyone has at least seen the Mandelbrot set in some form, as it’s a popular object of mathematical artists. Here’s a picture from Wikipedia:

The formal definition is as follows. Let $f_c (z)=z^2+c$, where $c\in \mathbb{C}$ is a complex number. The Mandelbrot set $X$ is the set of complex numbers $c$ for which the sequence of iterates

$f_c (0), f_c (f_c (0)), f_c (f_c (f_c (0))), \dots,$

remains bounded in absolute value.
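Membership is easy to test numerically, at least one-sidedly: once $|z|>2$ the orbit is guaranteed to escape to infinity, while staying bounded for finitely many steps is only evidence of membership, not a proof. A minimal escape-time sketch (the 100-iteration cap is an arbitrary choice):

```python
def in_mandelbrot(c, max_iter=100, bound=2.0):
    """Escape-time membership test for the Mandelbrot set: iterate
    f_c(z) = z^2 + c from z = 0.  If |z| ever exceeds 2 the orbit provably
    escapes; surviving max_iter steps only suggests membership."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > bound:
            return False
    return True
```

For example, $c=0$ and $c=-1$ give bounded orbits, while $c=1$ escapes after a few steps (its orbit is $1, 2, 5, 26, \dots$).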
We say $X$ is locally connected if every point $x\in X$ admits a neighborhood basis consisting entirely of open, connected sets.

Conjecture: The Mandelbrot set $X$ is locally connected.

A tribute to TS Michael

I’ve known TS for over 20 years as a principled colleague and a great teacher.

TS at the USNA in Dec 2015.

However, we really never spoke much except for the past five-to-ten years or so. For a period, I wrote a lot about error-correcting codes and we’d talk occasionally about our common interests (for example, I found his paper “The rigidity theorems of Hamada and Ohmori, revisited” fascinating). However, once I became interested in graph theory, we spoke as often as I could corner him. He taught me a lot, and only now do I realize how lucky I was to have him as a colleague.

I remember many times, late on a Friday, when we’d talk for an hour or two about chess, mathematics, “office politics” (he always knew more than me), and allergies. Here’s one of his favorite chess problems:

Mate in 549 moves. This problem was discovered by a team of chess endgame experts at Lomonosov University, Moscow, in August 2012.

Maybe this says more about me than him, but when it was just the two of us, we rarely talked about families or relationships. Nonetheless, he always treated me like a good friend. One of my favorite memories was when my wife and I were shopping at the plaza where his condo building was located (it’s a big plaza). Elva and I were walking store-to-store when we spotted TS. He was walking to distract himself from his discomfort. At the time, doctors didn’t know what his problems were and he suspected allergies. I have a number of food sensitivities and he was a welcome fountain of medical knowledge about these issues. (In fact, his hints have really helped me a lot, health-wise.) In any case, TS and Elva and I spoke for 30 minutes or so about health and family. I remember how gracious and thoughtful he was, skillfully steering the conversation into non-technical matters for Elva’s benefit. I ran into him another time while waiting for Elva, who was in a nearby doctor’s office (I told you this was a big shopping plaza). TS generously waited with me until Elva was ready to be picked up. What we chatted about is lost in the cobwebs of my memory, but I remember vividly where we sat and the kind of day it was. TS had such a kind heart.

As I said, TS taught me a lot about graph theory. Whether in-between classes or when I was lucky enough to spot him late in the day, he’d kindly entertain my naive (usually false) conjectures and speculations about strongly regular graphs. I never heard him speak in anything but the kindest terms. He’d never say “that’s just plain wrong” or “idiotic” (even if it was), but would instead teach me the correct way to think about it, in a manner that let me see for myself how my speculations were wrong-headed. My upcoming book with Caroline Melles is indebted to his insight and suggestions.

Even after he left Maryland to spend his remaining days with his family in California, TS emailed encouragement and suggestions about an expository paper I was writing to help connect my matrix theory students with the methods of ranking sports teams. While he was very helpful and provided me with his excellent insights as usual, in truth, I used the work on the paper as an excuse to keep up with his health status. I’m relatively ignorant of medical issues and tried to stay optimistic until it was totally unrealistic. As sad as it was, he was always frank and honest with me about his prognosis.

He’s gone now, but as a teacher, researcher, and as a kind soul, TS is unforgettable.

A list of TS’s publications:

1. T. S. Michael, Tournaments, book chapter in Handbook of Linear Algebra, 2nd ed, CRC Press, Boca Raton, 2013.
2. T. S. Michael, Cycles of length 5 in triangle-free graphs: a sporadic counterexample to a characterization of equality, Bulletin of the Institute of Combinatorics and Its Applications, 67 (2013) 6–8.
3. T. S. Michael and Val Pinciu, Guarding orthogonal prison yards: an upper bound, Congressus Numerantium, 211 (2012) 57–64.
4. Ilhan Hacioglu and T. S. Michael, The p-ranks of residual and derived skew Hadamard designs, Discrete Mathematics, 311 (2011) 2216-2219.
5. T. S. Michael, Guards, galleries, fortresses, and the octoplex, College Math Journal, 42 (2011) 191-200. (This paper won a Polya Award)
6. Elizabeth Doering, T. S. Michael, and Bryan Shader, Even and odd tournament matrices with minimum rank over finite fields, Electronic Journal of Linear Algebra, 22 (2011) 363-377.
7. Brenda Johnson, Mark E. Kidwell, and T. S. Michael, Intrinsically knotted graphs have at least 21 edges, Journal of Knot Theory and Its Ramifications, 19 (2010) 1423-1429.
8. T. S. Michael, How to Guard an Art Gallery and Other Discrete Mathematical Adventures. Johns Hopkins University Press, Baltimore, 2009.
9. T. S. Michael and Val Pinciu, Art gallery theorems and triangulations, DIMACS Educational Module Series, 2007, 18 pp (electronic 07-1)
10. T. S. Michael and Thomas Quint, Sphericity, cubicity, and edge clique covers of graphs, Discrete Applied Mathematics, 154 (2006) 1309-1313.
11. T. S. Michael and Val Pinciu, Guarding the guards in art galleries, Math Horizons, 14 (2006), 22-23, 25.
12. Richard J. Bower and T. S. Michael, Packing boxes with bricks, Mathematics Magazine, 79 (2006), 14-30.
13. T. S. Michael and Thomas Quint, Optimal strategies for node selection games: skew matrices and symmetric games, Linear Algebra and Its Applications 412 (2006) 77-92.
14. T. S. Michael, Ryser’s embedding problem for Hadamard matrices, Journal of Combinatorial Designs 14 (2006) 41-51.
15. Richard J. Bower and T. S. Michael, When can you tile a box with translates of two given rectangular bricks?, Electronic Journal of Combinatorics 11 (2004) Note 7, 9 pages.
16. T. S. Michael and Val Pinciu, Art gallery theorems for guarded guards, Computational Geometry 26 (2003) 247-258.
17. T. S. Michael, Impossible decompositions of complete graphs into three Petersen subgraphs, Bulletin of the Institute of Combinatorics and Its Applications 39 (2003) 64-66.
18. T. S. Michael and William N. Traves, Independence sequences of well-covered graphs: non-unimodality and the roller-coaster conjecture, Graphs and Combinatorics 19 (2003) 403-411.
19. T. S. Michael and Thomas Quint, Sphere of influence graphs and the L-Infinity metric, Discrete Applied Mathematics 127 (2003) 447-460.
20. T. S. Michael, Signed degree sequences and multigraphs, Journal of Graph Theory 41 (2002) 101-105.
21. T. S. Michael and Val Pinciu, Multiply guarded guards in orthogonal art galleries, Lecture Notes in Computer Science 2073, pp 753-762, in: Proceedings of the International Conference on Computer Science, San Francisco, Springer, 2001.
22. T. S. Michael, The rigidity theorems of Hamada and Ohmori, revisited, in Coding Theory and Cryptography: From the Geheimschreiber and Enigma to Quantum Theory. (Annapolis, MD, 1998), 175-179, Springer, Berlin, 2000.
23. T. S. Michael and Thomas Quint, Sphere of influence graphs in general metric spaces, Mathematical and Computer Modelling, 29 (1999) 45-53.
24. Suk-Geun Hwang, Arnold R. Kraeuter, and T. S. Michael, An upper bound for the permanent of a nonnegative matrix, Linear Algebra and Its Applications 281 (1998), 259-263.
* First Corrections: Linear Algebra and Its Applications 300 (1999), no. 1-3, 1-2
25. T. S. Michael and W. D. Wallis, Skew-Hadamard matrices and the Smith normal form, Designs, Codes, and Cryptography, 13 (1998) 173-176.
26. T. S. Michael, The p-ranks of skew Hadamard designs, Journal of Combinatorial Theory, Series A, 73 (1996) 170-171.
27. T. S. Michael, The ranks of tournament matrices, American Mathematical Monthly, 102 (1995) 637-639.
28. T. S. Michael, Lower bounds for graph domination by degrees, pp 789-800 in Graph Theory, Combinatorics, and Algorithms: Proceedings of the Seventh Quadrennial International Conference on the Theory and Applications of Graphs, Y. Alavi and A. Schwenk (eds.), Wiley, New York, 1995.
29. T. S. Michael and Thomas Quint, Sphere of influence graphs: a survey, Congressus Numerantium, 105 (1994) 153-160.
30. T. S. Michael and Thomas Quint, Sphere of influence graphs: edge density and clique size, Mathematical and Computer Modelling, 20 (1994) 19-24.
31. T. S. Michael and Aaron Stucker, Mathematical pitfalls with equivalence classes, PRIMUS, 3 (1993) 331-335.
32. T. S. Michael, The structure matrix of the class of r-multigraphs with a prescribed degree sequence, Linear Algebra and Its Applications, 183 (1993) 155-177.
33. T. S. Michael, The decomposition of the complete graph into three isomorphic strongly regular graphs, Congressus Numerantium, 85 (1991) 177-183.
34. T. S. Michael, The structure matrix and a generalization of Ryser’s maximum term rank formula, Linear Algebra and Its Applications, 145 (1991) 21-31.
35. Richard A. Brualdi and T. S. Michael, The class of matrices of zeros, ones and twos with prescribed row and column sums, Linear Algebra and Its Applications, 114(115) (1989) 181-198.
36. Richard A. Brualdi and T. S. Michael, The class of 2-multigraphs with a prescribed degree sequence, Linear and Multilinear Algebra, 24 (1989) 81-102.
37. Richard A. Brualdi, John L. Goldwasser, and T. S. Michael, Maximum permanents of matrices of zeros and ones, Journal of Combinatorial Theory, Series A, 47 (1988) 207-245.

Memories of TS Michael, by Bryan Shader

TS Michael passed away on November 22, 2016, from cancer. I will miss him as a colleague and a kind, wise soul.

TS Michael in December 2015 at the USNA

Bryan Shader has kindly allowed me to post these reminiscences that he wrote up.

Memories of TS Michael, by Bryan Shader

Indirect influence
TS indirectly influenced my choice of U. Wisconsin-Madison for graduate school. My senior year as an undergraduate, Herb Ryser gave a talk at my school. After the talk I was able to meet Ryser and asked for advice on graduate schools. Herb indicated that one of his very good undergraduate students had chosen UW-Madison and really liked the program. I later found out that the person was TS.

Back in the dark ages, universities still did registration by hand. This meant that for a couple of days before each semester the masses of students would wind their way through a maze of stations in a large gymnasium. For TS’s first 4 years, he would invariably encounter a road block because someone had permuted the words in his name (Todd Scott Michael) on one of the forms. After concretely verifying the hatcheck probabilities and fearing that this would cause some difficulties in graduating, he legally changed his name to TS Michael.
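(As an aside, the “hatcheck” probability in question — the chance that a random permutation of $n$ names fixes none of them — is $\sum_{k=0}^{n}(-1)^k/k!$, which approaches $1/e\approx 0.368$ very quickly. A small Python sketch of the inclusion-exclusion formula, with a brute-force cross-check:

```python
from fractions import Fraction
from itertools import permutations
from math import factorial

def derangement_prob(n):
    """Probability that a uniformly random permutation of n items has no
    fixed point (the hatcheck problem), via inclusion-exclusion."""
    return sum(Fraction((-1) ** k, factorial(k)) for k in range(n + 1))

def brute_force_prob(n):
    """Same probability by enumerating all n! permutations."""
    perms = list(permutations(range(n)))
    deranged = sum(all(p[i] != i for i in range(n)) for p in perms)
    return Fraction(deranged, len(perms))
```

Even at $n=4$ the probability is already $3/8=0.375$, close to $1/e$.)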

Polyominoes & Permanents
I recall many stories about how TS’s undergraduate work on polyominoes affected his life. In particular, he recalled how, once he started working on tilings by polyominoes, he could no longer shower or swim without visualizing polyomino tilings in the tiles of the wall or floor. We shared an interest and passion for permanents (the permanent is a function of a matrix much like the determinant and plays a critical role in combinatorics). When working together we frequently found that neither of us could calculate the determinant of a 3 by 3 matrix correctly, because we were calculating the permanent rather than the determinant.

Presentations and pipe-dreams
TS and I often talked about how best to give a mathematical lecture, or
presentation at a conference. Perhaps this is not at all surprising, as our common advisor (Richard Brualdi) is an incredible expositor, as was TS’s undergraduate advisor (Herb Ryser, our mathematical grandfather). TS often mentioned how Herb Ryser scripted every moment of a lecture; he knew each word he would write on the board and exactly where it would be written. TS wasn’t quite so prescriptive–but before any presentation he gave he would go to the actual room of the presentation a couple of times and run through the talk. This would include answering questions from the “pretend” audience. After being inspired by TS’s talks, I adopted this preparation method.
TS and I also fantasized about our talks ending with the audience lifting us up on their shoulders and carrying us out of the room in triumph! That has never happened to either of us (that I know of), but having it as a dream has always been good motivation.

Mathematical heritage
TS was very interested in his mathematical heritage, and his mathematical brothers and sisters. TS was the 12th of Brualdi’s 37 PhD students; I was the 15th. In 2005, TS and I organized a conference (called the Brualdifest) in honor of Richard Brualdi. Below I attach some photos of the design for the T-shirt.

t-shirt design for Brualdi-Fest, 1

The first image shows a biclique partition of $K_5$: for each color, the edges of that color form a complete bipartite graph, and each edge of the complete graph on 5 vertices lies in exactly one of these complete bipartite graphs. This is related to one of TS’s favorite theorems: the Graham-Pollak Theorem.
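Such a partition can be sanity-checked in a few lines. The sketch below uses the simplest biclique partition of $K_5$ — four stars — rather than whatever partition the shirt actually used; the Graham-Pollak theorem says no partition of $K_n$ can use fewer than $n-1$ bicliques.

```python
from itertools import combinations

n = 5
# partition K_5 into 4 stars: biclique k joins vertex k to {k+1, ..., 5}
bicliques = [({k}, set(range(k + 1, n + 1))) for k in range(1, n)]

# every edge of K_5 should appear in exactly one biclique
covered = sorted((min(u, v), max(u, v))
                 for left, right in bicliques for u in left for v in right)
edges = sorted(combinations(range(1, n + 1), 2))
```

Here `covered == edges`, so the four stars partition the edge set, matching the Graham-Pollak lower bound of $n-1=4$.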

t-shirt design for Brualdi-Fest, 2

The second image (when the symbols are replaced by 1s) is the incidence matrix of the projective plane of order 2; one of TS’s favorite matrices.

Here’s a photo of the Brualdi and his students at the conference:

From L to R they are: John Mason (?), Thomas Forreger, John Goldwasser, Dan Pritikin, Suk-geun Hwang, Han Cho, T.S. Michael, B. Shader, Keith Chavey, Jennifer Quinn, Mark Lawrence, Susan Hollingsworth, Nancy Neudauer, Adam Berliner, and Louis Deaett.

Here’s a picture for a 2012 conference:

From bottom to top: T.S. Michael (1988), US Naval Academy, MD; Bryan Shader (1990), University of Wyoming, WY; Jennifer Quinn (1993), University of Washington, Tacoma, WA; Nancy Neudauer (1998), Pacific University, OR; Susan Hollingsworth (2006), Edgewood College, WI; Adam Berliner (2009), St. Olaf College, MN; Louis Deaett (2009), Quinnipiac University, CT; Michael Schroeder (2011), Marshall University, WV; Seth Meyer (2012), Kathleen Kiernan (2012).

Here’s a caricature of TS made by Kathy Wilson (spouse of mathematician
Richard Wilson) at the Brualdifest:

TS Michael, by Kathy Wilson

Long Mathematical Discussions
During graduate school, TS and I would regularly bump into each other as we
were coming and going from the office. Often this happened as we were crossing University Avenue, one of the busiest streets in Madison. The typical conversation started with a “Hi, how are you doing? Have you considered X?” We would then spend the next 60-90 minutes on the street corner (whether it was a sweltering 90 degrees+, or a cold, windy day) considering X. In more recent years, these conversations have moved to hotel lobbies at conferences that we attend together. These discussions have been some of the best moments of my life, and through them I have become a better mathematician.

Here’s a photo of T.S. Michael with Kevin van der Meulen at the Brualdi-fest.

I’m guessing they are in the midst of one of those “Have you considered X?” moments that TS is famous for.

Mathematical insight
TS has taught me a lot about mathematics, including:

•  How trying to generalize a result can lead to better understanding of the original result.
•  How phrasing a question appropriately is often the key to a mathematical breakthrough
• Results that are surprising (e.g., go against one’s intuition), use an elegant proof (e.g., bring in matrices in an unexpected way), and are aesthetically pleasing are worth pursuing. (As Piet Hein said, “Problems worthy of attack prove their worth by fighting back.”)
•  The struggle to present the proof of a result in the simplest, most self-contained way is important because often it produces a better understanding. If you can’t say something in a clean way, then perhaps you really don’t understand it fully.

TS’s mathematical forefathers are:
Richard Brualdi ← Herb Ryser ← Cyrus MacDuffee ← Leonard Dickson ← E.H. Moore ← H. A. Newton ← Michel Chasles ← Siméon Poisson ← Joseph Lagrange ← Leonhard Euler ← Johann Bernoulli.