Sports ranking methods, 4

This is the fourth of a series of expository posts on matrix-theoretic sports ranking methods. This post discusses the Elo rating.

This system was originally developed by Arpad Elo (1903-1992), a physics professor at Marquette University in Milwaukee and a chess master, eight-time winner of the Wisconsin State Chess Championship. He developed it in the 1950s and 1960s for rating chess players. It is now also used for table tennis, basketball, and other sports.

We use the following version of his rating system.

As above, assume all the $n$ teams play each other (ties allowed)
and let r_i denote the rating of Team i, i=1,2,\dots,n.

Let A=(A_{ij}) denote an $n\times n$ matrix of score results:

A_{ij}= \left\{ \begin{array}{rr} -1,& {\rm if\ Team\ } i {\rm \ lost\ to\ Team\ } j,\\ +1,& {\rm if\ Team\ } i {\rm\ beat\ Team\ } j,\\ 0, & {\rm if}\ i=j {\rm \ or\ the\ teams\ tied}. \end{array} \right.

Let S_{ij}=(A_{ij}+1)/2.

As in the previous post, the matrix A associated to the example of the Patriot league is the signed adjacency matrix of a digraph.

  1. Initialize all the ratings to be 100: {\bf r}=(r_1,\dots,r_n) = (100,\dots,100).
  2. After Team i plays Team j, update their rating using the formula

    r_i = r_i+K(S_{ij}-\mu_{ij}),

    where K=10 and

    \mu_{ij} = (1+e^{-(r_i-r_j)/400})^{-1}.

In the example of the Patriot league, the ratings vector is

{\bf r}=(85.124, 104.79, 104.88, 85.032, 94.876, 124.53).

This gives the ranking

Lafayette < Army < Lehigh < Bucknell < Holy Cross < Navy.

This gives a prediction failure rate of 13.3\%.

Some SageMath code for this:

def elo_rating(A):
    """
    A is a signed adjacency matrix for a directed graph.

    Returns Elo ratings of the vertices of Gamma = Graph(A).

    EXAMPLES:
        sage: A = matrix(QQ,[
        [ 0, -1,  1, -1, -1, -1],
        [ 1,  0, -1,  1,  1, -1],
        [-1,  1,  0,  1,  1, -1],
        [ 1, -1, -1,  0, -1, -1],
        [ 1, -1, -1,  1,  0, -1],
        [ 1,  1,  1,  1,  1,  0]
        ])
        sage: elo_rating(A)
        (85.124, 104.79, 104.88, 85.032, 94.876, 124.53)

    """
    n = len(A.rows())
    RR = RealField(prec=20)
    V = RR^n
    K = 10       # the K-factor, controlling the size of each update
    r0 = 100     # initial rating
    r = n*[r0]
    for i in range(n):
        for j in range(n):
            # S records the outcome for Team i against Team j:
            # 1 for a win, 0 for a loss, 1/2 for a tie (or i == j)
            if i != j and A[i][j] == 1:
                S = 1
            elif i != j and A[i][j] == -1:
                S = 0
            else:
                S = 1/2
            # mu is the expected score of Team i against Team j
            mu = 1/(1 + e^(-(r[i] - r[j])/400))
            r[i] = r[i] + K*(S - mu)
    return V(r)

Sports ranking methods, 3

This is the third of a series of expository posts on matrix-theoretic sports ranking methods. This post discusses the random walker ranking.

We follow the presentation in the paper by Govan and Meyer (Ranking National Football League teams using Google’s PageRank). The table of “score differentials” based on the table in a previous post is:

x\y        Army  Bucknell  Holy Cross  Lafayette  Lehigh  Navy
Army          0         0           1          0       0     0
Bucknell      2         0           0          2       3     0
Holy Cross    0         3           0          4      14     0
Lafayette    10         0           0          0       0     0
Lehigh        2         0           0         11       0     0
Navy         11        14           8         22       6     0
This leads to the following matrix:

M_0=\left(\begin{array}{cccccc} 0 & 0 & 1 & 0 & 0 & 0 \\ 2 & 0 & 0 & 2 & 3 & 0 \\ 0 & 3 & 0 & 4 & 14 & 0 \\ 10 & 0 & 0 & 0 & 0 & 0 \\ 2 & 0 & 0 & 11 & 0 & 0 \\ 11 & 14 & 8 & 22 & 6 & 0 \\ \end{array}\right) .

The edge-weighted score-differential graph associated to M_0 (regarded as a weighted adjacency matrix) is in the figure below.

[Figure: edge-weighted score-differential digraph of the Patriot league teams]

This matrix M_0 must be normalized to create a (row) stochastic matrix:

M = \left(\begin{array}{cccccc} 0 & 0 & 1 & 0 & 0 & 0 \\ 2/7 & 0 & 0 & 2/7 & 3/7 & 0 \\ 0 & 1/7 & 0 & 4/21 & 2/3 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 \\ 2/13 & 0 & 0 & 11/13 & 0 & 0 \\ 11/61 & 14/61 & 8/61 & 22/61 & 6/61 & 0 \\ \end{array}\right) .

Next, to ensure it is irreducible, we replace M by A=(M+J)/2, where J is the 6\times 6 doubly stochastic matrix with every entry equal to 1/6:

A=\left(\begin{array}{cccccc} 1/12 & 1/12 & 7/12 & 1/12 & 1/12 & 1/12 \\ 19/84 & 1/12 & 1/12 & 19/84 & 25/84 & 1/12 \\ 1/12 & 13/84 & 1/12 & 5/28 & 5/12 & 1/12 \\ 7/12 & 1/12 & 1/12 & 1/12 & 1/12 & 1/12 \\ 25/156 & 1/12 & 1/12 & 79/156 & 1/12 & 1/12 \\ 127/732 & 145/732 & 109/732 & 193/732 & 97/732 & 1/12 \end{array}\right).

Let

{\bf v}_0 = \left( \frac{1}{6} , \frac{1}{6} , \frac{1}{6} , \frac{1}{6} , \frac{1}{6} , \frac{1}{6}\right).

The ranking determined by the random walker method comes from the left eigenvector of A associated to the largest eigenvalue \lambda_{max}=1, read in reverse (by reverse, I mean that the vector ranks the teams from worst-to-best, not from best-to-worst as in the previous ranking methods).
In other words, the ranking is determined by the vector

{\bf r}^*=\lim_{n\to \infty}{\bf v}_0A^n.

This is approximately

{\bf r}^* \cong \left(0.2237\dots ,\,0.1072\dots ,\,0.2006\dots ,\,0.2077\dots ,\,0.1772\dots ,\,0.0833\dots \right).

Its reverse gives the ranking:

Army < Lafayette < Bucknell < Lehigh < Holy Cross < Navy.

This gives a prediction failure rate of 13.3\%.
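Some Sage code for this computation (a minimal sketch: it rebuilds M and A from the matrix M_0 above and approximates the limit {\bf r}^* by iterating {\bf v}\mapsto {\bf v}A):

M0 = matrix(QQ, [
    [ 0,  0, 1,  0,  0, 0],
    [ 2,  0, 0,  2,  3, 0],
    [ 0,  3, 0,  4, 14, 0],
    [10,  0, 0,  0,  0, 0],
    [ 2,  0, 0, 11,  0, 0],
    [11, 14, 8, 22,  6, 0]
    ])
n = M0.nrows()
# normalize each row of M0 to get the row-stochastic matrix M
M = matrix(QQ, [[M0[i][j]/sum(M0[i]) for j in range(n)] for i in range(n)])
J = matrix(QQ, n, n, [1/6]*(n^2))  # every entry equal to 1/6
A = (M + J)/2                      # irreducible and row stochastic
v = vector(QQ, [1/6]*n)            # v_0
for k in range(100):               # v_0*A^k converges to r* as k -> oo
    v = v*A
print([RDF(x) for x in v])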

Sports ranking methods, 2

This is the second of a series of expository posts on matrix-theoretic sports ranking methods. This post discusses Keener’s method (see J.P. Keener, The Perron-Frobenius theorem and the ranking of football teams, SIAM Review 35 (1993) 80-93, for details).

See the first post in the series for a discussion of the data we’re using to explain this method. We recall the table of results:

X\Y Army Bucknell Holy Cross Lafayette Lehigh Navy
Army x 14-16 14-13 14-24 10-12 8-19
Bucknell 16-14 x 27-30 18-16 23-20 10-22
Holy Cross 13-14 30-27 x 19-15 17-13 9-16
Lafayette 24-14 16-18 15-19 x 12-23 17-39
Lehigh 12-10 20-23 13-17 23-12 x 12-18
Navy 19-8 22-10 16-9 39-17 18-12 x
[Figure: win-loss digraph of the Patriot league men's baseball from 2015]

Suppose T teams play each other. Let A=(a_{ij})_{1\leq i,j\leq T} be a non-negative square matrix determined by the results of their games, called the preference matrix. In his 1993 paper, Keener defined the score of the ith team to be given by

s_i = \frac{1}{n_i}\sum_{j=1}^T a_{ij}r_j,

where n_i denotes the total number of games played by team i and {\bf r}=(r_1,r_2,\dots ,r_T) is the rating vector (where r_i\geq 0 denotes the rating of team i).

One possible preference matrix is the matrix A of total scores obtained from the pre-tournament table above:

A = \left(\begin{array}{rrrrrr} 0 & 14 & 14 & 14 & 10 & 8 \\ 16 & 0 & 27 & 18 & 23 & 28 \\ 13 & 30 & 0 & 19 & 27 & 43 \\ 24 & 16 & 15 & 0 & 12 & 17 \\ 12 & 20 & 43 & 23 & 0 & 12 \\ 19 & 42 & 30 & 39 & 18 & 0 \end{array}\right),

(In this case, each team plays the same number of games, so we ignore the 1/n_i factor.)

In his paper, Keener proposed a ranking method where the rating vector {\bf r} is proportional to the score vector. The score vector is expressed as a matrix product A{\bf r}, where A is a square preference matrix. In other words, there is a constant \rho>0 such that s_i=\rho r_i, for each i. This is the same as saying A {\bf r} = \rho {\bf r}.

The Perron-Frobenius theorem implies that A has an eigenvector {\bf r}=(r_1,r_2,r_3,r_4,r_5,r_6) having positive entries associated to the largest eigenvalue \lambda_{max} of A, which has (geometric) multiplicity 1. Indeed, A has maximum eigenvalue \lambda_{max}= 110.0385..., of multiplicity 1, with eigenvector

{\bf r}=(1, 1.8313\dots , 2.1548\dots , 1.3177\dots , 1.8015\dots , 2.2208\dots ).

Therefore, according to Keener’s method, the teams are ranked

Army < Lafayette < Lehigh < Bucknell < Holy Cross < Navy.

This gives a prediction failure rate of just 6.7\%.
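Some Sage code for this computation (a minimal sketch: it approximates the Perron eigenvector of the preference matrix A above by power iteration, normalized so that Army's rating is 1):

A = matrix(RDF, [
    [ 0, 14, 14, 14, 10,  8],
    [16,  0, 27, 18, 23, 28],
    [13, 30,  0, 19, 27, 43],
    [24, 16, 15,  0, 12, 17],
    [12, 20, 43, 23,  0, 12],
    [19, 42, 30, 39, 18,  0]
    ])
r = vector(RDF, [1]*6)
for k in range(1000):    # power iteration converges to the eigenvector
    r = A*r              # associated to the largest eigenvalue
    r = r/r.norm()
print((A*r)[0]/r[0])     # approximates lambda_max
print(r/r[0])            # normalized so the first entry (Army) is 1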

Memories of TS Michael, by Thomas Quint

TS Michael passed away on November 22, 2016, from cancer. I will miss him as a colleague and a kind, wise soul. Tom Quint has kindly allowed me to post these reminiscences that he wrote up.


Well, I guess I could start with the reason TS and I met in the first place. I was a postdoc at USNA in about 1991 and pretty impressed with myself. So when USNA offered to continue my postdoc for two more years (rather than give me a tenure track position), I turned it down. Smartest move I ever made, because TS got the position and so we got to know each other.

We started working with each other one day when we both attended a talk on “sphere of influence graphs”. I found the subject moderately interesting, but he came into my office all excited, and I couldn’t get rid of him; he wouldn’t leave until we had developed a few research ideas.

Interestingly, his position at USNA turned into a tenure track, while mine didn’t. It wasn’t until 1996 that I got my position at U of Nevada.

Work sessions with him always followed the same pattern. As you may or may not know, T.S. a) refused to fly in airplanes, and b) didn’t drive. Living across the country from each other, the only way we could work together face-to-face was: once each summer I would fly out to the east coast to visit my parents, borrow one of their cars for a week, and drive down to Annapolis. First thing we’d do is go to Whole Foods, where he would load up my car with food and other supplies, enough to last at least a few months. My reward was that he always bought me the biggest package of nigiri sushi we could find — not cheap at Whole Foods!

It was fun, even though I had to suffer through eight million stories about the USNA Water Polo Team.

Oh yes, and he used to encourage me to sneak into one of the USNA gyms to work out. We figured that no one would notice if I wore my Nevada sweats (our color is also dark blue, and the pants also had a big “N” on them). It worked.

Truth be told, TS didn’t really have a family of his own, so I think he considered the mids as his family. He cared deeply about them (with bonus points if you were a math major or a water polo player :-).

One more TS anecdote, complete with photo.  Specifically, TS was especially thrilled to find out that we had named our firstborn son Theodore Saul Quint.  Naturally, TS took to calling him “Little TS”.  At any rate, the photo below is of “Big TS” holding “Little TS”, some time in the Fall of 2000.

[Photo: TS Michael in 2000]

Sports ranking methods, 1

This is the first of a series of expository posts on matrix-theoretic sports ranking methods. This post, which owes much to discussions with TS Michael, discusses Massey’s method.

Massey’s method, currently in use by the NCAA (for football, where teams typically play each other once), was developed by Kenneth P. Massey
while an undergraduate math major in the late 1990s. We present a possible variation of Massey’s method adapted to baseball, where teams typically play each other multiple times.

There are exactly \binom{6}{2}=15 pairings between the six teams. These pairs are sorted lexicographically, as follows:

(1,2),(1,3),(1,4), …, (5,6).

In other words, sorted as

Army vs Bucknell, Army vs Holy Cross, Army vs Lafayette, …, Lehigh vs Navy.
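As a quick check, this sorted list of pairs can be generated in Sage/Python directly (the numbering 1-6 follows the team ordering described below):

from itertools import combinations
teams = [1, 2, 3, 4, 5, 6]           # 1 = Army, 2 = Bucknell, ..., 6 = Navy
pairs = list(combinations(teams, 2)) # the 15 pairs, in lexicographic order
print(len(pairs))                    # 15
print(pairs)                         # [(1, 2), (1, 3), (1, 4), ..., (5, 6)]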

The cumulative results of the 2016 regular season are given in the table below. We count only the games played in the Patriot league, not including the Patriot league post-season tournament (see, e.g., the Patriot League site for details). In the table, the total score (since the teams play multiple games against each other) of the team in the vertical column on the left is listed first. In other words, “a - b” in row i and column j means that the total runs scored by team i against team j is a, and the total runs allowed by team i against team j is b. Here, we order the six teams as above (team 1 is Army (the US Military Academy at West Point), team 2 is Bucknell, and so on). For instance, if X played Y six times and the scores were 10-0, 0-1, 0-1, 0-1, 0-1, 0-1, then the table would read 10-5 in the position of row X and column Y.

X\Y Army Bucknell Holy Cross Lafayette Lehigh Navy
Army x 14-16 14-13 14-24 10-12 8-19
Bucknell 16-14 x 27-30 18-16 23-20 10-22
Holy Cross 13-14 30-27 x 19-15 17-13 9-16
Lafayette 24-14 16-18 15-19 x 12-23 17-39
Lehigh 12-10 20-23 13-17 23-12 x 12-18
Navy 19-8 22-10 16-9 39-17 18-12 x
[Figure: win-loss digraph of the Patriot league men's baseball from 2015]

In this ordering, we record the aggregate win-loss outcome of each of the 15 pairings, one per row (a 1 in the winning team's column, a -1 in the losing team's column), in a 15\times 6 matrix:

M = \left(\begin{array}{cccccc} -1 & 1 & 0 & 0 & 0 & 0 \\ 1 & 0 & -1 & 0 & 0 & 0 \\ -1 & 0 & 0 & 1 & 0 & 0 \\ -1 & 0 & 0 & 0 & 1 & 0 \\ -1 & 0 & 0 & 0 & 0 & 1 \\ 0 & -1 & 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & -1 & 0 & 0 \\ 0 & 1 & 0 & 0 & -1 & 0 \\ 0 & -1 & 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & -1 & 0 & 0 \\ 0 & 0 & 1 & 0 & -1 & 0 \\ 0 & 0 & -1 & 0 & 0 & 1 \\ 0 & 0 & 0 & -1 & 1 & 0 \\ 0 & 0 & 0 & -1 & 0 & 1 \\ 0 & 0 & 0 & 0 & -1 & 1 \end{array}\right).

We also record the winning team's total margin of victory (in runs) for each pairing in a column vector:

{\bf b}= \left(\begin{array}{c} 2 \\ 1 \\ 10 \\ 2 \\ 11 \\ 3 \\ 2 \\ 3 \\ 14 \\ 4 \\ 14 \\ 10 \\ 11 \\ 22 \\ 6 \\ \end{array}\right).

The Massey ranking of these teams is a vector {\bf r} which best fits the equation

M{\bf r}={\bf b}.

While the corresponding linear system is over-determined, we can look for a best (in the least squares sense) approximate solution using the orthogonal projection formula

P_V = B(B^tB)^{-1}B^t,

valid for matrices B with linearly independent columns. Unfortunately, in this case B=M does not have linearly independent columns, so the formula doesn’t apply.
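This can be checked in Sage (a quick sketch, assuming the 15\times 6 matrix M above has been entered as a matrix over QQ):

sage: M.rank()    # only 5, so the 6 columns are linearly dependent
5
sage: (M.transpose()*M).is_invertible()
False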

Massey’s clever idea is to solve

M^tM{\bf r}=M^t{\bf b}

by row-reduction and determine the rankings from the parameterized form of the solution. To this end, we compute

M^tM= \left(\begin{array}{rrrrrr} 5 & -1 & -1 & -1 & -1 & -1 \\ -1 & 5 & -1 & -1 & -1 & -1 \\ -1 & -1 & 5 & -1 & -1 & -1 \\ -1 & -1 & -1 & 5 & -1 & -1 \\ -1 & -1 & -1 & -1 & 5 & -1 \\ -1 & -1 & -1 & -1 & -1 & 5 \end{array}\right)

and

M^t{\bf b}= \left(\begin{array}{r} -24 \\ -10 \\ 10 \\ -29 \\ -10 \\ 63 \\ \end{array}\right).

Then we compute the rref of

A= (M^tM,M^t{\bf b}) = \left(\begin{array}{rrrrrr|r} 5 & -1 & -1 & -1 & -1 & -1 & -24 \\ -1 & 5 & -1 & -1 & -1 & -1 & -10 \\ -1 & -1 & 5 & -1 & -1 & -1 & 10 \\ -1 & -1 & -1 & 5 & -1 & -1 & -29 \\ -1 & -1 & -1 & -1 & 5 & -1 & -10 \\ -1 & -1 & -1 & -1 & -1 & 5 & 63 \end{array}\right),

which is

rref(M^tM,M^t{\bf b})= \left(\begin{array}{rrrrrr|r} 1 & 0 & 0 & 0 & 0 & -1 & -\frac{87}{6} \\ 0 & 1 & 0 & 0 & 0 & -1 & -\frac{73}{6} \\ 0 & 0 & 1 & 0 & 0 & -1 & -\frac{53}{6} \\ 0 & 0 & 0 & 1 & 0 & -1 & -\frac{92}{6} \\ 0 & 0 & 0 & 0 & 1 & -1 & -\frac{73}{6} \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{array}\right).

If {\bf r}=(r_1,r_2,r_3,r_4,r_5,r_6) denotes the rankings of Army, Bucknell, Holy Cross, Lafayette, Lehigh, Navy, in that order, then

r_1=r_6-\frac{87}{6},\ \ r_2=r_6-\frac{73}{6},\ \ r_3=r_6-\frac{53}{6},\ \ r_4=r_6-\frac{92}{6},\ \ r_5=r_6-\frac{73}{6}.

Therefore

Lafayette < Army < Bucknell = Lehigh < Holy Cross < Navy.

If we use this ranking to predict win/losses over the season, it would fail to correctly predict Army vs Holy Cross (Army won), Bucknell vs Lehigh, and Lafayette vs Army. This gives a prediction failure rate of 20\%.
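Some Sage code for this computation (a minimal sketch, entering M and {\bf b} from above and row-reducing the normal equations):

M = matrix(QQ, [
    [-1, 1, 0, 0, 0, 0], [ 1, 0,-1, 0, 0, 0], [-1, 0, 0, 1, 0, 0],
    [-1, 0, 0, 0, 1, 0], [-1, 0, 0, 0, 0, 1], [ 0,-1, 1, 0, 0, 0],
    [ 0, 1, 0,-1, 0, 0], [ 0, 1, 0, 0,-1, 0], [ 0,-1, 0, 0, 0, 1],
    [ 0, 0, 1,-1, 0, 0], [ 0, 0, 1, 0,-1, 0], [ 0, 0,-1, 0, 0, 1],
    [ 0, 0, 0,-1, 1, 0], [ 0, 0, 0,-1, 0, 1], [ 0, 0, 0, 0,-1, 1]
    ])
b = vector(QQ, [2, 1, 10, 2, 11, 3, 2, 3, 14, 4, 14, 10, 11, 22, 6])
N = M.transpose()*M           # the matrix M^t M
c = M.transpose()*b           # the right-hand side M^t b
print(N.augment(c).rref())    # last column expresses each r_i in terms of r_6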

A tribute to TS Michael

I’ve known TS for over 20 years as a principled colleague and a great teacher.

[Photo: TS at the USNA in Dec 2015]

However, we really never spoke much except during the past five-to-ten years or so. For a period, I wrote a lot about error-correcting codes and we’d talk occasionally about our common interests (for example, I found his paper “The rigidity theorems of Hamada and Ohmori, revisited” fascinating). But once I became interested in graph theory, we spoke as often as I could corner him. He taught me a lot, and only now do I realize how lucky I was to have him as a colleague.

I remember many times, late on a Friday, when we’d talk for an hour or two about chess, mathematics, “office politics” (he always knew more than me), and allergies. Here’s one of his favorite chess problems:

[Chess diagram: mate in 549 moves. This problem was discovered by a team of chess endgame experts at Lomonosov University, Moscow, in August 2012.]

Maybe this says more about me than him, but when it was just the two of us, we rarely talked about families or relationships. Nonetheless, he always treated me like a good friend. One of my favorite memories was when my wife and I were shopping at the plaza where his condo building was located (it’s a big plaza). Elva and I were walking store-to-store when we spotted TS. He was walking to distract himself from his discomfort. At the time, doctors didn’t know what his problems were and he suspected allergies. I have a number of food sensitivities and he was a welcome fountain of medical knowledge about these issues. (In fact, his hints have really helped me a lot, health-wise.) In any case, TS and Elva and I spoke for 30 minutes or so about health and family. I remember how gracious and thoughtful he was, skillfully steering the conversation into non-technical matters for Elva’s benefit. I ran into him another time while waiting for Elva, who was in a nearby doctor’s office (I told you this was a big shopping plaza). TS generously waited with me until Elva was ready to be picked up. What we chatted about is lost in the cobwebs of my memory, but I remember vividly where we sat and the kind of day it was. TS had such a kind heart.

As I said, TS taught me a lot about graph theory. Whether in-between classes or when I was lucky enough to spot him late in the day, he’d kindly entertain my naive (usually false) conjectures and speculations about strongly regular graphs. I never heard him speak in anything but the kindest terms. He’d never say “that’s just plain wrong” or “idiotic” (even if it was) but would instead teach me the correct way to think about it, in a manner in which I could see for myself how my speculations were wrong-headed. My upcoming book with Caroline Melles is indebted to his insight and suggestions.

Even after he left Maryland to spend his remaining days with his family in California, TS emailed encouragement and suggestions about an expository paper I was writing to help connect my matrix theory students with the methods of ranking sports teams. While he was very helpful and provided me with his excellent insights as usual, in truth I used the work on the paper as an excuse to keep up with his health status. I’m relatively ignorant of medical issues and tried to stay optimistic until it was totally unrealistic. As sad as it was, he was always frank and honest with me about his prognosis.

He’s gone now, but as a teacher, researcher, and as a kind soul, TS is unforgettable.


 

A list of TS’s publications:

  1. T. S. Michael, Tournaments, book chapter in Handbook of Linear Algebra, 2nd ed, CRC Press, Boca Raton, 2013.
  2. T. S. Michael, Cycles of length 5 in triangle-free graphs: a sporadic counterexample to a characterization of equality, Bulletin of the Institute of Combinatorics and Its Applications, 67 (2013) 6–8.
  3. T. S. Michael and Val Pinciu, Guarding orthogonal prison yards: an upper bound,
    Congressus Numerantium, 211 (2012) 57–64.
  4. Ilhan Hacioglu and T. S. Michael, The p-ranks of residual and derived skew Hadamard designs,
    Discrete Mathematics, 311 (2011) 2216-2219.
  5. T. S. Michael, Guards, galleries, fortresses, and the octoplex, College Math Journal, 42 (2011) 191-200. (This paper won a Polya Award)
  6. Elizabeth Doering, T. S. Michael, and Bryan Shader, Even and odd tournament matrices with minimum rank over finite fields, Electronic Journal of Linear Algebra, 22 (2011) 363-377.
  7. Brenda Johnson, Mark E. Kidwell, and T. S. Michael, Intrinsically knotted graphs have at least 21 edges, Journal of Knot Theory and Its Ramifications, 19 (2010) 1423-1429.
  8. T. S. Michael, How to Guard an Art Gallery and Other Discrete Mathematical Adventures. Johns Hopkins University Press, Baltimore, 2009.
  9. T. S. Michael and Val Pinciu, Art gallery theorems and triangulations, DIMACS Educational Module Series, 2007, 18 pp (electronic 07-1)
  10. T. S. Michael and Thomas Quint, Sphericity, cubicity, and edge clique covers of graphs, Discrete Applied Mathematics, 154 (2006) 1309-1313.
  11. T. S. Michael and Val Pinciu, Guarding the guards in art galleries, Math Horizons, 14 (2006), 22-23, 25.
  12. Richard J. Bower and T. S. Michael, Packing boxes with bricks, Mathematics Magazine, 79 (2006), 14-30.
  13. T. S. Michael and Thomas Quint, Optimal strategies for node selection games: skew matrices and symmetric games, Linear Algebra and Its Applications 412 (2006) 77-92.
  14. T. S. Michael, Ryser’s embedding problem for Hadamard matrices, Journal of Combinatorial Designs 14 (2006) 41-51.
  15. Richard J. Bower and T. S. Michael, When can you tile a box with translates of two given rectangular bricks?, Electronic Journal of Combinatorics 11 (2004) Note 7, 9 pages.
  16. T. S. Michael and Val Pinciu, Art gallery theorems for guarded guards, Computational Geometry 26 (2003) 247-258.
  17. T. S. Michael, Impossible decompositions of complete graphs into three Petersen subgraphs, Bulletin of the Institute of Combinatorics and Its Applications 39 (2003) 64-66.
  18. T. S. Michael and William N. Traves, Independence sequences of well-covered graphs: non-unimodality and the roller-coaster conjecture, Graphs and Combinatorics 19 (2003) 403-411.
  19. T. S. Michael and Thomas Quint, Sphere of influence graphs and the L-Infinity metric, Discrete Applied Mathematics 127 (2003) 447-460.
  20. T. S. Michael, Signed degree sequences and multigraphs, Journal of Graph Theory 41 (2002) 101-105.
  21. T. S. Michael and Val Pinciu, Multiply guarded guards in orthogonal art galleries, Lecture Notes in Computer Science 2073, pp 753-762, in: Proceedings of the International Conference on Computer Science, San Francisco, Springer, 2001.
  22. T. S. Michael, The rigidity theorems of Hamada and Ohmori, revisited, in Coding Theory and Cryptography: From the Geheimschreiber and Enigma to Quantum Theory. (Annapolis, MD, 1998), 175-179, Springer, Berlin, 2000.
  23. T. S. Michael and Thomas Quint, Sphere of influence graphs in general metric spaces, Mathematical and Computer Modelling, 29 (1999) 45-53.
  24. Suk-Geun Hwang, Arnold R. Kraeuter, and T. S. Michael, An upper bound for the permanent of a nonnegative matrix, Linear Algebra and Its Applications 281 (1998), 259-263.
    * First Corrections: Linear Algebra and Its Applications 300 (1999), no. 1-3, 1-2
  25. T. S. Michael and W. D. Wallis, Skew-Hadamard matrices and the Smith normal form, Designs, Codes, and Cryptography, 13 (1998) 173-176.
  26. T. S. Michael, The p-ranks of skew Hadamard designs, Journal of Combinatorial Theory, Series A, 73 (1996) 170-171.
  27. T. S. Michael, The ranks of tournament matrices, American Mathematical Monthly, 102 (1995) 637-639.
  28. T. S. Michael, Lower bounds for graph domination by degrees, pp 789-800 in Graph Theory, Combinatorics, and Algorithms: Proceedings of the Seventh Quadrennial International Conference on the Theory and Applications of Graphs, Y. Alavi and A. Schwenk (eds.), Wiley, New York, 1995.
  29. T. S. Michael and Thomas Quint, Sphere of influence graphs: a survey, Congressus Numerantium, 105 (1994) 153-160.
  30. T. S. Michael and Thomas Quint, Sphere of influence graphs: edge density and clique size, Mathematical and Computer Modelling, 20 (1994) 19-24.
  31. T. S. Michael and Aaron Stucker, Mathematical pitfalls with equivalence classes, PRIMUS, 3 (1993) 331-335.
  32. T. S. Michael, The structure matrix of the class of r-multigraphs with a prescribed degree sequence, Linear Algebra and Its Applications, 183 (1993) 155-177.
  33. T. S. Michael, The decomposition of the complete graph into three isomorphic strongly regular graphs, Congressus Numerantium, 85 (1991) 177-183.
  34. T. S. Michael, The structure matrix and a generalization of Ryser’s maximum term rank formula, Linear Algebra and Its Applications, 145 (1991) 21-31.
  35. Richard A. Brualdi and T. S. Michael, The class of matrices of zeros, ones and twos with prescribed row and column sums, Linear Algebra and Its Applications, 114(115) (1989) 181-198.
  36. Richard A. Brualdi and T. S. Michael, The class of 2-multigraphs with a prescribed degree sequence, Linear and Multilinear Algebra, 24 (1989) 81-102.
  37. Richard A. Brualdi, John L. Goldwasser, and T. S. Michael, Maximum permanents of matrices of zeros and ones, Journal of Combinatorial Theory, Series A, 47 (1988) 207-245.

 

The minimum upset ranking problem

Suppose n teams play each other, and let Team r_1 < Team r_2 < \dots < Team r_n denote some fixed ranking (where r_1,\dots,r_n is some permutation of 1,\dots,n). An upset occurs when a lower-ranked team beats a higher-ranked team. For each ranking, {\bf r}, let U({\bf r}) denote the total number of upsets. The minimum upset problem is to find an “efficient” construction of a ranking for which U({\bf r}) is as small as possible.

In general, let A_{ij} denote the number of times Team i beat Team j minus the number of times Team j beat Team i. We regard this matrix as the signed adjacency matrix of a digraph \Gamma. Our goal is to find a Hamiltonian (undirected) path through the vertices of \Gamma which goes the “wrong way” on as few edges as possible.

  1. Construct the list of spanning trees of \Gamma (regarded as an undirected graph).
  2. Construct the sublist of Hamiltonian paths (from the spanning trees of maximum degree 2).
  3. For each Hamiltonian path, compute the associated upset number: the total number of edges traversed in \Gamma going the “right way” minus the total number going the “wrong way.”
  4. Locate a Hamiltonian path for which this upset number is as large as possible (maximizing it minimizes the number of upsets).

Use this sagemath/python code to compute such a Hamiltonian path.

def hamiltonian_paths(Gamma, signed_adjacency_matrix = []):
    """
    Returns a list of hamiltonian paths (spanning trees of
    max degree <= 2).

    EXAMPLES:
        sage: Gamma = graphs.GridGraph([3,3])
        sage: HP = hamiltonian_paths(Gamma)
        sage: len(HP)
        20
        sage: A = matrix(QQ,[
        [ 0, -1,  1, -1, -1, -1],
        [ 1,  0, -1,  1,  1, -1],
        [-1,  1,  0,  1,  1, -1],
        [ 1, -1, -1,  0, -1, -1],
        [ 1, -1, -1,  1,  0, -1],
        [ 1,  1,  1,  1,  1,  0]
        ])
        sage: Gamma = Graph(A, format='weighted_adjacency_matrix')
        sage: HP = hamiltonian_paths(Gamma, signed_adjacency_matrix = A)
        sage: L = [sum(x[2]) for x in HP]; max(L)
        5
        sage: L.index(5)
        21
        sage: HP[21]
        [Graph on 6 vertices,
         [0, 5, 2, 1, 3, 4],
         [-1, 1, -1, -1, -1, 1, 1, 1, 1, 1, 1, 1, 1, 1, -1]]
        sage: L.count(5)
        1

    """
    ST = Gamma.spanning_trees()
    # a spanning tree is a hamiltonian path iff its maximum degree is <= 2
    if signed_adjacency_matrix == []:
        return [X for X in ST if max(X.degree_sequence()) <= 2]
    A = signed_adjacency_matrix
    HP = []
    for X in ST:
        if max(X.degree_sequence()) <= 2:
            # the endpoints of the path are its two vertices of degree 1
            ends = [v for v in X.vertices() if X.degree(v) == 1]
            ranking = X.shortest_path(ends[0], ends[1])
            # signs of all ordered pairs (earlier team, later team) in the ranking
            signature = [A[ranking[i]][ranking[j]]
                         for i in range(len(ranking)-1)
                         for j in range(i+1, len(ranking))]
            HP.append([X, ranking, signature])
    return HP

Wessell describes this method in a different way.

  1. Construct a matrix, M=(M_{ij}), with rows and columns indexed by the teams in some fixed order. The entry in the i-th row and the j-th column is defined by

    M_{ij}= \left\{ \begin{array}{rr} 0,& {\rm if\ team\ } i {\rm \ lost\ to\ team\ } j,\\ 1,& {\rm if\ team\ } i {\rm\ beat\ team\ } j,\\ 0, & {\rm if}\ i=j. \end{array} \right.
  2. Reorder the rows (and corresponding columns) into a basic win-loss order: the teams that won the most games go at the
    top of M, and those that lost the most at the bottom.
  3. Randomly swap rows and their associated columns, each time checking whether the
    number of upsets has gone down from the previous arrangement. If it has, we keep
    the swap that just happened; if not, we switch the two rows and columns back and try again.

An implementation of this in Sagemath/Python code is:

def minimum_upset_random(M,N=10):
    """
    Randomly swaps rows and columns of M, keeping a swap only if it
    increases the number of wins recorded above the diagonal
    (equivalently, decreases the number of upsets).

    EXAMPLES:
        sage: M = matrix(QQ,[
        [0, 0, 1, 0, 0, 0],
        [1, 0, 0, 1, 1, 0],
        [0, 1, 0, 1, 1, 0],
        [1, 0, 0, 0, 0, 0],
        [1, 0, 0, 1, 0, 0],
        [1, 1, 1, 1, 1, 0]
        ])
        sage: minimum_upset_random(M)
        (
        [0 0 1 1 0 1]
        [1 0 0 1 0 1]
        [0 1 0 0 0 0]
        [0 0 1 0 0 0]
        [1 1 1 1 0 1]
        [0 0 1 1 0 0], [1, 2, 0, 3, 5, 4]
        )

    """
    n = len(M.rows())
    Sn = SymmetricGroup(n)
    M1 = M
    # number of non-upsets: wins recorded above the diagonal
    wins = sum([sum([M1[j][i] for i in range(j,n)]) for j in range(n)])
    g0 = Sn.identity()
    for k in range(N):
        g = Sn.random_element()
        P = g.matrix()
        M0 = P*M1*P^(-1)   # reorder the rows and columns by g
        wins0 = sum([sum([M0[j][i] for i in range(j,n)]) for j in range(n)])
        if wins0 > wins:   # keep the swap only if it reduces upsets
            M1 = M0
            g0 = g*g0
            wins = wins0
    return M1, g0(list(range(n)))