In the works: a book “Exploring Graphs via Harmonic Morphisms”

Caroline Melles and I have been working for some years on a 2-volume book in graph theory which investigates harmonic morphisms. These are, roughly speaking, mappings from one graph to another that preserve locally harmonic functions on these graphs. Therefore, this topic fits into the general framework of harmonic analysis on graphs.

This post only concerns the first volume. The intent here is to mention some of the types of results we obtain. Of course, by no means is it intended to be a complete description.

The second volume will be summarized in a separate post.

Graphs in our book are unweighted and, unless stated otherwise, have no loops or multiple edges. The basic idea is this: in chapter 2 we characterize harmonic morphisms using a criterion expressed as a matrix identity. For various graph-theoretical constructions (such as edge deletion, join, a graph product, …) that can be performed on a given graph Gamma, we pick a graph morphism associated to the construction (such as the map sending a vertex of the constructed graph to a vertex of the given graph). That morphism is associated to a matrix (which we called the vertex map matrix in chapter 3 of our earlier book, Adventures in Graph Theory). When this matrix satisfies the above-mentioned matrix criterion, the associated morphism is harmonic.

Chapter 1 on Graph Morphisms

This chapter is devoted to background on graph morphisms and some of the methods we use to study them.

  1. Roughly speaking, a morphism is a mapping between graphs that preserves incidence structure. After defining horizontal and vertical edges, vertical multiplicities, and local horizontal multiplicities, the chapter recalls well-known graph families such as cycle graphs, path graphs, and complete graphs.
  2. There are a few very useful degree identities. First, there is a fundamental formula relating vertex degrees to multiplicities under morphisms. There is also a formula for the degree
    of the morphism in terms of vertical multiplicities and local horizontal multiplicities.
  3. A topic threading through the book is that of matrix-theoretic methods. This first chapter introduces the vertex map matrices and edge map matrices that encode morphisms. After establishing key matrix identities and products, the chapter reviews adjacency matrices and their spectra, with a detailed analysis of cycle graph eigenvalues using Chebyshev polynomials and complex roots of unity.
  4. It recalls signed and unsigned incidence matrices, with and without edge orientations, and establishes the fundamental Graph Homomorphism Identity relating incidence matrices to morphism matrices.
  5. It introduces Laplacian matrices as differences of degree and adjacency matrices, connecting them to the incidence matrix framework.
  6. It introduces graph blowup morphisms via a blowup construction in which vertices are replaced by independent sets, creating natural homomorphisms with specific structural properties.
  7. Some functorial properties of graph morphisms are established, such as how morphisms behave under graph constructions like subdivisions, smoothing, deletions, and substitutions.
  8. The chapter ends with exercises and a chapter summary.

Chapter 2 on Harmonic Morphisms

    This chapter is devoted to the basics of harmonic morphisms.
  1. Introduces the core definition: a graph morphism is harmonic if, at each vertex of the source graph, the local horizontal multiplicity is the same for every edge incident to that vertex’s image (a rough computational sketch of this condition appears just after this list).
  2. Cycle space and cocycle space – Develops the algebraic framework using homology and cohomology of graphs. Covers Urakawa’s theorem on pullbacks of harmonic 1-forms and Baker-Norine results on divisors and Jacobians.
  3. Matrix-theoretic methods – Establishes the fundamental matrix characterization: a morphism is harmonic iff there exists a diagonal multiplicity matrix satisfying specific adjacency matrix identities. Proves equivalence with an analogous Laplacian matrix identity and an analogous incidence matrix criterion.
  4. The Riemann-Hurwitz formula – Presents the graph-theoretic analogue relating genera of graphs via harmonic morphisms, with matrix proof and applications to regular graphs.
  5. Some functorial consequences – Demonstrates how harmonic morphisms interact with graph constructions like subdivision, edge substitution, leaf addition, and deletion. Shows these
    operations preserve harmonicity under appropriate conditions.
  6. The chapter ends with exercises and a chapter summary.
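
For readers who want to experiment, here is a rough SageMath/Python sketch (my own, not code from the book) of the harmonicity test described in item 1. It assumes phi is already a graph morphism, given as a Python dict on the vertices of Gamma2, and that both graphs are simple SageMath graphs.

def is_harmonic_sketch(Gamma2, Gamma1, phi):
    """
    Rough check that the graph morphism phi : Gamma2 -> Gamma1 (a dict on
    vertices) is harmonic: at each vertex x of Gamma2, the number of edges
    at x lying over a given edge at phi(x) must not depend on that edge.
    """
    for x in Gamma2.vertices():
        y = phi[x]
        counts = []
        for yprime in Gamma1.neighbors(y):
            # local horizontal multiplicity of x over the edge {y, yprime}
            n = sum(1 for xprime in Gamma2.neighbors(x) if phi[xprime] == yprime)
            counts.append(n)
        if len(set(counts)) > 1:
            return False
    return True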

(Figure caption: all harmonic morphisms from this graph to C4 are covers.)

  1. Fundamental Problem: Given a graph Gamma1, for which graphs Gamma2 is there a non-trivial harmonic morphism phi from Gamma2 to Gamma1?
  2. Follow-up question: Can the number of such phi be counted?

Chapter 3 on Counting Problems

This chapter looks at various families of graphs, such as the path graphs. What is especially remarkable is that, as we will see, the problem of counting harmonic morphisms often boils down to solving certain recurrence relations, some of which arose (in a completely different context, of course) in the work of medieval mathematicians, both in Europe and in India.

  1. Regarding harmonic morphisms between path graphs, we show how to construct and count the harmonic morphisms from longer path graphs to shorter ones.
  2. Regarding harmonic morphisms between cycle graphs, we show how to construct and count the harmonic morphisms from larger cycle graphs (when they exist) to smaller ones. It turns out all such harmonic morphisms are necessarily covers.
  3. Regarding harmonic morphisms between complete graphs, we show how to construct and count the harmonic morphisms from larger complete graphs (when they exist) to smaller ones.
  4. Harmonic morphisms to P2 (arising from the Baker-Norine Theorem) can be counted.
  5. Harmonic morphisms to P3 (the path graph with only 3 vertices) can be counted in special cases.
    There are lots of open questions, such as which trees have a harmonic morphism to P3.
  6. The chapter ends with exercises and a chapter summary.

Chapter 4 on Harmonic Quotient Morphisms

    This chapter studies quotient graphs arising from group actions and from vertex partitions.
  1. Quotient graphs from group actions. Harmonic actions and transitive actions are studied separately.
  2. Quotient graphs from partitions. Orbit partitions and equitable partitions are studied.
  3. As a nice application of harmonic morphisms with particularly nice structural properties, we consider multicovers and blowup graphs.
  4. The last section provides explicit formulas for the eigenvalue spectra of harmonic blowups of bipartite graphs, connecting the eigenvalues of the source and target graphs through the blowup parameters. The main result is the Godsil-McKay Theorem.
  5. The chapter ends with exercises and a chapter summary.

Chapter 5 on Graph Morphisms and Graph Products

    This chapter studies graph morphisms associated to tensor products of graphs and lexicographical products of graphs.

    Roughly speaking, a graph product of Gamma1 with Gamma2 is a graph Gamma3 = (V3, E3), where V3 = V1 x V2 is the Cartesian product and there is a rule for the edges E3 based on some conditions on the vertices. The graph products considered in this book are the disjunctive, Cartesian, tensor, lexicographic, and the strong products.

    The most basic questions one wants answered are these:
    is the projection pr1 : Gamma1 x Gamma2 to Gamma1 harmonic, and
    is the projection pr2 : Gamma1 x Gamma2 to Gamma2 harmonic?
    If they do turn out to be harmonic morphisms, we also want to know the vertical and horizontal multiplicities. If they do not, we want (if possible) to establish conditions on the graphs under which the projections are harmonic.

    However, we want to not only consider products of graphs but also products of morphisms.
    In this case, the most basic question one wants answered is this:
    Given harmonic morphisms phi : Gamma2 to Gamma1 and phi’ : Gamma2′ to Gamma1′, is the
    product phi x phi’ harmonic?

  1. For example, we show that projection morphisms from tensor products are always harmonic with explicit horizontal multiplicity formulas.
  2. Moreover, we prove that the tensor product of harmonic morphisms (without vertical edges) yields a harmonic morphism with horizontal multiplicity matrix given by the Kronecker product of the original multiplicity matrices.
  3. If Gamma x Gamma’ is a lexicographical product, then the projection onto the first factor, pr1, is a harmonic morphism. However, the projection onto the second factor is not, in general.
  4. We establish a connection between the balanced blowup graph and a lexicographical product. One corollary of this connection is that the blowdown morphism agrees with the projection onto the first factor of the product, and so is harmonic.

Chapter 6 on More Products and Constructions

  1. This chapter studies graph morphisms associated to Cartesian/strong/disjunctive products of graphs as well as joins and NEPS graphs.
  2. For example, we show that projection morphisms from Cartesian products or from strong products are always harmonic with explicit horizontal multiplicity formulas.
  3. Roughly speaking, one of the results states:
    Given two m-quasi-multicovers phi from Gamma2 to Gamma1 and phi’ from Gamma2′ to Gamma1′, the Cartesian product phi x phi’ is also an m-quasi-multicover (hence harmonic).
  4. Another result, roughly speaking, states:
    Given two harmonic morphisms phi from Gamma2 to Gamma1 and phi’ from Gamma2′ to Gamma1′, the
    strong product phi x phi’ is also harmonic.
  5. Can one classify the graphs for which the disjunctive product projections pr1 or pr2 are graph morphisms?
  6. For example, we show that if phi from Gamma2 to Gamma1 and phi’ from Gamma2′ to Gamma1′ are graph morphisms, then the associated product map from Gamma2 x Gamma2′ to Gamma1 x Gamma1′ (where x is the disjunctive product) is, in general, not a graph morphism.
  7. Given two harmonic morphisms phi from Gamma2 to Gamma1 and phi’ from Gamma2′ to Gamma1′, the join morphism phi wedge phi’ is harmonic if and only if a certain technical condition is true.
  8. A theorem due to Urakawa states that projection morphisms from a NEPS graph to one
    of its factors are always harmonic. Moreover, we give explicit horizontal multiplicity formulas.
  9. The chapter ends with exercises and a chapter summary.

Computations are supported throughout using SageMath and Mathematica. The plan is to publish the volume with Birkhauser. We thank the editors there, especially John Benedetto, for their encouragement and guidance.

The Beautiful Math of Tangent Lines: Unveiling the Lemniscate’s Secrets

An experimental post

I asked Gemini to create a blog post based on this plot (below) and the Python code I wrote (based on ideas explained to me by Edray Goins) that generated it. (The prompt I fed to Gemini was modified from a prompt found online for generating a blog post about food.) The response, basically unedited, is below. The topic is inspired by the PhD thesis of Euphemia Haynes (a student of Aubrey Landry, Catholic University of America, 1943). See this earlier post for more information on it.

Mathematics can be beautiful, and sometimes that beauty is hidden in plain sight. Take, for instance, this striking image:

a graceful lemniscate curve adorned with a web of tangent lines, all seemingly converging on a single point. This isn’t some artistic rendering; it’s the output of a fascinating algorithm, plot_lemniscate_quartic_tangents, written in Python and SageMath.

This algorithm doesn’t just draw a pretty picture; it solves a sophisticated geometric problem: finding all the tangent lines to a lemniscate quartic that also pass through a specific point on the curve. Let’s dissect the magic:

  1. The Lemniscate: The algorithm begins by defining the lemniscate quartic curve. This curve, shaped like a sideways figure-eight, has a unique and elegant equation: (x² + y²)² = a(x² – y²), where ‘a’ is a constant.
  2. The Target Point: We choose a specific point, P0, on the lemniscate. This is the point where all our tangent lines will magically meet.
  3. The Tangent Hunt: This is where the algorithm gets clever. It doesn’t directly solve for the tangent lines. Instead, it iterates through numerous points (P) on the lemniscate. For each point, it calculates two things:
    • The slope of the secant line connecting P0 and P.
    • The slope of the tangent line to the lemniscate at point P.
  4. The Convergence: If the slopes from step 3 are approximately equal (within a tolerance), it means the secant line through P0 and P is also a tangent line to the curve at P. This line is added to our collection of tangent lines.
  5. The Reveal: Finally, the algorithm plots the lemniscate along with all the tangent lines it discovered. The result is the captivating image we see—a testament to the hidden geometric relationships within this seemingly simple curve.

The algorithm isn’t just about finding tangents; it reveals a beautiful correspondence between points on the lemniscate and its tangent lines. It’s a visual exploration of the intricate geometry underlying algebraic curves, demonstrating how computation can unveil hidden mathematical beauty. So next time you see a seemingly simple curve, remember the rich and sometimes surprising geometry hiding within.
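
For readers who want to recreate something like this, here is a minimal sketch of steps 3 and 4 above (my own reconstruction, not the original plot_lemniscate_quartic_tangents code). It uses the normalization (x² + y²)² = a²(x² − y²) (which only renames the constant in the equation above), a standard trigonometric parametrization to sample points, and implicit differentiation for the tangent slope; the function names and the tolerance are mine.

import numpy as np

def lemniscate_points(a=1.0, n=2000):
    # sample points on (x^2 + y^2)^2 = a^2 (x^2 - y^2)
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    d = 1.0 + np.sin(t)**2
    return a * np.cos(t) / d, a * np.sin(t) * np.cos(t) / d

def tangent_slope(x, y, a=1.0):
    # dy/dx from implicit differentiation of F = (x^2 + y^2)^2 - a^2 (x^2 - y^2)
    Fx = 4.0 * x * (x**2 + y**2) - 2.0 * a**2 * x
    Fy = 4.0 * y * (x**2 + y**2) + 2.0 * a**2 * y
    return -Fx / Fy

def tangents_through(x0, y0, a=1.0, tol=1e-2):
    # keep the sample points P whose tangent slope approximately matches
    # the slope of the secant line through P0 = (x0, y0) and P
    xs, ys = lemniscate_points(a)
    hits = []
    for x, y in zip(xs, ys):
        if abs(y) < 1e-9 or abs(x - x0) < 1e-9:
            continue  # skip vertical tangents/secants and points too close to P0
        secant = (y - y0) / (x - x0)
        if abs(secant - tangent_slope(x, y, a)) < tol:
            hits.append((x, y))
    return hits

Drawing the line through (x0, y0) and each returned point, on top of the curve itself, reproduces (approximately) the web of tangent lines in the figure.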

The baseball states graph

A state of a baseball game is a 10-tuple (s0,s1,s2,s3,j,vs,hs,tab,b,s), where

  • s0 denotes the number of outs (represented as one of the integers 0,1,2)
  • s1 is 0 if there is no runner on 1st base and 1 otherwise,
  • s2 is 0 if there is no runner on 2nd base and 1 otherwise,
  • s3 is 0 if there is no runner on 3rd base and 1 otherwise,
  • j is the inning number (represented usually as one of the integers 1, 2, . . . , 9, but it can have a larger value if the score is tied),
  • vs is the current score (number of runs) of the visiting team,
  • hs is the current score of the home team,
  • tab is the “team at bat” – 0 for visitor and 1 for home,
  • b counts the balls to the batter,
  • s counts the strikes.

For simplicity, we will always work within a given inning and omit the inning number variable and those following it. Therefore, for the remainder, regard the set of all possible states as a list of 4-tuples (s0,s1,s2,s3). These states can be listed in an 8×3 array:

(0,0,0,0), (1,0,0,0), (2,0,0,0),
(0,1,0,0), (1,1,0,0), (2,1,0,0),
(0,0,1,0), (1,0,1,0), (2,0,1,0),
(0,0,0,1), (1,0,0,1), (2,0,0,1),
(0,1,1,0), (1,1,1,0), (2,1,1,0),
(0,1,0,1), (1,1,0,1), (2,1,0,1),
(0,0,1,1), (1,0,1,1), (2,0,1,1),
(0,1,1,1), (1,1,1,1), (2,1,1,1)

Similarly, the so-called run expectancy matrix (denoted REM or RE24) is formed by examining the 24 states of a baseball game and recording, for each state, the expected number of runs scored from that state to the end of the inning. (This matrix depends on the year the games were played and on the ballpark. Interactive RE24 visualizations and downloadable datasets can be found on the FanGraphs website, https://blogs.fangraphs.com/introducing-the-batter-specific-run-expectancy-tool/ .) We are only looking at the states here.

There are often several ways to transition from one state to another. For example, the transition (0,0,0,0) -> (0,0,0,1) could come from the batter hitting a triple and ending up on 3rd base. In the other direction, the transition (0,1,1,1) -> (0,0,0,0) is also possible, when the batter hits a grand slam home run. We list (most of) the possible transitions below. Omitted are the “self-transitions”, such as (1) (0,0,0,0) -> (0,0,0,0) when the batter hits a home run, or (2) (0,1,0,0) -> (0,1,0,0) when the batter hits a single but gets an RBI from the runner on 1st base advancing all the way home. (Question: These self-transitions yield loops in the associated graph, but should they be allowed for wider applications?) As indicated above, we also omit transitioning to the next inning (3 outs).

Transitions from (0,0,0,0): (0,1,0,0), (0,0,1,0), (0,0,0,1), (1,0,0,0).
Transitions from (0,1,0,0): (0,0,0,0), (0,0,1,0), (0,0,0,1), (0,1,1,0),  (0,1,0,1),  (0,0,1,1), (1,1,0,0), (1,0,1,0), (1,0,0,1), (2,0,0,0).
Transitions from (0,0,1,0): (0,0,0,0), (0,1,0,0), (0,0,0,1), (0,1,1,0),  (0,1,0,1),  (0,0,1,1), (1,1,0,0), (1,0,1,0), (1,0,0,1), (2,0,0,0).
Transitions from (0,0,0,1): (0,0,0,0), (0,1,0,0), (0,0,1,0), (0,1,0,1),  (0,0,1,1), (1,1,0,0), (1,0,1,0), (1,0,0,1), (2,0,0,0).
Transitions from (0,1,1,0): (0,0,0,0),  (0,1,0,0), (0,0,1,0), (0,0,0,1), (0,1,0,1), (0,0,1,1), (0,1,1,1), (1,1,0,0), (1,0,1,0), (1,0,0,1), (1,1,1,0), (1,1,0,1), (1,0,1,1), (2,0,0,0), (2,1,0,0), (2,0,1,0), (2,0,0,1).
Transitions from (0,1,0,1): (0,0,0,0), (0,1,0,0), (0,0,1,0), (0,0,0,1), (0,1,1,0), (0,0,1,1), (0,1,1,1), (1,1,0,0), (1,0,1,0), (1,0,0,1), (1,1,1,0), (1,1,0,1), (1,0,1,1), (2,0,0,0), (2,1,0,0), (2,0,1,0), (2,0,0,1).
Transitions from (0,0,1,1): (0,0,0,0), (0,1,0,0), (0,0,1,0), (0,0,0,1), (0,1,1,0), (0,1,0,1), (0,1,1,1), (1,1,0,0), (1,0,1,0), (1,0,0,1), (1,1,1,0), (1,1,0,1), (1,0,1,1), (2,0,0,0), (2,1,0,0), (2,0,1,0), (2,0,0,1).
Transitions from (0,1,1,1): (0,0,0,0), (0,1,0,0), (0,0,1,0), (0,0,0,1),  (0,1,1,0), (0,1,0,1), (0,0,1,1), (1,1,0,0), (1,0,1,0), (1,0,0,1), (1,1,1,0), (1,1,0,1), (1,0,1,1), (2,0,0,0), (2,1,0,0), (2,0,1,0), (2,0,0,1), (2,1,1,0), (2,1,0,1), (2,0,1,1).

Transitions from (1,0,0,0): (1,1,0,0), (1,0,1,0), (1,0,0,1), (2,0,0,0).
Transitions from (1,1,0,0): (1,0,0,0), (1,0,1,0), (1,0,0,1), (1,1,1,0), (1,1,0,1),  (1,0,1,1), (2,0,0,0), (2,1,0,0), (2,0,1,0), (2,0,0,1).
Transitions from (1,0,1,0): (1,0,0,0), (1,1,0,0), (1,0,0,1), (1,1,1,0), (1,1,0,1),  (1,0,1,1), (2,0,0,0), (2,1,0,0), (2,0,1,0), (2,0,0,1).
Transitions from (1,0,0,1): (1,0,0,0), (1,1,0,0), (1,0,1,0), (1,1,0,1), (1,0,1,1), (2,0,0,0), (2,1,0,0), (2,0,1,0), (2,0,0,1).
Transitions from (1,1,1,0): (1,0,0,0), (1,1,0,0), (1,0,1,0), (1,0,0,1), (1,1,0,1), (1,0,1,1), (1,1,1,1), (2,0,0,0), (2,1,0,0), (2,0,1,0), (2,0,0,1), (2,1,1,0), (2,1,0,1), (2,0,1,1).
Transitions from (1,1,0,1): (1,0,0,0), (1,1,0,0), (1,0,1,0), (1,0,0,1), (1,1,1,0), (1,0,1,1), (1,1,1,1), (2,0,0,0), (2,1,0,0), (2,0,1,0), (2,0,0,1), (2,1,1,0), (2,1,0,1), (2,0,1,1).
Transitions from (1,0,1,1): (1,0,0,0), (1,1,0,0), (1,0,1,0), (1,0,0,1), (1,1,1,0), (1,1,0,1), (1,1,1,1), (2,0,0,0), (2,1,0,0), (2,0,1,0), (2,0,0,1), (2,1,1,0), (2,1,0,1), (2,0,1,1).
Transitions from (1,1,1,1): (1,0,0,0), (1,1,0,0), (1,0,1,0), (1,0,0,1), (1,1,1,0), (1,1,0,1), (1,0,1,1), (2,0,0,0), (2,1,0,0), (2,0,1,0), (2,0,0,1), (2,1,1,0), (2,1,0,1), (2,0,1,1), (2,1,1,1).

Transitions from (2,0,0,0): (2,1,0,0), (2,0,1,0), (2,0,0,1).
Transitions from (2,1,0,0): (2,0,0,0), (2,0,1,0), (2,0,0,1), (2,1,1,0), (2,1,0,1), (2,0,1,1).
Transitions from (2,0,1,0): (2,0,0,0), (2,1,0,0), (2,0,0,1), (2,1,1,0), (2,1,0,1), (2,0,1,1).
Transitions from (2,0,0,1): (2,0,0,0), (2,1,0,0), (2,0,1,0), (2,1,1,0), (2,1,0,1), (2,0,1,1).
Transitions from (2,1,1,0): (2,0,0,0), (2,1,0,0), (2,0,1,0), (2,0,0,1), (2,1,0,1), (2,0,1,1), (2,1,1,1).
Transitions from (2,1,0,1): (2,0,0,0), (2,1,0,0), (2,0,1,0), (2,0,0,1), (2,1,1,0), (2,0,1,1), (2,1,1,1).
Transitions from (2,0,1,1): (2,0,0,0), (2,1,0,0), (2,0,1,0), (2,0,0,1), (2,1,1,0), (2,1,0,1), (2,1,1,1).
Transitions from (2,1,1,1): (2,0,0,0), (2,1,0,0), (2,0,1,0), (2,0,0,1), (2,1,1,0), (2,1,0,1), (2,0,1,1).

The graph whose vertices are the 24 states and whose edges are determined by the above neighborhoods (two states are joined by an edge when a transition between them is possible in at least one direction) has 24 vertices and 182 edges. The degree of each state is given as follows:

(0, 0, 0, 0), 8
(0, 0, 0, 1), 11
(0, 0, 1, 0), 11
(0, 0, 1, 1), 17
(0, 1, 0, 0), 11
(0, 1, 0, 1), 17
(0, 1, 1, 0), 17
(0, 1, 1, 1), 20
(1, 0, 0, 0), 9
(1, 0, 0, 1), 18
(1, 0, 1, 0), 18
(1, 0, 1, 1), 18
(1, 1, 0, 0), 18
(1, 1, 0, 1), 18
(1, 1, 1, 0), 18
(1, 1, 1, 1), 15
(2, 0, 0, 0), 22
(2, 0, 0, 1), 18
(2, 0, 1, 0), 18
(2, 0, 1, 1), 12
(2, 1, 0, 0), 18
(2, 1, 0, 1), 12
(2, 1, 1, 0), 12
(2, 1, 1, 1), 8

Note that the state of maximal degree (22) is (2,0,0,0).
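
For anyone who wants to reproduce these numbers, here is a minimal SageMath sketch (mine, with only two of the 24 adjacency lists filled in) showing how the state graph can be built from the transition lists above; with all 24 lists entered, it should recover the 182 edges and the degrees listed.

# transition lists entered as a dict of out-neighbors; only two of the
# 24 entries are shown here -- the rest follow the same pattern as above
transitions = {
    (0,0,0,0): [(0,1,0,0), (0,0,1,0), (0,0,0,1), (1,0,0,0)],
    (2,1,1,1): [(2,0,0,0), (2,1,0,0), (2,0,1,0), (2,0,0,1),
                (2,1,1,0), (2,1,0,1), (2,0,1,1)],
    # ... the remaining 22 states go here ...
}
Gamma = Graph()  # undirected: an edge whenever a transition exists in either direction
for s, nbrs in transitions.items():
    for t in nbrs:
        Gamma.add_edge(s, t)
print(Gamma.num_verts(), Gamma.num_edges())
print(sorted((v, Gamma.degree(v)) for v in Gamma.vertices()))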

Harmonic quotients of regular graphs – examples

Caroline Melles and I have written a preprint that collects numerous examples of harmonic quotient morphisms \phi : \Gamma_2 \to \Gamma_1, where \Gamma_1=\Gamma_2/G is a quotient graph obtained from some subgroup G \subset Aut(\Gamma_2). The examples are for graphs having a small number of vertices (no more than 12) and, for the most part, for regular graphs of small degree (no more than 5). They were all computed using SageMath and a module of special-purpose Python functions I’ve written (available on request). I’ve not counted, but the number of examples is relatively large, maybe over one hundred.

I’ll post it to the math arXiv at some point, but if you are interested now, here’s a copy: click here for pdf.

A graph_id function in SageMath

While GAP has a group_id function which locates a “small” group in a small groups database (see the SageMath page or the GAP page for more info), AFAIK, SageMath doesn’t have something similar. I’ve written one (see below) based on the mountain of hard work done years ago by Emily Kirkman.

def graph_id_graph6(Gamma, verbose=False):
    """
    Returns the graph6 id of Gamma = (V,E).
    If verbose then it also displays the table of all graphs with
    |V| vertices and |E| edges.

    Assumes Gamma is a "small" graph.

    EXAMPLES:
    sage: Gamma = graphs.HouseGraph()
    sage: graph_id_graph6(Gamma, verbose=False)
    'Dbk'
    sage: graph_id_graph6(Gamma, verbose=True)

     graphs with 5 vertices and 6 edges:

    Graph6               Aut Grp Size         Degree Sequence
    ------------------------------------------------------------
    DB{                  2                    [1, 2, 2, 3, 4]
    DFw                  12                   [2, 2, 2, 3, 3]
    DJ[                  24                   [0, 3, 3, 3, 3]
    DJk                  2                    [1, 2, 3, 3, 3]
    DK{                  8                    [2, 2, 2, 2, 4]
    Dbk                  2                    [2, 2, 2, 3, 3]


    'Dbk'

    """
    n = len(Gamma.vertices())
    m = len(Gamma.edges())
    ds = Gamma.degree_sequence()
    # query the built-in graph database for all graphs with n vertices and m edges
    Q = GraphQuery(display_cols=['graph6', 'aut_grp_size', 'degree_sequence'], num_edges=['=', m], num_vertices=["=", n])
    for g in Q.get_graphs_list():
        if g.is_isomorphic(Gamma):
            if verbose:
                print("\n graphs with %s vertices"%n+" and %s edges:\n"%m)
                Q.show()
                print("\n")
            return g.graph6_string()

Let’s do the Landau shuffle

Here’s a shuffle I’ve not seen before:

  1. Take an ordinary deck of 52 cards and place them, face up, in the following pattern:
    Going from the top of the deck to the bottom, placing cards down left-to-right, put 13 cards in the top row:
    \Box\ \Box\ \Box\ \Box\ \Box\ \Box\ \Box\ \Box\ \Box\ \Box\ \Box\ \Box\ \Box\
    11 cards in the next row:
    \Box\ \Box\ \Box\ \Box\ \Box\ \Box\ \Box\ \Box\ \Box\ \Box\ \Box\
    then 9 cards in the next row:
    \Box\ \Box\ \Box\ \Box\ \Box\ \Box\ \Box\ \Box\ \Box\
    then 7 cards in the next row:
    \Box\ \Box\ \Box\ \Box\ \Box\ \Box\ \Box\
    then 5 cards in the next row:
    \Box\ \Box\ \Box\ \Box\ \Box\
    then 3 cards in the next row:
    \Box\ \Box\ \Box\
    and finally, the remaining 4 cards in the last row:
    \Box\ \Box\ \Box\ \Box\
  2. Now, take the left-most card in each row and move it to the right of the others (effectively, this is a cyclic shift of that row of cards to the left).
  3. Finally, reassemble the deck by reversing the order of the placement.

In memory of the great German mathematician Edmund Landau (1877-1938, see also this bio), I call this the Landau shuffle. As with any card shuffle, this shuffle permutes the original ordering of the cards. To restore the deck to its original ordering you must perform this shuffle exactly 180180 times. (By the way, this is called the order of the shuffle.) Yes, one hundred eighty thousand, one hundred and eighty times. Moreover, no other shuffle than this Landau shuffle will require more repetitions to restore the deck. So, in some sense, the Landau shuffle is the shuffle that most effectively rearranges the cards.
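
Here is a short SageMath sketch (mine, not from the original post) that models the net effect of the shuffle as a cyclic left shift within each row, ignoring the face-up/face-down bookkeeping, and verifies that its order is 180180:

# row lengths from the layout above; build the permutation of 1..52 that
# cyclically shifts each row one place to the left
rows, images, start = [13, 11, 9, 7, 5, 3, 4], [], 1
for r in rows:
    row = list(range(start, start + r))
    images += row[1:] + row[:1]
    start += r
sigma = Permutation(images)     # the Landau shuffle, as modeled here
print(lcm(sigma.cycle_type()))  # 180180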

Now suppose we have a deck of n (distinctly labeled) cards, where n>2 is an integer. The collection of all possible shuffles, or permutations, of this deck is denoted S_n and called the symmetric group. The above discussion leads naturally to the following question(s).

Question: What is the largest possible order of a shuffle of this deck (and how do you construct it)?

This requires a tiny bit of group theory. You only need to know that any permutation of n symbols (such as the card deck above) can be decomposed into a composition (or product) of disjoint cycles. To compute the order of an element g \in S_n, write that element g in disjoint cycle notation. Denote the lengths of the disjoint cycles occurring in g as k_1, k_2, \dots , k_m, where the k_i>0 are integers forming a partition of n: n = k_1 + k_2 +\dots + k_m. Then the order of g is known to be given by order(g) = LCM(k_1, k_2, \dots, k_m), where LCM denotes the least common multiple.
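
As a quick illustration (my own snippet) in SageMath:

sage: g = Permutation([2, 3, 1, 5, 4])   # the product of disjoint cycles (1,2,3)(4,5)
sage: g.cycle_type()
[3, 2]
sage: lcm(g.cycle_type())                # the order of g
6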

The Landau function g(n) is the function that returns the maximum possible order of an element g \in S_n. Landau introduced this function in a 1903 paper where he also proved the asymptotic relation \lim_{n\to \infty} \frac{\ln(g(n))}{\sqrt{n\ln(n)}}=1.

Example: If n=52 then note 52 = 13+11+9+7+5+3+4 and that LCM(13,11,9,7,5,3,4)=180180.

Oddly, my favorite mathematical software program SageMath does not have an implementation of the Landau function, so we end with some SageMath code.

def landau_function(n):
    # maximum, over all partitions of n, of the lcm of the parts
    L = Partitions(n).list()
    lcms = [lcm(P) for P in L]
    return max(lcms)

Here is an example (the time is included to show this took about 2 seconds on my rather old mac laptop):

sage: time landau_function(52)
CPU times: user 1.91 s, sys: 56.1 ms, total: 1.97 s
Wall time: 1.97 s
180180

Differential equations and SageMath

The files below were on my teaching page when I was a college teacher. Since I retired, they disappeared. Samuel Lelièvre found an archived copy on the web, so I’m posting them here.

The files are licensed under the Attribution-ShareAlike Creative Commons license.

  1. Partial fractions handout, pdf
  2. Introduction to matrix determinants handout, pdf
  3. Impulse-response handout, pdf
  4. Introduction to ODEs, pdf
  5. Initial value problems, pdf
  6. Existence and uniqueness, pdf
  7. Euler’s method for numerically approximating solutions to DEs, pdf.
    Includes both 1st order DE case (with Euler and improved Euler) and higher order DE and systems of DEs cases, without improved Euler.
  8. Direction fields and isoclines, pdf
  9. 1st order ODEs, separable and linear cases, pdf
  10. A falling body problem in Newtonian mechanics, pdf
  11. A mixing problem, pdf
  12. Linear ODEs, I, pdf
  13. Linear ODEs, II, pdf
  14. Undetermined coefficients for non-homogeneous 2nd order constant coefficient ODEs, pdf
  15. Variation of parameters for non-homogeneous 2nd order constant coefficient ODEs, pdf.
  16. Annihilator method for non-homogeneous 2nd order constant coefficient ODEs, pdf.
    I found students preferred (the more-or-less equivalent) undetermined coefficient method, so didn’t put much effort into these notes.
  17. Springs, I, pdf
  18. Springs, II, pdf
  19. Springs, III, pdf
  20. LRC circuits, pdf
  21. Power series methods, I, pdf
  22. Power series methods, II, pdf
  23. Introduction to Laplace transform methods, I, pdf
  24. Introduction to Laplace transform methods, II, pdf
  25. Lanchester’s equations modeling the battle between two armies, pdf
  26. Row reduction/Gauss elimination method for systems of linear equations, pdf.
  27. Eigenvalue method for homogeneous constant coefficient 2×2 systems of 1st order ODEs, pdf.
  28. Variation of parameters for first order non-homogeneous linear constant coefficient systems of ODEs, pdf.
  29. Electrical networks using Laplace transforms, pdf
  30. Separation of variables and the Transport PDE, pdf
  31. Fourier series, pdf.
  32. One-dimensional heat equation using Fourier series, pdf.
  33. One-dimensional wave equation using Fourier series, pdf.
  34. One-dimensional Schroedinger’s wave equation for a “free particle in a box” using Fourier series, pdf.
  35. All these lectures collected as one pdf (216 pages).
    While licensed Attribution-ShareAlike CC, in the US this book is in the public domain, as it was written while I was a US federal government employee as part of my official duties. A warning – it has lots of typos. The latest version, written with Marshall Hampton, is a JHUP book, much more polished, available on amazon and the JHUP website. Google “Introduction to Differential Equations Using Sage”.

Course review: pdf

Love, War, and Zombies, pdf
This set of slides is from a lecture I would give if there was enough time towards the end of the semester.

Mathematics of zombies

What do you do if there is a zombie attack? Can mathematics help? This page is (humorously) dedicated to collecting links to papers or blog posts related to mathematical models of zombies.

 

George Romero’s 1968 Night of the Living Dead, now in the public domain, introduced reanimated ghouls, otherwise known as zombies, which craved live human flesh. Romero’s script was inspired by Richard Matheson’s I Am Legend. In Romero’s version, the zombies could be killed by destroying the zombie’s brain. A dead human could, in some cases, be “reanimated,” turning into a zombie. These conditions are modeled mathematically in several papers, given below.

  1. When Zombies Attack! Mathematical Modelling of a Zombie Outbreak!, paper by Mathematicians at the University of Ottawa, Philip Munz, Ioan Hudea, Joe Imad and Robert J. Smith? (yes, his last name is spelled “Smith?”).
  2. youtube video 28 Minutes Later – The Maths of Zombies, made by Dr James Grime (aka “singingbanana”), which references the above paper.
  3. Epidemics in the presence of social attraction and repulsion, Oct 2010 Zombie paper by Evelyn Sander and Chad M. Topaz.
  4. Statistical Inference in a Zombie Outbreak Model, slides for a talk given by Des Higman, May 2010.
  5. Mathematics kills zombies dead!, 08/17/2009 blog post by “Zombie Research Society Staff”.
  6. The Mathematics of Zombies, August 18, 2009 blog post by Mike Elliot.
  7. Love, War and Zombies – Systems of Differential Equations using Sage, April 2011 slides by David Joyner. Sage commands for Love, War and Zombies talk. This was given as a Project Mosaic/M-cast broadcast.
  8. Public domain 1968 film Night of the Living Dead by George Romero.

Linear systems of graphs in Sage

Let \Gamma be a graph. A divisor on \Gamma is an element of the free abelian group generated by the vertices V, denoted {\mathbb{Z}}[V].

We say that divisors D and D^\prime are linearly equivalent and write D \sim D^\prime if D^\prime-D is a principal divisor, i.e., if D^\prime = D + \text{div}(f) for some function f : V \rightarrow {\mathbb{Z}}. Here \text{div}(f) is the divisor whose value at a vertex v is \sum_{w \sim v} (f(v)-f(w)); in other words, the principal divisors are exactly the integer vectors in the image of the Laplacian matrix of \Gamma (this is the fact the code below exploits). Note that if D and D^\prime are linearly equivalent, they must have the same degree, since the degree of every principal divisor is 0. Divisors of degree 0 are linearly equivalent if and only if they determine the same element of the Jacobian. If D is a divisor of degree 0, we denote by [D] the element of the Jacobian determined by D. A divisor D is said to be effective if D(v) \geq 0 for all vertices v. We write D \geq 0 to mean that D is effective. The linear system associated to a divisor D is the set

|D| = \{ D^\prime \in \text{Div}(\Gamma ) : D^\prime \geq 0 \text{ and } D^\prime \sim D\},

i.e., |D| is the set of all effective divisors linearly equivalent to D. Note that if D_1 \sim D_2, then |D_1| = |D_2|. We note also that if \text{deg}(D)<0, then |D| must be empty.
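
For a concrete example (mine, not from the original post), take \Gamma to be the path graph with two vertices u and v joined by a single edge, and let f(u)=1, f(v)=0. Then \text{div}(f)(u) = f(u)-f(v) = 1 and \text{div}(f)(v) = f(v)-f(u) = -1, so (1,0) = (0,1) + \text{div}(f). Hence (1,0) \sim (0,1), and the linear system of the degree-1 divisor (1,0) is |(1,0)| = \{(1,0),\ (0,1)\}.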

Sage can be used to compute the linear system of any divisor on a graph.

def linear_system(D, Gamma):
    """
    Returns linear system attached to the divisor D.

    EXAMPLES:
        sage: Gamma2 = graphs.CubeGraph(2)
        sage: Gamma1 = Gamma2.subgraph(vertices = ['00', '01'], edges = [('00', '01')])
        sage: f = [['00', '01', '10', '11'], ['00', '01', '00', '01']]
        sage: is_harmonic_graph_morphism(Gamma1, Gamma2, f)
        True
        sage: PhiV = matrix_of_graph_morphism_vertices(Gamma1, Gamma2, f); PhiV
        [1 0 1 0]
        [0 1 0 1]
        sage: D = vector([1,0,0,1])
        sage: PhiV*D
        (1, 1)
        sage: linear_system(PhiV*D, Gamma1)
        [(2, 0), (1, 1), (0, 2)]
        sage: linear_system(D, Gamma2)
        [(0, 2, 0, 0), (0, 0, 2, 0), (1, 0, 0, 1)]
        sage: [PhiV*x for x in linear_system(D, Gamma2)]
        [(0, 2), (2, 0), (1, 1)]

    """
    Q = Gamma.laplacian_matrix()
    CS = Q.column_space()
    N = len(D.list())
    d = sum(D.list())   # the degree of D
    lin_sys = []
    if d < 0:
        return lin_sys
    if (d == 0) and (D in CS):
        lin_sys = [CS(0)]
        return lin_sys
    elif (d == 0):
        return lin_sys
    # enumerate all vectors with entries in 0..d; those differing from D by
    # an element of the Laplacian column space are the effective divisors
    # linearly equivalent to D
    S = IntegerModRing(d+1)^N
    V = QQ^N
    for v in S:
        v = V(v)
        if D-v in CS:
            lin_sys.append(v)
    return lin_sys