# A mathematical card trick

If you search hard enough on the internet you’ll discover a pamphlet from 1898 by Si Stebbins entitled “Card tricks and the way they are performed” (which I’ll denote by [S98] for simplicity). In it you’ll find the “Si Stebbins system,” which he claims is entirely his own invention. I’m no magician, but from what I can dig up on Magicpedia, Si Stebbins’ real name was William Henry Coffrin (May 4, 1867 – October 12, 1950), born in Claremont, New Hampshire. The system presented below was taught to Si by a Syrian magician named Selim Cid that Si sometimes worked with. However, the system seems to have been known to Italian card magicians in the late 1500s. In any case, this blog post is devoted to discussing parts of the pamphlet [S98] from a mathematical perspective.

In stacking the cards (face down), put the 6 of Hearts $6 \heartsuit$ first, the 9 of Spades $9 \spadesuit$ next (so it is below the $6 \heartsuit$ in the deck), and so on to the end, reading across left to right as indicated in the table below. (BTW, the pamphlet [S98] uses the reversed ordering.) My guess is that with this ordering of the deck — spacing the ranks 3 apart — it still looks random at first glance.

Next, I’ll present a more mathematical version of this system to illustrate its connections with group theory.

Following the ordering suggested by the mnemonic CHaSeD, we identify the suits with numbers as follows: Clubs is 0, Hearts is 1, Spades is 2, and Diamonds is 3. Therefore, the suits may be identified with the additive group of integers (mod 4), namely ${\bf{Z}}/4{\bf{Z}} = \{ 0,1,2,3 \}$.

For the ranks, identify King with 0, Ace with 1, 2 with 2, $\dots$, 10 with 10, Jack with 11, and Queen with 12. Therefore, the ranks may be identified with the additive group of integers (mod 13), namely ${\bf{Z}}/13{\bf{Z}}=\{ 0,1,2,\dots, 12\}$.

Rearranging the columns slightly, we have the following table, equivalent to the one above.

In this way, we identify the card deck with the abelian group

$G = {\bf{Z}}/4{\bf{Z}} \times {\bf{Z}}/13{\bf{Z}}$.

For example, if you spot the $2 \clubsuit$ then you know that 13 cards later (and if you reach the end of the deck, continue counting with the top card) is the $2 \heartsuit$, 13 cards after that is the $2 \spadesuit$, and 13 cards after that is the $2 \diamondsuit$.

Here are some rules this system satisfies:

Rule 1 “Shuffling”: Never riffle shuffle or otherwise mix the cards. Instead, split the deck in two, with the “bottom half” as the left stack and the “top half” as the right stack. Take the left stack and place it on the right one. This has the effect of rotating the deck while preserving the cyclic ordering. Do this as many times as you like. Mathematically, each such cut adds a fixed element of the group $G$ to each card of the deck. Some people call this a “false shuffle” or “false cut.”

Rule 2 “Rank position”: The ranks of successive cards in the deck differ by 3.

Rule 3 “Suit position”: Every card of the same denomination is 13 cards apart and runs in the same order of suits as in the CHaSeD mnemonic, namely, Clubs $\clubsuit$, Hearts $\heartsuit$, Spades $\spadesuit$, Diamonds $\diamondsuit$.
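To make this concrete, here is a short Python sketch that builds the stack from the encodings above (starting card $6 \heartsuit$, step $(3,1)$ in $G$) and checks that every card appears exactly once:

```python
# Suits (CHaSeD): Clubs=0, Hearts=1, Spades=2, Diamonds=3.
# Ranks: King=0, Ace=1, 2..10 as themselves, Jack=11, Queen=12.

def si_stebbins_stack():
    """Build the 52-card stack: start at the 6 of Hearts and
    repeatedly add (3, 1) in Z/13 x Z/4."""
    deck = []
    rank, suit = 6, 1  # the 6 of Hearts
    for _ in range(52):
        deck.append((rank, suit))
        rank, suit = (rank + 3) % 13, (suit + 1) % 4
    return deck

deck = si_stebbins_stack()

# (3, 1) has order lcm(13, 4) = 52 in G, so no card repeats.
assert len(set(deck)) == 52
# Rule 3: 13 cards after the 6 of Hearts comes the 6 of Spades.
assert deck[13] == (6, 2)
```

The second assertion is Rule 3 in miniature: adding $(3,1)$ thirteen times adds $(39, 13) \equiv (0, 1)$, i.e., same rank, next suit in CHaSeD order.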

At last, we can give a few simple card tricks based on this system.

Trick 1: A player picks a card from the deck, keeps it hidden from you. You name that card.

This trick can be set up in more than one way. For example, you can either

(a) spread the cards out behind the back in such a manner that when the card is drawn you can separate the deck at that point bringing the two parts in front of you, say a “top” and a “bottom” stack, or

(b) give the deck to the player, let them pick a card at random, which separates the deck into two stacks, say a “top” and a “bottom” stack, and have the player return the stacks separately.

You know that the card the player has drawn is the card following the bottom card of the top stack. If the card on the bottom of the top stack is denoted $(a,\alpha) \in G$ and the card drawn is $(b,\beta)$ then

$b \equiv a+3 \pmod{13}, \ \ \ \ \ \ \beta \equiv \alpha +1 \pmod{4}.$

For example, a player draws a card and you find that the bottom card is the $9 \diamondsuit$. What is the card the player picked?

Solution: Use the first congruence: add 3 to 9 to get 12, the Queen. Use the second congruence: add 1 to Diamonds $\diamondsuit$ (which is 3) to get $0 \pmod 4$ (which is Clubs $\clubsuit$). The card drawn is the $Q \clubsuit$.
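In code, naming the drawn card is a single step in $G$. A minimal sketch, using the same encodings as above (King = 0, …, Queen = 12, and CHaSeD suits):

```python
RANKS = ["K", "A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q"]
SUITS = ["Clubs", "Hearts", "Spades", "Diamonds"]  # CHaSeD order

def next_card(rank, suit):
    """The card that follows (rank, suit) in the stack:
    b = a + 3 (mod 13), beta = alpha + 1 (mod 4)."""
    return (rank + 3) % 13, (suit + 1) % 4

# Worked example: the bottom card is the 9 of Diamonds = (9, 3).
r, s = next_card(9, 3)
print(RANKS[r], "of", SUITS[s])  # Q of Clubs
```

Iterating `next_card` through all 52 positions is exactly the silent recitation that Trick 2 below asks for.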

Trick 2: Run through the deck of cards (face down) one at a time until the player tells you to stop. Name the card you were asked to stop on.

Place the cards behind your back, first taking note of what the bottom card is. To get the top card, add 3 to the rank of the bottom card and 1 to its suit. As you run through the deck you silently say the name of the next card (adding 3 to the rank and 1 to the suit each time). Therefore, you know the card you are asked to stop on, since you are naming the cards to yourself as you go along.

# A footnote to Robert H. Mountjoy

In an earlier post titled Mathematical romantic? I mentioned some papers I inherited from one of my mathematical heroes, Andre Weil, with his signature. In fact, I was fortunate enough to go to dinner with him once in Princeton in the mid-to-late 1980s – a very gentle, charming person with a deep love of mathematics. I remember he said he missed his wife, Eveline, who passed away in 1986. (They were married in 1937.)

All this is simply to motivate the question: why did I get these papers? First, as mentioned in the post, I was given Larry Goldstein‘s old office, and he either was kind enough to gift me his old preprints or left them to be thrown away by the next inhabitant of his office. BTW, if you haven’t heard of him, Larry was a student of Shimura who then became a Gibbs Fellow at Yale and went to the University of Maryland at College Park in 1969. He wrote lots of papers (and books) on number theory, eventually becoming a full professor, but later settled into computers and data science work. He left the University of Maryland about the time I arrived in the early 1980s to create some computer companies that he ran.

This motivates the question: how did Larry get these papers of Weil? I think Larry inherited them from Mountjoy (who died before Larry arrived at UMCP; more on him later). Which raises another question: who was Mountjoy, and how did he get them?

I’ve done some digging around the internet and here’s what I discovered.

The earliest mention I could find is when he was listed as a recipient of an NSF Fellowship in the “Annual Report of the National Science Foundation: 1950-1953,” under Chicago, Illinois, Mathematics, 1953. So he was a grad student at the University of Chicago in 1953. Andre Weil was there at the time. (He left sometime in 1958.) Mountjoy could have gotten the notes of Andre Weil then. Just before Weil left Chicago, Walter Lewis Baily arrived (in 1957, to be exact). This is important because in May 1965 the Notices of the AMS reported:

Mountjoy, Robert Harbison
Abelian varieties attached to representations of discontinuous groups (S. Mac Lane and W. L. Baily)

(His thesis was published posthumously in the American Journal of Mathematics, Vol. 89 (1967), 149–224.) The thesis is in a field studied by Weil and Baily, but not by Mac Lane.

But we’re getting ahead of ourselves. The 1962 issue of Maryland Magazine had this:

Mathematics Grant
A team of University of Maryland mathematics researchers has received a grant of $53,000 from the National Science Foundation to continue some technical investigations they started two years ago. The mathematical study they are directing is entitled “Problems in Geometric Function Theory.” The project is under the direction of Dr. James Hummel, Dr. Mischael Zedek, and Prof. Robert H. Mountjoy, all of the Mathematics Department. They are assisted by four graduate-student researchers. The $53,000 grant is a renewal of an original grant which was made two years ago.

We know he was working at UMCP in 1962.

The newspaper Democrat and Chronicle, from Rochester, New York, on Wednesday, May 25, 1965 (page 40), published the news that Robert H. Mountjoy “Died suddenly at Purcellville, VA, May 23, 1965”. I couldn’t read the rest (it’s behind a paywall, but I could see that much). The next day, they published more: “Robert H. Mountjoy, son-in-law of Mr and Mrs Allen P Mills of Brighton, was killed in a traffic crash in Virginia. Mountjoy, about 30, a mathematics instructor at the University of Maryland, leaves a widow Sarah Mills Mountjoy and a 5-month old son Alexander, and his parents Mr and Mrs Lucius Mountjoy of Chicago.”

It’s so sad. The saying goes “May his memory be a blessing.” I never met him, but from what I’ve learned of Mountjoy, his memory is indeed a blessing.

# The number-theoretic side of J. Barkley Rosser

By chance, I ran across a reference to a paper of J. Barkley Rosser, and it brought back fond memories of days long ago when I would browse the stacks in the math department library at the University of Washington in Seattle. I remember finding papers describing number-theoretic computations of Rosser and Schoenfeld. I knew nothing about Rosser. Was he a number theorist?

J. Barkley Rosser, taken at Math meeting in Denver

Here’s all I could glean from different sources on the internet:
J. Barkley Rosser was born in Jacksonville, Florida in 1907. He earned both his Bachelor of Science (1929) and his Master of Science (1931) from the University of Florida. Both degrees were in physics. He obtained his Ph.D. in mathematics (in fact, mathematical logic) from Princeton University in 1934, under the supervision of Alonzo Church. After getting his Ph.D., Rosser taught at Princeton, Harvard, and Cornell and spent the latter part of his career at the University of Wisconsin-Madison. As a logician, Rosser is known for his part in the Church–Rosser Theorem and the Kleene–Rosser Paradox in lambda calculus. Moreover, he served as president of the Association for Symbolic Logic. As an applied mathematician, he served as president of the Society for Industrial and Applied Mathematics (otherwise known as SIAM). While at the University of Wisconsin-Madison, he served as the director of the U.S. Army Mathematics Research Center. He continued to lecture well into his late 70s, and passed away at his home in Madison in 1989. He has a son, J. Barkley Rosser Jr., who’s an economist at James Madison University.

Lowell Schoenfeld spent his early years in New York City, graduating Cum Laude from the College of the City of New York in 1940. He went on to MIT to earn a Master’s. He received his Ph.D. in 1944 from the University of Pennsylvania under the direction of Hans Rademacher. (During his years in graduate school, he seems to have worked for the Philadelphia Navy Yard as well, writing reports on aircraft navigational computers.) After positions at Temple University and Harvard, he moved to the University of Illinois, where he met his future wife. He met Josephine M. Mitchell when she was a tenured Associate Professor and he was an untenured Assistant Professor. After they married, the University would no longer allow Mitchell to teach, so the couple both resigned their positions and eventually settled at Pennsylvania State University. They spent about 10 years there but in 1968 the couple moved to the University of Buffalo, where they remained until their retirements in the 1980s.

As far as I can tell, these are the papers they wrote together, all in analytic number theory:

[1] Rosser, J. Barkley; Schoenfeld, Lowell. “Approximate formulas for some functions of prime numbers”. Illinois J. Math. 6 (1962), no. 1, 64–94.
[2] Rosser, J. Barkley; Schoenfeld, Lowell; Yohe, J. M. “Rigorous computation and the zeros of the Riemann zeta-function”, 1969.
[3] Rosser, J. Barkley; Schoenfeld, Lowell. “Sharper bounds for the Chebyshev functions $\theta (x)$ and $\psi (x)$”. Mathematics of Computation 29 (1975), no. 129, 243–269.
[4] Rosser, J. Barkley; Schoenfeld, Lowell. “Approximation of the Riemann zeta-function”, 1971.

I haven’t seen a copy of the papers [2] and [4] in years but I’m guessing these are what I looked at as a teenager in Seattle, years ago, wandering through the stacks at the UW.

Rosser also wrote papers on topics in recreational mathematics, such as magic squares. Two such papers were co-written with R.J. Walker of Cornell University, who is better known for his textbook Algebraic Curves:

Rosser, Barkley; Walker, R. J. “The algebraic theory of diabolic magic squares,” Duke Math. J. 5 (1939), no. 4, 705–728
Rosser, Barkley; Walker, R. J. “On the transformation group for diabolic magic squares of order four,” Bull. Amer. Math. Soc. 44 (1938), no. 6, 416–420.

Diabolic magic squares, also called pan-diagonal magic squares, are $n\times n$ squares of the integers $1, 2, ..., n^2$ whose rows all add to a constant C, whose columns all add to C, whose two main diagonals add to C, and whose “broken diagonals” all add to C. The famous $4\times 4$ magic square (with C=34) in the German artist Albrecht Dürer’s 1514 engraving Melencolia I is close to, but not quite, an example: its rows, columns, and main diagonals all sum to 34, but its broken diagonals do not.
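The defining property is easy to check by machine. Here is a Python sketch of a pan-diagonal checker; the example square is a classical order-4 pan-diagonal magic square from the standard literature:

```python
def is_pandiagonal(sq):
    """Check that all rows, columns, and all (broken) diagonals in
    both directions of the n x n square sum to the same constant."""
    n = len(sq)
    c = sum(sq[0])
    rows = all(sum(row) == c for row in sq)
    cols = all(sum(sq[i][j] for i in range(n)) == c for j in range(n))
    # Shifting by k runs through the main diagonal and all broken ones.
    diags = all(sum(sq[i][(i + k) % n] for i in range(n)) == c
                for k in range(n))
    antis = all(sum(sq[i][(k - i) % n] for i in range(n)) == c
                for k in range(n))
    return rows and cols and diags and antis

# A classical pan-diagonal (diabolic) square of order 4, with C = 34.
diabolic = [[ 1,  8, 13, 12],
            [14, 11,  2,  7],
            [ 4,  5, 16,  9],
            [15, 10,  3,  6]]
print(is_pandiagonal(diabolic))  # True
```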

# Expected maximums and fun with Faulhaber’s formula

A recent Futility Closet post inspired this one. There, Greg Ross mentioned a 2020 paper by P Sullivan titled “Is the Last Banana Game Fair?” in Mathematics Teacher. (BTW, it’s behind a paywall and I haven’t seen that paper).

Suppose Alice and Bob don’t want to share a banana. They each have a fair 6-sided die to throw. To decide who gets the banana, each of them rolls their die. If the largest number rolled is a 1, 2, 3, or 4, then Alice wins the banana. If the largest number rolled is a 5 or 6, then Bob wins. This is the last banana game. In this post, I’m not going to discuss the last banana game specifically, but instead look at a related question.

Let’s define things more generally. Let $I_n=\{1,2,...,n\}$, let $X,Y$ be two independent, uniform random variables taken from $I_n$, and let $Z=\max(X,Y)$. The last banana game concerns the case $n=6$. Here, I’m interested in investigating the question: What is $E(Z)$?

Computing this isn’t hard. By definition of independent and max, we have
$P(Z\leq z)=P(X\leq z)P(Y\leq z)$.
Since $P(X\leq z)=P(Y\leq z)={\frac{z}{n}}$, we have
$P(Z\leq z)={\frac{z^2}{n^2}}$.
The expected value of $Z$ is defined as $\sum kP(Z=k)$, but there’s a handy-dandy formula we can use instead:
$E(Z)=\sum_{k=0}^{n-1} P(Z>k)=\sum_{k=0}^{n-1}[1-P(Z\leq k)]$.
Now we use the previous computation to get
$E(Z)=n-{\frac{1}{n^2}}\sum_{k=1}^{n-1}k^2=n-{\frac{1}{n^2}}\cdot{\frac{(n-1)n(2n-1)}{6}}={\frac{2}{3}}n+{\frac{1}{2}}-{\frac{1}{6n}}.$
This solves the problem as stated. But this method generalizes in a straightforward way to selecting m independent r.v.s in $I_n$, so let’s keep going.

First, let’s pause for some background and history. Notice how, in the last step above, we needed to know the formula for the sum of the squares of the first n consecutive positive integers? When we generalize this to selecting m integers, we need to know the formula for the sum of the m-th powers of the first n consecutive positive integers. This leads to the following topic.

Faulhaber polynomials are, for this post (apparently the terminology is not standardized), the sequence of polynomials $F_m(n)$ of degree m+1 in the variable n giving the value of the sum of the m-th powers of the first n consecutive positive integers:

$\sum_{k=1}^{n} k^m=F_m(n)$.

(It is not immediately obvious that they exist for all integers $m\geq 1$ but they do and Faulhaber’s results establish this existence.) These polynomials were discovered by (German) mathematician Johann Faulhaber in the early 1600s, over 400 years ago. He computed them for “small” values of m and also discovered a sort of recursive formula relating $F_{2\ell +1}(n)$ to $F_{2\ell}(n)$. It was about 100 years later, in the early 1700s, that (Swiss) mathematician Jacob Bernoulli, who referenced Faulhaber, gave an explicit formula for these polynomials in terms of the now-famous Bernoulli numbers. Incidentally, Bernoulli numbers were discovered independently around the same time by (Japanese) mathematician Seki Takakazu. Concerning the Faulhaber polys, we have
$F_1(n)={\frac{n(n+1)}{2}}$,
$F_2(n)={\frac{n(n+1)(2n+1)}{6}}$,
and in general,
$F_m(n)={\frac{n^{m+1}}{m+1}}+{\frac{n^m}{2}}+$ lower order terms.
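In code, one standard way to compute the values $F_m(n)$ exactly is the telescoping recurrence $(m+1)F_m(n)=(n+1)^{m+1}-1-\sum_{j=0}^{m-1}\binom{m+1}{j}F_j(n)$, which comes from summing $(k+1)^{m+1}-k^{m+1}$ over $k=1,\dots,n$. A Python sketch (this computes values, not symbolic polynomials):

```python
from fractions import Fraction
from math import comb

def faulhaber(m, n):
    """Value F_m(n) = 1^m + 2^m + ... + n^m, via the telescoping
    identity sum_{j=0}^{m} C(m+1, j) F_j(n) = (n+1)^(m+1) - 1."""
    F = [Fraction(n)]  # F_0(n) = n
    for mm in range(1, m + 1):
        total = Fraction((n + 1) ** (mm + 1) - 1)
        total -= sum(comb(mm + 1, j) * F[j] for j in range(mm))
        F.append(total / (mm + 1))
    return F[m]

assert faulhaber(1, 100) == 5050          # Gauss's schoolroom sum
assert faulhaber(2, 10) == 385            # n(n+1)(2n+1)/6 at n = 10
assert faulhaber(3, 5) == sum(k**3 for k in range(1, 6))
```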

With this background aside, we return to the main topic of this post. Let $I_n=\{1,2,...,n\}$, let $X_1,X_2,...,X_m$ be m independent, uniform random variables taken from $I_n$, and let $Z=\max(X_1,X_2,...,X_m)$. Again we ask: What is $E(Z)$? The above computation in the $m=2$ case generalizes to:

$E(Z)=n-{\frac{1}{n^m}}\sum_{k=1}^{n-1}k^m=n-{\frac{1}{n^m}}F_m(n-1).$

For m fixed and n “sufficiently large”, we have

$E(Z)={\frac{m}{m+1}}n+O(1).$

I find this to be an intuitively satisfying result. The max of a bunch of independently chosen integers taken from $I_n$ should get closer and closer to n as (the fixed) m gets larger and larger.
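The general formula is easy to sanity-check by exhaustive enumeration; a sketch:

```python
from fractions import Fraction
from itertools import product

def expected_max(m, n):
    """E(max of m iid uniforms on {1..n}) = sum_{k=0}^{n-1} (1 - (k/n)^m)."""
    return sum(1 - Fraction(k, n) ** m for k in range(n))

def expected_max_brute(m, n):
    """Average of the max over all n^m equally likely outcomes."""
    outcomes = product(range(1, n + 1), repeat=m)
    return Fraction(sum(max(t) for t in outcomes), n ** m)

# The closed form agrees with brute force for small m and n.
for m in range(1, 4):
    for n in range(1, 7):
        assert expected_max(m, n) == expected_max_brute(m, n)

print(expected_max(2, 6))  # 161/36
```

As $m$ grows with $n = 6$ fixed, these values creep up toward 6, matching the $\frac{m}{m+1}n + O(1)$ asymptotic.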

# Michael Reid’s Happy New Year Puzzles, 2018

Belatedly posted by permission of Michael Reid. Enjoy!

Here are some New Year’s puzzles to help start out 2018.

1. Arrange the ten digits 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 in the
expression $a^b + c^d + e^f + g^h + i^j$ to make 2018.

2. (a) Express $2018 = p^q + r^s$ where p, q, r, s are primes.
(b) Express $2018 = p^q - r^s$ where p, q, r, s are primes.

3. (a) Is it possible to put the first 9 primes, 2, 3, 5, 7, 11, 13,
17, 19 and 23 into a 3×3 matrix that has determinant 2018?
(b) Is it possible to put the first 16 primes, 2, 3, 5, … , 53,
into a 4×4 matrix that has determinant 2018?

4. (a) Express 2018 = A / B using the fewest number of distinct
digits.
For example, the expression 7020622 / 3479 uses only seven
different digits. But it is possible to do better than this.
(b) Express $2018 = (A_1 \cdot A_2 \cdot ... \cdot A_m) / (B_1 \cdot B_2 \cdot ... \cdot B_n)$ using the fewest number of distinct digits.
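If you want to check an answer to puzzle 2 by machine, a brute-force search over prime powers is quick. A sketch (the $10^6$ cutoff in part (b) is an arbitrary bound I chose for the search, and the script deliberately avoids printing the solutions):

```python
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

PRIME_EXPONENTS = [2, 3, 5, 7, 11, 13, 17, 19]

# Prime powers p^q below each bound, keyed by (p, q).
small = {(p, q): p**q for p in range(2, 50) if is_prime(p)
         for q in PRIME_EXPONENTS if p**q < 2018}
large = {(p, q): p**q for p in range(2, 1000) if is_prime(p)
         for q in PRIME_EXPONENTS if p**q < 10**6}

# Puzzle 2(a): 2018 = p^q + r^s; puzzle 2(b): 2018 = p^q - r^s.
sums = [(p, q, r, s) for (p, q), v in small.items()
        for (r, s), w in small.items() if v + w == 2018]
diffs = [(p, q, r, s) for (p, q), v in large.items()
         for (r, s), w in large.items() if v - w == 2018]

# Both parts have solutions (not printed, to avoid spoilers).
print(len(sums) > 0, len(diffs) > 0)  # True True
```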

# Michael Reid’s Happy New Year Puzzle, 2017

Belatedly posted by permission of Michael Reid. Enjoy!

Here are some interesting puzzles to start the New Year; hopefully they are not too easy!

1. Express 2017 as a quotient of palindromes.

2. (a) Are there two positive integers whose sum is 2017 and whose product
is a palindrome?
(b) Are there two positive integers whose difference is 2017 and whose
product is a palindrome?

3. Is there a positive integer n such that both 2017 + n and
2017n are palindromes?

4. What is the smallest possible sum of the decimal digits of 2017n,
where n is …
(a) … a positive integer?
(b) … a prime number?
(c) … a palindrome?

5. Consider the following two operations on a positive integer:
(i) replace a string of consecutive digits by its square,
(ii) if a string of consecutive digits is a perfect cube,
replace the string by its cube root.

Neither the string being replaced, nor its replacement, may have
“leading zeros”. For example, from 31416, we may change it to
319616 by squaring the “14”. From 71253, we may change it to
753 by taking the cube root of “125”.

(a) Starting from the number 2017 , what is the smallest number we
can reach with a sequence of these operations?
(b) What is the smallest number from which we can start, and reach
the number 2017 with a sequence of these operations?

6. Find a list of positive rational numbers, q_1, q_2, …, q_n,
whose product is 1, and whose sum is 2017. Make your list as
short as possible.
Extra Credit: Prove that you have the shortest possible list.

# Michael Reid’s Happy New Year Problems, 2020

Posted by permission of Michael Reid. Enjoy!

New Year’s Greetings!

Here are some fun puzzles to start the year.

1. Substitute the numbers 1, 2, … , 9 for the letters
a, b, … , i in the expression $a^b + c^d + (e + f + g - h)^i$
to get 2020.

2. Use the digits 1, 2, … , 9 in order, and any of the usual
arithmetic operations and parentheses to get a number that is
as close as possible to, but not exactly equal to 2020.

3. Express 2019/2020 as a sum of distinct Egyptian fractions,
i.e. $1 / n_1 + 1 / n_2 + ... + 1 / n_k$ for integers $0 < n_1 < n_2 < ... < n_k$.

4. … 202049 (but 202049 is not square).

5. Make a 4×4 matrix of single-digit integers (0-9) with digits
2, 0, 2, 0 on the main diagonal, and having determinant 2020.
Is it possible to do it with a symmetric matrix?

If you liked this one, check out other puzzles on this blog tagged with “Michael Reid”.

Let $P$ denote the set of all primes and, for $p \in P$, let $(*/p)$ denote the Legendre quadratic residue symbol mod p. Let ${\mathbb N}=\{1,2,\dots\}$ denote the set of natural numbers and let

$L: {\mathbb N}\to \{-1,0,1\}^P,$

denote the mapping $L(n)=( (n/2), (n/3), (n/5), \dots),$ so the $k$th component of $L(n)$ is $L(n)_k=(n/p_k)$, where $p_k$ denotes the $k$th prime in $P$. The following result is “well-known” (or, as the joke goes, it’s well known to those who know it well :-).

Theorem: The restriction of $L$ to the subset ${\mathbb S}$ of square-free integers is an injection.

In fact, one can be a little more precise. Let $P_{\leq M}$ denote the set of the first $M$ primes, let ${\mathbb S}_N$ denote the set of square-free integers $\leq N$, and let

$L_M: {\mathbb N}\to \{-1,0,1\}^{P_{\leq M}},$

denote the mapping $L_M(n)=( (n/2), (n/3), (n/5), \dots, (n/p_M)).$

Theorem: For each $N>1$, there is an $M=M(N)>1$ such that the restriction of $L_M$ to the subset ${\mathbb S}_N$ is an injection.

I am no expert in this field, so perhaps the following question is known.

Question: Can one give an effective upper bound on $M=M(N)>1$ as a function of $N>1$?

I’ve done some computer computations using SageMath and it seems to me that

$M=O(N)$

(i.e., there is a linear bound) is possible. On the other hand, my computations were pretty limited, so that’s just a guess.
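My computations were in SageMath, but here is a self-contained pure-Python version of the experiment. One assumption to flag: the symbol at the prime 2 is taken to be $n \bmod 2$, since the Legendre symbol is usually only defined for odd primes; other conventions (e.g., the Kronecker symbol) are possible.

```python
def legendre(n, p):
    """Quadratic residue symbol (n/p) via Euler's criterion.
    For p = 2 we use n mod 2 (a convention chosen for this sketch)."""
    if n % p == 0:
        return 0
    if p == 2:
        return 1
    return 1 if pow(n, (p - 1) // 2, p) == 1 else -1

def primes(m):
    """The first m primes, by trial division."""
    ps, k = [], 2
    while len(ps) < m:
        if all(k % p for p in ps):
            ps.append(k)
        k += 1
    return ps

def squarefree(N):
    return [n for n in range(1, N + 1)
            if all(n % (d * d) for d in range(2, int(n**0.5) + 1))]

def minimal_M(N):
    """Smallest M so that n -> ((n/p_1), ..., (n/p_M)) separates
    the square-free integers up to N."""
    S = squarefree(N)
    M = 1
    while True:
        ps = primes(M)
        sigs = {tuple(legendre(n, p) for p in ps) for n in S}
        if len(sigs) == len(S):
            return M
        M += 1

print(minimal_M(10))  # 3
```

For instance, with only the primes 2 and 3 the square-free integers 1 and 7 get the same signature, but adding the symbol mod 5 separates them.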

# Michael Reid’s Happy New Year Puzzle, 2012

Here is another puzzle from Michael Reid on NOBNET. (Michael also notes that if you find a solution (a,b,c,d,e,f) then (d,e,f,a,b,c) is another solution.)

Puzzle: Replace a, b, c, d, e, f with the first six prime numbers, in some order, to make the expression $a\cdot b^c + d \cdot e^f$ equal to the New Year.
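Since there are only $6! = 720$ ways to assign the primes, a brute-force search settles the puzzle by machine (assuming the New Year intended is 2012; to avoid spoilers, the script only confirms that solutions exist):

```python
from itertools import permutations

FIRST_SIX_PRIMES = [2, 3, 5, 7, 11, 13]

# All assignments (a, b, c, d, e, f) with a*b^c + d*e^f == 2012.
solutions = [p for p in permutations(FIRST_SIX_PRIMES)
             if p[0] * p[1] ** p[2] + p[3] * p[4] ** p[5] == 2012]

# Reid's symmetry: (a,b,c,d,e,f) is a solution iff (d,e,f,a,b,c) is.
assert all((p[3], p[4], p[5], p[0], p[1], p[2]) in solutions
           for p in solutions)
print(len(solutions) > 0)  # True
```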

Best wishes for a happy, healthy and prosperous New Year!

# Michael Reid’s Digit operation puzzles

Michael Reid (http://www.math.ucf.edu/~reid/index.html) posted these questions on Nobnet recently. I’m reposting them here.

Consider two rules for modifying a positive integer:

i) replace a substring of its digits by the square of the number represented by the substring
ii) if a substring is a perfect cube, replace the substring by its cube root

Neither the substring being replaced, nor the string replacing it may have leading zeroes.

For example, starting with 123, we may square “23” to obtain 1529. Then we can square “29” to get 15841, then square “5” to get 125841, and then we could take the cube root of “125” to get 5841, and so on.

What is the smallest number we can obtain from the number 2011 with a sequence of these operations?
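If you want to experiment, here is a Python sketch of a “one move” generator for rules i) and ii); the worked example above doubles as a test case:

```python
def icbrt(v):
    """Integer cube root test: return r with r**3 == v, else None."""
    r = round(v ** (1 / 3))
    for cand in (r - 1, r, r + 1):  # guard against float rounding
        if cand >= 0 and cand ** 3 == v:
            return cand
    return None

def neighbors(n):
    """All numbers reachable from n in one move: square a substring
    (rule i), or replace a perfect-cube substring by its cube root
    (rule ii). Substrings may not have leading zeroes."""
    s = str(n)
    out = set()
    for i in range(len(s)):
        if s[i] == "0":  # substring would have a leading zero
            continue
        for j in range(i + 1, len(s) + 1):
            v = int(s[i:j])
            out.add(int(s[:i] + str(v * v) + s[j:]))  # rule i
            r = icbrt(v)
            if r is not None:
                out.add(int(s[:i] + str(r) + s[j:]))  # rule ii
    return out

# The example above: square "23" in 123 to get 1529, and take the
# cube root of "125" in 71253 to get 753.
assert 1529 in neighbors(123)
assert 753 in neighbors(71253)
```

Answering the puzzle itself still takes care: repeated squaring makes the state space infinite, so any search has to bound the size of the numbers it explores.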

Now suppose the two operations are instead

iii) replace a substring by its cube
iv) replace a substring which is a square by its square root

What is the smallest number we can get with these operations if we start from 2011?

Here is another “digit operations” puzzle that I hope you will find a nice challenge.

Consider two rules for modifying a positive integer:

i) replace a substring of its digits by 7 times the number represented by the substring
ii) if a substring is a perfect 7th power, replace the substring by its 7th root

Neither the substring being replaced, nor the string replacing it may have leading zeroes.

For example, starting with 347, we may replace the substring “34” by “238” (multiplying by 7) to obtain 2387. Then we can multiply the substring “3” by 7 to get 22187. Now we can take the 7th root of “2187” to get 23, and so on.

What is the smallest number we can obtain from the number 2011 with a sequence of these operations?

Now suppose the two operations are instead

iii) if a substring is divisible by 7, we may divide the substring by 7
iv) replace a substring by its 7th power

(which are the reverses of operations i) and ii) above).

What is the smallest number we can get with these operations if we start from 2011?