Real world applications of representation theory

(Subtitle: Representation theorists will rule the world one day just you wait)

 

This post describes some applications of representation theory of non-abelian groups to various fields and gives some references.

  • Engineering.
    • Tensegrity – the design of “strut-and-cable” constructions. Want to build a building with cables and struts but don’t know representation theory? Check out these references:
      • R. Connelly and A. Back, “Mathematics and tensegrity”, Amer Scientist, April-May 1998, pages 142-151
      • symmetric tensegrities
    • Telephone network designs. This is the information age with more and more telephone lines needed every day. Want to reach out and touch someone? You need representation theory.
      • F. Bien, “Construction of telephone networks by group representations”, Notices A. M. S. 36(1989)5-22
    • Nonlinear network problems. This is cheating a little since the works in the reference below really use the theory of Lie groups instead of representation theory itself. Still, there is a tangential relation at least between representation theory of Lie groups and the solution to certain nonlinear network problems.
      • C. Desoer, R. Brockett, J. Wood, R. Hirshorn,
        A. Willsky, G. Blankenship, Applications of Lie group theory to nonlinear network problems, (Supplement to IEEE Symposium on Circuit Theory, 1974), Western Periodicals Co., N. Hollywood, CA, 1974
    • Control theory.
      • R. W. Brockett, “Lie theory and control systems defined on spheres”, SIAM J on Applied Math 25(1973) 213-225
    • Robotics. The future is not in plastics (see the movie “The Graduate”) but in robotics.
      How do you figure out their movements before building them? You guessed it, using representation theory.

      • G. Chirikjian, “Determination and synthesis of discretely actuated manipulator workspaces using harmonic analysis”, in Advances in Robotic Kinematics, 5, 1996, Springer-Verlag
      • G. Chirikjian and I. Ebert-Uphoff, “Discretely actuated manipulator workspace generation by closed-form convolution”, in ASME Design Engineering Technical Conference, August 18-22 1996
    • Radar design. W. Schempp, Harmonic analysis on the Heisenberg nilpotent Lie group, with applications to signal theory, Longman Scientific & Technical, New York (copublished in the U.S. with Wiley), 1986.
    • Antenna design. B. Hassibi, B. Hochwald, A. Shokrollahi, W. Sweldens, “Representation theory for high-rate multiple antenna code design,” 2000 preprint (see A. Shokrollahi’s site for similar works).
    • Design of stereo systems. We’re talkin’ quadrophonic state-of-the-art.
      • K. Hannabus, “Sound and symmetry”, Math. Intelligencer, 19, Fall 1997, pages 16-20
    • Coding theory. Interesting progress in coding theory has been made using group theory and representation theory. Here are a few selected references.
      • F. MacWilliams and N. Sloane, The Theory of Error-Correcting Codes,
        North-Holland/Elsevier, 1993 (8th printing)
      • I. Blake and R. Mullin, Mathematical Theory of Coding, Academic Press, 1975
      • J.-P. Tillich and G. Zemor,
        “Optimal cycle codes constructed from Ramanujan graphs,” SIAM J on Disc. Math. 10(1997)447-459
      • H. Ward and J. Wood, “Characters and the equivalence of codes,” J. Combin. Theory A 73(1996)348-352
      • J. Lafferty and D. Rockmore, “Spectral Techniques for Expander Codes” (Extended Abstract), 1997 Symposium on Theory of Computation (available at Dan Rockmore’s web page)
  • Mathematical physics.
    Any complete list of books and papers in this field which use representation theory would be much too long for the limited goal we have here (which is simply
    to list some real-world applications). A small selection is given below.

    • Differential equations (such as the heat equation, Schrödinger wave equation, etc.). M. Craddock, “The symmetry groups of linear partial differential equations and representation theory, I”, J. Diff. Equations 116(1995)202-247
    • Mechanics.
      • D.H. Sattinger, O.L. Weaver, Lie Groups and Algebras With Applications to Physics, Geometry, and Mechanics (Applied Mathematical Sciences, Vol 61) , Springer Verlag, 1986
      • Johan Belinfante, “Lie algebras and inhomogeneous simple materials”,
        SIAM J on Applied Math 25(1973)260-268
    • Models for elementary particles.
    • Quantum mechanics.
      • Eugene Wigner, “Reduction of direct products and restriction of representations to subgroups: the everyday tasks of the quantum theorists”, SIAM J on Applied Math 25(1973) 169-185
      • V. Vladimirov, I. Volovich, and E. Zelenov, “Spectral theory in p-adic quantum mechanics and representation theory,” Soviet Math. Doklady 41(1990)40-44
    • p-adic string theory.
      • Y. Manin, “Reflections on arithmetical physics,” in Conformal invariance and string theory, Academic Press, 1989, pages 293-303
      • V. Vladimirov, I. Volovich, and E. Zelenov, p-adic analysis and mathematical physics, World Scientific, 1994
      • V. Vladimirov, “On the Freund-Witten adelic formula for Veneziano amplitudes,” Letters in Math. Physics 27(1993)123-131
  • Mathematical chemistry.
    • Spectroscopy. B. Judd, “Lie groups in atomic and molecular spectroscopy”, SIAM J on Applied Math 25(1973)186-192
    • Crystallography.
      • G. Ramachandran and R. Srinivasan, Fourier methods in crystallography,
        New York, Wiley-Interscience, 1970.
      • T. Janssen, Crystallographic groups, North-Holland Pub., London, 1973.
      • J. Zak, A. Casher, M. Gluck, Y. Gur, The irreducible representations of space groups, W. A. Benjamin, Inc., New York, 1969.
    • Molecular structure of the Buckyball.
      • F. Chung and S. Sternberg, “Mathematics and the buckyball”, American Scientist 83(1993)56-71
      • F. Chung, B. Kostant, and S. Sternberg, “Groups and the buckyball”, in Lie theory and geometry, (ed. J.-L. Brylinski et al), Birkhauser, 1994
      • G. James, “The representation theory for the Buckminsterfullerene,” J. Alg. 167(1994)803-820
  • Knot theory (which, in turn, has applications to modeling DNA) uses representation theory. F. Constantinescu and F. Toppan, “On the linearized Artin braid representation,” J. Knot Theory and its Ramifications, 2(1993)
  • The Riemann hypothesis.
    Think you’re going to solve the Riemann hypothesis without using
    representation theory? Check this paper out: A. Connes, “Formule de traces en geometrie non-commutative et hypothese de Riemann”, C. R. Acad. Sci. Paris 323 (1996)1231-1236. (For those who argue that this is not a real-world application, we refer to Barry Cipra’s article, “Prime Formula Weds Number Theory and Quantum Physics,” Science, 1996 December 20, 274, no. 5295, page 2014, in Research News.)
  • Circuit design, statistics, signal processing, …
    See the survey paper
    D. Rockmore, “Some applications of generalized FFTs” in Proceedings of the DIMACS
    Workshop on Groups and Computation, June 7-10, 1995 eds. L. Finkelstein and W. Kantor, (1997) 329–369. (available at  Dan Rockmore’s web page)
  • Vision – See the survey papers by Jacek Turski: “Geometric Fourier Analysis of the Conformal Camera for Active Vision”, SIAM Review 46(2)(2004)230-255, and “Geometric Fourier Analysis for Computational Vision”, JFAA 11(2005)1-23.

Hill versus Hamming

It’s easy to imagine the 19th century Philadelphia wool dealer Frank J. Primrose as a happy man. I envision him shearing sheep during the day, while in the evening he brings his wife flowers and plays games with his little children until bedtime. However, in 1887 Frank J. Primrose was not a happy man. This is because in June of that year, he had telegraphed his agent in Kansas instructions to buy a certain amount of wool. However, the telegraph operator made a single mistake in transmitting his message and Primrose unintentionally bought far more wool than he could possibly sell. Ordinarily, such a small error has little consequence, because errors can often be detected from the context of the message. However, this was an unusual case and the mistake cost him about a half-million dollars in today’s money. He promptly sued and his case eventually made its way to the Supreme Court. The famous 1894 United States Supreme Court case Primrose v. Western Union Telegraph Company decided that the telegraph company was not liable for the error in transmission of a message.

Thus was born the need for error-correcting codes.


Introduction

Lester Hill is most famously known for the Hill cipher, frequently taught in linear algebra courses today. We describe this cryptosystem in more detail in one of the sections below, but here is the rough idea. In this system, developed and published in the 1920’s, we take a k\times k matrix K, composed of integers between 0 and 25, and encipher plaintext p by p\longmapsto c=Kp, where the arithmetical operations are performed mod 26. Here K is the key, which should be known only to the sender and the intended receiver, and c is the ciphertext transmitted to the receiver.

On the other hand, Richard Hamming is known for the Hamming codes, also frequently taught in a linear algebra course. This will be described in more detail in one of the sections below, but here is the basic idea. In this scheme, developed in the 1940’s, we take a k\times n matrix G over a finite field F, constructed in a very particular way, and encode a message m by m\longmapsto c=mG, where the arithmetical operations are performed in F. The matrix G is called the generator matrix and c is the codeword transmitted to the receiver.

Here, in a nutshell, is the mystery at the heart of this post.

These schemes of Hill and Hamming, while algebraically very similar, have quite different aims. One is intended for secure communication, the other for reliable communication. However, in an unpublished paper [H5], Hill developed a hybrid encryption/error-detection scheme, what we shall call “Hill codes” (described in more detail below).

Why wasn’t Hill’s result published, and why isn’t Hill, more than Hamming, known as a pioneer of error-correcting codes?

Perhaps Hill himself hinted at the answer. In an overly optimistic statement, Hill wrote (italics mine):

Further problems connected with checking operations in finite fields will be treated in another paper. Machines may be devised to render almost quite automatic the evaluation of checking elements c_1,\dots,c_q according to any proposed reference matrix of the general type described in Section 7, whatever the finite field in which the operations are effected. Such machines would enable us to dispense entirely with tables of any sort, and checks could be determined with great speed. But before checking machines could be seriously planned, the following problem — which is one, incidentally, of considerable interest from the standpoint of pure number theory — would require solution.

– Lester Hill, [H5]

By my interpretation, this suggests Hill wanted to answer the question below before moving on. As simple looking as it is, this problem is still, as far as I know, unsolved at the time of this writing.

Question 1 (Hill’s Problem):
Given k and q, find the largest r such that there exists a k\times r van der Monde matrix with the property that every square submatrix is non-singular.

Indeed, this is closely related to the following question from MacWilliams-Sloane [MS77], which is also still unsolved at this time. (Since Cauchy matrices do give a large family of matrices with the desired property, I’m guessing Hill was not aware of them.)

Question 2: Research Problem (11.1d)
Given k and q, find the largest r such that there exists a k\times r matrix having entries in GF(q) with the property that every square submatrix is non-singular.
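For small parameters, the property in Questions 1 and 2 can at least be checked by brute force. Here is a minimal sketch in Python (my own illustration, not taken from Hill or [MS77]): it tests whether every square submatrix of a given matrix over GF(p), p prime, is non-singular, and tries it on a small 2 x 3 Cauchy matrix over GF(7), a family for which the property is known to hold. The cost grows exponentially, so this is only usable for tiny k and r.

    from itertools import combinations

    def det_mod_p(M, p):
        # determinant of a square matrix over GF(p), p prime, by Gaussian elimination
        M = [row[:] for row in M]
        n = len(M)
        det = 1
        for i in range(n):
            pivot = next((r for r in range(i, n) if M[r][i] % p != 0), None)
            if pivot is None:
                return 0
            if pivot != i:
                M[i], M[pivot] = M[pivot], M[i]
                det = -det
            det = (det * M[i][i]) % p
            inv = pow(M[i][i], p - 2, p)   # inverse of the pivot (p is prime)
            for r in range(i + 1, n):
                f = (M[r][i] * inv) % p
                for c in range(i, n):
                    M[r][c] = (M[r][c] - f * M[i][c]) % p
        return det % p

    def every_square_submatrix_nonsingular(A, p):
        # the property asked for in Question 2, checked by brute force
        k, r = len(A), len(A[0])
        for s in range(1, min(k, r) + 1):
            for rows in combinations(range(k), s):
                for cols in combinations(range(r), s):
                    if det_mod_p([[A[i][j] for j in cols] for i in rows], p) == 0:
                        return False
        return True

    # 2 x 3 Cauchy matrix over GF(7): entries 1/(x_i + y_j) with x = (0, 1), y = (2, 3, 4)
    A = [[4, 5, 2], [5, 2, 3]]
    print(every_square_submatrix_nonsingular(A, 7))   # True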

In this post, after brief biographies, an even more brief description of the Hill cipher and Hamming codes is given, with examples. Finally, we reference previous blog posts where the above-mentioned unpublished paper, in which Hill discovered error-correcting codes, is discussed in more detail.


Short biographies

Who is Hill? Recent short biographies have been published by C. Christensen and his co-authors. The following information is modified slightly from [C14] and [CJT12].

Lester Sanders Hill was born on January 19, 1890 in New York. He graduated from Columbia University in 1911 with a B. A. in Mathematics and earned his Master’s Degree in 1913. He taught mathematics for a few years at Montana University, then at Princeton University. He served in the United States Navy Reserves during World War I. After WWI, he taught at the University of Maine and then at Yale, from which he earned his Ph.D. in mathematics in 1926. His Ph.D. advisor is not definitely known at this writing, but I think a reasonable guess is Wallace Alvin Wilson.

In 1927, he accepted a position with the faculty of Hunter College in New York City, and he remained there, with one exception, until his resignation in 1960 due to illness. The one exception was for teaching at the G.I. University in Biarritz in 1946, during which time he may have been reactivated as a Naval Reserves officer. Hill died January 9, 1961.

Thanks to an interview that David Kahn had with Hill’s widow reported in [C14], we know that Hill loved to read detective stories, to tell jokes and, while not shy, enjoyed small gatherings as opposed to large parties.

Who is Hamming? His life is much better known and details can be readily found in several sources.

Richard Wesley Hamming was born on February 11, 1915, in Chicago. Hamming earned a B.S. in mathematics from the University of Chicago in 1937, a master’s degree from the University of Nebraska in 1939, and a PhD in mathematics (with a thesis on differential equations)
from the University of Illinois at Urbana-Champaign in 1942. In April 1945 he joined the Manhattan Project at the Los Alamos Laboratory, then left to join the Bell Telephone Laboratories in 1946. In 1976, he retired from Bell Labs and moved to the Naval Postgraduate School in Monterey, California, where he worked as an Adjunct Professor
and senior lecturer in computer science until his death on January 7, 1998.

Hill’s cipher

The Hill cipher is a polygraphic cipher invented by Lester S. Hill in the 1920’s. Hill and his colleague Wisner from Hunter College filed a patent for a telegraphic encryption and error-detection device which was roughly based on ideas arising from the Hill cipher. It appears nothing concrete came of their efforts to market the device to the military, banks or the telegraph company (see Christensen, Joyner and Torres [CJT12] for more details). Incidentally, Standage’s excellent book [St98] tells the amusing story of the telegraph company’s failed attempt to add a relatively simplistic error-detection scheme to telegraph codes during that time period.

Some books state that the Hill cipher never saw any practical use in the real world. However, research by historians F. L. Bauer and David Kahn uncovered the fact that the Hill cipher saw some use during World War II, encrypting three-letter groups of radio call signs [C14]. Perhaps insignificant, at least compared to the practical value of Hamming codes, but nonetheless it was a real-world use.

The following discussion assumes an elementary knowledge of matrices. First, each letter is encoded as a number, namely

A \leftrightarrow 0, B \leftrightarrow 1, \dots, Z \leftrightarrow 25. The subset of the integers \{0, 1, \dots , 25\} will be denoted by Z/26Z. This is closed under addition and multiplication (mod 26), and sums and products (mod 26) satisfy the usual associative and distributive properties. For R = Z/26Z, let GL(k,R) denote the set of invertible matrix transformations T:R^k\to R^k (that is, one-to-one and onto linear functions).


The construction

Suppose your message m consists of n capital letters, with no spaces. This may be regarded as an n-tuple M with elements in R = Z/26Z. Identify the message M with a sequence of column vectors {\bf p}\in R^k, i.e., break it into blocks of k letters. A key in the Hill cipher is a k\times k matrix K, all of whose entries are in R, such that the matrix K is invertible. It is important to keep K and k secret.

The encryption is performed by computing {\bf c} = K{\bf p}, and rewriting the resulting vector as a string over the same alphabet. Decryption is performed similarly by computing {\bf p} = K^{-1} {\bf c}.
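To make this concrete, here is a minimal Python sketch of the encryption and decryption (my own illustration, using the column-vector convention {\bf c} = K{\bf p} above and the correspondence A = 0, …, Z = 25; the key and message below are made up, and the modular inverse via pow(…, -1, 26) needs Python 3.8 or later):

    def to_nums(s):
        return [ord(ch) - ord('A') for ch in s]

    def to_text(nums):
        return ''.join(chr(n % 26 + ord('A')) for n in nums)

    def mat_vec(M, v, m=26):
        # matrix-vector product mod m
        return [sum(M[i][j] * v[j] for j in range(len(v))) % m for i in range(len(M))]

    def inv_2x2(M, m=26):
        # inverse of a 2x2 matrix mod m (requires gcd(det, m) = 1)
        a, b, c, d = M[0][0], M[0][1], M[1][0], M[1][1]
        di = pow((a * d - b * c) % m, -1, m)
        return [[(d * di) % m, (-b * di) % m], [(-c * di) % m, (a * di) % m]]

    def hill(msg, M, m=26):
        # apply M to successive blocks of len(M) letters; the message length
        # is assumed to be a multiple of the block size
        nums, k, out = to_nums(msg), len(M), []
        for i in range(0, len(nums), k):
            out += mat_vec(M, nums[i:i + k], m)
        return to_text(out)

    K = [[3, 3], [2, 5]]                        # det = 9, which is invertible mod 26
    ciphertext = hill("HELP", K)                # encryption: c = K p for each block
    plaintext = hill(ciphertext, inv_2x2(K))    # decryption: p = K^{-1} c
    print(ciphertext, plaintext)                # HIAT HELP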

Example 1: Suppose m is the message “BWGN”. Transcoding into numbers, the plaintext is rewritten p_0=1, p_1=22, p_2=6, p_3=13. Suppose the key is
K=\left(\begin{array}{rr} 1 & 3 \\ 5 & 12 \end{array}\right).
Using Hill’s encryption above on the blocks {\bf p}=(1,22) and {\bf p}=(6,13) gives c_0=15, c_1=9, c_2=19, c_3=4, that is, the ciphertext “PJTE”. (Verification is left to the reader as an exercise.)

Security concerns: this cipher is linear, so it can be broken by a known-plaintext attack.
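To sketch why (a standard observation, not something specific to Hill’s papers): suppose an attacker knows k plaintext blocks {\bf p}_1,\dots,{\bf p}_k together with their ciphertext blocks {\bf c}_i = K{\bf p}_i. If the k\times k matrix P whose columns are the {\bf p}_i happens to be invertible mod 26, then the matrix C whose columns are the {\bf c}_i satisfies C = KP, so the key is recovered as K = CP^{-1} (mod 26).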


Hamming codes

Richard Hamming is a pioneer of coding theory, introducing the binary Hamming codes in the late 1940’s. In the days when a computer error could crash the computer and force the programmer to retype his punch cards, Hamming, out of frustration, designed a system whereby the computer could automatically correct certain errors. The family of codes named after him can easily correct one error.
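As a concrete illustration (my own sketch, not Hamming’s original presentation), here is the binary [7,4] Hamming code in Python, with parity bits in positions 1, 2, 4 and data bits in positions 3, 5, 6, 7. The syndrome of a received word is the XOR of the positions holding a 1; it is 0 for a codeword and, after a single bit flip, points directly at the corrupted position.

    def encode(d):
        # d = [d1, d2, d3, d4], a list of 4 bits
        d1, d2, d3, d4 = d
        p1 = d1 ^ d2 ^ d4            # parity over positions 3, 5, 7
        p2 = d1 ^ d3 ^ d4            # parity over positions 3, 6, 7
        p4 = d2 ^ d3 ^ d4            # parity over positions 5, 6, 7
        return [p1, p2, d1, p4, d2, d3, d4]   # codeword, positions 1..7

    def correct(c):
        # syndrome = XOR of the (1-based) positions that hold a 1
        s = 0
        for pos, bit in enumerate(c, start=1):
            if bit:
                s ^= pos
        if s:
            c = c[:]
            c[s - 1] ^= 1            # flip the corrupted bit back
        return c

    word = encode([1, 0, 1, 1])      # [0, 1, 1, 0, 0, 1, 1]
    received = word[:]
    received[5] ^= 1                 # a single transmission error (position 6)
    print(correct(received) == word) # True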


Hill’s unpublished paper

While he was a student at Yale, Hill published three papers in Telegraph and Telephone Age [H1], [H2], [H3]. In these papers Hill described a mathematical method for checking the accuracy of telegraph communications. There is some overlap between these papers and [H5], so it seems likely to me that Hill’s unpublished paper [H5] dates from this time (that is, during his later years at Yale or early years at Hunter).

In [H5], Hill describes a family of linear block codes over a finite field and an algorithm for error-detection (which can be easily extended to error-correction). In it, he states the construction of what I’ll call the “Hill codes” (defined below), gives numerous computational examples, and concludes by recording Hill’s Problem (stated above as Question 1). It is quite possibly Hill’s best work.

Here is how Hill describes his set-up.

Our problem is to provide convenient and practical accuracy checks upon a sequence of r elements f_1, f_2, \dots, f_r in a finite algebraic field F. We send, in place of the simple sequence f_1, f_2, \dots, f_r, the amplified sequence f_1, f_2, \dots, f_r, c_1, c_2, \dots, c_k consisting of the “operand” sequence and the “checking” sequence.

– Lester Hill, [H5]

Then Hill continues as follows. Let F=GF(p) denote the finite field having p elements, where p>2 is a prime number. The checking sequence contains k elements of F as follows:
c_j = \sum_{i=1}^r a_i^j f_i,
for j = 1, 2, \dots, k. The checks are to be determined by means of a
fixed matrix
A = \left( \begin{array}{cccc} a_{1} & a_{2} & \dots & a_{r} \\ a_{1}^2 & a_{2}^2 & \dots & a_{r}^2 \\ \vdots & & & \vdots \\ a_{1}^k & a_{2}^k & \dots & a_{r}^k \\ \end{array} \right)
of elements of F, the matrix having been constructed according to the criteria in Hill’s Problem above. In other words, if the operand sequence (i.e., the message) is the vector {\bf f} = (f_1, f_2, \dots, f_r), then the amplified sequence (or codeword in the Hill code) to be transmitted is

{\bf c} = {\bf f}G,
where G = \left( I_r, A^T \right), with A^T denoting the transpose of A (so that the dimensions agree), and where I_r denotes the
r\times r identity matrix. The Hill code is the row space of G.
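Here is a minimal Python sketch of the encoding and error-detection just described (my own illustration; the prime, the elements a_i, and the message are arbitrary small choices made for the example, not taken from [H5]):

    p = 11                           # the prime, so F = GF(11)
    a = [1, 2, 3, 4, 5]              # a_1, ..., a_r: distinct nonzero elements of F
    k = 2                            # number of checking elements

    def checks(f):
        # checking sequence c_j = sum_i a_i^j * f_i, for j = 1, ..., k
        return [sum(pow(ai, j, p) * fi for ai, fi in zip(a, f)) % p
                for j in range(1, k + 1)]

    def encode(f):
        # the amplified sequence: the operand sequence followed by its checks
        return list(f) + checks(f)

    def has_valid_checks(word):
        # recompute the checks; a mismatch means a transmission error is detected
        r = len(word) - k
        return checks(word[:r]) == word[r:]

    f = [3, 1, 4, 1, 5]              # operand sequence (the message)
    w = encode(f)                    # [3, 1, 4, 1, 5, 2, 8]
    print(has_valid_checks(w))       # True
    w[2] = (w[2] + 1) % p            # garble one symbol in transmission
    print(has_valid_checks(w))       # False -- the error is detected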

We conclude with one more open question.

Question 3:
What is the minimum distance of a Hill code?

The minimum distance of any Hamming code is 3.

Do all sufficiently long Hill codes have minimum distance greater than 3?


Summary

Most books today (for example, the excellent MAA publication written by Thompson [T83]) date the origins of the theory of error-correcting codes to the late 1940s, due to Richard Hamming. However, this post argues that the actual birth is in the 1920s, due to Lester Hill. Topics discussed include why Hill’s discoveries weren’t publicly known until relatively recently, what Hill actually did that trumps Hamming, and some open (mathematical) questions connected with Hill’s work.

For more details, see these previous blog posts.

Acknowledgements: Many thanks to Chris Christensen and Alexander Barg for
helpful and encouraging conversations. I’d like to explicitly credit Chris Christensen, as well as historian David Kahn, for the original discoveries of the source material.


Bibliography

[C14] C. Christensen, Lester Hill revisited, Cryptologia 38(2014)293-332.

[CJT12] ——, D. Joyner and J. Torres, Lester Hill’s error-detecting codes, Cryptologia 36(2012)88-103.

[H1] L. Hill, A novel checking method for telegraphic sequences, Telegraph and
Telephone Age (October 1, 1926), 456 – 460.

[H2] ——, The role of prime numbers in the checking of telegraphic communications, I, Telegraph and Telephone Age (April 1, 1927), 151 – 154.

[H3] ——, The role of prime numbers in the checking of telegraphic communications, II, Telegraph and Telephone Age (July 16, 1927), 323 – 324.

[H4] ——, Lester S. Hill to Lloyd B. Wilson, November 21, 1925. Letter.

[H5] ——, Checking the accuracy of transmittal of telegraphic communications by means of operations in finite algebraic fields, undated and unpublished notes, 40 pages.
(hill-error-checking-notes-unpublished)

[MS77] F. MacWilliams and N. Sloane, The Theory of Error-Correcting Codes, North-Holland, 1977.

[Sh] A. Shokrollahi, On cyclic MDS codes, in Coding Theory and Cryptography: From Enigma and Geheimschreiber to Quantum Theory, (ed. D. Joyner), Springer-Verlag, 2000.

[St98] T. Standage, The Victorian Internet, Walker & Company, 1998.

[T83] T. Thompson, From Error-Correcting Codes Through Sphere Packings to Simple Groups, Mathematical Association of America, 1983.

Chess problem 1 by Karl Fabel

I found the following problem in the book C. Bandelow, Inside Rubik’s Cube and Beyond, Birkhauser, 1982.

White to play and mate in 182 moves.

 

Here are the pieces and their locations:

Black pieces:

  • pawns at b3, b6, c7, g4, g6, g7, h7
  • knights at a8, f1, h3
  • bishops at a7, g2
  • rook at h2
  • king at h1

White pieces:

  • pawns at b2, b5, c6, g3
  • knights at d1, e2
  • rook at e1
  • king at d8

solution below

.

.

.

.

.

.

.

.

This is a Dec 5, 1999 email from Dror Efraty, with some minor edits:
Hi David,

I send you my analysis of the solution.

This is my analysis of the position:

In the given position few black pieces can move.
If Nh3 moves, white mates in 1 move: Nf2#.
If Bg2 moves, white mates in 2: 1. R:f1 Kg2, 2. Ne3#.
So black can only move his kingside pawns and his Ba7.
Note that after black captures g3 with his pawn (and later, when he moves
other pawns to g3) he can move Nf4 next move, and no mate is possible.
Thus immediately after black has a pawn on g3, white must play
1. N:g3+ Kg1, 2. Ne2+ Kh1 to remove black’s pawn from g3.

This means that if white kills black’s Ba7 and queenside pawns, black
must move either Nh3 or Bg2, which results in mate in 2. But it is
not so easy for white to kill Ba7. The obvious way would be to move Kb7,
but then black plays B:c6+ and Kg2 to release the position; then black
has enough to win the game. Also, if white’s king moves to almost every
other white square on the board, black can check him with either Bg2 or
Nh3 and release the position. There are 2 exceptions: c8 (current
position) and a4.
Now, white can only kill black’s Ba7 on b8 and not on a7. But this can’t
happen if all white’s moves are king moves on black squares, because then
it takes white an even number of moves to return with his king to c8, and
during that time black moves Ba7-b8-a7-b8-a7, so that when white moves
Kd8-c8 black moves Bb8-a7. Also, white can’t move either of his knights,
because this would let black move Nh3 and release the position, so white
can only move his king.
So, as explained, white can kill Ba7 only if he can make an odd number of
moves with his king and return to c8. This can be done in 19 moves:
Kd8-e7-f8:g7-f6-e5-d4-c3-b4-a4-a3-b4-c3-d4-e5-f6-e7-d8-c8.
And after black’s pawn g7 is gone, white can do it in 17 moves:
Kd8-e7-f6-e5-d4-c3-b4-a4-a3-b4-c3-d4-e5-f6-e7-d8-c8.
After each such sequence, black’s bishop is on a7 and can’t move to b8,
so black must use up a spare pawn move.
Black has 8 pawn moves to spare: h7-h6, h6-h5, h5-h4, h4:g3, g4-g3, g6-g5,
g5-g4, and g4-g3 again. As noted above, after the moves h4:g3 and g4-g3,
white must play the moves N:g3+ Kg1, Ne2+ Kh1 so that black won’t release
the position.
So the sequence for the mate is:
white’s 19-move king trip (including killing g7; meanwhile black moves
with his Ba7)
19. … h7-h6
20.-36. white king’s 17 moves trip
36. … h6-h5
37.-53. white king’s 17 moves trip
53. … h5-h4
54.-70. white king’s 17 moves trip
70. … h4:g3
71. N:g3+ Kg1, 72. Ne2+ Kh1
73.-89. white king’s 17 moves trip
89. … g4-g3
90. N:g3+ Kg1, 91. Ne2+ Kh1
92.-108. white king’s 17 moves trip
108. … g6-g5
109.-125. white king’s 17 moves trip
125. … g5-g4
126.-142. white king’s 17 moves trip
142. … g4-g3
143. N:g3+ Kg1, 144. Ne2+ Kh1
145.-161. white king’s 17 moves trip
162. … Ba7-b8
163. K:b8 Bg2 – somewhere,
164. R:f1+ Kg2, and finally 165. Ne3#

Black can save his g7 pawn by moving g6-g5 and g7-g6, but
this means that black has pawns on g6 and g7, and white can make shorter
odd-move trips to h7. He can do it only after black moves h7-h6, otherwise
black plays Nf4+ or Nf2+. Now, white has an eleven-move trip:
Kd8-e7-f8-g7-h8-h7-g7-f8-e7-d8-c8.

So, the count is:

1.-19. white king trip, meanwhile black moves g6-g5 and g7-g6
19. … h7-h6
20.-30. 11 moves trip, … h6-h5
31.-41. 11 moves trip, … h5-h4
42.-52. 11 moves trip, … h4:g3
53. N:g3+ Kg1, 54. Ne2+ Kh1
55.-71. 17 moves trip, … g4-g3
72. N:g3+ Kg1, 73. Ne2+ Kh1
74.-90. 17 moves trip, … g5-g4
91.-107. 17 moves trip, … g4-g3
108. N:g3+ Kg1, 109. Ne2+ Kh1
110.-126. 17 moves trip, … g6-g5
127.-143. 17 moves trip, … g5-g4
144.-160. 17 moves trip, … g4-g3
161. N:g3+ Kg1, 162. Ne2+ Kh1
163.-179. 17 moves trip
179. … Bb8, 180. K:b8 Bg2-somewhere, 181. R:f1+ Kg2, 182. Ne3#

So, these are the whole 182 moves. Another Fabel masterpiece.

Dror.