Sage and the future of mathematics

I am neither a biologist nor a chemist, and maybe neither are you, but suppose we were. If I described a procedure, in “standard” detail, for producing a result XYZ, then you would (drawing on the expertise reasonably expected in the field) follow the steps as described and either reproduce XYZ or, if I was mistaken, fail to reproduce XYZ. This is called scientific reproducibility. It is crucial to what I believe is one of the fundamental principles of science, namely Popper’s Falsifiability Criterion.

More and more people are arguing, correctly in my opinion, that in the computational realm, and in particular in mathematical research that relies on computational experiments, much higher standards are needed. The Ars Technica article linked above suggests that “it’s time for a major revision of the scientific method.” Victoria Stodden likewise argues that one must “facilitate reproducibility. Without this data may be open, but will remain de facto in the ivory tower.” The basic argument is that reproducing computational mathematical experiments is unreasonably difficult, requiring more than a reasonable level of expertise. These days it may (unfortunately) require purchasing very expensive software, possessing very sophisticated programming skills, having access to special hardware, or (worse) guessing at parameters and programming procedures only hinted at by the researcher.

Hopefully, Sage can play the role of a standard bearer for such computational reproducibility. Sage is free and open source, and there is a publicly available server that runs it (sagenb.org).
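To make the idea concrete, here is a minimal sketch (plain Python, not an official Sage workflow, and the experiment itself is a made-up example) of what publishing a reproducible computational experiment can look like: the script fixes and prints its random seed and interpreter version alongside the result, so a reader can rerun it and check the number rather than trust it.

```python
# A minimal sketch of a reproducible computational "experiment":
# publish the code, the seed, and the environment with the result.
import random
import sys

def experiment(seed: int, trials: int) -> float:
    """Estimate pi by Monte Carlo sampling with a fixed, published seed."""
    rng = random.Random(seed)  # local RNG: global state can't interfere
    hits = sum(
        1
        for _ in range(trials)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    return 4.0 * hits / trials

if __name__ == "__main__":
    # Printing these details alongside the result is the whole point:
    # anyone rerunning the same script gets exactly the same number.
    print("python:", sys.version.split()[0])
    print("seed=42, trials=100000")
    print("estimate:", experiment(seed=42, trials=100_000))
```

With expensive closed-source software, even this small courtesy is out of reach for readers who cannot buy a license; with an open system like Sage, the whole experiment travels with the paper.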

Which government agencies should require such reproducibility? In my opinion, all scientific funding agencies (the NSF, etc.) should adopt these higher standards of computational accountability.

3 thoughts on “Sage and the future of mathematics”

  1. A very good and important point. And given that Wolfram Inc. has publicly admitted that the internals of Mathematica are “too complicated” to be analyzed (I can’t find the exact quote, but I’m sure you know the one I mean), that pretty much rules out Mathematica for reproducible mathematical science. I think the time is right for serious researchers to put their emphasis on the use of open-source tools.

  2. Thanks Alasdair.

    Here’s a related post from slashdot:

    +------------------------------------------------------------------+
    | Call For Scientific Research Code To Be Released                 |
    | from the but-then-people-will-see-how-awful-it-is dept.          |
    | posted by Soulskill on Tuesday February 09, @09:41 (Programming) |
    | https://science.slashdot.org/story/10/02/09/1336250/Call-For-Scientific-Research-Code-To-Be|
    +------------------------------------------------------------------+

    Pentagram writes “Professor Ince, writing in the Guardian, has issued a
    call for scientists to [0]make the code they use in the course of their
    research publicly available. He focuses specifically on the [1]topical
    controversies in climate science, and concludes with the view that
    researchers who are able but unwilling to release programs they use
    should not be regarded as scientists. Quoting: ‘There is enough evidence
    for us to regard a lot of scientific software with worry. For example
    Professor Les Hatton, an international expert in software testing
    resident in the Universities of Kent and Kingston, carried out [2]an
    extensive analysis of several million lines of scientific code. He showed
    that the software had an unacceptably high level of detectable
    inconsistencies. For example, interface inconsistencies between software
    modules which pass data from one part of a program to another occurred at
    the rate of one in every seven interfaces on average in the programming
    language Fortran, and one in every 37 interfaces in the language C. This
    is hugely worrying when you realise that just one error — just one — will
    usually invalidate a computer program. What he also discovered, even more
    worryingly, is that the accuracy of results declined from six significant
    figures to one significant figure during the running of programs.'”

    Discuss this story at:
    http://science.slashdot.org/comments.pl?sid=10/02/09/1336250

    Links:
    0. http://www.guardian.co.uk/technology/2010/feb/05/science-climate-emails-code-release
    1. http://science.slashdot.org/story/09/12/05/137203/Scientific-Journal-emNatureem-Finds-Nothing-Notable-In-CRU-Leak
    2. http://www.leshatton.org/IEEE_CSE_297.html

  3. Pingback: On reproducible research « mvngu
