==Introduction==

This paper introduces an environment for reproducible computational experiments developed as part of the "Madagascar" software package. To reproduce the example experiments in this paper, you can download Madagascar from https://www.ahay.org . At the moment, the main Madagascar interface is the Unix shell command line, so you will need a Unix/POSIX system (Linux, Mac OS X, Solaris, etc.) or a Unix emulation under Windows (Cygwin, SFU, etc.). Our focus, however, is not only on the particular tools we use in our research but also on the general philosophy of reproducible computations.

===Reproducible research philosophy===

Peer review is the backbone of scientific progress. From the days of the ancient alchemists, who worked secretly on magical solutions to unsolvable problems, modern science has come a long way to become a social enterprise in which the community openly publishes and verifies hypotheses, theories, and experimental results. By reproducing and verifying previously published research, a researcher can take new steps to advance the progress of science.

Traditionally, scientific disciplines are divided into theoretical and experimental studies. The reproduction and verification of theoretical results usually require only imagination (apart from pencil and paper), while experimental results are verified in laboratories using equipment and materials similar to those described in the publication.

During the last century, computational studies emerged as a new scientific discipline. Computational experiments are carried out on a computer by applying numerical algorithms to digital data. How reproducible are such experiments? On one hand, reproducing the result of a numerical experiment is difficult: the reader needs access to precisely the same kind of input data, software, and hardware as the publication's author, and it is often difficult or impossible to provide detailed specifications for these components. On the other hand, essential components of computational systems, such as operating systems and file formats, are becoming increasingly standardized, and new components can be shared in principle because they represent digital information transferable over the Internet.

The practice of software sharing has fueled the miraculously efficient development of Linux, Apache, and many other open-source software projects. Its proponents often refer to this ideology as an analog of the scientific peer review tradition. Eric Raymond, a well-known open-source advocate, writes (Raymond, 2004<ref>Raymond, E. S., 2004, The art of UNIX programming: Addison-Wesley.</ref>):

<blockquote> Abandoning the habit of secrecy in favor of process transparency and peer review was the crucial step by which alchemy became chemistry. In the same way, it is beginning to appear that open-source development may signal the long-awaited maturation of software development as a discipline. </blockquote>

While software development tries to imitate science, computational science must borrow from the open-source model to sustain itself as a fully scientific discipline. In the words of Randy LeVeque, a prominent mathematician (LeVeque, 2006<ref>LeVeque, R. J., 2006, Wave propagation software, computational science, and reproducible research: Presented at the Proc. International Congress of Mathematicians.</ref>),

<blockquote> Within the world of science, computation is now rightly seen as a third vertex of a triangle, complementing experiment and theory.
However, as it is now often practiced, one can make a good case that computing is the last refuge of the scientific scoundrel [...] Where else in science can one get away with publishing observations that are claimed to prove a theory or illustrate the success of a technique without having to give a careful description of the methods used in sufficient detail that others can attempt to repeat the experiment? [...] Scientific and mathematical journals are filled with pretty pictures these days of computational experiments that the reader has no hope of repeating. Even brilliant and well-intentioned computational scientists often do a poor job of presenting their work in a reproducible manner. The methods are often very vaguely defined, and even if they are carefully defined, they would normally have to be implemented from scratch by the reader in order to test them. </blockquote>

In computer science, the concept of publishing and explaining computer programs goes back to the idea of ''literate programming'' promoted by Knuth (1984<ref>Knuth, D. E., 1984, Literate programming: Computer Journal, '''27''', 97--111.</ref>) and extended by many other researchers (Thimbleby, 2003<ref>Thimbleby, H., 2003, Explaining code for publication: Software - Practice & Experience, '''33''', 975--1001.</ref>). In his 2004 lecture on "Better Programming," Harold Thimbleby notes<ref>http://www.uclic.ucl.ac.uk/harold/</ref>

<blockquote> We want ideas, and in particular programs, that work in one place to work elsewhere. One form of objectivity is that published science must work elsewhere than just in the author's laboratory or even just in the author's imagination; this requirement is called ''reproducibility''. </blockquote>

<!-- The quest for peer review and reproducibility is vital for computational geosciences and computational geophysics in particular. The very first paper published in ''Geophysics'' was titled "Black Magic in Geophysical Prospecting" () and presented an account of different "magical" methods of oil explorations promoted by entrepreneurs in the early days of the geophysical exploration industry. Although none of these methods exist today, it is not a secret that industrial practice is full of nearly magical tricks, often hidden besides a scientific appearance. Only a scrutiny of peer review and result verification can help us distinguish magic from science and advance the latter. -->

Nearly ten years ago, the technology of reproducible research in geophysics was pioneered by Jon Claerbout and his students at the Stanford Exploration Project (SEP). SEP's system of reproducible research requires the author of a publication to document the creation of numerical results from the input data and software sources so that others can test and verify the reproducibility of the results (Claerbout, 1992a<ref>Claerbout, J., 1992a, Electronic documents give reproducible research a new meaning: 62nd Ann. Internat. Mtg, 601--604, Soc. of Expl. Geophys.</ref>; Schwab et al., 2000<ref>Schwab, M., M. Karrenbach, and J. Claerbout, 2000, Making scientific computations reproducible: Computing in Science & Engineering, '''2''', 61--67.</ref>). The discipline of reproducible research was also adopted and popularized in the statistics and wavelet theory community by Buckheit and Donoho (1995<ref>Buckheit, J. and D. L. Donoho, 1995, Wavelab and reproducible research, ''in'' Wavelets and Statistics, volume '''103''', 55--81. Springer-Verlag.</ref>).
It is referenced in several popular wavelet theory books (Hubbard, 1998<ref>Hubbard, B. B., 1998, The world according to wavelets: The story of a mathematical technique in the making: AK Peters.</ref>; Mallat, 1999<ref>Mallat, S., 1999, A wavelet tour of signal processing: Academic Press.</ref>). Pledges for reproducible research appear nowadays in fields as diverse as bioinformatics (Gentleman et al., 2004<ref>Gentleman, R. C., V. J. Carey, D. M. Bates, B. Bolstad, M. Dettling, S. Dudoit, B. Ellis, L. Gautier, Y. Ge, J. Gentry, K. Hornik, T. Hothorn, W. Huber, S. Iacus, R. Irizarry, F. Leisch, C. Li, M. Maechler, A. J. Rossini, G. Sawitzki, C. Smith, G. Smyth, L. Tierney, J. Y. Yang, and J. Zhang, 2004, Bioconductor: open software development for computational biology and bioinformatics: Genome Biology, '''5''', R80.</ref>), geoinformatics (Bivand, 2006<ref>Bivand, R., 2006, Implementing spatial data analysis software tools in R: Geographical Analysis, '''38''', 23--40.</ref>), and computational wave propagation (LeVeque, 2006<ref>LeVeque, R. J., 2006, Wave propagation software, computational science, and reproducible research: Presented at the Proc. International Congress of Mathematicians.</ref>). However, the adoption of reproducible research practice by computational scientists has been slow, partly because of complicated and inadequate tools.

===Tools for reproducible research===

The reproducible research system developed at Stanford is based on "make" (Stallman et al., 2004<ref>Stallman, R. M., R. McGrath, and P. D. Smith, 2004, GNU make: A program for directing recompilation: GNU Press.</ref>), a Unix software construction utility. Initially, SEP used "cake," a dialect of "make" (Nichols and Cole, 1989<ref>Nichols, D. and S. Cole, 1989, Device independent software installation with CAKE, ''in'' SEP-61, 341--344. Stanford Exploration Project.</ref>; Claerbout and Nichols, 1990<ref>Claerbout, J. F. and D. Nichols, 1990, Why active documents need cake, ''in'' SEP-67, 145--148. Stanford Exploration Project.</ref>; Claerbout, 1992b<ref>Claerbout, J. F., 1992b, How to use Cake with interactive documents, ''in'' SEP-73, 451--460. Stanford Exploration Project.</ref>; Claerbout and Karrenbach, 1993<ref>Claerbout, J. F. and M. Karrenbach, 1993, How to use cake with interactive documents, ''in'' SEP-77, 427--444. Stanford Exploration Project.</ref>). The system was converted to "GNU make," a more standard dialect, by Schwab and Schroeder (1995<ref>Schwab, M. and J. Schroeder, 1995, Reproducible research documents using GNUmake, ''in'' SEP-89, 217--226. Stanford Exploration Project.</ref>).

The "make" program keeps track of dependencies between the different components of the system and the software construction targets, which, in the case of a reproducible research system, become figures and manuscripts. The author specifies the targets and the commands for their construction in "makefiles," which serve as databases defining the dependencies between sources and targets. A dependency-based system leads to rapid development because, when one of the sources changes, only the parts that depend on that source get recomputed.
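To make the dependency idea concrete, here is a minimal hypothetical makefile sketch. The file names and commands are invented for illustration and are not taken from SEP's actual system; they merely show how each target declares the sources it depends on.

<pre>
# Hypothetical makefile sketch (illustration only).
# The figure is regenerated only when the data or the processing
# script changes; the paper is regenerated only when the manuscript
# or the figure changes. (Recipe lines must be indented with a tab.)

figure.eps: data.bin process.sh
	sh process.sh data.bin > figure.eps

paper.pdf: paper.tex figure.eps
	pdflatex paper.tex
</pre>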
Buckheit and Donoho (1995<ref>Buckheit, J. and D. L. Donoho, 1995, Wavelab and reproducible research, ''in'' Wavelets and Statistics, volume '''103''', 55--81. Springer-Verlag.</ref>) based their system on MATLAB, a popular integrated development environment produced by MathWorks (Sigmon and Davis, 2001<ref>Sigmon, K. and T. A. Davis, 2001, MATLAB primer, sixth edition: Chapman & Hall.</ref>). While MATLAB is an adequate tool for prototyping numerical algorithms, it may not be sufficient for the large-scale computations typical of many applications in computational geophysics.

"Make" is a handy utility employed by thousands of software development projects. Unfortunately, it is not well designed from the perspective of user experience. "Make" employs an obscure and limited special language (a mixture of Unix shell and special-purpose commands), which often confuses inexperienced users. According to Peter van der Linden, a software expert from Sun Microsystems (van der Linden, 1994<ref>van der Linden, P., 1994, Expert C programming: Prentice Hall.</ref>),

<blockquote> "Sendmail" and "make" are two well-known programs that are pretty widely regarded as originally being debugged into existence. That's why their command languages are so poorly thought out and difficult to learn. It's not just you -- everyone finds them troublesome. </blockquote>

The "make" command language is inconvenient not only because of its obscurity but also because of its limited capabilities. The reproducible research system developed by Schwab et al. (2000<ref>Schwab, M., M. Karrenbach, and J. Claerbout, 2000, Making scientific computations reproducible: Computing in Science & Engineering, '''2''', 61--67.</ref>) includes not only custom "make" rules but also an obscure and hardly portable agglomeration of shell and Perl scripts that extend "make" (Fomel et al., 1997<ref>Fomel, S., M. Schwab, and J. Schroeder, 1997, Empowering SEP's documents, ''in'' SEP-94, 339--361. Stanford Exploration Project.</ref>).

Several alternative systems for dependency-checking software construction have been developed in recent years. One of the most promising new tools is SCons, enthusiastically endorsed by Dubois (2003<ref>Dubois, P. F., 2003, Why Johnny can't build: Computing in Science & Engineering, '''5''', 83--88.</ref>). The initial SCons design won the Software Carpentry competition sponsored by Los Alamos National Laboratory in 2000 in the category of "a dependency management tool to replace make." Some of the main advantages of SCons (illustrated by the sketch following this list) are:

*SCons configuration files are Python scripts. Python is a modern programming language praised for its readability, elegance, simplicity, and power (Rossum, 2000a<ref>Rossum, G. V., 2000a, Python reference manual: Iuniverse Inc.</ref>; Rossum, 2000b<ref>Rossum, G. V., 2000b, Python tutorial: Iuniverse Inc.</ref>). Scales and Ecke (2002<ref>Scales, J. A. and H. Ecke, 2002, What programming languages should we teach our undergraduates?: The Leading Edge, '''21''', 260--267.</ref>) recommend Python as the first programming language for geophysics students.
*SCons offers reliable, automatic, and extensible dependency analysis and creates a global view of all dependencies: no more "make depend," "make clean," or multiple build passes of touching and reordering targets to get all the dependencies.
*SCons has built-in support for many programming languages and systems, including C, C++, Fortran, Java, and LaTeX.
*While "make" relies on timestamps to detect file changes (creating numerous problems on platforms with different system clocks), SCons uses a more reliable detection mechanism, employing MD5 signatures by default. It can detect changes not only in files but also in the commands used to build them.
*SCons provides integrated support for parallel builds.
*SCons provides configuration support analogous to the "autoconf" utility for testing the environment on different platforms.
*SCons is designed from the ground up as a cross-platform tool. It works equally well on POSIX systems (Linux, Mac OS X, Solaris, etc.) and on Windows.
*The stability of SCons is assured by an incremental development methodology utilizing comprehensive regression tests.
*SCons is publicly released under a liberal open-source license<ref>As of this writing, SCons is in a beta version of 0.96, approaching the 1.0 official release. See http://www.scons.org/.</ref>.
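The following SConstruct script is a minimal hypothetical sketch of the SCons configuration style; the program name and source files are assumptions made for illustration and are not part of Madagascar or SEP.

<pre>
# Hypothetical SConstruct script. SCons configuration files are
# ordinary Python scripts, so the full language is available.
env = Environment()

# Build a program from C sources. SCons scans the sources and
# tracks header dependencies automatically; because it compares
# MD5 content signatures (including the build commands themselves)
# rather than timestamps, rerunning "scons" recomputes only what
# actually changed.
env.Program('experiment', ['experiment.c', 'utils.c'])
</pre>

Running "scons" in the directory containing this script performs the build; "scons -j 4" runs the same build with four parallel jobs, and "scons -c" removes the built targets.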
In this paper, we propose to adopt SCons as a new platform for reproducible research in scientific computing.

===Paper organization===

To demonstrate our adoption of SCons for reproducible research, we first describe a couple of simple examples of computational experiments and then show how SCons helps us document our computational results.

<!-- \newpage ==Madagascar open-source code== Madagascar's homepage is http://rsf.sourceforge.net. Madagascar source code is proposed in two versions: [https://sourceforge.net/project/showfiles.php?group_id=162909 stable] and [http://rsf.sourceforge.net/wiki/index.php/Svn-url development]. The stable version is a snapshot of Madagascar at a given time. It was installed on different platforms and tested before being released. Updates are typically done every few months as opposed to the development version, which is updated every few hours or days by a dynamic team of developers. As such, there is no guarantee that the development version will be fully functional and stable at any given time. In the remainder of this paper, we assume that you have successfully installed Madagascar stable version and that you have an Internet connection\footnote{XXX provide alternate means to download Lena.img if no Internet connection XXX}. -->