Madagascar Code Patterns
All patterns are in C, unless otherwise noted.
==Loop over ensembles==
This is a simple loop that reads and processes one ensemble at a time: a 1-D array ("trace"), a 2-D array ("gather"), etc.
It is needed for algorithms that act on one ensemble at a time, and it can be parallelized with OpenMP.
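As an illustration (not taken from this page), a minimal sketch of such a loop for a 2-D dataset, using standard RSF calls (sf_input, sf_output, sf_histint, sf_leftsize, sf_floatalloc, sf_floatread, sf_floatwrite); the pass-through program structure is an assumption made for the example:

<c>
#include <rsf.h>

int main(int argc, char* argv[])
{
    int i1, i2, n1, n2;
    float *trace;
    sf_file in, out;

    sf_init(argc, argv);
    in  = sf_input("in");
    out = sf_output("out");

    /* n1 = length of one trace, n2 = number of traces (ensembles) */
    if (!sf_histint(in, "n1", &n1)) sf_error("No n1= in input");
    n2 = (int) sf_leftsize(in, 1);

    trace = sf_floatalloc(n1);

    for (i2 = 0; i2 < n2; i2++) {   /* loop over ensembles */
        sf_floatread(trace, n1, in);
        for (i1 = 0; i1 < n1; i1++) {
            /* act on one ensemble here */
        }
        sf_floatwrite(trace, n1, out);
    }

    return 0;
}
</c>

The per-ensemble computation in the inner loop is where OpenMP parallelization would typically be applied.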
==I/O-optimized loop over samples==
===Description and usage===
It consists of looping over an entire dataset and applying a given procedure to every single sample, regardless of which "trace"/"frame"/"volume" it belongs to. Examples: computing the sum of all elements of a dataset, computing a histogram, performing a clip operation, etc. It uses the BUFSIZ macro defined in stdio.h to ensure efficient stream I/O. Its occurrences can be found easily by grepping for BUFSIZ in the codebase.
===Example===
<c>
int i, n;        /* i: loop counter, n: total number of elements in the dataset */
int nbuf;        /* Number of elements in the I/O buffer */
float *fbuf;     /* I/O array */
sf_file in=NULL; /* Input file. Here it is stdin, but this is not compulsory */

in = sf_input("in");
n = sf_filesize(in);
/* This example uses float as the data type. Any other data type (int, sf_complex, etc.) can be used, as appropriate */
nbuf = BUFSIZ/sizeof(float);
fbuf = sf_floatalloc(nbuf);

for (; n > 0; n -= nbuf) {
    if (nbuf > n) nbuf = n;
    sf_floatread(fbuf, nbuf, in);
    for (i=0; i < nbuf; i++) {
        /* Do computations here */
    }
}
</c>
===Potential for improvement===
* This pattern should be parallelized using OpenMP.
* The GNU C Library documentation states that, when doing I/O on a file (as opposed to a stream), the st_blksize field of the file attributes is a better choice than BUFSIZ; a sketch of querying it follows this list.
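As an illustration only (not from the original page), a minimal sketch of querying st_blksize with fstat(); the helper name io_block_size and the fallback to BUFSIZ are assumptions made for the example:

<c>
#include <stdio.h>     /* BUFSIZ, size_t */
#include <sys/stat.h>  /* fstat(), struct stat */

/* Hypothetical helper: choose an I/O buffer size (in bytes) for an open
   file descriptor, preferring the filesystem's preferred block size
   st_blksize and falling back to BUFSIZ when it is not usable. */
static size_t io_block_size(int fd)
{
    struct stat st;

    if (0 == fstat(fd, &st) && st.st_blksize > 0) {
        return (size_t) st.st_blksize;
    }
    return (size_t) BUFSIZ;
}
</c>

The returned byte count would then be divided by sizeof(float) (or the element type in use) to obtain nbuf, exactly as BUFSIZ is used in the example above.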
==OMP parallelized loop==
===Description and usage===
Shared-memory parallelization using OpenMP.
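As an illustration (not taken from this page), a minimal sketch of a sample loop whose iterations are shared among threads with an OpenMP pragma; the clip_buffer helper and the clip operation are assumptions made for the example:

<c>
/* Illustrative sketch: clip every sample in a buffer, with the loop
   iterations distributed among OpenMP threads. Compile with an OpenMP
   flag such as -fopenmp; without it the pragma is ignored and the code
   runs serially. */
static void clip_buffer(float *fbuf, int nbuf, float clip)
{
    int i;

#pragma omp parallel for
    for (i = 0; i < nbuf; i++) {
        if      (fbuf[i] >  clip) fbuf[i] =  clip;
        else if (fbuf[i] < -clip) fbuf[i] = -clip;
    }
}
</c>

Each iteration touches only its own sample, so no synchronization is needed; a loop with cross-sample dependencies (for example, a running sum) would need a reduction clause instead.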