The first constraint to observe when dealing with wide/full-azimuth data is its sheer volume (tens of terabytes). Data manipulation becomes the bottleneck, and the programmer must pay close attention to it. In practice, this means that sorting, FFTs, axis reversal, and transposition are no longer trivial operations, and their number must be kept to a minimum. As a consequence, it is often preferable to rewrite a particular processing tool to operate on the data in its current form rather than reshape the data to fit an existing algorithm. Fortunately, such rewriting usually involves only reordering loops and adding or removing FFTs, as the sketch below illustrates.
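The following is a minimal sketch of this idea in Python/NumPy, under the assumption that the data cube is stored with time as the last (fastest) axis; the function name `bandlimit_in_place`, the box filter, and the `read_shot_gathers` reader are illustrative placeholders, not the API of any particular package. Rather than transposing the cube so that an existing "time-first" routine can be reused, the kernel is written to take the FFT directly along the stored fast axis, so the data never has to be reordered on disk or in memory.

~~~~
import numpy as np

def bandlimit_in_place(cube, dt, fmax):
    """Low-pass filter along the time axis, assumed to be the last (fastest) axis.

    cube : array of shape (shots, receivers, samples), modified in place
    dt   : time sampling interval in seconds
    fmax : maximum frequency to keep, in Hz
    """
    nt = cube.shape[-1]
    freqs = np.fft.rfftfreq(nt, d=dt)            # frequency axis of the real FFT
    taper = (freqs <= fmax).astype(cube.dtype)   # crude box filter up to fmax
    spec = np.fft.rfft(cube, axis=-1)            # FFT along the stored fast axis
    spec *= taper                                # broadcasts over the slow axes
    cube[...] = np.fft.irfft(spec, n=nt, axis=-1)  # back to time, in place
    return cube

# Usage: process one shot gather at a time as it streams off disk, never
# forming (or transposing) the full survey in memory.
#
# for gather in read_shot_gathers(path):        # hypothetical reader
#     bandlimit_in_place(gather, dt=0.004, fmax=60.0)
~~~~

The point of the sketch is that adapting the kernel to the data's natural ordering replaces an expensive out-of-core transpose with a change of the `axis` argument and of the loop over gathers.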
Under these circumstances, clean, documented, maintainable code that can be modified quickly without introducing bugs is a must when working with wide-azimuth data. Collaboration among geographically separated programmers who do not know each other and do not share a common cultural background necessarily imposes these qualities on open-source software. Considerable discipline is required of in-house programmers to achieve the same result. Companies that use open-source software with these qualities will therefore see faster turnaround on wide/full-azimuth projects. Conversely, the emergence of wide/full-azimuth data acquisition represents a great opportunity for community-based geophysical open-source software!