RSVP to either email@example.com or Richard.Gray@chevron.com (one address only, please).
We will reply to your email only if the auditorium has reached capacity.
LunchBox Geophysics is free! Simply bring your own lunch (refreshments provided) and enjoy.
Many seismic processing techniques have strict requirements on the regular spatial distribution of traces in seismic data. Datasets that do not fulfill these requirements, such as most 3D land surveys, will suffer from poor processing results when these techniques are used. Although not a substitute for well-sampled field data, interpolation can provide useful data preconditioning that allows these techniques to work better, and hence produce a superior result.
Seismic data interpolation has been around for a long time, but only recently have we been able to use complex multi-dimensional algorithms capable of infilling large gaps in wide-azimuth 3D land surveys. This innovation offers great potential for improving results in many key seismic processes, such as migration, velocity analysis, and amplitude variation with offset and azimuth.
Interpolation is probably one of the most challenging tasks in seismic processing, because a large number of factors are involved. Some of these relate to the seismic samples themselves, such as geological structure and original sampling density, but others relate to less obvious issues, like statics, topography, geometry design, and data header interpolation. The success of interpolation depends on the careful execution of each step. In a more general sense, we can look at two different aspects:
- A numerical algorithm that can predict undersampled details of the data without being misled by aliasing and noise. This includes the prediction of rapid amplitude variations (high wavenumbers), which are essential for amplitude variation with offset and azimuth and for true-amplitude migration.
- A strategy to apply this algorithm where the data looks simplest and the sampling is densest. This strategy involves handling very different input and output geometries with different targets, such as increasing fold, improving resolution, and covering gaps.
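To make the first aspect concrete, the sketch below shows the simplest possible form of band-limited (sinc) interpolation via Fourier zero-padding, which recovers intermediate samples exactly when the data are not aliased. This is a toy illustration, not the multidimensional algorithm discussed in the talk, and the function name and setup are invented for this example; real land data routinely violate the no-aliasing assumption, which is precisely what makes the problem hard.

```python
import numpy as np

def fourier_interpolate(trace, factor):
    """Band-limited (sinc) interpolation by zero-padding the spectrum.

    Assumes the input is not spatially aliased; a toy stand-in for the
    far more sophisticated multidimensional algorithms in the talk.
    """
    n = len(trace)
    spec = np.fft.rfft(trace)
    # Zero-pad the positive-frequency spectrum to the new length.
    padded = np.zeros(factor * n // 2 + 1, dtype=complex)
    padded[:len(spec)] = spec
    # Scale by the interpolation factor to preserve amplitudes
    # (irfft normalizes by the output length).
    return np.fft.irfft(padded, n=factor * n) * factor

# Coarsely sampled sinusoid along a spatial axis (e.g. a receiver line).
x = np.arange(16)
coarse = np.sin(2 * np.pi * 2 * x / 16)   # 2 cycles over 16 samples
fine = fourier_interpolate(coarse, 4)      # interpolate to 64 samples

# Every 4th interpolated sample reproduces an original sample.
print(np.allclose(fine[::4], coarse))
```

Once spatial aliasing is present, such a naive spectral approach fails, which is why practical algorithms must exploit additional constraints (sparsity, multiple dimensions, local dip) to distinguish signal from alias.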
In this paper, I will present a flexible strategy for wide-azimuth land data interpolation that has proved successful over a large number of different projects, and discuss the numerical algorithm that makes it possible to preserve subtle details of the data and amplitude variations along all seismic dimensions.
Daniel Trad received his Licenciatura (1994) in Geophysics from the Universidad Nacional de San Juan, Argentina, and his Ph.D. (2001) from the University of British Columbia, Canada. He worked in Argentina and Brazil on robust algorithms for processing electromagnetic data (1993-1997), and then moved to Canada, where he has been working in seismic data processing ever since. His Ph.D. thesis and postdoctoral work were on sparse Radon transforms, interpolation, and migration. Since 2003 he has been a research geophysicist for CGGVeritas in Alberta, Canada. His current main research is in multidimensional interpolation for land data.