
Artificial Intelligence for Automating Seismic Horizon Picking

By Norman Mark, Oil and Gas Exploration Consultant, San Diego, CA USA


Abstract

Fully automated seismic horizon picking dramatically shortens the time between acquiring seismic data and picking drilling targets. Oil and gas exploration companies worldwide spend millions of tedious man-hours picking horizons on seismic data to produce inventories of drilling targets. Freeing up that time to better incorporate the geologic and geophysical properties of mapped structures improves the odds of drilling success.


Artificial intelligence is frequently mentioned in today’s news: self-driving cars, vehicle identification, robot vacuums, fingerprint identification, facial recognition, Alexa - these are just a few examples. Why not seismic data interpretation? Here it is demonstrated that automated seismic horizon picking is possible.


From a 2-column text file of time-amplitude pairs, my algorithm produces a reflection-time-ordered text file of all continuous series of connected pixel coordinates, easily read by mapping software. These connected pixels are plotted separately on the input seismic test line’s peaks and troughs. Over ten thousand connected segments were written in less than a minute of computer time, saving many hours of manual labor. Extrapolated to 3D seismic surveys, this time saving will reduce the time it takes to find drilling targets by months.


Introduction/Motivation

An inordinate amount of the time spent generating subsurface maps from seismic data goes to defining horizons by clicking a mouse on connected pixels - connected being the key word.


Before beginning this project I was an oil exploration geophysicist and worked with major and minor oil companies for many years, interpreting and processing seismic data using some of the most advanced software available for those tasks.


Seismic data is extremely expensive to acquire, which drives oil companies to want to get their money’s worth from seismic data – not to mention drilling costs! Great improvements in numerical solutions to the wave equation, computer processing speed, computer network efficiency and lower computer costs have led to seismic images looking more and more like real geology over the last 30 years. An effective seismic processing geophysicist is continuously challenged by new technology.


The appearance of seismic interpretation software in the ’80s sped up map making compared to earlier days, when maps were made using sepias and drafting sets. But today it is still a tedious process, using a mouse to connect the continuous pixels of a geologic horizon reflection. Improvements in the time-consuming process of picking horizons have not kept pace with improvements in image quality.


Most oil companies require a candidate for employment to have an MS in Geoscience. Sadly, the possibly over-qualified new hire will spend exorbitant amounts of time - perhaps most of his or her career - “connecting the dots” like a kindergartner in a coloring book.


Extensive internet searches showed no evidence that horizon picking had been automated. No web-published algorithms seemed to have any relevance, but the literature on facial recognition and fingerprint identification software gave confidence that automating horizon picking is possible. That was the motivation for this project. At the start, identifying faces and matching fingerprints to people seemed far more complex tasks (see references).


Key to the development of the horizon detection algorithm was understanding how seismic horizon pixels are connected and how to sort the horizon coordinates into depth order.
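The published code is not reproduced in this article. What follows is a minimal sketch of one plausible implementation of those two keys, under two stated assumptions: "connected" means extrema on adjacent traces within plus-or-minus one time sample, and depth order falls out of scanning time in the outer loop. Class and method names are illustrative, not from the actual code.

```java
import java.util.*;

// Sketch: link extremum pixels on adjacent traces (within +/-1 time sample)
// into segments, emitted in depth order because time is the outer loop.
public class SegmentLinker {

    /** grid[trace][sample] == true where a local extremum was detected. */
    public static List<List<int[]>> linkSegments(boolean[][] grid) {
        int nTraces = grid.length, nSamples = grid[0].length;
        boolean[][] used = new boolean[nTraces][nSamples];
        List<List<int[]>> segments = new ArrayList<>();
        for (int t = 0; t < nSamples; t++) {        // time first => depth order
            for (int tr = 0; tr < nTraces; tr++) {
                if (!grid[tr][t] || used[tr][t]) continue;
                List<int[]> seg = new ArrayList<>();
                int cTr = tr, cT = t;
                while (true) {                       // follow the horizon rightward
                    used[cTr][cT] = true;
                    seg.add(new int[]{cT, cTr});     // y-x order: time, trace
                    int next = -1;
                    if (cTr + 1 < nTraces) {
                        for (int d : new int[]{0, -1, 1}) {  // flat, up-dip, down-dip
                            int s = cT + d;
                            if (s >= 0 && s < nSamples && grid[cTr + 1][s] && !used[cTr + 1][s]) {
                                next = s;
                                break;
                            }
                        }
                    }
                    if (next < 0) break;
                    cTr++;
                    cT = next;
                }
                segments.add(seg);
            }
        }
        return segments;
    }

    /** Tiny test grid: one 3-trace dipping horizon plus one isolated pixel. */
    public static boolean[][] demoGrid() {
        boolean[][] g = new boolean[3][5];
        g[0][1] = true; g[1][1] = true; g[2][2] = true;  // connected, dipping segment
        g[1][4] = true;                                  // isolated pixel
        return g;
    }

    public static void main(String[] args) {
        List<List<int[]>> segs = linkSegments(demoGrid());
        System.out.println(segs.size() + " segments; first has " + segs.get(0).size() + " pixels");
        // prints: 2 segments; first has 3 pixels
    }
}
```

Segments that begin at shallower reflection times are emitted first, which is what makes the output file reflection-time ordered rather than raster ordered.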


Oil exploration and geophysical service companies combined have millions of seismic line miles, partially interpreted at best. What if a human connecting the dots became unnecessary? What if software could make the hundreds or thousands of often tedious and time-consuming decisions in advance for each seismic image by connecting the adjacent pixels? Then those who interpret seismic data would have much more time available to truly interpret: more time to better understand the geologic and geophysical properties of the structures they have mapped.


More good maps mean more discovered structures. More time spent integrating the geophysical and geological properties into those maps means fewer dry holes. There could be fewer poorly-mapped prospects, without the blurry-eyed, mind-numbing tedium of finding drilling prospects.


Conventional mapping software


Current commercial geophysical mapping software such as Petrel, 3DCanvas, and Kingdom has the built-in facility to extend and propagate individual horizons picked with a mouse along the seismic section being picked, and into the third dimension, by following coded rules for adjacency. The commercial packages do not pick more than one horizon at a time. Most of them can extend a horizon picked on a 2D image into the third dimension: a 3D surface can be formed by the software, but its “seed” must be manually picked.


At the beginning of the project I was shown an example of detailed automated horizon-tracking, but without any ordered text output. Finding peaks and troughs is a far simpler task than finding them in depth order. A simple for-loop will find every peak pixel, but only in top-to-bottom, side-to-side order.
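To illustrate, the raster-order behavior of such a loop can be sketched as follows (class and method names are illustrative, not from the published code):

```java
import java.util.*;

// A simple for-loop finds every peak, but emits them in raster order
// (top to bottom within a trace, then trace by trace) -- not in horizon order.
public class RasterPeakScan {

    /** Returns "trace,sample" for every local maximum, in raster order. */
    public static List<String> findPeaks(double[][] traces) {
        List<String> peaks = new ArrayList<>();
        for (int tr = 0; tr < traces.length; tr++) {           // side to side
            for (int t = 1; t < traces[tr].length - 1; t++) {  // top to bottom
                if (traces[tr][t] > traces[tr][t - 1] && traces[tr][t] > traces[tr][t + 1])
                    peaks.add(tr + "," + t);
            }
        }
        return peaks;
    }

    public static void main(String[] args) {
        double[][] demo = {
            {0.0, 0.9, 0.1, -0.5, 0.7, 0.0},  // peaks at samples 1 and 4
            {0.1, 0.8, 0.2, -0.4, 0.6, 0.1},
        };
        System.out.println(findPeaks(demo)); // prints: [0,1, 0,4, 1,1, 1,4]
    }
}
```

Note the output interleaves the two horizons trace by trace; recovering one horizon as a single ordered series is the part such a loop cannot do.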


Input Data


Figure 1. The proof-of-concept input seismic section is a public domain vibroseis line from Rankin Springs, Australia. It is a relative amplitude plot of 600 traces by 750 4-ms time samples. It shows faulted turnover with several potential hydrocarbon fault traps - possible 4-way closure with many possible fault traps. Its image quality is middle-of-the-road, which makes it an ideal test line.

Worldwide, seismic data quality ranges from looking like a complete waste of money to showing such detail that one can “almost see” fluid movement within the horizons. If I chose the latter data type for the test case it would be said by some that the data set made it easy. That is why an older vintage, noisy but structurally interesting line was selected for this study. As it turned out, the algorithm is not prejudiced: it is blind to overall image quality.


The test line shows enough detail to understand the motivation for acquiring the data in the first place – possible fault traps and possible four-way closure – possible dome or plunging dome. There are patches of long, short, and zero-continuity segments, clear fault terminations, zones of conflicting dip and zones of mostly random noise in the most potentially hydrocarbon-rich, or at least most structurally complex part of the section.


The data were normalized after removing the overall mean from each trace. The brightest red signifies an amplitude of 1. The brightest blue signifies an amplitude of -1. No-color implies an amplitude close to zero - typical for a relative amplitude display.
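The normalization described above (remove the trace mean, then scale so the largest magnitude is 1) can be sketched as follows; the class and method names are illustrative:

```java
public class TraceNormalizer {

    /** Removes the trace mean, then scales so the maximum absolute amplitude is 1. */
    public static double[] normalize(double[] trace) {
        double mean = 0.0;
        for (double v : trace) mean += v;
        mean /= trace.length;

        double maxAbs = 0.0;
        double[] out = new double[trace.length];
        for (int i = 0; i < trace.length; i++) {
            out[i] = trace[i] - mean;                 // demean
            maxAbs = Math.max(maxAbs, Math.abs(out[i]));
        }
        if (maxAbs > 0.0)                             // guard against dead traces
            for (int i = 0; i < out.length; i++) out[i] /= maxAbs;
        return out;
    }

    public static void main(String[] args) {
        // Demeaning {1, 2, 3} gives {-1, 0, 1}; max magnitude is already 1.
        System.out.println(java.util.Arrays.toString(normalize(new double[]{1, 2, 3})));
        // prints: [-1.0, 0.0, 1.0]
    }
}
```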


The data input format is a text file of trace numbers, each followed by two columns - time in milliseconds and seismic amplitude - 600 traces with 750 time samples each.



Figure 2. Tables of local minima, maxima and zeroes were computed from the normalized seismic amplitudes and were the horizon detection code input. The definitions of local minima and maxima are shown at the top. The tables are ordered by trace number and time sample: 1’s for peaks and -1’s for troughs. The ones represent local minima and maxima at particular reflection times but could represent any of many possible computed seismic attributes. The results would only have value, however, if there were visible linear continuity of the attribute within the seismic image.
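Conceptually, the Figure 2 tables reduce each trace to a -1/0/+1 attribute using the local-extremum definitions. A minimal sketch (names are illustrative):

```java
public class ExtremaTable {

    /** +1 at local maxima, -1 at local minima, 0 elsewhere, per time sample. */
    public static int[] label(double[] trace) {
        int[] out = new int[trace.length];
        for (int t = 1; t < trace.length - 1; t++) {
            if (trace[t] > trace[t - 1] && trace[t] > trace[t + 1]) out[t] = 1;        // peak
            else if (trace[t] < trace[t - 1] && trace[t] < trace[t + 1]) out[t] = -1;  // trough
        }
        return out;
    }

    public static void main(String[] args) {
        // One peak at sample 1, one trough at sample 3.
        System.out.println(java.util.Arrays.toString(label(new double[]{0, 1, 0, -1, 0})));
        // prints: [0, 1, 0, -1, 0]
    }
}
```

As the text notes, any attribute with visible linear continuity could be substituted for the amplitude extrema without changing the downstream detection step.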

Figure 3. Results from applying the horizon detection algorithm to the seismic peaks. All continuous horizon segments 10 traces and longer are shown. Random colors for the segments were chosen to enable seeing the individual horizons. It may appear that more seismic reflections should have been detected; they were, but they are not longer than 10 traces.

Figure 4. All connected peak segments longer than 3 pixels. Hardly any red is seen because the found segments perfectly overlay the horizons, proving there is no offset error: the found segments are congruent with the peak waveforms. Eliminating the shortest connections is an effective random noise filter: the shortest segments are in the regions of greatest seismic noise.

Figure 5. All trough horizons longer than 10 traces posted on negative seismic amplitudes. The assigned colors of the segments/horizons are random to enable the individual lengths to be observed. The segments are congruent with the darkest blue, proving there is no offset error.

To prove that the algorithm found all the connected peaks and troughs, their coordinates (x-y pairs) were subtracted from those of the input data.
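That verification amounts to a set difference between the input extremum coordinates and the picked coordinates. A sketch, assuming a hypothetical "trace,time" string encoding for coordinate pairs:

```java
import java.util.*;

// Verification sketch: whatever the picker did not claim is left over.
public class PickVerifier {

    /** Coordinates present in the input tables but absent from the picked output. */
    public static Set<String> remaining(Set<String> inputCoords, Set<String> pickedCoords) {
        Set<String> left = new TreeSet<>(inputCoords);  // copy, keep sorted for display
        left.removeAll(pickedCoords);                   // set difference
        return left;
    }

    public static void main(String[] args) {
        Set<String> input  = new HashSet<>(Arrays.asList("1,100", "2,100", "3,104"));
        Set<String> picked = new HashSet<>(Arrays.asList("1,100", "2,100"));
        System.out.println(remaining(input, picked)); // prints: [3,104]
    }
}
```

If the picker found every connection, only isolated pixels remain in the difference, which is exactly what Figure 6 shows.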


Figure 6. Detail of remaining peak and trough pixels after horizon detection. These are from the upper left corner and each tiny square represents an x-y coordinate. There are no connections longer than 2 traces.

Results


The images of horizons posted on the peak and trough seismic amplitude sections verify proof-of-concept. Detected horizon lengths can be extended by conditioning the data with coherence filters, as long as they do not give the results an artificial-looking appearance. For testing purposes a mild continuity filter was applied, and more horizons were captured (see Figure 9).


The remaining-pixels display (Figure 6) proves that every connection was found by the software, because the remaining pixels are isolated – surrounded by zeroes.


The results also show perfect congruence of the detected peak and trough segments with the seismic local maxima and local minima. There is no offset error. The code finds the absolute horizon position without approximating horizon position.


The output text files are in a convenient format easily read by mapping software and humans – reflection time vs distance order (y-x pairs).


The Code is Extensible to 3D


All commercial 3D seismic interpretation software, as previously mentioned, has the facility to build surfaces connecting horizons at the same level in adjacent seismic lines along the z-axis or y-axis as horizons are picked.


If the test line here were part of a 3D survey, the output of the horizon detector code could be used as seeds to build 3D surfaces.


Details on Input Data


The input data is a text file of trace labels, each followed by a two-column table of time in milliseconds and amplitude: 600 traces, 750 time samples each, exactly as the public domain dataset was found online.


Only one column - amplitude - would have been necessary had the sample rate been published. It is a convenient format; no research was required to interpret a complex trace header.


The type of amplitudes/signal is transparent to the code. Whether the values are amplitudes in time or depth, instantaneous frequency, phase, or velocity - as long as there is visual continuity, my algorithm will track it.


Only this data set was used for this study. The amplitudes were used to compute and plot the relative amplitude seismic stacks shown in the figures; the variation of amplitude is why the seismic sections vary in color intensity. After producing the seismic section plots, the above data were used to compute tables of local minima and local maxima; the concept is illustrated in Figure 2.


Figure 7. Detail near the end of the detected-peak-horizons output file. It lists horizon coordinates in y-x (time vs. trace) order. There were 5,082 segments found on peak amplitudes down to 745 milliseconds reflection time. The text file shown is in a format which can easily be read, or modified to be read, by current seismic mapping software. Some horizons are 4 pixels long; others are several hundred pixels long.

Figure 8. Zoomed out view of the output text file showing some of the longest connected segments.

Figure 9. Left - horizons longer than 100 traces posted on seismic peaks. Right - horizons longer than 100 traces with a mild continuity filter applied.

Figure 10. Horizons longer than 100 traces posted on seismic troughs.

Results and Discussion of Time Saved by Automation


The algorithm works extremely well, in a minuscule fraction of the time it would take using commercial mapping software today. On a Core i7 laptop, both the peak and trough segments - over 5,000 each - were picked in just under a minute.


Preconditioning the data with an effective coherence filter will enable the algorithm to find many longer segments than it normally would on a noisy seismic section.


All coding was in Java, chosen for its speed and its greater readability than C++.


If the subject seismic line were part of a 3D seismic survey in which all lines were the same length and, as is typical, there were both inlines and crosslines with the same number of traces as the test line here - say 600 inlines and 600 crosslines (1,200 seismic lines total) - picking would require approximately 1,200 minutes, or 20 hours, of computer processing time. On a distributed network of many nodes the total processing time would be much less: on a small 20-node network, about 1 hour.
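The estimate is straightforward arithmetic on the measured roughly-one-minute-per-line figure; a small helper makes the numbers explicit (the 20-node network is the hypothetical from the text):

```java
public class RuntimeEstimate {

    /** Hours of wall-clock time for a survey, split evenly across nodes. */
    public static double hoursFor(int lines, double minutesPerLine, int nodes) {
        return lines * minutesPerLine / 60.0 / nodes;
    }

    public static void main(String[] args) {
        System.out.println(hoursFor(1200, 1.0, 1));  // prints: 20.0  (single machine)
        System.out.println(hoursFor(1200, 1.0, 20)); // prints: 1.0   (20-node network)
    }
}
```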


Assuming the input line here is typical and contains the average number of segments per line in this hypothetical 3D survey, the number of horizons discovered would be 1,200 lines x 10,000+ peak and trough horizons per line - a total of over 12 million.


A single geophysicist interpreting this hypothetical survey would spend months picking horizons - hopefully the key horizons - and end up with only a fraction of the horizons in the volume interpreted.


The time saved by automation will allow geoscientists to spend much more time incorporating geological and geophysical data into their seismic maps. The months saved will dramatically speed up decisions on whether and where to drill, at a time when oil and gas inventories are painfully ($$$) low.



References


1. Kellman, P. J., Mnookin, J. L., Erlikhman, G., Garrigan, P., Ghose, T., Mettler, E., Charlton, D., and Dror, I. E., "Forensic Comparison and Matching of Fingerprints: Using Quantitative Image Measures for Estimating Error Rates through Understanding and Predicting Difficulty," PLOS ONE, May 2, 2014. https://doi.org/10.1371/journal.pone.0094617


2. H. Zhou, A. Mian, L. Wei, D. Creighton, M. Hossny and S. Nahavandi, "Recent Advances on Singlemodal and Multimodal Face Recognition: A Survey," in IEEE Transactions on Human-Machine Systems, vol. 44, no. 6, pp. 701-716, Dec. 2014, doi: 10.1109/THMS.2014.2340578.


The views, interpretations, and conclusions expressed within this article represent those of the author (authors or other entities) and are not necessarily shared or representative of the GSH, GSHJ, or any other entity associated with the journal or society.





1 comment



Jeremy Dyer
May 29, 2022

A promising candidate for future commercial development. Looking forward to seeing the results in 3D.
