Towards automated processing of thin-layer structure data

Although the automated processing of geophysical survey data has reached quite a high level these days, some problems in this field still lack a solution that could be considered generally accepted. One problem that has lately attracted the keenest interest is the interpretation of open-hole data aimed at singling out and correctly identifying inter-layers of very low thickness (comparable with the tool's registration interval), with fast, high-quality 3D modeling of the surveyed well as the end goal. Below we examine the key problems encountered along the way, as well as the methods proposed by PrimeGeo to solve them.

Problem 1. Low resolution of the tools. 3D modeling requires that as much of the available information as possible be used, so curves that differ considerably in resolution are, as a rule, fed to the input of the computing modules. This often leads to a noticeable distortion of the result; in particular, the cases shown below in fig. 1 a) and b) may turn out to be indistinguishable by classical processing methods. The reason is that the measurement at every single point of a well is influenced not only by the rocks directly at that depth but also by those in some vicinity of it. Induction logging, for example, can only estimate formation resistivity values averaged over a 1-2 m depth interval.
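To make the effect concrete, the short sketch below (in Python, with a hypothetical Gaussian vertical response standing in for the real geometric factor of an induction tool) shows how a thin conductive inter-layer is smeared by a tool that averages over roughly 1.5 m of depth: the registered peak is both broader and much lower than the true one.

```python
import numpy as np

# Depth grid with 0.1 m sampling and a "true" blocky conductivity profile
# containing a single thin (0.3 m) conductive inter-layer.
dz = 0.1
z = np.arange(0.0, 20.0, dz)
sigma_true = np.full_like(z, 0.05)            # background conductivity, S/m
sigma_true[(z >= 9.9) & (z < 10.2)] = 1.0     # thin inter-layer (assumed values)

# Hypothetical vertical response ("geometric factor"): a Gaussian with an
# effective width of about 1.5 m, normalized to unit area.
kernel_z = np.arange(-3.0, 3.0 + dz, dz)
g = np.exp(-0.5 * (kernel_z / 0.75) ** 2)
g /= g.sum()

# The registered curve is the true profile smeared by the tool response:
# the thin layer shows up only as a broad, low-amplitude bump.
sigma_logged = np.convolve(sigma_true, g, mode="same")
print(round(sigma_true.max(), 3), round(sigma_logged.max(), 3))
```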


Fig. 1 Example of an increase in the share of a component in the formation composition (a) and example of full substitution of the remaining formation components by this component (b)

The described inconvenience can be removed by applying a deconvolution operation to the measured curves: if we know how the values registered by the tool are formed (this information is contained in the so-called geometric factor, a characteristic of a particular tool), we can calculate the true parameters of the formation, free of the depth averaging. As a result, thin homogeneous inter-layers become easily distinguishable from areas with a continuous change of component composition and, accordingly, the 3D models produced in the process become more precise and detailed.
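A minimal sketch of such a deconvolution is given below. It assumes the same discretized vertical response (geometric factor) as in the previous sketch and uses ordinary Tikhonov-regularized least squares; it illustrates the principle only and is not PrimeGeo's production algorithm.

```python
import numpy as np

def deconvolve_tikhonov(curve, kernel, alpha=1e-3):
    """Recover an estimate of the true formation parameter from a
    depth-averaged curve.

    curve  : measured log, modeled as the true profile convolved with the
             tool's vertical response (its geometric factor);
    kernel : the discretized geometric factor, normalized to unit area;
    alpha  : Tikhonov regularization weight that keeps circuit noise from
             being amplified without bound.
    """
    n, m = len(curve), len(kernel)
    half = m // 2
    # Convolution matrix G such that curve ≈ G @ true_profile.
    G = np.zeros((n, n))
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i - half + m)
        G[i, lo:hi] = kernel[lo - (i - half):hi - (i - half)]
    # Regularized normal equations: (G^T G + alpha·I) x = G^T d.
    return np.linalg.solve(G.T @ G + alpha * np.eye(n), G.T @ curve)

# Applied to sigma_logged and g from the previous sketch, this returns a
# profile in which the thin inter-layer is again sharp and close to its
# true amplitude.
```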

A whole family of deconvolution methods is now at our disposal, covering the full spectrum of problems that arise in 3D modeling. They can be classified by the following characteristics:

1) the number of curves fed to the input;

2) the volume of prior information used;

3) whether the penetration depth of the method is taken into account (for example, several induction logging probes can be used, to some extent, to remove not only the depth averaging but also the radial averaging);

4) the ability to suppress the influence of circuit noise;

5) the underlying mathematical implementation of the method.

As a separate example, it is worth noting that the method of joint processing of curves, based on knowledge of the statistical properties of circuit noise and of the geological cross-section, gives quite a spectacular solution to the problem known in the English-language geophysical literature as alpha processing. It consists in imposing a thin structure on a low-resolution curve using the information registered by other tools. Curiously, the processing in this case is completely symmetrical: the restoration error decreases for every curve examined, so that even the data with the best resolution become more detailed (this is, of course, a purely quantitative rather than qualitative effect).
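The joint statistical method mentioned above is not reproduced here, but the basic idea of alpha processing can be illustrated with a common textbook variant: the low-resolution curve is multiplied by the ratio of a high-resolution curve to its own smoothed version, so that the thin-layer detail of the latter is imposed on the former. The function name and the moving-average window below are illustrative assumptions.

```python
import numpy as np

def alpha_process(low_res, high_res, window=15):
    """Impose the thin-layer structure of a high-resolution curve onto a
    low-resolution one (a simple multiplicative variant of alpha processing).

    Both curves are assumed to be sampled on the same depth grid; `window`
    is the length, in samples, of the moving average that emulates the
    coarse vertical resolution of the low-resolution tool.
    """
    kernel = np.ones(window) / window
    # Smooth the detailed curve down to the resolution of the coarse one.
    high_smoothed = np.convolve(high_res, kernel, mode="same")
    # The ratio of the detailed curve to its own smoothed version carries
    # the thin-layer detail; it is used to rescale the low-res curve.
    alpha = high_res / np.where(np.abs(high_smoothed) < 1e-12, 1e-12, high_smoothed)
    return alpha * low_res
```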

 

Problem 2. No trustworthy method of identifying boundaries. The starting point of lithologic analysis at the moment is the identification of the boundaries of individual horizons, carried out mostly manually or with a minimum of supplementary tools. The reason is that some cases encountered in practice admit of ambiguous interpretation. Consider, for example, the curve shown in fig. 2 below; its appearance alone gives no clue as to what we are dealing with: a sequence of horizons of equal thickness or a single horizon with spatial fluctuations of its parameters. This leads to an important idea: to identify boundaries consistently we must take into account not only local measurement peaks but also the behavior of the curve over a certain interval. Otherwise, accidental variations caused by spontaneous extraneous factors (down to circuit noise) will inevitably be mistaken for horizon boundaries.

Fig. 2

The solution lies in computing an indicative curve, every point of which is determined by the input data over a certain depth interval that should not be too narrow (one way or another this choice falls on the interpreter's shoulders, as the method is not tied to any particular type of tool). Fig. 3 shows the result of applying the method to an induction logging curve (the blue curve is the input data, the red one is the indicative curve). The dashed lines show the boundaries singled out automatically from the values of the indicative curve, taking into account the measurements over a 1 m interval (two lines close to each other correspond not to two boundaries but to one extended boundary). As we can see, the positions of the boundaries follow the logic of the problem quite well, although they still require an additional check by the interpreter.
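The article does not specify the exact form of the indicative curve, so the sketch below uses a simple hypothetical statistic: at each depth, the contrast between the mean readings just above and just below that depth, computed over half of the chosen interval on each side. It only illustrates the principle that a boundary decision should rely on an interval of data rather than on a single peak.

```python
import numpy as np

def indicative_curve(values, window_m=1.0, dz=0.1):
    """Hypothetical indicative curve: the contrast between the mean
    readings over half the interval above and half the interval below
    each depth point."""
    half = max(1, int(round(window_m / dz / 2)))
    out = np.zeros(len(values))
    for i in range(half, len(values) - half):
        out[i] = abs(values[i:i + half].mean() - values[i - half:i].mean())
    return out

def pick_boundaries(indicative, threshold):
    """Mark depths where the indicative curve exceeds a threshold and is a
    local maximum; picks lying next to each other correspond to one
    extended boundary rather than two separate ones."""
    return [i for i in range(1, len(indicative) - 1)
            if indicative[i] >= threshold
            and indicative[i] >= indicative[i - 1]
            and indicative[i] >= indicative[i + 1]]
```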

Fig. 3

For comparison, fig. 4 and fig. 5 give the analogous results based on the analysis of 0.2 m and 3 m depth intervals respectively. The shortcomings of both variants are quite obvious.

Fig. 4

Fig. 5

 

Problem 3. Resource intensiveness of calculation modules. The traditional way of 3D modeling is to solve the petrophysical set of equations for the component composition of the formation. The curves serving as input data are of different natures, however, and may even contradict each other. A well-posed problem therefore requires an excess of information, so the set of equations being solved turns out to be overdetermined and must be treated as an optimization problem. Despite the transparency of this reasoning, the consequences are quite unfortunate: calculation modules based on formal minimization of the residual have predictably low performance. As a result the interpreter loses the flexibility that is critical for the unavoidable trial-and-error selection of the direct petrophysical model.
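For reference, a traditional depth-by-depth solution might look like the sketch below: a hypothetical linear response matrix ties three component volume fractions to a redundant set of logs, and the overdetermined system is solved at each depth point by bounded least squares. The response values and log readings are invented for illustration only.

```python
import numpy as np
from scipy.optimize import lsq_linear

# Hypothetical direct petrophysical model: each log responds linearly to
# the volume fractions of (quartz, clay, water); the rows of A are the
# assumed end-member responses, the last row enforces closure (sum = 1).
A = np.array([
    [2.65, 2.58, 1.00],    # bulk density, g/cm3
    [-0.02, 0.35, 1.00],   # neutron porosity, v/v
    [20.0, 150.0, 0.0],    # gamma ray, API
    [1.0, 1.0, 1.0],       # closure condition
])
d = np.array([2.38, 0.21, 55.0, 1.0])   # invented readings at one depth point

# Traditional approach: minimize ||A v - d|| with physical bounds on the
# fractions, repeated independently for every depth point of the well.
result = lsq_linear(A, d, bounds=(0.0, 1.0))
print(result.x)   # estimated volume fractions of the three components
```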

In fact, the numerical minimization is unnecessary and can be replaced by a multidimensional linear transformation whose form is governed by the known tolerances of the original curves, fixed at the start of the calculation. With this approach, even for a hypothetical 10 km deep well, 3D modeling with a 10 cm step between neighboring points (100,000 points in total) takes less than a millisecond (the traditional module needs around 1.5 min to solve the same problem). Besides the colossal gain in performance, the described approach can also be used to estimate the uncertainties of the calculated 3D model, which makes the results most transparent in the physical sense. Fig. 6 shows an example of such a calculation: the red curve gives the true content of some component at the given depth, the blue one the calculated values, and the green one the corridor of probable deviations from the calculated values. The example is synthetic: the tolerances of the original curves are intentionally, impractically large in order to demonstrate the potential of the method as clearly as possible.
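One common way to realize such a linear transformation (the article does not disclose PrimeGeo's exact construction) is closed-form weighted least squares: the transform is built once from the response matrix and the stated tolerances of the input curves, then applied to all depth points as a single matrix product, and the same matrices yield the uncertainty corridor of the result.

```python
import numpy as np

def build_linear_transform(A, sigma):
    """Closed-form weighted least squares: build, once, the matrix that
    maps a vector of log readings to component fractions, together with
    the 1-sigma uncertainties of those fractions.

    A     : response matrix of the direct petrophysical model;
    sigma : tolerances (standard deviations) of the input curves, known
            at the start of the calculation.
    """
    W = np.diag(1.0 / np.asarray(sigma, dtype=float) ** 2)
    cov = np.linalg.inv(A.T @ W @ A)    # covariance of the estimate
    M = cov @ A.T @ W                   # the linear transformation
    return M, np.sqrt(np.diag(cov))

# The transform M is computed once; for a whole well the estimate is a
# single matrix product V = M @ D, where D holds the log readings column
# by column (one column per depth point), which is why the run time for
# 100,000 points is negligible.  The returned uncertainties define the
# corridor of probable deviations of the kind shown in fig. 6.
```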

Fig. 6

Conclusion

The approaches set forth above, dedicated to the automated processing of thin-layer structures, represent the results of extended physico-mathematical research by the PrimeGeo Company. The research has been implemented in ready-to-use software modules and applied repeatedly to model examples of various complexity. At the same time, putting the modules into operation requires substantial preparatory work aimed at joining the abstract theory with the practice of geophysical surveying. The steps already taken in this direction show that the proposed complex of methods is genuinely in demand among open-hole survey specialists and will in any case prove useful. The PrimeGeo Company therefore intends to continue this intensive work as planned.
