Automatic rock identification from core photos using modern machine learning methods

Tyumen State University Herald. Physical and Mathematical Modeling. Oil, Gas, Energy


Issue:

2021. Vol. 7. № 4 (28)

Title: 
Automatic rock identification from core photos using modern machine learning methods


For citation: Dyachkov S. M., Yadryshnikova O. A., Polyakov D. V., Devyatka N. P., Chermyanin P. I., Dmitrievskiy M. V. 2021. “Automatic rock identification from core photos using modern machine learning methods”. Tyumen State University Herald. Physical and Mathematical Modeling. Oil, Gas, Energy, vol. 7, no. 4 (28), pp. 181-198. DOI: 10.21684/2411-7978-2021-7-4-181-198

About the authors:

Sergey M. Dyachkov, Chief Specialist, Department of Prototypes and Development Technologies, Tyumen Petroleum Research Center; smdyachkov@tnnc.rosneft.ru; ORCID: 0000-0002-3238-3259

Olga A. Yadryshnikova, Cand. Sci. (Tech.), Chief Manager, Algorithmization Department, Tyumen Petroleum Research Center; oayadrishnikova@tnnc.rosneft.ru

Dmitriy V. Polyakov, Specialist, Department of Prototypes and Development Technologies, Tyumen Petroleum Research Center; dvpolyakov3-tnk@tnnc.rosneft.ru; ORCID: 0000-0002-9726-1375

Nadezhda P. Devyatka, Head of the Department of Lithological-Facies and Sedimentological Core Study, Tyumen Petroleum Research Center; npdevyatka@tnnc.rosneft.ru

Pavel I. Chermyanin, Head of the Department of the Intelligent Technologies Development, Tyumen Petroleum Research Center; pichermyanin2@tnnc.rosneft.ru

Mikhail V. Dmitrievskiy, Cand. Sci. (Phys.-Math.), Chief Manager, High Technology Systems Development Department, Tyumen Petroleum Research Center; mvdmitrievskiy@tnnc.rosneft.ru

Abstract:

A layer-by-layer description of the core is performed to understand the regularities in the structure of the geological section, predict the distribution of reservoirs, refine stratigraphic boundaries, and obtain calculation parameters for estimating hydrocarbon reserves. The rock name is one of the key parameters determined in this layer-by-layer description.

This paper presents a comparative analysis of two machine-learning approaches to determining the rock type: one based on graphic identifiers and one based on convolutional neural networks.

The original sample contained daylight photographs of core samples from fields of the Tyumen Formation (8 fields, 15 wells, more than 2 km of core). Four main rock classes (siltstones, mudstones, sandstones, coals) were selected for the analysis. For these rocks, windows of 5 × 5 cm were formed and compressed to 299 × 299 pixels. The total sample exceeded 90,000 windows: 70% formed the training sample (60,359 windows) and 30% the test sample (31,140 windows). The training and test samples contain photographs of core from different fields.
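The windowing step described above can be sketched as follows. The photo resolution (`px_per_cm`) and the nearest-neighbour resampling are illustrative assumptions, not the authors' exact preprocessing:

```python
import numpy as np

def extract_windows(core_photo, px_per_cm, win_cm=5, out_size=299):
    """Cut a core photo into win_cm x win_cm windows and resize each to
    out_size x out_size pixels (nearest-neighbour; illustrative only)."""
    win_px = int(win_cm * px_per_cm)
    h, w = core_photo.shape[:2]
    # index vector that maps out_size target pixels back onto win_px source pixels
    idx = np.arange(out_size) * win_px // out_size
    windows = []
    for y in range(0, h - win_px + 1, win_px):
        for x in range(0, w - win_px + 1, win_px):
            patch = core_photo[y:y + win_px, x:x + win_px]
            windows.append(patch[np.ix_(idx, idx)])
    return np.stack(windows)

# a synthetic 1 m core strip, 5 cm wide, photographed at 50 px/cm (hypothetical values)
photo = np.random.randint(0, 255, size=(100 * 50, 5 * 50, 3), dtype=np.uint8)
wins = extract_windows(photo, px_per_cm=50)
print(wins.shape)  # (20, 299, 299, 3)
```

In a real pipeline the resize would typically use an anti-aliased resampler (e.g. Pillow or scikit-image) rather than index sampling.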

The comparison was made between convolutional neural networks (ResNet, ResNeXt, Inception, etc.) and a classifier (XGBoost) based on graphic identifiers of two types: color (average color, dominant colors) and texture (entropy, Euler number, contrast, dissimilarity, homogeneity, energy, correlation). According to the experimental results, the model based on convolutional neural networks proved more sensitive to implicit features and reduced the error in the weighted average F1-measure by 12.5% relative to the ensemble of weak classifiers on the test sample, even without hyperparameter optimization.
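The texture identifiers listed above are classical Haralick-type statistics of the grey-level co-occurrence matrix (GLCM). A minimal NumPy sketch for a single horizontal offset is shown below; in practice these descriptors are available via scikit-image, and the quantization to 8 grey levels is an assumption for illustration:

```python
import numpy as np

def glcm_features(gray, levels=8):
    """Haralick-type texture statistics from a grey-level co-occurrence
    matrix for the horizontal offset (1, 0); illustrative sketch only."""
    # quantize 8-bit grey values into `levels` bins
    q = (gray.astype(np.int64) * levels // 256).clip(0, levels - 1)
    # count horizontally adjacent grey-level pairs, then normalise to probabilities
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)
    glcm /= glcm.sum()
    i, j = np.indices((levels, levels))
    mu_i, mu_j = (i * glcm).sum(), (j * glcm).sum()
    s_i = np.sqrt(((i - mu_i) ** 2 * glcm).sum())
    s_j = np.sqrt(((j - mu_j) ** 2 * glcm).sum())
    return {
        "contrast": ((i - j) ** 2 * glcm).sum(),
        "dissimilarity": (np.abs(i - j) * glcm).sum(),
        "homogeneity": (glcm / (1.0 + (i - j) ** 2)).sum(),
        "energy": np.sqrt((glcm ** 2).sum()),
        "correlation": ((i - mu_i) * (j - mu_j) * glcm).sum() / (s_i * s_j + 1e-12),
    }
```

For a perfectly uniform window the GLCM collapses to a single cell, so contrast is 0 and energy is 1; textured windows spread probability mass off the diagonal and raise contrast and dissimilarity.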

Thus, we can conclude that the model based on convolutional neural networks is more sensitive to implicit features that are difficult to extract using known graphic identifiers. On the other hand, the approach based on graphic identifiers and an ensemble of weak classifiers can be used without specialized computing hardware (GPUs).
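As an example of such a lightweight identifier, the dominant colors mentioned in the abstract can be computed with Lloyd's k-means over the window's RGB pixels. The sketch below uses a simplistic deterministic initialisation for illustration; a real pipeline would typically use a library k-means:

```python
import numpy as np

def dominant_colors(image, k=3, iters=10):
    """Dominant colours via Lloyd's k-means over RGB pixels.
    Initialising from the first k distinct pixel values is a
    simplification for illustration."""
    pixels = image.reshape(-1, 3).astype(np.float64)
    centers = np.unique(pixels, axis=0)[:k].copy()
    for _ in range(iters):
        # assign each pixel to its nearest centre, then recompute centres
        dist = ((pixels[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = dist.argmin(axis=1)
        for c in range(len(centers)):
            members = pixels[labels == c]
            if len(members):
                centers[c] = members.mean(axis=0)
    return centers
```

Such color centroids, together with the GLCM statistics, form a compact feature vector that a gradient-boosting classifier can consume on an ordinary CPU.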

References:

  1. Kuzenkov V., Kashirskikh D., Paromov S., Ramazanov Yu., Serkin M. 2018. “RN-LAB: Full-size core studies”. Certificate of state registration of computer program no. 2018661917 (RF). Application no. 2018619071. [In Russian]

  2. Devyatka N., Kashirskikh D., Paromov S., Vakhrusheva I., Kuzenkov V. 2019. “RN-LAB: Lithology”. Certificate of state registration of computer program no. 2019616974 (RF). Application no. 2019616974. [In Russian]

  3. Abashkin V., Seleznev I., Chertova A., Istomin S., Romanov D., Samokhvalov A. 2020. “Quantitative analysis of whole core photos for continental oilfield of Western Siberia”. SPE Russian Petroleum Technology Conference. Paper SPE-202017-MS. DOI: 10.2118/202017-MS

  4. Alzubaidi F., Mostaghimi P., Swietojanski P., Clark S., Armstrong R. 2021. “Automated lithology classification from drill core images using convolutional neural networks”. Journal of Petroleum Science and Engineering, vol. 197, art. 107933. DOI: 10.1016/j.petrol.2020.107933

  5. Baraboshkin E., Ismailova L., Orlov D., Zhukovskaya E. A., Kalmykov G. A., Khotylev O. V., Baraboshkin E. Yu., Koroteev D. A. 2019. “Deep Convolutions for In-Depth Automated Rock Typing”. Computers and Geosciences, vol. 135, art. 104330. DOI: 10.1016/j.cageo.2019.104330

  6. Chen T., Guestrin C. 2016. “XGBoost: a scalable tree boosting system”. 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 785‑794. DOI: 10.1145/2939672.2939785

  7. Haralick R. M. 1979. “Statistical and structural approaches to texture”. Proceedings of the IEEE, vol. 67, no. 5, pp. 786-804. DOI: 10.1109/PROC.1979.11328

  8. Ivchenko V., Baraboshkin E., Ismailova L., Orlov D., Koroteev D., Baraboshkin E. Yu. 2018. “Core photo lithological interpretation based on computer analyses”. Proceedings of the IEEE Northwest Russia Conference on Mathematical Methods in Engineering and Technology, vol. 8, pp. 426-428.

  9. Krizhevsky A. 2009. “Learning multiple layers of features from tiny images”. University of Toronto.

  10. Lin T.-Y., Maire M., Belongie S., Hays J., Perona P., Ramanan D., Dollár P., Zitnick C. 2014. “Microsoft COCO: Common objects in context”. European Conference on Computer Vision, pp. 740-755.

  11. Lin X., Ji J., Gu Y. 2007. “The euler number study of image and its application”. 2nd IEEE Conference on Industrial Electronics and Applications, pp. 910-912. DOI: 10.1109/ICIEA.2007.4318541

  12. Lloyd S. 1982. “Least squares quantization in PCM”. IEEE Transactions on Information Theory, vol. 28, no. 2, pp. 129-137. DOI: 10.1109/TIT.1982.1056489

  13. Löfstedt T., Brynolfsson P., Asklund T., Nyholm T., Garpebring A. 2019. “Gray-level invariant Haralick texture features”. PLOS ONE, vol. 14, no. 2, art. e0212110. DOI: 10.1371/journal.pone.0212110

  14. Paszke A., Gross S., Massa F. et al. 2019. “PyTorch: an imperative style, high-performance deep learning library”. Advances in neural information processing systems, vol. 32, pp. 8024-8035.

  15. Russakovsky O., Deng J., Su H., Krause J., Satheesh S., Ma S., Huang Z., Karpathy A., Khosla A., Bernstein M., Berg A., Fei-Fei L. 2015. “ImageNet Large Scale Visual Recognition Challenge”. International Journal of Computer Vision (IJCV), vol. 115, no. 3, pp. 211-252. DOI: 10.1007/s11263-015-0816-y

  16. Van der Walt S., Schönberger J. L., Nunez-Iglesias J., Boulogne F., Warner J. D., Yager N., Gouillart E., Yu T. 2014. “Scikit-image: image processing in Python”. PeerJ, vol. 2, art. e453. DOI: 10.7717/peerj.453

  17. Xie S., Girshick R., Dollár P., Tu Z., He K. 2017. “Aggregated residual transformations for deep neural networks”. IEEE Conference on Computer Vision and Pattern Recognition, pp. 5987-5995. DOI: 10.1109/CVPR.2017.634

  18. Yang W., Cai L., Wu F. 2020. “Image segmentation based on gray level and local relative entropy two dimensional histogram”. PLOS ONE, vol. 15, no. 3, art. e0229651. DOI: 10.1371/journal.pone.0229651