Convolutional Neural Networks in the Detection of Astronomical Objects from the Messier Catalog

  • Witold Beluch, Faculty of Mechanical Engineering, Silesian University of Technology, Gliwice, Poland
  • Paweł Śliwa, Faculty of Mechanical Engineering, Silesian University of Technology, Gliwice, Poland

Abstract

This paper explores the application of convolutional neural networks in the field of amateur astronomy. The authors employ available astronomical datasets to develop a detector for identifying astronomical objects from the Messier catalog. A conceptual framework for building such a detector with artificial intelligence tools, specifically convolutional neural networks, is presented. Augmentation and pre-processing procedures are used to extend the feature distribution of the training set. Examples confirming the effectiveness of the proposed detector of astronomical objects are presented.
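The augmentation step mentioned above can be illustrated with a short Python sketch based on the Albumentations library cited in [5]. The specific transforms, the file name, the bounding-box coordinates and the label below are illustrative assumptions, not the authors' actual pipeline or data.

    import albumentations as A
    import cv2

    # Illustrative augmentation pipeline: geometric and photometric transforms
    # extend the feature distribution of the training images.
    transform = A.Compose(
        [
            A.HorizontalFlip(p=0.5),
            A.Rotate(limit=45, border_mode=cv2.BORDER_CONSTANT, p=0.7),
            A.RandomBrightnessContrast(brightness_limit=0.2, contrast_limit=0.2, p=0.5),
            A.GaussNoise(p=0.3),
        ],
        # Keep object bounding boxes consistent with the transformed image
        # (pascal_voc format: [x_min, y_min, x_max, y_max] in pixels).
        bbox_params=A.BboxParams(format="pascal_voc", label_fields=["class_labels"]),
    )

    # "m31.jpg", the box coordinates and the label are placeholders only.
    image = cv2.cvtColor(cv2.imread("m31.jpg"), cv2.COLOR_BGR2RGB)
    boxes = [[120, 80, 380, 310]]
    labels = ["M31"]

    out = transform(image=image, bboxes=boxes, class_labels=labels)
    aug_image, aug_boxes = out["image"], out["bboxes"]

Running such a pipeline repeatedly over the source images yields additional, label-consistent training samples for the detector.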

Keywords

convolutional neural networks, astronomical objects detection, Messier catalog

References

1. S. Abraham, A.K. Aniyan, A.K. Kembhavi, N.S. Philip, K. Vaghmare, Detection of bars in galaxies using a deep convolutional neural network, Monthly Notices of the Royal Astronomical Society, 477(1): 894–903, 2018, doi: 10.1093/mnras/sty627.
2. K. Aiuchi, K. Yoshida, M. Onozaki, Y. Katayama, M. Nakamura, K. Nakamura, Sensor-controlled heliostat with an equatorial mount, Solar Energy, 80(9): 1089–1097, 2006, doi: 10.1016/j.solener.2005.10.007.
3. S.H.S. Basha, S.R. Dubey, V. Pulabaigari, S. Mukherjee, Impact of fully connected layers on performance of convolutional neural networks for image classification, Neurocomputing, 378: 112–119, 2020, doi: 10.1016/j.neucom.2019.10.008.
4. Y. Boureau, J. Ponce, Y. LeCun, A theoretical analysis of feature pooling in visual recognition, [in:] Proceedings of the 27th International Conference on Machine Learning (ICML’10), 111–118, 2010.
5. A. Buslaev, V.I. Iglovikov, E. Khvedchenya, A. Parinov, M. Druzhinin, A.A. Kalinin, Albumentations: fast and flexible image augmentations, Information, 11(2): 125, 2020, doi: 10.3390/info11020125.
6. R. Collobert, J. Weston, A unified architecture for natural language processing: deep neural networks with multitask learning, [in:] Proceedings of the 25th International Conference on Machine Learning ICML ’08, pp. 160–167, 2008, doi: 10.1145/1390156.1390177.
7. S. Dieleman, K.W. Willett, J. Dambre, Rotation-invariant convolutional neural networks for galaxy morphology prediction, Monthly Notices of the Royal Astronomical Society, 450(2): 1441–1459, 2015, doi: 10.1093/mnras/stv632.
8. I. Goodfellow, Y. Bengio, A. Courville, Deep Learning, MIT Press, 2016.
9. R. Girshick, Fast R-CNN, [in:] Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), pp. 1440–1448, 2015, doi: 10.1109/ICCV.2015.169.
10. K. He, X. Zhang, S. Ren, J. Sun, Deep residual learning for image recognition, 2015, arXiv: 1512.03385.
11. S. Hossain, D. Lee, Deep learning-based real-time multiple-object detection and tracking from aerial imagery via a flying robot with GPU-based embedded devices, Sensors, 19(15): 3371, 2019, doi: 10.3390/s19153371.
12. S. Ji, W. Xu, M. Yang, K. Yu, 3D convolutional neural networks for human action recognition, IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(1): 221–231, 2013, doi: 10.1109/TPAMI.2012.59.
13. A. Krizhevsky, I. Sutskever, G.E. Hinton, ImageNet classification with deep convolutional neural networks, Communications of the ACM, 60(6): 84–90, 2017, doi: 10.1145/3065386.
14. T.S. Kumar, R.N. Banavar, Design and development of telescope control system and software for the 50/80 cm Schmidt telescope, Optical Engineering, 52: 081607, 2013, doi: 10.1117/1.OE.52.8.081607.
15. Y. LeCun, B. Boser, J. Denker, D. Henderson, R. Howard, W. Hubbard, L. Jackel, Handwritten digit recognition with a back-propagation network, Advances in Neural Information Processing Systems, 2: 396–404, 1989.
16. Y. LeCun, Y. Bengio, Convolutional networks for images, speech, and time series, [in:] The Handbook of Brain Theory and Neural Networks, M.A. Arbib [Ed.], The MIT Press, 1995.
17. Y. LeCun, L. Bottou, Y. Bengio, P. Haffner, Gradient-based learning applied to document recognition, [in:] Proceedings of the IEEE, vol. 86, no. 11, pp. 2278–2324, 1998, doi: 10.1109/5.726791.
18. X. Li, W. Wang, Learning discriminative features via weights-biased softmax loss, Pattern Recognition, 107: 107405, 2020, doi: 10.1016/j.patcog.2020.107405.
19. K. Li, W. Ma, U. Sajid, Y. Wu, G. Wang, Object detection with convolutional neural networks, [in:] Deep Learning in Computer Vision: Principles and Applications, CRC Press, 2020.
20. W. Liu et al., SSD: Single Shot MultiBox Detector, [in:] B. Leibe, J. Matas, N. Sebe, M. Welling [Eds.], Computer Vision – ECCV 2016, Lecture Notes in Computer Science, Vol. 9905, pp. 21–37, Springer, Cham, 2016, doi: 10.1007/978-3-319-46448-0_2.
21. J. Long, E. Shelhamer, T. Darrell, Fully convolutional networks for semantic segmentation, [in:] Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3431–3440, 2015, doi: 10.1109/CVPR.2015.7298965.
22. C. Messier, Catalog of Nebulae and Star Clusters, Memoirs of the French Academy of Sciences (in French: Catalogue des Nébuleuses & des amas d’Étoiles, Connoissance des Temps Pour l’Année), pp. 435–461, 1781.
23. L. Nanni, S. Ghidoni, S. Brahnam, Handcrafted vs non-handcrafted features for computer vision classification, Pattern Recognition, 71: 158–172, 2017, doi: 10.1016/j.patcog.2017.05.025.
24. H. Nguyen, Improving Faster R-CNN framework for fast vehicle detection, Mathematical Problems in Engineering, 2019: article ID 3808064, 11 pages, 2019, doi: 10.1155/2019/3808064.
25. B. Planche, E. Andres, Hands-On Computer Vision with TensorFlow 2: Leverage deep learning to create powerful image processing apps with TensorFlow 2.0 and Keras, Packt Publishing, 2019.
26. D.M.W. Powers, Evaluation: from precision, recall and F-measure to ROC, informedness, markedness and correlation, Journal of Machine Learning Technologies, 2(1): 37–63, 2011.
27. S. Ren, K. He, R. Girshick, J. Sun, Faster R-CNN: Towards real-time object detection with region proposal networks, [in:] NIPS’15: Proceedings of the 28th International Conference on Neural Information Processing Systems, 1: 91–99, 2015.
28. M.R.N. Rao, V.V. Prasad, P.S. Teja, Md. Zindavali, O. Reddy, A survey on prevention of overfitting in convolution neural networks using machine learning techniques, International Journal of Engineering and Technology, 7(2): 177–180, 2018, doi: 10.14419/ijet.v7i2.32.15399.
29. J. Redmon, S. Divvala, R.B. Girshick, A. Farhadi, You only look once: unified, real-time object detection, [in:] Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 779–788, 2016, doi: 10.1109/CVPR.2016.91.
30. S. Ren, K. He, R.B. Girshick, J. Sun, Faster R-CNN: Towards real-time object detection with region proposal networks, IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(6): 1137–1149, 2017, doi: 10.1109/TPAMI.2016.2577031.
31. K. Simonyan, A. Zisserman, Two-stream convolutional networks for action recognition in videos, [in:] NIPS’14: Proceedings of the 27th International Conference on Neural Information Processing Systems, pp. 568–576, 2014.
32. K. Simonyan, A. Zisserman, Very deep convolutional networks for large-scale image recognition, arXiv: 1409.1556, 2015.
33. E. Spyrou, H. Le Borgne, T. Mailis, E. Cooke, Y. Avrithis, N. O’Connor, Fusing MPEG-7 visual descriptors for image classification, [in:] ICANN 2005 – International Conference on Artificial Neural Networks, 11–15 September, Warsaw, Poland, 2005, doi: 10.1007/11550907_134.
34. C. Szegedy et al., Going deeper with convolutions, [in:] Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1–9, 2015, doi: 10.1109/CVPR.2015.7298594.
35. K. Wang, P. Guo, F. Yu, L. Duan, Y. Wang, H. Du, Computational intelligence in astronomy: A survey, International Journal of Computational Intelligence Systems, 11: 575–590, 2018, doi: 10.2991/ijcis.11.1.43.
36. M.D. Zeiler, R. Fergus, Visualizing and understanding convolutional networks, [in:] Computer Vision – ECCV 2014, Part I, D. Fleet, T. Pajdla, B. Schiele, T. Tuytelaars [Eds.], LNCS 8689, pp. 818–833, Springer, Cham, 2014.
37. Amateur Astrophotography Magazine, Astro Publishing Ltd., https://web.archive.org/web/20230324100114/, accessed August 18, 2023.
38. Github: The TensorFlow Object Detection API, https://github.com/tensorflow/models/tree/master/research/object_detection, accessed February 17, 2021.
39. The STScI Digitized Sky Survey, Space Telescope Science Institute in Baltimore, Maryland, https://archive.stsci.edu/cgi-bin/dss_form?target=M8&resolver=SIMBAD, accessed February 15, 2021.
Published
Oct 2, 2023
How to Cite
BELUCH, Witold; ŚLIWA, Paweł. Convolutional Neural Networks in the Detection of Astronomical Objects from the Messier Catalog. Computer Assisted Methods in Engineering and Science, [S.l.], v. 30, n. 4, p. 461–479, oct. 2023. ISSN 2956-5839. Available at: <https://cames.ippt.pan.pl/index.php/cames/article/view/527>. Date accessed: 15 nov. 2024. doi: http://dx.doi.org/10.24423/cames.527.
Section
Articles