Publication:
Training with synthetic images for object detection and segmentation in real machinery images

Date
2020
Authors
Salas A.J.C.
Meza-Lovon G.
Fernandez M.E.L.
Raposo A.
Publisher
Institute of Electrical and Electronics Engineers Inc.
Abstract
In recent years, Convolutional Neural Networks have been extensively used to solve problems such as image classification, object segmentation, and object detection. However, deep neural networks require a great deal of correctly labeled data to perform properly. Generally, the data generation and labeling processes are carried out by recruiting people to label the data manually. To overcome this problem, many researchers have studied the use of data generated automatically by a renderer. To the best of our knowledge, most of this research was conducted for general-purpose domains rather than for specific ones. This paper presents a methodology for generating synthetic data and training a deep learning model for the segmentation of pieces of machinery. To do so, we built a synthetic 3D scene from the 3D models of real pieces of machinery and rendered virtual photos of this scene. Subsequently, we trained a Mask R-CNN using weights pre-trained on the COCO dataset. Finally, we obtained our best averages of 85.7% mAP for object detection and 84.8% mAP for object segmentation on our real test dataset, training only with synthetic images filtered with a Gaussian blur. © 2020 IEEE.
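
The abstract outlines two steps that can be sketched in code: filtering the rendered synthetic images with a Gaussian blur, and fine-tuning a Mask R-CNN initialized from COCO-pretrained weights. The sketch below is a minimal, hedged illustration and not the authors' implementation; the file names, blur kernel size, and number of machinery classes are hypothetical placeholders, and it assumes OpenCV and torchvision are available.

# Minimal sketch (assumptions as stated above): blur a rendered synthetic image
# with OpenCV, and load a COCO-pretrained Mask R-CNN from torchvision whose
# heads are replaced for the machinery classes before fine-tuning.
import cv2
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

def blur_synthetic_image(path_in: str, path_out: str, kernel: int = 5) -> None:
    """Apply a Gaussian blur to a rendered image to soften synthetic artifacts."""
    image = cv2.imread(path_in)
    blurred = cv2.GaussianBlur(image, (kernel, kernel), 0)
    cv2.imwrite(path_out, blurred)

def build_model(num_classes: int):
    """Load Mask R-CNN with COCO-pretrained weights and replace the box and
    mask heads so they predict the machinery classes (plus background)."""
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    in_features_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
    model.roi_heads.mask_predictor = MaskRCNNPredictor(in_features_mask, 256, num_classes)
    return model

# Example usage (hypothetical file names and class count):
# blur_synthetic_image("render_0001.png", "render_0001_blur.png", kernel=5)
# model = build_model(num_classes=4)  # e.g. 3 machinery pieces + background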
Keywords
synthetic data generation, deep learning, object detection, object segmentation