Deep learning-based automatic volumetric damage quantification using depth camera


Structures are susceptible to failure as their structural elements deteriorate. Concrete failure, for example, can be induced by external agents such as excessive usage and overloading. Proper assessment of the structural condition is therefore highly desirable for performance evaluation. Traditional methods of structural damage evaluation have relied largely on visual inspection and the judgment of trained personnel. More recently, computational methods combined with sensors and data acquisition techniques have been adopted to analyze structural integrity. Despite their high accuracy, however, sensor-based data acquisition methods are expensive, often vulnerable to environmental changes, and difficult to deploy, making them unsuitable for real-time damage detection.

Feature extraction and data processing from images have attracted research attention as a promising data acquisition technology, especially for the design and operation of systems from a distance. In particular, key parameters used in determining structural properties and detecting potential hazards, such as displacement, have been measured with two-dimensional imaging systems. Three-dimensional analysis techniques have since been introduced to address the limitations of the two-dimensional approach; they can perform additional measurements that are of great significance in detecting and quantifying volumetric changes. Unfortunately, three-dimensional concrete imaging and health assessment remain sparsely covered in the published literature.

Among the available methods for obtaining three-dimensional data are the structure-from-motion approach and the use of special cameras containing red, green, blue, and depth channels. These methods rely on a fixed depth camera setup and thus cannot be used in many scenarios, especially those involving automated systems; nor can the existing methods effectively identify every type of volumetric damage. A fully automated depth-camera damage assessment system has therefore been proposed as an alternative and promising solution for volumetric damage quantification.

To this end, University of Manitoba researchers Gustavo Beckman, Dr. Dimos Polyzois, and Dr. Young-Jin Cha of the Department of Civil Engineering developed an automatic convolutional neural network-based method for detecting concrete spalling damage. The method integrates the low-cost Microsoft Kinect V2 red-green-blue-depth (RGB-D) camera with a faster region-based convolutional neural network (Faster R-CNN) to detect, localize, and quantify spalling damage automatically, with no premeasured distance required between the analyzed element and the sensor. Furthermore, a database comprising 1091 volumetric damage images was developed for use in modifying, training, and validating the deep learning network. The research work is published in the journal Automation in Construction.
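The Faster R-CNN stage outputs bounding boxes around suspected spalling regions, and localization quality in such detectors is conventionally scored by the intersection-over-union (IoU) between a predicted box and a ground-truth box. The following is a minimal illustrative sketch of that standard metric, not code from the paper:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2).

    A standard object-detection metric; shown here only to illustrate how
    detections are typically matched against ground truth.
    """
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle (may be empty).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    # Union = sum of areas minus the overlap.
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0
```

A prediction is usually counted as a true positive when its IoU with a ground-truth box exceeds a threshold (commonly 0.5), and average precision is computed from the resulting matches.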

The structural surfaces surrounding the detected concrete spalling were identified and segmented based on location and depth data. This enabled the identification and quantification of multiple surface concrete spalling volumes without the distance between the sensor and the damaged element needing to be known in advance, and the depth data also allowed accurate extraction of geometric properties. The newly developed method achieved an average precision of 90.79%, a mean volume quantification error of 9.45% for sensor-to-element distances in the 100 cm to 250 cm range, and a mean error of 3.24% in maximum damage depth over the same distance range.
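The general idea behind this kind of depth-based volume quantification can be sketched as follows: fit a reference plane to the undamaged surface around the spall, then integrate the depth deficit over the damaged pixels, scaling each pixel's footprint by its distance under a pinhole camera model. This is a hedged sketch under assumed geometry, not the paper's exact algorithm; the function name, the plane-fitting step, and the camera parameters `fx`, `fy` are illustrative assumptions:

```python
import numpy as np

def spall_volume(depth, damage_mask, fx, fy):
    """Estimate a spalling volume from a depth map (illustrative, not the paper's method).

    depth       : (H, W) array, depth in metres along the camera axis
    damage_mask : (H, W) bool array, True where spalling was detected
    fx, fy      : camera focal lengths in pixels (assumed known from calibration)
    """
    H, W = depth.shape
    v, u = np.mgrid[0:H, 0:W]

    # 1) Fit the undamaged structural surface as a plane z = a*u + b*v + c,
    #    using only pixels outside the detected damage region.
    ok = ~damage_mask
    A = np.column_stack([u[ok], v[ok], np.ones(ok.sum())])
    a, b, c = np.linalg.lstsq(A, depth[ok], rcond=None)[0]
    plane = a * u + b * v + c

    # 2) Under a pinhole model, a pixel at depth z covers a physical
    #    area of roughly (z / fx) * (z / fy).
    pixel_area = (depth / fx) * (depth / fy)

    # 3) Integrate the depth deficit (spall lies behind the fitted plane).
    deficit = np.clip(depth - plane, 0.0, None)
    return float(np.sum(deficit[damage_mask] * pixel_area[damage_mask]))
```

Because the pixel footprint grows with depth, the estimate adapts to the sensor-to-element distance rather than requiring it to be fixed beforehand, which mirrors the distance-independence the authors report.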

In summary, the deep-learning-based technology provides reliable damage detection, localization, and quantification for structural health monitoring and can therefore serve as a prototype for automatic concrete spalling damage detection and quantification. It can also accommodate a variety of advanced depth cameras for enhanced accuracy.


Beckman, G., Polyzois, D., & Cha, Y. (2019). Deep learning-based automatic volumetric damage quantification using depth camera. Automation in Construction, 99, 114-124.
