Residual convolutional neural networks to automatically extract significant breast density features
Lizzi F., Laruina F., Oliva P., Retico A., Fantacci M. E.
In this paper, we present a work on breast density classification performed with a deep residual neural network, and we discuss the analyses we could perform in the future. Breast density is one of the most important breast cancer risk factors: it represents the amount of fibroglandular tissue relative to fat tissue as seen on a mammographic exam. However, it is not easy to include in risk models because of its variability among women and its qualitative definition. We trained a deep CNN to perform breast density classification in two ways. First, we classified mammograms into two "super-classes": dense and non-dense breasts. Second, we trained the residual neural network to classify mammograms according to the four classes of the BI-RADS standard. We obtained very good results compared to the literature in terms of accuracy and recall. In the near future, we plan to improve the robustness of our algorithm with respect to the mammographic systems used, and we want to include pathological exams as well. We then want to study and characterize the CNN-extracted features in order to identify those most significant for breast density. Finally, we want to study how to quantitatively measure the precision of the network in capturing the significant parts of the images.

Source: CAIP 2019 International Workshops, ViMaBi and DL-UAV, pp. 28–35, Salerno, Italy, 6 September 2019
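The abstract does not detail the residual architecture used, but the defining ingredient of any residual network is the skip connection, where stacked layers learn only a residual F(x) that is added back to the input. A minimal sketch of a single residual block, with hypothetical weight matrices `w1` and `w2` standing in for the convolutional layers of the actual network:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    # Output is relu(F(x) + x): the two weight layers compute the
    # residual F(x), and the identity shortcut carries x through.
    return relu(x + w2 @ relu(w1 @ x))

# With all weights at zero the block reduces to the identity map
# (for non-negative inputs), which is what keeps very deep residual
# networks trainable: extra blocks can do no harm by defaulting to
# the identity.
x = np.array([1.0, 2.0, 3.0])
zero = np.zeros((3, 3))
print(residual_block(x, zero, zero))  # → [1. 2. 3.]
```

In the full classifier, a stack of such blocks feeds a final fully connected layer whose output size is set to 2 (dense vs. non-dense) or 4 (BI-RADS classes), depending on the task.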