Breast cancer (BC) is one of the most common types of cancer among women, with a high mortality rate. Histopathological analysis facilitates the detection and diagnosis of BC but is a highly time-consuming, specialised task that depends on the experience of the pathologist. Hence, there is a dire need for computer-assisted diagnosis (CAD) to relieve the workload on pathologists. Dr Sudhakar Tummala, Assistant Professor, Department of Electronics and Communication Engineering, has conducted breakthrough research in this domain in his paper titled BreaST-Net: Multi-Class Classification of Breast Cancer from Histopathological Images Using Ensemble of Swin Transformers, published in the Q1 journal Mathematics (Impact Factor 2.6).
Breast cancer (BC) is one of the deadliest forms of cancer and a major cause of female mortality worldwide. The standard imaging procedures for screening BC involve mammography and ultrasonography. However, these imaging procedures cannot differentiate subtypes of benign and malignant cancers. Therefore, histopathology images could provide better sensitivity toward benign and malignant cancer subtypes. Recently, vision transformers have been gaining attention in medical imaging due to their success in various computer vision tasks. The Swin transformer (SwinT) is a variant of the vision transformer that works on the concept of non-overlapping shifted windows and is a proven method for various vision tasks. Hence, in this study, we investigated the ability of an ensemble of SwinTs for the 2-class classification of benign vs. malignant and the 8-class classification of four benign and four malignant subtypes, using the openly available BreaKHis dataset containing 7909 histopathology images acquired at magnification factors of 40×, 100×, 200× and 400×. The ensemble of SwinTs (including tiny, small, base, and large) demonstrated an average test accuracy of 96.0% for the 8-class and 99.6% for the 2-class classification, outperforming all previous works. Hence, an ensemble of SwinTs can identify BC subtypes from histopathological images and could help reduce the workload on pathologists.
A brief summary of the research in layperson’s terms
Breast cancer (BC) is the second deadliest cancer after lung cancer, causing morbidity and mortality among women worldwide. Its incidence may increase by more than 50% by the year 2030 in the United States. The non-invasive diagnostic procedures for BC involve a physical examination and imaging techniques such as mammography, ultrasonography and magnetic resonance imaging. However, a physical examination may not detect the disease early, and imaging procedures offer low sensitivity for a more comprehensive assessment of cancerous regions and identification of cancer subtypes. Histopathological imaging via breast biopsy, even though minimally invasive, can provide accurate identification of the cancer subtype and precise localisation of the lesion. However, this manual examination by the pathologist can be tiresome and prone to errors. Therefore, automated methods for BC subtype classification are warranted.
Deep learning has revolutionised many areas in the last decade, including healthcare, for tasks such as accurate disease diagnosis, prognosis, and robotic-assisted surgery. Several studies have used deep convolutional neural networks (CNNs) to detect BC from the aforementioned imaging procedures. However, CNNs exhibit an inherent inductive bias and are sensitive to the translation, rotation, and location of the object of interest in the image. Therefore, image augmentation is generally applied while training CNN models, although data augmentation may not provide the expected variations in the training set. Hence, self-attention-based deep learning models, which are more robust to the orientation and location of an object of interest in the image, are rapidly gaining traction.
SwinTs are an improved version of the earlier vision transformer (ViT) architecture: hierarchical vision transformers that compute self-attention within shifted windows. For efficient modelling, self-attention is computed within local windows, arranged in a non-overlapping manner so that they evenly partition the image; this window-based self-attention has linear complexity and is scalable. However, its modelling power is limited because it lacks connections across windows. Therefore, a shifted-window partitioning approach that alternates between two partitioning configurations in consecutive Swin transformer blocks was proposed, allowing cross-window connections while maintaining the efficient computation of non-overlapping windows. Overall, the SwinT network's performance was superior to that of standard ViTs.
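The window-partitioning and shifting scheme described above can be illustrated with a minimal sketch (hypothetical, not the paper's code): a 4×4 grid of token ids is split into non-overlapping 2×2 windows, and a cyclic shift before re-partitioning changes which tokens share a window in the next block, which is what creates the cross-window connections.

```python
# Minimal sketch of SwinT-style window partitioning (illustrative only).
# Tokens are plain integers; windows are the groups that would attend together.

def partition_windows(grid, window):
    """Split an H x W grid into non-overlapping window x window tiles."""
    h, w = len(grid), len(grid[0])
    assert h % window == 0 and w % window == 0
    tiles = []
    for r in range(0, h, window):
        for c in range(0, w, window):
            tiles.append([grid[r + i][c + j]
                          for i in range(window) for j in range(window)])
    return tiles

def cyclic_shift(grid, offset):
    """Roll the grid by `offset` rows and columns (the 'shifted window' trick)."""
    h, w = len(grid), len(grid[0])
    return [[grid[(r + offset) % h][(c + offset) % w] for c in range(w)]
            for r in range(h)]

grid = [[r * 4 + c for c in range(4)] for r in range(4)]  # 16 tokens, 4x4
plain = partition_windows(grid, 2)                        # regular partition
shifted = partition_windows(cyclic_shift(grid, 1), 2)     # alternate partition
```

In the regular partition, token 0 shares a window with tokens 1, 4 and 5; after the shift, token 5 instead groups with 6, 9 and 10, so tokens from different original windows can now attend to each other.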
Therefore, the paper analyses the ability of an ensemble of Swin transformer models (BreaST-Net) to automatically perform multi-class classification of BC from histopathological images. The work dealt with both benign and malignant subtypes: the benign subtypes include fibroadenoma, tubular adenoma, phyllodes tumour, and adenosis, while the malignant subtypes include ductal carcinoma, papillary carcinoma, lobular carcinoma, and mucinous carcinoma.
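As a rough illustration of how such an ensemble can combine its members, the sketch below (hypothetical, not the paper's implementation) averages softmax probabilities from four stand-in models, playing the role of the tiny, small, base, and large SwinT variants, and picks the 8-class subtype with the highest mean probability.

```python
# Hypothetical soft-voting ensemble sketch (not the paper's code).
import math

def softmax(logits):
    """Convert raw scores to probabilities that sum to 1."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def ensemble_predict(per_model_logits):
    """Average class probabilities across models; return argmax class and mean probs."""
    probs = [softmax(l) for l in per_model_logits]
    n_classes = len(probs[0])
    avg = [sum(p[c] for p in probs) / len(probs) for c in range(n_classes)]
    return max(range(n_classes), key=lambda c: avg[c]), avg

# Illustrative logits from four models over 8 subtype classes
logits = [
    [2.0, 0.1, 0.0, 0.3, 0.2, 0.1, 0.0, 0.1],   # "tiny"
    [1.5, 0.2, 0.1, 0.4, 0.0, 0.2, 0.1, 0.0],   # "small"
    [0.5, 0.3, 0.2, 2.2, 0.1, 0.0, 0.1, 0.2],   # "base"
    [1.8, 0.0, 0.1, 0.5, 0.2, 0.1, 0.0, 0.1],   # "large"
]
pred, avg_probs = ensemble_predict(logits)
```

Here three of the four models favour class 0 and one favours class 3, so the averaged probabilities still select class 0; this is the basic robustness benefit of soft voting over relying on any single model.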
Social implications of the research
Dr Sudhakar Tummala explains that the computer-aided subtyping of breast cancer from histopathology images using an ensemble of fine-tuned SwinT models can be an alternative to manual diagnosis, thereby reducing the burden on clinical pathologists.
- Prof. Seifedine Kadry, Department of Applied Data Science, Noroff University College, Kristiansand, Norway
- Dr Jungeun Kim, Division of Computer Science, Department of Software, Kongju National University, Korea
In the future, Dr Tummala plans to advance this research by adding explainability to the ensemble model's predictions and by developing models that can work with fewer data samples.