The Directorate of Entrepreneurship and Innovation steps forth with yet another brave initiative, this time from Dr Sunitha K A, Associate Professor, Department of Electronics and Communication Engineering. Dr Sunitha envisioned empowering patients whose ailments require sustained medical assistance with a specially designed convertible wheelchair that aids both mobility and self-help. In association with Hatchlab Research Centre, Dr Sunitha has initiated a health-tech startup for patients who cannot move their bodies owing to various medical conditions, and to raise awareness of the continuing rise of such cases in today’s society.
Dr Sunitha says, “The main challenge for the patient is performing basic movements like getting out of bed and into the wheelchair (and vice versa); it is almost impossible for a patient to do this simple act without the external support of an attendant or a nurse. We have listed several such scenarios and cases where our specially designed wheelchair can be converted into a bed and easily controlled by the patient themselves. Apart from that, all the necessary inputs, such as urine levels in the drop-bag, pulse rate, emergency indicators, oxygen levels and several other parameters, are well integrated into the system itself, displayed on the dashboard and communicated to the stakeholders.”
The prototype for the convertible wheelchair has been successfully tested and appreciated by doctors and experts in Chennai. With the help of Hatchlab Research Centre, the prime focus for the next few months is to create a completely functional prototype. The venture then plans to secure seed funding and round-one funding from investors, followed by large-scale production. Dr Sunitha has already filed several multinational design patents for the product, and the next stage of the cohort is its commercialisation.
Nobel Laureate Prof. David Wineland, University of Oregon, USA, virtually joined the International Conference on Electronic and Photonic Integrated Circuits (EPIC-2022) hosted by SRM University-AP from December 15 to 17, 2022. The American physicist, who was awarded the 2012 Nobel Prize in Physics for devising methods to study the quantum mechanical behaviour of individual ions, delivered an insightful lecture on atomic clocks. The three-day-long conference organised by the Department of Electronics and Communication Engineering, SRM AP, concluded on Saturday, December 17, 2022; the convenors of the event were Dr Pradyut Kumar Sanki and Dr Swagata Samanta.
Nobel Laureate Prof. David Wineland elaborated on why the world needs precise clocks, the basics of how atomic clocks work, optical atomic clocks, the state of play, and what the future might hold. Prof. Juejun Hu, Massachusetts Institute of Technology, USA; Prof. Edward Wasige, University of Glasgow, UK; and Prof. Lorenzo Pavesi, University of Trento, Italy, were the plenary speakers of the conference. Prof. Amlan Chakrabarti, Director, A. K. Choudhury School of Information Technology, University of Calcutta; Prof. Shankar Kumar Selvaraja, IISc Bangalore; Prof. Chetna Singhal, IIT Kharagpur; Prof. Naren Naik, IIT Kanpur; Prof. Samaresh Das, IIT Delhi; Prof. Shanti Bhattacharya, IIT Madras; Dr Pranabendu Ganguly, IIT Kharagpur; Prof. Sarbani Ghosh, BITS Pilani; Dr Bruno Romeiro, International Iberian Nanotechnology Laboratory, Portugal; Prof. Sakellaris Mailis, Skolkovo Institute of Science and Technology, Moscow, Russia; Prof. Shyamal Mondal, Defence Institute of Advanced Technology; and Prof. Enakshi Bhattacharya, IIT Madras, were the eminent speakers of the first two days. Prof. Achanta Venugopal, Director, NPL Delhi; Prof. T Srinivas, IISc Bangalore; and Prof. Ravindra Jha, IIT Guwahati, gave the keynote speeches on the last day of the programme. A session on ‘Women in Devices, Circuits & Systems’ was delivered by Prof. Sujata Pal, IIT Ropar, and Prof. Takako Hashimoto, Vice President of the Chiba University of Commerce (CUC), Japan. Industrial talks by Dr Sajal Sarkar, Power Grid Corporation of India Ltd.; Dr Pradipta Patra, Samsung Semiconductor India; Dr Satyabrata Sarangi, Meta, Sunnyvale, California, USA; and Dr Souvik Kundu, Intel Labs, USA, were the other highlights of the day.
Additionally, a panel discussion was handled by Dr Rajkumar Elagiri, Apex Semiconductor; Dr Kamal Das, IBM Research Lab; and Dr Soumya Maity, Dell Technologies. Furthermore, the Young Researcher Forum conducted as part of the conference featured renowned academicians such as Dr Biswabandhu Jana, MIT and Harvard Hospital, USA; Dr Bibhas Manna, TU Wien, Austria; Dr Ankita Jain, Queen’s University, Canada; Dr Subhrajit Mukherjee, Technion – Israel Institute of Technology, Israel; Dr Rajat Subhra Karmakar, National Taiwan University, Taiwan; Dr Surajit Bose, Leibniz University Hannover, Germany; Dr Akanksha Pathak, Emory University School of Medicine, USA; Dr Debidas Kundu, Carleton University, Canada; and Mayur Kumar Chhipa, ISBAT University, Kampala, Uganda, East Africa.
A pre-conference event, the Smart SRM Hackathon (a 24-hour circuit design contest), was organised on December 14, 2022. A poster and technical exhibition called Jigyasa took place on the first day of EPIC-2022.
Breast cancer (BC) is one of the most common types of cancer among women, with a high mortality rate. Histopathological analysis facilitates the detection and diagnosis of BC but is a highly time-consuming, specialised task that depends on the experience of the pathologist. Hence, there is a dire need for computer-assisted diagnosis (CAD) to relieve the workload on pathologists. Dr Sudhakar Tummala, Assistant Professor, Department of Electronics and Communication Engineering, has conducted breakthrough research in this domain in his paper titled “BreaST-Net: Multi-Class Classification of Breast Cancer from Histopathological Images Using Ensemble of Swin Transformers”, published in the Q1 journal Mathematics, having an impact factor of 2.6.
Breast cancer (BC) is one of the deadly forms of cancer and a major cause of female mortality worldwide. The standard imaging procedures for screening BC involve mammography and ultrasonography. However, these imaging procedures cannot differentiate subtypes of benign and malignant cancers. Therefore, histopathology images could provide better sensitivity toward benign and malignant cancer subtypes. Recently, vision transformers are gaining attention in medical imaging due to their success in various computer vision tasks. Swin transformer (SwinT) is a variant of vision transformer that works on the concept of non-overlapping shifted windows and is a proven method for various vision detection tasks. Hence, in this study, we have investigated the ability of an ensemble of SwinTs for the 2-class classification of benign vs. malignant and 8-class classification of four benign and four malignant subtypes, using an openly available BreaKHis dataset containing 7909 histopathology images acquired at different zoom factors of 40×, 100×, 200× and 400×. The ensemble of SwinTs (including tiny, small, base, and large) demonstrated an average test accuracy of 96.0% for the 8-class and 99.6% for the 2-class classification, outperforming all the previous works. Hence, an ensemble of SwinTs could identify BC subtypes using histopathological images and may lead to pathologist relief.
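The ensembling step the abstract describes, combining the predictions of the tiny, small, base, and large SwinT variants, can be illustrated with a minimal numpy sketch. The logits below are randomly generated stand-ins, not outputs of the actual fine-tuned models; only the averaging mechanics are shown.

```python
import numpy as np

def softmax(z):
    """Convert raw logits to class probabilities (numerically stable)."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical logits from the four SwinT variants for one histopathology
# image over the 8 BreaKHis subtypes (4 benign + 4 malignant).
rng = np.random.default_rng(0)
model_logits = {name: rng.normal(size=8)
                for name in ("tiny", "small", "base", "large")}

# Ensemble by averaging the per-model softmax probabilities, then
# predict the subtype with the highest mean probability.
ensemble_probs = np.mean([softmax(l) for l in model_logits.values()], axis=0)
predicted_subtype = int(np.argmax(ensemble_probs))
```

Averaging probabilities (rather than picking one model's output) smooths over the individual models' mistakes, which is what lets the ensemble outperform each variant on its own.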
A brief summary of the research in layperson’s terms
Breast cancer (BC) is the second deadliest cancer after lung cancer, causing morbidity and mortality among women worldwide. Its incidence may increase by more than 50% by the year 2030 in the United States. The non-invasive diagnostic procedures for BC involve a physical examination and imaging techniques such as mammography, ultrasonography and magnetic resonance imaging. However, a physical examination may not detect it early, and imaging procedures offer low sensitivity for a more comprehensive assessment of cancerous regions and identification of cancer subtypes. Histopathological imaging via breast biopsy, even though minimally invasive, may provide accurate identification of the cancer subtype and precise localisation of the lesion. However, this manual examination by the pathologist can be tiresome and prone to errors. Therefore, automated methods for BC subtype classification are warranted.
Deep learning has revolutionised many areas in the last decade, including healthcare, for tasks such as accurate disease diagnosis, prognosis, and robotic-assisted surgery. There have been studies based on deep convolutional neural networks (CNNs) for detecting BC using the aforementioned imaging procedures. However, CNNs exhibit inherent inductive biases and are sensitive to the translation, rotation, and location of the object of interest in the image. Therefore, image augmentation is generally applied while training CNN models, although data augmentation may not provide the expected variations in the training set. Hence, self-attention-based deep learning models, which are more robust to the orientation and location of an object of interest in the image, are rapidly growing.
SwinTs are an improved version of the earlier vision transformer (ViT) architecture: hierarchical vision transformers that compute self-attention within shifted windows. For efficient modelling, self-attention is computed within local windows, which are arranged in a non-overlapping manner to evenly partition the image. This window-based self-attention has linear complexity and is scalable; however, its modelling power is limited because it lacks connections across windows. Therefore, a shifted window partitioning approach, which alternates between two partitioning configurations in consecutive Swin transformer blocks, was proposed to allow cross-window connections while maintaining the efficient computation of non-overlapping windows. Overall, the SwinT network’s performance was superior to that of standard ViTs.
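The regular versus shifted window partitioning described above can be sketched in a few lines of numpy. This illustrates only the partitioning geometry, not the attention computation itself; the 8×8 map and window size 4 are arbitrary toy choices.

```python
import numpy as np

def window_partition(x, ws):
    """Split an (H, W, C) feature map into non-overlapping ws x ws windows."""
    H, W, C = x.shape
    x = x.reshape(H // ws, ws, W // ws, ws, C)
    return x.transpose(0, 2, 1, 3, 4).reshape(-1, ws, ws, C)

# A toy 8x8 "feature map"; real SwinT stages operate on patch embeddings.
x = np.arange(8 * 8).reshape(8, 8, 1)

# Regular partition: self-attention is computed inside each 4x4 window.
regular = window_partition(x, ws=4)   # 4 windows of shape (4, 4, 1)

# Shifted partition (used in the next block): cyclically shift the map by
# ws // 2 before partitioning, so the new windows straddle the borders of
# the old ones, creating the cross-window connections described above.
shifted = window_partition(np.roll(x, shift=(-2, -2), axis=(0, 1)), ws=4)
```

After the cyclic shift, each window mixes pixels that belonged to different windows in the regular partition, which is exactly how alternating blocks propagate information across window boundaries without ever computing attention over the full image.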
Therefore, the paper analyses the ability of an ensemble of Swin transformer models (BreaST-Net) for the automated multi-class classification of BC by investigating histopathological images. The work dealt with both benign and malignant subtypes: the benign subtypes include fibroadenoma, tubular adenoma, phyllodes tumour, and adenosis, while the malignant subtypes include ductal carcinoma, papillary carcinoma, lobular carcinoma, and mucinous carcinoma.
Social implications of the research
Dr Sudhakar Tummala explains that the computer-aided subtyping of breast cancer from histopathology images using an ensemble of fine-tuned SwinT models can be an alternative to manual diagnoses, thereby reducing the burden on clinical pathologists.
In the future, Dr Tummala will advance his research to add explainability to the ensemble model predictions and also to develop models that can work on fewer data samples.
With the recent advancements in modern wireless body area network (WBAN) communication, the demand for compact, low-profile wireless computing devices has witnessed a vast increase. Consequently, the antennas, which play a critical role in this network, are developed with different polarizations in distinct frequency bands so as to maintain more reliable communication links. Dr Divya Chaturvedi, Assistant Professor, Department of Electronics and Communication Engineering, has published a paper titled “A Dual-Band Dual-Polarized SIW Cavity-Backed Antenna-Duplexer for Off-body Communication” as first author in the Q1 journal AEJ – Alexandria Engineering Journal, having an impact factor of 6.77. The paper discusses self-duplexing antennas, which offer two channels for concurrent transmission and reception, leading to a simple and compact transceiver.
A novel dual-band, dual-polarized antenna-duplexer scheme is intended for WLAN 802.11a and ISM band applications using Substrate Integrated Waveguide (SIW) technology. The antenna consists of two planar SIW cavities of different dimensions, where a smaller diamond-shaped cavity is inserted inside the larger rectangular cavity to share a common aperture area. Diamond-ring-shaped slots are etched in each cavity for radiation. The larger diamond-ring slot is excited with a microstrip feedline to operate at 5.2 GHz, while the smaller slot is excited with a coaxial probe to operate at 5.8 GHz. The antenna produces linear polarization at 5.2 GHz (5.1–5.3 GHz) due to the merging of the TE110 and TE120 cavity modes, and circular polarization around 5.8 GHz (5.68–5.95 GHz) due to the orthogonally excited TM100 and TM010 modes. The slots are excited in an orthogonal fashion to maintain better decoupling between the ports (i.e. –23 dB). The performance of the antenna has been verified in free space as well as in the vicinity of the human body. The antenna offers gains of 6.2 dBi/6.6 dBi in free space and 5.8 dBi/6.4 dBi on-body at the lower/higher frequency bands, respectively. Also, the specific absorption rate (SAR) obtained is < 0.245 W/kg for 0.5 W input power, averaged over 10 g of tissue. The proposed design is a low-profile, compact, single-layered design, which is a suitable option for off-body communication.
Explanation of the research in layperson’s terms
The paper further expounds on the social implications of this innovative research. Dr Chaturvedi explains that the antenna, being dual-band and dual-polarized, can function as a transceiver circuit. Due to the different polarizations, it can operate in both frequency bands simultaneously without affecting performance. In the first frequency band, at 5.2 GHz, it can link with Wi-Fi, and in the second, at 5.8 GHz, it can communicate with antennas placed in other medical instruments used in the vicinity of the human body.
Details of Collaborations
1. Dr Arvind Kumar, Assistant Professor, Department of Electronics and Communication Engineering, VNIT Nagpur, India
2. Dr Ayman A Althuwayb, Department of Electrical Engineering, College of Engineering, Jouf University, Sakaka, Aljouf 72388, Saudi Arabia
Primary brain tumours make up less than 2% of cancers and statistically occur in around 250,000 people a year globally. Magnetic resonance imaging (MRI) plays a pivotal role in the diagnosis of brain tumours, and advanced imaging techniques can precisely detect them. On this note, Dr Sudhakar Tummala, Assistant Professor, Department of Electronics and Communication Engineering, has published a paper titled “Classification of Brain Tumour from Magnetic Resonance Imaging using Vision Transformers Ensembling” in the journal Current Oncology, having an impact factor of 3.1. The paper highlights the pioneering breakthrough made in the development of vision transformers (ViT) in enhancing MRI for efficient classification of brain tumours, thus reducing the burden on radiologists.
Abstract of the paper
The automated classification of brain tumours plays an important role in supporting radiologists in decision making. Recently, vision transformer (ViT)-based deep neural network architectures have gained attention in the computer vision research domain owing to the tremendous success of transformer models in natural language processing. Hence, in this study, the ability of an ensemble of standard ViT models for the diagnosis of brain tumours from T1-weighted (T1w) magnetic resonance imaging (MRI) is investigated. ViT models (B/16, B/32, L/16, and L/32) pretrained on ImageNet and fine-tuned were adopted for the classification task. A brain tumour dataset from figshare, consisting of 3064 T1w contrast-enhanced (CE) MRI slices with meningiomas, gliomas, and pituitary tumours, was used for the cross-validation and testing of the ensemble ViT model’s ability to perform a three-class classification task. The best individual model was L/32, with an overall test accuracy of 98.2% at 384 × 384 resolution. The ensemble of all four ViT models demonstrated an overall testing accuracy of 98.7% at the same resolution, outperforming the individual models’ ability at both resolutions and their ensemble at 224 × 224 resolution. In conclusion, an ensemble of ViT models could be deployed for the computer-aided diagnosis of brain tumours based on T1w CE MRI, leading to radiologist relief.
A brief summary of the research in layperson’s terms
Brain tumours (BTs) are characterised by the abnormal growth of neural and glial cells. BTs cause several medical conditions, including loss of sensation, hearing and vision problems, headaches, nausea, and seizures. There exist several types of brain tumours, and the most prevalent include meningiomas (originating from the membrane surrounding the brain), which are non-cancerous; gliomas (starting from glial cells and the spinal cord); and glioblastomas (growing within the brain), which are cancerous. Sometimes, cancer can spread from other parts of the body, which is called brain metastasis. A pituitary tumour is another type of brain tumour that develops in the pituitary gland, which primarily regulates other glands in the body. Magnetic resonance imaging (MRI) is a versatile imaging method that enables one to noninvasively visualise the inside of the body and is in extensive use in the field of neuroimaging.
There exist several structural MRI protocols to visualise the inside of the brain, but the prime modalities include T1-weighted (T1w), T2-weighted, and T1w contrast-enhanced (CE) MRI. BTs appear with altered pixel intensity contrasts in structural MRI images compared with neighbouring normal tissues, enabling clinical radiologists to diagnose them. Several previous studies have attempted to automatically classify brain tumours using MRI images, starting with traditional machine learning classifiers, such as support vector machines (SVMs), k-nearest-neighbour (kNN), and Random Forest, applied to hand-crafted features of MRI slices. With the rise of convolutional neural network (CNN) deep learning model architectures since 2012, in addition to emerging advanced computational resources, such as GPUs and TPUs, during the past decade, several methods have been proposed for the classification of brain tumours based on the fine-tuning of existing state-of-the-art CNN models, such as AlexNet, VGG16, ResNets, Inception, DenseNets, and Xception, which had already been found to be successful for various computer vision tasks.
Despite the tremendous success of CNNs, they generally have inductive biases, i.e., the translation equivariance of the local receptive field. Due to these inductive biases, CNN models have issues when learning long-range information; moreover, data augmentation is generally required for CNNs to improve their performance due to their dependency on local pixel variations during learning. Therefore, in this work, the ability of pretrained and fine-tuned ViT models, both individually and in an ensemble manner, is evaluated for the classification of meningiomas, gliomas, and pituitary tumours from T1w CE MRI at both 224 × 224 and 384 × 384 image resolutions.
Dr Sudhakar Tummala has noted the social implications of the research, expounding that the computer-aided diagnosis of brain tumours from T1w CE MRI using an ensemble of fine-tuned ViT models can be an alternative to manual diagnoses, thereby reducing the burden on clinical radiologists. He also explains the future prospects of his research, which are to add explainability to the ensemble model predictions and to develop methods for precise contouring of tumour boundaries.
Details of Collaborations
Prof Seifedine Kadry, Department of Applied Data Science, Noroff University College, Kristiansand, Norway.
Dr Syed Ahmad Chan Bukhari, Division of Computer Science, Mathematics and Science, Collins College of Professional Studies, St. John’s University, New York, USA.