Recent News

  • Exploring the Exciting Potential of 6G Networking June 7, 2024

    Book Chapter Published by Dr Manjula

    The Department of Computer Science and Engineering is proud to announce the acceptance of the book chapter titled "Dielectric Characterization of Ovine Heart Tissues at Terahertz Frequencies via Machine Learning: A Use Case for In-vivo Wireless Nano-Communication" in the book "Edge-Enabled 6G Networking: Foundations, Technologies, and Applications." The book chapter by Dr Manjula R and her students, Ms NSK Sarayu, Ms N Sai Sruthi, Ms D Samaya, and Mr K Tarun Teja from the department caters to UG/PG and PhD students, educational institutions, and the medical healthcare sector. Dr Manjula's research not only underscores the significance of understanding the dielectric properties of heart tissues but also highlights the transformative potential of machine learning in prediction, diagnosis, and therapeutic interventions equipped with real-time monitoring capabilities. The research also lays the groundwork for future advancements in this field, facilitating the development of more efficient and reliable in-vivo sensing technologies.

    Abstract of the Book Chapter:

    Nanotechnology makes possible a new generation of sensing, processing, and communicating devices only a few cubic micrometers in size. Such tiny devices will transform healthcare applications and open up new possibilities for in-body settings. A thorough understanding of the in-vivo channel characteristics is essential for efficient communication between the nanonodes floating in the circulatory system (here, the heart) and the gateway devices fixed on the skin. This requires accurate knowledge of the dielectric properties (permittivity and conductivity) of cardiac tissues in the terahertz band (0.1 to 10 THz). This research examines the ability of machine learning models to accurately estimate the dielectric properties of cardiac tissues. We first generate data using a 3-pole Debye model and then apply machine learning models (Linear Regression, Polynomial Regression, Gradient Boosting, and KNN) to this data to estimate the dielectric properties. We compare the values predicted by the machine learning models with those given by the analytical model. Our investigation shows that the Gradient Boosting method has the best prediction performance. We further validated these results in Origin software using a curve-fitting technique. In addition, the research contributes to the study of data expansion by predicting unknown data from available experimental data, emphasizing the broader applicability of machine learning in biomedical research. The study's conclusions advance areas such as non-invasive sensing in the context of 6G, which may improve data collection and monitoring in a networked healthcare environment.
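    The workflow described in the abstract can be sketched in a few lines: synthesize permittivity/conductivity data from a 3-pole Debye model over 0.1–10 THz, then fit a Gradient Boosting regressor to it. The Debye pole strengths and relaxation times below are illustrative placeholders, not the ovine-tissue parameters used in the chapter.

    ```python
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    EPS0 = 8.854e-12  # vacuum permittivity (F/m)

    # Hypothetical 3-pole Debye parameters -- illustrative only.
    EPS_INF = 2.5                        # high-frequency permittivity
    DELTAS = [50.0, 10.0, 3.0]           # pole strengths
    TAUS = [8.0e-12, 1.0e-12, 0.1e-12]   # relaxation times (s)

    def debye_3pole(f_hz):
        """Complex relative permittivity from a 3-pole Debye model."""
        w = 2 * np.pi * f_hz
        return EPS_INF + sum(d / (1 + 1j * w * t)
                             for d, t in zip(DELTAS, TAUS))

    # Generate "analytical" data over the 0.1-10 THz band.
    f = np.linspace(0.1e12, 10e12, 2000)
    eps = debye_3pole(f)
    permittivity = eps.real
    conductivity = -2 * np.pi * f * EPS0 * eps.imag  # effective conductivity (S/m)

    # Train a Gradient Boosting model to reproduce permittivity vs. frequency.
    X = f.reshape(-1, 1)
    model = GradientBoostingRegressor(n_estimators=200)
    model.fit(X, permittivity)
    pred = model.predict(X)
    ```

    Comparing `pred` against `permittivity` (the analytical values) mirrors the chapter's evaluation; the same fit can be repeated for the conductivity curve.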

  • An Inventive Navigation System for the Visually Impaired May 15, 2024


    The Department of Computer Science and Engineering is proud to announce that the patent titled "A System and a Method for Assisting Visually Impaired Individuals" has been published by Dr Subhankar Ghatak and Dr Aurobindo Behera, Assistant Professors, along with UG students Mr Samah Maaheen Sayyad, Mr Chinneboena Venkat Tharun, and Ms Rishitha Chowdary Gunnam. Their patent introduces a smart solution to help visually impaired people navigate busy streets more safely. The research team uses cloud technology to turn visual information captured by wearable cameras into helpful vocal instructions that users can hear through their mobile phones. These instructions describe things like traffic signals, crosswalks, and obstacles, making it easier for users to move around independently and paving the way for a more inclusive society.

    Abstract

    This patent proposes a novel solution to ease navigation for visually impaired individuals. It integrates cloud technology, computer vision, and deep learning algorithms to convert real-time visual data into vocal cues delivered through a mobile app. The system employs wearable cameras to capture visual information, processes it on the cloud, and delivers relevant auditory prompts to aid navigation, enhancing spatial awareness and safety for visually impaired users.

    Practical implementation/Social implications of the research

    The practical implementation of the research involves several key components:

    • Wearable camera devices that are comfortable and unobtrusive for visually impaired individuals, capable of capturing high-quality real-time visual data.
    • A robust cloud infrastructure that processes this data quickly and efficiently using advanced computer vision and deep learning algorithms.
    • A user-friendly mobile application that delivers the processed visual information as vocal cues in real time; it should be intuitive, customisable, and accessible to visually impaired users.

    Fig.1: Schematic representation of the proposal
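    The final stage of the pipeline, turning cloud-side detections into spoken instructions, can be sketched as below. The object labels, the `Detection` structure, and the cue wording are illustrative assumptions, not the patented implementation; the actual system would feed these strings to the mobile app's text-to-speech engine.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Detection:
        label: str    # e.g. "crosswalk", "traffic_signal_red" (hypothetical labels)
        bearing: str  # coarse direction relative to the user

    # Map detected objects to spoken instruction templates.
    CUE_TEMPLATES = {
        "crosswalk": "Crosswalk {bearing}.",
        "traffic_signal_red": "Traffic signal {bearing} is red. Please wait.",
        "traffic_signal_green": "Traffic signal {bearing} is green. Safe to cross.",
        "obstacle": "Obstacle {bearing}. Step carefully.",
    }

    def detections_to_cues(detections):
        """Turn cloud-side detections into ordered vocal instructions."""
        return [CUE_TEMPLATES[d.label].format(bearing=d.bearing)
                for d in detections if d.label in CUE_TEMPLATES]

    # One frame's worth of detections from the cloud service.
    frame_detections = [
        Detection("traffic_signal_red", "ahead"),
        Detection("obstacle", "to your left"),
    ]
    for cue in detections_to_cues(frame_detections):
        print(cue)  # handed to the app's text-to-speech engine
    ```

    Keeping the cue templates as data (rather than code) makes them easy to localise and customise per user, which matters for the accessibility goals described above.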

    The social implications of implementing this research are significant. By providing visually impaired individuals with a reliable and efficient navigation aid, we can greatly enhance their independence and quality of life. Navigating city environments can be challenging and hazardous for the visually impaired, leading to increased dependency and reduced mobility. The research aims to mitigate these challenges by empowering users to navigate confidently and autonomously, fostering a more inclusive society where individuals with visual impairments can participate actively in urban mobility, employment, and social activities.

    In the future, the research cohort plans to further enhance and refine the technology to better serve the needs of visually impaired individuals. This includes improving the accuracy and reliability of the object recognition and scene understanding algorithms to provide more detailed and contextually relevant vocal cues. Additionally, the team aims to explore novel sensor technologies and integration methods to expand the system's capabilities, such as incorporating haptic feedback for enhanced spatial awareness. They also intend to conduct extensive user testing and feedback sessions to iteratively improve the usability and effectiveness of the solution. This user-centric approach will ensure that the technology meets the diverse needs and preferences of visually impaired users in various real-world scenarios.

    Moreover, the team is committed to collaborating with stakeholders, including advocacy groups, healthcare professionals, and technology companies, to promote the adoption and dissemination of this technology on a larger scale. By fostering partnerships and engaging with the community, they can maximise the positive impact of their research on the lives of visually impaired individuals worldwide.

