Complex IoT networks comprise many devices connected to a gateway, and new techniques for authenticating devices to the gateway help such networks run smoothly. On this note, the research paper titled “A Lightweight Mutual and Transitive Authentication Mechanism for IoT Network” has been published by Dr Amit Kumar Mandal, Assistant Professor, Department of Computer Science and Engineering, and his research scholar Mrs Rudra Krishna Srija in the Q1 journal Ad Hoc Networks (Elsevier), which has an impact factor of 4.8. The research details the use of a polynomial-based protocol to enhance device connectivity for transitive communication.
Abstract of the paper
In large and complex IoT systems such as a smart city or smart industry, which consist of thousands of connected devices, it may not always be feasible for a device to connect directly to the gateway, but it may be possible to connect to another device. Therefore, already authenticated devices should facilitate the new device’s authentication by the gateway. To address this issue, existing solutions use multiple authentication protocols based on different cryptographic techniques, which are difficult to implement and manage on resource-constrained IoT devices. In this paper, we propose a transitive device authentication protocol based on the Chebyshev polynomial.
The work is primarily aimed at improving transitive communication in machine-to-machine or device-to-device communication in large-scale, heterogeneous IoT network scenarios. In future work, the research team plans to investigate the benefits of adopting the designed protocol, particularly within low-power and lossy networks.
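The algebraic ingredient behind Chebyshev-based key agreement is the semigroup identity T_r(T_s(x)) = T_{rs}(x). The Python sketch below illustrates only this building block, not the paper’s actual protocol: the modulus, public seed, and private keys are made-up values, and a practical implementation would evaluate T_n in logarithmic time rather than with the simple linear recurrence shown here.

```python
# Minimal sketch of the Chebyshev semigroup property T_r(T_s(x)) = T_{r*s}(x),
# the basis of Chebyshev-polynomial key agreement. Illustration only; the
# modulus, seed, and private keys below are assumptions, not from the paper.

P = 2**31 - 1  # illustrative prime modulus for the "enhanced" Chebyshev map


def chebyshev(n: int, x: int, p: int = P) -> int:
    """Evaluate T_n(x) mod p via the recurrence T_n = 2x*T_{n-1} - T_{n-2}."""
    t_prev, t_curr = 1, x % p  # T_0(x), T_1(x)
    if n == 0:
        return t_prev
    for _ in range(n - 1):
        t_prev, t_curr = t_curr, (2 * x * t_curr - t_prev) % p
    return t_curr


# Diffie-Hellman-style exchange built on the semigroup property.
x = 123456789     # public seed
a, b = 97, 233    # small private keys, purely for demonstration

pub_a = chebyshev(a, x)          # sent by party A
pub_b = chebyshev(b, x)          # sent by party B

shared_a = chebyshev(a, pub_b)   # A computes T_a(T_b(x))
shared_b = chebyshev(b, pub_a)   # B computes T_b(T_a(x))

# Both parties arrive at the same shared secret T_{ab}(x).
assert shared_a == shared_b == chebyshev(a * b, x)
```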
Collaborations
Università Ca’ Foscari Venezia, Venice, Italy
An image caption generator system must both recognise the content of an image and produce a caption for it using natural language processing, which is a demanding task for a computer. Image caption generation can help with various problems, such as supporting self-driving cars and aiding the blind.
Recent research at the Department of Computer Science and Engineering proposes a model that generates captions for an image using a Residual Neural Network (ResNet) and Long Short-Term Memory (LSTM). Assistant Professors Dr Morampudi Mahesh Kumar and Dr V Dinesh Reddy have published the paper “Image Description Generator using Residual Neural Network and Long-Short-Term Memory” in the Computer Science Journal of Moldova, which has an impact factor of 0.43.
The captions or descriptions for an image are generated from an inverse dictionary formed during the model’s training. Automatic image description generation is helpful in various areas, such as picture cataloguing, assistance for blind persons, social media, and natural language processing applications.
Despite numerous enhancements to image description generators, there is still scope for development. Taking advantage of larger unsupervised datasets or weakly supervised methods is a challenge yet to be explored in this area, and it is already part of the researchers’ future plans. Another major challenge is generating summaries or descriptions for short videos. This research work can also be extended to natural languages other than English.
Abstract
Human beings can easily describe the scenarios and objects in a picture through vision, whereas performing the same task with a computer is complicated. Generating captions for the objects in an image helps everyone understand the scenario of the image better. Automatically describing the content of an image requires both computer vision and natural language processing. This task has gained huge popularity in the field of technology, and a lot of research work is being carried out. Recent works have been successful in identifying objects in an image but face many challenges in accurately generating captions for a given image by understanding its scenario. To address this challenge, we propose a model to generate the caption for an image. A Residual Neural Network (ResNet) is used to extract the features from an image. These features are converted into a vector of size 2048. The caption for the image is generated with Long Short-Term Memory (LSTM). The proposed model was evaluated on the Flickr8K dataset and obtained an accuracy of 88.4%. The experimental results indicate that our model produces appropriate captions compared to state-of-the-art models.
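To make the pipeline described in the abstract concrete, the sketch below wires a 2048-dimensional ResNet feature vector into an LSTM decoder that predicts the next caption word. It is a minimal illustration assuming TensorFlow/Keras; the vocabulary size, maximum caption length, and layer widths are placeholders rather than values reported in the paper.

```python
# Minimal ResNet + LSTM captioning sketch (illustrative only; hyperparameters
# below are assumptions, not the values used in the published paper).
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import ResNet50

VOCAB_SIZE = 8000   # assumed vocabulary size
MAX_LEN = 34        # assumed maximum caption length (in tokens)
EMBED_DIM = 256     # assumed embedding / LSTM width

# Feature extractor: global-average-pooled ResNet50 yields one 2048-d vector per image.
feature_extractor = ResNet50(weights="imagenet", include_top=False, pooling="avg")

# Image branch: project the 2048-d feature vector into the decoder's space.
img_input = layers.Input(shape=(2048,))
img_dense = layers.Dense(EMBED_DIM, activation="relu")(layers.Dropout(0.5)(img_input))

# Text branch: embed the partial caption and summarise it with an LSTM.
cap_input = layers.Input(shape=(MAX_LEN,))
cap_embed = layers.Embedding(VOCAB_SIZE, EMBED_DIM, mask_zero=True)(cap_input)
cap_lstm = layers.LSTM(EMBED_DIM)(layers.Dropout(0.5)(cap_embed))

# Merge both branches and predict the next word of the caption.
merged = layers.add([img_dense, cap_lstm])
hidden = layers.Dense(EMBED_DIM, activation="relu")(merged)
output = layers.Dense(VOCAB_SIZE, activation="softmax")(hidden)

caption_model = Model(inputs=[img_input, cap_input], outputs=output)
caption_model.compile(loss="categorical_crossentropy", optimizer="adam")
```

At inference time such a decoder is called repeatedly, feeding each predicted word back into the partial caption until an end-of-sequence token or the maximum length is reached; the predicted indices are then mapped back to words, for example through the inverse dictionary mentioned above.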