Dr Sambit Kumar Mishra

Assistant Professor

Department of Computer Science and Engineering

Contact Details

sambitkumar.m@srmap.edu.in

Office Location

SR Block, Level 6, Cabin No: 9

Education

  • 2019 | Ph.D. (CSE) | National Institute of Technology, Rourkela, India
  • 2014 | M.Tech. (CS) | Utkal University, Bhubaneswar, India
  • 2011 | M.Sc. (CS) | Utkal University, Bhubaneswar, India

Experience

  • 13th May 2019 – 2020 | Assistant Professor | SoA Deemed to be University, Bhubaneswar, India
  • 2nd July 2018 – 10th May 2019 | Assistant Professor | Oriental University, Indore, India
  • 4th July 2011 – 10th July 2012 | Assistant Professor | Padmashree Kurtartha Acharya College of Engineering, Bargarh, Odisha, India

Research Interest

  • Multi-objective service allocation in cloud and multi-cloud environments.
  • Integration of IoT and cloud computing.
  • Proposing new algorithms for optimizing multiple parameters in distributed systems.

Awards

  • 2018 – IETE Best Research Award in IETE Seminar on Advances in Smart Hardware Technologies (ASH-Tech 2018) – IETE
  • 2019 - InSc Young Achiever Award - InSc

Memberships

  • IEEE Computer Society
  • IETE Associate Member
  • InSc Member

Publications

  • A Robust Model for Quantum-Resistant Cryptography to Tackle Quantum Risks

    Guha D., Lenka R., Sharma V., Mishra S.K., Alkhayyat A., Tripathy H.K.

    Conference paper, Lecture Notes in Networks and Systems, 2026, DOI Link

    As quantum computing advances, conventional cryptographic algorithms face evolving threats, necessitating the development of quantum-resistant security mechanisms. The Winternitz One-Time Signature (WOTS) is a promising cryptographic scheme that offers robust resistance against quantum attacks. This paper explores the application of WOTS in enhancing the protection of digital communications and data integrity in the quantum computing era. By analysing the fundamental principles, practical implementations, and potential challenges of WOTS, this research aims to provide insights into its effectiveness as a quantum-resistant security solution.
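
    A minimal Python sketch of the hash-chain idea underlying WOTS is given below; it uses toy parameters and omits the checksum chains required for real security, so it illustrates the principle only and is not the scheme analysed in the paper.

        import hashlib, os

        W = 256   # chain length: one chain per byte of the message digest
        N = 32    # SHA-256 digest length in bytes

        def H(x: bytes) -> bytes:
            return hashlib.sha256(x).digest()

        def chain(x: bytes, steps: int) -> bytes:
            for _ in range(steps):
                x = H(x)
            return x

        def keygen():
            sk = [os.urandom(N) for _ in range(N)]      # one secret seed per digest byte
            pk = [chain(s, W - 1) for s in sk]          # walk each chain to its end
            return sk, pk

        def sign(msg: bytes, sk):
            d = hashlib.sha256(msg).digest()
            return [chain(sk[i], d[i]) for i in range(N)]   # reveal intermediate chain values

        def verify(msg: bytes, sig, pk) -> bool:
            d = hashlib.sha256(msg).digest()
            return all(chain(sig[i], W - 1 - d[i]) == pk[i] for i in range(N))

        sk, pk = keygen()
        s = sign(b"hello", sk)
        print(verify(b"hello", s, pk))     # True
        print(verify(b"tampered", s, pk))  # almost certainly False
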
  • Tomato Leaf Disease Detection Using Deep Learning and Machine Learning

    Chebrolu M., Garikapati K., Veeramachaneni Y., Annabathina J., Mishra S.K., Mishra S.K.

    Conference paper, 2025 International Conference on Artificial Intelligence and Machine Vision, AIMV 2025, 2025, DOI Link

    Detecting diseases in tomato leaves at an early stage is crucial for preventing crop damage and improving food security. Traditional diagnostic methods are often inefficient, requiring significant expertise and time. To address this challenge, we explore AI-driven approaches, integrating DL and ML methods for automated disease detection. This study employs CNNs, specifically leveraging the VGG16 architecture for feature extraction. Additionally, we compare its effectiveness with classical classifiers such as KNN and SVM. Using a publicly available dataset of healthy and diseased tomato leaves, our results indicate that CNN-based models outperform conventional machine learning classifiers in both accuracy and efficiency. Moreover, integrating IoT-based analytics enhances early detection, reducing crop losses and promoting sustainable agricultural practices.
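
    A hedged sketch of the feature-extraction-plus-classical-classifier pipeline described above; the arrays X (224x224 RGB leaf images) and y (labels) are assumed to be pre-loaded, and the hyperparameters are placeholders rather than the paper's settings.

        import numpy as np
        from tensorflow.keras.applications import VGG16
        from tensorflow.keras.applications.vgg16 import preprocess_input
        from sklearn.model_selection import train_test_split
        from sklearn.svm import SVC
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.metrics import accuracy_score

        # X: (n, 224, 224, 3) leaf images, y: integer class labels (assumed pre-loaded)
        def extract_features(X):
            base = VGG16(weights="imagenet", include_top=False, pooling="avg",
                         input_shape=(224, 224, 3))   # frozen convolutional base
            return base.predict(preprocess_input(X.astype("float32")), verbose=0)

        feats = extract_features(X)
        X_tr, X_te, y_tr, y_te = train_test_split(feats, y, test_size=0.2, stratify=y)

        for name, clf in [("SVM", SVC(kernel="rbf", C=10)),
                          ("KNN", KNeighborsClassifier(n_neighbors=5))]:
            clf.fit(X_tr, y_tr)
            print(name, accuracy_score(y_te, clf.predict(X_te)))
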
  • When latent features meet side information: A preference relation based graph neural network for collaborative filtering

    Shi X., Zhang Y., Pujahari A., Mishra S.K.

    Article, Expert Systems with Applications, 2025, DOI Link

    As recommender systems shift from rating-based to interaction-based models, graph neural network-based collaborative filtering models are gaining popularity due to their powerful representation of user-item interactions. However, these models may not produce good item ranking since they focus on explicit preference predictions. Further, these models do not consider side information since they only capture latent feature information of user-item interactions. This study proposes an approach to overcome these two issues by employing preference relation in the graph neural network model for collaborative filtering. Using preference relation ensures the model will generate a good ranking of items. The item side information is integrated into the model through a trainable matrix, which is crucial when the data is highly sparse. The main advantage of this approach is that the model can be generalized to any recommendation scenario where a graph neural network is used for collaborative filtering. Experimental results obtained using the recent RS datasets show that the proposed model outperformed the related baselines.
  • Trading Strategy with EMA’s and Risk Management

    Pranav Somisetty S.D., Jagadishwar Gatte S., Kosuri N.B., Gowrish Chinta L., Mishra S.K., Kumar Mishra S.

    Conference paper, 2025 International Conference on Artificial Intelligence and Machine Vision, AIMV 2025, 2025, DOI Link

    The trading world often appears mysterious, filled with stories of fear, hope, addiction, and occasional profits. However, many fail to recognize that consistent profitability in trading is driven by discipline, a well-defined strategy, and strict adherence to rules. This lack of awareness is a key reason why 75-90% of new traders enter the market with high expectations but end up losing their hard-earned money. In this research we propose a quantitative trading strategy based on exponential moving average (EMA) crossovers, volume analysis, and structured profit booking. The strategy utilises a short-term 9-period EMA and a long-term 15-period EMA to identify trend reversals, generating buy signals when the two EMAs cross under certain conditions and sell signals when the opposite occurs. Meanwhile, a confirmation mechanism is introduced, requiring the price to move at least 0.06% above the crossover price while ensuring the crossover candle remains bullish. Additionally, volume conditions are incorporated to validate momentum, ensuring buy signals are triggered only when the trading volume increases in ascending order. To optimize trade management, a multi-tier profit booking system is implemented, allowing partial exits at predefined levels, which ensures that traders secure gains while allowing profitable trades to run. The strategy's performance is evaluated through historical back-testing, assessing profitability, accuracy, and risk-reward dynamics. The results demonstrate the effectiveness of integrating EMA crossovers with volume and structured exit points in improving trade success rates. This approach could help many traders turn a losing portfolio into a winning one.
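
    A minimal pandas sketch of the crossover and confirmation logic described above (9/15-period EMAs, a 0.06% confirmation move, rising volume); the multi-tier profit booking and back-testing harness are omitted, and the column names are assumptions.

        import pandas as pd

        def ema_signals(df: pd.DataFrame) -> pd.DataFrame:
            # df is assumed to have 'close' and 'volume' columns indexed by time
            df = df.copy()
            df["ema9"] = df["close"].ewm(span=9, adjust=False).mean()
            df["ema15"] = df["close"].ewm(span=15, adjust=False).mean()

            cross_up = (df["ema9"] > df["ema15"]) & (df["ema9"].shift(1) <= df["ema15"].shift(1))
            cross_dn = (df["ema9"] < df["ema15"]) & (df["ema9"].shift(1) >= df["ema15"].shift(1))

            # confirmation: price at least 0.06% above the previous (crossover) close, volume rising
            confirmed = df["close"] >= df["close"].shift(1) * 1.0006
            rising_volume = df["volume"] > df["volume"].shift(1)

            df["buy"] = cross_up & confirmed & rising_volume
            df["sell"] = cross_dn
            return df
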
  • Enhancing Heart Disease Prediction with Data Augmentation and ML Classifiers

    Rachapalli V.K., Meenavalli C., Nunna S.P., Yarramaneni P., Mishra S.K., Mishra S.K.

    Conference paper, 2025 International Conference on Artificial Intelligence and Machine Vision, AIMV 2025, 2025, DOI Link

    Heart disease is a significant cause of death worldwide, and early prediction is vital for prevention and treatment. This project uses the Framingham Heart Study dataset for the early prediction of Coronary Heart Disease (CHD) using machine learning methods. The Framingham dataset is highly unbalanced, with only 16% of cases being CHD-positive, which impacts model accuracy. To overcome this, data augmentation techniques such as SMOTE and cGAN are applied to create synthetic CHD cases. The machine learning algorithms compared are Random Forest, XGBoost, SVM, and MLP. XGBoost achieved the highest AUC-ROC of 0.973 when cGAN-augmented data is used, and cGAN augmentation improves recall and overall model performance significantly. This study highlights the potential of combining machine learning with data augmentation to improve CHD prediction.
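
    An illustrative sketch of the SMOTE-plus-XGBoost portion of such a pipeline (the cGAN augmentation is not shown); X and y are assumed to be a pre-loaded feature matrix and binary CHD label, and the hyperparameters are placeholders, not the paper's settings.

        from imblearn.over_sampling import SMOTE
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import roc_auc_score
        from xgboost import XGBClassifier

        # X: clinical features, y: 1 = CHD, 0 = no CHD (assumed pre-loaded)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=42)

        # oversample the minority CHD class only on the training split
        X_res, y_res = SMOTE(random_state=42).fit_resample(X_tr, y_tr)

        model = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.1)
        model.fit(X_res, y_res)

        auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
        print(f"AUC-ROC: {auc:.3f}")
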
  • Ensembling AI and Federated Learning for Industry 4.0: A Privacy-Preserving Approach in Edge Computing

    Sahoo S.K., Dash A., Mishra S.K., Humayun M.

    Book chapter, Advances in Science, Technology and Innovation, 2025, DOI Link

    The emergence of Industry 4.0 resulted in a disruptive era marked by the incorporation of cutting-edge technology, such as edge computing and artificial intelligence (AI), into industrial processes. The integration of AI and Federated Learning (FL) methodologies and the creation of intelligent solutions that protect privacy within the framework of Industry 4.0 are two key ideas that will be explored in this chapter. The chapter highlights that one major obstacle to edge computing’s widespread adoption in Industry 4.0 is privacy concerns. It emphasizes the necessity of finding solutions that balance the demands for real-time processing with the strictest privacy regulation. The main goal is to investigate how intelligent edge device solutions can be implemented while maintaining privacy protection through the use of FL. The goal of this chapter is to shed light on how to use the synergies between AI and FL to address privacy concerns related to Industry 4.0. The chapter ends with a call for Industry 4.0, which will see the standardization of edge computing, federated learning techniques, and artificial intelligence. By putting in place privacy-preserving safeguards, organizations are encouraged to adopt new technologies while maintaining strict data privacy and security standards. In the rapidly changing context of Industry 4.0, this symbiotic connection is expected to transform industrial landscapes, guiding them towards unmatched efficiency and creativity.
  • Intent-Driven VM Allocation Strategy for Optimizing Cloudlet Processing in Edge-Cloud Computing

    Sahoo S.K., Mishra S.K., Puthal D.

    Article, IEEE Internet of Things Journal, 2025, DOI Link

    Edge-cloud computing refers to a paradigm that combines the benefits of edge and cloud computing to optimize data processing and resource utilization. Edge-cloud computing plays a crucial role in resource allocation by optimizing the distribution of computational resources between edge devices and centralized cloud infrastructures. In the rapidly evolving landscape of edge-cloud computing, efficient VM allocation is critical for optimizing resource utilization, minimizing latency, and ensuring high SLA compliance. This paper introduces a novel heuristic VM allocation strategy, named LLCD, to enhance cloudlet or task processing in edge-cloud data centers. By employing a heuristic approach inspired by mixed-integer nonlinear programming models, this strategy dynamically assigns VMs based on their current load and the impending deadlines of tasks, significantly reducing overall system latency and enhancing SLA success rates. Simulation was conducted across various computational intensities. The findings reveal that the proposed approach substantially improves resource utilization and operational efficiency, adapting to dynamic workloads, by achieving SLA success ratios of 74.26% and 83.7% in different deadline scenarios. The adaptive nature of the LLCD algorithm allows real-time task reallocation based on system feedback, which mirrors the operational principles of AI-driven orchestration in distributed IoT environments. The validation is achieved through a multi-iteration simulation model that emulates dynamic IoT workloads, demonstrating LLCD’s learning capability in maintaining SLA stability and consistent latency reduction across changing task distributions. Moreover, the proposed heuristic provides a foundation for latency-efficient and learning-based management in distributed computing environments.
  • Container Placement Using Penalty-Based PSO in the Cloud Data Center

    Akram Khan M., Sahoo B., Kumar Mishra S.

    Article, Concurrency and Computation: Practice and Experience, 2025, DOI Link

    Containerization has transformed application deployment by offering a lightweight, scalable, and portable architecture for the deployment of container applications and their dependencies. In contemporary cloud computing data centers, where virtual machines (VMs) are frequently utilized to host containerized applications, the challenge of effective placement of the container has garnered significant attention. Container placement (CP) involves placing a container on a VM for execution. CP is a nontrivial problem in the container cloud data center (CCDC). Poor placement decisions can lead to decreased service performance or wastage of cloud resources. Efficient placement of containers within a virtual environment is critical while optimizing resource utilization and performance. This paper proposes a penalty-based particle swarm optimization (PB-PSO) CP algorithm. In the proposed algorithm, we have considered the makespan, cost, and load of the VM while making the CP decisions. We have proposed the concept of a load-balancing penalty to prevent a VM from becoming overloaded. This algorithm addresses various CP challenges by varying container application sizes in heterogeneous cloud environments. The primary goal of the proposed algorithm is to minimize the makespan and computational cost of containers through efficient resource utilization. We have performed extensive simulation studies to verify the efficacy of the proposed algorithm using the CloudSim 4.0 simulator. The proposed optimization algorithm (PB-PSO) aims to minimize both the makespan and the execution monetary costs and maximize resource utilization simultaneously. During the simulation, we observed a reduction of 10% to 15% in both execution cost and makespan. Furthermore, our algorithm achieved the most optimal cost-makespan trade-offs compared to other competing algorithms.
  • A Survey on Task Scheduling in Edge-Cloud

    Sahoo S.K., Mishra S.K.

    Article, SN Computer Science, 2025, DOI Link

    In this modern era, cloud computing alone cannot meet today’s intelligent society’s data processing needs, so edge computing has emerged. In contrast to computation in the cloud, it emphasizes proximity to the user and to the data source. Storing local, small-sized, and processed data at the edges of the network is more effective. The edge paradigm, intended to be a leading computing paradigm due to its low latency, also faces many challenges due to limited computational capabilities and resource availability. Edge computing allows edge devices to offload heavy loads and computational operations to a remote server. This allows us to take full advantage of server-side computing and storage from edge devices. However, offloading all computation-intensive operations to a remote server at the same time may overcrowd it, leading to long processing delays for many computing operations and unexpectedly elevated power usage. Instead, spare edge resources may need to be utilized effectively while access to expensive cloud resources is restricted. As a result, it is important to investigate collaborative scheduling between edge servers and a cloud server based on task features, development objectives, and system status. It can assist in performing all the computing functions efficiently and effectively. This paper analyzes and summarizes computing conditions for the edge computing context and classifies the computation of tasks into various edge-cloud computing scenarios. Finally, based on the problem structure, various collaborative planning methods for computational functions are presented.
  • Multi-objective based container placement strategy in CaaS

    Khan M.A., Sahoo B., Mishra S.K., Shankar A.

    Article, Software - Practice and Experience, 2025, DOI Link

    In contrast to a conventional virtual machine (VM), a container is a lightweight virtualization technology. Containers are becoming a prominent technology for cloud services because of their portable, scalable, and flexible deployments, especially in the Internet of Things (IoT), smart devices, and fog and edge computing. It is a type of operating system-level virtualization in which the kernel allows multiple isolated containers to run independently. Container placement (CP) is a nontrivial problem in Container-as-a-Service (CaaS). CP is the mapping of containers onto virtual machines (VMs) to execute an application. Designing an efficient CP strategy is complex due to several intertwined challenges, which arise from a diverse spectrum of computing resources and from on-demand, unpredictable fluctuations in IT resource usage by multiple tenants. In this article, we propose a modified sum-based container placement algorithm called the multi-objective optimization-based container placement algorithm (MSBCPA). In the proposed algorithm, we have considered two metrics, makespan and monetary cost, for optimizing available IT resources. We have conducted comprehensive simulation experiments to validate the effectiveness of the proposed algorithm on the CloudSim 4.0 simulator. The proposed optimization algorithm (MSBCPA) aims to minimize the makespan and the execution monetary costs simultaneously. In the simulation, we found that the execution cost and energy consumption cost are reduced by 20% to 30%, and the algorithm achieves the best possible cost-makespan trade-offs compared to competing algorithms.
  • An Integrated ELM Based Feature Reduction Combination Detection for Gene Expression Data Analysis

    Tripathy J., Dash R., Pattanayak B.K., Mishra S.K.

    Article, SN Computer Science, 2025, DOI Link

    Globally, cancer stands as the second leading cause of mortality. Various strategies have been proposed to address this issue, with a strong emphasis on utilizing gene expression data to enhance cancer detection methods. However, challenges arise due to the high dimensionality, limited sample size relative to the number of dimensions, and the inherent redundancy and noise in many genes. Consequently, it is advisable to employ a subset of genes rather than the entire set for classifying gene expression data. This research introduces a model that incorporates Ranked-based Filter (RF) techniques for extracting significant features and employs an Extreme Learning Machine (ELM) for data classification. The computational cost of using the RF technique on high-dimensional data is low; however, extraction of significant genes using one or two stages of reduction is not effective. Thus, a four-stage feature reduction strategy is applied. The reduced data is then used for classification using a few variants of the ELM model and activation functions. Subsequently, a two-stage grading approach is implemented to determine the most suitable classifier for data classification. This analysis is conducted over four microarray gene expression datasets using four activation functions with seven learning-based classifiers, from which it is shown that the II-ELM classifier outperforms the others in terms of performance metrics and the ROC graph.
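
    A compact NumPy sketch of a basic Extreme Learning Machine classifier (random hidden weights, analytic output weights via a pseudo-inverse); the paper's II-ELM variant and the four-stage filter pipeline are not reproduced here.

        import numpy as np

        class ELMClassifier:
            """Basic single-hidden-layer ELM: random input weights, analytic output weights."""
            def __init__(self, n_hidden=200, seed=0):
                self.n_hidden = n_hidden
                self.rng = np.random.default_rng(seed)

            def _hidden(self, X):
                return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))   # sigmoid activation

            def fit(self, X, y):
                self.classes_, y_idx = np.unique(y, return_inverse=True)
                T = np.eye(len(self.classes_))[y_idx]                 # one-hot targets
                self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
                self.b = self.rng.normal(size=self.n_hidden)
                H = self._hidden(X)
                self.beta = np.linalg.pinv(H) @ T                     # least-squares output weights
                return self

            def predict(self, X):
                return self.classes_[np.argmax(self._hidden(X) @ self.beta, axis=1)]
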
  • Message from ICEC Steering Committee Chair ICEC 2024

    Mishra S.K., Puthal D.

    Editorial, Intelligent Computing and Emerging Communication Technologies, ICEC 2024, 2024, DOI Link

  • A Systematic Review on Federated Learning in Edge-Cloud Continuum

    Mishra S.K., Sahoo S.K., Swain C.K.

    Review, SN Computer Science, 2024, DOI Link

    Federated learning (FL) is a cutting-edge machine learning platform that protects user privacy while enabling collaborative learning across various devices. It is particularly relevant in the current environment when massive volumes of data are generated at the edge of networks by developing technologies like social networking, cloud computing, edge computing, and the Internet of Things. FL reduces the possibility of unauthorized access by third parties by allowing data to stay on local devices, hence mitigating any privacy breaches. This study investigates the integration of FL in Cloud, Edge, and hybrid Edge-Cloud computing settings. We highlight the salient features of FL, go over the main obstacles to its implementation and use, and make recommendations for future study directions. Furthermore, we assess how FL, by facilitating safe and cooperative data sharing among vehicles, can improve service quality in the Internet of Vehicles (IoV). Our study findings are intended to offer practical insights and suggestions that may have an impact on a variety of computing technology research topics.
  • Special issue on collaborative edge computing for secure and scalable Internet of Things

    Puthal D., Mishra A.K., Mishra S.K.

    Editorial, Software - Practice and Experience, 2024, DOI Link

  • Message from Convener and Co-Conveners ICEC-2024

    Mishra S.K., Enduri M.K., Dash J.K., Manikandan V.M.

    Editorial, Intelligent Computing and Emerging Communication Technologies, ICEC 2024, 2024, DOI Link

  • Applications of Federated Learning in Computing Technologies

    Mishra S.K., Sindhu K., Teja M.S., Akhil V., Krishna R.H., Praveen P., Mishra T.K.

    Book chapter, Convergence of Cloud with AI for Big Data Analytics: Foundations and Innovation, 2024, DOI Link

    Federated learning is a technique that trains a model across different decentralized devices holding local data samples without exchanging them. The concept is also called collaborative learning. In federated learning, clients separately train deep neural network models on their local data, and these local models are combined into a global deep neural network model at the central server. Unlike traditional centralized approaches, in which all local datasets are uploaded to a single server, and unlike classical decentralized approaches, which assume that local data samples are identically distributed, federated learning does not transmit local data to the server. Because of its security and privacy benefits, it is widely utilized in many applications like IoT, cloud computing, edge computing, vehicular edge computing, and many more. The implementation details for privacy in federated learning, for shielding the privacy of locally held data, are described. Since there will be trillions of edge devices, system efficiency and privacy should be taken into consideration when evaluating federated learning algorithms in computing technologies. This chapter covers the effectiveness, privacy, and usage of federated learning in several computing technologies. Here, different applications of federated learning, its privacy concerns, and its definition in various fields of computing like IoT, Edge, and Cloud Computing are presented.
  • Designing a GSM and ARDUINO based Reliable Home Automation System

    Tripathy J., Dash S., Dash R., Pal J., Padhi S., Mishra S.K.

    Conference paper, Proceedings - 2024 OITS International Conference on Information Technology, OCIT 2024, 2024, DOI Link

    This paper introduces the design and prototype of a new home automation system that utilizes GSM technology as the network infrastructure to connect its components. The proposed system is composed of two primary parts: the first is the GSM module, which acts as the core of the system, managing, controlling, and monitoring the user's home. Users and system administrators can connect to the GSM locally to access devices and manage system functions. The second part is the hardware interface module, which provides the necessary interface for relays and actuators within the home automation system. The mobile phone, originally designed for making calls and sending text messages, has evolved into a versatile device, especially with the advent of smartphones. In this study, the researcher develops a home automation system using GSM and Arduino, allowing users to control household appliances by simply sending SMS commands through their GSM-based phones. This paper shows that a smartphone is not necessary; an old GSM phone can effectively be used to turn home electronic appliances on and off from any location. The proposed system offers greater scalability and flexibility compared to commercially available home automation systems.
  • A deep transfer learning model for green environment security analysis in smart city

    Sahu M., Dash R., Kumar Mishra S., Humayun M., Alfayad M., Assiri M.

    Article, Journal of King Saud University - Computer and Information Sciences, 2024, DOI Link

    Green environmental security refers to the state of human-environment interactions that include reducing resource shortages, pollution, and biological dangers that can cause societal disorder. In IoT-enabled smart cities, due to the advancement of technologies, sensors and actuators collect vast quantities of data that are analyzed to extract potentially useful information. However, due to the noise and diversity of the data generated, only a small portion of the massive data collected from smart cities is used. In sustainable Land Use and Land Cover (LULC) management, environmental deterioration resulting from improper land usage in the digital ecosystem is a global issue that has garnered attention. The deep learning techniques of AI are recognized for their capacity to manage vast amounts of erroneous and unstructured data. In this paper, we propose a morphologically augmented fine-tuned DenseNet-121 (MAFDN) LULC classification model to automate the categorization of high spatial resolution scene images for environmental conservation. This work includes an augmentation process (i.e., erosion, dilation, blurring, and contrast enhancement operations) to extract spatial patterns and enlarge the training size of the dataset. A few state-of-the-art techniques are incorporated for contrasting the efficacy of the proposed approach. This facilitates green resource management and personalized provision of services.
  • Enhancing Edge Intelligence with Layer-wise Adaptive Precision and Randomized PCA

    Mishra S.K., Velankani Joise Divya G.C., Maddi P.A., Tanniru N.M., Manthena S.L.P.

    Conference paper, Proceedings of 2nd International Conference on Advancements in Smart, Secure and Intelligent Computing, ASSIC 2024, 2024, DOI Link

    Edge intelligence is the ability of edge devices to carry out intelligent operations, such as object identification, speech recognition, or natural language processing, utilizing machine learning algorithms. The primary goal is to fix edge computing's problems and improve its performance. The main goal of this work is to apply RPCA to increase energy efficiency and reduce memory usage. The algorithm computes the covariance matrix of the centered data, finds the eigenvectors and eigenvalues of the covariance matrix, sorts the eigenvectors and eigenvalues in descending order of the eigenvalues, chooses the first set of eigenvectors, and projects the data onto the chosen eigenvectors. This article employs a technique known as layer-wise adaptive precision (LAP), which decreases the precision of activations in neural network layers that contribute less to output accuracy.
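
    A small NumPy sketch of the covariance-eigendecomposition steps listed in the abstract (centering, covariance, eigenvectors sorted by descending eigenvalue, projection); the randomized variant and the layer-wise adaptive precision scheme are not shown.

        import numpy as np

        def pca_project(X: np.ndarray, k: int):
            """Project X (n_samples x n_features) onto its top-k principal components."""
            X_centered = X - X.mean(axis=0)             # center the data
            cov = np.cov(X_centered, rowvar=False)      # covariance matrix
            eigvals, eigvecs = np.linalg.eigh(cov)      # eigh: suited to symmetric matrices
            order = np.argsort(eigvals)[::-1]           # sort by descending eigenvalue
            components = eigvecs[:, order[:k]]          # keep the first k eigenvectors
            return X_centered @ components              # project onto the chosen eigenvectors

        X = np.random.rand(500, 64)
        print(pca_project(X, k=8).shape)   # (500, 8)
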
  • Role of federated learning in edge computing: A survey

    Mishra S.K., Kumar N.S., Rao B., Brahmendra, Teja L.

    Article, Journal of Autonomous Intelligence, 2024, DOI Link

    This paper explores various approaches to enhance federated learning (FL) through the utilization of edge computing. Three techniques, namely Edge-Fed, hybrid federated learning at edge devices, and cluster federated learning, are investigated. The Edge-Fed approach addresses the computational and communication challenges faced by mobile devices in FL by offloading calculations to edge servers. It introduces a network architecture comprising a central cloud server, an edge server, and IoT devices, enabling local aggregations and reducing global communication frequency. Edge-Fed offers benefits such as reduced computational costs, faster training, and decreased bandwidth requirements. Hybrid federated learning at edge devices aims to optimize FL in multi-access edge computing (MAEC) systems. Cluster federated learning introduces a cluster-based hierarchical aggregation system to enhance FL performance. The paper explores the applications of these techniques in various domains, including smart cities, vehicular networks, healthcare, cybersecurity, natural language processing, autonomous vehicles and smart homes. The combination of edge computing (EC) and federated learning (FL) is a promising technique gaining popularity across many applications. EC brings cloud computing services closer to data sources, further enhancing FL. The integration of FL and EC offers potential benefits in terms of collaborative learning.
  • Task Offloading Technique Selection In Mobile Edge Computing

    Mishra S.K., Challa H.K., Kotha K.S., Yarramreddy D.P.

    Conference paper, Proceedings of 2nd International Conference on Advancements in Smart, Secure and Intelligent Computing, ASSIC 2024, 2024, DOI Link

    In distributed computing environments, computation offloading is a vital strategy for maximizing the performance and energy efficiency of mobile devices. Distributed deep learning-based offloading (DDLO) [10] and deep reinforcement learning for online computation offloading (DROO) [10] are two popular methods for solving the computation offloading problem. In DDLO, the data is divided into smaller pieces during offloading and distributed throughout the systems or devices. In DROO, an agent is trained to determine the optimum offloading choices based on the resources at hand, the network environment, and the application's performance requirements. A comparison of both approaches is presented, emphasizing their benefits and drawbacks and the situations in which one approach is more suitable than the other. Precision, effectiveness, and adaptability are just a few of the different metrics we use to evaluate the performance of both techniques in a variety of workload and network configuration scenarios. Our findings indicate that while deep reinforcement learning is more able to respond to environmental changes, distributed deep learning-based offloading is more efficient in terms of computational resources.
  • Message from General Chairs ICEC-2024

    Mishra S.K., Mohapatra P.

    Editorial, Intelligent Computing and Emerging Communication Technologies, ICEC 2024, 2024, DOI Link

  • Advanced Temporal Attention Mechanism Based 5G Traffic Prediction Model for IoT Ecosystems

    Samudrala D.S., Mishra S.K., Senapati R.

    Conference paper, Proceedings - 2024 IEEE 21st International Conference on Mobile Ad-Hoc and Smart Systems, MASS 2024, 2024, DOI Link

    Traffic prediction in 5G is important for the effective deployment and operation of Internet of Things (IoT) ecosystems. It enables resource management and optimization, guaranteeing that the network can handle unpredictable traffic volumes without experiencing traffic jams. This helps to ensure high quality of service and low latency for applications such as autonomous automobiles and virtual reality. Predictive traffic management further enhances user experience by keeping services consistent and reliable, particularly during busy hours. There are various approaches to traffic prediction in 5G networks, and each has advantages and disadvantages of its own. The choice of model will depend on how precise, adaptable, and computationally demanding the network must be. The model proposed in this paper integrates lightweight convolution with temporal attention to deliver accurate and efficient traffic prediction for 5G networks, which may further be useful for developing IoT ecosystems.
  • Maximizing Resource Utilization Using Hybrid Cloud-based Task Allocation Algorithm

    Mishra S.K., Mohith G.K.H., Ambati S.T., Guduru K.K., Senapati R.

    Conference paper, Proceedings - 2024 IEEE 21st International Conference on Mobile Ad-Hoc and Smart Systems, MASS 2024, 2024, DOI Link

    Cloud computing operates similarly to a utility, providing users with on-demand access to various hardware and software resources, billed according to usage. These resources are primarily virtualized, with virtual machines (VMs) serving as critical components. However, task allocation within VMs presents significant challenges, as uneven distribution can lead to underloading or overloading, causing system inefficiencies and potential failures. This study addresses these issues by proposing a novel hybrid task allocation algorithm that combines the strengths of the Artificial Bee Colony (ABC) algorithm with Particle Swarm Optimization (PSO). Our approach aims to enhance resource utilization and reduce the risks of VM overload or underload. We conduct a comprehensive evaluation of the proposed hybrid algorithm against traditional ABC and PSO algorithms, focusing on their effectiveness in managing diverse task loads. The results of our empirical analysis indicate that our hybrid approach outperforms the conventional algorithms, leading to better resource utilization and more accurate task allocation. These findings have significant implications for optimizing task allocation in cloud computing environments, and we suggest potential avenues for future research to further refine these strategies.
  • Enhancing Traffic Flow Through Advanced ACO Mechanism

    Divya G C V.J., Mishra S.K., Puthal D.

    Conference paper, IEEE INFOCOM 2024 - IEEE Conference on Computer Communications Workshops, INFOCOM WKSHPS 2024, 2024, DOI Link

    Severe traffic congestion is a significant challenge for urban areas, and improving sustainable urban development is critical, yet traditional traffic management systems often struggle to cope with dynamic real-time conditions due to their reliance on predetermined schedules and fixed control mechanisms. This paper advocates for the application of optimizing techniques, specifically an enhanced version of ant colony optimization (ACO), to alleviate this challenge. By effectively managing and enhancing vehicle movement, these approaches target the reduction of congestion, travel times, and costs while concurrently enhancing fuel efficiency. This approach can also be adapted to optimize the deployment and movement of drones in wireless communication networks, ensuring optimal coverage and resource utilization. Implementations, comparisons, and visualizations show how these approaches help improve traffic movement, thereby minimizing congestion-associated problems.
  • AI Based Feature Selection for Intrusion Detection Classifiers in Cloud of Things

    Ravala R.K., Polisetty K.B., Mishra S.K.

    Conference paper, 2024 1st International Conference on Cognitive, Green and Ubiquitous Computing, IC-CGU 2024, 2024, DOI Link

    The popularity of cloud computing can be attributed to its on-demand nature, scalability, and flexibility. However, because of its heightened vulnerability and propensity for sophisticated, widespread attacks, safeguarding this distributed environment presents difficulties. Conventional IDS are insufficient. The proposed IDS for cloud environments in this study makes use of ensemble feature selection and classification techniques. This approach robustly distinguishes between attacks and normal traffic by merging individual classifiers through voting. Performance measures and ROC-AUC analysis show that the new approach is significantly more accurate and has fewer false alarms than the previous one. For cloud intrusion detection, this method provides a statistically better option.
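
    A hedged scikit-learn sketch of an ensemble that merges individual classifiers through voting, as the abstract describes; the base learners, the feature-selection step, and the training data (X_train, y_train, X_test, y_test) are assumptions, not the paper's exact configuration.

        from sklearn.ensemble import RandomForestClassifier, VotingClassifier
        from sklearn.feature_selection import SelectKBest, f_classif
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        # X_train/X_test: traffic features, y_train/y_test: 1 = attack, 0 = normal (assumed pre-loaded)
        ensemble = make_pipeline(
            StandardScaler(),
            SelectKBest(f_classif, k=20),                 # simple univariate feature selection
            VotingClassifier(
                estimators=[
                    ("rf", RandomForestClassifier(n_estimators=200)),
                    ("lr", LogisticRegression(max_iter=1000)),
                    ("svm", SVC(probability=True)),
                ],
                voting="soft",                            # average predicted class probabilities
            ),
        )
        ensemble.fit(X_train, y_train)
        print(ensemble.score(X_test, y_test))
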
  • A Panoramic Review on Cutting-Edge Methods for Video Anomaly Localization

    Nayak R., Mishra S.K., Dalai A.K., Pati U.C., Das S.K.

    Review, IEEE Access, 2024, DOI Link

    Video anomaly detection and localization is the process of spatiotemporally localizing the anomalous video segment corresponding to the abnormal event or activities. It is challenging due to the inherent ambiguity of anomalies, diverse environmental factors, the intricate nature of human activities, and the absence of adequate datasets. Further, the spatial localization of the video anomalies (video anomaly localization) after the temporal localization of the video anomalies (video anomaly detection) is also a complex task. Video anomaly localization is essential for pinpointing the anomalous event or object in the spatial domain. Hence, the intelligent video surveillance system must have video anomaly detection and localization as key functionalities. However, the state-of-the-art lacks a dedicated survey of video anomaly localization. Hence, this article comprehensively surveys the cutting-edge approaches for video anomaly localization, associated threshold selection strategies, publicly available datasets, performance evaluation criteria, and open trending research challenges with potential solution strategies.
  • An Ensemble Deep Learning Model for Oral Squamous Cell Carcinoma Detection Using Histopathological Image Analysis

    Das M., Dash R., Kumar Mishra S., Kumar Dalai A.

    Article, IEEE Access, 2024, DOI Link

    Deep learning approaches for medical image analysis are widely applied for the recognition and classification of different kinds of cancer. In this study, histopathological images of oral cells are analyzed for the automated recognition of Oral squamous cell carcinoma (OSCC) using the proposed framework. The suggested model applies transfer learning and ensemble learning in two phases. In the 1st phase, a few Convolutional neural network (CNN) models are considered through transfer learning applications for OSCC detection. In the 2nd phase, the ensemble model is constructed considering the best two pre-trained CNNs from the 1st phase. The proposed classifier is compared with leading-edge models like Alexnet, Resnet50, Resnet101, Inception net, Xception net, and InceptionresnetV2. Results are analyzed to demonstrate the effectiveness of the suggested framework. A three-phase comparative analysis is considered. Firstly, various metrics including accuracy, recall, F-score, and precision are evaluated. Secondly, a graphical analysis using a loss and accuracy graph is performed. Lastly, the accuracy of the proposed classifier is compared with that of other models from existing literature. Following the three-stage performance evaluation, the proposed ensemble classifier exhibits enhanced performance with an accuracy of 97.88%.
  • Comparative Evaluation of Optimization Techniques for Industrial Wireless Sensor Network Hello Flood Attack Mitigation

    Srinivas S., Tejaswi S., Mishra S.K.

    Conference paper, Proceedings - 2024 3rd International Conference on Computational Modelling, Simulation and Optimization, ICCMSO 2024, 2024, DOI Link

    Protecting Industrial Wireless Sensor Networks (IWSNs) means ensuring that crucial industrial processes remain stable and intact. In order to mitigate the 'Hello Flood Attack' in IWSNs, this paper compares three optimization heuristic techniques: Genetic Algorithm (GA), Simulated Annealing (SA), and Particle Swarm Optimization (PSO). GA evolves candidate solutions, SA iteratively refines the communication setup, and PSO tunes parameters to improve robustness. The study looks into how well each optimization technique enhances network resilience and protects against the negative effects of Hello Flood Attacks. A benchmark scenario is also included for comparison. These results offer valuable information for the development of safe, secure IWSNs by pointing out the benefits and drawbacks of these techniques.
  • Predictive VM Consolidation for Latency Sensitive Tasks in Heterogeneous Cloud

    Kumar Swain C., Routray P., Kumar Mishra S., Alwabel A.

    Conference paper, Lecture Notes in Networks and Systems, 2023, DOI Link

    Virtualization technology plays a crucial role in reducing cost in a cloud environment. An efficient virtual machine (VM) packing method focuses on compacting hosts so that most of their resources are used when serving user requests. Here our aim is to reduce the power requirements of a cloud system by minimizing the number of hosts. We propose a predictive scheduling approach that considers the deadline of a task request and makes flexible decisions to allocate tasks to hosts. Experimental results show that the proposed approach can save around 5 to 10% power consumption compared with standard VM packing methods in most scenarios. Even when the total power consumption remains the same as that of standard methods in some scenarios, the average number of hosts required in the cloud environment is reduced, thereby reducing cost.
  • Blockchain-Based Medical Report Management and Distribution System

    Sahoo S.K., Mishra S.K., Guru A.

    Book chapter, 6G Enabled Fog Computing in IoT: Applications and Opportunities, 2023, DOI Link

    Generally, hospital operations involve a large number of medical reports, which are a crucial part of operations. As a result of integrating pathology and other testing labs within the medical center, hospitals today have improved their business operations while also achieving better and faster diagnoses. Many different strategies are used in hospital operations, from patient admission and control to hospital cost management. This raises operational complexity and makes it more challenging to manage, especially when combined with newly introduced services like pathology and pharmaceutical control. In order to overcome this issue, we employ the Hyperledger concept and blockchain technology to retain the data of each individual transaction with 100% authenticity. Instead of using a centralized server, all transactions are encrypted and kept as blocks, which are then authenticated within a network of computers. Additionally, we employ the Hyperledger concept to associate and store all associated medical files for each transaction with a date stamp. This makes it possible to confirm the legitimacy of each document and identify any changes made by someone else. Each patient's medical record is personal, and every patient has his or her own privacy. The reports must be guarded from attackers who may make changes to medical reports, while the data is stored without losing any content, which plays an important role in protecting a life. For the reports, we use a blockchain method that splits the information into modules; using this method, attackers cannot obtain the complete information. To bring forward a secure, safe, efficient, and legitimate medical report management system is the primary goal of this project.
  • LiDAR-based Building Damage Detection in Edge-Cloud Continuum

    Mishra S.K., Sanisetty M.L., Shaik A.Z., Thotakura S.L., Aluru S.L., Puthal D.

    Conference paper, 2023 IEEE International Conference on Dependable, Autonomic and Secure Computing, International Conference on Pervasive Intelligence and Computing, International Conference on Cloud and Big Data Computing, International Conference on Cyber Science and Technology Congress, DASC/PiCom/CBDCom/CyberSciTech 2023, 2023, DOI Link

    In recent years, natural disasters such as earthquakes and hurricanes have caused significant damage to buildings and infrastructure worldwide. As a result, there has been an increasing demand for efficient and accurate methods of assessing the extent of building damage to facilitate effective recovery efforts. One emerging technology that shows great promise in this area is Light Detection and Ranging (LiDAR). Therefore, this paper proposes a novel detection framework utilizing textural feature extraction strategies for LiDAR-based building damage detection. LiDAR, a remote sensing technology, has the ability to create detailed maps of buildings and other infrastructure, allowing for precise identification and measurement of damage caused by natural disasters. Integration of the popular edge-cloud continuum paradigm extends the cloud's capabilities to the edge of the network, enabling more effective post-disaster recovery efforts. Smart LiDAR sensors pre-process the captured data and send it to the nearest edge device for further processing. A machine learning algorithm, K-means clustering, is used here to classify buildings into damaged and undamaged classes by analyzing the extracted textural features. The scheme can detect various types of building damage. The cloud server is utilized to store the processed maps. The integration of the Edge-Cloud Continuum (ECC) has added more value by reducing the network usage and latency of the LiDAR-based building damage detection system. ECC enables processing and analysis of data at the point of origin as well as large-scale data processing and storage in cloud-based systems. This proposed framework has shown promising results in preliminary experiments and has the potential to revolutionize post-disaster recovery efforts by providing efficient building damage maps.
  • CS-Based Energy-Efficient Service Allocation in Cloud

    Kumar Mishra S., Kumar Sahoo S., Kumar Swain C., Guru A., Kumar Sethy P., Sahoo B.

    Conference paper, Lecture Notes in Networks and Systems, 2023, DOI Link

    Nowadays, cloud computing is growing rapidly and has been developed as an adequate and adaptable paradigm for solving large-scale problems. Since the number of cloud users and their requests are increasing fast, the loads on the cloud data center may be under-loaded or over-loaded. These circumstances induce various problems, such as high response time and energy consumption. High energy consumption in the cloud data center has drastic negative impacts on the environment. The literature shows that scheduling plays a significant role in the reduction of energy consumption. In the recent decade, this problem has attracted huge interest among researchers, and several solutions have been proposed. Energy-efficient service (task) allocation with a high Customer Satisfaction (CS) constraint has become a critical problem in the cloud. In this paper, a high-CS-based energy-efficient service allocation framework has been designed, which optimizes the energy consumption as well as the CS level in the cloud. The proposed algorithm is simulated in the CloudSim simulator and compared with some standard algorithms. The simulation results favor the proposed algorithm.
  • Automatic Detection of Oral Squamous Cell Carcinoma from Histopathological Images of Oral Mucosa Using Deep Convolutional Neural Network

    Das M., Dash R., Mishra S.K.

    Article, International Journal of Environmental Research and Public Health, 2023, DOI Link

    Worldwide, oral cancer is the sixth most common type of cancer. India is in 2nd position, with the highest number of oral cancer patients, contributing almost one-third of the total count. Among several types of oral cancer, the most common and dominant one is oral squamous cell carcinoma (OSCC). The major reasons for oral cancer are tobacco consumption, excessive alcohol consumption, unhygienic mouth condition, betel quid eating, viral infection (namely human papillomavirus), etc. The early detection of oral cancer of type OSCC, in its preliminary stage, gives more chances for better treatment and proper therapy. In this paper, the authors propose a convolutional neural network model for the automatic and early detection of OSCC; for experimental purposes, histopathological oral cancer images are considered. The proposed model is compared and analyzed with state-of-the-art deep learning models like VGG16, VGG19, Alexnet, ResNet50, ResNet101, MobileNet and Inception Net. The proposed model achieved a cross-validation accuracy of 97.82%, which indicates the suitability of the proposed approach for the automatic classification of oral cancer data.
  • A Hybrid Encryption Approach using DNA-Based Shift Protected Algorithm and AES for Edge-Cloud System Security

    Mishra S.K., Cherukuri C., Dheeraj P.V., Puthal D.

    Conference paper, OCIT 2023 - 21st International Conference on Information Technology, Proceedings, 2023, DOI Link

    Modern applications, such as smart cities, connected homes, and crisis management systems, have driven the emergence of the edge-cloud continuum, which enables data processing to occur closer to the source, reducing latency and enhancing data processing efficiency. However, due to the distributed nature of edge nodes and cloud environments, data security remains a critical concern. Malicious actors may intercept or eavesdrop on communication channels between edge devices and the cloud. DNA computing, a groundbreaking security concept inspired by biological DNA, offers a promising solution to address these security challenges. This paper proposes a DNA-based cryptographic method for secure data transfer and communication in edge-cloud computing environments. The research also examines various data security threats in the edge-cloud continuum and explores potential countermeasures.
  • A comparative study of different scheduling approaches for splittable latency sensitive tasks in Fog-Cloud environment

    Sandeep K.S., Koundinya C.A., Prabhas A.V., Swain C.K., Mishra S.K.

    Conference paper, 2023 2nd International Conference on Ambient Intelligence in Health Care, ICAIHC 2023, 2023, DOI Link

    IoT has revolutionized the way we live and work by connecting different devices through the Internet. At present, the number of IoT devices is increasing rapidly due to advances in technology and the growing comforts of life. Many people now use IoT devices regularly, and it is estimated that by the end of 2030 there will be 30 billion users of IoT applications. These devices send data to the cloud for processing. Due to the distance of the cloud from the IoT devices, application requests receive delayed service responses. So, to handle latency-sensitive applications, we require micro cloud services like fog servers deployed near the data generation points. The fog layer lies between the IoT devices and the cloud and acts as an intermediate layer. This helps in reducing the latency of tasks and provides better performance. As the number of IoT applications keeps increasing, the resources available with the fog nodes may not handle the upcoming demands. To overcome this, we use splittable methods to allocate tasks to fog/cloud nodes more compactly. If a task can be split into different modules before its deadline, we split the given task, allocate the modules to different fog nodes/servers, and then collect the results back from the fog nodes/servers and merge them into a single unit. With the help of this method, we can increase the performance of the system.
  • Latency Aware – Resource Planning in Edge Using Fuzzy Logic

    Sahoo S.K., Dash A., Vemula D.R., Swain C.K., Mishra S.K.

    Conference paper, 2023 2nd International Conference on Ambient Intelligence in Health Care, ICAIHC 2023, 2023, DOI Link

    As a potential paradigm for enabling effective and low-latency computation at the network's edge, edge computing has recently come into the spotlight. In edge computing environments, resource allocation is essential for ensuring the best possible resource utilization while still satisfying application requirements. Traditional resource allocation algorithms, however, struggle to effectively capture the uncertainties and ambiguity associated with resource availability and application needs because of the dynamic and varied nature of edge environments. This research offers a fuzzy logic-based method for planning to allocate resources in edge computing. Fuzzy logic offers a flexible and understandable framework for modeling and reasoning with imperfect and ambiguous data. The suggested method offers a more reliable and adaptable resource allocation system that can successfully address the uncertainties present in edge computing by utilizing fuzzy logic. The resource allocation process incorporates fuzzy membership functions to capture the vagueness of resource availability and application requirements. Fuzzy rules are defined to map the linguistic variables representing resource availability, application demands, and performance objectives to appropriate resource allocation decisions. The fuzzy inference engine then utilizes these rules to make intelligent decisions regarding resource allocation, considering the fuzzy inputs and the system's predefined objectives.
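
    A small self-contained sketch of the ingredients the abstract mentions: shoulder-shaped membership functions for resource availability and application demand, two Mamdani-style rules, and centroid defuzzification; the linguistic variables and rule base are illustrative placeholders, not the paper's.

        import numpy as np

        def ramp_up(x, a, b):
            """Membership rising from 0 at a to 1 at b (right shoulder)."""
            return np.clip((x - a) / (b - a), 0.0, 1.0)

        def ramp_down(x, a, b):
            """Membership falling from 1 at a to 0 at b (left shoulder)."""
            return np.clip((b - x) / (b - a), 0.0, 1.0)

        def allocate_share(availability: float, demand: float) -> float:
            """Crisp resource share in [0, 1] derived from two Mamdani-style fuzzy rules."""
            high_avail, low_avail = ramp_up(availability, 0.3, 0.7), ramp_down(availability, 0.3, 0.7)
            high_dem, low_dem = ramp_up(demand, 0.3, 0.7), ramp_down(demand, 0.3, 0.7)

            # Rule 1: IF availability is high AND demand is high THEN share is large
            # Rule 2: IF availability is low  OR  demand is low  THEN share is small
            w_large = min(high_avail, high_dem)
            w_small = max(low_avail, low_dem)

            share = np.linspace(0.0, 1.0, 101)                        # output universe
            agg = np.maximum(np.minimum(w_large, ramp_up(share, 0.4, 0.9)),
                             np.minimum(w_small, ramp_down(share, 0.1, 0.6)))
            return float((share * agg).sum() / (agg.sum() + 1e-9))    # centroid defuzzification

        print(round(allocate_share(availability=0.8, demand=0.9), 3))
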
  • A Smart Logistic Classification Method for Remote Sensed Image Land Cover Data

    Sahu M., Dash R., Mishra S.K., Puthal D.

    Article, SN Computer Science, 2022, DOI Link

    A smart system integrates appliances for sensing, acquisition, classification, and management with regard to interpreting and analyzing a situation to generate decisions depending on the available data in a predictive way. Remotely sensed images are an essential tool for evaluating and analyzing land cover dynamics, particularly forest-cover change. The remote data gathered for this operation from different sensors are of high spatial resolution and thus suffer from high interclass and low intraclass vulnerability issues, which retard classification accuracy. To address this problem, in this research analysis, a smart logistic fusion-based supervised multi-class classification (SLFSMC) model is proposed to obtain a thematic map of different land cover types and thereby perform smart actions. In the pre-processing stage of the proposed work, a pair of closing and opening morphological operations is employed to produce the fused image, exploiting the contextual information of adjacent pixels. Thereafter, the quality of the fused image is assessed on four fusion metrics. In the second phase, this fused image is taken as input to the proposed classifiers. Afterward, a multi-class classification model is designed based on the supervised learning concept to generate maps for analyzing and exporting decisions based on any critical climatic situation. In our paper, a statistical tool, the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS), is used to compare the performance of the proposed SLFSMC with a few conventional classification techniques such as the Naïve Bayes classifier, decision tree, support vector machine, and k-nearest neighbors. We have applied the proposed SLFSMC system to some regions of Victoria, a state of Australia, affected by deforestation caused by different reasons.
  • Crop Recommendation System Using Support Vector Machine Considering Indian Dataset

    Mishra T.K., Mishra S.K., Sai K.J., Peddi S., Surusomayajula M.

    Conference paper, Lecture Notes in Networks and Systems, 2022, DOI Link

    Agriculture has long been considered a major profession for the livelihoods of Indians. Still, agriculture is often not profitable, and many farmers take drastic steps because they cannot survive the burden of loans. So, one area where there is still large scope for development is agriculture. In comparison with other countries, India has the highest production rate in agriculture. However, most agricultural fields are still underdeveloped due to the lack of deployment of ecosystem control technologies. Agriculture, when combined with technology, can bring the finest results. Crop yield depends on multiple climatic conditions such as air temperature, soil temperature, humidity, and soil moisture. In general, farmers depend on self-monitoring and experience for harvesting fields. Scarcity of water is a major issue in today’s life and affects people worldwide, so water is also a vital component of crop yield; here we consider rainfall instead of direct water measurements. Predicting crop selection/yield in advance of harvest would help policymakers and farmers take appropriate measures for farming, marketing, and storage. Thus, in this paper we propose crop selection using machine learning techniques, namely support vector machine (SVM) and polynomial regression. This model will help farmers to know the yield of their crop before cultivating the agricultural field and thus help them make appropriate decisions. It attempts to solve the issue by building a prototype of an interactive prediction system. Accurate yield prediction requires understanding the functional relationship between yield and these parameters, because along with all the advances in the machines and technologies used in farming, useful and accurate information also plays a significant role. In this paper, we have simulated the SVM and polynomial regression techniques to predict which crop can yield better profit. Both models are simulated comprehensively on the Indian dataset, and an analytical report is presented.
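
    A hedged scikit-learn sketch of the SVM part of the described approach; the file crop_data.csv, its soil/climate feature columns, and the 'crop' label column are hypothetical stand-ins for the Indian dataset used in the paper, and the polynomial-regression yield model is not shown.

        import pandas as pd
        from sklearn.model_selection import train_test_split
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC
        from sklearn.metrics import classification_report

        # Hypothetical CSV with soil/climate features and a 'crop' label column
        df = pd.read_csv("crop_data.csv")
        X = df.drop(columns=["crop"])
        y = df["crop"]

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=1)

        model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10, gamma="scale"))
        model.fit(X_tr, y_tr)
        print(classification_report(y_te, model.predict(X_te)))
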
  • Combination of Reduction Detection Using TOPSIS for Gene Expression Data Analysis

    Tripathy J., Dash R., Pattanayak B.K., Mishra S.K., Mishra T.K., Puthal D.

    Article, Big Data and Cognitive Computing, 2022, DOI Link

    View abstract ⏷

    In high-dimensional data analysis, Feature Selection (FS) is one of the most fundamental issues in machine learning and requires the attention of researchers. These datasets are characterized by a huge feature space, out of which only a few features are significant for analysis. Thus, significant feature extraction is crucial. There are various techniques available for feature selection; among them, filter techniques are significant in this community, as they can be used with any type of learning algorithm, drastically lower the running time of optimization algorithms, and improve the performance of the model. Furthermore, the applicability of a filter approach depends on the characteristics of the dataset as well as on the machine learning model. Thus, to avoid these issues, in this research a combination of feature reduction (CFR) is considered by designing a pipeline of filter approaches for high-dimensional microarray data classification. Considering four filter approaches, sixteen combinations of pipelines are generated. The feature subset is reduced at different levels, and ultimately, the significant feature set is evaluated. The pipelined filter techniques are Correlation-Based Feature Selection (CBFS), Chi-Square Test (CST), Information Gain (InG), and Relief Feature Selection (RFS), and the classification techniques are Decision Tree (DT), Logistic Regression (LR), Random Forest (RF), and k-Nearest Neighbor (k-NN). The performance of CFR depends highly on the datasets as well as on the classifiers. Thereafter, the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) method is used for ranking all reduction combinations and identifying the superior filter combination among them.
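    A minimal sketch of the pipelined-filter idea: two filter stages reduce the feature set before classification. Here chi-square is followed by mutual information (a common stand-in for Information Gain) and a k-NN classifier on a synthetic dataset; the stage order, k values, and data are assumptions for illustration, not the paper's sixteen CFR combinations.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, chi2, mutual_info_classif
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import cross_val_score

# Synthetic high-dimensional data standing in for a microarray dataset.
X, y = make_classification(n_samples=200, n_features=500, n_informative=20, random_state=0)

pipe = Pipeline([
    ("scale", MinMaxScaler()),                            # chi2 needs non-negative inputs
    ("filter1", SelectKBest(chi2, k=100)),                # first reduction level
    ("filter2", SelectKBest(mutual_info_classif, k=20)),  # second reduction level
    ("clf", KNeighborsClassifier(n_neighbors=5)),
])
print("CV accuracy:", cross_val_score(pipe, X, y, cv=5).mean())
```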
  • A Data Aggregation Approach Exploiting Spatial and Temporal Correlation among Sensor Data in Wireless Sensor Networks

    Dash L., Pattanayak B.K., Mishra S.K., Sahoo K.S., Jhanjhi N.Z., Baz M., Masud M.

    Article, Electronics (Switzerland), 2022, DOI Link

    View abstract ⏷

    Wireless sensor networks (WSNs) have various applications, including zone surveillance, environmental monitoring, and event tracking, where the operation mode is long term. WSNs are characterized by low-powered, battery-operated sensor devices with a finite source of energy. Due to the dense deployment of these devices, it is practically impossible to replace the batteries. The finite source of energy should be utilized in a meaningful way to maximize the overall network lifetime. In the space domain, there is a high correlation among sensor observations across the large volume of the sensor network topology. Consecutive observations constitute the temporal correlation, depending on the nature of the physical phenomenon sensed by the nodes. These spatio-temporal correlations can be efficiently utilized to achieve maximum savings in energy use. In this paper, we have proposed a Spatial and Temporal Correlation-based Data Redundancy Reduction (STCDRR) protocol which eliminates redundancy at the source level and aggregator level. The estimated performance score of the proposed algorithm is approximately 7.2, while the scores of existing algorithms such as KAB (K-means algorithm based on the ANOVA model and Bartlett test) and ED (Euclidean distance) are 5.2 and 0.5, respectively. This reflects that the STCDRR protocol can achieve a higher data compression rate and lower false-negative and false-positive rates. These results are valid for numeric data collected from a real data set; the experiment does not consider non-numeric values.
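    A minimal sketch of source-level temporal redundancy reduction of the kind described above: a node transmits a reading only when it deviates from the last transmitted value by more than a threshold. The threshold and sample values are illustrative assumptions, not the STCDRR protocol's exact rule.

```python
def temporal_suppress(readings, threshold=0.5):
    """Source-level temporal redundancy reduction (simplified).

    A sensor transmits a reading only when it deviates from the last
    transmitted value by more than `threshold`; otherwise the sink is
    assumed to reuse the previously received value.
    """
    transmitted = []
    last_sent = None
    for value in readings:
        if last_sent is None or abs(value - last_sent) > threshold:
            transmitted.append(value)
            last_sent = value
    return transmitted

samples = [20.1, 20.2, 20.1, 20.9, 21.0, 21.8, 21.7, 21.6]   # e.g. temperature readings
sent = temporal_suppress(samples, threshold=0.5)
print(f"sent {len(sent)}/{len(samples)} readings:", sent)
```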
  • Task Allocation in Containerized Cloud Computing Environment

    Akram Khan M., Kumar Mishra S., Kumari A., Sahoo B.

    Conference paper, ASSIC 2022 - Proceedings: International Conference on Advancements in Smart, Secure and Intelligent Computing, 2022, DOI Link

    View abstract ⏷

    Containerization technology makes use of operating system-level virtualization to pack an application together with its required libraries so that it runs isolated from other processes on the same host. The lightweight, easy deployment of containers has made them popular at many data centers. Containers have captured part of the virtual machine market and emerged as a lightweight technology offering better microservices support. Many organizations widely deploy container technology for handling the diverse and unexpected workloads of modern applications such as Edge/Fog computing, Big Data, and IoT, in either proprietary clusters or public and private cloud data centers. In the cloud computing environment, scheduling plays a pivotal role; likewise, in container technology, scheduling is critical for achieving optimum utilization of available resources. Designing an efficient scheduler is itself a challenging task. The challenges arise from various aspects, such as the diversity of computing resources, maintaining fairness among numerous tenants sharing resources as per their requirements, unexpected variation in resource demands, and heterogeneity of jobs. This survey provides a multi-perspective overview of container scheduling. Here, we have organized the container scheduling problem into four categories based on the type of optimization algorithm applied: linear programming modeling, heuristic, meta-heuristic, and machine learning/artificial intelligence-based mathematical models. Previous research has addressed either the placement of virtual machines onto physical machines or the placement of container instances onto physical machines, which leads to either underutilized or over-utilized PMs. In this paper, we combine both virtualization technologies, containers as well as VMs. The primary aim is to optimize resource utilization in terms of CPU time. We propose a meta-heuristic algorithm named Sorted Task-Based Allocation (TBA). Simulation results show that the proposed Sorted TBA algorithm performs better than the Random and Unsorted TBA algorithms.
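    A minimal sketch of one plausible reading of sorted task-based allocation: tasks are considered longest-first and each goes to the machine with the smallest estimated finish time. The task lengths and VM speeds are illustrative, and the greedy rule is an assumption rather than the paper's exact Sorted TBA algorithm.

```python
def sorted_tba(task_lengths, vm_mips):
    """Greedy, longest-task-first allocation (illustrative reading of Sorted TBA)."""
    ready = [0.0] * len(vm_mips)                 # accumulated busy time per VM/container slot
    plan = {}
    for tid, length in sorted(enumerate(task_lengths), key=lambda t: -t[1]):
        finish = [ready[v] + length / vm_mips[v] for v in range(len(vm_mips))]
        vm = finish.index(min(finish))           # VM that would finish this task earliest
        ready[vm] = finish[vm]
        plan[tid] = vm
    return plan, max(ready)                      # task-to-VM mapping and resulting makespan

plan, makespan = sorted_tba([4000, 12000, 7000, 2500, 9000], vm_mips=[1000, 2000])
print(plan, "makespan:", makespan)
```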
  • VM consolidation based on overload detection and VM selection policy

    Jena S., Sahu L.K., Mishra S.K., Sahoo B.

    Conference paper, Proceedings of the Confluence 2021: 11th International Conference on Cloud Computing, Data Science and Engineering, 2021, DOI Link

    View abstract ⏷

    Even though cloud computing has been a big boon to the ICT (Information and Communication Technology) industry, it faces high energy consumption and substantial CO2 emission. Due to the increase in demand for computational resources, it is now necessary and of utmost significance to improve the energy consumption of the cloud system. Virtual Machine (VM) consolidation is one of the powerful tools to improve energy efficiency, as it reduces the number of VM migrations by managing VMs from overloaded/underloaded hosts. Implementation of VM consolidation techniques leads to a decrease in hardware consumption, energy consumption, and data footprints, which leads to an increased Quality of Service (QoS). In this paper, an energy-aware VM selection algorithm is proposed along with an overload detection algorithm. The proposed algorithm is implemented in the CloudSim toolkit environment and analyzed based on different parameters like energy consumption, SLA violation, server shutdowns, and the number of VM migrations to assess the energy efficiency improvement. This modified approach exhibited better performance on all the parameters as compared to the existing algorithms.
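    A minimal sketch of the two building blocks named above: a static-threshold overload detector and a VM-selection rule for migration. The threshold value and the "smallest RAM among sufficient candidates" policy are illustrative assumptions, not the paper's exact algorithms.

```python
def is_overloaded(host_util, threshold=0.8):
    """Simple static-threshold overload detection on CPU utilization."""
    return host_util > threshold

def select_vm(vms, host_util, threshold=0.8):
    """Pick a VM to migrate from an overloaded host (illustrative policy).

    Among VMs whose removal would bring the host below the threshold,
    choose the one with the smallest RAM (a proxy for migration time);
    if none qualifies, fall back to the largest CPU consumer.
    """
    candidates = [v for v in vms if host_util - v["cpu"] <= threshold]
    if candidates:
        return min(candidates, key=lambda v: v["ram"])
    return max(vms, key=lambda v: v["cpu"])

vms = [{"id": 1, "cpu": 0.25, "ram": 2.0},
       {"id": 2, "cpu": 0.40, "ram": 8.0},
       {"id": 3, "cpu": 0.20, "ram": 1.0}]
util = sum(v["cpu"] for v in vms)                # 0.85 -> host is overloaded
if is_overloaded(util):
    print("migrate VM", select_vm(vms, util)["id"])
```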
  • Analysis of Machine Learning Technologies for the Detection of Diabetic Retinopathy

    Mohanty B.C.S., Mishra S., Mishra S.K.

    Book chapter, Machine Learning for Healthcare Applications, 2021, DOI Link

    View abstract ⏷

    In today's world, disease diagnosis plays a vital role in the area of medical imaging. Medical imaging is the method and procedure of making visual descriptions of the interior of a body for clinical investigation and clinical intervention, as well as visual depiction of the function of some organs or tissues. Medical imaging also deals with disease detection, and we can get a better view of detecting disease by using machine learning in medical imaging. So what is Machine Learning (ML)? ML is an application of artificial intelligence (AI) that gives the system the capacity to learn and develop itself. It mainly focuses on the development of computer programs that can access data and use it for themselves. In this chapter we focus on detecting diabetic retinopathy using machine learning. Diabetes is a disease that results in too much sugar in the blood, and diabetic retinopathy is one of its common complications. Diabetic retinopathy is an eye disease brought about by the complications of diabetes, and we ought to recognize it early for effective treatment. As the disease advances, the sight of a patient may begin to deteriorate. Two groups are recognized, namely non-proliferative diabetic retinopathy and proliferative diabetic retinopathy. We should detect it as soon as possible, as it can cause permanent loss of vision. By using ML in medical imaging we can detect it much faster and more accurately. In this chapter we analyze different ML technologies, algorithms, and models to diagnose diabetic retinopathy in an efficient manner to support the healthcare system.
  • Facial expression recognition system (fers): A survey

    Mishra S., Gupta R., Mishra S.K.

    Conference paper, Smart Innovation, Systems and Technologies, 2021, DOI Link

    View abstract ⏷

    Human facial expressions and emotions are considered as the fastest way of the communication medium for expressing thoughts. The ability to identify the emotional states of people surrounding us is an essential component of natural communication. Facial expression and emotion detector can be used to know whether a person is sad, happy, angry, and so on. We can better understand the thoughts and ideas of a person. This paper briefly explores the idea of recognizing the computerized facial expression detection system. First, we have discussed an overview of the facial expression recognition system (FERS). Also, we have presented a glimpse of current technologies that are used for the detection of FERS. A comparative analysis of existing methodologies is also presented in this paper. It provides the basic information and general understanding of up-to-date state-of-the-art studies; also, experienced researchers can look productive directions for future work.
  • Crop Recommendation System using KNN and Random Forest considering Indian Data set

    Mishra T.K., Mishra S.K., Sai K.J., Alekhya B.S., Nishith A.R.

    Conference paper, Proceedings - 2021 19th OITS International Conference on Information Technology, OCIT 2021, 2021, DOI Link

    View abstract ⏷

    Agriculture plays a crucial role in the growth of the country's economy. In comparison to other countries, India has the highest production rate in agriculture. Agriculture, when combined with technology, can bring the finest results. Crop prediction is a highly complex task determined by multiple factors such as nitrogen, phosphorus, and potassium content, rainfall, temperature, humidity, and pH level. Predicting the crop in advance would help policymakers and farmers take appropriate measures for farming, marketing, and storage. Thus, in this paper we propose crop selection using machine learning techniques such as K-Nearest Neighbour (KNN) and Random Forest. Both models are simulated comprehensively on an Indian dataset, and an analytical report is presented. This model will help farmers know the type of crop before cultivating the agricultural field and thus help them make appropriate decisions.
  • A Static Approach for Access Control with an Application-Derived Intrusion System

    Chattopadhyay S., Mishra S., Mishra S.K.

    Conference paper, Smart Innovation, Systems and Technologies, 2021, DOI Link

    View abstract ⏷

    In the era of cyberspace, enforcing an Intrusion Detection System (IDS) and firewall on a system is a common practice among network administrators and engineers. But, with time, just implementing an IDS and firewall is not enough to secure systems, especially with the present trend of new malware attacks. It is quite easy to victimize a machine, even with IDS and firewalls enforced on the network, by uploading shells in the form of pdf, jpg, txt, etc. Since a machine can easily be victimized without much effort, we probe a new approach to overcome this anomaly. Understandably, with the increasing demand for IoT devices in the market, safeguarding these devices is also a big challenge. Motivated by this problem, we perform inspections to maintain stability and functionality by adding code that allows the application to keep track of its operating constraints during an attack. Hence, against this background, we discuss intrusion detection systems, firewalls, and their applicability. Further, we identify open challenges in this direction.
  • A real-time sentiments analysis system using twitter data

    Dave A., Bharti S., Patel S., Mishra S.K.

    Conference paper, Smart Innovation, Systems and Technologies, 2021, DOI Link

    View abstract ⏷

    As social media platforms become the go-to outlet for knee-jerk reactions to events by the current populace, it has become extremely important for event managers, celebrities, and organizations to constantly monitor their perceived social image online. This becomes especially difficult during key periods of heightened activity, like events, announcements, etc., as the rate at which tweets are posted is much higher than what a human can read or comprehend. In this paper, we exploit existing sentiment analysis techniques to develop a real-time sentiment analysis system that provides real-time sentiments of the audience on the micro-blogging site Twitter toward an event, organization, or person. This system serves as a feedback mechanism helping users understand the perceived image of the event/organization. This feedback, if provided in a timely manner, can be used to improve the situation at hand or act as positive reinforcement for the team. In today's world, neglecting social media can prove detrimental to the success of an event or organization. We analyze two different events from two separate domains to understand and demonstrate the benefits of our system.
  • Energy-efficient clustering with rotational supporter in wsn

    Parida P., Sahu B., Parida A.K., Mishra S.K.

    Conference paper, Smart Innovation, Systems and Technologies, 2021, DOI Link

    View abstract ⏷

    The wireless sensor network is an evergreen field of research, and sensors are used everywhere. Since the sensors are small in size and have a small amount of initial energy, energy saving becomes highly important and challenging. Wherever we deploy these sensors, they may or may not be accessible all the time. Hence, they should be operated with a suitable algorithm to utilize energy efficiently. We have proposed an energy-saving algorithm that reduces the overheads of the cluster head (CH). In order to assist the CH, an assistant called the supporting CH (SCH) is selected. Generally, this responsibility is rotational, and most of the nodes get a chance to serve as CH so that the energy utilization is uniform. Through the proposed algorithm, the lifetime of the network is increased. This proposed algorithm is simulated using the NS3 simulator and demonstrates energy-efficient clustering and increased lifetime as compared to other algorithms that do not use an SCH.
  • Energy-aware task allocation for multi-cloud networks

    Mishra S.K., Mishra S., Alsayat A., Jhanjhi N.Z., Humayun M., Sahoo K.S., Luhach A.K.

    Article, IEEE Access, 2020, DOI Link

    View abstract ⏷

    In recent years, the growth rate of Cloud computing technology is increasing exponentially, mainly for its extraordinary services with expanding computation power, the possibility of massive storage, and all other services with the maintained quality of services (QoSs). The task allocation is one of the best solutions to improve different performance parameters in the cloud, but when multiple heterogeneous clouds come into the picture, the allocation problem becomes more challenging. This research work proposed a resource-based task allocation algorithm. The same is implemented and analyzed to understand the improved performance of the heterogeneous multi-cloud network. The proposed task allocation algorithm (Energy-aware Task Allocation in Multi-Cloud Networks (ETAMCN)) minimizes the overall energy consumption and also reduces the makespan. The results show that the makespan is approximately overlapped for different tasks and does not show a significant difference. However, the average energy consumption improved through ETAMCN is approximately 14%, 6.3%, and 2.8% in opposed to the random allocation algorithm, Cloud Z-Score Normalization (CZSN) algorithm, and multi-objective scheduling algorithm with Fuzzy resource utilization (FR-MOS), respectively. An observation of the average SLA-violation of ETAMCN for different scenarios is performed.
  • Autonomic cloud resource provisioning and scheduling using meta-heuristic algorithm

    Kumar M., Sharma S.C., Goel S., Mishra S.K., Husain A.

    Article, Neural Computing and Applications, 2020, DOI Link

    View abstract ⏷

    Resource provisioning and scheduling is a prominent problem due to the heterogeneity as well as the dispersion of cloud resources. Cloud service providers are building more and more datacenters due to the demand for high computational power, which is a serious threat to the environment in terms of energy requirements. To overcome these issues, we need an efficient meta-heuristic technique that allocates applications among the virtual machines fairly and optimizes the quality of service (QoS) parameters to meet the end users' objectives. Binary particle swarm optimization (BPSO) is used to solve real-world discrete optimization problems, but simple BPSO does not provide an optimal solution due to the improper behavior of its transfer function. To overcome this problem, we have modified the transfer function of binary PSO so that it provides exploration and exploitation capability in a better way and optimizes various QoS parameters such as makespan time, energy consumption, and execution cost. The computational results demonstrate that the modified transfer function-based BPSO algorithm is more efficient and outperforms other baseline algorithms over various synthetic datasets.
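    A minimal sketch of the role the transfer function plays in BPSO: it maps a real-valued velocity to a bit-update probability. The standard S-shaped sigmoid is contrasted with a V-shaped alternative, a commonly used modification shown here only for illustration; it is not necessarily the exact function proposed in the paper.

```python
import math, random

def sigmoid_transfer(v):
    """S-shaped transfer used by standard BPSO: probability that the bit becomes 1."""
    return 1.0 / (1.0 + math.exp(-v))

def v_shaped_transfer(v):
    """A V-shaped alternative: probability of flipping the current bit."""
    return abs(math.tanh(v))

def next_bit_sigmoid(velocity):
    return 1 if random.random() < sigmoid_transfer(velocity) else 0

def next_bit_vshaped(bit, velocity):
    return 1 - bit if random.random() < v_shaped_transfer(velocity) else bit

random.seed(1)
for v in (-2.0, 0.0, 2.0):
    print(f"v={v:+.1f}  P(bit=1)={sigmoid_transfer(v):.2f}  P(flip)={v_shaped_transfer(v):.2f}")
```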
  • Leukemia Diagnosis Based on Machine Learning Algorithms

    Patil Babaso S., Mishra S.K., Junnarkar A.

    Conference paper, 2020 IEEE International Conference for Innovation in Technology, INOCON 2020, 2020, DOI Link

    View abstract ⏷

    Leukemia is brought about by the rapid production of abnormal white blood cells. The high number of abnormal white blood cells are not able to fight infection, and they impair the ability of the bone marrow to produce red blood cells and platelets. Machine learning techniques are widely used in the diagnosis and classification of different leukemia types in patients. In this paper, we have described the different machine learning algorithms, such as Support Vector Machines, k-Nearest Neighbour, Neural Networks, Naïve Bayes, and Deep Learning algorithms, which are used to classify leukemia into its sub-types, and presented a comparative study of these algorithms.
  • Energy-Efficient Service Allocation Techniques in Cloud: A Survey

    Mishra S.K., Sahoo S., Sahoo B., Jena S.K.

    Review, IETE Technical Review (Institution of Electronics and Telecommunication Engineers, India), 2020, DOI Link

    View abstract ⏷

    The demand for cloud computing infrastructure is increasing day by day to meet the requirement of small and medium enterprises. The data center-centric cloud technology has a high share of energy consumption from the IT-industry. The amount of energy consumption in a data center depends on the allocation of user service requests to virtual machines running on the different host. Minimization of energy consumption in the data center is a significant issue and addressed by optimal allocation of cloud resources. In this paper, we have discussed how service allocation strategies have been used to optimize the energy consumption in a cloud system. A generalized system architecture is presented based on which we define the service allocation problem and energy model. Further, we present the taxonomy of various energy-efficient resource allocation techniques found in the literature. In the end, various research challenges related to the energy-efficient service allocation in cloud are discussed.
  • Token based data security in inter cluster communication in wireless sensor network

    Sahu B., Parida P., Parida A.K., Mishra S.K.

    Conference paper, 2020 International Conference on Computer Science, Engineering and Applications, ICCSEA 2020, 2020, DOI Link

    View abstract ⏷

    In this paper, a data security operation is performed for inter-cluster communication. It is based on token-based identification of the clusters. The sender cluster checks the identification of the receiver cluster before any communication is initiated. Each cluster is represented by its head node, and the head nodes are assigned a token by the base station. The token number is called the identification number (IN) of the head node and hence of the cluster. The proposed idea is simulated using the NS3 simulator and evaluated with respect to security, and the performance is compared with other algorithms.
  • Load balancing in cloud computing: A big picture

    Mishra S.K., Sahoo B., Parida P.P.

    Review, Journal of King Saud University - Computer and Information Sciences, 2020, DOI Link

    View abstract ⏷

    Scheduling or the allocation of user requests (tasks) in the cloud environment is an NP-hard optimization problem. According to the cloud infrastructure and the user requests, the cloud system carries some load (which may be underloaded, overloaded, or balanced). Underloaded and overloaded situations cause different system failures concerning power consumption, execution time, machine failure, etc. Therefore, load balancing is required to overcome all the mentioned problems. This balancing of tasks (which may be dependent or independent) on virtual machines (VMs) is a significant aspect of task scheduling in clouds. There are various types of load in the cloud network, such as memory load, computation (CPU) load, network load, etc. Load balancing is the mechanism of detecting overloaded and underloaded nodes and then balancing the load among them. Researchers have proposed various load balancing approaches in cloud computing to optimize different performance parameters. We have presented a taxonomy for the load balancing algorithms in the cloud. A brief explanation of the performance parameters considered in the literature and their effects is presented in this paper. To analyze the performance of heuristic-based algorithms, the simulation is carried out in the CloudSim simulator and the results are presented in detail.
  • Allocation of energy-efficient task in cloud using DVFS

    Mishra S.K., Khan M.A., Sahoo S., Sahoo B.

    Article, International Journal of Computational Science and Engineering, 2019, DOI Link

    View abstract ⏷

    Nowadays, the expanding computational capabilities of cloud systems rely on the minimisation of the consumed power to make them sustainable and economically productive. Power management of cloud data centres has received great attention from industry and academia, as they consume a large amount of energy, which increases the operational cost. One of the core approaches for the conservation of energy in the cloud data centre is task scheduling. This task allocation in a heterogeneous environment is a well-known NP-hard problem, for which researchers have proposed various heuristic techniques. In this paper, a technique is proposed based on dynamic voltage frequency scaling (DVFS) for optimising the energy consumption in the cloud environment. The basic idea is to address the trade-off between energy consumption and makespan of the system. Here, we formally introduce a model that includes various subsystems and assess the implementation of the algorithm in a heterogeneous environment.
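    A minimal sketch of the relation DVFS exploits: dynamic CMOS power scales as C·V²·f, so running at a lower voltage/frequency pair lengthens execution time but can reduce the energy spent on a task. The capacitance, voltage/frequency levels, and cycle count below are illustrative constants, not values from the paper.

```python
def dynamic_power(capacitance, voltage, frequency):
    """CMOS dynamic power: P = C * V^2 * f (the relation DVFS exploits)."""
    return capacitance * voltage ** 2 * frequency

def task_energy(cycles, capacitance, voltage, frequency):
    """Energy = power * execution time, with time = cycles / frequency."""
    time = cycles / frequency
    return dynamic_power(capacitance, voltage, frequency) * time, time

# Illustrative (voltage, frequency) operating points for one VM/core.
levels = [(1.2, 2.0e9), (1.0, 1.5e9), (0.8, 1.0e9)]
cycles, cap = 3.0e9, 1.0e-9
for volt, freq in levels:
    energy, time = task_energy(cycles, cap, volt, freq)
    print(f"f={freq/1e9:.1f} GHz  V={volt} V  time={time:.2f} s  energy={energy:.2f} J")
```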
  • A secure VM consolidation in cloud using learning automata

    Mishra S.K., Sahoo B., Jena S.K.

    Book chapter, Advances in Intelligent Systems and Computing, 2019, DOI Link

    View abstract ⏷

    The cloud computing system is a progression of distributed systems that has been adopted worldwide, both scientifically and commercially. For optimal utilization of the cloud's potential power, effective and efficient algorithms are expected, which will select the best resources from available cloud resources for different applications. This allocation of user requests to the cloud resources can optimize several parameters like energy consumption, makespan, throughput, etc. In this paper, we have proposed a learning automata based algorithm to minimize the makespan of the cloud system and also to increase resource utilization while holding secured resource allocation. We have simulated our algorithm, ALOLA, with the help of the CloudSim simulator in a heterogeneous environment. During the comparison of the algorithm, we provide a finite set of tasks to the ALOLA algorithm once and estimate the makespan of the system. We have compared our proposed technique (ALOLA, with learning automata) against a random allocation algorithm without learning automata and show the system performance.
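    A minimal sketch of a learning automaton choosing among VMs: a linear reward-inaction (L_RI) scheme shifts the action probabilities toward VMs that give favorable responses (for example, short completion times). The reward rule, rates, and VM speeds are illustrative assumptions, not the exact ALOLA formulation.

```python
import random

class LearningAutomaton:
    """Linear reward-inaction (L_RI) automaton over a set of actions (VMs)."""

    def __init__(self, n_actions, reward_rate=0.1):
        self.p = [1.0 / n_actions] * n_actions   # action probabilities
        self.a = reward_rate

    def choose(self):
        return random.choices(range(len(self.p)), weights=self.p)[0]

    def reward(self, action):
        # Reinforce the rewarded action and scale the rest down (no update on penalty).
        for i in range(len(self.p)):
            if i == action:
                self.p[i] += self.a * (1.0 - self.p[i])
            else:
                self.p[i] *= (1.0 - self.a)

random.seed(0)
vm_speed = [1.0, 2.5, 1.5]                        # hypothetical relative VM speeds
la = LearningAutomaton(n_actions=3)
for _ in range(200):
    vm = la.choose()
    if random.random() < vm_speed[vm] / max(vm_speed):   # faster VM -> more likely rewarded
        la.reward(vm)
print([round(x, 2) for x in la.p])                # probability mass shifts toward VM 1
```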
  • Secure Big Data Computing in Cloud: An Overview

    Mishra S.K., Sahoo S., Sahoo B.

    Book chapter, Encyclopedia of Big Data Technologies, 2019, DOI Link

    View abstract ⏷

    Advancement in information technology with the rapid growth in all other areas like business, medical, engineering, and scientific research has resulted in a generation of huge data. Decisionmaking from a rapidly growing huge data is a challenging job in terms of management and processing of data, which is termed as big data computing. The big data computing demands voluminous storage and computing for data processing which is delivered to the user through cloud infrastructures. The complexity of the system reduces the security level which is a challenging task for the researchers. This paper elaborates the evolution of big data computing, security issues of big data computing in cloud, different solutions for providing better security level, and finally open technical challenges and future directions.
  • An Improved Approach for Sarcasm Detection Avoiding Null Tweets

    Bharti S.K., Babu K.S., Mishra S.K.

    Conference paper, Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2019, DOI Link

    View abstract ⏷

    Among the plethora of social media, Twitter has emerged as the favorite destination for researchers in recent times. Many researchers are inclined to work on Twitter due to the availability of massive tweets and its unique features like hashtags and short messages. In recent times, various studies have preferred the hashtags (#sarcasm and #sarcastic) to collect Twitter dataset for sarcasm detection. However, hashtag-based distant supervision suffers from the problem of the inclusion of null tweets in the datasets which can be considered as a critical one for sarcasm detection. In this article, an algorithm is proposed for automatic detection and filtration of null tweets in the Twitter data. Additionally, an algorithm to identify sarcastic tweets using context within a tweet is also proposed. This approach use dictionaries of handpicked hashtag words, emoticons as the context within a tweet. Finally, we deployed a rule-based algorithm to analyse the performance of the proposed approach. The proposed approach attains the accuracy of 97.3% (after filtering null tweets) and 83.13% (without filtering null tweets) using a rule-based approach. The attained results conclude that after elimination of null tweets, the performance of the proposed system improved significantly.
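    A minimal sketch of one way to filter "null tweets": strip hashtags, mentions, and URLs and discard tweets with almost no remaining free text. The regular expression and word-count rule are illustrative approximations, not the paper's exact filtering algorithm.

```python
import re

def strip_markers(tweet):
    """Remove hashtags, user mentions, and URLs, leaving only the free text."""
    text = re.sub(r"(#\w+|@\w+|https?://\S+)", " ", tweet)
    return re.sub(r"\s+", " ", text).strip()

def is_null_tweet(tweet, min_words=2):
    """Treat a tweet as 'null' if almost no text remains after stripping markers."""
    return len(strip_markers(tweet).split()) < min_words

tweets = [
    "#sarcasm #monday http://t.co/xyz",
    "Oh great, another meeting that could have been an email #sarcasm",
]
kept = [t for t in tweets if not is_null_tweet(t)]
print(kept)
```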
  • Co-resident Attack in Cloud Computing: An Overview

    Sahoo S., Mishra S.K., Sahoo B., Turuk A.K.

    Book chapter, Encyclopedia of Big Data Technologies, 2019, DOI Link

    View abstract ⏷

    A cloud rewards organizations with agility and cost-efficiency, but goods of the cloud come with security challenges. The sheer volume and immense size of modern-day clouds (big data) make them hard to protect and consequently, vulnerable to abuse. Security and privacy issues are intensified by velocity, volume, and a variety of big data, such as large-scale cloud infrastructures, diversity of data sources and formats, and a massive amount of inter-cloud migration. The virtualization method allows sharing of computing resources among many tenants, which may be business partners, suppliers, competitors, or attackers. Even though there is substantial logical isolation among the virtual machines (VMs), shared hardware creates vulnerabilities to co-resident attacks. This paper gives a glimpse of security issues in the cloud, specifically related to VMs. Here, we concentrate our study on co-resident VM attack and its defense methods.
  • Resource allocation for video transcoding in the multimedia cloud

    Sahoo S., Parida I., Mishra S.K., Sahoo B., Turuk A.K.

    Book chapter, Advances in Intelligent Systems and Computing, 2019, DOI Link

    View abstract ⏷

    Video content providers like YouTube and Netflix cater their content, i.e., news and shows, on the web which is accessible anytime anywhere. The multi-screens like TVs, smartphones, and laptops created a demand to transcode the video into the appropriate video specification ensuring different quality of services (QoS) such as delay. Transcoding a large, high-definition video requires a lot of time, computation. The cloud transcoding solution allows video service providers to overcome the above difficulties through the pay-as-you-use scheme, with the assurance of providing online support to handle unpredictable demands. This paper presents a cost-efficient cloud-based transcoding framework and algorithm (CVS) for streaming service providers. The dynamic resource provisioning policy used in framework finds the number of virtual machines required for a particular set of video streams. Simulation results based on YouTube dataset show that the CVS algorithm performs better compared to FCFS scheme.
  • An adaptive task allocation technique for green cloud computing

    Mishra S.K., Puthal D., Sahoo B., Jena S.K., Obaidat M.S.

    Article, Journal of Supercomputing, 2018, DOI Link

    View abstract ⏷

    The rapid growth of today's IT demands reflects the increased use of cloud data centers. Reducing computational power consumption in cloud data centers is one of the challenging research issues of the current era. Power consumption is directly proportional to the number of resources assigned to tasks, so power consumption can be reduced by reducing the number of resources assigned to serve a task. In this paper, we have studied the energy consumption in the cloud environment based on a variety of services and provided provisions to promote green cloud computing, which helps to preserve the overall energy consumption of the system. Task allocation in the cloud computing environment is a well-known problem, and through this problem we can facilitate green cloud computing. We have proposed an adaptive task allocation algorithm for the heterogeneous cloud environment. We applied the proposed technique to minimize the makespan of the cloud system and reduce the energy consumption. We have evaluated the proposed algorithm in the CloudSim simulation environment, and simulation results show that our proposed algorithm is energy efficient in the cloud environment compared to other existing techniques.
  • On the placement of controllers in software-Defined-WAN using meta-heuristic approach

    Sahoo K.S., Puthal D., Obaidat M.S., Sarkar A., Mishra S.K., Sahoo B.

    Article, Journal of Systems and Software, 2018, DOI Link

    View abstract ⏷

    Software Defined Networks (SDN) is a popular modern network technology that decouples the control logic from the underlying hardware devices. The control logic is implemented as a software entity that resides in a server called the controller. In a Software-Defined Wide Area Network (SDWAN) with n nodes, deploying k controllers (k < n) is one of the challenging issues. Due to internal or external factors, when the primary path between a switch and its controller fails, it severely interrupts the network's availability. In this regard, the proposed approach provides a seamless backup mechanism against single link failure with minimum communication delay based on a survivability model. In order to obtain an efficient solution, we have considered the controller placement problem (CPP) as a multi-objective combinatorial optimization problem and solve it using two population-based meta-heuristic techniques: Particle Swarm Optimization (PSO) and the FireFly Algorithm (FFA). For CPP, three metrics have been considered: (a) controller-to-switch latency, (b) inter-controller latency, and (c) multi-path connectivity between the switch and the controller. The performance of the algorithms is evaluated on a set of publicly available network topologies in order to obtain the optimum number of controllers and controller positions. We then present the Average Delay Rise (ADR) metric to measure the increased delay due to the failure of the primary path. By comparing the performance of our scheme to a competing scheme, it was found that our proposed scheme effectively improves the survivability of the control path and the performance of the network as well.
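    A minimal sketch of the kind of objective a CPP search evaluates: the worst-case switch-to-nearest-controller latency over shortest paths. The toy topology and the single-objective, exhaustive search below are illustrative assumptions; the paper combines several latency and connectivity metrics and replaces the exhaustive loop with PSO/Firefly at scale.

```python
import networkx as nx
from itertools import combinations

def worst_case_latency(graph, controllers):
    """Maximum shortest-path delay from any switch to its nearest controller."""
    dist = dict(nx.all_pairs_dijkstra_path_length(graph, weight="delay"))
    return max(min(dist[s][c] for c in controllers) for s in graph.nodes)

# Small illustrative topology; the 'delay' edge weight stands in for link latency (ms).
G = nx.Graph()
G.add_weighted_edges_from(
    [(0, 1, 5), (1, 2, 4), (2, 3, 6), (3, 4, 3), (4, 0, 7), (1, 3, 8)],
    weight="delay",
)

# Exhaustive search over k=2 placements; a metaheuristic would replace this loop at scale.
best = min(combinations(G.nodes, 2), key=lambda c: worst_case_latency(G, c))
print("best controller pair:", best, "worst-case latency:", worst_case_latency(G, best))
```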
  • 2D-DWT and Bhattacharyya Distance Based Classification Scheme for the Detection of Acute Lymphoblastic Leukemia

    Mishra S., Mishra S.K., Majhi B., Sa P.K.

    Conference paper, Proceedings - 2018 International Conference on Information Technology, ICIT 2018, 2018, DOI Link

    View abstract ⏷

    This paper proposes an efficient classification system for separating normal blood cells from the pathological cells. The suggested system employs an adaptive histogram equalization scheme to reduce the noise present in the microscopic images. Two-dimensional discrete wavelet transform (2D-DWT) is applied separately to the nucleus and cytoplasm region to generate the feature matrix. The significant and uncorrelated features are chosen using a combination of PCA and Bhattacharyya distance. Subsequently, the reduced feature set is fed to the back propagation neural network for classification purpose. A public dataset ALL-IDB1 is used to validate the proposed scheme. It can be seen that the proposed methodology has a better result as compared to its competent schemes. The accuracy of the suggested scheme is found to be 97.11% in case of combined features from nucleus and cytoplasm region whereas the same is found to be 95.19% and 90.38% if the features are taken separately.
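    A minimal sketch of ranking features by the Bhattacharyya distance between the two class distributions under a univariate Gaussian assumption, the role this measure plays in the feature-reduction step above. The synthetic data are illustrative, and the 2D-DWT and PCA stages of the paper are omitted.

```python
import numpy as np

def bhattacharyya_gauss(x1, x2):
    """Bhattacharyya distance between two 1-D samples under a Gaussian assumption."""
    m1, m2 = x1.mean(), x2.mean()
    v1, v2 = x1.var() + 1e-12, x2.var() + 1e-12
    return 0.25 * np.log(0.25 * (v1 / v2 + v2 / v1 + 2)) + 0.25 * (m1 - m2) ** 2 / (v1 + v2)

rng = np.random.default_rng(0)
# Synthetic feature matrix: 3 features, the second one is the most discriminative.
healthy = np.column_stack([rng.normal(0, 1, 100), rng.normal(0, 1, 100), rng.normal(0, 1, 100)])
blast = np.column_stack([rng.normal(0.2, 1, 100), rng.normal(2.0, 1, 100), rng.normal(0.5, 1, 100)])

scores = [bhattacharyya_gauss(healthy[:, j], blast[:, j]) for j in range(3)]
ranking = np.argsort(scores)[::-1]
print("feature ranking (best first):", ranking, "scores:", np.round(scores, 3))
```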
  • VM Selection using DVFS Technique to Minimize Energy Consumption in Cloud System

    Mishra S.K., Mishra S., Bharti S.K., Sahoo B., Puthal D., Kumar M.

    Conference paper, Proceedings - 2018 International Conference on Information Technology, ICIT 2018, 2018, DOI Link

    View abstract ⏷

    Energy consumption is becoming a key issue in the operation and maintenance of cloud systems. Virtual machine selection plays an important role in the execution of tasks without violating the SLA. In this paper, a VM selection technique is proposed using Dynamic Voltage Frequency Scaling (DVFS) for optimizing the energy consumption and makespan in the cloud system. We have proposed a heuristic for the selection of a VM for each task to optimize the energy utilization by applying the DVFS technique. The proposal extends to incorporate an energy model supporting the evaluation of energy consumption in cloud data centers. Each task has an energy-based SLA to execute in the cloud system. The DVFS mechanism is applied at the virtual machine level to reduce the energy of the cloud system. Moreover, the performance of diverse algorithms (Random allocation and FCFS) is compared with the proposed DVFS-based VM selection strategy with the help of CloudSim.
  • Energy-efficient VM-placement in cloud data center

    Mishra S.K., Puthal D., Sahoo B., Jayaraman P.P., Jun S., Zomaya A.Y., Ranjan R.

    Article, Sustainable Computing: Informatics and Systems, 2018, DOI Link

    View abstract ⏷

    Employing cloud computing to acquire the benefit of cloud by optimizing various parameters that meet changing demands is a challenging task. The optimal mapping of tasks to virtual machines (VMs) and VMs to physical machines (PMs) (known as VM placement) problem are necessary for advancing energy consumption and resource utilization. High heterogeneity of tasks as well as resources, great dynamism and virtualization make the consolidation issue more complicated in the cloud computing system. In this paper, a complete mapping (i.e., task VM and VM to PM) algorithm is proposed. The tasks are classified according to their resource requirement and then searching for the appropriate VM and again searching for the appropriate PM where the selected VM can be deployed. The proposed algorithm reduces the energy consumption by depreciating the number of active PMs, while also minimizes the makespan and task rejection rate. We have evaluated our proposed approach in CloudSim simulator, and the results demonstrate the effectiveness of the proposed algorithm over some existing standard algorithms.
  • Sustainable Service Allocation Using a Metaheuristic Technique in a Fog Server for Industrial Applications

    Mishra S.K., Puthal D., Rodrigues J.J.P.C., Sahoo B., Dutkiewicz E.

    Article, IEEE Transactions on Industrial Informatics, 2018, DOI Link

    View abstract ⏷

    Reducing energy consumption in the fog computing environment is both a research and an operational challenge for the current research community and industry. There are several industries such as finance industry or healthcare industry that require a rich resource platform to process big data along with edge computing in fog architecture. As a result, sustainable computing in a fog server plays a key role in fog computing hierarchy. The energy consumption in fog servers depends on the allocation techniques of services (user requests) to a set of virtual machines (VMs). This service request allocation in a fog computing environment is a nondeterministic polynomial-time hard problem. In this paper, the scheduling of service requests to VMs is presented as a bi-objective minimization problem, where a tradeoff is maintained between the energy consumption and makespan. Specifically, this paper proposes a metaheuristic-based service allocation framework using three metaheuristic techniques, such as particle swarm optimization (PSO), binary PSO, and bat algorithm. These proposed techniques allow us to deal with the heterogeneity of resources in the fog computing environment. This paper has validated the performance of these metaheuristic-based service allocation algorithms by conducting a set of rigorous evaluations.
  • First score auction for pricing-based resource selection in vehicular cloud

    Mishra S., Mishra S.K., Sahoo B., Obaidat M.S., Puthal D.

    Conference paper, CITS 2018 - 2018 International Conference on Computer, Information and Telecommunication Systems, 2018, DOI Link

    View abstract ⏷

    Selecting vehicles to supply resources is a crucial research problem in the vehicular cloud and highly depends on the pricing of the resources. Subsequently, resource pricing is an intricate problem influenced by the market demand and quality of service provided. Widespread and autonomous vehicular network requires reputation as a medium for trusting the supplier vehicles. Taking into account the above factors, we design the utility of supplier and consumer vehicles. Subsequently, a 1st score auction mechanism is proposed and modeled for the consumer vehicles to obtain maximum utility. Additionally, the protocol enables the supplier vehicles to decide the optimal pricing of resources. The 1st auction protocol is then simulated and the experimental results indicate better performance of our protocol than other standard protocols.
  • Improving Energy Usage in Cloud Computing Using DVFS

    Mishra S.K., Parida P.P., Sahoo S., Sahoo B., Jena S.K.

    Conference paper, Advances in Intelligent Systems and Computing, 2018, DOI Link

    View abstract ⏷

    The energy-related issues in distributed systems that may be energy conservation or energy utilization have turned out to be a critical one. Researchers worked for this energy issue and most of them used Dynamic Voltage and Frequency Scaling (DVFS) as a power management technique where less voltage supply is allowed due to a reduction of the clock frequency of processors. The cloud environment has multiple physical hosts, and each host has several numbers of virtual machines (VMs). All online tasks or service requests are scheduled to different VMs. In this paper, an energy-optimized allocation algorithm is proposed where DVFS technique is used for virtual machines. The fundamental idea behind this is to make a compromise balance in between energy consumption and the set up time of different modes of hosts or VMs. Here, the system model that includes different sub-system models is explained formally and the implementation of algorithms in homogeneous as well as heterogeneous environment is evaluated.
  • Energy-efficient deployment of edge datacenters for mobile clouds in sustainable IoT

    Mishra S.K., Puthal D., Sahoo B., Sharma S., Xue Z., Zomaya A.Y.

    Article, IEEE Access, 2018, DOI Link

    View abstract ⏷

    Achieving quick responses with limited energy consumption in mobile cloud computing is an active area of research. The energy consumption increases when a user's request (task) runs in the local mobile device instead of executing in the cloud. Whereas, latency become an issue when the task executes in the cloud environment instead of the mobile device. Therefore, a tradeoff between energy consumption and latency is required in building sustainable Internet of Things (IoT), and for that, we have introduced a middle layer named an edge computing layer to avoid latency in IoT. There are several real-time applications, such as smart city and smart health, where mobile users upload their tasks into the cloud or execute locally. We have intended to minimize the energy consumption of a mobile device as well as the energy consumption of the cloud system while meeting a task's deadline, by offloading the task to the edge datacenter or cloud. This paper proposes an adaptive technique to optimize both parameters, i.e., energy consumption and latency by offloading the task and also by selecting the appropriate virtual machine for the execution of the task. In the proposed technique, if the specified edge datacenter is unable to provide resources, then the user's request will be sent to the cloud system. Finally, the proposed technique is evaluated using a real-world scenario to measure its performance and efficiency. The simulation results show that the total energy consumption and execution time decrease after introducing an edge datacenters as a middle layer.
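    A minimal sketch of the offloading decision described above: estimate (energy, latency) for local, edge, and cloud execution, discard options that miss the deadline, and pick the lowest-energy feasible option. The cost model and all numeric constants are invented for illustration and do not reproduce the paper's model.

```python
def offload_choice(cycles, data_mb, deadline_s):
    """Pick local, edge, or cloud execution (illustrative cost model).

    Each option gets an (energy in J, latency in s) estimate; options that
    miss the deadline are discarded and the rest are ranked by energy.
    """
    options = {
        "local": (cycles * 1.0e-9 * 0.9, cycles / 1.0e9),               # slow CPU, device energy
        "edge":  (data_mb * 0.05 + 0.2,  data_mb / 50 + cycles / 4e9),  # nearby, small link delay
        "cloud": (data_mb * 0.05 + 0.1,  data_mb / 10 + cycles / 16e9), # fast, larger transfer delay
    }
    feasible = {k: v for k, v in options.items() if v[1] <= deadline_s}
    if not feasible:
        return "cloud"                      # fall back if nothing meets the deadline
    return min(feasible, key=lambda k: feasible[k][0])

print(offload_choice(cycles=8e9, data_mb=20, deadline_s=3.0))
```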
  • Time efficient dynamic threshold-based load balancing technique for Cloud Computing

    Mishra S.K., Khan M.A., Sahoo B., Puthal D., Obaidat M.S., Hsiao K.F.

    Conference paper, IEEE CITS 2017 - 2017 International Conference on Computer, Information and Telecommunication Systems, 2017, DOI Link

    View abstract ⏷

    Cloud computing is a novel technology that brings several new challenges to organizations worldwide. Cloud computing supports virtual machines (VMs) to host multiple applications simultaneously. Balancing the large number of applications in the heterogeneous cloud environment becomes challenging as the hypervisor scheduling controls all VMs. When the scheduler allocates tasks to overloaded VMs, the performance of the cloud system degrades. In this paper, we present a novel load balancing approach to organize the virtualized resources of the data center efficiently. In our approach, the load on a VM scales up and down according to the resource capacity of the VM. The proposed scheme minimizes the makespan of the system, maximizes resource utilization, and reduces the overall energy consumption. We have evaluated our approach in the CloudSim simulation environment; our devised approach reduced the waiting time compared to existing approaches and optimized the makespan of the cloud data center.
  • Metaheuristic solutions for solving controller placement problem in SDN-based WAN architecture

    Sahoo K.S., Sarkar A., Mishra S.K., Sahoo B., Puthal D., Obaidat M.S., Sadun B.

    Conference paper, ICETE 2017 - Proceedings of the 14th International Joint Conference on e-Business and Telecommunications, 2017, DOI Link

    View abstract ⏷

    Software Defined Networks (SDN) is a popular paradigm in modern networking systems that decouples the control logic from the underlying hardware devices. The control logic is implemented as a software component residing in a server called the controller. To increase performance, deploying multiple controllers in a large-scale network is one of the key challenges of SDN. To solve this, authors have considered the controller placement problem (CPP) as a multi-objective combinatorial optimization problem and used different heuristics. Such heuristics can be executed within a specific time frame for small and medium-sized topologies, but are out of scope for large-scale instances like a Wide Area Network (WAN). In order to obtain better results, we propose two population-based meta-heuristic algorithms, Particle Swarm Optimization (PSO) and Firefly, for optimal placement of the controllers, which take a particular set of objective functions and return the best possible positions. The problem has been defined taking into consideration both controller-to-switch and inter-controller latency as the objective functions. The performance of the algorithms is evaluated on a set of publicly available network topologies in terms of execution time. The results show that the Firefly algorithm performs better than PSO and the random approach under various conditions.
  • Time efficient task allocation in cloud computing environment

    Mishra S.K., Khan M.A., Sahoo B., Jena S.K.

    Conference paper, 2017 2nd International Conference for Convergence in Technology, I2CT 2017, 2017, DOI Link

    View abstract ⏷

    Cloud computing is an evolution of distributed systems that has been adopted worldwide, both scientifically and commercially. For optimal use of the cloud's potential power, effective and efficient algorithms are required, which will select the best resources from available cloud resources for different applications. This allocation of user requests to the cloud resources can optimize various parameters like energy consumption, makespan, throughput, etc. This task allocation or mapping problem is a well-known NP-Complete problem. In this paper, we have proposed an algorithm, Task Based Allocation (TBA), to minimize the makespan of the cloud system and also to increase resource utilization. We have simulated our algorithm, TBA, in the CloudSim simulator in a heterogeneous environment. CloudSim is one of the simulation tools for the cloud environment which provides evaluation and testing of cloud services and infrastructure before real-world deployment. During the comparison of the algorithm, we provide sorted tasks to the TBA algorithm once and unsorted tasks the second time. We have compared sorted-TBA, unsorted-TBA, and a random algorithm, and the sorted-TBA algorithm performs better.
  • Evaluating performance of the Non-linear data structure for job queuing in the cloud environment

    Sahoo S., Mishra S.K., Swami D., Khan A., Sahoo B.

    Conference paper, 2017 2nd International Conference for Convergence in Technology, I2CT 2017, 2017, DOI Link

    View abstract ⏷

    The Cloud Computing era comes with the advancement of technologies in the fields of processing, storage, bandwidth, network access, security of the internet, etc. Several advantages of Cloud Computing include scalability, high computing power, on-demand resource access, high availability, etc. One of the biggest challenges faced by a Cloud provider is to schedule incoming jobs to virtual machines (VMs) such that certain constraints are satisfied. The development of automatic applications, smart devices and applications, and sensor-based applications needs large data storage and computing resources, with output required within a particular time limit. Many works have proposed and commented on various data structures and allocation policies for real-time jobs on the cloud. Most of these technologies use a queue-based mapping of tasks to VMs. This work presents a novel min-heap based VM allocation (MHVA) designed for real-time jobs. The proposed MHVA is compared with a queue-based random allocation using the performance metrics makespan and energy consumption. Simulations are performed for different scenarios varying the number of tasks and VMs. The simulation results show that MHVA is significantly better than the random algorithm.
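    A minimal sketch of the min-heap idea behind MHVA: keep VMs in a heap keyed by ready time so each arriving job is dispatched to the earliest-available VM with an O(log n) pop/push. The job runtimes and VM count are illustrative, and the real MHVA may key or prioritize differently.

```python
import heapq

def min_heap_allocate(jobs, n_vms):
    """Dispatch jobs to the earliest-available VM using a min-heap.

    `jobs` is a list of execution times; the heap stores (ready_time, vm_id)
    so the next allocation is always the VM that becomes idle first.
    """
    heap = [(0.0, vm) for vm in range(n_vms)]
    heapq.heapify(heap)
    schedule = []
    for job_id, runtime in enumerate(jobs):
        ready, vm = heapq.heappop(heap)       # earliest-available VM
        finish = ready + runtime
        schedule.append((job_id, vm, ready, finish))
        heapq.heappush(heap, (finish, vm))
    makespan = max(finish for _, _, _, finish in schedule)
    return schedule, makespan

schedule, makespan = min_heap_allocate([3, 5, 2, 7, 4, 1], n_vms=2)
print(schedule, "makespan:", makespan)
```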
  • Adaptive scheduling of cloud tasks using ant colony optimization

    Mishra S.K., Sahoo B., Manikyam P.S.

    Conference paper, ACM International Conference Proceeding Series, 2017, DOI Link

    View abstract ⏷

    Efficient scheduling of heterogeneous tasks to heterogeneous processors for any application is crucial to attain high performance. Cloud computing provides a heterogeneous environment to perform various operations. The scheduling of user requests (tasks) in the cloud environment is a NP-hard optimization problem. Researchers present various heuristic and metaheuristic techniques to provide the sub-optimal solution to the problem. In this paper, we have proposed an Ant Colony Optimization (ACO) based task scheduling (ACOTS) algorithm to optimize the makespan of the system and reducing the average waiting time. The designed algorithm is implemented and simulated in CloudSim simulator. Results of simulations are compared to Round Robin and Random algorithms which show satisfactory output.
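    A minimal sketch of the core ACO loop for task-to-VM mapping: ants build assignments probabilistically from pheromone and a heuristic (VM speed), then pheromone evaporates and is reinforced along the best assignment found. The parameters and the exact pheromone/heuristic definitions are illustrative of the general idea, not the paper's ACOTS settings.

```python
import random

def aco_schedule(task_len, vm_mips, ants=10, iters=50, rho=0.1, alpha=1.0, beta=2.0):
    """Minimal ACO for task-to-VM mapping (illustrative, not the exact ACOTS algorithm)."""
    n_t, n_v = len(task_len), len(vm_mips)
    tau = [[1.0] * n_v for _ in range(n_t)]                              # pheromone per (task, VM)
    eta = [[m / max(vm_mips) for m in vm_mips] for _ in range(n_t)]      # heuristic: relative VM speed
    best, best_span = None, float("inf")

    def makespan(assign):
        load = [0.0] * n_v
        for t, v in enumerate(assign):
            load[v] += task_len[t] / vm_mips[v]
        return max(load)

    for _ in range(iters):
        for _ in range(ants):
            assign = []
            for t in range(n_t):
                w = [tau[t][v] ** alpha * eta[t][v] ** beta for v in range(n_v)]
                assign.append(random.choices(range(n_v), weights=w)[0])
            span = makespan(assign)
            if span < best_span:
                best, best_span = assign, span
        # Evaporate everywhere, then deposit on the best-so-far assignment.
        tau = [[(1 - rho) * p for p in row] for row in tau]
        for t, v in enumerate(best):
            tau[t][v] += 1.0 / best_span

    return best, best_span

random.seed(3)
print(aco_schedule([4000, 9000, 2500, 7000, 12000], [1000, 2000, 1500]))
```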
  • Improved energy-efficient target coverage in wireless sensor networks

    Panda B.S., Bhatta B.K., Mishra S.K.

    Conference paper, Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2017, DOI Link

    View abstract ⏷

    Achieving optimal field coverage is a significant challenge in various sensor network applications. In some specific situations, the sensor field (target) may have coverage gaps due to the random deployment of sensors; hence, the optimized level of target coverage cannot be obtained. Given a set of sensors in the plane, the target coverage problem is to separate the sensor into different groups and provide them specific time intervals, so that the coverage lifetime can be maximized. Here, the constraint is that the network should be connected. Presently, target coverage problem is widely studied due to its lot of practical application in Wireless Sensor Network (WSN). This paper focuses on target coverage problem along with the minimum energy usage of the network so that the lifetime of the whole network can be increased. Since constructing a minimum connected target coverage problem is known to be NP-Complete, so several heuristics, as well as approximation algorithms, have been proposed. Here, we propose a heuristic for connected target coverage problem in WSN. We compare the performance of our heuristic with the existing heuristic, which states that our algorithm performs better than the existing algorithm for connected target coverage problem. Again, we have implemented the 2-connected target coverage properties for the network which provide fault tolerance as well as robustness to the network. So, we propose one algorithm which gives the target coverage along with 2-connectivity.
  • Deadline-constraint services in cloud with heterogeneous servers

    Sahoo S., Mishra S.K., Sahoo B., Puthal D., Obaidat M.S.

    Conference paper, IEEE CITS 2017 - 2017 International Conference on Computer, Information and Telecommunication Systems, 2017, DOI Link

    View abstract ⏷

    The development of delay sensitive applications needs massive data storage and computing resources, especially in a typical cloud environment. The cloud computing paradigm provides a broad range of services viz. software, platform, and infrastructure for various applications (both real-time and non real-time) over the Internet. But, in the case of Infrastructure-as-a-Service (IaaS) cloud platform, either over provisioning or under-provisioning of resources becomes a challenging issue for time constraint applications. An accurate modeling of cloud centers is not feasible due to the nature of cloud centers and diversity of user requests. We present an analytical model to estimate the performance of the cloud center for deadline sensitive tasks. We used the model to find the number of task miss deadline, waiting time of a task, and response time of the service, among others.
  • Execution of real time task on cloud environment

    Sahoo S., Nawaz S., Mishra S.K., Sahoo B.

    Conference paper, 12th IEEE International Conference Electronics, Energy, Environment, Communication, Computer, Control: (E3-C3), INDICON 2015, 2016, DOI Link

    View abstract ⏷

    Cloud computing is an internet-based computing paradigm where resources, software, and information are shared on demand, i.e., the user can access documents anytime, anywhere. Execution of real-time tasks in a cloud computing environment is an emerging research area. Real-time tasks need to meet their deadlines regardless of system load or makespan. This paper discusses scheduling of real-time tasks on the cloud environment considering Basic Earliest Deadline First (BEDF), FFE (first fit EDF), BFE (best fit EDF), and WFE (worst fit EDF) algorithms. Different performance parameters such as guarantee ratio (GR), utilization of VMs (UV), and throughput (TP) are used to measure the effectiveness of the algorithms.
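    A minimal sketch of Earliest Deadline First dispatch on a single VM and the guarantee ratio it yields; the non-preemptive, single-VM simplification and the task set below are illustrative, whereas the paper studies EDF combined with first/best/worst-fit VM selection.

```python
def edf_guarantee_ratio(tasks):
    """Run tasks on one VM in Earliest-Deadline-First order (non-preemptive sketch).

    `tasks` is a list of (execution_time, deadline) pairs; returns the
    guarantee ratio, i.e. the fraction of tasks finishing by their deadline.
    """
    clock, met = 0.0, 0
    for exec_time, deadline in sorted(tasks, key=lambda t: t[1]):   # EDF order
        if clock + exec_time <= deadline:
            clock += exec_time
            met += 1
        # otherwise the task is rejected, since it would miss its deadline
    return met / len(tasks)

tasks = [(2, 9), (3, 3), (4, 8), (3, 15), (2, 6)]
print("guarantee ratio:", edf_guarantee_ratio(tasks))
```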
  • Improving energy consumption in cloud

    Mishra S.K., Deswal R., Sahoo S., Sahoo B.

    Conference paper, 12th IEEE International Conference Electronics, Energy, Environment, Communication, Computer, Control: (E3-C3), INDICON 2015, 2016, DOI Link

    View abstract ⏷

    To meet the service level agreement (SLA) between the cloud user and the cloud service provider, the service provider has to pay more. The cloud resources are allocated not only to satisfy the quality of services (QoS) those are specified in SLA, but also need to reduce energy utilization. Therefore, task consolidation plays an important role in cloud computing, which map users service requests to appropriate resources resulting in proper utilization of various cloud resources. The enhancement of overall performance of cloud computing also depends on the Task Consolidation approaches. Here, for task consolidation problem, we present an energy aware model which includes description of physical hosts, virtual machines and service requests (tasks) submitted by users. For the proposed model, an Energy Aware Task Consolidation (EATC) algorithm is developed where heterogeneity also affects the performance and show significant improvement in energy savings.
  • Metaheuristic approaches to task consolidation problem in the cloud

    Mishra S.K., Sahoo B., Sahoo K.S., Jena S.K.

    Book chapter, Resource Management and Efficiency in Cloud Computing Environments, 2016, DOI Link

    View abstract ⏷

    The service (task) allocation problem in distributed computing is a form of the multidimensional knapsack problem, a classic combinatorial optimization problem. Nature-inspired techniques are powerful mechanisms for addressing a large number of such problems, since computing an optimal solution for many industrial and scientific problems is usually intractable; the service request allocation problem in distributed computing is NP-hard. The major portion of this chapter surveys mechanisms for the service allocation problem across different cloud computing architectures, with a brief discussion of the implementation issues of metaheuristic techniques such as Particle Swarm Optimization (PSO), Genetic Algorithm (GA), Ant Colony Optimization (ACO), and the BAT algorithm in various environments for service allocation in the cloud.
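
    The toy genetic algorithm below illustrates one of the metaheuristics the chapter surveys, encoding a task-to-VM mapping as a chromosome and minimizing makespan; the population size, operators, and example execution times are illustrative assumptions rather than the chapter's specific formulations.

        import random

        def makespan(mapping, exec_times, n_vms):
            """Completion time of the busiest VM under a task->VM mapping."""
            load = [0.0] * n_vms
            for task, vm in enumerate(mapping):
                load[vm] += exec_times[task]
            return max(load)

        def ga_allocate(exec_times, n_vms, pop=30, gens=100, mut=0.1):
            """Minimal GA sketch for service (task) allocation: a chromosome
            assigns each task to a VM; fitness is the makespan to minimise."""
            n = len(exec_times)
            popu = [[random.randrange(n_vms) for _ in range(n)] for _ in range(pop)]
            for _ in range(gens):
                popu.sort(key=lambda c: makespan(c, exec_times, n_vms))
                parents = popu[: pop // 2]                 # truncation selection
                children = []
                while len(children) < pop - len(parents):
                    a, b = random.sample(parents, 2)
                    cut = random.randrange(1, n)           # one-point crossover
                    child = a[:cut] + b[cut:]
                    if random.random() < mut:              # mutation: reassign one task
                        child[random.randrange(n)] = random.randrange(n_vms)
                    children.append(child)
                popu = parents + children
            best = min(popu, key=lambda c: makespan(c, exec_times, n_vms))
            return best, makespan(best, exec_times, n_vms)

        print(ga_allocate([4, 2, 7, 3, 5, 1, 6], n_vms=3))
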
  • Honeypot-based intrusion detection system: A performance analysis

    Kondra J.R., Bharti S.K., Mishra S.K., Babu K.S.

    Conference paper, Proceedings of the 10th INDIACom; 2016 3rd International Conference on Computing for Sustainable Global Development, INDIACom 2016, 2016

    View abstract ⏷

    Attacks on the Internet keep increasing and harm security systems. To minimize this threat, a security system must be able to detect zero-day attacks and block them. A honeypot is a proactive defense technology in which resources are placed in a network with the aim of observing and capturing new attacks. This paper proposes a honeypot-based model for an intrusion detection system (IDS) to obtain the most useful data about the attacker. The abilities and limitations of honeypots were tested, and the aspects that need to be improved were identified. In future work, we aim to use this approach for early prevention, so that pre-emptive action is taken before any unexpected harm to the security system occurs.
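
    The sketch below shows the core honeypot idea in a few lines: a listener on an otherwise unused port where any connection attempt is, by definition, suspicious and gets logged. It is a toy low-interaction example for illustration only, not the IDS model proposed in the paper; the port and log-file name are assumptions, and it should only be run on an isolated test machine.

        import socket, datetime

        def toy_honeypot(port=2222, logfile="honeypot.log"):
            """Low-interaction honeypot sketch: listen on an otherwise unused
            port and record every connection attempt with a timestamp."""
            srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            srv.bind(("0.0.0.0", port))
            srv.listen(5)
            with open(logfile, "a") as log:
                while True:
                    conn, (ip, src_port) = srv.accept()
                    log.write(f"{datetime.datetime.now()} probe from {ip}:{src_port}\n")
                    log.flush()
                    conn.close()               # no real service is offered

        # toy_honeypot()   # runs forever; deploy only in an isolated test network
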
  • Real time task execution in cloud using mapreduce framework

    Sahoo S., Sahoo B., Turuk A.K., Mishra S.K.

    Book chapter, Resource Management and Efficiency in Cloud Computing Environments, 2016, DOI Link

    View abstract ⏷

    The cloud computing era comes with advances in processing, storage, network bandwidth, Internet access, and security. Automated applications, smart devices, and sensor-based applications need huge data storage and computing resources and must deliver output within a particular time limit, and users are becoming increasingly sensitive to delays in the applications they use. A scalable platform such as cloud computing is therefore required to provide the computing resources and data storage needed to process such applications. The MapReduce framework is used to process huge amounts of data, and cloud-based MapReduce processing offers added benefits such as fault tolerance, heterogeneity, ease of use, openness, and efficiency. This chapter discusses the cloud system model, the real-time MapReduce framework, examples of cloud-based MapReduce frameworks, the quality attributes of MapReduce scheduling, and various MapReduce scheduling algorithms based on those attributes.
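
    To make the programming model concrete, the following self-contained sketch runs a map phase and a reduce phase in a single process on a toy word-count job; a real cloud deployment (for example, Hadoop) distributes these phases across many nodes, which this illustration does not attempt, and the function names are assumptions for the example.

        from collections import defaultdict
        from itertools import chain

        def map_phase(record):
            """Map: emit (key, value) pairs, here one (word, 1) pair per word."""
            return [(word.lower(), 1) for word in record.split()]

        def reduce_phase(pairs):
            """Reduce: sum the values grouped by key."""
            grouped = defaultdict(int)
            for key, value in pairs:
                grouped[key] += value
            return dict(grouped)

        records = ["cloud tasks need deadlines", "cloud tasks scale out"]
        mapped = chain.from_iterable(map_phase(r) for r in records)
        print(reduce_phase(mapped))   # e.g. {'cloud': 2, 'tasks': 2, ...}
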
  • A comparative analysis of packet scheduling schemes for multimedia services in LTE networks

    Sahoo B.P.S., Puthal D., Swain S., Mishra S.

    Conference paper, Proceedings - 1st International Conference on Computational Intelligence and Networks, CINE 2015, 2015, DOI Link

    View abstract ⏷

    The revolution in high-speed broadband networks is a requirement of the current time; in other words, there is an unceasing demand for high data rates and mobility. For both providers and customers, Long Term Evolution (LTE) is a promising technology for providing broadband mobile Internet access. To give customers a better quality of service (QoS), resources must be utilized in the fullest possible way, and resource scheduling is one of the important functions for improving system performance. This paper studies recently proposed packet scheduling schemes for LTE systems, concentrating on real-time services such as online video streaming and Voice over Internet Protocol (VoIP). The LTE-Sim simulator is used for the performance study. The primary objective of this paper is to provide results that will help researchers design more efficient scheduling schemes aimed at better overall system performance. For the simulation study, two scenarios were created, one for video traffic and the other for VoIP. Performance metrics such as packet loss, fairness, end-to-end (E2E) delay, cell throughput, and spectral efficiency were measured for both scenarios while varying the number of users. In light of the simulation results, the frame level scheduler (FLS) algorithm outperforms the others by balancing the QoS requirements of multimedia services.
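
    Fairness is one of the metrics listed above; a common way to quantify it is Jain's fairness index over per-user throughputs, sketched below. This is a generic formula given for illustration and is not taken from the paper or from LTE-Sim; the example throughput values are made up.

        def jains_fairness(throughputs):
            """Jain's fairness index: (sum x)^2 / (n * sum x^2). A value of 1.0
            means perfectly equal per-user throughput; values near 1/n mean a
            single user dominates the cell."""
            n = len(throughputs)
            total = sum(throughputs)
            return (total * total) / (n * sum(x * x for x in throughputs))

        print(jains_fairness([4.0, 4.1, 3.9, 4.0]))   # close to 1.0 (fair)
        print(jains_fairness([9.0, 0.5, 0.5, 0.5]))   # much lower (unfair)
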
  • Cloud computing features, issues, and challenges: A big picture

    Puthal D., Sahoo B.P.S., Mishra S., Swain S.

    Conference paper, Proceedings - 1st International Conference on Computational Intelligence and Networks, CINE 2015, 2015, DOI Link

    View abstract ⏷

    Since the phenomenon of cloud computing was proposed, there has been unceasing research interest across the globe. Cloud computing is seen as one of the technologies driving the next-generation computing revolution and has rapidly become one of the hottest topics in IT. This fast move towards cloud computing has fuelled concerns about points fundamental to the success of information systems: communication, virtualization, data availability and integrity, public auditing, scientific applications, and information security. Cloud computing research has therefore attracted tremendous interest in recent years. In this paper, we aim to summarize the current open challenges and issues of cloud computing. The paper is organized as follows: first, we discuss the cloud computing architecture and the numerous services it offers; second, we highlight several security issues in cloud computing by service layer; then we identify several open challenges from the cloud adoption perspective and their future implications; finally, we highlight the platforms currently available for cloud research and development.

Patents

  • A system and a method for controlling smart street lights

    Dr Tapas Kumar Mishra, Dr Sambit Kumar Mishra, Dr Kshira Sagar Sahoo, Dr Abinash Pujahari

    Patent Application No: 202241046793, Date Filed: 17/08/2022, Date Published: 26/08/2022, Status: Published

  • A system and a method for detecting building damage

    Dr Sambit Kumar Mishra

    Patent Application No: 202341050457, Date Filed: 26/07/2023, Date Published: 01/09/2023, Status: Published

  • A system and a method for bone fracture detection

    Dr Sambit Kumar Mishra

    Patent Application No: 202441041807, Date Filed: 29/05/2024, Date Published: 07/06/2024, Status: Published

  • A system for video surveillance and alerting mechanism

    Dr Sambit Kumar Mishra

    Patent Application No: 202441043640, Date Filed: 05/06/2024, Date Published: 14/06/2024, Status: Published

  • A system and a method for fuzzy logic-based dynamic resource allocation in edge computing environments

    Dr Sambit Kumar Mishra

    Patent Application No: 202441073852, Date Filed: 30/09/2024, Date Published: 04/10/2024, Status: Published

  • A system for intrusion detection in integrated environments

    Dr Sambit Kumar Mishra

    Patent Application No: 202441103343, Date Filed: 26/12/2024, Date Published: 03/01/2025, Status: Published

  • System and method for detecting and preventing distributed denial of service attacks in a cloud environment

    Dr Sambit Kumar Mishra

    Patent Application No: 202541010545, Date Filed: 07/02/2025, Date Published: 14/02/2025, Status: Published

  • A hybrid encryption system for secure data communication in an edge-cloud environment and method thereof

    Dr Sambit Kumar Mishra

    Date Filed: 05/12/2024, Date Published: 13/12/2024

  • A system and a method for federated learning-based edge computing in smart cities

    Dr Sambit Kumar Mishra

    Patent Application No: 202441052707, Date Filed: 10/07/2024, Date Published: 19/07/2024

  • A system for cardiac disease prediction

    Dr Sambit Kumar Mishra

    Patent Application No: 202541000540, Date Filed: 02/01/2025, Date Published: 10/01/2025, Status: Published

  • A dynamic time quantum-based task scheduling system and method for vehicular edge computing (VEC)

    Mr Sugyan Kumar Mishra, Mr M Ratna Raju, Dr Sambit Kumar Mishra

    Patent Application No: 202441073304, Date Filed: 27/09/2024, Date Published: 04/10/2024, Status: Published

  • A signal processing system for detection and confirmation of pattern-based events and a method thereof

    Mr Sugyan Kumar Mishra, Dr Sambit Kumar Mishra

    Patent Application No: 202541055915, Date Filed: 10/06/2025, Date Published: 20/06/2025, Status: Published

Projects

Scholars

Doctoral Scholars

  • Ms Abdhisuta Dash
  • Ms Jasmini Kumari
  • Mr Subham Kumar Sahoo

Interests

  • Cloud Computing
  • Distributed Computing
  • Graph Theory
  • IoT

Thought Leaderships

There are no Thought Leaderships associated with this faculty.

Top Achievements

Research Area

No research areas found for this faculty.

Computer Science and Engineering is a fast-evolving discipline and this is an exciting time to become a Computer Scientist!

Recent Updates

No recent updates found.

Publications
  • A Robust Model for Quantum-Resistant Cryptography to Tackle Quantum Risks

    Guha D., Lenka R., Sharma V., Mishra S.K., Alkhayyat A., Tripathy H.K.

    Conference paper, Lecture Notes in Networks and Systems, 2026, DOI Link

    View abstract ⏷

    As quantum computing advances, conventional cryptographic algorithms face developing threats, necessitating the improvement of quantum-resistant security mechanisms. Winternitz One-Time Signature (WOTS) is a promising cryptographic scheme that offers robust resistance in competition to quantum attacks. This paper explores the software of WOTS in enhancing the protection of digital communications and information integrity in a quantum computing generation. By manner of analysing the fundamental standards, sensible implementations, and ability demanding situations of WOTS, this research dreams to provide insights into its effectiveness as a quantum-resistant protection solution.
  • Tomato Leaf Disease Detection Using Deep Learning and Machine Learning

    Chebrolu M., Garikapati K., Veeramachaneni Y., Annabathina J., Mishra S.K., Mishra S.K.

    Conference paper, 2025 International Conference on Artificial Intelligence and Machine Vision, AIMV 2025, 2025, DOI Link

    View abstract ⏷

    Detecting diseases in tomato leaves at an early stage is crucial for preventing crop damage and improving food security. Traditional diagnostic methods are often inefficient, requiring significant expertise and time. To address this challenge, we explore AI-driven approaches, integrating DL and ML methods for automated disease detection. This study employs CNNs, specifically leveraging the VGG16 architecture for feature extraction. Additionally, we compare its effectiveness with classical classifiers such as KNN and SVM. Using a publicly available dataset of healthy and diseased tomato leaves, our results indicate that CNN-based models outperform conventional machine learning classifiers in both accuracy and efficiency. Moreover, integrating IoT-based analytics enhances early detection, reducing crop losses and promoting sustainable agricultural practices.
  • When latent features meet side information: A preference relation based graph neural network for collaborative filtering

    Shi X., Zhang Y., Pujahari A., Mishra S.K.

    Article, Expert Systems with Applications, 2025, DOI Link

    View abstract ⏷

    As recommender systems shift from rating-based to interaction-based models, graph neural network-based collaborative filtering models are gaining popularity due to their powerful representation of user-item interactions. However, these models may not produce good item ranking since they focus on explicit preference predictions. Further, these models do not consider side information since they only capture latent feature information of user-item interactions. This study proposes an approach to overcome these two issues by employing preference relation in the graph neural network model for collaborative filtering. Using preference relation ensures the model will generate a good ranking of items. The item side information is integrated into the model through a trainable matrix, which is crucial when the data is highly sparse. The main advantage of this approach is that the model can be generalized to any recommendation scenario where a graph neural network is used for collaborative filtering. Experimental results obtained using the recent RS datasets show that the proposed model outperformed the related baselines.
  • Trading Strategy with EMA’s and Risk Management

    Pranav Somisetty S.D., Jagadishwar Gatte S., Kosuri N.B., Gowrish Chinta L., Mishra S.K., Kumar Mishra S.

    Conference paper, 2025 International Conference on Artificial Intelligence and Machine Vision, AIMV 2025, 2025, DOI Link

    View abstract ⏷

    The trading world often appears mysterious, filled with stories of fear, hope, addiction, and occasional profits. However, many fail to recognize that consistent profitability in trading is driven by discipline, a well-defined strategy, and strict adherence to rules. This lack of awareness is a key reason why 75-90% of new traders enter the market with high expectations but end up losing their hard-earned money. In this research we propose a quantitative trading strategy based on exponential moving average (EMA) crossovers, volume analysis, and structured profit booking. The strategy utilises a short-term 9-period EMA and along-term 15-period EMA to identify trend reversals, generating buy signals when the two different EMA's crosses under some conditions and sell signals are generated when the opposite occurs. Meanwhile, a confirmation mechanism is introduced, requiring the price to move at least 0.06% above the crossover price while ensuring the crossover candle remains bullish. Additionally, volume conditions are incorporated to validate momentum, ensuring buy signals are triggered only when the trading volume increases in ascending order. To optimize trade management, a multi-tier profit booking system is implemented, allowing partial exits at predefined levels. which ensures that the traders secure gains while allowing profitable trades to run. The strategy's performance is evaluated through historical back-testing, assessing profitability, accuracy, and risk-reward dynamics. The results demonstrate the effectiveness of integrating EMA crossings with volumes and structured exit points in improving trade success rates. This might become the future of so many people to convert their portfolio from a losing streak to a winning streak.
  • Enhancing Heart Disease Prediction with Data Augmentation and ML Classifiers

    Rachapalli V.K., Meenavalli C., Nunna S.P., Yarramaneni P., Mishra S.K., Mishra S.K.

    Conference paper, 2025 International Conference on Artificial Intelligence and Machine Vision, AIMV 2025, 2025, DOI Link

    View abstract ⏷

    Heart disease is a significant cause of death worldwide, and early prediction is vital for prevention and treatment. This project uses the Framingham Heart Study dataset for the early prediction of Coronary Heart Disease (CHD) using machine learning methods. The Framingham Heart Study is a highly unbalanced dataset, with only 16 % cases of CHD, which impacts the accuracy of the model. To overcome this, data augmentation techniques such as SMOTE and cGAN are applied to create synthetic cases of CHD. The machine learning algorithms that are compared: Random Forest, XGBoost, SVM, and MLP. XGBoost has achieved the highest AUC-ROC of 0.973 when cGAN-augmented data is used, while cGAN-augmented data improves recall and overall model performance significantly. This study identifies the potential for combining machine learning with data augmentation to improve CHD prediction.
  • Ensembling AI and Federated Learning for Industry 4.0: A Privacy-Preserving Approach in Edge Computing

    Sahoo S.K., Dash A., Mishra S.K., Humayun M.

    Book chapter, Advances in Science, Technology and Innovation, 2025, DOI Link

    View abstract ⏷

    The emergence of Industry 4.0 resulted in a disruptive era marked by the incorporation of cutting-edge technology, such as edge computing and artificial intelligence (AI), into industrial processes. The integration of AI and Federated Learning (FL) methodologies and the creation of intelligent solutions that protect privacy within the framework of Industry 4.0 are two key ideas that will be explored in this chapter. The chapter highlights that one major obstacle to edge computing’s widespread adoption in Industry 4.0 is privacy concerns. It emphasizes the necessity of finding solutions that balance the demands for real-time processing with the strictest privacy regulation. The main goal is to investigate how intelligent edge device solutions can be implemented while maintaining privacy protection through the use of FL. The goal of this chapter is to shed light on how to use the synergies between AI and FL to address privacy concerns related to Industry 4.0. The chapter ends with a call for Industry 4.0, which will see the standardization of edge computing, federated learning techniques, and artificial intelligence. By putting in place privacy-preserving safeguards, organizations are encouraged to adopt new technologies while maintaining strict data privacy and security standards. In the rapidly changing context of Industry 4.0, this symbiotic connection is expected to transform industrial landscapes, guiding them towards unmatched efficiency and creativity.
  • Intent-Driven VM Allocation Strategy for Optimizing Cloudlet Processing in Edge-Cloud Computing

    Sahoo S.K., Mishra S.K., Puthal D.

    Article, IEEE Internet of Things Journal, 2025, DOI Link

    View abstract ⏷

    Edge-cloud computing refers to a paradigm that combines the benefits of edge and cloud computing to optimize data processing and resource utilization. Edge-cloud computing plays a crucial role in resource allocation by optimizing the distribution of computational resources between edge devices and centralized cloud infrastructures. In the rapidly evolving landscape of edge-cloud computing, efficient VM allocation is critical for optimizing resource utilization, minimizing latency, and ensuring high SLA compliance. This paper introduces a novel heuristic VM allocation strategy, named LLCD, to enhance cloudlet or task processing in edge-cloud data centers. By employing a heuristic approach inspired by mixed-integer nonlinear programming models, this strategy dynamically assigns VMs based on their current load and the impending deadlines of tasks, significantly reducing overall system latency and enhancing SLA success rates. Simulation was conducted across various computational intensities. The findings reveal that the proposed approach substantially improves resource utilization and operational efficiency, adapting to dynamic workloads, by achieving an SLA success ratio as 74.26% and 83.7% in different deadline scenarios. The adaptive nature of the LLCD algorithm allows real-time task reallocation based on system feedback, which mirrors the operational principles of AI-driven orchestration in distributed IoT environments. The validation is achieved through a multi-iteration simulation model that emulates dynamic IoT workloads, demonstrating LLCD’s learning capability in maintaining SLA stability and consistent latency reduction across changing task distributions. Moreover, the proposed heuristic provides a foundation for latency-efficient and learning-based management in distributed computing environments.
  • Container Placement Using Penalty-Based PSO in the Cloud Data Center

    Akram Khan M., Sahoo B., Kumar Mishra S.

    Article, Concurrency and Computation: Practice and Experience, 2025, DOI Link

    View abstract ⏷

    Containerization has transformed application deployment by offering a lightweight, scalable, and portable architecture for the deployment of container applications and their dependencies. In contemporary cloud computing data centers, where virtual machines (VMs) are frequently utilized to host containerized applications, the challenge of effective placement of the container has garnered significant attention. Container placement (CP) involves placing a container over the VM to execute a container. CP is a nontrivial problem in the container cloud data center (CCDC). Poor placement decisions can lead to decreased service performance or wastage of cloud resources. Efficient placement of containers within a virtual environment is critical while optimizing resource utilization and performance. This paper proposes a penalty-based particle swarm optimization (PB-PSO) CP algorithm. In the proposed algorithm, we have considered the makespan, cost, and load of the VM while making the CP decisions. We have proposed the concept of a load-balancing penalty to prevent a VM from becoming overloaded. This algorithm solves various CP challenges by varying container application sizes in heterogeneous cloud environments. The primary goal of the proposed algorithm is to minimize the makespan and computational cost of containers through efficient resource utilization. We have performed extensive simulation studies to verify the efficacy of the proposed algorithm using the CloudSim 4.0 simulator. The proposed optimization algorithm (PB-PSO) aims to minimize both the makespan and the execution monetary costs and maximize the resource utilization simultaneously. During the simulation, we observed a reduction of 10% to 15% in both execution cost and makespan. Furthermore, our algorithm achieved the most optimal cost-makespan trade-offs compared to other competing algorithms.
  • A Survey on Task Scheduling in Edge-Cloud

    Sahoo S.K., Mishra S.K.

    Article, SN Computer Science, 2025, DOI Link

    View abstract ⏷

    In this modern era, cloud computing is not enough to meet today’s intelligent society’s data processing needs, so edge computing has emerged. In contrast to computation in the cloud, it elaborates user proximity and proximity to the data source. To store local, small sized, and processed data on the edges of the network is more effective. The edge paradigm, intended to be a leading computation due to its low latency, also faces many challenges due to computational capabilities and resource availability. Edge computing allows edge devices to release heavy loads and computational operations on the remote server. This allows us to take full advantage of the server-side computing and storage in edge devices. However, the offload of all highly compressed computing operations on a remote server at the same time may become overcrowded, leading to intensive processing delays for many computing operations and unexpectedly elevated power usage. Instead of that, it is possible that spare edge resources may need to be utilized effectively and the access to expensive cloud resources would be restricted. As a result, it is important to investigate the collaborative planning process (scheduling) for the edge servers with a cloud server based on task features, development objectives, and system status. It can assist in performing all the computing functions efficiently and effectively. This paper analyzes and summarizes computing conditions for the edge computing context and classifies the computation of tasks into various edge-cloud computing scenarios. At the end, based on the problem structure, various collaborative planning methods for computational functions are presented.
  • Multi-objective based container placement strategy in CaaS

    Khan M.A., Sahoo B., Mishra S.K., Shankar A.

    Article, Software - Practice and Experience, 2025, DOI Link

    View abstract ⏷

    In contrast to a conventional virtual machine (VM), a container is a lightweight virtualization technology. Containers are becoming a prominent technology for cloud services because of their portable, scalable, and flexible deployments, especially in the Internet of Things (IoT), smart devices, and fog and edge computing. It is a type of operating system-level virtualization in which the kernel allows multiple isolated containers to run independently. Container placement (CP) is a nontrivial problem in Container-as-a-Service (CaaS). CP is mapping to a container over virtual machines (VMs) to execute an application. Designing an efficient CP strategy is complex due to several intertwined challenges. These challenges arise from a diverse spectrum of computing resources, like on-demand and unpredictable fluctuations of IT resources by multiple tenants. In this article, we propose a modified sum-based container placement algorithm called a multi-objective optimization-based container placement algorithm (MSBCPA). In the proposed algorithm, we have considered two metrics: makespan and monetary costs for optimizing available IT resources. We have conducted comprehensive simulation experiments to validate the effectiveness of the proposed algorithm over the CloudSim 4.0 simulator. The proposed optimization algorithm (MSBCPA) aims to minimize the makespan and the execution monetary costs simultaneously. In the simulation, we found that the execution cost and energy consumption cost reduce by 20% to 30% and achieve the best possible cost-makespan trade-offs compared to competing algorithms.
  • An Integrated ELM Based Feature Reduction Combination Detection for Gene Expression Data Analysis

    Tripathy J., Dash R., Pattanayak B.K., Mishra S.K.

    Article, SN Computer Science, 2025, DOI Link

    View abstract ⏷

    Globally, cancer stands as the second leading cause of mortality. Various strategies have been proposed to address this issue, with a strong emphasis on utilizing gene expression data to enhance cancer detection methods. However, challenges arise due to the high dimensionality, limited sample size relative to its dimensions, and the inherent redundancy and noise in many genes. Consequently, it is advisable to employ a subset of genes rather than the entire set for classifying gene expression data. This research introduces a model that incorporates Ranked-based Filter (RF) techniques for extracting significant features and employs Extreme Learning Machine (ELM) for data classification. The computational cost of using RF technique over high dimensional data is low. However extraction of significant genes using one or two stage of reduction is not effective. Thus, a 4-stage feature reduction strategy is applied. The reduced data is then utilized for classification using few variants of ELM model and activation function. Subsequently, a two-stage grading approach is implemented to determine the most suitable classifier for data classification. This analysis is conducted over four microarray gene expression data using four activation function with seven learning based classifiers, from which it is shown that II-ELM classifier outperforms in terms of performance matrix and ROC graph.
  • Message from ICEC Steering Committee Chair ICEC 2024

    Mishra S.K., Puthal D.

    Editorial, Intelligent Computing and Emerging Communication Technologies, ICEC 2024, 2024, DOI Link

  • A Systematic Review on Federated Learning in Edge-Cloud Continuum

    Mishra S.K., Sahoo S.K., Swain C.K.

    Review, SN Computer Science, 2024, DOI Link

    View abstract ⏷

    Federated learning (FL) is a cutting-edge machine learning platform that protects user privacy while enabling collaborative learning across various devices. It is particularly relevant in the current environment when massive volumes of data are generated at the edge of networks by developing technologies like social networking, cloud computing, edge computing, and the Internet of Things. FL reduces the possibility of unauthorized access by third parties by allowing data to stay on local devices, hence mitigating any privacy breaches. The integration of FL in Cloud, Edge, and hybrid Edge-Cloud settings are some of the computing paradigms that this study investigates. We highlight the salient features of FL, go over the main obstacles to its implementation and use, and make recommendations for future study directions. Furthermore, we assess how FL, by facilitating safe and cooperative data sharing among vehicles, can improve service quality in the Internet of Vehicles (IoV). Our study findings are intended to offer practical insights and suggestions that may have an impact on a variety of computing technology research topics.
  • Special issue on collaborative edge computing for secure and scalable Internet of Things

    Puthal D., Mishra A.K., Mishra S.K.

    Editorial, Software - Practice and Experience, 2024, DOI Link

  • Message from Convener and Co-Conveners ICEC-2024

    Mishra S.K., Enduri M.K., Dash J.K., Manikandan V.M.

    Editorial, Intelligent Computing and Emerging Communication Technologies, ICEC 2024, 2024, DOI Link

  • Applications of Federated Learning in Computing Technologies

    Mishra S.K., Sindhu K., Teja M.S., Akhil V., Krishna R.H., Praveen P., Mishra T.K.

    Book chapter, Convergence of Cloud with AI for Big Data Analytics: Foundations and Innovation, 2024, DOI Link

    View abstract ⏷

    Federated learning is a technique that trains the knowledge across different decentralized devices holding samples of information without exchanging them. The concept is additionally called collaborative learning. In federated learning, the clients are allowed separately to teach the deep neural network models with the local data combined at the deep neural network model at the central server. All the local datasets are uploaded to a minimum of one server, so it assumes that local data samples are identically distributed. It doesn’t transmit the information to the server. Because of its security and privacy concerns, it’s widely utilized in many applications like IoT, cloud computing; Edge computing, Vehicular edge computing, and many more. The details of implementation for the privacy of information in federated learning for shielding the privacy of local uploaded data are described. Since there will be trillions of edge devices, the system efficiency and privacy should be taken with no consideration in evaluating federated learning algorithms in computing technologies. This will incorporate the effectiveness, privacy, and usage of federated learning in several computing technologies. Here, different applications of federated learning, its privacy concerns, and its definition in various fields of computing like IoT, Edge, and Cloud Computing are presented.
  • Designing a GSM and ARDUINO based Reliable Home Automation System

    Tripathy J., Dash S., Dash R., Pal J., Padhi S., Mishra S.K.

    Conference paper, Proceedings - 2024 OITS International Conference on Information Technology, OCIT 2024, 2024, DOI Link

    View abstract ⏷

    This paper introduces the design and prototype of a new home automation system that utilizes GSM technology as the network infrastructure to connect its components. The proposed system is composed of two primary parts: the first is the GSM module, which acts as the core of the system, managing, controlling, and monitoring the user's home. Users and system administrators can connect to the GSM locally to access devices and manage system functions. The second part is the hardware interface module, which provides the necessary interface for relays and actuators within the home automation system. The mobile phone, originally designed for making calls and sending text messages, has evolved into a versatile device, especially with the advent of smartphones. In this study, the researcher develops a home automation system using GSM and Arduino, allowing users to control household appliances by simply sending SMS commands through their GSM-based phones.This paper states that a smartphone is not necessary; but an old GSM phone can effectively be used to turn home electronic appliances on and off from any location. The proposed system offers greater scalability and flexibility compared to commercially available home automation systems.
  • A deep transfer learning model for green environment security analysis in smart city

    Sahu M., Dash R., Kumar Mishra S., Humayun M., Alfayad M., Assiri M.

    Article, Journal of King Saud University - Computer and Information Sciences, 2024, DOI Link

    View abstract ⏷

    Green environmental security refers to the state of human-environment interactions that include reducing resource shortages, pollution, and biological dangers that can cause societal disorder. In IoT-enabled smart cities, due to the advancement of technologies, sensors and actuators collect vast quantities of data that are analyzed to extract potentially useful information. However, due to the noise and diversity of the data generated, only a small portion of the massive data collected from smart cities is used. In sustainable Land Use and Land Cover (LULC) management, environmental deterioration resulting from improper land usage in the digital ecosystem is a global issue that has garnered attention. The deep learning techniques of AI are recognized for their capacity to manage vast amounts of erroneous and unstructured data. In this paper, we propose a morphologically augmented fine-tuned DenseNet-121(MAFDN) LULC classification model to automate the categorization of high spatial resolution scene images for environmental conservation. This work includes an augmentation process (i.e. erosion, dilation, blurring, and contrast enhancement operations) to extract spatial patterns and enlarge the training size of the dataset. A few state-of-the-art techniques are incorporated for contrasting the efficacy of the proposed approach. This facilitates green resource management and personalized provision of services.
  • Enhancing Edge Intelligence with Layer-wise Adaptive Precision and Randomized PCA

    Mishra S.K., Velankani Joise Divya G.C., Maddi P.A., Tanniru N.M., Manthena S.L.P.

    Conference paper, Proceedings of 2nd International Conference on Advancements in Smart, Secure and Intelligent Computing, ASSIC 2024, 2024, DOI Link

    View abstract ⏷

    Edge intelligence is the ability of edge devices to carry out intelligent operations, such as object identification, speech recognition, or natural language processing, utilizing machine learning algorithms. The primary goal is to fix edge computing's problems and improve its performance. The main goal of this work is to apply RPCA to increase energy efficiency and reduce memory usage. The algorithm computes the covariance matrix of the centered data, finds the eigenvectors and eigenvalues of the covariance matrix, sorts the eigenvectors and eigenvalues in descending order of the eigenvalues, chooses the first set of eigenvectors, and projects the data onto the chosen eigenvectors. This article employs a technique known as layer-wise adaptive precision (LAP), which decreases the precision of activations in neural network layers that contribute less to output accuracy.
  • Role of federated learning in edge computing: A survey

    Mishra S.K., Kumar N.S., Rao B., Brahmendra, Teja L.

    Article, Journal of Autonomous Intelligence, 2024, DOI Link

    View abstract ⏷

    This paper explores various approaches to enhance federated learning (FL) through the utilization of edge computing. Three techniques, namely Edge-Fed, hybrid federated learning at edge devices, and cluster federated learning, are investigated. The Edge-Fed approach implements the computational and communication challenges faced by mobile devices in FL by offloading calculations to edge servers. It introduces a network architecture comprising a central cloud server, an edge server, and IoT devices, enabling local aggregations and reducing global communication frequency. Edge-Fed offers benefits such as reduced computational costs, faster training, and decreased bandwidth requirements. Hybrid federated learning at edge devices aims to optimize FL in multi-access edge computing (MAEC) systems. Cluster federated learning introduces a cluster-based hierarchical aggregation system to enhance FL performance. The paper explores the applications of these techniques in various domains, including smart cities, vehicular networks, healthcare, cybersecurity, natural language processing, autonomous vehicles and smart homes. The combination of edge computing (EC) and federated learning (FL) is a promising technique gaining popularity across many applications. EC brings cloud computing services closer to data sources, further enhancing FL. The integration of FL and EC offers potential benefits in terms of collaborative learning.
  • Task Offloading Technique Selection In Mobile Edge Computing

    Mishra S.K., Challa H.K., Kotha K.S., Yarramreddy D.P.

    Conference paper, Proceedings of 2nd International Conference on Advancements in Smart, Secure and Intelligent Computing, ASSIC 2024, 2024, DOI Link

    View abstract ⏷

    In distributed computing environments, computation offloading is a vital strategy for maximizing the performance and energy efficiency of mobile devices. Distributed deep learning-based offloading (DDLO) [10] and deep reinforcement learning for online computation offloading (DROO) [10] are two popular methods for solving the computation offloading problem. In DDLO, the data is divided into smaller pieces during offloading and distributed throughout the systems or devices. In DROO, an agent is trained to determine the optimum offloading choices based on the resources at hand, the network environment, and the application's performance requirements. Comparison is presented of both approaches, emphasizing their benefits and drawbacks and the situations when one approach is more suitable than the other. Precision, effectiveness, and adaptability are just a few of the different metrics we use to evaluate the performance of both techniques in a variety of workload and network configuration scenarios. Our findings indicate that while deep reinforcement learning is more able to respond to environmental changes, distributed deep learning-based offloading is more efficient in terms of computational resources.
  • Message from General Chairs ICEC-2024

    Mishra S.K., Mohapatra P.

    Editorial, Intelligent Computing and Emerging Communication Technologies, ICEC 2024, 2024, DOI Link

  • Advanced Temporal Attention Mechanism Based 5G Traffic Prediction Model for IoT Ecosystems

    Samudrala D.S., Mishra S.K., Senapati R.

    Conference paper, Proceedings - 2024 IEEE 21st International Conference on Mobile Ad-Hoc and Smart Systems, MASS 2024, 2024, DOI Link

    View abstract ⏷

    Traffic prediction in5G is important for effective deployment and operation of Internet of Things (IoT) ecosystems. It enables resource management and optimization, guaranteeing that the network can handle unpredictable traffic volumes with-out experiencing traffic jams. This helps to ensure high quality of service and low latency for applications such as autonomous automobiles and virtual reality. Predictive traffic management further enhances user experience by keeping services consistent and reliable, particularly during busy hours. There are various approaches to traffic prediction in 5G networks, and each has advantages and disadvantages of its own. The choice of model will depend on how precise, adaptable, and computationally demanding the network must be. The model proposed in this paper integrates lightweight convolution with temporal attention to deliver accurate and efficient traffic prediction for 5G networks that may further be useful for developing IoT ecosystem.
  • Maximizing Resource Utilization Using Hybrid Cloud-based Task Allocation Algorithm

    Mishra S.K., Mohith G.K.H., Ambati S.T., Guduru K.K., Senapati R.

    Conference paper, Proceedings - 2024 IEEE 21st International Conference on Mobile Ad-Hoc and Smart Systems, MASS 2024, 2024, DOI Link

    View abstract ⏷

    Cloud computing operates similarly to a utility, providing users with on-demand access to various hardware and software resources, billed according to usage. These resources are primarily virtualized, with virtual machines (VMs) serving as critical components. However, task allocation within VMs presents significant challenges, as uneven distribution can lead to underloading or overloading, causing system inefficiencies and potential failures. This study addresses these issues by proposing a novel hybrid task allocation algorithm that combines the strengths of the Artificial Bee Colony (ABC) algorithm with Particle Swarm Optimization (PSO). Our approach aims to enhance resource utilization and reduce the risks of VM overload or underload. We conduct a comprehensive evaluation of the proposed hybrid algorithm against traditional ABC and PSO algorithms, focusing on their effectiveness in managing diverse task loads. The results of our empirical analysis indicate that our hybrid approach outperforms the conventional algorithms, leading to better resource utilization and more accurate task allocation. These findings have significant implications for optimizing task allocation in cloud computing environments, and we suggest potential avenues for future research to further refine these strategies.
  • Enhancing Traffic Flow Through Advanced ACO Mechanism

    Divya G C V.J., Mishra S.K., Puthal D.

    Conference paper, IEEE INFOCOM 2024 - IEEE Conference on Computer Communications Workshops, INFOCOM WKSHPS 2024, 2024, DOI Link

    View abstract ⏷

    Severe traffic congestion is a significant challenge for urban areas, and improving sustainable urban development is critical, yet traditional traffic management systems often struggle to cope with dynamic real-time conditions due to their reliance on predetermined schedules and fixed control mechanisms. This paper advocates for the application of optimizing techniques, specifically an enhanced version of ant colony optimization (ACO), to alleviate this challenge. By effectively managing and enhancing vehicle movement, these approaches target the reduction of congestion, travel times, and costs while concurrently enhancing fuel efficiency. This approach can also be adapted to optimize the deployment and movement of drones in wireless communication networks, ensuring optimal coverage and resource utilization. Implementations, comparisons, and visualizations show how these approaches help improve traffic movement, thereby minimizing congestion-associated problems.
  • AI Based Feature Selection for Intrusion Detection Classifiers in Cloud of Things

    Ravala R.K., Polisetty K.B., Mishra S.K.

    Conference paper, 2024 1st International Conference on Cognitive, Green and Ubiquitous Computing, IC-CGU 2024, 2024, DOI Link

    View abstract ⏷

    The popularity of cloud computing can be attributed to its on-demand nature, scalability, and flexibility. However, because of its heightened vulnerability and propensity for so-phisticated, widespread attacks, safeguarding this distributed en-vironment presents difficulties. Conventional IDS are insufficient. The proposed IDS for cloud environments in this study makes use of ensemble feature selection and classification techniques. This approach robustly distinguishes between attacks and normal traf-fic by merging individual classifiers through voting. Performance measures and ROC-AUC analysis show that the new approach is significantly more accurate and has fewer false alarms than the previous one. For cloud intrusion detection, this method provides a statistically better option.
  • A Panoramic Review on Cutting-Edge Methods for Video Anomaly Localization

    Nayak R., Mishra S.K., Dalai A.K., Pati U.C., Das S.K.

    Review, IEEE Access, 2024, DOI Link

    View abstract ⏷

    Video anomaly detection and localization is the process of spatiotemporally localizing the anomalous video segment corresponding to the abnormal event or activities. It is challenging due to the inherent ambiguity of anomalies, diverse environmental factors, the intricate nature of human activities, and the absence of adequate datasets. Further, the spatial localization of the video anomalies (video anomaly localization) after the temporal localization of the video anomalies (video anomaly detection) is also a complex task. Video anomaly localization is essential for pinpointing the anomalous event or object in the spatial domain. Hence, the intelligent video surveillance system must have video anomaly detection and localization as key functionalities. However, the state-of-the-art lacks a dedicated survey of video anomaly localization. Hence, this article comprehensively surveys the cutting-edge approaches for video anomaly localization, associated threshold selection strategies, publicly available datasets, performance evaluation criteria, and open trending research challenges with potential solution strategies.
  • An Ensemble Deep Learning Model for Oral Squamous Cell Carcinoma Detection Using Histopathological Image Analysis

    Das M., Dash R., Kumar Mishra S., Kumar Dalai A.

    Article, IEEE Access, 2024, DOI Link

    View abstract ⏷

    Deep learning approaches for medical image analysis are widely applied for the recognition and classification of different kinds of cancer. In this study, histopathological images of oral cells are analyzed for the programmed recognition of Oral squamous cell carcinoma (OSCC) using the proposed framework. The suggested model applies transfer learning and ensemble learning in two phases. In the 1st phase, a few Convolutional neural network (CNN) models are considered through transfer learning applications for OSCC detection. In the 2nd phase, the ensemble model is constructed considering the best two pre-trained CNN from the 1st phase. The proposed classifier is compared with leading-edge models like Alexnet, Resnet50, Resnet101, Inception net, Xception net, and InceptionresnetV2. Results are analyzed to demonstrate the effectiveness of the suggested framework. A three-phase comparative analysis is considered. Firstly, various metrics including accuracy, recall, F-score, and precision are evaluated. Secondly, a graphical analysis using a loss and accuracy graph is performed. Lastly, the accuracy of the proposed classifier is compared with that of other models from existing literature. Following the three-stage performance evaluation, the proposed ensemble classifier exhibits enhanced performance with an accuracy of 97.88%.
  • Comparative Evaluation of Optimization Techniques for Industrial Wireless Sensor Network Hello Flood Attack Mitigation

    Srinivas S., Tejaswi S., Mishra S.K.

    Conference paper, Proceedings - 2024 3rd International Conference on Computational Modelling, Simulation and Optimization, ICCMSO 2024, 2024, DOI Link

    View abstract ⏷

    Protecting Industrial Wireless Sensor Networks (IWSNs) means ensuring that crucial industrial processes remain as stable and whole as ever. In order to mitigate the 'Hello Flood Attack' in IWSNs, this paper compares three optimization heuristic techniques: Genetic Algorithm (GA), Simulated Annealing (SA) and Particle Swarm Optimization (PSO). Genetic Algorithm (GA) progresses remedies, Simulated Annealing (SA) interactively fixes communication setup and Particle Swarm Optimization (PSO) upgrades features to elevate vigor. The study looks into how well each optimization technique enhances network resilience and protects against the negative effects of Hello Flood Attacks. There is also a benchmark scenario for comparison. These results offer valuable information on the development of safe, secure IWSNs by pointing out the benefits and drawbacks of these systems.
  • Predictive VM Consolidation for Latency Sensitive Tasks in Heterogeneous Cloud

    Kumar Swain C., Routray P., Kumar Mishra S., Alwabel A.

    Conference paper, Lecture Notes in Networks and Systems, 2023, DOI Link

    View abstract ⏷

    Virtualization technology plays a crucial role for reducing the cost in a cloud environment. Efficient virtual machine (VM) packing method that focuses on compaction of hosts such that most of its resources are used when it serves the user requests. Here our aim is to reduce the power requirements of a cloud system by focusing on minimizing the number of hosts. We propose a predictive scheduling approach considering the deadline of a task request and make flexible decisions to allocate the tasks to hosts. Experimental results show that the proposed approach can save around 5 to 10% power consumption than the standard VM packing methods in most scenarios. Even when the total power consumption requirements remain the same as that of standard methods in some scenarios, the average number of hosts required in the cloud environment are reduced and thereby reducing the cost.
  • Blockchain-Based Medical Report Management and Distribution System

    Sahoo S.K., Mishra S.K., Guru A.

    Book chapter, 6G Enabled Fog Computing in IoT: Applications and Opportunities, 2023, DOI Link

    View abstract ⏷

    Generally, the Hospital operations contain loads of scientific reviews which can be a crucial part of operations. As a result of integrating pathology and other testing labs within the medical center, hospitals today have improved their business operations while also achieving greener and faster diagnoses. Many dif-ferent strategies are used in hospital operations, from patient admission and control to health center cost management. This will raise operational complexity and make it more challenging to manage, especially when combined with newly introduced offerings like pathology and pharmaceutical control. In order to overcome this issue, we employ the Hyperledger notion and a blockchain era to retain the data of each individual transaction with 100% authenticity. Instead of using a centralized server, all transactions are encrypted and kept as blocks, which are then used to authenticate within a network of computers. Additionally, we employ the hyper ledger concept to associate and store all associated scientific files for each transaction with a date stamp. This makes it possible to confirm the legitimacy of each document and identify any changes made by someone else. This consultation defines that affected person's clinical record is personal and every affected person has his very own privacy. To guard the reviews from hackers or enemies, who will make changes on clinical reviews and additionally saving the statistics without lacking any content material which performs an important position to shape a life. To study reviews, we are using a block chain method which splits the information into modules. Using this method hackers or enemies can't get the right information. "To bring forward a secure, safe, efficient, and legitimate medical report man-agement system" is the primary goal of this project.
  • LiDAR-based Building Damage Detection in Edge-Cloud Continuum

    Mishra S.K., Sanisetty M.L., Shaik A.Z., Thotakura S.L., Aluru S.L., Puthal D.

    Conference paper, 2023 IEEE International Conference on Dependable, Autonomic and Secure Computing, International Conference on Pervasive Intelligence and Computing, International Conference on Cloud and Big Data Computing, International Conference on Cyber Science and Technology Congress, DASC/PiCom/CBDCom/CyberSciTech 2023, 2023, DOI Link

    View abstract ⏷

    In recent years, natural disasters such as earth-quakes and hurricanes have caused significant damage to buildings and infrastructure worldwide. As a result, there has been an increasing demand for efficient and accurate methods of assessing the extent of building damage to facilitate effective recovery efforts. One emerging technology that shows great promise in this area is Light Detection and Ranging (Li-DAR). Therefore, this paper proposes a novel detection framework utilizing textural feature extraction strategies for Li-DAR-based building damage detection. Li-DAR, a remote sensing technology, has ability to create detailed maps of buildings and other infrastructure, allowing for precise identification and measurement of damage caused by natural disasters. Integration of the popular paradigm Edge cloud continuum extends cloud's capabilities to the edge of the network, enabling more effective post-disaster recovery efforts. Smart Li-DAR sensors pre-process the captured data and send it to the nearest edge device for further processing.. Inclusion of machine learning algorithms like K-means clustering algorithm here is used to classify the buildings into damaged and undamaged classes by analyzing the extracted textural features. The scheme can detect various types of building damage. The cloud server is utilized to store the processed maps. The integration of the Edge-Cloud Continuum (ECC) has added more value by reducing the network usage, and latency of the Li-DAR-based building damage detection system. ECC enables processing and analysis of data at the point of origin as well as large-scale data processing and storage in cloud-based systems. This proposed framework has shown promising results in preliminary experiments and has the potential to revolutionize post-disaster recovery efforts by providing efficient building damage maps.
  • CS-Based Energy-Efficient Service Allocation in Cloud

    Kumar Mishra S., Kumar Sahoo S., Kumar Swain C., Guru A., Kumar Sethy P., Sahoo B.

    Conference paper, Lecture Notes in Networks and Systems, 2023, DOI Link

    View abstract ⏷

    Nowadays, cloud computing is growing rapidly and has been developed as an adequate and adaptable paradigm in solving large-scale problems. Since the number of cloud users and their requests are increasing fast, the loads on the cloud data center may be under-loaded or over-loaded. These circumstances induce various problems, such as high response time and energy consumption. High energy consumption in the cloud data center has drastic negative impacts on the environment. Literature shows that scheduling plays a significant role in the reduction of energy consumption. In the recent decade, this problem has attracted huge interest among researchers, and several solutions have been proposed. Energy-efficient service (task) allocation with high Customer Satisfaction (CS) constraint has become a critical problem of a cloud. In this paper, a high CS-based energy-efficient service allocation framework has been designed. This optimizes the energy consumption as well as the CS level in the cloud. The proposed algorithm is simulated in CloudSim simulator and compared with some standard algorithms. The simulation results show in favor of the proposed algorithm.
  • Automatic Detection of Oral Squamous Cell Carcinoma from Histopathological Images of Oral Mucosa Using Deep Convolutional Neural Network

    Das M., Dash R., Mishra S.K.

    Article, International Journal of Environmental Research and Public Health, 2023, DOI Link

    View abstract ⏷

    Worldwide, oral cancer is the sixth most common type of cancer. India holds the second position, with one of the highest numbers of oral cancer patients, contributing almost one-third of the total count. Among several types of oral cancer, the most common and dominant one is oral squamous cell carcinoma (OSCC). The major causes of oral cancer are tobacco consumption, excessive alcohol consumption, unhygienic mouth conditions, betel quid chewing, viral infection (namely human papillomavirus), etc. Early detection of OSCC, in its preliminary stage, gives more chances for better treatment and proper therapy. In this paper, the authors propose a convolutional neural network model for the automatic and early detection of OSCC; for experimental purposes, histopathological oral cancer images are considered. The proposed model is compared and analyzed with state-of-the-art deep learning models such as VGG16, VGG19, AlexNet, ResNet50, ResNet101, MobileNet, and Inception Net. The proposed model achieved a cross-validation accuracy of 97.82%, which indicates the suitability of the proposed approach for the automatic classification of oral cancer data.
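    A minimal sketch of a small binary CNN classifier of the kind compared in the abstract, assuming 224x224 RGB histopathology patches and an OSCC/normal label; this is illustrative, not the paper's exact architecture.

      # Sketch: small CNN for binary OSCC vs. normal classification (Keras).
      import tensorflow as tf

      def build_oscc_cnn(input_shape=(224, 224, 3)):
          model = tf.keras.Sequential([
              tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=input_shape),
              tf.keras.layers.MaxPooling2D(),
              tf.keras.layers.Conv2D(64, 3, activation="relu"),
              tf.keras.layers.MaxPooling2D(),
              tf.keras.layers.Conv2D(128, 3, activation="relu"),
              tf.keras.layers.GlobalAveragePooling2D(),
              tf.keras.layers.Dense(64, activation="relu"),
              tf.keras.layers.Dropout(0.5),
              tf.keras.layers.Dense(1, activation="sigmoid"),  # OSCC vs. normal
          ])
          model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
          return model

      model = build_oscc_cnn()
      model.summary()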
  • A Hybrid Encryption Approach using DNA-Based Shift Protected Algorithm and AES for Edge-Cloud System Security

    Mishra S.K., Cherukuri C., Dheeraj P.V., Puthal D.

    Conference paper, OCIT 2023 - 21st International Conference on Information Technology, Proceedings, 2023, DOI Link

    View abstract ⏷

    Modern applications, such as smart cities, connected homes, and crisis management systems, have driven the emergence of the edge-cloud continuum, which enables data processing to occur closer to the source, reducing latency and enhancing data processing efficiency. However, due to the distributed nature of edge nodes and cloud environments, data security remains a critical concern. Malicious actors may intercept or eavesdrop on communication channels between edge devices and the cloud. DNA computing, a groundbreaking security concept inspired by biological DNA, offers a promising solution to address these security challenges. This paper proposes a DNA-based cryptographic method for secure data transfer and communication in edge-cloud computing environments. The research also examines various data security threats in the edge-cloud continuum and explores potential countermeasures.
  • A comparative study of different scheduling approaches for splittable latency sensitive tasks in Fog-Cloud environment

    Sandeep K.S., Koundinya C.A., Prabhas A.V., Swain C.K., Mishra S.K.

    Conference paper, 2023 2nd International Conference on Ambient Intelligence in Health Care, ICAIHC 2023, 2023, DOI Link

    View abstract ⏷

    IoT has revolutionized the way we live and work by connecting different devices through the Internet. At present, the number of IoT devices is increasing rapidly due to advances in technology and the growing comforts of life. Many people now use IoT devices regularly, and it is estimated that by the end of 2030 there will be 30 billion users of IoT applications. These devices send data to the cloud for processing. Due to the distance of the cloud from the IoT devices, application requests receive delayed service responses. To handle latency-sensitive applications, we therefore require micro-cloud services such as fog servers deployed near the data generation points. The fog layer lies between the IoT devices and the cloud and acts as an intermediate layer, which helps reduce task latency and provides better performance. As the number of IoT applications keeps increasing, the resources available with the fog nodes may not handle the upcoming demands. To meet these demands, we use splittable methods to allocate the tasks to fog/cloud nodes more compactly. If a task can be split into different modules before its deadline, we split the given task, allocate the sub-tasks to different fog nodes/servers, and then collect the results back from the fog nodes/servers and merge them into a single unit. With the help of this method, we can increase the performance of the system.
  • Latency Aware – Resource Planning in Edge Using Fuzzy Logic

    Sahoo S.K., Dash A., Vemula D.R., Swain C.K., Mishra S.K.

    Conference paper, 2023 2nd International Conference on Ambient Intelligence in Health Care, ICAIHC 2023, 2023, DOI Link

    View abstract ⏷

    As a potential paradigm for enabling effective and low-latency computation at the network's edge, edge computing has recently come into the spotlight. In edge computing environments, resource allocation is essential for ensuring the best possible resource utilization while still satisfying application requirements. Traditional resource allocation algorithms, however, struggle to effectively capture the uncertainties and ambiguity associated with resource availability and application needs because of the dynamic and varied nature of edge environments. This research offers a fuzzy logic-based method for planning to allocate resources in edge computing. Fuzzy logic offers a flexible and understandable framework for modeling and reasoning with imperfect and ambiguous data. The suggested method offers a more reliable and adaptable resource allocation system that can successfully address the uncertainties present in edge computing by utilizing fuzzy logic. The resource allocation process incorporates fuzzy membership functions to capture the vagueness of resource availability and application requirements. Fuzzy rules are defined to map the linguistic variables representing resource availability, application demands, and performance objectives to appropriate resource allocation decisions. The fuzzy inference engine then utilizes these rules to make intelligent decisions regarding resource allocation, considering the fuzzy inputs and the system's predefined objectives.
  • A Smart Logistic Classification Method for Remote Sensed Image Land Cover Data

    Sahu M., Dash R., Mishra S.K., Puthal D.

    Article, SN Computer Science, 2022, DOI Link

    View abstract ⏷

    A smart system integrates sensing, acquisition, classification, and management components to interpret and analyze a situation and generate decisions from the available data in a predictive way. Remotely sensed images are an essential tool for evaluating and analyzing land cover dynamics, particularly forest-cover change. The remote data gathered for this purpose from different sensors have high spatial resolution and thus suffer from high inter-class and low intra-class variability, which degrades classification accuracy. To address this problem, this research proposes a smart logistic fusion-based supervised multi-class classification (SLFSMC) model to obtain a thematic map of different land cover types and thereby support smart actions. In the pre-processing stage of the proposed work, a pair of closing and opening morphological operations is employed to produce a fused image that exploits the contextual information of adjacent pixels. Thereafter, the quality of the fused image is assessed using four fusion metrics. In the second phase, this fused image is taken as input to the proposed classifiers. Afterward, a multi-class classification model is designed based on the supervised learning concept to generate maps for analyzing and exporting decisions in any critical climatic situation. To compare the performance of the proposed SLFSMC with conventional classification techniques such as the Naïve Bayes classifier, decision tree, support vector machine, and K-nearest neighbors, a statistical tool called the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) is employed. The proposed SLFSMC system is applied to regions of Victoria, a state of Australia, after deforestation caused by different reasons.
  • Crop Recommendation System Using Support Vector Machine Considering Indian Dataset

    Mishra T.K., Mishra S.K., Sai K.J., Peddi S., Surusomayajula M.

    Conference paper, Lecture Notes in Networks and Systems, 2022, DOI Link

    View abstract ⏷

    For a long time, agriculture has been a major profession and source of livelihood for Indians. Still, agriculture is often not profitable, and many farmers take drastic steps because they cannot survive the burden of loans. Hence, agriculture is one area where there is still large scope for development. In comparison with other countries, India has the highest production rate in agriculture. However, most agricultural fields remain underdeveloped due to the lack of deployment of ecosystem control technologies. Agriculture, when combined with technology, can bring the finest results. Crop yield depends on multiple climatic conditions such as air temperature, soil temperature, humidity, and soil moisture. In general, farmers depend on self-monitoring and experience for harvesting fields. Scarcity of water is a major issue in today's life and affects people worldwide. Water is thus also a vital component of crop yield; here we consider rainfall instead of direct water measurements. Predicting the crop selection/yield in advance of harvest would help policymakers and farmers take appropriate measures for farming, marketing, and storage. Thus, in this paper we propose crop selection using machine learning techniques, namely the support vector machine (SVM) and polynomial regression. This model will help farmers know the yield of their crop before cultivating the agricultural field and thus help them make appropriate decisions. It attempts to solve the issue by building a prototype of an interactive prediction system. Accurate yield prediction requires understanding the functional relationship between yield and these parameters, because along with all the advances in the machines and technologies used in farming, useful and accurate information also plays a significant role. In this paper, we have simulated the SVM and polynomial regression techniques to predict which crop can yield better profit. Both models are simulated comprehensively on the Indian dataset, and an analytical report is presented.
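    A minimal sketch of the two models named in the abstract: an SVM classifier for crop selection and polynomial regression for yield prediction. The feature columns and values are illustrative placeholders, not the Indian dataset used in the paper.

      # Sketch: SVM crop selection + degree-2 polynomial regression for yield.
      import numpy as np
      from sklearn.svm import SVC
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler, PolynomialFeatures
      from sklearn.linear_model import LinearRegression

      # climate/soil features: [air temp, soil temp, humidity, rainfall]
      X = np.array([[30, 26, 60, 120], [22, 20, 80, 300], [35, 30, 40, 50]])
      crop = np.array(["maize", "rice", "millet"])   # crop selection target
      yield_t = np.array([3.1, 4.5, 1.8])            # yield target (t/ha)

      clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, crop)
      reg = make_pipeline(PolynomialFeatures(degree=2), LinearRegression()).fit(X, yield_t)

      print(clf.predict([[28, 24, 65, 150]]), reg.predict([[28, 24, 65, 150]]))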
  • Combination of Reduction Detection Using TOPSIS for Gene Expression Data Analysis

    Tripathy J., Dash R., Pattanayak B.K., Mishra S.K., Mishra T.K., Puthal D.

    Article, Big Data and Cognitive Computing, 2022, DOI Link

    View abstract ⏷

    In high-dimensional data analysis, Feature Selection (FS) is one of the most fundamental issues in machine learning and requires the attention of researchers. These datasets are characterized by a huge feature space, out of which only a few features are significant for analysis. Thus, significant feature extraction is crucial. Various techniques are available for feature selection; among them, filter techniques are significant in this community, as they can be used with any type of learning algorithm, drastically lower the running time of optimization algorithms, and improve the performance of the model. Furthermore, the application of a filter approach depends on the characteristics of the dataset as well as on the machine learning model. To avoid these issues, this research considers a combination of feature reduction (CFR) by designing a pipeline of filter approaches for high-dimensional microarray data classification. Considering four filter approaches, sixteen combinations of pipelines are generated. The feature subset is reduced at different levels, and ultimately the significant feature set is evaluated. The pipelined filter techniques are Correlation-Based Feature Selection (CBFS), Chi-Square Test (CST), Information Gain (InG), and Relief Feature Selection (RFS), and the classification techniques are Decision Tree (DT), Logistic Regression (LR), Random Forest (RF), and k-Nearest Neighbor (k-NN). The performance of CFR depends highly on the datasets as well as on the classifiers. Thereafter, the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) method is used for ranking all reduction combinations and identifying the superior filter combination among them.
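    A minimal TOPSIS sketch for ranking filter-combination pipelines by multiple criteria (e.g., accuracy to maximise, number of selected features to minimise). The decision matrix, weights, and criteria below are illustrative, not values from the paper.

      # Sketch: TOPSIS ranking of alternatives over benefit/cost criteria.
      import numpy as np

      def topsis(matrix, weights, benefit):
          """matrix: alternatives x criteria; benefit[j]=True if larger is better."""
          M = matrix / np.linalg.norm(matrix, axis=0)   # vector normalisation
          V = M * weights                               # weighted normalised matrix
          ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
          anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))
          d_pos = np.linalg.norm(V - ideal, axis=1)
          d_neg = np.linalg.norm(V - anti, axis=1)
          return d_neg / (d_pos + d_neg)                # closeness: higher = better

      # rows: pipelines, columns: [accuracy, number of selected features]
      scores = topsis(np.array([[0.94, 40.], [0.91, 25.], [0.95, 60.]]),
                      weights=np.array([0.7, 0.3]),
                      benefit=np.array([True, False]))
      print(np.argsort(-scores))  # ranking of the pipelines, best first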
  • A Data Aggregation Approach Exploiting Spatial and Temporal Correlation among Sensor Data in Wireless Sensor Networks

    Dash L., Pattanayak B.K., Mishra S.K., Sahoo K.S., Jhanjhi N.Z., Baz M., Masud M.

    Article, Electronics (Switzerland), 2022, DOI Link

    View abstract ⏷

    Wireless sensor networks (WSNs) have various applications, including zone surveillance, environmental monitoring, and event tracking, where the operation mode is long term. WSNs are characterized by low-powered, battery-operated sensor devices with a finite source of energy. Due to the dense deployment of these devices, it is practically impossible to replace the batteries. The finite source of energy should be utilized in a meaningful way to maximize the overall network lifetime. In the space domain, there is a high correlation among sensor observations across the large volume of the sensor network topology. Consecutive observations constitute the temporal correlation, depending on the nature of the physical phenomenon sensed by the nodes. These spatio-temporal correlations can be utilized efficiently to achieve maximum savings in energy use. In this paper, we propose a Spatial and Temporal Correlation-based Data Redundancy Reduction (STCDRR) protocol which eliminates redundancy at the source level and the aggregator level. The estimated performance score of the proposed algorithm is approximately 7.2, whereas the scores of existing algorithms such as KAB (K-means algorithm based on the ANOVA model and Bartlett test) and ED (Euclidean distance) are 5.2 and 0.5, respectively. This reflects that the STCDRR protocol can achieve a higher data compression rate, a lower false-negative rate, and a lower false-positive rate. These results are valid for numeric data collected from a real data set; the experiment does not consider non-numeric values.
  • Task Allocation in Containerized Cloud Computing Environment

    Akram Khan M., Kumar Mishra S., Kumari A., Sahoo B.

    Conference paper, ASSIC 2022 - Proceedings: International Conference on Advancements in Smart, Secure and Intelligent Computing, 2022, DOI Link

    View abstract ⏷

    Containerization technology makes use of operating-system-level virtualization to package an application with its required libraries so that it runs isolated from other processes on the same host. The lightweight, easy deployment of containers has made them popular in many data centers. Containers have captured part of the virtual machine market and emerged as a lightweight technology that offers better microservices support. Many organizations widely deploy container technology to handle the diverse and unpredictable workloads of modern applications such as edge/fog computing, big data, and IoT, in either proprietary clusters or public and private cloud data centers. In the cloud computing environment, scheduling plays a pivotal role; likewise, in container technology, scheduling is critical for achieving optimum utilization of the available resources. Designing an efficient scheduler is itself a challenging task. The challenges arise from various aspects, such as the diversity of computing resources, maintaining fairness among numerous tenants sharing resources as per their requirements, unexpected variations in resource demands, and the heterogeneity of jobs. This survey provides a multi-perspective overview of container scheduling. We organize the container scheduling problem into categories based on the type of optimization approach applied: linear programming modeling, heuristic, meta-heuristic, and machine learning/artificial intelligence-based mathematical models. Previous research has addressed either the placement of virtual machines onto physical machines or of container instances onto physical machines, which leads to either under-utilized or over-utilized PMs. In this paper, we combine both virtualization technologies, containers as well as VMs. The primary aim is to optimize resource utilization in terms of CPU time. We propose a meta-heuristic algorithm named Sorted Task-Based Allocation. Simulation results show that the proposed Sorted TBA algorithm performs better than the Random and Unsorted TBA algorithms.
  • VM consolidation based on overload detection and VM selection policy

    Jena S., Sahu L.K., Mishra S.K., Sahoo B.

    Conference paper, Proceedings of the Confluence 2021: 11th International Conference on Cloud Computing, Data Science and Engineering, 2021, DOI Link

    View abstract ⏷

    Even though cloud computing has been a big boon to the ICT (Information and Communication Technology) industry, it faces high energy consumption and substantial CO2 emission. Due to the increase in demand for computational resources, it is now necessary and of utmost significance to improve the energy efficiency of the cloud system. Virtual Machine (VM) consolidation is one of the powerful tools to improve energy efficiency, as it reduces the number of VM migrations by managing VMs on overloaded/underloaded hosts. Implementation of VM consolidation techniques decreases hardware usage, energy consumption, and data footprints, which leads to an increased Quality of Service (QoS). In this paper, an energy-aware VM selection algorithm is proposed along with an overload detection algorithm. The proposed algorithm runs in the CloudSim toolkit environment and is analyzed based on different parameters such as energy consumption, SLA violation, server shutdowns, and the number of VM migrations to assess the improvement in energy efficiency. This modified approach exhibited better performance on all parameters compared to the existing algorithms.
  • Analysis of Machine Learning Technologies for the Detection of Diabetic Retinopathy

    Mohanty B.C.S., Mishra S., Mishra S.K.

    Book chapter, Machine Learning for Healthcare Applications, 2021, DOI Link

    View abstract ⏷

    In today's world, disease diagnosis plays a vital role in the area of medical imaging. Medical imaging is the method and procedure of creating visual descriptions of the interior of a body for clinical investigation and clinical intervention, as well as visual depiction of the function of some organs or tissues. Medical imaging also deals with disease detection, and by using machine learning in medical imaging we can get a better view of detecting disease. What is Machine Learning (ML)? ML is an application of artificial intelligence (AI) that gives a system the capacity to learn and improve itself. It mainly focuses on the development of computer programs that can access data and use it for themselves. In this chapter we focus on detecting diabetic retinopathy using machine learning. Diabetes is a disease that results in too much sugar in the blood, and diabetic retinopathy is one of its major complications. Diabetic retinopathy is an eye disease brought about by complications of diabetes, and we ought to recognize it early for effective treatment. As the disease advances, the sight of a patient may begin to deteriorate. Two stages are recognized, namely non-proliferative diabetic retinopathy and proliferative diabetic retinopathy. It should be detected as soon as possible, as it can cause permanent loss of vision. By using ML in medical imaging we can detect it much faster and more accurately. In this chapter we analyze different ML technologies, algorithms, and models to diagnose diabetic retinopathy in an efficient manner to support the healthcare system.
  • Facial expression recognition system (FERS): A survey

    Mishra S., Gupta R., Mishra S.K.

    Conference paper, Smart Innovation, Systems and Technologies, 2021, DOI Link

    View abstract ⏷

    Human facial expressions and emotions are considered the fastest medium of communication for expressing thoughts. The ability to identify the emotional states of people around us is an essential component of natural communication. A facial expression and emotion detector can be used to know whether a person is sad, happy, angry, and so on, allowing us to better understand the thoughts and ideas of a person. This paper briefly explores the idea of a computerized facial expression detection system. First, we discuss an overview of the facial expression recognition system (FERS). We also present a glimpse of current technologies that are used for FERS. A comparative analysis of existing methodologies is also presented in this paper. It provides basic information and a general understanding of up-to-date state-of-the-art studies; experienced researchers can also look for productive directions for future work.
  • Crop Recommendation System using KNN and Random Forest considering Indian Data set

    Mishra T.K., Mishra S.K., Sai K.J., Alekhya B.S., Nishith A.R.

    Conference paper, Proceedings - 2021 19th OITS International Conference on Information Technology, OCIT 2021, 2021, DOI Link

    View abstract ⏷

    Agriculture plays a crucial role in the growth of the country's economy. In comparison to other countries, India has the highest production rate in agriculture. Agriculture, when combined with technology, can bring the finest results. Crop prediction is a highly complex task determined by multiple factors such as nitrogen, phosphorus, and potassium content, rainfall, temperature, humidity, and pH level. Predicting the crop in advance would help policymakers and farmers take appropriate measures for farming, marketing, and storage. Thus, in this paper we propose crop selection using machine learning techniques such as K-Nearest Neighbour (KNN) and Random Forest. Both models are simulated comprehensively on an Indian dataset, and an analytical report is presented. This model will help farmers know the type of crop before cultivating the agricultural field and thus help them make appropriate decisions.
  • A Static Approach for Access Control with an Application-Derived Intrusion System

    Chattopadhyay S., Mishra S., Mishra S.K.

    Conference paper, Smart Innovation, Systems and Technologies, 2021, DOI Link

    View abstract ⏷

    In the era of cyberspace, enforcing an Intrusion Detection System (IDS) and firewall on a system is a common practice among network administrators and engineers. But with time, just implementing an IDS and firewall is not enough to secure our systems, especially with the present trend of new, rapidly spreading malware attacks. It is quite easy to victimize a machine, even with IDS and firewalls enforced on the networks, by uploading shells in the form of pdf, jpg, txt, etc., so a machine can be compromised without much effort. For this reason, we apply a new approach to overcome this anomaly. Understandably, with the increasing demand for IoT devices in the market, safeguarding these devices is also a big challenge. Motivated by this problem, we perform inspections to maintain stability and functionality by adding code that allows the application to keep track of its operating constraints during an attack. Against this background, we discuss intrusion detection systems, firewalls, and their applicability. Further, we identify open challenges in this direction.
  • A real-time sentiments analysis system using twitter data

    Dave A., Bharti S., Patel S., Mishra S.K.

    Conference paper, Smart Innovation, Systems and Technologies, 2021, DOI Link

    View abstract ⏷

    As social media platforms become the go-to for knee-jerk reactions to events by the current populace, it has become extremely important for event managers, celebrities, and organizations to constantly monitor their perceived social image online. This becomes especially difficult during key periods of heightened activity, like events, announcements, etc., as the rate at which tweets are posted is much higher than what a human can read or comprehend. In this paper, we exploit existing sentiment analysis techniques to develop a real-time sentiment analysis system that provides real-time sentiments of the audience on the micro-blogging site Twitter toward an event, organization, or person. This system serves as a feedback mechanism helping users understand the perceived image of the event/organization. This feedback, if provided in a timely manner, can be used to improve the situation at hand or act as positive reinforcement for the team. In today's world, neglecting social media can prove detrimental to the success of an event or organization. We analyze two different events from two separate domains to understand and demonstrate the benefits of our system.
  • Energy-efficient clustering with rotational supporter in WSN

    Parida P., Sahu B., Parida A.K., Mishra S.K.

    Conference paper, Smart Innovation, Systems and Technologies, 2021, DOI Link

    View abstract ⏷

    The wireless sensor network is an evergreen field of research, and sensors are used everywhere. Since the sensors are small in size and have a limited amount of initial energy, energy saving becomes highly important and challenging. Wherever we deploy these sensors, they may not be accessible all the time. Hence, they should be operated with a suitable algorithm to utilize energy efficiently. We propose an energy-saving algorithm that reduces the overheads of the cluster head (CH). In order to assist the CH, an assistant called the supporting CH (SCH) is selected. This responsibility is rotational, and most of the nodes get a chance to serve as CH so that the energy utilization is uniform. Through the proposed algorithm, the lifetime of the network is increased. The proposed algorithm is simulated using the NS3 simulator and demonstrates energy-efficient clustering and increased lifetime compared to other algorithms that do not use an SCH.
  • Energy-aware task allocation for multi-cloud networks

    Mishra S.K., Mishra S., Alsayat A., Jhanjhi N.Z., Humayun M., Sahoo K.S., Luhach A.K.

    Article, IEEE Access, 2020, DOI Link

    View abstract ⏷

    In recent years, the growth rate of cloud computing technology has been increasing exponentially, mainly because of its extraordinary services with expanding computation power, the possibility of massive storage, and all other services with maintained quality of service (QoS). Task allocation is one of the best solutions to improve different performance parameters in the cloud, but when multiple heterogeneous clouds come into the picture, the allocation problem becomes more challenging. This research work proposes a resource-based task allocation algorithm, which is implemented and analyzed to understand the improved performance of the heterogeneous multi-cloud network. The proposed task allocation algorithm, Energy-aware Task Allocation in Multi-Cloud Networks (ETAMCN), minimizes the overall energy consumption and also reduces the makespan. The results show that the makespan approximately overlaps for different tasks and does not show a significant difference. However, the average energy consumption improvement through ETAMCN is approximately 14%, 6.3%, and 2.8% compared with the random allocation algorithm, the Cloud Z-Score Normalization (CZSN) algorithm, and the multi-objective scheduling algorithm with fuzzy resource utilization (FR-MOS), respectively. The average SLA violation of ETAMCN is also observed for different scenarios.
  • Autonomic cloud resource provisioning and scheduling using meta-heuristic algorithm

    Kumar M., Sharma S.C., Goel S., Mishra S.K., Husain A.

    Article, Neural Computing and Applications, 2020, DOI Link

    View abstract ⏷

    Resource provisioning and scheduling is a prominent problem due to the heterogeneity as well as the dispersion of cloud resources. Cloud service providers are building more and more data centers due to the demand for high computational power, which is a serious threat to the environment in terms of energy requirements. To overcome these issues, we need an efficient meta-heuristic technique that allocates applications among the virtual machines fairly and optimizes the quality of service (QoS) parameters to meet the end users' objectives. Binary particle swarm optimization (BPSO) is used to solve real-world discrete optimization problems, but simple BPSO does not provide an optimal solution due to the improper behavior of its transfer function. To overcome this problem, we have modified the transfer function of binary PSO so that it provides better exploration and exploitation capability and optimizes various QoS parameters such as makespan, energy consumption, and execution cost. The computational results demonstrate that the modified transfer-function-based BPSO algorithm is more efficient and outperforms other baseline algorithms over various synthetic datasets.
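    A minimal sketch of the binary-PSO position update, contrasting the standard sigmoid (S-shaped) transfer function with a V-shaped variant of the kind often used to improve exploration and exploitation; the paper's exact modified transfer function is not reproduced here.

      # Sketch: S-shaped vs. V-shaped transfer functions in binary PSO.
      import numpy as np

      rng = np.random.default_rng(0)

      def s_shaped(v):
          return 1.0 / (1.0 + np.exp(-v))        # standard sigmoid

      def v_shaped(v):
          return np.abs(np.tanh(v))              # a common V-shaped variant

      def update_positions(x, v, transfer):
          """x: current binary positions, v: velocities."""
          prob = transfer(v)
          if transfer is s_shaped:
              return (rng.random(x.shape) < prob).astype(int)  # sample bit directly
          flip = rng.random(x.shape) < prob      # V-shaped rule: flip current bit
          return np.where(flip, 1 - x, x)

      x = rng.integers(0, 2, size=10)
      v = rng.normal(size=10)
      print(update_positions(x, v, s_shaped), update_positions(x, v, v_shaped))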
  • Leukemia Diagnosis Based on Machine Learning Algorithms

    Patil Babaso S., Mishra S.K., Junnarkar A.

    Conference paper, 2020 IEEE International Conference for Innovation in Technology, INOCON 2020, 2020, DOI Link

    View abstract ⏷

    Leukemia is brought about by the rapid production of abnormal white blood cells. The high number of abnormal white blood cells is unable to fight infection, and they impair the ability of the bone marrow to produce red blood cells and platelets. Machine learning techniques are widely used in the diagnosis and classification of different leukemia types in patients. In this paper, we describe the different machine learning algorithms, such as Support Vector Machines, k-Nearest Neighbour, Neural Networks, Naïve Bayes, and deep learning algorithms, which are used to classify leukemia into its sub-types, and we present a comparative study of these algorithms.
  • Energy-Efficient Service Allocation Techniques in Cloud: A Survey

    Mishra S.K., Sahoo S., Sahoo B., Jena S.K.

    Review, IETE Technical Review (Institution of Electronics and Telecommunication Engineers, India), 2020, DOI Link

    View abstract ⏷

    The demand for cloud computing infrastructure is increasing day by day to meet the requirements of small and medium enterprises. The data-center-centric cloud technology accounts for a high share of the IT industry's energy consumption. The amount of energy consumed in a data center depends on the allocation of user service requests to virtual machines running on different hosts. Minimization of energy consumption in the data center is a significant issue and is addressed by the optimal allocation of cloud resources. In this paper, we discuss how service allocation strategies have been used to optimize the energy consumption of a cloud system. A generalized system architecture is presented, based on which we define the service allocation problem and the energy model. Further, we present a taxonomy of the various energy-efficient resource allocation techniques found in the literature. Finally, various research challenges related to energy-efficient service allocation in the cloud are discussed.
  • Token based data security in inter cluster communication in wireless sensor network

    Sahu B., Parida P., Parida A.K., Mishra S.K.

    Conference paper, 2020 International Conference on Computer Science, Engineering and Applications, ICCSEA 2020, 2020, DOI Link

    View abstract ⏷

    In this paper, a data security operation is performed for inter-cluster communication. It is based on token identification of the clusters. The sender cluster checks the identification of the receiver cluster before any communication is initiated. Each cluster is represented by its head node. The head nodes are assigned a token by the base station. The token number is called the identification number (IN) of the head node and hence of the cluster. The proposed idea is simulated using the NS3 simulator, and its performance with respect to security is evaluated and compared with other algorithms.
  • Load balancing in cloud computing: A big picture

    Mishra S.K., Sahoo B., Parida P.P.

    Review, Journal of King Saud University - Computer and Information Sciences, 2020, DOI Link

    View abstract ⏷

    Scheduling, or the allocation of user requests (tasks) in the cloud environment, is an NP-hard optimization problem. According to the cloud infrastructure and the user requests, the cloud system carries some load, which may be underloaded, overloaded, or balanced. Underloaded and overloaded situations cause different system failures concerning power consumption, execution time, machine failure, etc. Therefore, load balancing is required to overcome all the mentioned problems. This load balancing of tasks (which may be dependent or independent) on virtual machines (VMs) is a significant aspect of task scheduling in clouds. There are various types of load in the cloud network, such as memory load, computation (CPU) load, and network load. Load balancing is the mechanism of detecting overloaded and underloaded nodes and then balancing the load among them. Researchers have proposed various load balancing approaches in cloud computing to optimize different performance parameters. We present a taxonomy of the load balancing algorithms in the cloud. A brief explanation of the performance parameters considered in the literature and their effects is presented in this paper. To analyze the performance of heuristic-based algorithms, simulations are carried out in the CloudSim simulator and the results are presented in detail.
  • Allocation of energy-efficient task in cloud using DVFS

    Mishra S.K., Khan M.A., Sahoo S., Sahoo B.

    Article, International Journal of Computational Science and Engineering, 2019, DOI Link

    View abstract ⏷

    Nowadays, the expanding computational capabilities of cloud systems rely on the minimization of consumed power to make them sustainable and economically productive. Power management of cloud data centers has received great attention from industry and academia, as data centers consume high energy and thus increase the operational cost. One of the core approaches for the conservation of energy in the cloud data center is task scheduling. This task allocation in a heterogeneous environment is a well-known NP-hard problem, which has prompted researchers to propose various heuristic techniques. In this paper, a technique based on dynamic voltage and frequency scaling (DVFS) is proposed for optimizing the energy consumption in the cloud environment. The basic idea is to address the trade-off between the energy consumption and the makespan of the system. Here, we formally introduce a model that includes various subsystems and assess the implementation of the algorithm in a heterogeneous environment.
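    A minimal worked example of the dynamic power model usually assumed with DVFS: dynamic power grows roughly as C·V²·f, so running a task at a lower voltage/frequency pair trades a longer execution time for lower power. The constants below are illustrative, not values from the paper.

      # Sketch: energy vs. makespan trade-off under two DVFS levels.
      def task_energy(cycles, capacitance, voltage, frequency, p_static=0.5):
          """Energy (J) and time (s) to execute `cycles` CPU cycles at one DVFS level."""
          exec_time = cycles / frequency                    # seconds
          p_dynamic = capacitance * voltage ** 2 * frequency
          return (p_dynamic + p_static) * exec_time, exec_time

      # Same task at a high and a low DVFS level (1 GHz / 1.2 V vs. 0.6 GHz / 0.9 V)
      for v, f in [(1.2, 1.0e9), (0.9, 0.6e9)]:
          e, t = task_energy(cycles=3.0e9, capacitance=1.0e-9, voltage=v, frequency=f)
          print(f"V={v} V, f={f/1e9:.1f} GHz -> energy={e:.2f} J, makespan={t:.2f} s")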
  • A secure VM consolidation in cloud using learning automata

    Mishra S.K., Sahoo B., Jena S.K.

    Book chapter, Advances in Intelligent Systems and Computing, 2019, DOI Link

    View abstract ⏷

    Cloud computing is a progression of distributed systems that has been adopted worldwide, both scientifically and commercially. For optimal utilization of the cloud's potential power, effective and efficient algorithms are needed to select the best resources from the available cloud resources for different applications. This allocation of user requests to cloud resources can optimize several parameters, such as energy consumption, makespan, and throughput. In this paper, we propose a learning-automata-based algorithm to minimize the makespan of the cloud system and to increase resource utilization while maintaining secure resource allocation. We simulate our algorithm, ALOLA, with the help of the CloudSim simulator in a heterogeneous environment. During the comparison, we provide a finite set of tasks to the ALOLA algorithm and estimate the makespan of the system. We compare the proposed technique (ALOLA), i.e., with learning automata, against an approach without learning automata (a random allocation algorithm), and show the system performance.
  • Secure Big Data Computing in Cloud: An Overview

    Mishra S.K., Sahoo S., Sahoo B.

    Book chapter, Encyclopedia of Big Data Technologies, 2019, DOI Link

    View abstract ⏷

    Advancement in information technology, together with rapid growth in all other areas such as business, medicine, engineering, and scientific research, has resulted in the generation of huge data. Decision-making from rapidly growing huge data is a challenging job in terms of management and processing, and is termed big data computing. Big data computing demands voluminous storage and computation for data processing, which is delivered to the user through cloud infrastructures. The complexity of the system reduces the attainable security level, which poses a challenge for researchers. This paper elaborates on the evolution of big data computing, the security issues of big data computing in the cloud, different solutions for providing a better security level, and finally open technical challenges and future directions.
  • An Improved Approach for Sarcasm Detection Avoiding Null Tweets

    Bharti S.K., Babu K.S., Mishra S.K.

    Conference paper, Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2019, DOI Link

    View abstract ⏷

    Among the plethora of social media, Twitter has emerged as the favorite destination for researchers in recent times. Many researchers are inclined to work on Twitter due to the availability of massive tweets and its unique features like hashtags and short messages. In recent times, various studies have preferred the hashtags (#sarcasm and #sarcastic) to collect Twitter datasets for sarcasm detection. However, hashtag-based distant supervision suffers from the inclusion of null tweets in the datasets, which is a critical problem for sarcasm detection. In this article, an algorithm is proposed for the automatic detection and filtration of null tweets in Twitter data. Additionally, an algorithm to identify sarcastic tweets using the context within a tweet is also proposed. This approach uses dictionaries of hand-picked hashtag words and emoticons as the context within a tweet. Finally, we deploy a rule-based algorithm to analyze the performance of the proposed approach. The proposed approach attains an accuracy of 97.3% (after filtering null tweets) and 83.13% (without filtering null tweets) using the rule-based approach. The attained results conclude that, after the elimination of null tweets, the performance of the proposed system improves significantly.
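    A minimal sketch of one plausible reading of the null-tweet filter: a tweet tagged #sarcasm that carries no other context (sentiment-bearing hashtag words or emoticons) is treated as null and dropped. The tiny dictionaries below are illustrative stand-ins for the hand-picked ones mentioned in the abstract, not the paper's actual rules.

      # Sketch: rule-based filtering of "null" sarcasm-tagged tweets.
      import re

      POSITIVE_HASHTAGS = {"love", "happy", "great", "awesome"}   # assumed dictionary
      EMOTICONS = {":)", ":(", ":D", ";)", ":'("}                 # assumed dictionary
      SARCASM_TAGS = {"sarcasm", "sarcastic"}

      def is_null_tweet(tweet):
          tags = {t.lower() for t in re.findall(r"#(\w+)", tweet)}
          has_context = bool((tags & POSITIVE_HASHTAGS) or
                             any(e in tweet for e in EMOTICONS))
          return bool(tags & SARCASM_TAGS) and not has_context

      tweets = ["I just love waiting in queues #sarcasm #love",
                "yes. sure. #sarcasm"]
      print([t for t in tweets if not is_null_tweet(t)])   # keeps only the first tweet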
  • Co-resident Attack in Cloud Computing: An Overview

    Sahoo S., Mishra S.K., Sahoo B., Turuk A.K.

    Book chapter, Encyclopedia of Big Data Technologies, 2019, DOI Link

    View abstract ⏷

    A cloud rewards organizations with agility and cost-efficiency, but the benefits of the cloud come with security challenges. The sheer volume and immense size of modern-day clouds (big data) make them hard to protect and, consequently, vulnerable to abuse. Security and privacy issues are intensified by the velocity, volume, and variety of big data, such as large-scale cloud infrastructures, the diversity of data sources and formats, and a massive amount of inter-cloud migration. The virtualization method allows the sharing of computing resources among many tenants, which may be business partners, suppliers, competitors, or attackers. Even though there is substantial logical isolation among the virtual machines (VMs), shared hardware creates vulnerabilities to co-resident attacks. This paper gives a glimpse of security issues in the cloud, specifically related to VMs. Here, we concentrate our study on the co-resident VM attack and its defense methods.
  • Resource allocation for video transcoding in the multimedia cloud

    Sahoo S., Parida I., Mishra S.K., Sahoo B., Turuk A.K.

    Book chapter, Advances in Intelligent Systems and Computing, 2019, DOI Link

    View abstract ⏷

    Video content providers like YouTube and Netflix cater their content, i.e., news and shows, on the web, accessible anytime, anywhere. Multiple screens like TVs, smartphones, and laptops have created a demand to transcode video into the appropriate video specification while ensuring different quality of service (QoS) requirements such as delay. Transcoding a large, high-definition video requires a lot of time and computation. The cloud transcoding solution allows video service providers to overcome these difficulties through the pay-as-you-use scheme, with the assurance of online support to handle unpredictable demands. This paper presents a cost-efficient cloud-based transcoding framework and algorithm (CVS) for streaming service providers. The dynamic resource provisioning policy used in the framework finds the number of virtual machines required for a particular set of video streams. Simulation results based on a YouTube dataset show that the CVS algorithm performs better than the FCFS scheme.
  • An adaptive task allocation technique for green cloud computing

    Mishra S.K., Puthal D., Sahoo B., Jena S.K., Obaidat M.S.

    Article, Journal of Supercomputing, 2018, DOI Link

    View abstract ⏷

    The rapid growth of today's IT demands reflects the increased use of cloud data centers. Reducing computational power consumption in cloud data centers is one of the challenging research issues of the current era. Power consumption is directly proportional to the number of resources assigned to tasks, so it can be reduced by limiting the number of resources assigned to serve a task. In this paper, we study the energy consumption in the cloud environment based on a variety of services and provide provisions to promote green cloud computing, which helps reduce the overall energy consumption of the system. Task allocation in the cloud computing environment is a well-known problem through which green cloud computing can be facilitated. We propose an adaptive task allocation algorithm for the heterogeneous cloud environment and apply the proposed technique to minimize the makespan of the cloud system and reduce the energy consumption. We evaluate the proposed algorithm in the CloudSim simulation environment, and simulation results show that our proposed algorithm is more energy efficient in the cloud environment than other existing techniques.
  • On the placement of controllers in software-Defined-WAN using meta-heuristic approach

    Sahoo K.S., Puthal D., Obaidat M.S., Sarkar A., Mishra S.K., Sahoo B.

    Article, Journal of Systems and Software, 2018, DOI Link

    View abstract ⏷

    Software Defined Networking (SDN) is a popular modern network technology that decouples the control logic from the underlying hardware devices. The control logic is implemented as a software entity that resides in a server called the controller. In a Software-Defined Wide Area Network (SDWAN) with n nodes, deploying k controllers (k < n) is one of the challenging issues. When the primary path between a switch and its controller fails due to internal or external factors, it severely interrupts the network's availability. In this regard, the proposed approach provides a seamless backup mechanism against single link failure with minimum communication delay based on a survivability model. In order to obtain an efficient solution, we consider the controller placement problem (CPP) as a multi-objective combinatorial optimization problem and solve it using two population-based meta-heuristic techniques: Particle Swarm Optimization (PSO) and the Firefly Algorithm (FFA). For the CPP, three metrics are considered: (a) controller-to-switch latency, (b) inter-controller latency, and (c) multi-path connectivity between switch and controller. The performance of the algorithms is evaluated on a set of publicly available network topologies in order to obtain the optimum number of controllers and controller positions. We then present the Average Delay Rise (ADR) metric to measure the increased delay due to the failure of the primary path. By comparing the performance of our scheme to a competing scheme, we find that the proposed scheme effectively improves the survivability of the control path as well as the performance of the network.
  • 2D-DWT and Bhattacharyya Distance Based Classification Scheme for the Detection of Acute Lymphoblastic Leukemia

    Mishra S., Mishra S.K., Majhi B., Sa P.K.

    Conference paper, Proceedings - 2018 International Conference on Information Technology, ICIT 2018, 2018, DOI Link

    View abstract ⏷

    This paper proposes an efficient classification system for separating normal blood cells from pathological cells. The suggested system employs an adaptive histogram equalization scheme to reduce the noise present in the microscopic images. The two-dimensional discrete wavelet transform (2D-DWT) is applied separately to the nucleus and cytoplasm regions to generate the feature matrix. The significant and uncorrelated features are chosen using a combination of PCA and the Bhattacharyya distance. Subsequently, the reduced feature set is fed to a back-propagation neural network for classification. A public dataset, ALL-IDB1, is used to validate the proposed scheme, which shows better results compared to competing schemes. The accuracy of the suggested scheme is found to be 97.11% in the case of combined features from the nucleus and cytoplasm regions, whereas it is 95.19% and 90.38% when the features are taken separately.
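    A minimal sketch of the feature side of such a pipeline: a single-level 2D-DWT on a segmented nucleus/cytoplasm patch, simple sub-band statistics as features, and a per-feature Bhattacharyya distance (under a Gaussian assumption) to keep the most class-separating features. Data and feature choices are illustrative, not the paper's exact configuration.

      # Sketch: 2D-DWT features + Bhattacharyya-distance feature ranking.
      import numpy as np
      import pywt

      def dwt_features(patch):
          cA, (cH, cV, cD) = pywt.dwt2(patch.astype(float), "haar")
          return np.array([b.mean() for b in (cA, cH, cV, cD)] +
                          [b.std() for b in (cA, cH, cV, cD)])

      def bhattacharyya(f_pos, f_neg):
          """Distance between two 1-D Gaussians fitted to one feature per class."""
          m1, m2 = f_pos.mean(), f_neg.mean()
          v1, v2 = f_pos.var() + 1e-12, f_neg.var() + 1e-12
          return (0.25 * np.log(0.25 * (v1 / v2 + v2 / v1 + 2)) +
                  0.25 * (m1 - m2) ** 2 / (v1 + v2))

      rng = np.random.default_rng(1)
      X_pos = np.vstack([dwt_features(rng.normal(5, 2, (64, 64))) for _ in range(20)])
      X_neg = np.vstack([dwt_features(rng.normal(0, 1, (64, 64))) for _ in range(20)])
      scores = [bhattacharyya(X_pos[:, j], X_neg[:, j]) for j in range(X_pos.shape[1])]
      print(np.argsort(scores)[::-1][:4])   # indices of the 4 most discriminative features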
  • VM Selection using DVFS Technique to Minimize Energy Consumption in Cloud System

    Mishra S.K., Mishra S., Bharti S.K., Sahoo B., Puthal D., Kumar M.

    Conference paper, Proceedings - 2018 International Conference on Information Technology, ICIT 2018, 2018, DOI Link

    View abstract ⏷

    Energy consumption is becoming a key issue in the operation and maintenance of cloud systems. Virtual machine selection plays an important role in the execution of tasks without violating the SLA. In this paper, a VM selection technique is proposed using Dynamic Voltage and Frequency Scaling (DVFS) for optimizing the energy consumption and makespan of the cloud system. We propose a heuristic for the selection of a VM for each task to optimize energy utilization by applying the DVFS technique. The proposal also incorporates an energy model supporting the evaluation of energy consumption in cloud data centers. Each task has an energy-based SLA for execution in the cloud system. The DVFS mechanism is applied at the virtual machine level to reduce the energy consumption of the cloud system. Moreover, the performance of diverse algorithms (random allocation and FCFS) is compared with the proposed DVFS-based VM selection strategy with the help of CloudSim.
  • Energy-efficient VM-placement in cloud data center

    Mishra S.K., Puthal D., Sahoo B., Jayaraman P.P., Jun S., Zomaya A.Y., Ranjan R.

    Article, Sustainable Computing: Informatics and Systems, 2018, DOI Link

    View abstract ⏷

    Employing cloud computing to acquire the benefits of the cloud by optimizing various parameters that meet changing demands is a challenging task. The optimal mapping of tasks to virtual machines (VMs) and of VMs to physical machines (PMs) (known as VM placement) is necessary for improving energy consumption and resource utilization. The high heterogeneity of tasks as well as resources, great dynamism, and virtualization make the consolidation issue more complicated in the cloud computing system. In this paper, a complete mapping (i.e., task to VM and VM to PM) algorithm is proposed. The tasks are classified according to their resource requirements, an appropriate VM is then searched for, and again an appropriate PM is searched for where the selected VM can be deployed. The proposed algorithm reduces energy consumption by decreasing the number of active PMs, while also minimizing the makespan and the task rejection rate. We evaluate our proposed approach in the CloudSim simulator, and the results demonstrate the effectiveness of the proposed algorithm over some existing standard algorithms.
  • Sustainable Service Allocation Using a Metaheuristic Technique in a Fog Server for Industrial Applications

    Mishra S.K., Puthal D., Rodrigues J.J.P.C., Sahoo B., Dutkiewicz E.

    Article, IEEE Transactions on Industrial Informatics, 2018, DOI Link

    View abstract ⏷

    Reducing energy consumption in the fog computing environment is both a research and an operational challenge for the current research community and industry. There are several industries such as finance industry or healthcare industry that require a rich resource platform to process big data along with edge computing in fog architecture. As a result, sustainable computing in a fog server plays a key role in fog computing hierarchy. The energy consumption in fog servers depends on the allocation techniques of services (user requests) to a set of virtual machines (VMs). This service request allocation in a fog computing environment is a nondeterministic polynomial-time hard problem. In this paper, the scheduling of service requests to VMs is presented as a bi-objective minimization problem, where a tradeoff is maintained between the energy consumption and makespan. Specifically, this paper proposes a metaheuristic-based service allocation framework using three metaheuristic techniques, such as particle swarm optimization (PSO), binary PSO, and bat algorithm. These proposed techniques allow us to deal with the heterogeneity of resources in the fog computing environment. This paper has validated the performance of these metaheuristic-based service allocation algorithms by conducting a set of rigorous evaluations.
  • First score auction for pricing-based resource selection in vehicular cloud

    Mishra S., Mishra S.K., Sahoo B., Obaidat M.S., Puthal D.

    Conference paper, CITS 2018 - 2018 International Conference on Computer, Information and Telecommunication Systems, 2018, DOI Link

    View abstract ⏷

    Selecting vehicles to supply resources is a crucial research problem in the vehicular cloud and highly depends on the pricing of the resources. Resource pricing, in turn, is an intricate problem influenced by the market demand and the quality of service provided. Widespread and autonomous vehicular networks require reputation as a medium for trusting the supplier vehicles. Taking into account the above factors, we design the utility of supplier and consumer vehicles. Subsequently, a first-score auction mechanism is proposed and modeled for the consumer vehicles to obtain maximum utility. Additionally, the protocol enables the supplier vehicles to decide the optimal pricing of resources. The first-score auction protocol is then simulated, and the experimental results indicate better performance of our protocol than other standard protocols.
  • Improving Energy Usage in Cloud Computing Using DVFS

    Mishra S.K., Parida P.P., Sahoo S., Sahoo B., Jena S.K.

    Conference paper, Advances in Intelligent Systems and Computing, 2018, DOI Link

    View abstract ⏷

    Energy-related issues in distributed systems, whether energy conservation or energy utilization, have turned out to be critical. Researchers have worked on this energy issue, and most of them have used Dynamic Voltage and Frequency Scaling (DVFS) as a power management technique, where a lower supply voltage is allowed thanks to a reduction of the clock frequency of the processors. The cloud environment has multiple physical hosts, and each host has several virtual machines (VMs). All online tasks or service requests are scheduled to different VMs. In this paper, an energy-optimized allocation algorithm is proposed in which the DVFS technique is used for virtual machines. The fundamental idea is to strike a balance between energy consumption and the setup time of the different modes of hosts or VMs. Here, the system model, including the different sub-system models, is explained formally, and the implementation of the algorithms in homogeneous as well as heterogeneous environments is evaluated.
  • Energy-efficient deployment of edge datacenters for mobile clouds in sustainable IoT

    Mishra S.K., Puthal D., Sahoo B., Sharma S., Xue Z., Zomaya A.Y.

    Article, IEEE Access, 2018, DOI Link

    View abstract ⏷

    Achieving quick responses with limited energy consumption in mobile cloud computing is an active area of research. Energy consumption increases when a user's request (task) runs on the local mobile device instead of executing in the cloud, whereas latency becomes an issue when the task executes in the cloud environment instead of on the mobile device. Therefore, a tradeoff between energy consumption and latency is required for building a sustainable Internet of Things (IoT), and for that we introduce a middle layer, an edge computing layer, to avoid latency in IoT. There are several real-time applications, such as smart city and smart health, where mobile users upload their tasks into the cloud or execute them locally. We aim to minimize the energy consumption of the mobile device as well as the energy consumption of the cloud system while meeting the task's deadline, by offloading the task to an edge datacenter or the cloud. This paper proposes an adaptive technique to optimize both parameters, i.e., energy consumption and latency, by offloading the task and also by selecting the appropriate virtual machine for its execution. In the proposed technique, if the specified edge datacenter is unable to provide resources, then the user's request is sent to the cloud system. Finally, the proposed technique is evaluated using a real-world scenario to measure its performance and efficiency. The simulation results show that the total energy consumption and execution time decrease after introducing edge datacenters as a middle layer.
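    A minimal sketch of the kind of offloading decision described above: estimate energy and latency for local, edge, and cloud execution and pick the cheapest option whose latency still meets the task deadline. All constants are illustrative placeholders, not values from the paper.

      # Sketch: deadline-aware local / edge / cloud offloading decision.
      from dataclasses import dataclass

      @dataclass
      class Site:
          name: str
          cpu_rate: float   # cycles per second available to the task
          tx_time: float    # seconds to ship the task's input data there
          power: float      # watts charged to the mobile device meanwhile

      def choose_site(task_cycles, deadline, sites):
          best = None
          for s in sites:
              latency = s.tx_time + task_cycles / s.cpu_rate
              energy = s.power * latency
              if latency <= deadline and (best is None or energy < best[1]):
                  best = (s.name, energy, latency)
          return best   # None means the deadline cannot be met anywhere

      sites = [Site("local", 1e9, 0.0, 4.0),
               Site("edge", 4e9, 0.05, 1.0),
               Site("cloud", 2e10, 0.90, 1.0)]
      print(choose_site(task_cycles=3e9, deadline=1.0, sites=sites))  # edge wins here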
  • Time efficient dynamic threshold-based load balancing technique for Cloud Computing

    Mishra S.K., Khan M.A., Sahoo B., Puthal D., Obaidat M.S., Hsiao K.F.

    Conference paper, IEEE CITS 2017 - 2017 International Conference on Computer, Information and Telecommunication Systems, 2017, DOI Link

    View abstract ⏷

    Cloud computing is a novel technology that leads to several new challenges for organizations worldwide. Cloud computing supports virtual machines (VMs) to host multiple applications simultaneously. Balancing large numbers of applications in the heterogeneous cloud environment becomes challenging, as the hypervisor scheduling controls all VMs. When the scheduler allocates tasks to overloaded VMs, the performance of the cloud system degrades. In this paper, we present a novel load balancing approach to organize the virtualized resources of the data center efficiently. In our approach, the load on a VM scales up and down according to the resource capacity of the VM. The proposed scheme minimizes the makespan of the system, maximizes resource utilization, and reduces the overall energy consumption. We evaluate our approach in the CloudSim simulation environment; the devised approach reduces the waiting time compared to existing approaches and optimizes the makespan of the cloud data center.
  • Metaheuristic solutions for solving controller placement problem in SDN-based WAN architecture

    Sahoo K.S., Sarkar A., Mishra S.K., Sahoo B., Puthal D., Obaidat M.S., Sadun B.

    Conference paper, ICETE 2017 - Proceedings of the 14th International Joint Conference on e-Business and Telecommunications, 2017, DOI Link

    View abstract ⏷

    Software Defined Networking (SDN) is a popular paradigm in modern networking systems that decouples the control logic from the underlying hardware devices. The control logic is implemented as a software component residing in a server called the controller. To increase performance, deploying multiple controllers in a large-scale network is one of the key challenges of SDN. To solve this, the controller placement problem (CPP) has been considered a multi-objective combinatorial optimization problem and addressed with different heuristics. Such heuristics can be executed within a specific time frame for small and medium-sized topologies, but they are out of scope for large-scale instances like Wide Area Networks (WANs). In order to obtain better results, we propose two population-based meta-heuristic algorithms, Particle Swarm Optimization (PSO) and Firefly, for the optimal placement of the controllers; they take a particular set of objective functions and return the best possible positions. The problem is defined taking into consideration both controller-to-switch and inter-controller latency as the objective functions. The performance of the algorithms is evaluated on a set of publicly available network topologies in terms of execution time. The results show that the Firefly algorithm performs better than PSO and a random approach under various conditions.
  • Time efficient task allocation in cloud computing environment

    Mishra S.K., Khan M.A., Sahoo B., Jena S.K.

    Conference paper, 2017 2nd International Conference for Convergence in Technology, I2CT 2017, 2017, DOI Link

    View abstract ⏷

    Cloud computing is an evolution of distributed systems that has been adopted worldwide, both scientifically and commercially. For optimal use of the cloud's potential power, effective and efficient algorithms are required to select the best resources from the available cloud resources for different applications. This allocation of user requests to cloud resources can optimize various parameters such as energy consumption, makespan, and throughput. The task allocation or mapping problem is a well-known NP-complete problem. In this paper, we propose a Task-Based Allocation (TBA) algorithm to minimize the makespan of the cloud system and to increase resource utilization. We simulated TBA in the CloudSim simulator in a heterogeneous environment. CloudSim is a simulation tool for the cloud environment that allows cloud services and infrastructure to be evaluated and tested before real-world deployment. During the comparison, we provide sorted tasks to the TBA algorithm in one run and unsorted tasks in another. We compared sorted-TBA, unsorted-TBA, and a random algorithm, and the sorted-TBA algorithm performs best.
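A minimal sketch of how a greedy minimum-completion-time allocation can be compared under sorted and unsorted task orderings, mirroring the comparison described above; this is not the published TBA pseudocode, and the task and VM figures are illustrative.

```python
# Sketch of a greedy task-to-VM allocation, run once with tasks as given and
# once with tasks sorted (longest first), to compare the resulting makespan.
# Illustrative only; the actual TBA algorithm may differ in detail.
def greedy_allocate(task_lengths, vm_mips):
    finish = [0.0] * len(vm_mips)
    for length in task_lengths:
        # choose the VM that completes this task earliest
        i = min(range(len(vm_mips)), key=lambda v: finish[v] + length / vm_mips[v])
        finish[i] += length / vm_mips[i]
    return max(finish)  # makespan

tasks = [5200, 800, 4100, 9600, 300, 2700, 6100]
vms = [500, 1000, 1500]
print("unsorted makespan:", round(greedy_allocate(tasks, vms), 2))
print("sorted makespan  :", round(greedy_allocate(sorted(tasks, reverse=True), vms), 2))
```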
  • Evaluating performance of the Non-linear data structure for job queuing in the cloud environment

    Sahoo S., Mishra S.K., Swami D., Khan A., Sahoo B.

    Conference paper, 2017 2nd International Conference for Convergence in Technology, I2CT 2017, 2017, DOI Link

    View abstract ⏷

    The cloud computing era comes with the advancement of technologies in the fields of processing, storage, network bandwidth and access, Internet security, etc. Several advantages of cloud computing include scalability, high computing power, on-demand resource access, and high availability. One of the biggest challenges faced by a cloud provider is to schedule incoming jobs to virtual machines (VMs) such that certain constraints are satisfied. Automated applications, smart devices, and sensor-based applications need large data storage and computing resources, and need output within a particular time limit. Many works have proposed and commented on various data structures and allocation policies for real-time jobs on the cloud. Most of these technologies use a queue-based mapping of tasks to VMs. This work presents a novel min-heap based VM allocation (MHVA) designed for real-time jobs. The proposed MHVA is compared with a queue-based random allocation using makespan and energy consumption as performance metrics. Simulations are performed for different scenarios varying the number of tasks and VMs. The simulation results show that MHVA is significantly better than the random algorithm.
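The min-heap idea can be illustrated with Python's heapq, keyed by each VM's accumulated finish time so the least-loaded VM is always selected next; this is a sketch of the concept under simplifying assumptions, not the paper's exact MHVA procedure.

```python
# Minimal illustration of min-heap based VM allocation: the heap is keyed by
# each VM's accumulated finish time, so the least-loaded VM is popped first.
import heapq

def mhva(task_lengths, vm_mips):
    # heap entries: (current finish time, vm index)
    heap = [(0.0, i) for i in range(len(vm_mips))]
    heapq.heapify(heap)
    placement = []
    for length in task_lengths:
        finish, vm = heapq.heappop(heap)       # least-loaded VM
        finish += length / vm_mips[vm]
        heapq.heappush(heap, (finish, vm))
        placement.append(vm)
    makespan = max(f for f, _ in heap)
    return placement, makespan

print(mhva([3000, 1500, 4500, 600, 2200], [1000, 750, 500]))
```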
  • Adaptive scheduling of cloud tasks using ant colony optimization

    Mishra S.K., Sahoo B., Manikyam P.S.

    Conference paper, ACM International Conference Proceeding Series, 2017, DOI Link

    View abstract ⏷

    Efficient scheduling of heterogeneous tasks to heterogeneous processors for any application is crucial to attain high performance. Cloud computing provides a heterogeneous environment to perform various operations. The scheduling of user requests (tasks) in the cloud environment is an NP-hard optimization problem. Researchers have presented various heuristic and metaheuristic techniques to provide sub-optimal solutions to the problem. In this paper, we propose an Ant Colony Optimization (ACO) based task scheduling (ACOTS) algorithm to optimize the makespan of the system and reduce the average waiting time. The designed algorithm is implemented and simulated in the CloudSim simulator. Simulation results are compared with the Round Robin and Random algorithms and show satisfactory output.
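A toy ant-colony scheduling loop is sketched below to illustrate the general ACO mechanics (pheromone-guided probabilistic assignment, evaporation, and deposit on the best solution); all parameters are illustrative and the details differ from the published ACOTS algorithm.

```python
# Toy ant-colony style scheduler: ants build task-to-VM assignments guided by
# pheromone and a greedy heuristic (speed / task length); the best assignment
# so far deposits pheromone each iteration. Illustrative only.
import random

def aco_schedule(task_lengths, vm_mips, ants=10, iters=30,
                 alpha=1.0, beta=2.0, rho=0.1, seed=1):
    random.seed(seed)
    n_t, n_v = len(task_lengths), len(vm_mips)
    tau = [[1.0] * n_v for _ in range(n_t)]           # pheromone trails
    best_assign, best_makespan = None, float("inf")
    for _ in range(iters):
        for _ in range(ants):
            finish = [0.0] * n_v
            assign = []
            for t in range(n_t):
                eta = [vm_mips[v] / task_lengths[t] for v in range(n_v)]
                weights = [(tau[t][v] ** alpha) * (eta[v] ** beta) for v in range(n_v)]
                v = random.choices(range(n_v), weights=weights)[0]
                finish[v] += task_lengths[t] / vm_mips[v]
                assign.append(v)
            makespan = max(finish)
            if makespan < best_makespan:
                best_makespan, best_assign = makespan, assign
        # evaporation, then deposit on the best-so-far assignment
        for t in range(n_t):
            for v in range(n_v):
                tau[t][v] *= (1 - rho)
        for t, v in enumerate(best_assign):
            tau[t][v] += 1.0 / best_makespan
    return best_assign, best_makespan

print(aco_schedule([4000, 2500, 7000, 1200, 3300], [1000, 800, 600]))
```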
  • Improved energy-efficient target coverage in wireless sensor networks

    Panda B.S., Bhatta B.K., Mishra S.K.

    Conference paper, Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2017, DOI Link

    View abstract ⏷

    Achieving optimal field coverage is a significant challenge in various sensor network applications. In some specific situations, the sensor field (target) may have coverage gaps due to the random deployment of sensors; hence, the optimized level of target coverage cannot be obtained. Given a set of sensors in the plane, the target coverage problem is to separate the sensors into different groups and assign them specific time intervals so that the coverage lifetime can be maximized, under the constraint that the network remains connected. The target coverage problem is widely studied at present due to its many practical applications in Wireless Sensor Networks (WSNs). This paper focuses on the target coverage problem together with minimizing the energy usage of the network so that the lifetime of the whole network can be increased. Since the minimum connected target coverage problem is known to be NP-complete, several heuristics as well as approximation algorithms have been proposed. Here, we propose a heuristic for the connected target coverage problem in WSNs. We compare the performance of our heuristic with an existing heuristic and show that our algorithm performs better for the connected target coverage problem. We have also implemented the 2-connected target coverage property, which provides fault tolerance as well as robustness to the network, and we propose an algorithm that achieves target coverage along with 2-connectivity.
  • Deadline-constraint services in cloud with heterogeneous servers

    Sahoo S., Mishra S.K., Sahoo B., Puthal D., Obaidat M.S.

    Conference paper, IEEE CITS 2017 - 2017 International Conference on Computer, Information and Telecommunication Systems, 2017, DOI Link

    View abstract ⏷

    The development of delay-sensitive applications needs massive data storage and computing resources, especially in a typical cloud environment. The cloud computing paradigm provides a broad range of services, viz. software, platform, and infrastructure, for various applications (both real-time and non-real-time) over the Internet. However, in the case of the Infrastructure-as-a-Service (IaaS) cloud platform, either over-provisioning or under-provisioning of resources becomes a challenging issue for time-constrained applications. Accurate modeling of cloud centers is not feasible due to the nature of cloud centers and the diversity of user requests. We present an analytical model to estimate the performance of the cloud center for deadline-sensitive tasks. We use the model to find the number of tasks that miss their deadlines, the waiting time of a task, and the response time of the service, among other metrics.
  • Execution of real time task on cloud environment

    Sahoo S., Nawaz S., Mishra S.K., Sahoo B.

    Conference paper, 12th IEEE International Conference Electronics, Energy, Environment, Communication, Computer, Control: (E3-C3), INDICON 2015, 2016, DOI Link

    View abstract ⏷

    Cloud computing is Internet-based computing where resources, software, and information are shared on an on-demand basis, i.e., users can access documents anytime, anywhere. Execution of real-time tasks in a cloud computing environment is an emerging research area. Real-time tasks need to meet their deadlines regardless of system load or makespan. This paper discusses the scheduling of real-time tasks in the cloud environment considering Basic Earliest Deadline First (BEDF), First Fit EDF (FFE), Best Fit EDF (BFE), and Worst Fit EDF (WFE) algorithms. Different performance parameters such as guarantee ratio (GR), utilization of VMs (UV), and throughput (TP) are used to measure the effectiveness of the algorithms.
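As a rough illustration of one of the compared policies, the sketch below admits tasks in deadline order onto the first VM that can still meet the deadline (a first-fit EDF flavour) and reports the guarantee ratio; the task set and VM speeds are invented for the example.

```python
# Sketch of a first-fit EDF (FFE) style admission check: tasks are taken in
# deadline order and placed on the first VM that can still finish them before
# their deadline; the guarantee ratio is the fraction of tasks admitted.
def first_fit_edf(tasks, vm_mips):
    """tasks: list of (length, deadline); returns guarantee ratio."""
    finish = [0.0] * len(vm_mips)
    accepted = 0
    for length, deadline in sorted(tasks, key=lambda t: t[1]):   # earliest deadline first
        for v in range(len(vm_mips)):
            exec_time = length / vm_mips[v]
            if finish[v] + exec_time <= deadline:                # meets its deadline here
                finish[v] += exec_time
                accepted += 1
                break
    return accepted / len(tasks)

tasks = [(2000, 4.0), (1000, 2.0), (3000, 6.0), (1500, 2.5), (2500, 5.0)]
print("guarantee ratio:", first_fit_edf(tasks, [1000, 800]))
```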
  • Improving energy consumption in cloud

    Mishra S.K., Deswal R., Sahoo S., Sahoo B.

    Conference paper, 12th IEEE International Conference Electronics, Energy, Environment, Communication, Computer, Control: (E3-C3), INDICON 2015, 2016, DOI Link

    View abstract ⏷

    To meet the service level agreement (SLA) between the cloud user and the cloud service provider, the service provider has to pay more. Cloud resources are allocated not only to satisfy the quality of service (QoS) specified in the SLA but also to reduce energy utilization. Therefore, task consolidation, which maps users' service requests to appropriate resources and results in proper utilization of various cloud resources, plays an important role in cloud computing. The overall performance of cloud computing also depends on the task consolidation approach. Here, for the task consolidation problem, we present an energy-aware model that includes descriptions of physical hosts, virtual machines, and service requests (tasks) submitted by users. For the proposed model, an Energy-Aware Task Consolidation (EATC) algorithm is developed that accounts for the effect of heterogeneity on performance and shows significant improvement in energy savings.
  • Metaheuristic approaches to task consolidation problem in the cloud

    Mishra S.K., Sahoo B., Sahoo K.S., Jena S.K.

    Book chapter, Resource Management and Efficiency in Cloud Computing Environments, 2016, DOI Link

    View abstract ⏷

    The service (task) allocation problem in distributed computing is a form of the multidimensional knapsack problem, which is one of the best-known examples of a combinatorial optimization problem. Nature-inspired techniques represent powerful mechanisms for addressing a large number of combinatorial optimization problems. Computing an optimal solution for various industrial and scientific problems is usually intractable. The service request allocation problem in distributed computing belongs to the class of NP-hard problems. The major portion of this chapter constitutes a survey of various mechanisms for the service allocation problem across different cloud computing architectures. It also briefly discusses the implementation issues of various metaheuristic techniques, such as Particle Swarm Optimization (PSO), Genetic Algorithm (GA), Ant Colony Optimization (ACO), and the BAT algorithm, in various environments for the service allocation problem in the cloud.
  • Honeypot-based intrusion detection system: A performance analysis

    Kondra J.R., Bharti S.K., Mishra S.K., Babu K.S.

    Conference paper, Proceedings of the 10th INDIACom; 2016 3rd International Conference on Computing for Sustainable Global Development, INDIACom 2016, 2016,

    View abstract ⏷

    Attacks on the Internet keep increasing and cause harm to our security systems. In order to minimize this threat, it is necessary to have a security system that has the ability to detect zero-day attacks and block them. A honeypot is a proactive defense technology in which resources are placed in a network with the aim of observing and capturing new attacks. This paper proposes a honeypot-based model for an intrusion detection system (IDS) to obtain the most useful data about the attacker. The abilities and limitations of honeypots were tested, and aspects that need to be improved were identified. In the future, we aim to use this approach for early prevention so that pre-emptive action can be taken before any unexpected harm to our security system.
  • Real time task execution in cloud using mapreduce framework

    Sahoo S., Sahoo B., Turuk A.K., Mishra S.K.

    Book chapter, Resource Management and Efficiency in Cloud Computing Environments, 2016, DOI Link

    View abstract ⏷

    The cloud computing era comes with the advancement of technologies in the fields of processing, storage, network bandwidth and access, Internet security, etc. Automated applications, smart devices, and sensor-based applications need huge data storage and computing resources and need output within a particular time limit. Users are also becoming more sensitive to delay in the applications they use. So a scalable platform like cloud computing is required that can provide the huge computing resources and data storage needed for processing such applications. The MapReduce framework is used to process huge amounts of data. Data processing on a cloud based on MapReduce provides added benefits such as fault tolerance, heterogeneity, ease of use, openness, and efficiency. This chapter discusses the cloud system model, the real-time MapReduce framework, examples of cloud-based MapReduce frameworks, quality attributes of MapReduce scheduling, and various MapReduce scheduling algorithms based on these quality attributes.
  • A comparative analysis of packet scheduling schemes for multimedia services in LTE networks

    Sahoo B.P.S., Puthal D., Swain S., Mishra S.

    Conference paper, Proceedings - 1st International Conference on Computational Intelligence and Networks, CINE 2015, 2015, DOI Link

    View abstract ⏷

    The revolution in high-speed broadband networks is a requirement of the current time; in other words, there is an unceasing demand for high data rates and mobility. Both providers and customers see that Long Term Evolution (LTE) could be the promising technology for providing broadband mobile Internet access. To provide better quality of service (QoS) to customers, the resources must be utilized to the fullest. Resource scheduling is one of the important functions for improving or upgrading system performance. This paper studies recently proposed packet scheduling schemes for LTE systems. The study concentrates on real-time services such as online video streaming and Voice over Internet Protocol (VoIP). For the performance study, the LTE-Sim simulator is used. The primary objective of this paper is to provide results that will help researchers design more efficient scheduling schemes, aiming at better overall system performance. For the simulation study, two scenarios, one for video traffic and the other for VoIP, have been created. Various performance metrics such as packet loss, fairness, end-to-end (E2E) delay, cell throughput, and spectral efficiency have been measured for both scenarios with varying numbers of users. In light of the simulation result analysis, the frame-level scheduler (FLS) algorithm outperforms the others by balancing the QoS requirements for multimedia services.
  • Cloud computing features, issues, and challenges: A big picture

    Puthal D., Sahoo B.P.S., Mishra S., Swain S.

    Conference paper, Proceedings - 1st International Conference on Computational Intelligence and Networks, CINE 2015, 2015, DOI Link

    View abstract ⏷

    Since the phenomenon of cloud computing was proposed, there has been unceasing interest in research across the globe. Cloud computing has been seen as one of the technologies driving the next-generation computing revolution and has rapidly become the hottest topic in the field of IT. This fast move towards cloud computing has fuelled concerns on fundamental points for the success of information systems: communication, virtualization, data availability and integrity, public auditing, scientific applications, and information security. Therefore, cloud computing research has attracted tremendous interest in recent years. In this paper, we aim to summarize the current open challenges and issues of cloud computing. The discussion is threefold: first, we discuss the cloud computing architecture and the numerous services it offers; secondly, we highlight several security issues in cloud computing based on its service layers; then we identify several open challenges from the cloud computing adoption perspective and their future implications. Finally, we highlight the platforms available in the current era for cloud research and development.
Scholars

Doctoral Scholars

  • Ms Abdhisuta Dash
  • Ms Jasmini Kumari
  • Mr Subham Kumar Sahoo

Interests

  • Cloud Computing
  • Distributed Computing
  • Graph Theory
  • IoT

Publications
  • Trading Strategy with EMA’s and Risk Management

    Pranav Somisetty S.D., Jagadishwar Gatte S., Kosuri N.B., Gowrish Chinta L., Mishra S.K., Kumar Mishra S.

    Conference paper, 2025 International Conference on Artificial Intelligence and Machine Vision, AIMV 2025, 2025, DOI Link

    View abstract ⏷

    The trading world often appears mysterious, filled with stories of fear, hope, addiction, and occasional profits. However, many fail to recognize that consistent profitability in trading is driven by discipline, a well-defined strategy, and strict adherence to rules. This lack of awareness is a key reason why 75-90% of new traders enter the market with high expectations but end up losing their hard-earned money. In this research, we propose a quantitative trading strategy based on exponential moving average (EMA) crossovers, volume analysis, and structured profit booking. The strategy utilises a short-term 9-period EMA and a long-term 15-period EMA to identify trend reversals, generating buy signals when the two EMAs cross under certain conditions and sell signals when the opposite occurs. A confirmation mechanism is introduced, requiring the price to move at least 0.06% above the crossover price while ensuring the crossover candle remains bullish. Additionally, volume conditions are incorporated to validate momentum, ensuring buy signals are triggered only when trading volume is increasing. To optimize trade management, a multi-tier profit booking system is implemented, allowing partial exits at predefined levels, which ensures that traders secure gains while allowing profitable trades to run. The strategy's performance is evaluated through historical back-testing, assessing profitability, accuracy, and risk-reward dynamics. The results demonstrate the effectiveness of integrating EMA crossovers with volume and structured exit points in improving trade success rates. This approach may help many traders turn their portfolios from a losing streak to a winning streak.
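A compact sketch of the signal logic described above is given below, assuming simple lists of closes, opens, and volumes; the 9/15 periods and 0.06% confirmation come from the abstract, while the use of the previous close as the crossover reference price and the demo data are assumptions for illustration.

```python
# Sketch of the buy-signal logic: a 9-period EMA crossing above a 15-period
# EMA is treated as a tentative buy, kept only if the candle is bullish,
# volume has been rising, and the close is at least 0.06% above the previous
# close (used here as a stand-in for the crossover price). Illustrative only.
def ema(values, period):
    k = 2 / (period + 1)
    out = [values[0]]
    for v in values[1:]:
        out.append(v * k + out[-1] * (1 - k))
    return out

def buy_signals(close, open_, volume, fast=9, slow=15, confirm=0.0006):
    e_fast, e_slow = ema(close, fast), ema(close, slow)
    signals = []
    for i in range(1, len(close)):
        crossed_up = e_fast[i - 1] <= e_slow[i - 1] and e_fast[i] > e_slow[i]
        bullish = close[i] > open_[i]
        rising_volume = i >= 2 and volume[i] > volume[i - 1] > volume[i - 2]
        confirmed = close[i] >= close[i - 1] * (1 + confirm)
        if crossed_up and bullish and rising_volume and confirmed:
            signals.append(i)
    return signals

close = [100, 99, 98, 99, 101, 103, 104, 106, 107, 109, 111, 112]
open_ = [c - 0.5 for c in close]
volume = [10, 11, 9, 12, 13, 15, 16, 18, 19, 21, 23, 25]
print("buy signal indices:", buy_signals(close, open_, volume))
```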
  • Enhancing Heart Disease Prediction with Data Augmentation and ML Classifiers

    Rachapalli V.K., Meenavalli C., Nunna S.P., Yarramaneni P., Mishra S.K., Mishra S.K.

    Conference paper, 2025 International Conference on Artificial Intelligence and Machine Vision, AIMV 2025, 2025, DOI Link

    View abstract ⏷

    Heart disease is a significant cause of death worldwide, and early prediction is vital for prevention and treatment. This project uses the Framingham Heart Study dataset for the early prediction of Coronary Heart Disease (CHD) using machine learning methods. The Framingham Heart Study dataset is highly unbalanced, with only 16% CHD cases, which impacts the accuracy of the model. To overcome this, data augmentation techniques such as SMOTE and cGAN are applied to create synthetic CHD cases. The machine learning algorithms compared are Random Forest, XGBoost, SVM, and MLP. XGBoost achieves the highest AUC-ROC of 0.973 when cGAN-augmented data is used, and cGAN augmentation significantly improves recall and overall model performance. This study identifies the potential of combining machine learning with data augmentation to improve CHD prediction.
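A hedged sketch of the evaluation flow (balance the minority class, train a classifier, report AUC-ROC) is shown below on synthetic data; plain random oversampling stands in for SMOTE/cGAN and scikit-learn's RandomForest stands in for XGBoost, so it illustrates the pipeline shape rather than the paper's exact setup.

```python
# Hedged sketch: oversample the minority (positive) class on the training
# split only, train a classifier, and report AUC-ROC on the untouched test set.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=2.0, size=1000) > 2.2).astype(int)  # imbalanced labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

# naive random oversampling of the minority class (a stand-in for SMOTE/cGAN)
minority = np.where(y_tr == 1)[0]
extra = rng.choice(minority, size=(y_tr == 0).sum() - minority.size, replace=True)
X_bal = np.vstack([X_tr, X_tr[extra]])
y_bal = np.concatenate([y_tr, y_tr[extra]])

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_bal, y_bal)
print("AUC-ROC:", round(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]), 3))
```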
  • Ensembling AI and Federated Learning for Industry 4.0: A Privacy-Preserving Approach in Edge Computing

    Sahoo S.K., Dash A., Mishra S.K., Humayun M.

    Book chapter, Advances in Science, Technology and Innovation, 2025, DOI Link

    View abstract ⏷

    The emergence of Industry 4.0 resulted in a disruptive era marked by the incorporation of cutting-edge technology, such as edge computing and artificial intelligence (AI), into industrial processes. The integration of AI and Federated Learning (FL) methodologies and the creation of intelligent solutions that protect privacy within the framework of Industry 4.0 are two key ideas that will be explored in this chapter. The chapter highlights that one major obstacle to edge computing’s widespread adoption in Industry 4.0 is privacy concerns. It emphasizes the necessity of finding solutions that balance the demands for real-time processing with the strictest privacy regulation. The main goal is to investigate how intelligent edge device solutions can be implemented while maintaining privacy protection through the use of FL. The goal of this chapter is to shed light on how to use the synergies between AI and FL to address privacy concerns related to Industry 4.0. The chapter ends with a call for Industry 4.0, which will see the standardization of edge computing, federated learning techniques, and artificial intelligence. By putting in place privacy-preserving safeguards, organizations are encouraged to adopt new technologies while maintaining strict data privacy and security standards. In the rapidly changing context of Industry 4.0, this symbiotic connection is expected to transform industrial landscapes, guiding them towards unmatched efficiency and creativity.
  • Intent-Driven VM Allocation Strategy for Optimizing Cloudlet Processing in Edge-Cloud Computing

    Sahoo S.K., Mishra S.K., Puthal D.

    Article, IEEE Internet of Things Journal, 2025, DOI Link

    View abstract ⏷

    Edge-cloud computing refers to a paradigm that combines the benefits of edge and cloud computing to optimize data processing and resource utilization. Edge-cloud computing plays a crucial role in resource allocation by optimizing the distribution of computational resources between edge devices and centralized cloud infrastructures. In the rapidly evolving landscape of edge-cloud computing, efficient VM allocation is critical for optimizing resource utilization, minimizing latency, and ensuring high SLA compliance. This paper introduces a novel heuristic VM allocation strategy, named LLCD, to enhance cloudlet or task processing in edge-cloud data centers. By employing a heuristic approach inspired by mixed-integer nonlinear programming models, this strategy dynamically assigns VMs based on their current load and the impending deadlines of tasks, significantly reducing overall system latency and enhancing SLA success rates. Simulations were conducted across various computational intensities. The findings reveal that the proposed approach substantially improves resource utilization and operational efficiency, adapting to dynamic workloads by achieving SLA success ratios of 74.26% and 83.7% in different deadline scenarios. The adaptive nature of the LLCD algorithm allows real-time task reallocation based on system feedback, which mirrors the operational principles of AI-driven orchestration in distributed IoT environments. The validation is achieved through a multi-iteration simulation model that emulates dynamic IoT workloads, demonstrating LLCD’s learning capability in maintaining SLA stability and consistent latency reduction across changing task distributions. Moreover, the proposed heuristic provides a foundation for latency-efficient and learning-based management in distributed computing environments.
  • Container Placement Using Penalty-Based PSO in the Cloud Data Center

    Akram Khan M., Sahoo B., Kumar Mishra S.

    Article, Concurrency and Computation: Practice and Experience, 2025, DOI Link

    View abstract ⏷

    Containerization has transformed application deployment by offering a lightweight, scalable, and portable architecture for deploying container applications and their dependencies. In contemporary cloud computing data centers, where virtual machines (VMs) are frequently utilized to host containerized applications, the challenge of effective container placement has garnered significant attention. Container placement (CP) involves mapping a container onto a VM for execution. CP is a nontrivial problem in the container cloud data center (CCDC). Poor placement decisions can lead to decreased service performance or wastage of cloud resources. Efficient placement of containers within a virtual environment is critical for optimizing resource utilization and performance. This paper proposes a penalty-based particle swarm optimization (PB-PSO) CP algorithm. In the proposed algorithm, we consider the makespan, cost, and load of the VM while making CP decisions. We propose the concept of a load-balancing penalty to prevent a VM from becoming overloaded. The algorithm addresses various CP challenges by varying container application sizes in heterogeneous cloud environments. The primary goal of the proposed algorithm is to minimize the makespan and computational cost of containers through efficient resource utilization. We have performed extensive simulation studies to verify the efficacy of the proposed algorithm using the CloudSim 4.0 simulator. The proposed optimization algorithm (PB-PSO) aims to minimize both the makespan and the execution monetary costs while maximizing resource utilization. During the simulation, we observed a reduction of 10% to 15% in both execution cost and makespan. Furthermore, our algorithm achieved the most optimal cost-makespan trade-offs compared to other competing algorithms.
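The core idea of a load-balancing penalty can be illustrated by the fitness function below, which combines makespan, monetary cost, and an overload penalty for a candidate placement; any swarm search such as PSO would minimize this score. The weights, capacities, and costs are illustrative assumptions, not values from the paper.

```python
# Sketch of a penalty-augmented fitness for container placement: the score
# combines makespan and monetary cost, and adds a penalty whenever a VM's
# assigned load exceeds its capacity. This is only the evaluation step that a
# PSO (or any search) would optimize; data and weights are illustrative.
def placement_fitness(placement, container_load, vm_capacity, vm_cost_per_unit,
                      w_makespan=1.0, w_cost=0.5, penalty_weight=10.0):
    loads = [0.0] * len(vm_capacity)
    cost = 0.0
    for c, vm in enumerate(placement):
        loads[vm] += container_load[c]
        cost += container_load[c] * vm_cost_per_unit[vm]
    makespan = max(l / cap for l, cap in zip(loads, vm_capacity))
    overload = sum(max(0.0, l - cap) for l, cap in zip(loads, vm_capacity))
    return w_makespan * makespan + w_cost * cost + penalty_weight * overload

containers = [30, 10, 45, 20, 25]           # container workloads
capacity = [60, 50, 50]                      # VM capacities
unit_cost = [0.02, 0.015, 0.01]              # cost per unit of work on each VM

balanced = [0, 1, 2, 0, 1]
overloaded = [0, 0, 0, 0, 0]
print("balanced  :", round(placement_fitness(balanced, containers, capacity, unit_cost), 3))
print("overloaded:", round(placement_fitness(overloaded, containers, capacity, unit_cost), 3))
```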
  • A Survey on Task Scheduling in Edge-Cloud

    Sahoo S.K., Mishra S.K.

    Article, SN Computer Science, 2025, DOI Link

    View abstract ⏷

    In this modern era, cloud computing alone is not enough to meet today’s intelligent society’s data processing needs, so edge computing has emerged. In contrast to computation in the cloud, it emphasizes proximity to the user and to the data source. Storing local, small-sized, and processed data at the edge of the network is more effective. The edge paradigm, intended to be a leading computing model due to its low latency, also faces many challenges due to limited computational capabilities and resource availability. Edge computing allows edge devices to offload heavy computational operations to a remote server, taking full advantage of server-side computing and storage. However, offloading all compute-intensive operations to a remote server at the same time may overcrowd it, leading to long processing delays for many computing operations and unexpectedly elevated power usage. Instead, spare edge resources may need to be utilized effectively while access to expensive cloud resources is restricted. As a result, it is important to investigate the collaborative planning process (scheduling) for edge servers together with a cloud server based on task features, development objectives, and system status; this can assist in performing all computing functions efficiently and effectively. This paper analyzes and summarizes computing conditions for the edge computing context and classifies the computation of tasks into various edge-cloud computing scenarios. Finally, based on the problem structure, various collaborative planning methods for computational functions are presented.
  • Multi-objective based container placement strategy in CaaS

    Khan M.A., Sahoo B., Mishra S.K., Shankar A.

    Article, Software - Practice and Experience, 2025, DOI Link

    View abstract ⏷

    In contrast to a conventional virtual machine (VM), a container is a lightweight virtualization technology. Containers are becoming a prominent technology for cloud services because of their portable, scalable, and flexible deployments, especially in the Internet of Things (IoT), smart devices, and fog and edge computing. Containerization is a type of operating-system-level virtualization in which the kernel allows multiple isolated containers to run independently. Container placement (CP) is a nontrivial problem in Container-as-a-Service (CaaS). CP is the mapping of containers onto virtual machines (VMs) to execute an application. Designing an efficient CP strategy is complex due to several intertwined challenges, which arise from a diverse spectrum of computing resources and the on-demand, unpredictable fluctuations of IT resource usage by multiple tenants. In this article, we propose a modified sum-based container placement algorithm called the multi-objective optimization-based container placement algorithm (MSBCPA). In the proposed algorithm, we consider two metrics, makespan and monetary cost, for optimizing the available IT resources. We have conducted comprehensive simulation experiments with the CloudSim 4.0 simulator to validate the effectiveness of the proposed algorithm. The proposed optimization algorithm (MSBCPA) aims to minimize the makespan and the execution monetary costs simultaneously. In the simulation, we found that the execution cost and energy consumption cost are reduced by 20% to 30%, achieving the best possible cost-makespan trade-offs compared to competing algorithms.
  • An Integrated ELM Based Feature Reduction Combination Detection for Gene Expression Data Analysis

    Tripathy J., Dash R., Pattanayak B.K., Mishra S.K.

    Article, SN Computer Science, 2025, DOI Link

    View abstract ⏷

    Globally, cancer stands as the second leading cause of mortality. Various strategies have been proposed to address this issue, with a strong emphasis on utilizing gene expression data to enhance cancer detection methods. However, challenges arise due to the high dimensionality, the limited sample size relative to the number of dimensions, and the inherent redundancy and noise in many genes. Consequently, it is advisable to employ a subset of genes rather than the entire set for classifying gene expression data. This research introduces a model that incorporates Rank-based Filter (RF) techniques for extracting significant features and employs an Extreme Learning Machine (ELM) for data classification. The computational cost of using RF techniques on high-dimensional data is low; however, extraction of significant genes using one or two stages of reduction is not effective. Thus, a 4-stage feature reduction strategy is applied. The reduced data is then utilized for classification using a few variants of the ELM model and activation functions. Subsequently, a two-stage grading approach is implemented to determine the most suitable classifier for data classification. This analysis is conducted over four microarray gene expression datasets using four activation functions with seven learning-based classifiers, from which it is shown that the II-ELM classifier outperforms the others in terms of performance metrics and the ROC graph.
  • Message from ICEC Steering Committee Chair ICEC 2024

    Mishra S.K., Puthal D.

    Editorial, Intelligent Computing and Emerging Communication Technologies, ICEC 2024, 2024, DOI Link

  • A Systematic Review on Federated Learning in Edge-Cloud Continuum

    Mishra S.K., Sahoo S.K., Swain C.K.

    Review, SN Computer Science, 2024, DOI Link

    View abstract ⏷

    Federated learning (FL) is a cutting-edge machine learning platform that protects user privacy while enabling collaborative learning across various devices. It is particularly relevant in the current environment, where massive volumes of data are generated at the edge of networks by developing technologies like social networking, cloud computing, edge computing, and the Internet of Things. FL reduces the possibility of unauthorized access by third parties by allowing data to stay on local devices, hence mitigating potential privacy breaches. This study investigates the integration of FL in cloud, edge, and hybrid edge-cloud settings, among other computing paradigms. We highlight the salient features of FL, go over the main obstacles to its implementation and use, and make recommendations for future research directions. Furthermore, we assess how FL, by facilitating safe and cooperative data sharing among vehicles, can improve service quality in the Internet of Vehicles (IoV). Our findings are intended to offer practical insights and suggestions that may have an impact on a variety of computing technology research topics.
  • Special issue on collaborative edge computing for secure and scalable Internet of Things

    Puthal D., Mishra A.K., Mishra S.K.

    Editorial, Software - Practice and Experience, 2024, DOI Link

  • Message from Convener and Co-Conveners ICEC-2024

    Mishra S.K., Enduri M.K., Dash J.K., Manikandan V.M.

    Editorial, Intelligent Computing and Emerging Communication Technologies, ICEC 2024, 2024, DOI Link

  • Applications of Federated Learning in Computing Technologies

    Mishra S.K., Sindhu K., Teja M.S., Akhil V., Krishna R.H., Praveen P., Mishra T.K.

    Book chapter, Convergence of Cloud with AI for Big Data Analytics: Foundations and Innovation, 2024, DOI Link

    View abstract ⏷

    Federated learning is a technique that trains models across different decentralized devices holding local data samples without exchanging them; the concept is also called collaborative learning. In federated learning, the clients separately train deep neural network models on their local data, and these models are combined into a global deep neural network model at the central server. Unlike traditional approaches, in which all local datasets are uploaded to at least one server and local data samples are assumed to be identically distributed, federated learning does not transmit the raw data to the server. Because of its security and privacy properties, it is widely utilized in many applications like IoT, cloud computing, edge computing, vehicular edge computing, and many more. The implementation details for protecting the privacy of locally held data in federated learning are described. Since there will be trillions of edge devices, system efficiency and privacy should be taken into consideration when evaluating federated learning algorithms in computing technologies. This chapter covers the effectiveness, privacy, and usage of federated learning in several computing technologies. Here, different applications of federated learning, its privacy concerns, and its role in various fields of computing like IoT, edge, and cloud computing are presented.
  • Designing a GSM and ARDUINO based Reliable Home Automation System

    Tripathy J., Dash S., Dash R., Pal J., Padhi S., Mishra S.K.

    Conference paper, Proceedings - 2024 OITS International Conference on Information Technology, OCIT 2024, 2024, DOI Link

    View abstract ⏷

    This paper introduces the design and prototype of a new home automation system that utilizes GSM technology as the network infrastructure to connect its components. The proposed system is composed of two primary parts: the first is the GSM module, which acts as the core of the system, managing, controlling, and monitoring the user's home. Users and system administrators can connect to the GSM locally to access devices and manage system functions. The second part is the hardware interface module, which provides the necessary interface for relays and actuators within the home automation system. The mobile phone, originally designed for making calls and sending text messages, has evolved into a versatile device, especially with the advent of smartphones. In this study, the researcher develops a home automation system using GSM and Arduino, allowing users to control household appliances by simply sending SMS commands through their GSM-based phones. This paper shows that a smartphone is not necessary; an old GSM phone can effectively be used to turn home electronic appliances on and off from any location. The proposed system offers greater scalability and flexibility compared to commercially available home automation systems.
  • A deep transfer learning model for green environment security analysis in smart city

    Sahu M., Dash R., Kumar Mishra S., Humayun M., Alfayad M., Assiri M.

    Article, Journal of King Saud University - Computer and Information Sciences, 2024, DOI Link

    View abstract ⏷

    Green environmental security refers to the state of human-environment interactions that includes reducing resource shortages, pollution, and biological dangers that can cause societal disorder. In IoT-enabled smart cities, due to the advancement of technologies, sensors and actuators collect vast quantities of data that are analyzed to extract potentially useful information. However, due to the noise and diversity of the data generated, only a small portion of the massive data collected from smart cities is used. In sustainable Land Use and Land Cover (LULC) management, environmental deterioration resulting from improper land usage in the digital ecosystem is a global issue that has garnered attention. The deep learning techniques of AI are recognized for their capacity to manage vast amounts of erroneous and unstructured data. In this paper, we propose a morphologically augmented fine-tuned DenseNet-121 (MAFDN) LULC classification model to automate the categorization of high-spatial-resolution scene images for environmental conservation. This work includes an augmentation process (i.e., erosion, dilation, blurring, and contrast enhancement operations) to extract spatial patterns and enlarge the training size of the dataset. A few state-of-the-art techniques are included for comparison to demonstrate the efficacy of the proposed approach. This facilitates green resource management and personalized provision of services.
  • Enhancing Edge Intelligence with Layer-wise Adaptive Precision and Randomized PCA

    Mishra S.K., Velankani Joise Divya G.C., Maddi P.A., Tanniru N.M., Manthena S.L.P.

    Conference paper, Proceedings of 2nd International Conference on Advancements in Smart, Secure and Intelligent Computing, ASSIC 2024, 2024, DOI Link

    View abstract ⏷

    Edge intelligence is the ability of edge devices to carry out intelligent operations, such as object identification, speech recognition, or natural language processing, utilizing machine learning algorithms, with the aim of addressing edge computing's constraints and improving its performance. The main goal of this work is to apply randomized PCA (RPCA) to increase energy efficiency and reduce memory usage. The algorithm computes the covariance matrix of the centered data, finds the eigenvectors and eigenvalues of the covariance matrix, sorts the eigenvectors and eigenvalues in descending order of the eigenvalues, chooses the first set of eigenvectors, and projects the data onto the chosen eigenvectors. This article also employs a technique known as layer-wise adaptive precision (LAP), which decreases the precision of activations in neural network layers that contribute less to output accuracy.
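The projection steps listed above correspond to standard PCA via an eigendecomposition of the covariance matrix, sketched below with NumPy; a randomized PCA would approximate the same top-k subspace at lower cost, which is not shown here.

```python
# Sketch of the dimensionality-reduction steps listed above: center the data,
# form the covariance matrix, eigendecompose it, sort components by eigenvalue,
# keep the top-k, and project. Illustrative only.
import numpy as np

def pca_project(X, k):
    Xc = X - X.mean(axis=0)                      # center the data
    cov = np.cov(Xc, rowvar=False)               # covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)       # symmetric eigendecomposition
    order = np.argsort(eigvals)[::-1]            # sort by descending eigenvalue
    components = eigvecs[:, order[:k]]           # top-k eigenvectors
    return Xc @ components                       # projected data (n x k)

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 16))
Z = pca_project(X, k=4)
print("reduced shape:", Z.shape)
```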
  • Role of federated learning in edge computing: A survey

    Mishra S.K., Kumar N.S., Rao B., Brahmendra, Teja L.

    Article, Journal of Autonomous Intelligence, 2024, DOI Link

    View abstract ⏷

    This paper explores various approaches to enhance federated learning (FL) through the utilization of edge computing. Three techniques, namely Edge-Fed, hybrid federated learning at edge devices, and cluster federated learning, are investigated. The Edge-Fed approach addresses the computational and communication challenges faced by mobile devices in FL by offloading calculations to edge servers. It introduces a network architecture comprising a central cloud server, an edge server, and IoT devices, enabling local aggregations and reducing global communication frequency. Edge-Fed offers benefits such as reduced computational costs, faster training, and decreased bandwidth requirements. Hybrid federated learning at edge devices aims to optimize FL in multi-access edge computing (MAEC) systems. Cluster federated learning introduces a cluster-based hierarchical aggregation system to enhance FL performance. The paper explores the applications of these techniques in various domains, including smart cities, vehicular networks, healthcare, cybersecurity, natural language processing, autonomous vehicles, and smart homes. The combination of edge computing (EC) and federated learning (FL) is a promising technique gaining popularity across many applications. EC brings cloud computing services closer to data sources, further enhancing FL. The integration of FL and EC offers potential benefits in terms of collaborative learning.
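The aggregation step shared by these FL variants can be illustrated with a FedAvg-style weighted average of client models, sketched below with NumPy; the client weights and sample counts are invented for the example.

```python
# Minimal sketch of the aggregation step: a server (cloud or edge) averages
# client model weights, weighted by each client's local sample count.
import numpy as np

def federated_average(client_weights, client_sizes):
    """client_weights: list of weight vectors; client_sizes: samples per client."""
    total = sum(client_sizes)
    agg = np.zeros_like(client_weights[0], dtype=float)
    for w, n in zip(client_weights, client_sizes):
        agg += (n / total) * np.asarray(w, dtype=float)
    return agg

clients = [np.array([0.2, 1.0, -0.5]),      # device 1 local model
           np.array([0.1, 0.8, -0.4]),      # device 2 local model
           np.array([0.3, 1.2, -0.6])]      # edge-aggregated cluster model
sizes = [120, 60, 300]
print("global model:", federated_average(clients, sizes))
```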
  • Task Offloading Technique Selection In Mobile Edge Computing

    Mishra S.K., Challa H.K., Kotha K.S., Yarramreddy D.P.

    Conference paper, Proceedings of 2nd International Conference on Advancements in Smart, Secure and Intelligent Computing, ASSIC 2024, 2024, DOI Link

    View abstract ⏷

    In distributed computing environments, computation offloading is a vital strategy for maximizing the performance and energy efficiency of mobile devices. Distributed deep learning-based offloading (DDLO) [10] and deep reinforcement learning for online computation offloading (DROO) [10] are two popular methods for solving the computation offloading problem. In DDLO, the data is divided into smaller pieces during offloading and distributed across the systems or devices. In DROO, an agent is trained to determine the optimum offloading choices based on the resources at hand, the network environment, and the application's performance requirements. A comparison of both approaches is presented, emphasizing their benefits and drawbacks and the situations in which one approach is more suitable than the other. Precision, effectiveness, and adaptability are just a few of the metrics we use to evaluate the performance of both techniques in a variety of workload and network configuration scenarios. Our findings indicate that while deep reinforcement learning is better able to respond to environmental changes, distributed deep learning-based offloading is more efficient in terms of computational resources.
  • Message from General Chairs ICEC-2024

    Mishra S.K., Mohapatra P.

    Editorial, Intelligent Computing and Emerging Communication Technologies, ICEC 2024, 2024, DOI Link

  • Advanced Temporal Attention Mechanism Based 5G Traffic Prediction Model for IoT Ecosystems

    Samudrala D.S., Mishra S.K., Senapati R.

    Conference paper, Proceedings - 2024 IEEE 21st International Conference on Mobile Ad-Hoc and Smart Systems, MASS 2024, 2024, DOI Link

    View abstract ⏷

    Traffic prediction in 5G is important for effective deployment and operation of Internet of Things (IoT) ecosystems. It enables resource management and optimization, guaranteeing that the network can handle unpredictable traffic volumes without experiencing congestion. This helps to ensure high quality of service and low latency for applications such as autonomous automobiles and virtual reality. Predictive traffic management further enhances user experience by keeping services consistent and reliable, particularly during busy hours. There are various approaches to traffic prediction in 5G networks, and each has advantages and disadvantages of its own. The choice of model depends on how precise, adaptable, and computationally demanding the network must be. The model proposed in this paper integrates lightweight convolution with temporal attention to deliver accurate and efficient traffic prediction for 5G networks, which may further be useful for developing IoT ecosystems.
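A toy illustration of temporal attention over a traffic window is sketched below: each time step receives a score, a softmax turns the scores into weights, and the prediction is formed from the attention-weighted summary. The random query vector stands in for learned parameters, so this only conveys the mechanism, not the proposed model.

```python
# Toy temporal attention over a traffic window: score each time step, softmax
# the scores into weights, and pool the window by those weights. Illustrative
# only; real models learn the query/projection parameters.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def temporal_attention_predict(window, query):
    """window: (T, d) features per time step; query: (d,) learned vector."""
    scores = window @ query                 # one relevance score per time step
    weights = softmax(scores)               # attention weights over time
    context = weights @ window              # (d,) attention-pooled summary
    return context.mean(), weights          # scalar prediction + weights

rng = np.random.default_rng(7)
traffic = rng.random((12, 4))               # 12 time steps, 4 features each
query = rng.normal(size=4)
pred, attn = temporal_attention_predict(traffic, query)
print("prediction:", round(float(pred), 3), "attention:", np.round(attn, 2))
```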
  • Maximizing Resource Utilization Using Hybrid Cloud-based Task Allocation Algorithm

    Mishra S.K., Mohith G.K.H., Ambati S.T., Guduru K.K., Senapati R.

    Conference paper, Proceedings - 2024 IEEE 21st International Conference on Mobile Ad-Hoc and Smart Systems, MASS 2024, 2024, DOI Link

    View abstract ⏷

    Cloud computing operates similarly to a utility, providing users with on-demand access to various hardware and software resources, billed according to usage. These resources are primarily virtualized, with virtual machines (VMs) serving as critical components. However, task allocation within VMs presents significant challenges, as uneven distribution can lead to underloading or overloading, causing system inefficiencies and potential failures. This study addresses these issues by proposing a novel hybrid task allocation algorithm that combines the strengths of the Artificial Bee Colony (ABC) algorithm with Particle Swarm Optimization (PSO). Our approach aims to enhance resource utilization and reduce the risks of VM overload or underload. We conduct a comprehensive evaluation of the proposed hybrid algorithm against traditional ABC and PSO algorithms, focusing on their effectiveness in managing diverse task loads. The results of our empirical analysis indicate that our hybrid approach outperforms the conventional algorithms, leading to better resource utilization and more accurate task allocation. These findings have significant implications for optimizing task allocation in cloud computing environments, and we suggest potential avenues for future research to further refine these strategies.
  • Enhancing Traffic Flow Through Advanced ACO Mechanism

    Divya G C V.J., Mishra S.K., Puthal D.

    Conference paper, IEEE INFOCOM 2024 - IEEE Conference on Computer Communications Workshops, INFOCOM WKSHPS 2024, 2024, DOI Link

    View abstract ⏷

    Severe traffic congestion is a significant challenge for urban areas, and improving sustainable urban development is critical, yet traditional traffic management systems often struggle to cope with dynamic real-time conditions due to their reliance on predetermined schedules and fixed control mechanisms. This paper advocates for the application of optimizing techniques, specifically an enhanced version of ant colony optimization (ACO), to alleviate this challenge. By effectively managing and enhancing vehicle movement, these approaches target the reduction of congestion, travel times, and costs while concurrently enhancing fuel efficiency. This approach can also be adapted to optimize the deployment and movement of drones in wireless communication networks, ensuring optimal coverage and resource utilization. Implementations, comparisons, and visualizations show how these approaches help improve traffic movement, thereby minimizing congestion-associated problems.
  • AI Based Feature Selection for Intrusion Detection Classifiers in Cloud of Things

    Ravala R.K., Polisetty K.B., Mishra S.K.

    Conference paper, 2024 1st International Conference on Cognitive, Green and Ubiquitous Computing, IC-CGU 2024, 2024, DOI Link

    View abstract ⏷

    The popularity of cloud computing can be attributed to its on-demand nature, scalability, and flexibility. However, because of its heightened vulnerability and propensity for sophisticated, widespread attacks, safeguarding this distributed environment presents difficulties. Conventional IDS are insufficient. The proposed IDS for cloud environments in this study makes use of ensemble feature selection and classification techniques. This approach robustly distinguishes between attacks and normal traffic by merging individual classifiers through voting. Performance measures and ROC-AUC analysis show that the new approach is significantly more accurate and has fewer false alarms than the previous one. For cloud intrusion detection, this method provides a statistically better option.
  • A Panoramic Review on Cutting-Edge Methods for Video Anomaly Localization

    Nayak R., Mishra S.K., Dalai A.K., Pati U.C., Das S.K.

    Review, IEEE Access, 2024, DOI Link

    View abstract ⏷

    Video anomaly detection and localization is the process of spatiotemporally localizing the anomalous video segment corresponding to the abnormal event or activities. It is challenging due to the inherent ambiguity of anomalies, diverse environmental factors, the intricate nature of human activities, and the absence of adequate datasets. Further, the spatial localization of the video anomalies (video anomaly localization) after the temporal localization of the video anomalies (video anomaly detection) is also a complex task. Video anomaly localization is essential for pinpointing the anomalous event or object in the spatial domain. Hence, the intelligent video surveillance system must have video anomaly detection and localization as key functionalities. However, the state-of-the-art lacks a dedicated survey of video anomaly localization. Hence, this article comprehensively surveys the cutting-edge approaches for video anomaly localization, associated threshold selection strategies, publicly available datasets, performance evaluation criteria, and open trending research challenges with potential solution strategies.
  • An Ensemble Deep Learning Model for Oral Squamous Cell Carcinoma Detection Using Histopathological Image Analysis

    Das M., Dash R., Kumar Mishra S., Kumar Dalai A.

    Article, IEEE Access, 2024, DOI Link

    View abstract ⏷

    Deep learning approaches for medical image analysis are widely applied for the recognition and classification of different kinds of cancer. In this study, histopathological images of oral cells are analyzed for the automated recognition of oral squamous cell carcinoma (OSCC) using the proposed framework. The suggested model applies transfer learning and ensemble learning in two phases. In the first phase, a few convolutional neural network (CNN) models are considered through transfer learning for OSCC detection. In the second phase, the ensemble model is constructed from the best two pre-trained CNNs from the first phase. The proposed classifier is compared with leading-edge models like AlexNet, ResNet50, ResNet101, Inception net, Xception net, and InceptionResNetV2. Results are analyzed to demonstrate the effectiveness of the suggested framework. A three-phase comparative analysis is considered. Firstly, various metrics including accuracy, recall, F-score, and precision are evaluated. Secondly, a graphical analysis using loss and accuracy graphs is performed. Lastly, the accuracy of the proposed classifier is compared with that of other models from the existing literature. Following the three-stage performance evaluation, the proposed ensemble classifier exhibits enhanced performance with an accuracy of 97.88%.
  • Comparative Evaluation of Optimization Techniques for Industrial Wireless Sensor Network Hello Flood Attack Mitigation

    Srinivas S., Tejaswi S., Mishra S.K.

    Conference paper, Proceedings - 2024 3rd International Conference on Computational Modelling, Simulation and Optimization, ICCMSO 2024, 2024, DOI Link

    View abstract ⏷

    Protecting Industrial Wireless Sensor Networks (IWSNs) means ensuring that crucial industrial processes remain stable and intact. In order to mitigate the Hello Flood Attack in IWSNs, this paper compares three heuristic optimization techniques: Genetic Algorithm (GA), Simulated Annealing (SA), and Particle Swarm Optimization (PSO). GA evolves candidate solutions, SA iteratively refines the communication setup, and PSO tunes parameters to improve robustness. The study looks into how well each optimization technique enhances network resilience and protects against the negative effects of Hello Flood Attacks. A benchmark scenario is also included for comparison. These results offer valuable information for the development of safe, secure IWSNs by pointing out the benefits and drawbacks of these techniques.
  • Predictive VM Consolidation for Latency Sensitive Tasks in Heterogeneous Cloud

    Kumar Swain C., Routray P., Kumar Mishra S., Alwabel A.

    Conference paper, Lecture Notes in Networks and Systems, 2023, DOI Link

    View abstract ⏷

    Virtualization technology plays a crucial role in reducing cost in a cloud environment. An efficient virtual machine (VM) packing method focuses on compacting hosts so that most of their resources are used when serving user requests. Here our aim is to reduce the power requirements of a cloud system by minimizing the number of hosts. We propose a predictive scheduling approach that considers the deadline of a task request and makes flexible decisions to allocate tasks to hosts. Experimental results show that the proposed approach can save around 5 to 10% of power consumption compared to standard VM packing methods in most scenarios. Even when the total power consumption remains the same as that of standard methods in some scenarios, the average number of hosts required in the cloud environment is reduced, thereby reducing the cost.
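The host-minimization idea behind consolidation can be illustrated with a first-fit-decreasing packing of VM demands onto hosts, sketched below; the predictive, deadline-aware part of the proposed approach is not modeled, and the demand and capacity figures are invented.

```python
# Sketch of VM consolidation as bin packing: place VM demands onto as few
# hosts as possible (first-fit decreasing) so idle hosts can be powered down.
def consolidate(vm_demands, host_capacity):
    hosts = []                                # remaining capacity per active host
    assignment = []
    for vm, demand in sorted(enumerate(vm_demands), key=lambda x: -x[1]):
        for h, free in enumerate(hosts):
            if free >= demand:
                hosts[h] -= demand
                assignment.append((vm, h))
                break
        else:                                  # no host fits: power on a new one
            hosts.append(host_capacity - demand)
            assignment.append((vm, len(hosts) - 1))
    return assignment, len(hosts)

demands = [2.0, 3.5, 1.0, 4.0, 2.5, 1.5]      # VM CPU demands
mapping, active_hosts = consolidate(demands, host_capacity=8.0)
print("active hosts:", active_hosts, "mapping:", sorted(mapping))
```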
  • Blockchain-Based Medical Report Management and Distribution System

    Sahoo S.K., Mishra S.K., Guru A.

    Book chapter, 6G Enabled Fog Computing in IoT: Applications and Opportunities, 2023, DOI Link

    View abstract ⏷

    Hospital operations generate a large volume of medical reports, which are a crucial part of those operations. By integrating pathology and other testing labs within the medical center, hospitals today have improved their business operations while also achieving faster and more reliable diagnoses. Many different strategies are used in hospital operations, from patient admission and management to hospital cost management. This raises operational complexity and makes it more challenging to manage, especially when combined with newly introduced services such as pathology and pharmaceutical management. To address this issue, we employ the Hyperledger concept and blockchain technology to retain the data of each individual transaction with full authenticity. Instead of using a centralized server, all transactions are encrypted and kept as blocks, which are then authenticated within a network of computers. Additionally, we use the Hyperledger concept to associate and store all related medical files for each transaction with a date stamp. This makes it possible to confirm the legitimacy of each document and identify any changes made by someone else. A patient's clinical record is private, and every patient has a right to that privacy. The reports must therefore be guarded against attackers who might alter clinical reports, and the data must be stored without losing any content, which can play a vital role in saving a life. To protect the reports, we use a blockchain method that splits the information into modules, so that attackers cannot obtain the complete information. The primary goal of this project is to bring forward a secure, safe, efficient, and legitimate medical report management system.
  • LiDAR-based Building Damage Detection in Edge-Cloud Continuum

    Mishra S.K., Sanisetty M.L., Shaik A.Z., Thotakura S.L., Aluru S.L., Puthal D.

    Conference paper, 2023 IEEE International Conference on Dependable, Autonomic and Secure Computing, International Conference on Pervasive Intelligence and Computing, International Conference on Cloud and Big Data Computing, International Conference on Cyber Science and Technology Congress, DASC/PiCom/CBDCom/CyberSciTech 2023, 2023, DOI Link

    View abstract ⏷

    In recent years, natural disasters such as earthquakes and hurricanes have caused significant damage to buildings and infrastructure worldwide. As a result, there is an increasing demand for efficient and accurate methods of assessing the extent of building damage to facilitate effective recovery efforts. One emerging technology that shows great promise in this area is Light Detection and Ranging (LiDAR). This paper therefore proposes a novel detection framework utilizing textural feature extraction strategies for LiDAR-based building damage detection. LiDAR, a remote sensing technology, has the ability to create detailed maps of buildings and other infrastructure, allowing precise identification and measurement of damage caused by natural disasters. Integration of the popular edge-cloud continuum paradigm extends the cloud's capabilities to the edge of the network, enabling more effective post-disaster recovery efforts. Smart LiDAR sensors pre-process the captured data and send it to the nearest edge device for further processing. A machine learning algorithm, the K-means clustering algorithm, is used here to classify buildings into damaged and undamaged classes by analyzing the extracted textural features. The scheme can detect various types of building damage, and the cloud server is utilized to store the processed maps. The integration of the Edge-Cloud Continuum (ECC) adds further value by reducing the network usage and latency of the LiDAR-based building damage detection system. ECC enables processing and analysis of data at the point of origin as well as large-scale data processing and storage in cloud-based systems. The proposed framework has shown promising results in preliminary experiments and has the potential to transform post-disaster recovery efforts by providing efficient building damage maps.
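
    A minimal sketch of the clustering step described above, assuming per-building textural feature vectors have already been extracted from the LiDAR data: K-means with two clusters separates candidate damaged and undamaged buildings. The synthetic feature values are placeholders for the paper's actual features.

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic stand-ins for per-building textural features extracted from
# LiDAR-derived imagery (e.g., roughness, contrast, entropy); real features
# would come from the framework's feature-extraction stage.
rng = np.random.default_rng(0)
intact  = rng.normal(loc=[0.2, 0.3, 0.1], scale=0.05, size=(40, 3))
damaged = rng.normal(loc=[0.7, 0.8, 0.6], scale=0.05, size=(10, 3))
features = np.vstack([intact, damaged])

# Two clusters: damaged vs. undamaged buildings.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print(labels)
```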
  • CS-Based Energy-Efficient Service Allocation in Cloud

    Kumar Mishra S., Kumar Sahoo S., Kumar Swain C., Guru A., Kumar Sethy P., Sahoo B.

    Conference paper, Lecture Notes in Networks and Systems, 2023, DOI Link

    View abstract ⏷

    Nowadays, cloud computing is growing rapidly and has been developed as an adequate and adaptable paradigm for solving large-scale problems. Since the number of cloud users and their requests is increasing fast, the cloud data center may become under-loaded or over-loaded. These circumstances induce various problems, such as high response time and energy consumption. High energy consumption in the cloud data center has drastic negative impacts on the environment. The literature shows that scheduling plays a significant role in the reduction of energy consumption. In the recent decade, this problem has attracted huge interest among researchers, and several solutions have been proposed. Energy-efficient service (task) allocation under a high Customer Satisfaction (CS) constraint has become a critical problem in the cloud. In this paper, a high-CS-based energy-efficient service allocation framework is designed, which optimizes energy consumption as well as the CS level in the cloud. The proposed algorithm is simulated in the CloudSim simulator and compared with some standard algorithms. The simulation results favor the proposed algorithm.
  • Automatic Detection of Oral Squamous Cell Carcinoma from Histopathological Images of Oral Mucosa Using Deep Convolutional Neural Network

    Das M., Dash R., Mishra S.K.

    Article, International Journal of Environmental Research and Public Health, 2023, DOI Link

    View abstract ⏷

    Worldwide, oral cancer is the sixth most common type of cancer. India ranks second in the number of oral cancer patients, contributing almost one-third of the total count. Among the several types of oral cancer, the most common and dominant one is oral squamous cell carcinoma (OSCC). The major reasons for oral cancer are tobacco consumption, excessive alcohol consumption, unhygienic mouth condition, betel quid chewing, viral infection (namely human papillomavirus), etc. The early detection of the oral cancer type OSCC, in its preliminary stage, gives more chances for better treatment and proper therapy. In this paper, the authors propose a convolutional neural network model for the automatic and early detection of OSCC; for experimental purposes, histopathological oral cancer images are considered. The proposed model is compared with and analyzed against state-of-the-art deep learning models such as VGG16, VGG19, AlexNet, ResNet50, ResNet101, MobileNet, and Inception Net. The proposed model achieved a cross-validation accuracy of 97.82%, which indicates the suitability of the proposed approach for the automatic classification of oral cancer data.
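
    The transfer-learning idea can be sketched as follows: a pre-trained backbone (VGG16 here, one of the models named above) is frozen and a small classification head is trained for the two-class OSCC task. This is a generic Keras sketch under assumed input sizes and head layers, not the paper's exact architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Frozen VGG16 backbone pretrained on ImageNet; only the new head is trained.
base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = False

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(2, activation="softmax"),   # OSCC vs. normal mucosa
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # image datasets not shown
```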
  • A Hybrid Encryption Approach using DNA-Based Shift Protected Algorithm and AES for Edge-Cloud System Security

    Mishra S.K., Cherukuri C., Dheeraj P.V., Puthal D.

    Conference paper, OCIT 2023 - 21st International Conference on Information Technology, Proceedings, 2023, DOI Link

    View abstract ⏷

    Modern applications such as smart cities, connected homes, and crisis management systems have driven the emergence of the edge-cloud continuum, which enables data processing to occur closer to the source, reducing latency and enhancing processing efficiency. However, due to the distributed nature of edge nodes and cloud environments, data security remains a critical concern: malicious actors may intercept or eavesdrop on communication channels between edge devices and the cloud. DNA computing, a security concept inspired by biological DNA, offers a promising way to address these challenges. This paper proposes a DNA-based cryptographic method for secure data transfer and communication in edge-cloud computing environments. The research also examines various data security threats in the edge-cloud continuum and explores potential countermeasures.
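
    The abstract does not give the details of the shift-protected algorithm, but the general hybrid pattern of AES encryption followed by a DNA-alphabet encoding can be sketched as below (using the pycryptodome library); the payload layout and message are illustrative assumptions.

```python
from Crypto.Cipher import AES            # pycryptodome
from Crypto.Random import get_random_bytes

BASES = "ACGT"  # two bits per nucleotide

def to_dna(data: bytes) -> str:
    """Encode each byte as four DNA bases (2 bits per base)."""
    return "".join(BASES[(b >> shift) & 0b11] for b in data for shift in (6, 4, 2, 0))

def from_dna(seq: str) -> bytes:
    out = bytearray()
    for i in range(0, len(seq), 4):
        b = 0
        for ch in seq[i:i + 4]:
            b = (b << 2) | BASES.index(ch)
        out.append(b)
    return bytes(out)

key = get_random_bytes(16)
cipher = AES.new(key, AES.MODE_EAX)
ciphertext, tag = cipher.encrypt_and_digest(b"edge sensor reading: 42.7 C")
dna_payload = to_dna(cipher.nonce + tag + ciphertext)   # what travels edge -> cloud
print(dna_payload[:32], "...")

# Receiver side: recover the bytes, then decrypt and verify with the shared key.
raw = from_dna(dna_payload)
nonce, tag, ct = raw[:16], raw[16:32], raw[32:]
print(AES.new(key, AES.MODE_EAX, nonce=nonce).decrypt_and_verify(ct, tag))
```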
  • A comparative study of different scheduling approaches for splittable latency sensitive tasks in Fog-Cloud environment

    Sandeep K.S., Koundinya C.A., Prabhas A.V., Swain C.K., Mishra S.K.

    Conference paper, 2023 2nd International Conference on Ambient Intelligence in Health Care, ICAIHC 2023, 2023, DOI Link

    View abstract ⏷

    IoT has revolutionized the way we live and work by connecting different devices through the Internet. The number of IoT devices is increasing rapidly with advances in technology and rising expectations of comfort; it is estimated that by the end of 2030 there will be around 30 billion users of IoT applications. These devices send data to the cloud for processing, but because the cloud is far from the IoT devices, application requests receive delayed service responses. To handle latency-sensitive applications, we therefore require micro-cloud services such as fog servers deployed near the data generation points. The fog layer lies between the IoT devices and the cloud, acting as an intermediate layer that reduces task latency and provides better performance. As the number of IoT applications keeps increasing, the resources available at the fog nodes may not be able to handle the upcoming demands. To address this, we use splittable methods to allocate tasks to fog/cloud nodes more compactly. If a task can be split into different modules before its deadline, we split the task, allocate the parts to different fog nodes/servers, collect the results back from those nodes/servers, and merge them into a single unit. With this method, we can increase the performance of the system.
  • Latency Aware – Resource Planning in Edge Using Fuzzy Logic

    Sahoo S.K., Dash A., Vemula D.R., Swain C.K., Mishra S.K.

    Conference paper, 2023 2nd International Conference on Ambient Intelligence in Health Care, ICAIHC 2023, 2023, DOI Link

    View abstract ⏷

    As a potential paradigm for enabling effective and low-latency computation at the network's edge, edge computing has recently come into the spotlight. In edge computing environments, resource allocation is essential for ensuring the best possible resource utilization while still satisfying application requirements. Traditional resource allocation algorithms, however, struggle to effectively capture the uncertainties and ambiguity associated with resource availability and application needs because of the dynamic and varied nature of edge environments. This research offers a fuzzy logic-based method for planning to allocate resources in edge computing. Fuzzy logic offers a flexible and understandable framework for modeling and reasoning with imperfect and ambiguous data. The suggested method offers a more reliable and adaptable resource allocation system that can successfully address the uncertainties present in edge computing by utilizing fuzzy logic. The resource allocation process incorporates fuzzy membership functions to capture the vagueness of resource availability and application requirements. Fuzzy rules are defined to map the linguistic variables representing resource availability, application demands, and performance objectives to appropriate resource allocation decisions. The fuzzy inference engine then utilizes these rules to make intelligent decisions regarding resource allocation, considering the fuzzy inputs and the system's predefined objectives.
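
    As a rough sketch of the kind of fuzzy inference described above, the snippet below maps two fuzzy inputs (current node load and how tight the application's latency requirement is) to a CPU-share decision using triangular membership functions and a small rule base. The universes, rules, and output values are illustrative assumptions rather than the paper's rule base.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership value of x for the fuzzy set (a, b, c)."""
    return float(np.clip(min((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0))

def allocate_share(load_pct, latency_req_ms):
    """Return a CPU-share fraction for an edge task from two fuzzy inputs."""
    # Input memberships (breakpoints are illustrative).
    load_low  = tri(load_pct, -1, 0, 60)
    load_high = tri(load_pct, 40, 100, 101)
    lat_tight = tri(latency_req_ms, -1, 0, 50)
    lat_loose = tri(latency_req_ms, 30, 200, 201)

    # Rule base (min for AND), each rule pointing at an output singleton.
    rules = [
        (min(load_low,  lat_tight), 0.8),   # node free + tight deadline -> big share
        (min(load_low,  lat_loose), 0.5),
        (min(load_high, lat_tight), 0.4),
        (min(load_high, lat_loose), 0.2),   # node busy + loose deadline -> small share
    ]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules) + 1e-9
    return num / den                         # weighted-average defuzzification

print(round(allocate_share(load_pct=30, latency_req_ms=20), 3))
```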
  • A Smart Logistic Classification Method for Remote Sensed Image Land Cover Data

    Sahu M., Dash R., Mishra S.K., Puthal D.

    Article, SN Computer Science, 2022, DOI Link

    View abstract ⏷

    A smart system integrates appliances for sensing, acquisition, classification, and management in order to interpret and analyze a situation and generate decisions based on the available data in a predictive way. Remotely sensed images are an essential tool for evaluating and analyzing land cover dynamics, particularly forest-cover change. The remote data gathered for this purpose from different sensors are of high spatial resolution and thus suffer from high inter-class and low intra-class variability, which degrades classification accuracy. To address this problem, this research proposes a smart logistic fusion-based supervised multi-class classification (SLFSMC) model to obtain a thematic map of different land cover types and thereby perform smart actions. In the pre-processing stage of the proposed work, a pair of closing and opening morphological operations is employed to produce a fused image that exploits the contextual information of adjacent pixels. The quality of the fused image is then assessed using four fusion metrics. In the second phase, this fused image is taken as input to the proposed classifiers. Afterward, a multi-class classification model is designed based on supervised learning to generate maps for analyzing and supporting decisions in any critical climatic situation. To compare the performance of the proposed SLFSMC against conventional classification techniques such as the Naïve Bayes classifier, decision tree, support vector machine, and K-nearest neighbors, a statistical tool called the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) is employed. We have applied the proposed SLFSMC system to some regions of Victoria, a state of Australia, after deforestation caused by various factors.
  • Crop Recommendation System Using Support Vector Machine Considering Indian Dataset

    Mishra T.K., Mishra S.K., Sai K.J., Peddi S., Surusomayajula M.

    Conference paper, Lecture Notes in Networks and Systems, 2022, DOI Link

    View abstract ⏷

    For many years, agriculture has been a major source of livelihood for Indians. Yet agriculture is often not profitable, and many farmers take drastic steps because they cannot survive the burden of loans; agriculture is therefore one area with large scope for development. In comparison with other countries, India has the highest production rate in agriculture; however, most agricultural fields are still underdeveloped due to the lack of deployment of ecosystem control technologies. Agriculture combined with technology can produce the finest results. Crop yield depends on multiple climatic conditions such as air temperature, soil temperature, humidity, and soil moisture. In general, farmers depend on self-monitoring and experience for harvesting fields. Scarcity of water is a major issue today, affecting people worldwide; since water is a vital component of crop yield, we consider rainfall instead of direct water measurements. Predicting crop selection/yield in advance of harvest would help policymakers and farmers take appropriate measures for farming, marketing, and storage. Thus, in this paper we propose crop selection using machine learning techniques, namely support vector machine (SVM) and polynomial regression. This model helps farmers estimate the yield of their crop before cultivating the agricultural field and thus make appropriate decisions. It attempts to solve the issue by building a prototype of an interactive prediction system. Accurate yield prediction requires understanding the functional relationship between yield and these parameters, because along with all advances in the machines and technologies used in farming, useful and accurate information also plays a significant role. In this paper, we have simulated the SVM and polynomial regression techniques to predict which crop can yield better profit. Both models are simulated comprehensively on the Indian dataset, and an analytical report is presented.
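
    A minimal sketch of the SVM-based recommendation idea, with hypothetical feature columns and a toy dataset standing in for the Indian dataset used in the paper:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Toy rows: [air_temp_C, soil_temp_C, humidity_pct, soil_moisture_pct, rainfall_mm]
# with crop labels; a real study would use the full dataset from the paper.
X = np.array([
    [31, 28, 80, 45, 220], [30, 27, 85, 50, 250],   # rice-like conditions
    [24, 20, 55, 30,  60], [23, 19, 50, 28,  55],   # wheat-like conditions
    [27, 25, 65, 35, 100], [28, 26, 60, 33,  90],   # maize-like conditions
])
y = ["rice", "rice", "wheat", "wheat", "maize", "maize"]

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
clf.fit(X, y)
print(clf.predict([[29, 26, 78, 48, 230]]))   # expected: a rice-like recommendation
```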
  • Combination of Reduction Detection Using TOPSIS for Gene Expression Data Analysis

    Tripathy J., Dash R., Pattanayak B.K., Mishra S.K., Mishra T.K., Puthal D.

    Article, Big Data and Cognitive Computing, 2022, DOI Link

    View abstract ⏷

    In high-dimensional data analysis, Feature Selection (FS) is one of the most fundamental issues in machine learning and requires the attention of researchers. These datasets are characterized by a huge feature space, out of which only a few features are significant for analysis; thus, significant feature extraction is crucial. Various techniques are available for feature selection. Among them, filter techniques are prominent in this community, as they can be used with any type of learning algorithm, drastically lower the running time of optimization algorithms, and improve the performance of the model. Furthermore, the suitability of a filter approach depends on the characteristics of the dataset as well as on the machine learning model. To avoid these issues, this research considers a combination of feature reduction (CFR) for designing a pipeline of filter approaches for high-dimensional microarray data classification. Considering four filter approaches, sixteen combinations of pipelines are generated. The feature subset is reduced at different levels, and ultimately the significant feature set is evaluated. The pipelined filter techniques are Correlation-Based Feature Selection (CBFS), Chi-Square Test (CST), Information Gain (InG), and Relief Feature Selection (RFS), and the classification techniques are Decision Tree (DT), Logistic Regression (LR), Random Forest (RF), and k-Nearest Neighbor (k-NN). The performance of CFR depends highly on the datasets as well as on the classifiers. Thereafter, the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) is used to rank all reduction combinations and identify the superior filter combination among them.
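
    TOPSIS itself is a standard procedure, sketched below for ranking a handful of hypothetical filter-pipeline results; the criteria, weights, and scores are illustrative, not taken from the paper.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives (rows) over criteria (columns) with TOPSIS.
    benefit[j] is True if a larger value of criterion j is better."""
    m = matrix / np.linalg.norm(matrix, axis=0)          # vector normalisation
    v = m * weights                                      # weighted normalised matrix
    ideal      = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti_ideal = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_plus  = np.linalg.norm(v - ideal, axis=1)
    d_minus = np.linalg.norm(v - anti_ideal, axis=1)
    return d_minus / (d_plus + d_minus)                  # closeness: higher is better

# Toy example: 4 filter pipelines scored on accuracy and F-score (benefit criteria)
# and on retained feature count (cost criterion); weights are illustrative.
scores = np.array([[0.92, 0.90, 120],
                   [0.95, 0.93, 300],
                   [0.90, 0.88,  80],
                   [0.94, 0.92, 150]], dtype=float)
closeness = topsis(scores, weights=np.array([0.4, 0.4, 0.2]),
                   benefit=np.array([True, True, False]))
print(np.argsort(-closeness))   # pipeline indices, best first
```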
  • A Data Aggregation Approach Exploiting Spatial and Temporal Correlation among Sensor Data in Wireless Sensor Networks

    Dash L., Pattanayak B.K., Mishra S.K., Sahoo K.S., Jhanjhi N.Z., Baz M., Masud M.

    Article, Electronics (Switzerland), 2022, DOI Link

    View abstract ⏷

    Wireless sensor networks (WSNs) have various applications, including zone surveillance, environmental monitoring, and event tracking, where the operation mode is long term. WSNs are characterized by low-powered, battery-operated sensor devices with a finite source of energy. Due to the dense deployment of these devices, it is practically impossible to replace the batteries, so the finite energy should be used meaningfully to maximize the overall network lifetime. In the spatial domain, there is a high correlation among the observations of densely deployed sensors, while consecutive observations of a node exhibit temporal correlation that depends on the nature of the physical phenomenon being sensed. These spatio-temporal correlations can be exploited to maximize energy savings. In this paper, we propose a Spatial and Temporal Correlation-based Data Redundancy Reduction (STCDRR) protocol that eliminates redundancy at the source level and the aggregator level. The estimated performance score of the proposed algorithm is approximately 7.2, whereas the scores of existing algorithms such as KAB (K-means algorithm based on the ANOVA model and Bartlett test) and ED (Euclidean distance) are 5.2 and 0.5, respectively. This reflects that the STCDRR protocol can achieve a higher data compression rate and lower false-negative and false-positive rates. These results are valid for numeric data collected from a real dataset; the experiment does not consider non-numeric values.
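
    A minimal sketch of source-level temporal redundancy suppression, under the simple assumption that a reading is redundant when it differs from the last transmitted value by less than a threshold; the actual STCDRR protocol also exploits spatial correlation at the aggregator, which is not shown here.

```python
def suppress_redundant(readings, epsilon=0.5):
    """Source-level temporal filtering: a node transmits a reading only when it
    differs from the last transmitted value by more than epsilon."""
    transmitted = []
    last = None
    for r in readings:
        if last is None or abs(r - last) > epsilon:
            transmitted.append(r)
            last = r
    return transmitted

temps = [24.0, 24.1, 24.2, 24.9, 25.0, 25.1, 27.3, 27.4]
sent = suppress_redundant(temps, epsilon=0.5)
print(f"sent {len(sent)} of {len(temps)} readings: {sent}")
```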
  • Task Allocation in Containerized Cloud Computing Environment

    Akram Khan M., Kumar Mishra S., Kumari A., Sahoo B.

    Conference paper, ASSIC 2022 - Proceedings: International Conference on Advancements in Smart, Secure and Intelligent Computing, 2022, DOI Link

    View abstract ⏷

    Containerization technology makes use of operating-system-level virtualization to package an application with its required libraries so that it runs isolated from other processes on the same host. The lightweight, easy deployment of containers has made them popular in many data centers; they have captured a large share of the virtual machine market and emerged as a lightweight technology that offers better microservices support. Many organizations widely deploy container technology to handle diverse and unpredictable workloads derived from modern applications such as edge/fog computing, Big Data, and IoT, in either proprietary clusters or public and private cloud data centers. In the cloud computing environment, scheduling plays a pivotal role, and in container technology likewise, scheduling is critical to achieving optimum utilization of the available resources. Designing an efficient scheduler is itself a challenging task. The challenges arise from various aspects, such as the diversity of computing resources, maintaining fairness among numerous tenants sharing resources as per their requirements, unexpected variation in resource demands, and the heterogeneity of jobs. This survey provides a multi-perspective overview of container scheduling. We organize the container scheduling problem into four categories based on the type of optimization algorithm applied: linear programming models, heuristics, meta-heuristics, and machine learning or artificial intelligence-based mathematical models. Previous research has addressed either the placement of virtual machines on physical machines or of container instances on physical machines, which leads to either under-utilized or over-utilized PMs. In this paper, we combine both virtualization technologies, containers as well as VMs. The primary aim is to optimize resource utilization in terms of CPU time. We propose a meta-heuristic algorithm named Sorted Task-Based Allocation (TBA). Simulation results show that the proposed Sorted TBA algorithm performs better than the Random and Unsorted TBA algorithms.
  • VM consolidation based on overload detection and VM selection policy

    Jena S., Sahu L.K., Mishra S.K., Sahoo B.

    Conference paper, Proceedings of the Confluence 2021: 11th International Conference on Cloud Computing, Data Science and Engineering, 2021, DOI Link

    View abstract ⏷

    Even though cloud computing has been a big boon to the ICT (Information and Communication Technology) industry, it faces high energy consumption and substantial CO2 emission. Due to the increase in demand for computational resources, it is now necessary and of utmost significance to improve the energy consumption of the cloud system. Virtual Machine (VM) consolidation is one of the powerful tools for improving energy efficiency, as it reduces the number of VM migrations by managing VMs on overloaded/underloaded hosts. Implementing VM consolidation techniques decreases hardware consumption, energy consumption, and data footprints, leading to improved Quality of Service (QoS). In this paper, an energy-aware VM selection algorithm is proposed along with an overload detection algorithm. The proposed algorithm runs in the CloudSim toolkit environment and is analyzed with respect to parameters such as energy consumption, SLA violation, server shutdowns, and the number of VM migrations to assess the improvement in energy efficiency. This modified approach exhibited better performance on all the parameters compared with existing algorithms.
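
    A toy sketch of the two building blocks named above, with a static utilization threshold for overload detection and a minimum-RAM rule standing in for the paper's VM selection policy; both choices are illustrative assumptions.

```python
def is_overloaded(cpu_history, threshold=0.8):
    """Static-threshold overload detection on recent CPU utilisation samples."""
    return sum(cpu_history) / len(cpu_history) > threshold

def select_vm_to_migrate(vms):
    """Pick the VM with the smallest memory footprint, as a proxy for the
    shortest migration time (one of several possible selection policies)."""
    return min(vms, key=lambda vm: vm["ram_mb"])

host = {
    "cpu_history": [0.92, 0.88, 0.85],
    "vms": [{"id": "vm1", "ram_mb": 2048},
            {"id": "vm2", "ram_mb": 512},
            {"id": "vm3", "ram_mb": 1024}],
}
if is_overloaded(host["cpu_history"]):
    victim = select_vm_to_migrate(host["vms"])
    print("migrate", victim["id"])      # -> vm2, the cheapest VM to move
```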
  • Analysis of Machine Learning Technologies for the Detection of Diabetic Retinopathy

    Mohanty B.C.S., Mishra S., Mishra S.K.

    Book chapter, Machine Learning for Healthcare Applications, 2021, DOI Link

    View abstract ⏷

    In today's world, disease diagnosis plays a vital role in the area of medical imaging. Medical imaging is the method and procedure of creating visual representations of the interior of a body for clinical investigation and intervention, as well as visual depiction of the function of some organs or tissues. Medical imaging also deals with disease detection, and by using machine learning in medical imaging we can obtain a better view for detecting disease. So what is Machine Learning (ML)? ML is an application of artificial intelligence (AI) that gives a system the capacity to learn and improve itself; it focuses on the development of computer programs that can access data and use it for themselves. In this chapter we focus on detecting diabetic retinopathy using machine learning. Diabetes is a disease that results in too much sugar in the blood, and diabetic retinopathy is one of its serious complications: an eye disease brought about by the complications of diabetes, which should be recognized early for effective treatment. As the disease advances, the sight of a patient may begin to deteriorate, leading to diabetic retinopathy. Two groups are recognized, namely non-proliferative diabetic retinopathy and proliferative diabetic retinopathy. We should detect it as soon as possible, as it can cause permanent loss of vision. By using ML in medical imaging we can detect it much faster and more accurately. In this chapter we analyze different ML technologies, algorithms, and models to diagnose diabetic retinopathy efficiently and support the healthcare system.
  • Facial expression recognition system (fers): A survey

    Mishra S., Gupta R., Mishra S.K.

    Conference paper, Smart Innovation, Systems and Technologies, 2021, DOI Link

    View abstract ⏷

    Human facial expressions and emotions are considered the fastest communication medium for expressing thoughts. The ability to identify the emotional states of people around us is an essential component of natural communication. A facial expression and emotion detector can be used to determine whether a person is sad, happy, angry, and so on, allowing us to better understand that person's thoughts and ideas. This paper briefly explores the idea of a computerized facial expression detection system. First, we present an overview of the facial expression recognition system (FERS), along with a glimpse of current technologies used for FERS. A comparative analysis of existing methodologies is also presented. The paper provides basic information and a general understanding of up-to-date state-of-the-art studies, and experienced researchers can look to it for productive directions for future work.
  • Crop Recommendation System using KNN and Random Forest considering Indian Data set

    Mishra T.K., Mishra S.K., Sai K.J., Alekhya B.S., Nishith A.R.

    Conference paper, Proceedings - 2021 19th OITS International Conference on Information Technology, OCIT 2021, 2021, DOI Link

    View abstract ⏷

    Agriculture plays a crucial role in the growth of the country's economy. In comparison to other countries, India has the highest production rate in agriculture. Agriculture combined with technology can produce the finest results. Crop prediction is a highly complex task determined by multiple factors such as nitrogen, phosphorus, and potassium content, rainfall, temperature, humidity, and pH level. Predicting the crop in advance would help policymakers and farmers take appropriate measures for farming, marketing, and storage. Thus, in this paper we propose crop selection using machine learning techniques such as K-Nearest Neighbour (KNN) and Random Forest. Both models are simulated comprehensively on an Indian dataset, and an analytical report is presented. This model will help farmers know the appropriate crop type before cultivating the agricultural field and thus help them make appropriate decisions.
  • A Static Approach for Access Control with an Application-Derived Intrusion System

    Chattopadhyay S., Mishra S., Mishra S.K.

    Conference paper, Smart Innovation, Systems and Technologies, 2021, DOI Link

    View abstract ⏷

    In the era of cyberspace, enforcing an Intrusion Detection System (IDS) and a firewall on a system is common practice among network administrators and engineers. But with time, merely implementing an IDS and a firewall is no longer enough to secure our systems, especially given the present trend of new malware attacks. It is quite easy to victimize a machine, even with IDS and firewalls enforced on the network, by simply uploading shells in the form of pdf, jpg, txt, etc. Because a machine can be victimized without much effort, we probe a new approach to overcome this anomaly. Understandably, with the increasing demand for IoT devices in the market, safeguarding these devices is also a big challenge. Motivated by this problem, we perform inspections to maintain stability and functionality by adding code that allows the application to keep track of its operating constraints during an attack. Against this background, we discuss intrusion detection systems, firewalls, and their applicability, and identify open challenges in this direction.
  • A real-time sentiments analysis system using twitter data

    Dave A., Bharti S., Patel S., Mishra S.K.

    Conference paper, Smart Innovation, Systems and Technologies, 2021, DOI Link

    View abstract ⏷

    As social media platforms become the go-to outlet for knee-jerk reactions to events by the current populace, it has become extremely important for event managers, celebrities, and organizations to constantly monitor their perceived social image online. This becomes especially difficult during key periods of heightened activity, such as events and announcements, because the rate at which tweets are posted is much higher than what a human can read or comprehend. In this paper, we exploit existing sentiment analysis techniques to develop a real-time sentiment analysis system that provides real-time sentiments of the audience on the micro-blogging site Twitter toward an event, organization, or person. This system acts as a feedback mechanism, helping users understand the perceived image of the event or organization. This feedback, if provided in a timely manner, can be used to improve the situation at hand or act as positive reinforcement for the team. In today's world, neglecting social media can prove detrimental to the success of an event or organization. We analyze two different events from two separate domains to understand and demonstrate the benefits of our system.
  • Energy-efficient clustering with rotational supporter in wsn

    Parida P., Sahu B., Parida A.K., Mishra S.K.

    Conference paper, Smart Innovation, Systems and Technologies, 2021, DOI Link

    View abstract ⏷

    The wireless sensor network is an evergreen field of research, and sensors are used everywhere. Since the sensors are small in size and have a limited amount of initial energy, energy saving becomes highly important and challenging. Wherever these sensors are deployed, they may or may not be accessible at all times; hence, they should run a suitable algorithm to utilize energy efficiently. We have proposed an energy-saving algorithm that reduces the overheads of the cluster head (CH). To assist the CH, an assistant called the supporting CH (SCH) is selected. This responsibility is rotational, so most of the nodes get a chance to serve as CH and the energy utilization is uniform. Through the proposed algorithm, the lifetime of the network is increased. The proposed algorithm is simulated using the NS3 simulator and demonstrates energy-efficient clustering and an increased lifetime compared with algorithms that do not use an SCH.
  • Energy-aware task allocation for multi-cloud networks

    Mishra S.K., Mishra S., Alsayat A., Jhanjhi N.Z., Humayun M., Sahoo K.S., Luhach A.K.

    Article, IEEE Access, 2020, DOI Link

    View abstract ⏷

    In recent years, the growth of cloud computing technology has been increasing exponentially, mainly because of its extraordinary services with expanding computation power, the possibility of massive storage, and all other services with maintained quality of service (QoS). Task allocation is one of the best solutions for improving different performance parameters in the cloud, but when multiple heterogeneous clouds come into the picture, the allocation problem becomes more challenging. This research work proposes a resource-based task allocation algorithm, which is implemented and analyzed to understand the improved performance of the heterogeneous multi-cloud network. The proposed task allocation algorithm, Energy-aware Task Allocation in Multi-Cloud Networks (ETAMCN), minimizes the overall energy consumption and also reduces the makespan. The results show that the makespan approximately overlaps for different tasks and does not show a significant difference. However, the average energy consumption improvement through ETAMCN is approximately 14%, 6.3%, and 2.8% compared with the random allocation algorithm, the Cloud Z-Score Normalization (CZSN) algorithm, and the multi-objective scheduling algorithm with fuzzy resource utilization (FR-MOS), respectively. An observation of the average SLA violation of ETAMCN for different scenarios is also performed.
  • Autonomic cloud resource provisioning and scheduling using meta-heuristic algorithm

    Kumar M., Sharma S.C., Goel S., Mishra S.K., Husain A.

    Article, Neural Computing and Applications, 2020, DOI Link

    View abstract ⏷

    Resource provisioning and scheduling is a prominent problem due to the heterogeneity and dispersion of cloud resources. Cloud service providers are building more and more data centers to meet the demand for high computational power, which poses a serious threat to the environment in terms of energy requirements. To overcome these issues, we need an efficient meta-heuristic technique that allocates applications among the virtual machines fairly and optimizes quality of service (QoS) parameters to meet end-user objectives. Binary particle swarm optimization (BPSO) is used to solve real-world discrete optimization problems, but simple BPSO does not provide an optimal solution due to the improper behavior of its transfer function. To overcome this problem, we have modified the transfer function of binary PSO so that it provides better exploration and exploitation capability and optimizes various QoS parameters such as makespan, energy consumption, and execution cost. The computational results demonstrate that the modified transfer-function-based BPSO algorithm is more efficient and outperforms the baseline algorithm on various synthetic datasets.
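
    The abstract does not specify the modified transfer function, but the snippet below sketches the role a transfer function plays in binary PSO, contrasting the classic S-shaped (sigmoid) mapping with a V-shaped alternative that is often used to balance exploration and exploitation; both are generic examples rather than the paper's function.

```python
import numpy as np

rng = np.random.default_rng(1)

def s_shaped(v):
    """Classic sigmoid transfer function used in binary PSO."""
    return 1.0 / (1.0 + np.exp(-v))

def v_shaped(v):
    """A V-shaped alternative often used to improve exploration/exploitation."""
    return np.abs(np.tanh(v))

def flip_positions(x, velocity, transfer):
    """Map continuous velocities to bit probabilities and update the bit vector."""
    prob = transfer(velocity)
    if transfer is s_shaped:
        # S-shaped: each bit becomes 1 with probability prob.
        return (rng.random(x.shape) < prob).astype(int)
    # V-shaped: each bit is complemented with probability prob.
    flip = rng.random(x.shape) < prob
    return np.where(flip, 1 - x, x)

bits = rng.integers(0, 2, size=8)      # a candidate task-to-VM assignment mask
vel = rng.normal(0, 1, size=8)         # particle velocity from the PSO update
print("S-shaped update:", flip_positions(bits, vel, s_shaped))
print("V-shaped update:", flip_positions(bits, vel, v_shaped))
```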
  • Leukemia Diagnosis Based on Machine Learning Algorithms

    Patil Babaso S., Mishra S.K., Junnarkar A.

    Conference paper, 2020 IEEE International Conference for Innovation in Technology, INOCON 2020, 2020, DOI Link

    View abstract ⏷

    Leukemia is brought about by the rapid production of abnormal white blood cells. These abnormal white blood cells are not able to fight infection, and they impair the ability of the bone marrow to produce red blood cells and platelets. Machine learning techniques are widely used in the diagnosis and classification of different leukemia types in patients. In this paper, we describe different machine learning algorithms such as Support Vector Machines, k-Nearest Neighbour, Neural Networks, Naïve Bayes, and deep learning algorithms that are used to classify leukemia into its sub-types, and we present a comparative study of these algorithms.
  • Energy-Efficient Service Allocation Techniques in Cloud: A Survey

    Mishra S.K., Sahoo S., Sahoo B., Jena S.K.

    Review, IETE Technical Review (Institution of Electronics and Telecommunication Engineers, India), 2020, DOI Link

    View abstract ⏷

    The demand for cloud computing infrastructure is increasing day by day to meet the requirements of small and medium enterprises. Data-center-centric cloud technology accounts for a high share of the IT industry's energy consumption. The amount of energy consumed in a data center depends on the allocation of user service requests to virtual machines running on different hosts. Minimizing energy consumption in the data center is a significant issue and is addressed by the optimal allocation of cloud resources. In this paper, we discuss how service allocation strategies have been used to optimize energy consumption in a cloud system. A generalized system architecture is presented, based on which we define the service allocation problem and the energy model. Further, we present a taxonomy of the various energy-efficient resource allocation techniques found in the literature. Finally, various research challenges related to energy-efficient service allocation in the cloud are discussed.
  • Token based data security in inter cluster communication in wireless sensor network

    Sahu B., Parida P., Parida A.K., Mishra S.K.

    Conference paper, 2020 International Conference on Computer Science, Engineering and Applications, ICCSEA 2020, 2020, DOI Link

    View abstract ⏷

    In this paper, a data security operation is performed for inter-cluster communication, based on token identification of the clusters. The sender cluster checks the identification of the receiver cluster before any communication is initiated. Each cluster is represented by its head node, and the head nodes are assigned a token by the base station. The token number is called the identification number (IN) of the head node and hence of the cluster. The proposed idea is simulated using the NS3 simulator, and its performance with respect to security is evaluated and compared with other algorithms.
  • Load balancing in cloud computing: A big picture

    Mishra S.K., Sahoo B., Parida P.P.

    Review, Journal of King Saud University - Computer and Information Sciences, 2020, DOI Link

    View abstract ⏷

    Scheduling, or the allocation of user requests (tasks) in the cloud environment, is an NP-hard optimization problem. Depending on the cloud infrastructure and the user requests, the cloud system carries some load and may be underloaded, overloaded, or balanced. Underloaded and overloaded situations cause different system failures concerning power consumption, execution time, machine failure, etc. Therefore, load balancing is required to overcome all of these problems. This balancing of tasks (which may be dependent or independent) across virtual machines (VMs) is a significant aspect of task scheduling in clouds. There are various types of load in the cloud network, such as memory load, computation (CPU) load, and network load. Load balancing is the mechanism of detecting overloaded and underloaded nodes and then balancing the load among them. Researchers have proposed various load balancing approaches in cloud computing to optimize different performance parameters. We present a taxonomy of load balancing algorithms in the cloud, along with a brief explanation of the performance parameters considered in the literature and their effects. To analyze the performance of heuristic-based algorithms, simulations are carried out in the CloudSim simulator and the results are presented in detail.
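
    As a small illustration of a heuristic load-balancing policy of the kind surveyed here, the sketch below greedily places each task on the VM whose projected utilization after the assignment is lowest; the task lengths and VM capacities are made-up numbers.

```python
def balance(tasks, vm_capacities):
    """Greedy heuristic: place each task on the VM with the lowest
    projected utilisation (load / capacity) after the assignment."""
    loads = [0.0] * len(vm_capacities)
    placement = []
    for task in sorted(tasks, reverse=True):          # largest tasks first
        target = min(range(len(loads)),
                     key=lambda i: (loads[i] + task) / vm_capacities[i])
        loads[target] += task
        placement.append(target)
    return placement, [l / c for l, c in zip(loads, vm_capacities)]

tasks = [4, 7, 2, 9, 3, 5]          # task lengths (e.g., millions of instructions)
caps  = [10, 20, 15]                # VM capacities (e.g., MIPS)
print(balance(tasks, caps))         # placement and final per-VM utilisation
```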
  • Allocation of energy-efficient task in cloud using DVFS

    Mishra S.K., Khan M.A., Sahoo S., Sahoo B.

    Article, International Journal of Computational Science and Engineering, 2019, DOI Link

    View abstract ⏷

    Nowadays, the expanding computational capabilities of cloud systems rely on minimizing the consumed power to make them sustainable and economically productive. Power management of cloud data centres has received great attention from industry and academia, as these centres consume large amounts of energy and thus increase operational cost. One of the core approaches to energy conservation in the cloud data centre is task scheduling. Task allocation in a heterogeneous environment is a well-known NP-hard problem, which has led researchers to propose various heuristic techniques. In this paper, a technique based on dynamic voltage and frequency scaling (DVFS) is proposed for optimising energy consumption in the cloud environment. The basic idea is to address the trade-off between the energy consumption and the makespan of the system. We formally introduce a model that includes various subsystems and assess the implementation of the algorithm in a heterogeneous environment.
  • A secure VM consolidation in cloud using learning automata

    Mishra S.K., Sahoo B., Jena S.K.

    Book chapter, Advances in Intelligent Systems and Computing, 2019, DOI Link

    View abstract ⏷

    The cloud computing system is a progression of distributed systems that has been adopted worldwide, both scientifically and commercially. For optimal utilization of the cloud's potential power, effective and efficient algorithms are needed to select the best resources from the available cloud resources for different applications. This allocation of user requests to cloud resources can optimize several parameters, such as energy consumption, makespan, and throughput. In this paper, we propose a learning-automata-based algorithm that minimizes the makespan of the cloud system and increases resource utilization while maintaining secure resource allocation. We have simulated our algorithm, ALOLA, with the help of the CloudSim simulator in a heterogeneous environment. For the comparison, we provide a finite set of tasks to the ALOLA algorithm once and estimate the makespan of the system. We compare the proposed technique (ALOLA), i.e., with learning automata, against a random allocation algorithm without learning automata, and show the system performance.
  • Secure Big Data Computing in Cloud: An Overview

    Mishra S.K., Sahoo S., Sahoo B.

    Book chapter, Encyclopedia of Big Data Technologies, 2019, DOI Link

    View abstract ⏷

    Advancement in information technology, together with rapid growth in other areas such as business, medicine, engineering, and scientific research, has resulted in the generation of huge volumes of data. Decision-making from such rapidly growing data is a challenging job in terms of data management and processing, which is termed big data computing. Big data computing demands voluminous storage and computing for data processing, which is delivered to the user through cloud infrastructures. The complexity of the system reduces the security level, which is a challenging issue for researchers. This paper elaborates on the evolution of big data computing, security issues of big data computing in the cloud, different solutions for providing a better security level, and finally open technical challenges and future directions.
  • An Improved Approach for Sarcasm Detection Avoiding Null Tweets

    Bharti S.K., Babu K.S., Mishra S.K.

    Conference paper, Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2019, DOI Link

    View abstract ⏷

    Among the plethora of social media, Twitter has emerged as the favorite destination for researchers in recent times. Many researchers are inclined to work on Twitter due to the availability of massive numbers of tweets and its unique features such as hashtags and short messages. In recent times, various studies have preferred the hashtags (#sarcasm and #sarcastic) for collecting Twitter datasets for sarcasm detection. However, hashtag-based distant supervision suffers from the inclusion of null tweets in the datasets, which is a critical problem for sarcasm detection. In this article, an algorithm is proposed for the automatic detection and filtration of null tweets in Twitter data. Additionally, an algorithm to identify sarcastic tweets using context within a tweet is proposed; this approach uses dictionaries of handpicked hashtag words and emoticons as the context within a tweet. Finally, we deploy a rule-based algorithm to analyse the performance of the proposed approach, which attains an accuracy of 97.3% after filtering null tweets and 83.13% without filtering null tweets. The results show that, after the elimination of null tweets, the performance of the proposed system improves significantly.
  • Co-resident Attack in Cloud Computing: An Overview

    Sahoo S., Mishra S.K., Sahoo B., Turuk A.K.

    Book chapter, Encyclopedia of Big Data Technologies, 2019, DOI Link

    View abstract ⏷

    A cloud rewards organizations with agility and cost-efficiency, but the benefits of the cloud come with security challenges. The sheer volume and immense size of modern-day clouds (big data) make them hard to protect and, consequently, vulnerable to abuse. Security and privacy issues are intensified by the velocity, volume, and variety of big data, such as large-scale cloud infrastructures, the diversity of data sources and formats, and the massive amount of inter-cloud migration. The virtualization method allows computing resources to be shared among many tenants, which may be business partners, suppliers, competitors, or attackers. Even though there is substantial logical isolation among the virtual machines (VMs), shared hardware creates vulnerabilities to co-resident attacks. This paper gives a glimpse of security issues in the cloud, specifically related to VMs. Here, we concentrate our study on the co-resident VM attack and its defense methods.
  • Resource allocation for video transcoding in the multimedia cloud

    Sahoo S., Parida I., Mishra S.K., Sahoo B., Turuk A.K.

    Book chapter, Advances in Intelligent Systems and Computing, 2019, DOI Link

    View abstract ⏷

    Video content providers like YouTube and Netflix deliver their content, i.e., news and shows, on the web, accessible anytime and anywhere. Multi-screen devices such as TVs, smartphones, and laptops create a demand to transcode video into the appropriate video specification while ensuring different quality of service (QoS) requirements such as delay. Transcoding a large, high-definition video requires a lot of time and computation. Cloud transcoding allows video service providers to overcome these difficulties through the pay-as-you-use scheme, with the assurance of online support to handle unpredictable demands. This paper presents a cost-efficient cloud-based transcoding framework and algorithm (CVS) for streaming service providers. The dynamic resource provisioning policy used in the framework finds the number of virtual machines required for a particular set of video streams. Simulation results based on a YouTube dataset show that the CVS algorithm performs better than the FCFS scheme.
  • An adaptive task allocation technique for green cloud computing

    Mishra S.K., Puthal D., Sahoo B., Jena S.K., Obaidat M.S.

    Article, Journal of Supercomputing, 2018, DOI Link

    View abstract ⏷

    The rapid growth of today's IT demands is reflected in the increased use of cloud data centers. Reducing computational power consumption in cloud data centers is one of the challenging research issues of the current era. Power consumption is directly proportional to the number of resources assigned to tasks, so it can be reduced by limiting the number of resources assigned to serve a task. In this paper, we study energy consumption in the cloud environment for a variety of services and provide mechanisms to promote green cloud computing, which helps reduce the overall energy consumption of the system. Task allocation in the cloud computing environment is a well-known problem, and through it we can facilitate green cloud computing. We propose an adaptive task allocation algorithm for the heterogeneous cloud environment and apply it to minimize the makespan of the cloud system and reduce energy consumption. We have evaluated the proposed algorithm in the CloudSim simulation environment, and the simulation results show that it is more energy efficient in the cloud environment than other existing techniques.
  • On the placement of controllers in software-Defined-WAN using meta-heuristic approach

    Sahoo K.S., Puthal D., Obaidat M.S., Sarkar A., Mishra S.K., Sahoo B.

    Article, Journal of Systems and Software, 2018, DOI Link

    View abstract ⏷

    Software Defined Networks (SDN) is a popular modern network technology that decouples the control logic from the underlying hardware devices. The control logic is implemented as a software entity that resides in a server called the controller. In a Software-Defined Wide Area Network (SDWAN) with n nodes, deploying k controllers (k < n) is a challenging issue. When the primary path between a switch and its controller fails due to internal or external factors, the network's availability is severely interrupted. In this regard, the proposed approach provides a seamless backup mechanism against single link failure with minimum communication delay, based on a survivability model. To obtain an efficient solution, we treat the controller placement problem (CPP) as a multi-objective combinatorial optimization problem and solve it using two population-based meta-heuristic techniques: Particle Swarm Optimization (PSO) and the Firefly Algorithm (FFA). For the CPP, three metrics are considered: (a) controller-to-switch latency, (b) inter-controller latency, and (c) multi-path connectivity between switch and controller. The performance of the algorithms is evaluated on a set of publicly available network topologies to obtain the optimum number of controllers and the controller positions. We then present the Average Delay Rise (ADR) metric to measure the increased delay due to failure of the primary path. By comparing the performance of our scheme with a competing scheme, we find that the proposed scheme effectively improves the survivability of the control path as well as the performance of the network.
  • 2D-DWT and Bhattacharyya Distance Based Classification Scheme for the Detection of Acute Lymphoblastic Leukemia

    Mishra S., Mishra S.K., Majhi B., Sa P.K.

    Conference paper, Proceedings - 2018 International Conference on Information Technology, ICIT 2018, 2018, DOI Link

    View abstract ⏷

    This paper proposes an efficient classification system for separating normal blood cells from pathological cells. The suggested system employs an adaptive histogram equalization scheme to reduce the noise present in the microscopic images. A two-dimensional discrete wavelet transform (2D-DWT) is applied separately to the nucleus and cytoplasm regions to generate the feature matrix. Significant, uncorrelated features are chosen using a combination of PCA and the Bhattacharyya distance. Subsequently, the reduced feature set is fed to a back-propagation neural network for classification. The public dataset ALL-IDB1 is used to validate the proposed scheme, which shows better results than competing schemes. The accuracy of the suggested scheme is found to be 97.11% when features from the nucleus and cytoplasm regions are combined, whereas it is 95.19% and 90.38% when the features are taken separately.
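
    A brief sketch of the feature-extraction and separability ideas, assuming a segmented image patch: a single-level 2D-DWT (via PyWavelets) yields sub-band statistics, and the Bhattacharyya distance between per-class Gaussian feature distributions indicates how separable a feature is. The patch and class statistics are synthetic placeholders.

```python
import numpy as np
import pywt

# Hypothetical nucleus-region patch; a real pipeline would use the segmented
# microscopic image after adaptive histogram equalisation.
patch = np.random.default_rng(0).random((64, 64))

# Single-level 2D discrete wavelet transform: approximation + detail sub-bands.
cA, (cH, cV, cD) = pywt.dwt2(patch, "haar")
features = [band.mean() for band in (cA, cH, cV, cD)] + \
           [band.std() for band in (cA, cH, cV, cD)]
print(len(features), "wavelet features")

def bhattacharyya(m1, s1, m2, s2):
    """Bhattacharyya distance between two univariate Gaussian feature
    distributions; larger values indicate better class separability."""
    return 0.25 * np.log(0.25 * (s1**2 / s2**2 + s2**2 / s1**2 + 2)) + \
           0.25 * (m1 - m2) ** 2 / (s1**2 + s2**2)

# Toy per-class statistics of one wavelet feature (normal vs. pathological).
print(bhattacharyya(m1=0.42, s1=0.05, m2=0.55, s2=0.07))
```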
  • VM Selection using DVFS Technique to Minimize Energy Consumption in Cloud System

    Mishra S.K., Mishra S., Bharti S.K., Sahoo B., Puthal D., Kumar M.

    Conference paper, Proceedings - 2018 International Conference on Information Technology, ICIT 2018, 2018, DOI Link

    View abstract ⏷

    Energy consumption is becoming a key issue for the operation and maintenance of cloud systems. Virtual machine selection plays an important role in executing tasks without violating the SLA. In this paper, a VM selection technique using Dynamic Voltage and Frequency Scaling (DVFS) is proposed for optimizing the energy consumption and makespan of the cloud system. We propose a heuristic that selects a VM for each task so as to optimize energy utilization by applying the DVFS technique. The proposal also incorporates an energy model supporting the evaluation of energy consumption in cloud data centers. Each task has an energy-based SLA to execute in the cloud system, and the DVFS mechanism is applied at the virtual machine level to reduce the energy of the cloud system. Moreover, the performance of other algorithms (random allocation and FCFS) is compared with the proposed DVFS-based VM selection strategy with the help of CloudSim.
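
    A minimal sketch of a DVFS-style decision, assuming dynamic power grows roughly with the cube of frequency and execution time scales inversely with it: pick the lowest frequency level that still meets the task's deadline. All constants are illustrative, not the paper's energy model.

```python
# Illustrative DVFS energy model: dynamic power P ~ C * V^2 * f, and with V ~ f
# this grows roughly with f^3, while execution time scales inversely with f.
def energy_and_time(task_cycles, f_ghz, p_max_w=80.0, f_max_ghz=2.4):
    time_s = task_cycles / (f_ghz * 1e9)
    power_w = p_max_w * (f_ghz / f_max_ghz) ** 3
    return power_w * time_s, time_s

def pick_frequency(task_cycles, deadline_s, levels=(1.2, 1.6, 2.0, 2.4)):
    """Choose the lowest-energy DVFS level that still meets the task's deadline."""
    feasible = []
    for f in levels:
        e, t = energy_and_time(task_cycles, f)
        if t <= deadline_s:
            feasible.append((e, f, t))
    return min(feasible) if feasible else None

print(pick_frequency(task_cycles=3e9, deadline_s=2.0))   # (energy J, GHz, time s)
```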
  • Energy-efficient VM-placement in cloud data center

    Mishra S.K., Puthal D., Sahoo B., Jayaraman P.P., Jun S., Zomaya A.Y., Ranjan R.

    Article, Sustainable Computing: Informatics and Systems, 2018, DOI Link

    View abstract ⏷

    Employing cloud computing to acquire its benefits by optimizing various parameters that meet changing demands is a challenging task. Optimal mapping of tasks to virtual machines (VMs) and of VMs to physical machines (PMs), known as VM placement, is necessary for improving energy consumption and resource utilization. The high heterogeneity of tasks and resources, great dynamism, and virtualization make the consolidation issue more complicated in the cloud computing system. In this paper, a complete mapping (i.e., task to VM and VM to PM) algorithm is proposed. Tasks are classified according to their resource requirements, after which the appropriate VM is searched for and then the appropriate PM on which the selected VM can be deployed. The proposed algorithm reduces energy consumption by decreasing the number of active PMs, while also minimizing the makespan and the task rejection rate. We have evaluated the proposed approach in the CloudSim simulator, and the results demonstrate its effectiveness over existing standard algorithms.
  • Sustainable Service Allocation Using a Metaheuristic Technique in a Fog Server for Industrial Applications

    Mishra S.K., Puthal D., Rodrigues J.J.P.C., Sahoo B., Dutkiewicz E.

    Article, IEEE Transactions on Industrial Informatics, 2018, DOI Link

    View abstract ⏷

    Reducing energy consumption in the fog computing environment is both a research and an operational challenge for the current research community and industry. Several industries, such as finance and healthcare, require a rich resource platform to process big data along with edge computing in the fog architecture. As a result, sustainable computing in a fog server plays a key role in the fog computing hierarchy. The energy consumption in fog servers depends on the techniques used to allocate services (user requests) to a set of virtual machines (VMs). This service request allocation in a fog computing environment is a nondeterministic polynomial-time hard (NP-hard) problem. In this paper, the scheduling of service requests to VMs is presented as a bi-objective minimization problem, where a tradeoff is maintained between energy consumption and makespan. Specifically, this paper proposes a metaheuristic-based service allocation framework using three metaheuristic techniques: particle swarm optimization (PSO), binary PSO, and the bat algorithm. These techniques allow us to deal with the heterogeneity of resources in the fog computing environment. We have validated the performance of these metaheuristic-based service allocation algorithms through a set of rigorous evaluations.
  • First score auction for pricing-based resource selection in vehicular cloud

    Mishra S., Mishra S.K., Sahoo B., Obaidat M.S., Puthal D.

    Conference paper, CITS 2018 - 2018 International Conference on Computer, Information and Telecommunication Systems, 2018, DOI Link

    View abstract ⏷

    Selecting vehicles to supply resources is a crucial research problem in the vehicular cloud and depends heavily on the pricing of the resources. Resource pricing, in turn, is an intricate problem influenced by market demand and the quality of service provided. A widespread and autonomous vehicular network requires reputation as a basis for trusting the supplier vehicles. Taking the above factors into account, we design the utility of supplier and consumer vehicles. Subsequently, a first-score auction mechanism is proposed and modeled so that the consumer vehicles obtain maximum utility. Additionally, the protocol enables the supplier vehicles to decide the optimal pricing of resources. The first-score auction protocol is then simulated, and the experimental results indicate better performance of our protocol than other standard protocols.
  • Improving Energy Usage in Cloud Computing Using DVFS

    Mishra S.K., Parida P.P., Sahoo S., Sahoo B., Jena S.K.

    Conference paper, Advances in Intelligent Systems and Computing, 2018, DOI Link

    View abstract ⏷

    Energy-related issues in distributed systems, whether energy conservation or energy utilization, have turned out to be critical. Researchers have addressed this issue, and most have used Dynamic Voltage and Frequency Scaling (DVFS) as a power management technique, in which the supply voltage can be lowered by reducing the clock frequency of processors. The cloud environment has multiple physical hosts, and each host runs a number of virtual machines (VMs). All online tasks or service requests are scheduled to different VMs. In this paper, an energy-optimized allocation algorithm is proposed in which the DVFS technique is applied to the virtual machines. The fundamental idea is to strike a balance between energy consumption and the setup time of the different power modes of hosts and VMs. The system model, including its sub-models, is explained formally, and the implementation of the algorithms in homogeneous as well as heterogeneous environments is evaluated.
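
    The "balance between energy consumption and setup time" can be illustrated with a simple break-even calculation: switching a host or VM to a low-power mode only pays off when the idle period is long enough for the energy saved to exceed the transition cost. The constants below are assumed values, not measurements from the paper.

```python
# Break-even sketch for switching a host/VM into a low-power mode: sleeping
# only pays off when the idle period is long enough for the energy saved to
# exceed the energy spent on the mode transition. All constants are assumed.

def break_even_idle(p_idle=70.0, p_sleep=10.0, e_transition=400.0):
    return e_transition / (p_idle - p_sleep)   # minimum idle time (s) to benefit

def worth_sleeping(idle_s, p_idle=70.0, p_sleep=10.0, e_transition=400.0):
    saved = (p_idle - p_sleep) * idle_s        # Joules saved while in sleep mode
    return saved > e_transition

print(break_even_idle())              # ~6.7 s with the assumed numbers
print(worth_sleeping(idle_s=5.0))     # False: idle gap too short, stay idle
print(worth_sleeping(idle_s=20.0))    # True: long enough to justify switching
```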
  • Energy-efficient deployment of edge datacenters for mobile clouds in sustainable IoT

    Mishra S.K., Puthal D., Sahoo B., Sharma S., Xue Z., Zomaya A.Y.

    Article, IEEE Access, 2018, DOI Link

    View abstract ⏷

    Achieving quick responses with limited energy consumption in mobile cloud computing is an active area of research. Energy consumption increases when a user's request (task) runs on the local mobile device instead of executing in the cloud, whereas latency becomes an issue when the task executes in the cloud environment instead of on the mobile device. Therefore, a tradeoff between energy consumption and latency is required in building a sustainable Internet of Things (IoT), and for that we have introduced a middle layer, the edge computing layer, to avoid latency in IoT. There are several real-time applications, such as smart city and smart health, where mobile users upload their tasks to the cloud or execute them locally. We aim to minimize the energy consumption of the mobile device as well as of the cloud system while meeting the task's deadline, by offloading the task to an edge datacenter or to the cloud. This paper proposes an adaptive technique that optimizes both parameters, i.e., energy consumption and latency, by offloading the task and by selecting the appropriate virtual machine for its execution. In the proposed technique, if the specified edge datacenter is unable to provide resources, the user's request is sent to the cloud system. Finally, the proposed technique is evaluated using a real-world scenario to measure its performance and efficiency. The simulation results show that the total energy consumption and execution time decrease after introducing edge datacenters as a middle layer.
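
    A simplified sketch of the offloading decision follows: estimate completion time and device-side energy for local, edge, and cloud execution, discard options that miss the deadline, and fall back to the cloud when the edge datacenter has no capacity. The speeds, powers, and function names are illustrative assumptions rather than the paper's adaptive technique.

```python
# Illustrative offload decision: compare local, edge, and cloud execution,
# discard options that miss the deadline, and pick the one with the lowest
# device-side energy. Transmission rates and device power are assumed constants.

def choose_site(task_mi, data_mb, deadline_s, edge_has_capacity=True):
    options = {
        # name: (execution speed MIPS, uplink rate MB/s or None, device power W)
        "local": (800,  None, 4.0),
        "edge":  (4000, 10.0, 1.0),
        "cloud": (8000, 2.0,  1.0),
    }
    if not edge_has_capacity:
        options.pop("edge")              # fall back to the cloud if the edge DC is full
    best = None
    for name, (mips, rate, device_w) in options.items():
        tx = 0.0 if rate is None else data_mb / rate
        total = tx + task_mi / mips
        if total > deadline_s:
            continue                     # this option misses the deadline
        device_energy = device_w * total
        if best is None or device_energy < best[1]:
            best = (name, device_energy, total)
    return best                          # (site, device energy J, completion time s)

print(choose_site(task_mi=20000, data_mb=5, deadline_s=10))
```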
  • Time efficient dynamic threshold-based load balancing technique for Cloud Computing

    Mishra S.K., Khan M.A., Sahoo B., Puthal D., Obaidat M.S., Hsiao K.F.

    Conference paper, IEEE CITS 2017 - 2017 International Conference on Computer, Information and Telecommunication Systems, 2017, DOI Link

    View abstract ⏷

    Cloud computing is a novel technology that poses several new challenges to organizations worldwide. Cloud computing uses virtual machines (VMs) to host multiple applications simultaneously. Balancing the large number of applications in a heterogeneous cloud environment is challenging because the hypervisor's scheduling controls all VMs. When the scheduler allocates tasks to overloaded VMs, the performance of the cloud system degrades. In this paper, we present a novel load balancing approach for organizing the virtualized resources of the data center efficiently. In our approach, the load on a VM scales up and down according to the resource capacity of the VM. The proposed scheme minimizes the makespan of the system, maximizes resource utilization, and reduces the overall energy consumption. We have evaluated our approach in the CloudSim simulation environment; the devised approach reduces waiting time compared to existing approaches and optimizes the makespan of the cloud data center.
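
    A minimal sketch of capacity-proportional balancing, under assumed fields and an assumed 90% utilization threshold: a task is sent to the least-utilized VM that stays below its threshold, so the admissible load scales with each VM's capacity.

```python
# Sketch of capacity-proportional load balancing: a VM's admissible load scales
# with its capacity, and new work goes to the eligible VM with the most headroom.
# The 90% threshold and the field names are illustrative assumptions.

def pick_vm(task_load, vms, threshold=0.9):
    candidates = []
    for vm in vms:
        utilisation = (vm["load"] + task_load) / vm["capacity"]
        if utilisation <= threshold:          # VM stays below its dynamic limit
            candidates.append((utilisation, vm["id"]))
    if not candidates:
        return None                           # all VMs overloaded: queue or scale out
    return min(candidates)[1]                 # least-utilised eligible VM

vms = [{"id": "vm1", "capacity": 1000, "load": 700},
       {"id": "vm2", "capacity": 2000, "load": 900}]
print(pick_vm(task_load=200, vms=vms))        # -> 'vm2'
```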
  • Metaheuristic solutions for solving controller placement problem in SDN-based WAN architecture

    Sahoo K.S., Sarkar A., Mishra S.K., Sahoo B., Puthal D., Obaidat M.S., Sadun B.

    Conference paper, ICETE 2017 - Proceedings of the 14th International Joint Conference on e-Business and Telecommunications, 2017, DOI Link

    View abstract ⏷

    Software Defined Networking (SDN) is a popular paradigm in modern networking systems that decouples the control logic from the underlying hardware devices. The control logic is implemented as a software component residing in a server called the controller. To increase performance, deploying multiple controllers in a large-scale network is one of the key challenges of SDN. To solve this, researchers have treated the controller placement problem (CPP) as a multi-objective combinatorial optimization problem and used different heuristics. Such heuristics can be executed within a specific time frame for small and medium-sized topologies but do not scale to large instances such as Wide Area Networks (WANs). In order to obtain better results, we propose two population-based metaheuristic algorithms, Particle Swarm Optimization (PSO) and Firefly, for optimal placement of the controllers; they take a particular set of objective functions and return the best possible positions. The problem is defined using both controller-to-switch and inter-controller latency as objective functions. The performance of the algorithms is evaluated on a set of publicly available network topologies in terms of execution time. The results show that the Firefly algorithm performs better than PSO and a random approach under various conditions.
  • Time efficient task allocation in cloud computing environment

    Mishra S.K., Khan M.A., Sahoo B., Jena S.K.

    Conference paper, 2017 2nd International Conference for Convergence in Technology, I2CT 2017, 2017, DOI Link

    View abstract ⏷

    Cloud computing is an evolution of distributed systems that has been adopted worldwide, both scientifically and commercially. To make optimal use of the cloud's potential, effective and efficient algorithms are required to select the best resources from the available cloud resources for different applications. This allocation of user requests to cloud resources can optimize various parameters such as energy consumption, makespan, and throughput. This task allocation or mapping problem is a well-known NP-Complete problem. In this paper, we propose an algorithm, Task-Based Allocation (TBA), to minimize the makespan of the cloud system and to increase resource utilization. We have simulated TBA in the CloudSim simulator in a heterogeneous environment; CloudSim is a simulation tool for the cloud environment that allows cloud services and infrastructure to be evaluated and tested before real-world deployment. For comparison, TBA is run once with sorted tasks and once with unsorted tasks. We compare sorted-TBA, unsorted-TBA, and a random algorithm, and the sorted-TBA algorithm performs best.
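
    The effect of sorting can be seen with a standard greedy baseline (assign each task to the VM that would finish it earliest), run once on the arrival order and once on a longest-first order; this baseline is an assumption used only to illustrate the sorted vs. unsorted comparison, not necessarily the paper's TBA rule.

```python
# Greedy "assign to the VM that finishes this task earliest" baseline, run on
# unsorted and on longest-first sorted tasks, to show how ordering can change
# the resulting makespan.

def greedy_makespan(task_lens, vm_mips):
    finish = [0.0] * len(vm_mips)
    for length in task_lens:
        j = min(range(len(vm_mips)), key=lambda v: finish[v] + length / vm_mips[v])
        finish[j] += length / vm_mips[j]
    return max(finish)

tasks = [2000, 9000, 3000, 8000, 1000]          # task lengths in MI
vms = [1000, 2000]                              # VM speeds in MIPS
print(greedy_makespan(tasks, vms))                        # arrival order  -> 9.5
print(greedy_makespan(sorted(tasks, reverse=True), vms))  # longest-first  -> 8.0
```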
  • Evaluating performance of the Non-linear data structure for job queuing in the cloud environment

    Sahoo S., Mishra S.K., Swami D., Khan A., Sahoo B.

    Conference paper, 2017 2nd International Conference for Convergence in Technology, I2CT 2017, 2017, DOI Link

    View abstract ⏷

    The Cloud Computing era comes with advances in processing, storage, network bandwidth and access, Internet security, and related technologies. Advantages of Cloud Computing include scalability, high computing power, on-demand resource access, and high availability. One of the biggest challenges faced by a cloud provider is to schedule incoming jobs to virtual machines (VMs) such that certain constraints are satisfied. Automatic applications, smart devices, and sensor-based applications need large data storage and computing resources and must produce output within a particular time limit. Many works have proposed and commented on various data structures and allocation policies for real-time jobs on the cloud, and most use a queue-based mapping of tasks to VMs. This work presents a novel min-heap based VM allocation (MHVA) designed for real-time jobs. The proposed MHVA is compared with a queue-based random allocation using makespan and energy consumption as performance metrics. Simulations are performed for different scenarios with varying numbers of tasks and VMs. The simulation results show that MHVA is significantly better than the random algorithm.
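
    The core data-structure idea can be sketched with Python's heapq: keep VMs in a min-heap keyed by the time they next become free, so each job is matched to the soonest-available VM in O(log n). Field names and speeds are illustrative, not the paper's exact MHVA.

```python
# Min-heap based allocation sketch: VMs are kept in a heap keyed by their
# earliest available (finish) time, so each incoming job goes to the
# soonest-free VM with one pop and one push.

import heapq

def mhva(jobs, vm_mips):
    """jobs: list of job lengths (MI); returns (makespan, assignments)."""
    heap = [(0.0, vm_id) for vm_id in range(len(vm_mips))]   # (available_at, vm)
    heapq.heapify(heap)
    assignments = []
    for i, length in enumerate(jobs):
        available_at, vm_id = heapq.heappop(heap)             # soonest-free VM
        finish = available_at + length / vm_mips[vm_id]
        assignments.append((i, vm_id, finish))
        heapq.heappush(heap, (finish, vm_id))
    makespan = max(t for t, _ in heap)
    return makespan, assignments

print(mhva([4000, 6000, 2000, 8000], vm_mips=[1000, 2000]))
```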
  • Adaptive scheduling of cloud tasks using ant colony optimization

    Mishra S.K., Sahoo B., Manikyam P.S.

    Conference paper, ACM International Conference Proceeding Series, 2017, DOI Link

    View abstract ⏷

    Efficient scheduling of heterogeneous tasks to heterogeneous processors is crucial for attaining high performance in any application. Cloud computing provides a heterogeneous environment for performing various operations. The scheduling of user requests (tasks) in the cloud environment is an NP-hard optimization problem, and researchers have presented various heuristic and metaheuristic techniques that provide sub-optimal solutions. In this paper, we propose an Ant Colony Optimization based task scheduling (ACOTS) algorithm to optimize the makespan of the system and to reduce the average waiting time. The designed algorithm is implemented and simulated in the CloudSim simulator, and the simulation results, compared with the Round Robin and Random algorithms, show satisfactory performance.
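
    A compact sketch of the ACO loop follows: ants build task-to-VM assignments with probability proportional to pheromone times a desirability term (the inverse of the expected completion time), the best-so-far assignment reinforces its trail, and all trails evaporate each iteration. Parameter values and the exact update rule are assumptions, not those used in the paper.

```python
# Compact ACO sketch for task-to-VM assignment: each ant picks a VM for every
# task with probability proportional to pheromone * (1 / expected completion
# time); the best-so-far assignment reinforces its trail and all trails
# evaporate each iteration.

import random

def aco_schedule(task_len, vm_mips, ants=10, iters=50, rho=0.1, q=100.0):
    n_t, n_v = len(task_len), len(vm_mips)
    tau = [[1.0] * n_v for _ in range(n_t)]              # pheromone per (task, VM)
    best_assign, best_span = None, float("inf")
    for _ in range(iters):
        for _ in range(ants):
            finish, assign = [0.0] * n_v, []
            for t in range(n_t):
                weights = [tau[t][v] / (finish[v] + task_len[t] / vm_mips[v])
                           for v in range(n_v)]           # desirability of each VM
                v = random.choices(range(n_v), weights=weights)[0]
                finish[v] += task_len[t] / vm_mips[v]
                assign.append(v)
            if max(finish) < best_span:
                best_assign, best_span = assign, max(finish)
        for t in range(n_t):                              # evaporation + reinforcement
            for v in range(n_v):
                tau[t][v] *= (1 - rho)
            tau[t][best_assign[t]] += q / best_span
    return best_assign, best_span

random.seed(1)
print(aco_schedule([3000, 5000, 2000, 7000, 4000], [1000, 1500, 2000]))
```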
  • Improved energy-efficient target coverage in wireless sensor networks

    Panda B.S., Bhatta B.K., Mishra S.K.

    Conference paper, Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2017, DOI Link

    View abstract ⏷

    Achieving optimal field coverage is a significant challenge in various sensor network applications. In some situations, the sensor field (target) may have coverage gaps due to the random deployment of sensors, so the optimal level of target coverage cannot be obtained. Given a set of sensors in the plane, the target coverage problem is to separate the sensors into different groups and assign them specific time intervals so that the coverage lifetime is maximized, under the constraint that the network remains connected. The target coverage problem is widely studied because of its many practical applications in Wireless Sensor Networks (WSNs). This paper focuses on the target coverage problem together with minimizing the energy usage of the network so that the lifetime of the whole network can be increased. Since the minimum connected target coverage problem is known to be NP-Complete, several heuristics as well as approximation algorithms have been proposed. Here, we propose a heuristic for the connected target coverage problem in WSNs and compare its performance with an existing heuristic; the results show that our algorithm performs better for the connected target coverage problem. We also address 2-connected target coverage, which provides fault tolerance and robustness to the network, and propose an algorithm that achieves target coverage along with 2-connectivity.
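
    A simplified, connectivity-free version of the cover-set idea can be sketched as a greedy: repeatedly build a group of sensors that together cover every target, activate the groups in turn, and stop when no further full cover is possible. The connectivity and 2-connectivity constraints handled in the paper are not modeled in this sketch.

```python
# Simplified greedy sketch: build disjoint sensor cover sets, each covering all
# targets, so the sets can be activated one after another to extend lifetime.
# Connectivity constraints are intentionally ignored here.

def greedy_cover_sets(coverage):
    """coverage: dict sensor -> set of targets it covers."""
    targets = set().union(*coverage.values())
    unused = dict(coverage)
    cover_sets = []
    while True:
        remaining, chosen = set(targets), []
        while remaining:
            # pick the unused sensor covering the most still-uncovered targets
            sensor = max(unused, key=lambda s: len(unused[s] & remaining), default=None)
            if sensor is None or not (unused[sensor] & remaining):
                return cover_sets            # cannot complete another full cover
            chosen.append(sensor)
            remaining -= unused.pop(sensor)
        cover_sets.append(chosen)

coverage = {"s1": {"t1", "t2"}, "s2": {"t2", "t3"}, "s3": {"t1", "t3"},
            "s4": {"t1", "t2", "t3"}}
print(greedy_cover_sets(coverage))           # -> [['s4'], ['s1', 's2']]
```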
  • Deadline-constraint services in cloud with heterogeneous servers

    Sahoo S., Mishra S.K., Sahoo B., Puthal D., Obaidat M.S.

    Conference paper, IEEE CITS 2017 - 2017 International Conference on Computer, Information and Telecommunication Systems, 2017, DOI Link

    View abstract ⏷

    The development of delay-sensitive applications requires massive data storage and computing resources, especially in a typical cloud environment. The cloud computing paradigm provides a broad range of services, viz. software, platform, and infrastructure, for various applications (both real-time and non-real-time) over the Internet. However, in the case of the Infrastructure-as-a-Service (IaaS) cloud platform, either over-provisioning or under-provisioning of resources becomes a challenging issue for time-constrained applications. Accurate modeling of cloud centers is not feasible due to the nature of cloud centers and the diversity of user requests. We present an analytical model to estimate the performance of a cloud center for deadline-sensitive tasks and use the model to find, among other metrics, the number of tasks that miss their deadline, the waiting time of a task, and the response time of the service.
  • Execution of real time task on cloud environment

    Sahoo S., Nawaz S., Mishra S.K., Sahoo B.

    Conference paper, 12th IEEE International Conference Electronics, Energy, Environment, Communication, Computer, Control: (E3-C3), INDICON 2015, 2016, DOI Link

    View abstract ⏷

    Cloud computing is Internet-based computing in which resources, software, and information are shared on an on-demand basis, i.e., users can access documents anytime, anywhere. Execution of real-time tasks in a cloud computing environment is an emerging research area: real-time tasks need to meet their deadlines regardless of system load or makespan. This paper discusses the scheduling of real-time tasks in a cloud environment, considering the Basic Earliest Deadline First (BEDF), First Fit EDF (FFE), Best Fit EDF (BFE), and Worst Fit EDF (WFE) algorithms. Different performance parameters, such as guarantee ratio (GR), utilization of VMs (UV), and throughput (TP), are used to measure the effectiveness of the algorithms.
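
    As an illustration of one of these variants, a First Fit EDF (FFE) admission test can be sketched as follows: tasks are taken in deadline order and placed on the first VM whose total utilization stays within the single-processor EDF bound of 1, and the guarantee ratio is the fraction of tasks admitted. The task shapes and the period-equals-deadline simplification are illustrative assumptions.

```python
# First Fit EDF (FFE) admission sketch: tasks in deadline order, each placed on
# the first VM where total utilisation stays <= 1 (EDF bound for one processor).
# The guarantee ratio is the fraction of tasks admitted.

def ffe(tasks, n_vms):
    """tasks: list of (exec_time, deadline) with period == deadline assumed."""
    util = [0.0] * n_vms
    admitted = 0
    for exec_time, deadline in sorted(tasks, key=lambda t: t[1]):
        u = exec_time / deadline
        for v in range(n_vms):
            if util[v] + u <= 1.0:        # EDF schedulability test on this VM
                util[v] += u
                admitted += 1
                break
    return admitted / len(tasks)          # guarantee ratio (GR)

tasks = [(2, 10), (5, 8), (4, 5), (6, 12), (3, 6)]
print(ffe(tasks, n_vms=2))                # -> 0.8 with these assumed tasks
```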
  • Improving energy consumption in cloud

    Mishra S.K., Deswal R., Sahoo S., Sahoo B.

    Conference paper, 12th IEEE International Conference Electronics, Energy, Environment, Communication, Computer, Control: (E3-C3), INDICON 2015, 2016, DOI Link

    View abstract ⏷

    To meet the service level agreement (SLA) between the cloud user and the cloud service provider, the service provider has to pay more. Cloud resources must be allocated not only to satisfy the quality of service (QoS) specified in the SLA but also to reduce energy utilization. Therefore, task consolidation, which maps users' service requests to appropriate resources and results in proper utilization of the various cloud resources, plays an important role in cloud computing, and the overall performance of the cloud system depends on the task consolidation approach. For the task consolidation problem, we present an energy-aware model that includes descriptions of the physical hosts, virtual machines, and service requests (tasks) submitted by users. For the proposed model, an Energy Aware Task Consolidation (EATC) algorithm is developed that accounts for resource heterogeneity and shows significant improvement in energy savings.
  • Metaheuristic approaches to task consolidation problem in the cloud

    Mishra S.K., Sahoo B., Sahoo K.S., Jena S.K.

    Book chapter, Resource Management and Efficiency in Cloud Computing Environments, 2016, DOI Link

    View abstract ⏷

    The service (task) allocation problem in distributed computing is a form of the multidimensional knapsack problem, one of the best-known combinatorial optimization problems. Nature-inspired techniques represent powerful mechanisms for addressing a large number of combinatorial optimization problems, since computing an optimal solution for many industrial and scientific problems is usually intractable. The service request allocation problem in distributed computing belongs to the class of NP-hard problems. The major portion of this chapter is a survey of various mechanisms for the service allocation problem across different cloud computing architectures. The chapter also briefly discusses the implementation issues of various metaheuristic techniques, such as Particle Swarm Optimization (PSO), the Genetic Algorithm (GA), Ant Colony Optimization (ACO), and the BAT algorithm, in various environments for the service allocation problem in the cloud.
  • Honeypot-based intrusion detection system: A performance analysis

    Kondra J.R., Bharti S.K., Mishra S.K., Babu K.S.

    Conference paper, Proceedings of the 10th INDIACom; 2016 3rd International Conference on Computing for Sustainable Global Development, INDIACom 2016, 2016

    View abstract ⏷

    Attacks on the Internet keep increasing and cause harm to our security systems. In order to minimize this threat, it is necessary to have a security system that can detect zero-day attacks and block them. A honeypot is a proactive defense technology in which resources are placed in a network with the aim of observing and capturing new attacks. This paper proposes a honeypot-based model for an intrusion detection system (IDS) to obtain the most useful data about the attacker. The abilities and limitations of honeypots were tested, and aspects that need to be improved were identified. In the future, we aim to use this approach for early prevention so that pre-emptive action can be taken before any unexpected harm comes to the security system.
  • Real time task execution in cloud using mapreduce framework

    Sahoo S., Sahoo B., Turuk A.K., Mishra S.K.

    Book chapter, Resource Management and Efficiency in Cloud Computing Environments, 2016, DOI Link

    View abstract ⏷

    The Cloud Computing era comes with advances in processing, storage, network bandwidth and access, Internet security, and related technologies. Automatic applications, smart devices, and sensor-based applications need huge data storage and computing resources and must produce output within a particular time limit. Users are also becoming more sensitive to delays in the applications they use. So, a scalable platform like Cloud Computing is required that can provide the huge computing resources and data storage needed to process such applications. The MapReduce framework is used to process huge amounts of data, and data processing on a cloud based on MapReduce provides added benefits such as fault tolerance, heterogeneity, ease of use, openness, and efficiency. This chapter discusses the cloud system model, the real-time MapReduce framework, examples of cloud-based MapReduce frameworks, quality attributes of MapReduce scheduling, and various MapReduce scheduling algorithms based on these quality attributes.
  • A comparative analysis of packet scheduling schemes for multimedia services in LTE networks

    Sahoo B.P.S., Puthal D., Swain S., Mishra S.

    Conference paper, Proceedings - 1st International Conference on Computational Intelligence and Networks, CINE 2015, 2015, DOI Link

    View abstract ⏷

    High-speed broadband networking is a requirement of the current time; in other words, there is an unceasing demand for high data rates and mobility. Both providers and customers see Long Term Evolution (LTE) as a promising technology for providing broadband mobile Internet access. To provide better quality of service (QoS) to customers, the resources must be utilized to the fullest. Resource scheduling is one of the important functions for improving system performance. This paper studies recently proposed packet scheduling schemes for LTE systems, concentrating on real-time services such as online video streaming and Voice over Internet Protocol (VoIP). For the performance study, the LTE-Sim simulator is used. The primary objective of this paper is to provide results that will help researchers design more efficient scheduling schemes aimed at better overall system performance. For the simulation study, two scenarios, one for video traffic and one for VoIP, have been created. Various performance metrics, such as packet loss, fairness, end-to-end (E2E) delay, cell throughput, and spectral efficiency, have been measured for both scenarios with varying numbers of users. In light of the simulation result analysis, the frame level scheduler (FLS) algorithm outperforms the others by balancing the QoS requirements of multimedia services.
  • Cloud computing features, issues, and challenges: A big picture

    Puthal D., Sahoo B.P.S., Mishra S., Swain S.

    Conference paper, Proceedings - 1st International Conference on Computational Intelligence and Networks, CINE 2015, 2015, DOI Link

    View abstract ⏷

    Since the phenomenon of cloud computing was proposed, there has been unceasing interest in research across the globe. Cloud computing has been seen as one of the technologies driving the next-generation computing revolution and has rapidly become one of the hottest topics in the field of IT. This fast move towards cloud computing has fuelled concerns about points that are fundamental to the success of information systems: communication, virtualization, data availability and integrity, public auditing, scientific applications, and information security. Therefore, cloud computing research has attracted tremendous interest in recent years. In this paper, we aim to summarize the current open challenges and issues of cloud computing. The discussion is threefold: first, we present the cloud computing architecture and the numerous services it offers; second, we highlight several security issues in cloud computing based on its service layers; then we identify several open challenges from the cloud computing adoption perspective and their future implications. Finally, we highlight the platforms currently available for cloud research and development.

Scholars

Doctoral Scholars

  • Ms Abdhisuta Dash
  • Ms Jasmini Kumari
  • Mr Subham Kumar Sahoo