Accepted papers

  • Anywhere at Anytime Internet: Google Loon Balloons
    Remesh Babu and Sheba Jiju George, Government Engineering College, India
    "LOON BALLOONS", the network of balloons which provides internet for people in rural and remote area, is a recent project undertake by Google. Though internet is a global network, large number of people doesn't have access to it. LOON technology is also helpful in disaster management. E.g.: during floods in Kashmir, where a reliable communication system was not available, Google LOON Balloons could be one of the solutions. These balloons with the help of Wi-Fi technology, helps to attain unlimited connectivity of people to the global community of internet. The LOON BALLOONS provide connectivity to a ground area about 40 km in diameter using LTE. Using LTE enabled devices and also through their phones, people can directly access the internet. Google implemented LOON project in New Zealand on June 2013 as a pilot experiment and now improvements are made in LOON technology based on the pilot test results
  • Turnover Prediction of Shares Using Data Mining Techniques : A Case Study
    Shashaank D.S., Sruthi V., Vijayalashimi M.L.S. and Shomona Garcia Jacob, SSNCE, India
    Predicting the total turnover of a company in the ever-fluctuating stock market has always been a precarious and difficult task. Data mining is a well-known sphere of computer science that aims at extracting meaningful information from large databases. However, despite the existence of many algorithms for predicting future trends, their efficiency is questionable as their predictions suffer from a high error rate. The objective of this paper is to investigate various existing classification algorithms for predicting the turnover of different companies based on the stock price. The dataset used for predicting the turnover included the stock market values of various companies over the past 10 years. The algorithms were investigated using the 'R' tool. The feature selection algorithm Boruta was run on this dataset to extract the important and influential features for classification. With these extracted features, the total turnover of the company was predicted using algorithms such as Random Forest, Decision Tree, SVM and Multinomial Regression. This prediction mechanism was implemented to predict the turnover of a company on an everyday basis and hence could help navigate dubious stock market trades. The prediction process achieved an accuracy rate of 95%. Moreover, the importance of the stock market attributes was established as well.
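    The abstract above runs Boruta before classification in R. As a rough illustration of Boruta's core idea (keep a feature only if its importance beats that of its own shuffled "shadow" copy), here is a minimal Python sketch, not the authors' code: correlation with the target stands in for a proper importance score, an every-trial win stands in for Boruta's statistical test, and the `features`/`target` names are invented.

```python
import random
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient (0.0 when either side is constant)."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den if den else 0.0

def boruta_like(features, target, trials=200, seed=0):
    """Keep a feature only if its |correlation| with the target beats the
    correlation of its shuffled 'shadow' copy in every trial -- a
    deliberately strict stand-in for Boruta's statistical test."""
    rng = random.Random(seed)
    selected = []
    for name, column in features.items():
        real = abs(pearson(column, target))
        wins = 0
        for _ in range(trials):
            shadow = column[:]          # shadow feature: same values, shuffled
            rng.shuffle(shadow)
            if real > abs(pearson(shadow, target)):
                wins += 1
        if wins == trials:
            selected.append(name)
    return selected
```

    On synthetic data where one column tracks the target and another is pure noise, only the informative column survives the shadow comparison.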
  • Wolf Routing to Detect Vampire Attacks in Wireless Sensor Networks
    Besty Haris, MVJCE, India
    Ad-hoc low-power wireless networks are a prominent research direction in sensing and pervasive computing. Prior security work in this area has focused primarily on denial of communication at the routing or medium access control levels. This paper proposes a scheme to detect resource depletion attacks at the routing protocol layer, called Vampire attacks, which permanently disable networks by quickly draining nodes' battery power. The scheme is based on the preying behaviour of wolves. These "Vampire" attacks are not tied to any specific protocol, but rather rely on the properties of many popular classes of routing protocols. Most general protocols are susceptible to Vampire attacks, which are devastating, difficult to detect, and easy to carry out using as few as one malicious insider sending only protocol-compliant messages. In the worst case, a single Vampire can increase network-wide energy usage by a factor of O(N), where N is the number of network nodes. In this paper the author discusses a bio-inspired Vampire attack detection method for Wireless Sensor Networks using a Wolf-Routing Algorithm.
  • Solving Task Assignment To The Worker Using Genetic Algorithm
    Jameer. G. Kotwal and Tanuja S. Dhope, GHRCEM, India
    This paper deals with the task-scheduling and worker-allocation problem, in which each skilled worker is capable of performing multiple tasks, workers have varying capacities to perform tasks, and task allocation incurs a daily overhead. We propose a worker assignment model and develop a heuristic genetic algorithm whose performance is evaluated against optimum-seeking methods on small-sized problems. The genetic algorithm is applied in a way that reduces the effort required to understand the existing solution, and is used primarily to minimise the total makespan when scheduling jobs and assigning tasks to workers. An analytic review of the literature on genetic algorithmic approaches to the GAP (generalized assignment problem) shows this approach to be convenient and efficient in deriving the required solutions. Crossover and mutation operators are defined with a focus on solving assignment problems. Simulation results for different numbers of tasks and workers are presented; the same task-worker data sets are also given to ACO (ant colony optimization), simulated annealing and tabu search, and the running times are compared against the genetic algorithm. In the resulting graphs, each column represents the tasks performed by a worker.
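    The task-to-worker assignment with crossover and mutation described above can be sketched in a few lines (an illustrative Python toy, not the authors' implementation; the cost matrix `cost[t][w]` — time for worker `w` to do task `t` — and all parameter values are assumptions):

```python
import random

def makespan(assignment, cost):
    """Finishing time of the busiest worker -- the quantity the GA minimises."""
    loads = {}
    for task, worker in enumerate(assignment):
        loads[worker] = loads.get(worker, 0) + cost[task][worker]
    return max(loads.values())

def ga_assign(cost, workers, pop=40, gens=200, seed=1):
    """Genetic algorithm over assignments: chromosome i holds task i's worker."""
    rng = random.Random(seed)
    n = len(cost)
    population = [[rng.randrange(workers) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda a: makespan(a, cost))
        parents = population[:pop // 2]           # elitist selection
        children = []
        while len(children) < pop - len(parents):
            p1, p2 = rng.sample(parents, 2)
            cut = rng.randrange(1, n)             # one-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.2:                # mutation: reassign one task
                child[rng.randrange(n)] = rng.randrange(workers)
            children.append(child)
        population = parents + children
    return min(population, key=lambda a: makespan(a, cost))
```

    With four unit-cost tasks and two workers, the optimum splits the tasks evenly for a makespan of 2, which the GA recovers easily.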
  • Pragmatic Analysis of Data Mining Techniques to Detect Intrusions in Cloud
    Chetna Vaid Kwatra, Mandeep Singh, Navjyot Kaur and Amandeep Kaur, Lovely Professional University, India
    Cloud computing is a prolific exemplar of innovation in the field of technology. Enterprises are striving to reduce their computing costs through virtualization and distribution techniques. The need to cut computing costs, together with refined access and robust scalability, has driven the success of cloud computing, which offers improved computing through comprehensive utilization and reduced administration and infrastructure expenditure. A major concern voiced by the increasing number of companies resorting to resources in the cloud is governance, which underlines the necessity of data protection under centralized resources. This work tackles the security issues caused by the overlapping trust boundaries of different cloud consumers. The main intention of this research is to evaluate a designed intrusion detection system that operates on the anomaly-detection concept, applied to data generated from transactions captured within a cloud network. The captured data undergo the data mining techniques of clustering and classification. This work presents an empirical analysis of various classification techniques, including KStar, ADTree, LMT, Multilayer Perceptron, CART, J48, Naive Bayes, VFI and SVM, on the intended intrusion detection system. The classification approaches consider a finite set of data attributes to scrutinize user behaviour.
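    As a toy illustration of the classification stage described above (a self-contained Python sketch of one of the listed techniques, categorical Naive Bayes — not the evaluated system, and the connection attributes below are invented):

```python
import math
from collections import Counter, defaultdict

def train_nb(rows, labels):
    """Fit a categorical Naive Bayes model: class priors plus, for each
    (class, feature index), a count of observed feature values."""
    classes = Counter(labels)
    counts = defaultdict(Counter)
    for row, y in zip(rows, labels):
        for i, value in enumerate(row):
            counts[(y, i)][value] += 1
    return classes, counts

def predict_nb(model, row):
    """Return the class with the highest smoothed log-posterior."""
    classes, counts = model
    total = sum(classes.values())
    best, best_lp = None, -math.inf
    for y, cy in classes.items():
        lp = math.log(cy / total)                 # log prior
        for i, value in enumerate(row):
            c = counts[(y, i)]
            # add-one smoothing so unseen values never zero out a class
            lp += math.log((c[value] + 1) / (cy + len(c) + 1))
        if lp > best_lp:
            best, best_lp = y, lp
    return best
```

    Trained on a handful of labelled "transactions" (protocol, traffic level), the model classifies a low-traffic TCP record as normal and a high-traffic UDP record as an attack.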
  • A Survey on Quality Assessment and Enhancement of Document Images
    Pooja Sharma and Shanu Sharma, Amity University, India
    With the advancement of technology, there has been a tremendous rise in the volume of captured and distributed content. Image acquisition can be done with the help of scanners, cameras, smartphones, tablets, etc. Document retrieval and recognition systems require high-quality document images, but most of the time the acquired images suffer from various degradations such as blur, uneven illumination and low resolution. To reduce processing time and get good results, we require methods to evaluate and improve the quality of such images. This paper reviews quality assessment methods and enhancement techniques for document images, presenting a survey of the work that has been performed in the field of document image quality assessment and enhancement.
  • The Socio-Economic Impact of IoT Towards Smart Cities
    Elias Tabane, Tshwane University of Technology, South Africa
    The dynamic, rapidly changing and technology-rich digital environment enables the provision of value-added applications that exploit a multitude of devices contributing services and information. With the dawn of the Internet of Things (IoT), growing out of the internet as a web service, large numbers of devices (objects) and environments are becoming smarter and are in a position to interact with one another more than ever before. IoT has gained substantial attention over the preceding decade, with the intention of connecting billions of sensors to the internet in order to utilise them effectively and efficiently as resources for smart cities.
  • A Novel System for Document Classification Using Genetic Programming
    Saad M. Darwish, Adel A. El-Zoghabi and Doaa B. Ebaid, Alexandria University, Egypt
    Owing to the rapid growth of online textual data, automatic document classification has become a necessity for its management. Document classification assigns a document to one class from a set of predefined classes. The foremost challenge in document classification is achieving high accuracy, and the variety of datasets places demands on both the efficiency and accuracy of classification systems. The majority of classification techniques treat multi-class classification as a series of two-class classifications. In this paper, a document classification model is proposed using genetic programming with a multi-tree representation that allows documents belonging to more than two categories to be classified (multi-class classification) at the same time and in a single run. The proposed model combines a multi-objective technique (NSGA-II) with genetic programming to improve classification accuracy. Empirical evaluations show encouraging results, and the proposed model is feasible and effective.
  • Topic Modeling: Clustering of Deep Webpages
    Muhunthaadithya C, Rohit J.V., Sadhana Kesavan and E. Sivasankar, NIT-Trichy, India
    The internet comprises a massive amount of information in the form of billions of web pages. This information can be categorized into the surface web and the deep web. Existing search engines can effectively make use of surface web information, but the deep web remains unexploited. Machine learning techniques have commonly been employed to access deep web content. Within machine learning, topic models provide a simple way to analyze large volumes of unlabeled text. A "topic" consists of a cluster of words that frequently occur together. Using contextual clues, topic models can connect words with similar meanings and distinguish between uses of words with multiple meanings. Clustering is one of the key approaches to organizing deep web databases. In this paper, we cluster deep web databases based on the relevance found among deep web forms by employing a generative probabilistic model, Latent Dirichlet Allocation (LDA), to model content representative of deep web databases. This is implemented after preprocessing the set of web pages to extract page contents and form contents. Further, we derive the distributions of "topics per document" and "words per topic" using the technique of Gibbs sampling. Experimental results show that the proposed method clearly outperforms existing clustering methods.
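    The "topics per document" / "words per topic" estimation via Gibbs sampling mentioned above can be sketched as a compact collapsed Gibbs sampler for LDA (a from-scratch Python toy under simplifying assumptions — symmetric priors, tokenised input, no convergence checks — not the paper's implementation):

```python
import random
from collections import defaultdict

def lda_gibbs(docs, k, iters=200, alpha=0.1, beta=0.01, seed=0):
    """Collapsed Gibbs sampling for LDA: repeatedly resample each token's
    topic from p(z=s) proportional to (n_dk + alpha) * (n_kw + beta) / (n_k + V*beta).
    Returns per-document topic counts and per-topic word counts."""
    rng = random.Random(seed)
    V = len({w for doc in docs for w in doc})       # vocabulary size
    n_dk = [[0] * k for _ in docs]                  # topic counts per document
    n_kw = [defaultdict(int) for _ in range(k)]     # word counts per topic
    n_k = [0] * k                                   # total tokens per topic
    z = []                                          # current topic of each token
    for d, doc in enumerate(docs):                  # random initialisation
        zs = []
        for w in doc:
            t = rng.randrange(k)
            zs.append(t)
            n_dk[d][t] += 1; n_kw[t][w] += 1; n_k[t] += 1
        z.append(zs)
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                t = z[d][i]                          # remove token's current topic
                n_dk[d][t] -= 1; n_kw[t][w] -= 1; n_k[t] -= 1
                weights = [(n_dk[d][s] + alpha) * (n_kw[s][w] + beta)
                           / (n_k[s] + V * beta) for s in range(k)]
                t = rng.choices(range(k), weights=weights)[0]
                z[d][i] = t                          # reassign and restore counts
                n_dk[d][t] += 1; n_kw[t][w] += 1; n_k[t] += 1
    return n_dk, n_kw
```

    On two tiny documents with disjoint vocabularies, the sampler typically assigns each document a different dominant topic, which is exactly the "topics per document" signal the clustering step would use.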
  • Designing Routing Protocol for Ubiquitous Networks Using ECA Scheme
    Chandrashekhar Pomu Chavan and Pallapa Venkataram, Indian Institute of Science, India
    We have designed a novel Event-Condition-Action (ECA) scheme based Ad hoc On-demand Distance Vector (ECA-AODV) routing protocol for a Ubiquitous Network (UbiNet). ECA-AODV is designed to make routing decisions dynamically and respond quickly to changing network conditions as events occur. The ECA scheme consists of three modules that together make runtime routing decisions quicker. First, the event module receives each event that occurs in the UbiNet and splits it into an event type and event attributes. Second, the condition module obtains the event details from the event module, splits each condition into condition attributes that match the event, and fires the corresponding rule as soon as the condition holds. Third, the action module makes runtime decisions based on the event obtained and the condition applied. Every event is mapped to a node, and an occurring event is broadcast to all adjacent nodes in the UbiNet. We have simulated and tested the designed ECA scheme using a ubiquitous museum environment as a case study, with the number of nodes ranging from 10 to 100. The simulation results show that the scheme is time efficient and requires minimal operations.
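    The event-condition-action flow of the three modules above can be sketched as a tiny rule engine (illustrative Python only — the event fields and rule names below are invented, and the real protocol's routing actions are far richer):

```python
def make_eca_engine(rules):
    """Build an event handler from (event_type, condition, action) rules.
    For each incoming event, every rule whose type matches and whose
    condition holds fires; the results of the actions are collected."""
    def handle(event):
        fired = []
        for etype, condition, action in rules:
            if event["type"] == etype and condition(event):
                fired.append(action(event))
        return fired
    return handle
```

    For example, a "link down" event might trigger a reroute when the failed node is far away, and a local repair when it is a direct neighbour; events with no matching rule produce no action.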
  • Advanced Unsupervised Clustering Method to Identify Outliers in High Dimension
    Pushpendra Bhatt, Pune University, India and Bharat Tidke, Flora Institute of Technology, India
    Discovering outliers is very difficult in high-dimensional datasets, where records contain a large amount of noise. The problem has been studied in the context of a huge number of application domains, which raises effectiveness issues, since outlier detection methods are most useful when they detect records that deviate significantly from the average. Various subspace-based methods have been proposed for finding unusually sparse density units in subspaces. This paper applies a CLIQUE density-based clustering algorithm that attempts to handle subspaces forming dense regions when projected onto lower-dimensional subspaces, together with an improved K-Means algorithm applied to the generated subspaces, to effectively and efficiently identify outliers in high-dimensional datasets and obtain more significant and interpretable results.
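    A minimal sketch of the K-Means outlier stage (plain Python on a toy 2-D set; the CLIQUE subspace generation step is omitted, and the distance-threshold rule is an illustrative choice, not the paper's):

```python
import math

def kmeans(points, k, iters=20):
    """Plain K-Means with deterministic first-k initialisation."""
    centers = [points[i] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                       # assign to nearest centre
            j = min(range(k), key=lambda c: math.dist(p, centers[c]))
            clusters[j].append(p)
        centers = [tuple(sum(xs) / len(xs) for xs in zip(*cl)) if cl else centers[j]
                   for j, cl in enumerate(clusters)]
    return centers

def find_outliers(points, k=2, factor=3.0):
    """Flag points whose distance to the nearest centre exceeds
    `factor` times the mean nearest-centre distance."""
    centers = kmeans(points, k)
    dists = [min(math.dist(p, c) for c in centers) for p in points]
    mean = sum(dists) / len(dists)
    return [p for p, d in zip(points, dists) if d > factor * mean]
```

    On two tight clusters plus one far-away point, only the far-away point's distance exceeds the threshold, so it alone is flagged.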
  • Issues of memory leak and its handling techniques in different programming languages: An overview
    Divya Arora and Sukhdip Singh, Deenbandhu Chhotu Ram University, India
    This review focuses on the memory leakage handling problem, the types of memory leaks and their effects. Memory leakage arises when memory allocations are incorrectly managed, and every language has a different way of handling the issue. Presently, C/C++ and Java are languages whose specifications are fully developed, and for them multiple memory leakage handling algorithms, strong mechanisms and automated tools are already in use. However, this is not the case with Fortran, whose specification is continuously being updated to meet the growing challenges of high-end applications, leaving room for further research in this area.
  • Relevance Matrix Based Structuring of Epidemic Outbreak Data
    Sunaina Sharma and Veenu Mangat
    Big Data is concerned with data presented in large volumes, with complex architecture, and with ever-increasing size. The data in such systems can be taken from multiple, sometimes autonomous, sources. Big Data is directly available to end users, so fast retrieval of structured data from the system becomes critical. Its classification is stalled by the bulky volume of often high-dimensional data, missing or vague features and, in streaming operation, the need for real-time processing. This paper aims at learning a kernelized relevance vector machine (RVM) classifier from large-scale unstructured data. The RVM technique has been used to evaluate the outbreak of the Ebola disease.
  • An effective and efficient feature selection method for lung cancer detection
    R. Kishore, K.L.N College of Engineering, India
    Feature selection is applied to reduce the number of features in many applications where the data has hundreds or thousands of features. In order to extract the accurate features of an image, relevance and redundancy analysis is considered for effective and efficient feature selection. Three methods considered for feature selection are selecting, screening and ranking. With the help of these methods, the quality of the image can be improved, which in turn enhances the performance of the learning system.
Copyright © ACITY 2015