Accepted Papers

  • Analysis of Indian weather data sets using Data Mining Techniques
    T V Rajini kanth, V V SSS Balaram and N.Rajasekhar, SNIST, India
    India exhibits diverse weather and geographical conditions across its seasons: extreme high temperatures in the Rajasthan desert, cold climates in the Himalayas, and heavy rainfall at Cherrapunji. These extreme variations make it difficult to infer and predict the weather effectively, and call for scientific techniques such as machine learning algorithms for the effective study and prediction of weather conditions. In this paper, we apply the K-means clustering algorithm to group similar data sets together, and also apply the J48 classification technique along with linear regression analysis.
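    The clustering step described above can be sketched as follows. This is a minimal Lloyd's-algorithm sketch, not the paper's implementation; the (temperature, rainfall) values and the choice of k are illustrative assumptions.

```python
import math
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain Lloyd's algorithm: assign each point to its nearest
    centroid, then move each centroid to the mean of its cluster."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: math.dist(p, centroids[c]))
            clusters[i].append(p)
        for i, cl in enumerate(clusters):
            if cl:  # keep the old centroid if a cluster went empty
                centroids[i] = tuple(sum(x) / len(cl) for x in zip(*cl))
    return centroids, clusters

# Hypothetical (max temperature in C, annual rainfall in cm) records:
weather = [(48.0, 30.0), (46.5, 25.0), (5.0, 110.0), (3.5, 120.0),
           (27.0, 1100.0), (26.0, 1150.0)]
cents, groups = kmeans(weather, k=3)
```

    A classifier such as J48 (or any decision tree) could then be trained on the resulting cluster labels.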
  • Efficient Multi-Cloud Storage System
    Tushar G. Wable, Mahesh B. Gavhane, Hemant S. Jadhav, Gaurav R. Deokar and Sanjay A. Agrawal, University of Pune, India
    Nowadays, cloud computing plays a major role in the IT industry and in personal use. It is used not only for business activities but also for educational purposes. Cloud computing is Internet-based computing; one could say it is another property of the Internet. Customers plug into the "cloud" and access applications and services that are priced and delivered on demand. One of the prominent services offered in cloud computing is cloud data storage, in which subscribers do not have to store their data on their own servers; instead, their data is stored on the Cloud Service Provider's servers, and subscribers pay the service provider for this storage service. The service not only provides flexibility and scalability for data storage, but also gives customers the benefit of paying only for the amount of data they need to store for a particular period of time, without any concerns about efficient storage mechanisms or the maintainability issues of large amounts of data storage.
  • Re-Weighting of Features in Content Based Image Retrieval System to Reduce Semantic Gap
    Kranthi Kumar.K1, Sunil Bhutada1 and P. Prasanna Rani2, 1SNIST, India and 2GCET, India
    Digital images are produced at an ever increasing rate from different sources. Content-Based Image Retrieval (CBIR) is a prominent research area in effective digital data management and retrieval. Users of image databases often need to retrieve relevant images from large collections. Generally, retrieval is performed using low-level features of an image such as color, shape and texture. Existing systems have limitations in the retrieval process, such as the semantic gap and the lack of human perception. To overcome these difficulties, in this paper we propose an adaptive approach for re-weighting features during retrieval. This approach targets the semantic gap, a bottleneck problem in many CBIR systems for which many approaches have been proposed. Among them, Relevance Feedback (RF) is a technique absorbed into CBIR systems to improve retrieval accuracy using feedback given by the user. One traditional method to enact relevance feedback is Feature Re-weighting (FRW), a useful technique to enhance retrieval performance based on the feedback acquired from the user. The main issues in relevance feedback are learning the system parameters from feedback samples and handling the high-dimensional feature space. In our proposed method, we conducted experiments on COREL image databases of sizes varying between 100 and 2000 images, with between 20 and 68 categories. The experimental results demonstrate the advantage of our method in terms of precision, recall and accuracy, show the success of the proposed approach, and indicate that it outperforms previous work.
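    A common FRW heuristic, and a plausible sketch of the re-weighting step above, is to weight each feature by the inverse of its spread over the images the user marked relevant: features that are consistent among relevant images get more say in the distance. The abstract does not give the paper's exact update rule, so the formula and the toy vectors below are assumptions.

```python
import statistics

def reweight(relevant_feats, eps=1e-6):
    """Weight each feature dimension by 1/(std over relevant images):
    consistent features dominate the similarity measure."""
    dims = zip(*relevant_feats)
    return [1.0 / (statistics.pstdev(col) + eps) for col in dims]

def weighted_dist(q, x, w):
    """Weighted squared Euclidean distance between query q and image x."""
    return sum(wi * (qi - xi) ** 2 for qi, xi, wi in zip(q, x, w))

# Toy 3-dimensional feature vectors (say color / shape / texture scores)
# for three images the user marked relevant:
relevant = [(0.9, 0.1, 0.50), (0.9, 0.8, 0.52), (0.9, 0.4, 0.48)]
w = reweight(relevant)
# Feature 0 is identical across relevant images, so its weight is largest.
```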
  • Improved Unchoking Policy for BitTorrent
    Rohit Ranjan and Arup Bhattacharjee, National Institute of Technology - Silchar, India
    Peer-to-peer communication has become very popular these days. This popularity and the increase in P2P traffic have given rise to many Internet traffic management problems. The unchoking algorithms suggested so far are independent of each other, each focusing on some specific problem, whether cross-network traffic reduction or free-riding reduction; there is no peer selection algorithm that takes all factors, such as upload rate, spreading factor and hop distance, into account to decide which peer should be unchoked among all those that send an interested message to the current peer. This paper focuses on providing an unchoking policy that deals with all these factors collectively to decide which peer is best to unchoke.
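    A combined policy of the kind described could score each interested peer on all three factors at once and unchoke the top scorers. The weights, field names and peer values below are illustrative assumptions (real values would also be normalized to comparable scales), not the paper's actual formula.

```python
def unchoke_score(peer, w_up=0.5, w_spread=0.3, w_hop=0.2):
    """Composite score: higher upload rate and spreading factor are
    better; larger hop distance (cross-network traffic) is worse."""
    return (w_up * peer["upload_rate"]
            + w_spread * peer["spreading_factor"]
            - w_hop * peer["hop_distance"])

def pick_unchoked(interested, n=4):
    """Unchoke the n best-scoring peers among those that sent
    an interested message."""
    return sorted(interested, key=unchoke_score, reverse=True)[:n]

peers = [
    {"id": "A", "upload_rate": 80, "spreading_factor": 3, "hop_distance": 2},
    {"id": "B", "upload_rate": 20, "spreading_factor": 9, "hop_distance": 1},
    {"id": "C", "upload_rate": 90, "spreading_factor": 1, "hop_distance": 9},
]
best = pick_unchoked(peers, n=2)
```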
  • Mapping of Neural Networks with Software Reliability Model
    Shailee Choudhary and Yogita Kansal, Manav Rachna College of Engineering, India
    The production of any software relies heavily on its quality, which in turn relates to the number of errors observed by the external customer. The measured reliability of the software therefore acts as a deciding factor for its release. In this paper, our concern is how to develop general prediction models. Existing models depend heavily on assumptions that must be made before a project begins, but every project is unique, so the assumptions made for one project are not applicable to another, which makes things worse. The idea is therefore to use failure history from similar projects, and various neural network models are constructed for prediction. Earlier researchers have worked only with feed-forward and Jordan networks; here we try to predict using recurrent and self-organizing networks. The paper shows how different data sets can be applied to the NN models for predicting faults and concludes which among them is best.
  • Software In-Loop Simulation Test Facility - A Case Study on Mars Orbiter Mission
    Kiran Desai, Shuvo Dutta, Sumith Shankar S, Rajiv R Chetwani, M. Ravindra and K.M.Bharadwaj, ISRO Satellite Centre, India
    ISRO Satellite Centre (ISAC) is the lead centre of the Indian Space Research Organisation for the development and operationalisation of satellites for communication, navigation and remote sensing applications. In all these spacecraft, highly advanced embedded systems carry out a variety of mission-critical functions, each housing heterogeneous processor and embedded software combinations. As per existing practice, testing of on-board software to confirm its functioning takes place only when the software is integrated with its associated hardware. By contrast, with the Software In-Loop Simulation (SILS) test method, the on-board software can be fully tested in a software-simulated dynamic environment without hardware. This method of flight software validation was demonstrated in the Mars Orbiter Mission for the AOCE, TCP, SSR and BDH software. The results clearly demonstrate the effectiveness of the technique in early performance prediction and assessment of on-board software, and this validation philosophy will be followed for all future spacecraft. In a development environment where software requirements are complex and requirement changes must be incorporated even during the final stages of development, this technique offers an excellent solution for fully validating on-board software at source-code level before it is integrated with the target hardware. This additional validation step not only improves software quality but also enhances productivity and reduces system turnaround time.
  • IRIS Biometric Watermarking Using Singular Value Decomposition and Wavelet Based Transform
    Anoopa C J and Amshakala K, Coimbatore Institute of Technology, India
    With current technological advancement, it is very easy for intruders to produce illegal copies of multimedia data. Digital watermarking is one technique to counter this, in which copyright information, or a watermark, is digitally embedded into the data to be protected. The two major approaches are the spatial domain and the more robust transform domain. The spatial-domain method is comparatively simple, but it lacks the basic robustness expected of watermarking applications: it can survive simple operations such as cropping or the addition of noise, but lossy compression defeats the watermark. High-quality watermarked images are produced by first transforming the original image into the frequency domain using the Fourier transform, the Discrete Cosine Transform (DCT) or Discrete Wavelet Transforms (DWT), together with Singular Value Decomposition (SVD). In this project, a method for watermarking digital images with biometric data is presented. Using a biometric instead of a traditional watermark increases the security of the image data. The biometric used here is the human iris. The iris biometric template is generated from the subject's eye images; the discrete cosine values of the template are extracted through the DCT and converted to a binary code. This binary code is embedded in the singular values of the host image's coefficients generated through the wavelet transform: the original image is first processed with the discrete wavelet transform, followed by singular value decomposition of the sub-band coefficients. As a result, a person's identity can be detected from an iris image. Compared with the existing DWT-DCT and support vector regression (SVR)-DWT-DCT methods, the DWT-SVD method is more robust against the selected attacks.
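    The transform-domain embedding idea can be sketched in simplified, non-blind form: a one-level 2D Haar DWT yields the LL sub-band, and watermark bits are embedded additively in its coefficients. The full scheme above additionally applies SVD to the sub-band coefficients and derives the bit string from an iris template via the DCT; here a fixed bit list and plain additive embedding stand in for both, so this is only an illustration of the principle.

```python
def haar_ll(img):
    """LL sub-band of a one-level 2D Haar transform
    (averages of non-overlapping 2x2 blocks)."""
    h, w = len(img), len(img[0])
    return [[(img[r][c] + img[r][c+1] + img[r+1][c] + img[r+1][c+1]) / 4.0
             for c in range(0, w, 2)] for r in range(0, h, 2)]

def embed(ll, bits, alpha=5.0):
    """Additively embed each bit into one LL coefficient."""
    flat = [v for row in ll for v in row]
    padded = bits + [0] * (len(flat) - len(bits))
    return [v + alpha * b for v, b in zip(flat, padded)]

def extract(marked, ll, alpha=5.0, nbits=None):
    """Non-blind extraction: compare marked coefficients to originals."""
    flat = [v for row in ll for v in row]
    return [1 if m - v > alpha / 2 else 0 for m, v in zip(marked, flat)][:nbits]

image = [[10, 12, 20, 22],
         [14, 16, 24, 26],
         [30, 32, 40, 42],
         [34, 36, 44, 46]]
ll = haar_ll(image)      # 2x2 sub-band of the 4x4 "host image"
wm = [1, 0, 1]           # stand-in for the iris-derived bit string
marked = embed(ll, wm)
```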
  • Crushup - A Mishmash Social Networking Android Application
    E. Sathiya Lakshmi and K.S Palanisamy, Coimbatore Institute of Technology, India
    In recent years, with the advent of social networks on mobile phones, the social nature of human beings has been brought into the limelight. People tend to form social groups to share their ideas and to build collaborative relationships among members. Recent developments in the cloud computing and mobile computing domains have drawn the attention of many researchers to developing mishmash applications that combine the benefits of both domains. Such mishmash applications can meet the increasing demands of mobile applications for storage space, processing power and energy by utilizing available cloud services. This paper presents an Android mash-up application that portrays the social network CRUSHUP. In CRUSHUP, users can form social groups of common interest and are notified when friends or group members are nearby, giving them the opportunity to set up a meeting or not. The application collects pictures or videos from a storage cloud and uses cloud services for face recognition and video processing to identify people and form a social group. It also alerts the user via SMS or CRUSHUP when friends or group members visit a nearby location. Android location and message services are used to find the approximate location of the mobile phone running the application. Information display is achieved using the heterogeneous-list CWAC, Google Maps and default Android components, while augmented reality uses OpenCV.
  • Analyzing Behavior of Cloud Applications for derivation of Resource Usage Patterns
    Pankaj Deep Kaur and Inderveer Chana, Thapar University, India
    Cloud computing is an advantageous alternative for users to procure resources on a rental basis for hosting their application components while minimizing overall costs. Heterogeneous applications promulgate stringent Quality of Service (QoS) requirements with fluctuating workloads and resource usage needs. This extreme dynamism prevalent in cloud environments requires intelligent procedures underneath to manage the provisioning and scheduling of computational resources. In this paper, we analyze the behavior of cloud applications so as to derive patterns of resource usage. Further, we identify the key QoS metrics imperative in diverse application execution models. The behavior analysis, along with the resource usage patterns, is instrumental for achieving QoS-based resource scheduling in cloud computing environments.
  • An effective model for e-Governance using Cloud computing- (eGaaS) e-Governance as a Service
    Manish Kumar and Kunwar Singh Vaisla, BT Kumaon Institute of Technology, India
    The objective of this paper is to utilize cloud computing for e-Governance and to discuss the effective use of e-Governance with the help of ICT (Information and Communication Technology). e-Governance, an application of ICT, helps all government departments move to an e-Governance-based cloud. It provides a faster way for all government departments to communicate through the cloud, and the cloud is the best way to integrate all departments using e-Governance. e-Governance comprises the services provided by the government to citizens that improve service delivery and save time. In this paper we discuss a delivery model that integrates the cloud computing facility with e-Governance. Many e-Governance applications are used in the cloud; here we integrate all e-Governance services through the cloud and propose a model for e-Governance applications running in the cloud.
  • Central Architecture Framework for e-Governance System in India using ICT infrastructure
    Manish Kumar and Kunwar Singh Vaisla, BT Kumaon Institute of Technology, India
    Many countries are involved in the field of e-Governance through ICT (Information and Communication Technology), which plays an important role in the world of the Internet and mobile, as the growth of information technology reaches an extreme level. An effective model of e-Governance can change how information is accessed from the Internet and mobile devices, empowering users in government departments as well as citizens. This paper discusses and presents an effective architecture framework of e-Governance for India which connects the Centre, all States, and the respective districts and gram panchayats through the ICT infrastructure. It also discusses a central architecture for the Indian government to coordinate all government departments across all states of India. The main objective of this paper is to create and implement an architecture framework for e-Governance that is beneficial and cost-effective for the Indian government. Through this model, all government organizations and agencies can interact effectively and conveniently share data and information.
  • Resolving of Static and Dynamic Variability in a Generic Business Process
    Moufida Aouachria, Centre for Development of Advanced Technologies, Algeria
    We noticed that the two-phase approaches to reuse developed in domain engineering, and adapted by Caplinskas to business processes, have been implemented successfully, but they do not offer dynamic execution and do not cope well with changing dynamic environments. In this paper, we therefore propose an approach in three phases: the process domain engineering phase, the process application engineering phase and the process execution phase. This ontology-based approach enables process knowledge reuse in application domain engineering and in the appropriate runtime environment. To achieve this, we must first separate the business process ontology from that of the application domain, then reuse the process ontology in different application domains to be run in a suitable execution environment, taking into consideration all static and dynamic variability that can occur in a generic business process.
  • Translation of Telugu-Marathi and Vice-Versa using Rule Based Machine Translation
    Siddhartha Ghosh, Kalyani U.R.S and Sujata Thamke, KMIT, India
    In today's digital world, automated machine translation from one language to another has come a long way, with success stories such as Google Translator and BabelFish. Whereas BabelFish supports a good number of foreign languages and only Hindi among Indian languages, Google Translator handles about 10 Indian languages. Although most automated machine translation systems do well, handling Indian languages needs special care with local proverbs and idioms. Of the 22 recognized local languages of India, only a few have gained recognition on the digital world map, and systems for translating among them are scarce. Most machine translation systems follow the direct translation approach when translating one Indian language to another. Our research at the KMIT R&D Lab found that much more research and development work is needed for computing in Indian languages, automated machine translation being one such area, and that handling local proverbs and idioms has not been given enough attention by earlier work. This paper focuses on translation between two widely spoken Indian languages, Marathi and Telugu. Handling the proverbs and idioms of both languages has been given special care, and the research outcome shows a significant achievement in this direction. A machine translation system for converting between these two languages has also been developed as part of this research work.
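    A direct-translation pipeline with idiom handling, as described above, can be sketched as a longest-match idiom lookup applied before word-by-word dictionary translation. All dictionary and idiom entries below are romanized, hypothetical placeholders for illustration only, not the paper's actual lexicon.

```python
# Hypothetical bilingual resources (romanized placeholders):
IDIOM_TABLE = {
    ("chethulu", "kalaka"): ["vel", "gelyavar"],   # idiom -> idiom, whole-phrase
}
WORD_DICT = {"nenu": "mi", "intiki": "ghari", "veltanu": "jato"}

def translate(words):
    """Match multi-word idioms greedily (longest span first) before
    falling back to word-by-word direct translation; unknown words
    pass through unchanged."""
    out, i = [], 0
    while i < len(words):
        matched = False
        for span in range(len(words) - i, 1, -1):   # try idioms first
            key = tuple(words[i:i + span])
            if key in IDIOM_TABLE:
                out += IDIOM_TABLE[key]
                i += span
                matched = True
                break
        if not matched:
            out.append(WORD_DICT.get(words[i], words[i]))
            i += 1
    return out
```

    Real systems would add morphological analysis and reordering rules on top of this lookup skeleton.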
  • Program Recognition System for C - A novel take on use of plans and clichés for program understanding
    Manasi Deshmukh, Rohan Ingale, Rajat Doshi and Priyanka Sathe, University of Pune, India
    This paper presents an approach for deriving an English-language description of a C program directly from the source code. Two levels of translation are presented: cliché extraction, to identify commonly used programming constructs, and concept abstraction, to deduce the purpose of the program. Concept abstraction can serve as a basis for intelligent query support and for providing relevant documentation. In this paper, we compare prominent works on program understanding systems and propose an efficient method for plan representation and storage of plans in a plan library, as an alternative approach to program recognition using flow graph parsing.
  • PETA : Penetration Testing of Algorithm using JUNIT
    Mavera Usmani, Rahul Johari and Sakshi Goyal, GGS Indraprastha University, India
    Verifying the correctness of a program's behaviour by inspecting the content of output statements with your eyes is a manual, or more specifically a visual, process. There are problems with this visual approach. The first is being unable to check the code's correctness again. The second is that the pair of eyes used is tightly coupled to the brain of its owner: if the author of the code also owns the eyes used in the visual inspection, the process of verifying correctness depends on knowledge about the program internalized in the inspector's brain, and it is difficult for a new pair of eyes to come in and verify the correctness of the code simply because they are not partnered with the brain of the original coder. Reading other people's code that is not covered by unit tests is harder than reading code that has associated unit tests: at best it is tricky work, at worst it is the most turgid task in software. In this paper, we unit test the correctness and efficiency of the cryptographic algorithms AES, DES and MD5. With unit testing we divide code into its component parts, and for each component we set out our stall, stating how the program should behave. Each unit test tells a story of how that part of the program should act in a specific scenario; each is like a clause in a contract that describes what should happen from the client code's point of view. A new pair of eyes then has two strands of live and accurate documentation on the code in question. We unit test the corresponding algorithms with the help of JUnit 4.10, automate the running of tests so that they execute on each build of the project, and automate the generation of code coverage reports that detail the percentage of code covered and exercised by tests, striving for high percentages.
  • Secure and Energy Efficient CDAMA Scheme in Wireless Sensor Network Using DAS Model
    Nidhi Mouje and Nikita Chavhan, G. H. Raisoni College of Engineering & Technology, India
    Wireless sensor networks (WSNs) are ad-hoc networks composed of tiny devices with limited computation and energy capacities. For such devices, data transmission is a very energy-consuming operation, so minimizing the number of bits sent by each device is essential to the lifetime of a WSN. Concealed data aggregation (CDA) schemes based on the homomorphic characteristics of a privacy homomorphism (PH) enable end-to-end encryption in wireless sensor networks. CDAMA is designed using multiple points, each of a different order, for a multi-application environment; it mitigates the impact of compromising attacks in single-application environments and reduces the damage from unauthorized aggregations. In the database-service-provider model, to maintain data privacy, clients need to outsource their data to servers in encrypted form, while still being able to execute queries over the encrypted data.
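    CDAMA itself is built on elliptic-curve groups and is beyond a short sketch, but the underlying additively homomorphic idea of concealed data aggregation can be shown with the classic additive-cipher style scheme: each sensor masks its reading with a key shared with the sink, the aggregator sums ciphertexts without ever decrypting, and the sink strips the combined key. The modulus and values are illustrative.

```python
M = 10**6          # modulus large enough to hold any aggregate sum

def encrypt(reading, key):
    """Sensor-side: additively mask the reading (Castelluccia-style)."""
    return (reading + key) % M

def aggregate(ciphertexts):
    """Aggregator-side: sum ciphertexts; plaintexts are never seen."""
    return sum(ciphertexts) % M

def sink_decrypt(agg, keys):
    """Sink-side: remove the combined key stream to recover the sum."""
    return (agg - sum(keys)) % M

readings = [17, 42, 5]           # hypothetical sensor values
keys = [123456, 654321, 111111]  # keys shared pairwise with the sink
agg = aggregate(encrypt(r, k) for r, k in zip(readings, keys))
recovered = sink_decrypt(agg, keys)
```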
  • Distance Based Transformation for Privacy Preserving Data Mining using Hybrid Transformation
    Hanumantha Rao Jalla1 and P.N Girija2, 1Chaitanya Bharathi Institute of Technology, India and 2University of Hyderabad, India
    Data mining techniques are used to retrieve knowledge from large databases, helping organizations run their business effectively in a competitive world; however, they can violate the privacy of individual customers. This paper addresses the privacy issues of individual customers and proposes a transformation technique based on the Walsh-Hadamard Transformation (WHT) and Rotation. The WHT generates an orthogonal matrix that transfers the entire data set into a new domain while maintaining the distances between data records. Since those records could be reconstructed by statistical techniques such as matrix inversion, we additionally apply a Rotation transformation, which increases the difficulty for unauthorized persons of recovering the original data of other organizations. The experimental results show that the proposed transformation gives the same classification accuracy as the original data set. We compare the results with existing techniques such as data perturbation by Simple Additive Noise (SAN) and Multiplicative Noise (MN), the Discrete Cosine Transformation (DCT), wavelets, and First and Second order Sum and Inner product Preservation (FISIP) transformations. Based on privacy measures, we conclude that the proposed transformation better maintains the privacy of individual customers.
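    The key property claimed above, that an orthonormal WHT followed by a rotation preserves pairwise distances (so distance-based classifiers keep their accuracy), can be sketched and checked directly. The rotation angle and record values are illustrative; the paper's exact rotation scheme is not given in the abstract.

```python
import math

def fwht(vec):
    """Fast Walsh-Hadamard transform with orthonormal scaling
    (input length must be a power of two)."""
    a = list(vec)
    h = 1
    while h < len(a):
        for i in range(0, len(a), h * 2):
            for j in range(i, i + h):
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        h *= 2
    n = math.sqrt(len(a))
    return [x / n for x in a]

def rotate(vec, theta=0.7):
    """Rotate successive coordinate pairs so the overall map is no
    longer the publicly known Hadamard matrix alone."""
    c, s = math.cos(theta), math.sin(theta)
    out = list(vec)
    for i in range(0, len(out) - 1, 2):
        x, y = out[i], out[i + 1]
        out[i], out[i + 1] = c * x - s * y, s * x + c * y
    return out

r1, r2 = [4.0, 1.0, 3.0, 2.0], [0.0, 2.0, 5.0, 1.0]  # two data records
t1, t2 = rotate(fwht(r1)), rotate(fwht(r2))
d_orig, d_new = math.dist(r1, r2), math.dist(t1, t2)
# Both maps are orthogonal, so the pairwise distance is unchanged.
```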
  • Multiday Pair Trading Strategy using Copula
    Apoorva Uday Nayak, Vishwanjali Gaikwad and Megha Ugalmugale, College of Engineering, India
    Pair trading is a market-neutral strategy meant to generate profit regardless of whether equities rise or fall: a long position (buy) in one stock is matched with a short position (sell) in a historically correlated stock. Some traditional pair trading techniques use correlation or cointegration as a dependence measure and assume a symmetric distribution of data around a mean of zero. However, non-linear distributions occur quite frequently in financial assets, so using the linear correlation coefficient as a dependence measure is erroneous and may lead to misleading results; this may trigger wrong trading signals and miss profit opportunities. This paper presents an overview of applying different copulas to develop a model that suggests a profitable strategy by analyzing data and providing trading signals using machine learning and other statistical analysis techniques. The copula is a relatively new pair trading technique and is much more realistic: copulas can be applied regardless of the form of the marginal distributions, providing much greater robustness and flexibility in practical applications.
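    The separation of marginals from dependence that the abstract relies on can be sketched for the simplest case, a bivariate Gaussian copula: rank-transform each return series to pseudo-observations in (0, 1), map them through the normal quantile function, and estimate the single copula parameter as their Pearson correlation. The return series below are made-up illustrative numbers, and real copula trading models typically use richer families (Clayton, Gumbel, Student-t).

```python
from statistics import NormalDist

def pseudo_obs(xs):
    """Empirical-CDF rank transform to (0,1): this is what lets the
    copula ignore the actual shape of the marginal distribution."""
    n = len(xs)
    order = sorted(range(n), key=lambda i: xs[i])
    u = [0.0] * n
    for rank, i in enumerate(order, start=1):
        u[i] = rank / (n + 1)
    return u

def gaussian_copula_rho(x, y):
    """Fit the Gaussian copula parameter: map pseudo-observations
    through the normal quantile function, then take their Pearson
    correlation."""
    nd = NormalDist()
    zx = [nd.inv_cdf(u) for u in pseudo_obs(x)]
    zy = [nd.inv_cdf(u) for u in pseudo_obs(y)]
    mx, my = sum(zx) / len(zx), sum(zy) / len(zy)
    cov = sum((a - mx) * (b - my) for a, b in zip(zx, zy))
    vx = sum((a - mx) ** 2 for a in zx)
    vy = sum((b - my) ** 2 for b in zy)
    return cov / (vx * vy) ** 0.5

# Hypothetical daily returns of two co-moving stocks:
ret_a = [0.011, -0.004, 0.020, -0.012, 0.006, 0.001, -0.009, 0.015]
ret_b = [0.009, -0.006, 0.016, -0.010, 0.004, 0.002, -0.007, 0.012]
rho = gaussian_copula_rho(ret_a, ret_b)
```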
  • Optimization of Cloud Computing Resources using Genetic Algorithms
    Navneet Kaur1 and Jaspreet Singh Budwal2, 1Apeejay Institute of Management Technical Campus, India and 2GSSS, India
    Cloud computing has developed as an efficient way to allocate resources for the execution of geographically dispersed tasks and services, and has been emerging as a commercial reality in the information technology domain. For evolutionary algorithm (EA) researchers, the cloud represents both a huge opportunity and a great challenge. This paper describes how cloud computing resources can be optimized using one class of evolutionary algorithms, namely genetic algorithms (GAs), which have great potential to optimize the allocation of resources in cloud computing. The proposed genetic algorithm mechanism is easy to implement and flexible enough to be used with a huge set of parameters, and the model can be used to make decisions in complex and diversified environments. It applies GA operators, reproduction, crossover and mutation, to cloud computing resources to obtain an optimal solution evaluated by a fitness function; selection on fitness values discards all invalid parameters, and the algorithm iterates over reproduced populations to output the best resource allocation among the available resources. A genetic algorithm model for cloud computing is proposed considering the characteristics and key features of cloud computing. The experimental results show that this mechanism can better realize load balancing and proper resource utilization.
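    A minimal GA of the kind described, with the abstract's reproduction (selection), crossover and mutation operators, can be sketched for the toy problem of assigning tasks to machines so as to minimize makespan. Task lengths, machine speeds, and all GA hyperparameters here are illustrative assumptions, not the paper's configuration.

```python
import random

def makespan(assign, task_len, machine_speed):
    """Finish time of the busiest machine under a given assignment."""
    load = [0.0] * len(machine_speed)
    for t, m in enumerate(assign):
        load[m] += task_len[t] / machine_speed[m]
    return max(load)

def ga_allocate(task_len, machine_speed, pop=30, gens=60, seed=1):
    """Tiny GA: a chromosome maps each task to a machine; fitness is
    negated makespan; tournament selection, one-point crossover,
    point mutation."""
    rng = random.Random(seed)
    n, m = len(task_len), len(machine_speed)
    popu = [[rng.randrange(m) for _ in range(n)] for _ in range(pop)]
    fit = lambda c: -makespan(c, task_len, machine_speed)
    for _ in range(gens):
        nxt = []
        while len(nxt) < pop:
            p1 = max(rng.sample(popu, 3), key=fit)   # tournament
            p2 = max(rng.sample(popu, 3), key=fit)
            cut = rng.randrange(1, n)
            child = p1[:cut] + p2[cut:]              # crossover
            if rng.random() < 0.2:                   # mutation
                child[rng.randrange(n)] = rng.randrange(m)
            nxt.append(child)
        popu = nxt
    return max(popu, key=fit)

tasks = [4, 7, 2, 9, 5, 3]     # hypothetical task lengths
speeds = [1.0, 2.0]            # hypothetical machine speeds
best = ga_allocate(tasks, speeds)
ms = makespan(best, tasks, speeds)
```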
  • Designing of Road Signs Information Broadcasting Scheme in VANET
    Noopur Patne and Mangala Madankar, G.H. Raisoni College of Engineering, India
    Establishing a vehicular ad-hoc network is in high demand in smart traffic management systems, and in recent years many researchers have been working on VANETs and trying to implement the concepts in the real world. Vehicular ad-hoc networks (VANETs) are being developed to provide on-demand wireless communication infrastructure among vehicles and authorities; such an infrastructure is expected to deliver multiple road safety and driving assistance applications. In this paper, a system is proposed whose main objective is to detect road signs from a moving vehicle. The system uses one signal transmitter in every symbol or message board at the roadside; whenever a vehicle passes that symbol, the receiver situated inside the vehicle (the In-Car System) receives the signals and displays the proper message or symbol details on a display connected in the car. Road traffic sign detection is a technology by which a vehicle is able to recognize the traffic signs on the road, e.g. "speed limit", "school" or "turn ahead".
  • An Improvised Model for Identifying Influential Nodes in Multi-Parameter Social Networks
    Abhishek Singh and A.K.Agrawal, Banaras Hindu University, India
    Influence maximization is one of the major tasks in the fields of viral marketing and community detection. Based on the observation that social networks in general are multi-parameter graphs, while viral marketing and influence maximization are based on only a few parameters, we propose converting general social networks into "interest graphs". We propose an improvised model for identifying influential nodes in multi-parameter social networks using these interest graphs. The experiments conducted on these interest graphs have shown better results than the method proposed in [8].
  • Finding Prominent Features in Communities in Social Networks using Ontology
    Vijay Nayak and Bhaskar Biswas, Indian Institute of Technology (BHU), India
    Community detection is one of the major tasks in social networks. The success of any community depends upon the features selected to form it, so it is important to know the main features that may affect the community. In this work we propose a method, using ontology, to find the prominent features on which a community can be formed.
  • Designing of SMI based multi-beam smart antenna using FPGA
    Neha Deshmukh and Ratnaprabha Jasutkar, G. H. Raisoni College of Engineering & Technology, India
    A smart antenna is an antenna array system aided by an algorithm designed for use in diverse signal environments. It consists of multiple antenna elements connected to a digital signal processor where spatial filtering takes place. Unmanned aerospace vehicle (UAV) communication requires high bandwidth to move large amounts of mission information to and from users in real time, so to optimize transmit and receive power and to support high-data-throughput communication, antenna beam forming for point-to-point, high-bandwidth UAV communication is important. This can be done using adaptive beam forming. The adaptive antenna is the most advanced smart antenna approach, and is implemented here using the Sample Matrix Inversion (SMI) technique to provide better performance and throughput. The smart antenna implementation using SMI consists of complex multiply-and-accumulate (CMACC) operations over the block adaptive length K to estimate the sample covariance matrix, followed by a matrix inversion operation to obtain the inverse sample covariance matrix. Complex multiplications are then used to multiply the inverse sample covariance matrix with the spatial steering vector to obtain an estimate of the beam forming weights. Finally, complex multiply operations are used at each element in the smart antenna to multiply the beam forming weights with the individual channels in the delay-and-sum beam former; the resulting outputs of all channels are summed to obtain the beam former output.
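    The weight-estimation pipeline described (sample covariance from K snapshots, inversion, multiplication by the steering vector) can be sketched for a 2-element array in plain Python with complex numbers. The snapshot statistics and look direction are illustrative; a hardware (FPGA) implementation would of course realize the same arithmetic very differently.

```python
import cmath
import random

def mat2_inv(R):
    """Inverse of a 2x2 complex matrix via the adjugate formula."""
    (a, b), (c, d) = R
    det = a * d - b * c
    return [[ d / det, -b / det],
            [-c / det,  a / det]]

def smi_weights(snapshots, steering):
    """Sample Matrix Inversion for a 2-element array: estimate the
    sample covariance R over K snapshots (the CMACC step), invert it,
    and multiply by the steering vector to get beamforming weights."""
    K = len(snapshots)
    R = [[sum(x[i] * x[j].conjugate() for x in snapshots) / K
          for j in range(2)] for i in range(2)]
    Rinv = mat2_inv(R)
    return [sum(Rinv[i][j] * steering[j] for j in range(2))
            for i in range(2)]

rng = random.Random(0)
steer = [1, cmath.exp(-1j * cmath.pi * 0.25)]   # assumed look direction
snaps = [[rng.gauss(0, 1) + 1j * rng.gauss(0, 1) for _ in range(2)]
         for _ in range(32)]                     # K = 32 noise snapshots
w = smi_weights(snaps, steer)
```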
  • An Effective Resource Scheduling strategy In Cloud Computing Based on Parallel Genetic Algorithm
    S R Chandra Murthy Vedula, B.S.P. Mishra and Neetesh Mehrotra, KIIT University, India
    Cloud computing is one of the emerging fields in the distributed environment. One of its research areas is fair allocation of resources, i.e. a fair scheduling strategy using advances in virtualization technology, and solving the load balancing problem by ensuring maximum utilization of resources. Most load balancing algorithms focus mainly on the current load state of the system, which sometimes has a great impact on the system and leads to poor performance. To overcome this problem, this paper presents an effective scheduling strategy using the master-slave model of a parallel genetic algorithm, based on both the previous and the present load state of the system. The proposed algorithm works out in advance the impact on the system of deploying VM resources to it. To balance the load, the algorithm selects the least-affecting solution, dynamically solving the load imbalance problem and achieving maximum utilization of resources.
  • A Novel Heuristic Resolving Deadline-Oriented Task Scheduling In Cloud
    Rohit Kalita and Harish Patnaik, KIIT University, India
    Cloud computing is a framework that allows easy network access to a shared pool of customizable computing resources whenever required. It uses virtualization technology to provide computing resources to clients, which is why scheduling and resource allocation are important research issues in cloud computing. To exploit the ability of cloud computing, we need an effective task scheduling algorithm: a scheduler should schedule client requests so as to achieve a high degree of load balancing and resource utilization. In this paper, an approach is portrayed in which the basic computing elements are virtual machines (VMs), which may be of varying sizes, and the performance requirements are specified by the clients as deadlines on the submitted tasks. The goal is to ensure that all assigned tasks are completed within their deadlines, and that tasks are evenly allocated across the resources so that utilization is effective. The algorithm is designed based on the striking features of the Max-Min algorithm and a novel concept termed flexibility time, scheduling tasks to conform to their expected finish times. The objective is to harness the utilization of the internal data center and to minimize outsourcing of tasks in the cloud, while attaining the applications' quality of service (QoS) constraints.
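    One plausible reading of the flexibility-time idea, slack = deadline minus expected finish time, combined with a Max-Min-style greedy loop, is sketched below: repeatedly place the least flexible pending task on the VM that finishes it earliest. The task sizes, VM speeds, and the exact tie-breaking are assumptions for illustration, not the paper's algorithm.

```python
def schedule(tasks, vms):
    """Greedy deadline-aware heuristic: each round, compute every
    pending task's best achievable slack (deadline - expected finish
    time), pick the task with the least slack, and assign it to the
    VM that finishes it earliest.
    tasks: list of (length, deadline); vms: list of speeds."""
    ready = [0.0] * len(vms)          # time each VM becomes free
    plan, missed = [], []
    pending = list(range(len(tasks)))
    while pending:
        best = None
        for t in pending:
            length, deadline = tasks[t]
            v = min(range(len(vms)),
                    key=lambda m: ready[m] + length / vms[m])
            finish = ready[v] + length / vms[v]
            flex = deadline - finish  # the "flexibility time"
            if best is None or flex < best[0]:
                best = (flex, t, v, finish)
        flex, t, v, finish = best
        pending.remove(t)
        ready[v] = finish
        (plan if flex >= 0 else missed).append((t, v, finish))
    return plan, missed

tasks = [(4, 6.0), (2, 3.0), (6, 9.0), (1, 2.0)]  # (length, deadline)
vms = [1.0, 2.0]                                   # relative speeds
plan, missed = schedule(tasks, vms)
```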
  • An Improved Approach to Minimize Context Switching in Round Robin Scheduling Algorithm Using Optimization Techniques
    Mahesh Kumar M R, Renuka Rajendra B, Sreenatha M and Niranjan C K, JSSATE, India
    Scheduling is a fundamental operating system function; almost all computer resources are scheduled before use. The major CPU scheduling algorithms concentrate on maximizing CPU utilization and throughput while minimizing waiting time and turnaround time. In particular, the performance of the round robin algorithm depends heavily on the size of the time quantum. To improve CPU performance and minimize the overhead on the CPU, the time quantum should be large with respect to the context switch time; otherwise, context switching becomes excessive. In this paper, we propose a method to minimize context switching and to replace the fixed time quantum size in the round robin scheduling algorithm using optimization techniques. Both results and calculations show that our proposed method is more efficient than the existing round robin scheduling algorithm.
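The quantum/context-switch trade-off can be made concrete with a toy simulation that counts context switches and total waiting time for different quantum sizes. The burst times below are the classic textbook example, not data from the paper:

```python
# Toy round-robin simulation: count context switches and total waiting time
# for a given time quantum. Burst times are illustrative only.

def round_robin(bursts, quantum):
    """Return (context_switches, total_waiting_time) for the given quantum."""
    remaining = list(bursts)
    n = len(remaining)
    waiting = [0] * n
    queue = list(range(n))
    switches = 0
    prev = None
    while queue:
        i = queue.pop(0)
        if prev is not None and prev != i:
            switches += 1                  # the CPU changed processes
        run = min(quantum, remaining[i])
        for j in queue:                    # everyone else waits meanwhile
            waiting[j] += run
        remaining[i] -= run
        if remaining[i] > 0:
            queue.append(i)                # unfinished: back of the queue
        prev = i
    return switches, sum(waiting)

bursts = [24, 3, 3]
for q in (2, 4, 8):
    print(q, round_robin(bursts, q))
```

A larger quantum yields fewer switches but can increase waiting time for short processes, which is exactly the tension an optimized, non-fixed quantum tries to resolve.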
  • Effect of Refactoring On Software Quality
    Noble Kumari and Anju Saha, USICT, India
    Software quality is an important issue in the development of successful software applications. Many methods have been applied to improve software quality; refactoring is one of them. However, the effect of refactoring in general on all software quality attributes is ambiguous. The goal of this paper is to find out the effect of various refactoring methods on quality attributes and to classify them based on their measurable effect on a particular software quality attribute. The paper focuses on studying the reusability, complexity, maintainability, testability, adaptability, understandability, fault proneness, stability and completeness attributes of software. This, in turn, will assist developers in determining whether to apply a certain refactoring method to improve a desirable quality attribute.
  • Managing Uncertainty of Time in Agile Environment
    Rashmi Popli and Priyanka Malhotra and Naresh Chauhan, YMCAUST, India
    Agile software development represents a major departure from traditional methods of software engineering and has had a huge impact on how software is developed worldwide. Agile software development solutions are targeted at enhancing work at the project level, but they may encounter uncertainties in practice. One of the key measures of the resilience of a project is its ability to reach completion on time and on budget, regardless of the turbulent and uncertain environment it may operate within. This paper therefore tries to resolve the uncertainty of time.
  • Analysis of Various Sorting Algorithms in OpenCL
    Shreshtha Sharma and Maninder Singh, Thapar University, India
    With the availability of multi-core processors and graphics processing units in the market, a heterogeneous computing environment with immense performance capability can easily be constructed. Heterogeneous computing accelerates performance by transferring computation-intensive code to the GPU while the remaining code runs on the CPU. OpenCL is a standardized framework for heterogeneous computing. Sorting algorithms are fundamental building blocks for other algorithms and have many applications in computer science and technology. In this paper, we implement Selection Sort, Bitonic Sort and Radix Sort in OpenCL. The algorithms are designed to exploit the parallelism model available on multi-core GPUs with the help of the OpenCL specification. In addition, a comparison between the traditional sequential sorting algorithms and the parallel sorting algorithms is made on an Intel Core i5-3317U CPU @ 1.70 GHz and an AMD Radeon HD 7600M series GPU.
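Of the three, bitonic sort maps most naturally to the GPU because its compare-exchange pattern is fixed and data-independent, so each stage becomes one parallel kernel launch. A sequential Python sketch of those stages (each iteration of the inner loop over `i` would be one OpenCL work-item; the array length is assumed to be a power of two):

```python
def bitonic_sort(a):
    """In-place bitonic sort; len(a) must be a power of two.
    The loop over i within one (k, j) stage is data-independent,
    so it is what an OpenCL kernel would execute in parallel."""
    n = len(a)
    k = 2
    while k <= n:            # size of the bitonic sequences being merged
        j = k // 2
        while j >= 1:        # compare-exchange distance within this stage
            for i in range(n):
                partner = i ^ j
                if partner > i:
                    # The block's position decides sort direction.
                    ascending = (i & k) == 0
                    if (a[i] > a[partner]) == ascending:
                        a[i], a[partner] = a[partner], a[i]
            j //= 2
        k *= 2
    return a

print(bitonic_sort([7, 3, 1, 8, 6, 2, 5, 4]))
```

The data-independent schedule is also why bitonic sort does O(n log² n) comparisons yet still beats comparison-optimal sequential sorts on wide parallel hardware.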
  • A Fangled Method for Improving the Ranking Score of Web Pages Based on Reading Time
    Garima Garg1, Harleen Kaur1 and Ritu Chauhan2, 1Hamdard University, India and 2Amity University, India
    The WWW consists of billions of web pages. When a user enters a query, the search engine generally returns a list of pages as a result. To assist users in navigating this result list, ranking methods are applied to it. However, the returned results are often not relevant, and the ranking of the pages is not efficient with respect to the user's query. In order to improve the rank of web pages, after analysing the original PageRank, we propose the Time Rank Algorithm for Paging (TRAP) to find more relevant web pages, which helps increase the accuracy of web page ranking according to the user's query. The most valuable pages are thus displayed at the top of the result list, which reduces the search space to a large extent. Moreover, the time complexity of the proposed TRAP algorithm is lower than that of the PageRank algorithm.
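As a rough illustration of how reading time can be folded into link-based ranking (this is not the paper's actual TRAP algorithm, whose details differ), one can bias the PageRank teleport distribution toward pages users spend longer reading:

```python
# Illustrative sketch only: a PageRank variant whose random-jump (teleport)
# distribution is weighted by average reading time per page.

def time_weighted_pagerank(links, read_time, d=0.85, iters=50):
    """links: {page: [outlinks]}; read_time: {page: avg seconds on page}."""
    pages = list(links)
    n = len(pages)
    total_time = sum(read_time.values())
    # Teleport preferentially to pages users spend longer reading.
    teleport = {p: read_time[p] / total_time for p in pages}
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1 - d) * teleport[p] for p in pages}
        for p in pages:
            out = links[p] or pages        # dangling page: spread everywhere
            share = d * rank[p] / len(out)
            for q in out:
                new[q] += share
        rank = new
    return rank

links = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
read_time = {"a": 10, "b": 60, "c": 30}
for page, r in sorted(time_weighted_pagerank(links, read_time).items()):
    print(page, round(r, 3))
```

With a uniform `read_time` this reduces to ordinary PageRank, so the reading-time signal acts purely as a relevance bias on top of link structure.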
  • Reconstruction of Tollan-Xicocotitlan city by Augmented Reality (Extended)
    Martha Rosa Cordero Lopez and Marco Antonio Dorantes Gonzalez, Escuela Superior de Computo, Mexico
    This terminal (capstone) project presents the analysis, design, implementation and results of the reconstruction of Tollan-Xicocotitlan through augmented reality (extended), which releases information about the Toltec capital by presenting an overview of the main buildings of the city of Tollan-Xicocotitlan, supported by three-dimensional models based on the augmented reality technique, showing the user a virtual representation of the buildings of the Tollan phase.
  • Official voting system for electronic voting: E-Vote
    Marco Antonio Dorantes Gonzalez, Martha Rosa Cordero Lopez and Jorge Benjamin Silva Gonzalez, Escuela Superior de Computo, Mexico
    This paper describes the official electronic ballot voting system, E-Vote, which aims to streamline the primary electoral processes performed in the country, beginning with the Federal District, bringing benefits and improvements. The principal benefits are economic, ecological and time savings, taking into account process security features and the integrity of the captured votes. This system represents an alternative to the devices and systems currently implemented in countries such as Venezuela, Brazil and the United States, and is formalized as a prototype able to compete with others developed by the Electoral Institute of the Federal District (IEDF).
  • Reorganization of Links to Improve User Navigation
    Deepshree A. Vadeyar and Yogish H.K, EWIT, India
    A website can be easily designed, but achieving efficient user navigation is not an easy task, since user behaviour keeps changing and the developer's view is quite different from what users want. One way to improve navigation is to reorganize the website structure. For the reorganization, the proposed strategy is the farthest-first traversal clustering algorithm, which performs clustering on two numeric parameters; the Apriori algorithm is used to find the frequent traversal paths of users. Our aim is to perform the reorganization with fewer changes to the website structure.
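The farthest-first traversal step can be sketched as follows; the two numeric parameters per page (visit count and seconds on page, say) are hypothetical placeholders for whatever the paper actually measures:

```python
import math

def farthest_first(points, k):
    """Pick k cluster centres by farthest-first traversal, then
    assign each point to its nearest centre."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    centres = [points[0]]                      # arbitrary first centre
    while len(centres) < k:
        # Next centre: the point farthest from its nearest existing centre.
        nxt = max(points, key=lambda p: min(dist(p, c) for c in centres))
        centres.append(nxt)
    clusters = {c: [] for c in centres}
    for p in points:
        clusters[min(centres, key=lambda c: dist(p, c))].append(p)
    return clusters

# Two numeric parameters per page, e.g. (visits, seconds on page) -- hypothetical.
pages = [(1, 5), (2, 4), (10, 30), (11, 28), (50, 2)]
for centre, members in farthest_first(pages, 3).items():
    print(centre, members)
```

Farthest-first traversal needs only k passes over the data and gives a 2-approximation to the k-centre objective, which makes it a cheap way to group similar pages before restructuring links.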



Copyright © ACITY 2014