Venue: Coral Deira - Dubai, Deira, Dubai, UAE. Date: April 4-5, 2014

Accepted papers

  • Text Summarization For Turkish By Sentence Extraction
    Murat Oruc and Ilyas Cicekli, Hacettepe University, Turkey
    In this paper, we propose a text summarization method that extracts sentences with a scoring algorithm combining term frequency, title similarity, key phrases, centrality, sentence position and positive words. The method starts by parsing the original text into sentences. Once the separate sentences are obtained, each is scored using these features, with weight values learned from a corpus of 120 texts and their human-generated, sentence-selected summaries. In addition, key phrases are requested from the user through the interface to simplify the algorithm. After scoring is completed, the summary is formed from the highest-scoring sentences. The interface asks the user whether the text contains a title; if so, the algorithm assigns it as the title of the summary as well. The application and its user interface were built in Microsoft Visual Studio using C#.
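The extract-and-score flow described above can be sketched roughly as follows. This is not the authors' code: only two of their features (term frequency and sentence position) are used, and the weights are arbitrary placeholders rather than corpus-learned values.

```python
# Illustrative sketch of score-based sentence extraction (assumed weights).
from collections import Counter

def summarize(text, n_keep=1, w_tf=0.7, w_pos=0.3):
    sentences = [s.strip() for s in text.split('.') if s.strip()]
    words = Counter(w.lower() for s in sentences for w in s.split())
    scored = []
    for i, s in enumerate(sentences):
        toks = s.lower().split()
        tf = sum(words[w] for w in toks) / len(toks)    # term-frequency feature
        pos = (len(sentences) - i) / len(sentences)     # earlier sentences score higher
        scored.append((w_tf * tf + w_pos * pos, i, s))
    top = sorted(scored, reverse=True)[:n_keep]
    # emit the selected sentences in their original order
    return '. '.join(s for _, _, s in sorted(top, key=lambda t: t[1])) + '.'
```

The real system adds title similarity, key phrases, centrality and positive-word features to the same weighted sum.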
  • An Authenticated Privacy Preserving Multiword Data Retrieval Over Encrypted Cloud
    Durga Shankar Viswanathan, Dhaanish Ahmed Institute of Technology, India
    Cloud computing, the long-term goal of utility computing, has transformed the IT industry considerably. Data frequently contain sensitive information and should be secured by the policies and regulations defined by various organizations; however, storing information in the cloud raises privacy problems. Data encryption protects privacy and confidentiality, but efficiency is not ensured. Symmetric Searchable Encryption (SSE) and Order Preserving Encryption (OPE) inevitably leak data, and cloud users are not authenticated by the cloud. We propose Trusted Centre Searchable Encryption (TCSE), which supports multi-keyword retrieval and user authentication. In TCSE, we employ a vector space model, homomorphic encryption and a trusted centre: the vector space model provides efficient and accurate searching, homomorphic encryption reduces computation cost, and the trusted centre ensures access control in the cloud. Our approach guarantees practical efficiency for authentication and high security by eliminating data outflow.
  • Morphological Segmentation Of Amharic Real-Life Document Images
    Fasil Dires and Bekele Abebe, University of Gondar, Ethiopia
    Digitization has many applications, such as preserving historical collections of documents or speeding up manual processes in organizations. To this end, efficient document image analysis techniques are necessary: further processing of document images requires effective segmentation techniques that can detect objects within them. This paper uses a morphological approach for the segmentation of real-life Amharic document images. Morphological image analysis deals efficiently with geometrical features such as size, shape, contrast or connectivity, which can be considered segmentation-oriented features. Given these advantages, the morphological approach is used here to segment the document images. The proposed technique is robust enough to handle Amharic document images with different noise levels; in addition, the noise levels of the document images are identified. Finally, the performance of the segmentation technique is measured.
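Morphological segmentation pipelines build on two primitive operations, dilation and erosion. As a toy illustration (assumed, not the paper's implementation), here they are on a binary image with a 3x3 structuring element:

```python
# Binary dilation/erosion with a 3x3 structuring element (out-of-bounds
# neighbours are simply skipped, which is adequate for this toy sketch).
def dilate(img):
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = int(any(
                img[y + dy][x + dx]
                for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                if 0 <= y + dy < h and 0 <= x + dx < w))
    return out

def erode(img):
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = int(all(
                img[y + dy][x + dx]
                for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                if 0 <= y + dy < h and 0 <= x + dx < w))
    return out
```

Combinations of these (opening, closing) are what let a morphological pipeline merge character fragments into text components and suppress speckle noise.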
  • Another Proof Of The Denumerability Of The Complex Numbers
    Jose Ulisses Ferreira, UFBA, Brazil
    This short paper suggests that there might be numerals that do not represent numbers. It introduces an alternative proof that the set of complex numbers is denumerable, together with an algorithm for enumerating them.
  • A New Multi Hybrid Congestion Control Method For WMSNs Using Backpressure Routing And Dynamic Prioritization For Service Differentiation
    Akbar Majidi1 and Hamid Mirvaziri2, 1Islamic Azad University, 2University of Kerman, Iran
    This paper proposes a new multi-hybrid method for congestion control in wireless multimedia sensor networks (WMSNs). The main concerns in WMSNs are limitations in processing power, memory capacity and, especially, power supply. In WMSNs it is widely accepted that communication dominates energy consumption: the energy cost of sensing and computation is always lower than that of communication. Inspired by backpressure routing, we prioritize data at specified times, reduce communication, control congestion and avoid excess energy usage. In the proposed algorithm, routes are not precalculated; the next hop is chosen dynamically. Decisions are based on the congestion degree of neighbouring nodes: each node observes its own queue backlog and its neighbours' queue backlogs, and chooses its degree and route from the backlogs obtained from its neighbours. If two or more packets are in the same condition under backpressure routing, we use service differentiation to prioritize them. Results from simulations in NS-2 indicate that the proposed model (BDC for short) performs better than the CCF, PCCP and DCCP protocols in terms of lower latency, less packet loss, fewer hops, higher throughput, better normalized routing load and packet delivery ratio and, most importantly, optimized energy consumption.
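The backpressure next-hop rule described above can be sketched in a few lines. This is a hypothetical simplification, not the paper's algorithm: each node forwards toward the neighbour with the largest positive queue-backlog differential, and holds the packet if no neighbour is less congested.

```python
# Backpressure-style next-hop selection (illustrative sketch).
def next_hop(own_backlog, neighbor_backlogs):
    """neighbor_backlogs: dict mapping node id -> that node's queue backlog."""
    best, best_diff = None, 0
    for node, backlog in neighbor_backlogs.items():
        diff = own_backlog - backlog   # positive = pressure toward this neighbour
        if diff > best_diff:
            best, best_diff = node, diff
    return best                        # None means hold the packet for now
```

In the paper's scheme, ties between packets in the same condition would then be broken by the service-differentiation priority rather than arbitrarily.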
  • Review Paper Of Performance Analysis For Random Access Channel In Wireless Network With MAC Protocol
    Seema Narvare, Rahul Dubey and Manish Shrivastava, LNCT, India
    In order to improve the performance of Medium Access Control (MAC) schemes in wireless communication networks, a number of researchers have proposed partitioning a single shared channel into numerous sub-channels: one control sub-channel and the others data sub-channels [1]. Aloha is arguably the simplest and most-studied medium access control protocol in existence. Earlier studies of unslotted Aloha and OFDMA Aloha have shown that usage-trend analysis depends heavily on the performance of the multiple-channel scheme. This algorithm has a disadvantage, however: it may not improve the throughput performance of a MAC scheme [3], [4]. This drawback concerns not only pure Aloha but all other algorithms. In this paper, we extend and examine a novel scheme to enhance the performance analysis of the MAC mechanism using the IEEE 802.11 protocol. IEEE 802.11 is a set of medium access control and physical layer (PHY) specifications for implementing wireless local area network (WLAN) computer communication in various frequency bands.
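For context, the classical Aloha throughput results (textbook formulas, not the scheme proposed above) quantify the throughput ceiling the paper refers to: with offered load G frames per frame-time, pure Aloha achieves S = G·e^(-2G) and slotted Aloha S = G·e^(-G).

```python
import math

# Classical Aloha throughput curves (standard textbook results).
def pure_aloha(G):
    return G * math.exp(-2 * G)

def slotted_aloha(G):
    return G * math.exp(-G)
```

Pure Aloha peaks at G = 0.5 with S ≈ 0.184, slotted Aloha at G = 1 with S ≈ 0.368, which is why channel-partitioning and carrier-sensing schemes such as 802.11 were developed.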
  • A Routing Framework In Software Defined Network Environment
    Can Zhang1, Jian Li1, Chaojiong Zheng1, Zhaowen Lin1 and Yan Ma1,2, 1Institute of Network Technology, 2Beijing University of Posts and Telecommunications, China
    In this paper we propose a new design and a prototype implementation of a routing framework for the Software Defined Network (SDN) environment. The framework features a logically centralized control plane through which users can easily manage and monitor their network. It exposes APIs to support various routing protocols and algorithms, so as to integrate an OpenFlow network with homogeneous or heterogeneous networks. We also discuss the motivation, design considerations and potential use cases.
  • 3D Reconstruction And Finite Element Analyzing Of Local Blood Flow In Arteries
    LingNa He, JiaHao Liu and HeRong Zheng, Zhejiang University of Technology, China
    Vascular disease has become one of the most dangerous threats to public health in developed countries. With the rapid development of computer modeling and finite element simulation technology, researchers have carried out a large number of studies on vascular disease and believe that the vessel wall and hemodynamic shear stress play an important role in the formation and deterioration of vascular disease. This paper uses two MRI brain-artery tomographic datasets to perform 3D reconstruction and a local fluid-dynamics study, examining the characteristics of cerebral hemodynamics. It verifies the construction of a finite element computer model of the human cerebral vasculature and provides an efficient simulation methodology as a foundation for finite element analysis in a wide range of clinical applications.
  • Web Accessibility Evaluation Of E-Government Website In Saudi Arabia
    Ahlam J. Al-Khiebari and Khalid A. Alnafjan, King Saud University, Saudi Arabia
    This paper presents the results of a new accessibility study carried out in 2013 on a sample of Saudi e-government websites. The main objective is to investigate the accessibility of Saudi e-government websites with reference to the Web Content Accessibility Guidelines 2.0 (WCAG 2.0), conformance level A, using automatic evaluation tools. It aims to address the accessibility problems experienced by people with disabilities in Saudi Arabia when using e-government websites, and to assess the degree to which web accessibility is maintained over time by comparing the evaluation results with an earlier study and analyzing the progress in web accessibility. The analysis reveals that none of the sampled Saudi e-government websites passed the lowest level of WCAG 2.0, and no indication of progress in web accessibility was observed relative to the sample evaluated in the 2010 study.
  • A Developed Model-Based Monitoring For Fault Detection In Dynamic Systems
    B. Tolbi, A. Zeroual, F. Bouriachi and H. Tebbikh, University 8 Mai 1945, Algeria
    The size and complexity of industrial processes lead to the monitoring of an increasing number of variables, whose understanding is generally based on measurements and/or models of the processes. When faults are difficult to distinguish from model or measurement noise, the aim of researchers becomes to develop new and powerful techniques for fault detection. This paper presents a method for monitoring interruptions and permanent and/or intermittent faults in hybrid dynamic systems, based on modeling the behavior of the system as a stopwatch hybrid automaton. Results show that the monitoring system is able to detect any malfunction caused by defects. The technique is illustrated on a didactic example from classical engineering, a hydraulic system.
  • Off-Line Signature Recognition Using Weightless Neural Network And Feature Extraction
    Ali Al-Saegh, University of Mosul, Iraq
    The problem of automatic signature recognition and verification has been extensively investigated owing to the vitality of this field of research. Handwritten signatures are broadly used in daily life as a secure means of personal identification. In this paper, a novel approach is proposed for handwritten signature recognition in an off-line environment based on a weightless neural network (WNN) and feature extraction. This type of neural network is characterized by its simplicity of design and implementation: no weights, transfer functions or multipliers are required, and implementing the WNN needs only RAM slices. Moreover, the whole training process can be accomplished with a small number of training samples, each presented to the network only once. Employing the proposed approach for signature recognition yields promising results, with rates of 99.67% for recognizing signatures the network has been trained on and 99.55% for rejecting signatures it has not been trained on.
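The RAM-based, one-shot character of a weightless network can be shown with a minimal WiSARD-style discriminator. This is an assumed sketch of the general technique, not the paper's network: the binary input is split into fixed-size tuples, each tuple addresses one RAM, training writes the addressed cell, and the score of an input is the number of RAMs that recognize their address.

```python
# Minimal WiSARD-style weightless discriminator (RAMs modelled as sets).
class Discriminator:
    def __init__(self, input_bits, tuple_size):
        self.tuple_size = tuple_size
        self.rams = [set() for _ in range(input_bits // tuple_size)]

    def _addresses(self, bits):
        n = self.tuple_size
        return [tuple(bits[i * n:(i + 1) * n]) for i in range(len(self.rams))]

    def train(self, bits):
        # one-shot training: store each RAM's address for this example
        for ram, addr in zip(self.rams, self._addresses(bits)):
            ram.add(addr)

    def score(self, bits):
        # count how many RAMs answer 1 for this input
        return sum(addr in ram
                   for ram, addr in zip(self.rams, self._addresses(bits)))
```

Note there are indeed no weights or multipliers: training and classification are pure memory lookups, which is what makes the approach cheap to implement in RAM slices.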
  • Influence Of Quantity Of Principal Component In Discriminative Filtering
    Kenny V. dos Santos, Luiz Eduardo S. e Silva and Waldir S. S. Junior, Federal University of Amazonas, Brazil
    Discriminative filtering is a pattern recognition technique that aims to maximize the energy of the output signal when a pattern is found. Seeking to improve the filter response, principal component analysis was incorporated into the design of discriminative filters. In this work, we investigate the influence of the number of principal components on the performance of discriminative filtering applied to a facial fiducial point detection system. We show that the number of principal components directly affects the performance of the system, in terms of both true- and false-positive rates.
  • Decentralized Key Management Schemes In Mobile Ad-Hoc Networks (MANETs)
    Mrs. G. Indirani, K. Selvakumar and V. Kaveri, Annamalai University, India
    In Mobile Ad Hoc Networks (MANETs), the lack of infrastructure makes providing secure communication a big challenge, and key management is one of the central aspects of providing security. Group key management is one of the schemes applied to secure communication in MANETs, with three categories of management protocols: centralized, decentralized and distributed. Existing centralized and distributed key management schemes are not well suited to the highly dynamic, spontaneous nature of MANETs; moreover, due to node mobility, key shares may not be available in the neighborhood when they are needed for key reconstruction. We propose a fully decentralized key management scheme that uses a cryptographic secret-sharing method by which keys are split into multiple sub-keys and distributed to multiple nodes. A collector node is created to manage key reconstruction and to reduce communication overhead by using the TRAP protocol effectively in the mobile environment. The proposed scheme maximizes the chances of successful key reconstruction in a mobile environment.
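The kind of cryptographic secret sharing such schemes rely on can be illustrated with Shamir's k-of-n construction over a prime field. This is a generic sketch of the splitting technique, not the paper's protocol: any k shares reconstruct the key, while fewer reveal nothing.

```python
import random

# Shamir (k-of-n) secret sharing over GF(P), P prime.
P = 2**127 - 1  # a Mersenne prime used as the field modulus

def split(secret, n, k):
    # random polynomial of degree k-1 with f(0) = secret
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

In the paper's setting, the collector node would play the role of gathering any k available shares from neighbours and running the reconstruction.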
  • Mining User Behaviour Patterns Using Mobile Call Detail Record
    Mrs. G. Indirani, K. Selvakumar and V. Kaveri, Annamalai University, India
    Nowadays, users request various kinds of services from mobile devices at any time and anywhere, owing to advances in wireless and web technologies. Making information effectively available to users is therefore an important issue in mobile computing systems, and detecting user behavior can greatly benefit system performance and quality of service. Based on changing user location behavior patterns, mobile service systems can effectively mine a specific request from abundant data. In this paper, user location behavior patterns are studied as the problem of mining matching mobile access patterns by joining four kinds of characteristics U, L, T and S, where U is the mobile user, L is the location, T is the dwell time in the timestamp and S is the service request. These join operations are defined by introducing standard graph-matching algorithms along with the primitives of a database management system, namely grouping, sorting and joining. Finally, performance studies are conducted to show that, in terms of execution efficiency and scalability, the proposed procedures produce excellent results.
  • Fiducial Points Detection Using SVM Linear Classifiers
    Luiz Eduardo S. e Silva, Kenny V. dos Santos, Pedro Donadio de T. Junior and Waldir S. S. Junior, Federal University of Amazonas, Brazil
    Currently, there is growing interest in the scientific and industrial communities in methods that solve the problem of fiducial point detection in human faces. Some methods use the SVM for classification, but we observed that some formulations of the underlying optimization problems have not been discussed. In this article, we investigate the performance of the C-SVC mathematical formulation when applied to a fiducial point detection system. Furthermore, we explore new parameters for training the proposed system. Its performance is evaluated on a fiducial point detection problem, and the results demonstrate that the method is competitive.
  • Improving Prosodic Information For HMM-Based Synthesized Vietnamese Aviation Announcements
    Tuan Dinh Anh1, Hung Phan Dang1, Quan Tran Lam2 and Thang Vu Tat1, 1Institute of Information Technology, 2Vietnam Aviation Institute - Vietnam Airlines, Vietnam
    In most languages, the quality of a text-to-speech system is directly related to the diversity of the language's domains. Each domain, such as sports or entertainment, has its own grammatical structure that determines the automatic pronunciation of the text-to-speech system, and prosodic information plays a crucial role in analyzing the grammatical structure of each domain. In this research, we analyze the characteristics of prosodic information in the aviation domain in Vietnamese, represented by a set of common airline announcements.
  • Android Mapping Application
    Abdalwhab Bakheet, Ahmed Abd Almahmoud and Wigdan Ahmed, University of Khartoum, Sudan
    Location-aware and mapping applications have gone from a desirable feature to an essential part of any smartphone. Whether a user is checking into a social network, looking for a pharmacy in the middle of the night, or stranded somewhere and in need of help, the key is always the same: location. In this project, an Android mapping application is developed. The application can display a map of the whole world while online or a pre-downloaded map while offline, track the user's location, display a compass to determine north, send the user's location to others by SMS in case of emergency, receive and interpret a location from such a message and display it on the map, and notify the user when a location is received.

    The application was developed using an agile methodology. It met its objectives and passed 91% of the final system test. Some limitations were noted: the application needs further testing, and it can be adapted for a particular company or university by using their own maps or by editing the maps in OSM (OpenStreetMap).
  • Review Methods For Multi-Document Text Summarization
    Soniya Patil and Ashish T. Bhole, S.S.B.T.'s College of Engg. & Tech., Jalgaon (M.S.)
    In today's busy schedules, everybody expects to get information in a short but meaningful form; long documents take too much time to read, which is why document summarization is needed. Work has been done on single-document summarization, but the need for multi-document summarization is growing. Existing methods for multi-document summaries, such as cluster-based, graph-based and fuzzy-based approaches, are improving, but the statistical approach based on algebraic methods is still a topic of research. Effort must be made to improve this approach by addressing the limitations of LSA (Latent Semantic Analysis). First, it reads only the input text and does not use world knowledge; for example, it does not recognize 'woman' and 'lady' as synonyms. Second, it does not consider word order: 'I will deliver to you tomorrow', 'Deliver I will to you' and 'Tomorrow I will deliver to you' are different clauses that may convey the same meaning in different parts of a document. Lastly, all the approaches produce their output as plain text, so we would try to present the output in tabular form. The expected result is to overcome these limitations and thereby improve the LSA method.
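The word-order limitation mentioned above is easy to demonstrate (this small example is illustrative, not from the paper): a bag-of-words representation of the kind LSA builds its term-document matrix from assigns identical vectors to clauses that differ only in word order.

```python
# Bag-of-words representations ignore word order entirely.
from collections import Counter

def bag_of_words(sentence):
    return Counter(sentence.lower().replace('.', '').split())

a = bag_of_words("I will deliver to you tomorrow.")
b = bag_of_words("Tomorrow I will deliver to you.")
# a and b are indistinguishable, even though the sentences differ in order.
```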
  • Estimating The Effort Of Mobile Application Development
    Laudson Silva de Souza and Gibeon Soares de Aquino Jr, Federal University of Rio Grande do Norte, Brazil
    The rise of mobile technologies around the world, such as smartphones and tablets connected to mobile networks, is changing old habits and creating new ways for society to access information and interact with computer systems. Traditional information systems are thus undergoing a process of adaptation to this new computing context. However, it is important to note that the characteristics of this new context are different: there are new features and, therefore, new possibilities, as well as restrictions that did not exist before. Systems developed for this environment have different requirements and characteristics from traditional information systems. For this reason, the current knowledge about planning and building systems needs to be reassessed for this new environment. One area in particular that demands such adaptation is software estimation. Estimation processes are generally based on characteristics of the systems, attempting to quantify the complexity of implementing them. Hence, the main objective of this paper is to present a proposal for an estimation model for mobile applications and to discuss the applicability of traditional estimation models to developing systems in the context of mobile computing. Throughout the paper, existing estimation methods are analyzed, specific characteristics of systems for mobile devices are identified and, finally, an adaptation of an existing estimation method for this area is proposed.
  • RGB Color Preserving Cryptography And Its Applications In Secure Data Transmission
    Anchal A. Solio and S. A. Ladhake, Sipna College of Engineering & Technology, India
    Maintaining the secrecy and confidentiality of images is a vibrant area of research, with two different approaches being followed: the first encrypts the images with encryption algorithms using keys, while the other hides the data with a data-hiding algorithm to maintain the images' secrecy. A content owner encrypts the original image using an encryption key, and a data hider can embed additional data into the encrypted image using a data-hiding key even though he does not know the original content. Given an encrypted image containing additional data, a receiver may first decrypt it with the encryption key and then extract the embedded data and recover the original image with the data-hiding key.
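The encrypt-then-embed flow described above can be sketched at toy scale. This is not the paper's algorithm: pixels are flattened bytes, the encryption key seeds a keystream for XOR encryption, and the data hider simply writes payload bits into least-significant bits of the encrypted pixels.

```python
import random

# Toy separable encrypt-then-embed sketch on a flat list of 0-255 pixel bytes.
def keystream(key, n):
    rng = random.Random(key)          # key-seeded, reproducible keystream
    return [rng.randrange(256) for _ in range(n)]

def xor_encrypt(pixels, key):
    # XOR is its own inverse, so the same call also decrypts
    return [p ^ k for p, k in zip(pixels, keystream(key, len(pixels)))]

def embed(pixels, bits):
    # data hiding: overwrite the LSB of the first len(bits) pixels
    return [(p & ~1) | b for p, b in zip(pixels, bits)] + pixels[len(bits):]

def extract(pixels, n_bits):
    return [p & 1 for p in pixels[:n_bits]]
```

Note the embedding never needs the plaintext, and decrypting the marked image recovers the original up to the flipped LSBs, mirroring the roles of the two keys in the abstract.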
  • A Blind Robust Watermarking Scheme Based On SVD And Circulant Matrices
    Oussama Noui and Lemnouar Noui, University of Batna, Algeria
    Multimedia security has been the focus of considerable research activity because of its wide application area. The major technology for achieving copyright protection, content authentication, access control and multimedia security is watermarking: the process of embedding data into a multimedia element such as an image or audio, from which the embedded data can later be extracted or detected for different purposes. In this work, a blind watermarking algorithm based on SVD and circulant matrices is presented. Every circulant matrix is associated with a matrix for which the SVD decomposition coincides with the spectral decomposition, which leads to an improvement on the Chandra algorithm [1]. Our presentation includes a discussion of data-hiding capacity, watermark transparency and robustness against a wide range of common image-processing attacks.
  • A Real-Time H.264/AVC Encoder & Decoder With Vertical Mode For Intra Frame And Three Step Search Algorithm For P-Frame
    Mohammed H. Al-Jammas1 and Mrs. Noor N. Hamdoon2, 1University of Mosul, 2College of Engineering, Iraq
    Video coding standards are developed to satisfy the requirements of applications with various purposes: better picture quality, higher coding efficiency and more error robustness. The international video coding standard H.264/AVC aims at significant improvements in coding efficiency and error robustness compared with previous standards such as MPEG-2, H.261 and H.263. A video stream needs to be processed through several steps to encode and decode it so that it is compressed efficiently with the limited hardware and software resources available. Each step can be implemented with different algorithms, and the advantages and disadvantages of the available algorithms should be known in order to implement a codec that meets the final requirements. The purpose of this project is to implement all the basic building blocks of an H.264 video encoder and decoder; its significance lies in the inclusion of all components required to encode and decode a video in MATLAB.
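The three-step search named in the title is a classic fast block-matching scheme: starting from a coarse step (typically 4), it evaluates the nine displacements around the current best match, keeps the cheapest, and halves the step until it reaches 1. The paper's implementation is in MATLAB on real frames; this Python sketch (with a plain SAD cost and no bounds checking, for brevity) only illustrates the search pattern.

```python
# Three-step search (TSS) motion estimation over an n x n block.
def sad(ref, cur, bx, by, dx, dy, n):
    """Sum of absolute differences between the n x n block of `cur` at
    (bx, by) and the block of `ref` displaced by (dx, dy)."""
    total = 0
    for y in range(n):
        for x in range(n):
            total += abs(cur[by + y][bx + x] - ref[by + dy + y][bx + dx + x])
    return total

def three_step_search(ref, cur, bx, by, n, start_step=4):
    best = (0, 0)
    step = start_step
    while step >= 1:
        # nine candidates: the current best plus its 8 neighbours at `step`
        candidates = [(best[0] + sx * step, best[1] + sy * step)
                      for sx in (-1, 0, 1) for sy in (-1, 0, 1)]
        best = min(candidates,
                   key=lambda d: sad(ref, cur, bx, by, d[0], d[1], n))
        step //= 2
    return best  # estimated motion vector (dx, dy)
```

TSS probes at most 25 positions versus 81 for a full ±4 search, which is the efficiency trade-off that makes it attractive for P-frame estimation.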