Accepted Papers

  • Touchscreen Using Web Camera
    Kuntal B. Adak, Adarsh N. Singh, Abhilash B. Kamble, University of Pune, India
    ABSTRACT
    In this paper we present a web-camera-based touchscreen system which uses a simple technique to detect and locate a finger. We use a camera and a regular screen to achieve our goal. By capturing video and calculating the position of the finger on the screen, we can determine the touch position and perform an action at that location. Our method is very easy and simple to implement, and our system requirements are less expensive compared to other techniques.
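    As a rough illustration of the kind of pipeline the abstract describes, the sketch below locates a fingertip in a webcam frame via a skin-colour threshold and a largest-contour search, then maps it to screen coordinates; the colour range, screen resolution and mapping are assumptions made for illustration, not the authors' implementation.
    ```python
    # Illustrative fingertip localisation from a webcam (not the authors' method).
    import cv2
    import numpy as np

    SCREEN_W, SCREEN_H = 1920, 1080  # assumed target screen resolution

    cap = cv2.VideoCapture(0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        # Rough skin-colour range in HSV (assumption; needs per-setup calibration).
        mask = cv2.inRange(hsv, (0, 40, 60), (25, 180, 255))
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if contours:
            hand = max(contours, key=cv2.contourArea)
            # Take the topmost contour point as the fingertip.
            x, y = hand[hand[:, :, 1].argmin()][0]
            x, y = int(x), int(y)
            # Map camera coordinates to screen coordinates (the "touch" position).
            sx = int(x * SCREEN_W / frame.shape[1])
            sy = int(y * SCREEN_H / frame.shape[0])
            cv2.circle(frame, (x, y), 8, (0, 255, 0), -1)
            print("touch at", sx, sy)
        cv2.imshow("finger", frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    cap.release()
    cv2.destroyAllWindows()
    ```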
  • Comparison of Speckle Spatial Filters on Radarsat-2 Dataset
    Lakshminath Singanapudi and L. Anajaneyulu
    ABSTRACT
    Speckle is a signal-dependent granular noise inherent in all coherent imaging systems that visually impairs the appearance of images. It affects the performance of automated scene analysis and information extraction techniques, so despeckling is of crucial importance for a number of applications. Such a post-processing technique should be carefully designed to avoid spoiling useful information such as the local mean of backscatter, point targets, linear features and textures. No matter which method is used to reduce the effect of speckle noise, the ideal speckle reduction method preserves radiometric information, the edges between different areas, and spatial signal variability, i.e., textural information. In this paper, spatial speckle reduction filters such as the Frost filter, Lee filter and Gamma MAP filter are applied to Radarsat-2 data and compared using speckle suppression indices and statistical characteristics.
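    For reference, a minimal local-statistics Lee filter of the kind compared in the paper can be sketched as below; the window size and the global noise-variance estimate are simplifying assumptions, not the parameters used on the Radarsat-2 data.
    ```python
    # Minimal Lee speckle filter sketch using local statistics.
    import numpy as np
    from scipy.ndimage import uniform_filter

    def lee_filter(img, win=7):
        img = img.astype(np.float64)
        mean = uniform_filter(img, win)              # local mean
        sq_mean = uniform_filter(img * img, win)
        var = sq_mean - mean * mean                  # local variance
        # Global noise-variance estimate (simple assumption: mean local variance).
        noise_var = np.mean(var)
        weight = var / (var + noise_var + 1e-12)     # adaptive weighting
        return mean + weight * (img - mean)
    ```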
  • Alexander Fractional Integral Filtering of Wavelet Coefficients for Image Denoising
    Atul Kumar Verma and Barjinder Singh Saini, Dr. B.R Ambedkar National Institute of Technology, India
    ABSTRACT
    The present paper proposes an efficient denoising algorithm which works well for images corrupted with Gaussian and speckle noise. The denoising algorithm utilizes the Alexander fractional integral filter, which works by constructing fractional mask windows computed using the Alexander polynomial. Prior to the application of the designed filter, the corrupted image is decomposed using the symlet wavelet. Only the horizontal, vertical and diagonal components are denoised using the Alexander integral filter. A significant increase in reconstruction quality was noticed when the approach was applied to the wavelet-decomposed image rather than directly to the noisy image. Quantitatively, the results are evaluated using the peak signal-to-noise ratio (PSNR), which was 30.8059 on average for images corrupted with Gaussian noise and 36.52 for images corrupted with speckle noise, clearly outperforming existing methods.
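    A minimal sketch of the wavelet-domain pipeline (symlet decomposition, denoising only the detail sub-bands, reconstruction) is given below; since the Alexander fractional integral mask is the paper's contribution, a generic soft-threshold stand-in only marks where it would be applied.
    ```python
    # Wavelet-domain denoising pipeline sketch (stand-in for the paper's filter).
    import numpy as np
    import pywt

    def denoise_details(img, wavelet="sym4", thr=20.0):
        # Single-level 2-D symlet decomposition.
        cA, (cH, cV, cD) = pywt.dwt2(img.astype(np.float64), wavelet)
        # Placeholder for the Alexander fractional integral filter (assumption):
        soft = lambda c: np.sign(c) * np.maximum(np.abs(c) - thr, 0.0)
        cH, cV, cD = soft(cH), soft(cV), soft(cD)   # approximation cA left untouched
        return pywt.idwt2((cA, (cH, cV, cD)), wavelet)
    ```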
  • A Proposed Model of Graph Based Chain Code Method for Identifying Printed & Handwritten Bengali Character
    Arindam Pramanik and Sreeparna Banerjee, West Bengal University of Technology, India
    ABSTRACT
    In this paper we present an approach to the handwritten character recognition problem. The problem of Optical Character Recognition (OCR) is as follows: the input is a scanned image of printed or handwritten text and the output is a computer-readable version of the input content. A lot of research work has been done on OCR worldwide for different languages. Though Bengali is the second most popular script and language in the Indian subcontinent and the fifth most popular language in the world, comparatively few research publications exist for it. The application areas of character recognition are increasing remarkably. Here we propose a new approach to handwritten Bengali character recognition using a graph-based chain code method. Our aim is to achieve the maximum recognition rate with minimum classification time.
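    As an illustration of the chain-code stage, the sketch below extracts an 8-direction Freeman chain code from a binarised character contour; the graph-based matching proposed in the paper is not reproduced, and the helper below is a generic assumption.
    ```python
    # Freeman chain-code extraction from a binarised character (generic sketch).
    import cv2
    import numpy as np

    # Map (dx, dy) between successive boundary pixels to Freeman codes 0..7.
    DIRS = {(1, 0): 0, (1, -1): 1, (0, -1): 2, (-1, -1): 3,
            (-1, 0): 4, (-1, 1): 5, (0, 1): 6, (1, 1): 7}

    def chain_code(binary_char):
        contours, _ = cv2.findContours(binary_char, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_NONE)
        if not contours:
            return []
        pts = max(contours, key=cv2.contourArea).squeeze(1)  # N x 2 boundary pixels
        codes = []
        for (x0, y0), (x1, y1) in zip(pts, np.roll(pts, -1, axis=0)):
            codes.append(DIRS.get((int(x1 - x0), int(y1 - y0)), -1))
        return codes
    ```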
  • Improved Stereo Vision Algorithm and its FPGA Implementation Using Cost Aggregation and Disparity Inheritance
    Binoy Bhanujan P and R. K. Sharma, National Institute of Technology, India
    ABSTRACT
    Stereo vision is an actively researched topic in image processing. The goal is to recover quantitative depth information from a set of input images, based on the visual disparity between corresponding points. Algorithms used for stereo matching are mainly pixel-based. In this paper a stereo vision algorithm using Cost Aggregation (CA), occlusion detection and disparity inheritance refinement is proposed. CA employs the Winner-Takes-All (WTA) strategy using non-parametric stereo correlation methods, such as the sum of absolute differences and the mini-census transform, to extract the initial disparity maps. The disparity maps produced help to identify the occluded points, which act as the control points for the disparity refinement algorithm. Disparity inheritance is a refinement method which further improves the accuracy and robustness of stereo correspondence. The same algorithm is implemented on an FPGA, evaluated using the Middlebury stereo dataset, and the results are found to be competitive with existing algorithms.
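    A minimal software sketch of SAD cost aggregation with winner-takes-all disparity selection is shown below; the mini-census transform, occlusion detection and disparity-inheritance refinement described above, as well as the FPGA implementation, are not reproduced.
    ```python
    # Block-matching disparity via SAD cost aggregation and WTA selection.
    import numpy as np
    from scipy.ndimage import uniform_filter

    def sad_wta_disparity(left, right, max_disp=64, win=9):
        left = left.astype(np.float64)
        right = right.astype(np.float64)
        h, w = left.shape
        cost = np.full((max_disp, h, w), np.inf)
        for d in range(max_disp):
            diff = np.abs(left[:, d:] - right[:, :w - d or None])
            # Aggregate absolute differences over a win x win window.
            cost[d, :, d:] = uniform_filter(diff, win)
        # WTA: pick the disparity with the minimum aggregated cost per pixel.
        return np.argmin(cost, axis=0)
    ```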
  • Single Image Fog Removal Based on Fusion Strategy
    V. Thulasika and A. Ramanan, University of Jaffna, Sri Lanka
    ABSTRACT
    Images of outdoor scenes are degraded by absorption and scattering by suspended particles and water droplets in the atmosphere. The light coming from a scene towards the camera is attenuated by fog and blended with the airlight, which adds more whiteness to the scene. Fog removal is highly desired in computer vision applications: removing fog from images can significantly increase the visibility of the scene and makes images more visually pleasing. In this paper, we propose a method that can handle both homogeneous and heterogeneous fog and has been tested on several types of synthetic and camera images. We formulate the restoration problem based on a fusion strategy that combines two images derived from a single foggy image. One image is derived using a contrast-based method while the other is derived using a statistics-based approach. These derived images are then weighted by a specific weight map to restore the image. We have performed a qualitative and quantitative evaluation on 60 images, using the mean square error and peak signal-to-noise ratio as the performance metrics to compare our technique with state-of-the-art algorithms. The proposed technique is simple and shows comparable or even slightly better results than the state-of-the-art algorithms used for defogging a single image.
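    As a rough illustration of the fusion idea, the sketch below blends two images derived from a single foggy input using per-pixel weight maps; the CLAHE and gamma-corrected derivatives and the Laplacian-based weights are stand-ins chosen for illustration, not the paper's contrast-based and statistical derivations.
    ```python
    # Weighted fusion of two derived images (generic illustration only).
    import cv2
    import numpy as np

    def fuse(foggy_gray):
        clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
        d1 = clahe.apply(foggy_gray)                        # contrast-based stand-in
        d2 = np.uint8(255 * (foggy_gray / 255.0) ** 1.5)    # statistics-based stand-in
        # Weight maps: favour the derivative with higher local contrast (assumption).
        w1 = cv2.Laplacian(d1, cv2.CV_64F) ** 2 + 1e-6
        w2 = cv2.Laplacian(d2, cv2.CV_64F) ** 2 + 1e-6
        fused = (w1 * d1 + w2 * d2) / (w1 + w2)
        return np.uint8(np.clip(fused, 0, 255))
    ```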
  • Optic Disc Boundary Detection in Diabetic Retinopathy
    Arpita I Patil and Shantala Giraddi, BVB College of Engineering and Technology, India
    ABSTRACT
    Microaneurysms and exudates are the key indicators of Diabetic Retinopathy that can potentially cause retinal damage. The early detection of exudates and their grading are important to prevent further retinal damage. Detecting the optic disc in the retinal image is a necessary step in the detection of diabetic retinopathy because the optic disc shares the characteristics of exudates. In this paper we detect the optic disc using an Active Contour Model, and then quantify whether the optic disc detected by this method is accurate. Many algorithms have been proposed and evaluated for the detection of the optic disc, but most of them depend on the fact that the optic disc is the brightest part among all the components of the retina. The aim of this work is to perform accurate detection of the optic disc and to verify whether the detected optic disc is really an optic disc using an SVM classifier.
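    A minimal sketch of active-contour optic-disc boundary detection, initialised as a circle around the brightest region, is given below; the parameters are assumptions and the SVM verification stage is not shown.
    ```python
    # Active-contour (snake) optic-disc boundary sketch on a retinal channel.
    import numpy as np
    from skimage import filters, segmentation

    def optic_disc_contour(green_channel, radius=60):
        smooth = filters.gaussian(green_channel, sigma=3)
        # Initialise the snake as a circle around the brightest pixel (assumption).
        r0, c0 = np.unravel_index(np.argmax(smooth), smooth.shape)
        theta = np.linspace(0, 2 * np.pi, 200)
        init = np.column_stack([r0 + radius * np.sin(theta),
                                c0 + radius * np.cos(theta)])
        # Evolve the contour toward the disc boundary; parameters are illustrative.
        return segmentation.active_contour(smooth, init, alpha=0.015, beta=10, gamma=0.001)
    ```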
  • Enhancement and Segmentation of Historical Records
    Soumya A¹ and G Hemantha Kumar², ¹R V College of Engineering, India and ²University of Mysore, India
    ABSTRACT
    Document Analysis and Recognition (DAR) aims to automatically extract the information in a document and to aid human comprehension. The automatic processing of degraded historical documents is an application of the document image analysis field that is confronted with many difficulties due to storage conditions and the complexity of the script. The main interest of enhancement of historical documents is to remove undesirable artifacts that appear in the background and to highlight the foreground, so as to enable automatic recognition of documents with high accuracy. This paper addresses preprocessing and segmentation of ancient scripts, as an initial step towards automating the task of an epigraphist in reading and deciphering inscriptions. Preprocessing involves enhancement of degraded ancient document images, achieved through four different spatial filtering methods for smoothing or sharpening, namely Median, Gaussian blur, Mean and Bilateral filters, with different mask sizes. This is followed by binarization of the enhanced image to highlight the foreground information, using Otsu's thresholding algorithm. In the second phase, segmentation is carried out using Drop Fall and Water Reservoir approaches to obtain sampled characters, which can be used in later stages of OCR. The system showed good results when tested on nearly 150 samples of degraded epigraphic images of varying quality, giving better enhanced output for a 4x4 mask size for the Median filter, a 2x2 mask size for Gaussian blur, and a 4x4 mask size for the Mean and Bilateral filters. The system can effectively sample characters from enhanced images, giving a segmentation rate of 85%-90% for the Drop Fall technique and 85%-90% for the Water Reservoir technique.
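    As an illustration of the enhancement and binarization steps, the sketch below applies median smoothing followed by Otsu thresholding; the kernel size is an assumption (OpenCV's median blur requires an odd size), and the Drop Fall / Water Reservoir character segmentation is not reproduced.
    ```python
    # Enhancement and binarization sketch: median filtering then Otsu thresholding.
    import cv2

    def enhance_and_binarise(gray_doc, ksize=5):
        # Median spatial filtering to suppress background degradation.
        smoothed = cv2.medianBlur(gray_doc, ksize)
        # Otsu's method picks the global threshold that separates foreground text.
        _, binary = cv2.threshold(smoothed, 0, 255,
                                  cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        return binary
    ```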