IJCTA-Volume 7 Issue 1 / January-February 2016
S.No
Title/Author Name
Page No
1
A Survey on Influence Maximization on Definite Users in Social Networks
-Vidya A Khairnar,N.K.Zalte
Abstract
Influence maximization is introduced to maximize the profit of viral marketing in social networks. Its weakness is that it does not distinguish specific (target) users from others, even when some items are relevant only to those specific users. For such items, a better strategy is to focus on maximizing the influence on the target users. In this paper, we formulate an influence maximization problem as query processing to distinguish target users from others. We show that this query processing problem is NP-hard and that its objective function is submodular. We propose an expectation model for estimating the objective function and a fast greedy-based approximation method that uses it. For the expectation model, we investigate the correlation of paths between users. For the greedy method, we work out an efficient incremental update of the marginal gain of our objective function. We conduct experiments to evaluate the proposed method on real datasets and compare the results with those of existing methods adapted to the problem.
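The greedy approximation the abstract describes repeatedly adds the node with the largest marginal gain in expected spread. As a minimal sketch only (plain Monte-Carlo greedy under the independent cascade model, not the paper's expectation model or its targeted-user variant; the adjacency-list encoding, propagation probability `p`, and trial count are illustrative assumptions):

```python
import random

def simulate_ic(graph, seeds, p=0.1, trials=200):
    """Estimate the expected spread of `seeds` under the
    Independent Cascade model by Monte-Carlo simulation."""
    total = 0
    for _ in range(trials):
        active, frontier = set(seeds), list(seeds)
        while frontier:
            node = frontier.pop()
            for nbr in graph.get(node, []):
                # Each newly active node gets one chance to
                # activate each inactive neighbour with prob. p.
                if nbr not in active and random.random() < p:
                    active.add(nbr)
                    frontier.append(nbr)
        total += len(active)
    return total / trials

def greedy_im(graph, k, p=0.1, trials=200):
    """Pick k seeds, each maximizing the estimated marginal gain."""
    seeds = []
    for _ in range(k):
        base = simulate_ic(graph, seeds, p, trials) if seeds else 0.0
        best, best_gain = None, -1.0
        for v in graph:
            if v in seeds:
                continue
            gain = simulate_ic(graph, seeds + [v], p, trials) - base
            if gain > best_gain:
                best, best_gain = v, gain
        seeds.append(best)
    return seeds
```

With `p=1.0` the cascade is deterministic, which makes the behaviour easy to check on a tiny graph.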
Index Terms — Graph algorithms, influence maximization, independent cascade model, social networks
1-4
PDF
2
Workflow Planning and Execution - Final Results
-
Ravikant Dewan,Prabhakar Sharma
Abstract
Abstract workflow generation is the process of selecting and configuring application components to form an abstract workflow. The application components are chosen by examining the specification of their capabilities and checking whether they can generate the desired data products. They are configured by assigning input files that either exist or may be generated by other application components. The abstract workflow specifies the order in which the components must be executed. Concrete workflow generation is the process of selecting the specific resources, files, and additional jobs required to form a concrete workflow that can be executed in the Grid environment. To generate a concrete workflow, each component in the abstract workflow is turned into an executable job by specifying the locations of the physical files of the component and its data, as well as the resources assigned to the component in the execution environment. Additional jobs may be included in the concrete workflow, for example jobs that transfer files to the locations where resources are available to execute the application components.
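The execution-order constraint an abstract workflow imposes is exactly a topological order of its dependency DAG. A minimal sketch, not the paper's planner; the job names and the `{job: [dependencies]}` encoding are invented for illustration:

```python
from collections import deque

def topological_order(dag):
    """Return an execution order for a workflow given as
    {job: [jobs it depends on]}; raise on a cyclic dependency."""
    indegree = {job: len(deps) for job, deps in dag.items()}
    dependents = {job: [] for job in dag}
    for job, deps in dag.items():
        for dep in deps:
            dependents[dep].append(job)
    # Jobs with no unmet dependencies are ready to run.
    ready = deque(j for j, d in indegree.items() if d == 0)
    order = []
    while ready:
        job = ready.popleft()
        order.append(job)
        for nxt in dependents[job]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                ready.append(nxt)
    if len(order) != len(dag):
        raise ValueError("workflow has a cyclic dependency")
    return order
```

Any order it returns respects every edge, so stage-in jobs always precede the components that consume their files.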
Keywords: Workflow, DAGs, Grid, LIGO, LISA
5-8
PDF
3
R-tree Based Filtering Algorithms for Location-Aware Based System
-
Priya H.Jagtap,Prof.N.K.Zalte
Abstract
Location-based services (LBS) have attracted considerable attention from both industrial and academic communities. Existing LBS systems use a pull model, also called the user-initiated model, in which a user issues a query to a server that responds with location-aware answers. A push model, also called the server-initiated model, is becoming an essential computing model in next-generation LBS to provide users with instant replies. In the push model, subscribers register spatio-textual subscriptions that capture their interests, and a location-aware publish/subscribe system must return results with high performance. Designing such a system raises multiple research problems. We use efficient filtering algorithms that find candidate nodes, together with effective pruning techniques, to achieve high performance and provide users with instant replies. We also propose an algorithm called FlexRPSet, which provides one extra parameter K that lets users trade off result size against efficiency, and we adopt an incremental approach so that users can make this trade-off conveniently. FlexRPSet produces fewer representative patterns than RPLocal and MinRPset, an efficient algorithm designed to improve scalability.
9-12
PDF
4
Active Contour Model for object segmentation: A Brief Review
-
Snehal.V.Talikoti,Prof.J.V.Shinde
Abstract
Active contours are computer-generated curves that move within an image to find object boundaries. They are often used in computer vision and image analysis to identify and locate objects and to describe their shape. Region-based methods find an object in an image using regions rather than edges. A region-based ACM segments one or more image regions that are visually similar to an object of interest, called the prior or trained dataset, which contains the object to be segmented. The prior and the evolving region are described by the probability density function (PDF) of a photometric feature together with shape. The proposed approach uses the probability density functions of the inhomogeneous regions as well as the shapes of the objects to be segmented.
Keywords: probability density function, Active contour model.
13-16
PDF
5
Image Segmentation for Document Image Binarization: A Brief Review
-
Kirti.S.Datir,Prof.J.V.Shinde
Abstract
Binarization generates a binary image from a document image. Document image binarization has been studied for many years, and many binarization algorithms have been proposed for degraded document images. It is widely used to restore old handwritten and machine-printed documents, yet recovering a degraded document remains a tedious job: such documents are heavily scratched and suffer from noise and degradation, so there is considerable scope for improvement. Image segmentation is a method commonly used in image processing, and thresholding is an important pre-processing step for improving the quality of a degraded image. Separating the foreground text from the background of different document images is a tricky task. A new binarization method using image segmentation is proposed for better results.
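A classical baseline for the thresholding step mentioned above is Otsu's global method, which picks the gray level that maximizes between-class variance. This is a sketch of that standard technique, not the paper's proposed segmentation-based method; the flat pixel-list representation is an illustrative simplification:

```python
def otsu_threshold(pixels):
    """Global Otsu threshold for grayscale values in 0..255:
    choose the level maximizing between-class variance."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w_bg, sum_bg = 0, 0.0
    for t in range(256):
        w_bg += hist[t]            # background = pixels <= t
        if w_bg == 0:
            continue
        w_fg = total - w_bg
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (total_sum - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

def binarize(pixels, t):
    """Map background (<= t) to 0 and foreground to 255."""
    return [0 if p <= t else 255 for p in pixels]
```

On a cleanly bimodal histogram the threshold lands between the two modes, which is exactly the foreground/background separation the abstract discusses.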
Keywords: Document image binarization, Color-to-gray image conversion, Thresholding, Historical document analysis.
17-20
PDF
6
A Review on Robust Language Identification
-
Snehal.V.Gite,Prof.J.V.Shinde
Abstract
Automatic language identification of audio data has become an important pre-processing step for speech and speaker recognition. Identifying the language of short utterances is a critical challenge for achieving accurate performance. In this paper, we review language identification methods. We first discuss phonotactic systems that use phone recognition and language modelling (PRLM). The second category of LID techniques attempts to classify languages by using Gaussian mixture models (GMMs) to capture the acoustic properties of speech. To improve accuracy on short utterances, the proposed system performs robust language classification by transforming the spoken words into a low-dimensional i-vector representation, on which language classification methods are applied. The classification model uses a universal background model for better performance.
Keywords: Automatic language identification, PRLM, GMM, i-vector
21-24
PDF
7
Reversible Watermarking for Relational Data: A Brief Review
-
Priyanka R.Gadiya,Prof.P.A.Kale
Abstract
Watermarking is a method of embedding data in such a form that it is not readily available to any user other than the authenticated user. Embedding the watermark may cause certain alterations of the underlying data. Reversible watermarking advances this idea by guaranteeing data quality along with data recovery. Watermarking for relational databases has been under research for many years, and many watermarking algorithms have been proposed for embedding; watermarking schemes are also widely used to secure image, audio, and video data against manipulation and to assert ownership rights. Still, such techniques are not robust against malicious attacks, which may cause alterations, deletions, or false additions that degrade the quality and usefulness of the data. There is ample scope to improve current watermarking schemes and to provide features such as selective watermarking, which watermarks particular attributes according to their role in knowledge analysis. A new reversible and robust watermarking technique for relational databases is proposed for better results.
Keywords: Reversible watermarking, Robust watermarking, Selective watermarking
25-28
PDF
8
On the Dynamics of the Nonlinear Rational Difference Equation
-
E M Elabbasy,M Y Barsoum,H S Alshawe
Abstract
The main objective of this paper is to study the qualitative behavior of a class of nonlinear rational difference equations. We study the local stability, periodicity, oscillation, boundedness, and global stability of the positive solutions of the equation. Numerical examples illustrate the importance of the results.
29-39
PDF
9
Review on: Data Security Policies Inference on Content Sharing Sites
-
Prachiti S.Pimple,Prof.B.R.Nandwalkar
Abstract
Nowadays social media has become extremely popular, allowing us to communicate with many people. With the creation of social networking sites such as LinkedIn and Facebook, individuals are given opportunities to meet new people and friends across the world. Users of social-networking services share a large volume of personal information with a large number of "friends." When users share large volumes of images with many people, this improved technology can lead to privacy violations. This privacy needs to be protected in order to improve user satisfaction. The goal of this survey is to provide a complete review of privacy policy methods that improve the security of information shared on social media sites.
Keywords: Content sharing sites, Social media, privacy.
40-43
PDF
10
A Survey on: Document Recommendation Using Keyword Extraction for Meeting Analysis
-
Kumodini V.Tate,Bhushan R.Nandwalkar
Abstract
Bulky documents cover most of the information about a topic; once keywords are extracted from such a document, they can be used to retrieve the entire document. However, even a small fragment of conversation contains different words, possibly related to several topics, and using an automatic speech recognition (ASR) system introduces faults among them. It is therefore challenging to infer exactly the information needs of the conversation participants. We first propose an algorithm to extract keywords from the output of an ASR system; it makes use of topic modeling techniques and of a submodular reward function that favors diversity in the keyword set, to match the possible range of topics and reduce ASR noise. The method then builds several topically separated queries from this keyword set, in order to maximize the chance of making at least one relevant recommendation when these queries are used to search the English Wikipedia. Experiments use the Fisher, AMI, and ELEA conversational corpora.
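A submodular reward that favors diversity can be sketched as greedy selection under a square-root topic-coverage function, whose diminishing returns make covering a new topic worth more than piling onto an already-covered one. The topic weights and the exact reward form below are illustrative assumptions, not the paper's model:

```python
import math

def diverse_keywords(word_topics, k):
    """Greedily pick k words maximizing sum_z sqrt(coverage_z),
    where coverage_z is the total weight selected for topic z.
    word_topics maps word -> {topic: weight}."""
    coverage = {}

    def gain(word):
        # Marginal gain of adding `word` to the current selection.
        g = 0.0
        for z, w in word_topics[word].items():
            old = coverage.get(z, 0.0)
            g += math.sqrt(old + w) - math.sqrt(old)
        return g

    chosen = []
    while len(chosen) < k:
        best = max((w for w in word_topics if w not in chosen), key=gain)
        chosen.append(best)
        for z, w in word_topics[best].items():
            coverage[z] = coverage.get(z, 0.0) + w
    return chosen
```

Two words on the same topic compete: after the first is chosen, the second's gain shrinks, so a word from a fresh topic wins.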
Keywords: Document recommendation, information retrieval, keyword extraction, meeting analysis, topic modeling.
44-48
PDF
11
A Review On Enhancement of Number Plate Recognition Based on Artificial Neural Network
-
Apurva Biswas,Dr.Bhupesh Gaur
Abstract
Due to the diversity of number plates, the recognition process faces problems. To improve number plate recognition, various authors have used neural network models such as the RBF, BP, and SOM neural network models. This paper presents a review of number plate recognition based on different neural network models. Number plate recognition is also made difficult by background clutter and noise, and these recognition problems in turn hinder road security surveillance.
Keywords: ANPR, Neural network, Image processing.
49-52
PDF
12
A Review of Dynamic Texture Clustering Technique for Video Segmentation
-
Ankita Bhadoria,Surendra Dubey
Abstract
Video modeling is a vital research area in the field of video tracking and video motion detection, and motion detection and video tracking are very challenging tasks. The texture features of video are a major part of the analysis in segmentation and tracking, and because of the dynamic nature of texture, clustering and segmentation are very difficult. This paper presents dynamic texture clustering techniques, the problems of current systems, and video texture modeling. One significant limitation of the original dynamic texture is, however, its inability to provide a perceptual decomposition into multiple regions, each of which belongs to a semantically different visual process: for example, a flock of birds flying in front of a water fountain, highway traffic moving in opposite directions, or video containing both smoke and fire. One possibility to address this problem is to apply the dynamic texture model locally, by splitting the video into a collection of localized spatio-temporal patches, fitting the dynamic texture to each patch, and clustering the resulting models. However, this method, along with other recent techniques, lacks some of the attractive properties of the original dynamic texture model.
Keywords: Video modeling, Dynamic feature, Clustering technique, Segmentation
53-59
PDF
13
A Probabilistic Approach for Efficient Web Search Using String Transformation Technique
-
Prof.Nitin Mishra,Anil E.Patil
Abstract
End users interact with an information retrieval system by issuing queries, but forming a successful query is not always feasible, especially for non-technical users. In this work we propose that a user's search should be satisfied in minimum time and that the system should be easy to use and accurate. The main focus of our proposed system is that when a user enters a wrong or incorrect query, the system first tries to fix it and returns the n most likely corrections of the incorrect query. Query reformulation in search aims at addressing the problem of term mismatch. For example, if the query is "TOI" and the document only contains "Times of India", the query and the document do not match well and the document is not ranked high. Query reformulation tries to transform "TOI" into "Times of India" and thereby produce a better match between the query and the document. The task is, given a query, to generate all similar queries from the original query. In this way the system becomes usable and easy to use, since the end user is not required to enter a correct query. The proposed technique is applied to spelling-error correction in queries, query reformulation, and web search. Experimental results on large datasets show that the proposed approach is accurate and effective, improving on existing methods in terms of accuracy and reliability in different contexts.
Keywords: String transformation, Log-linear model, Spelling check, Query reformulation.
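A minimal flavor of the candidate-generation step in string transformation is the classic one-edit neighborhood ranked by corpus frequency. This is a simplified noisy-channel-style sketch, not the paper's log-linear model; the lexicon and its frequencies are invented for illustration:

```python
def edits1(word, alphabet="abcdefghijklmnopqrstuvwxyz"):
    """All strings within one edit: delete, transpose, replace, insert."""
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [a + b[1:] for a, b in splits if b]
    transposes = [a + b[1] + b[0] + b[2:] for a, b in splits if len(b) > 1]
    replaces = [a + c + b[1:] for a, b in splits if b for c in alphabet]
    inserts = [a + c + b for a, b in splits for c in alphabet]
    return set(deletes + transposes + replaces + inserts)

def correct(query, lexicon_freq):
    """Return the most frequent in-lexicon candidate within one edit,
    or the query itself if it is already valid or has no candidates."""
    if query in lexicon_freq:
        return query
    candidates = [w for w in edits1(query) if w in lexicon_freq]
    return max(candidates, key=lexicon_freq.get) if candidates else query
```

A full system would score candidates with learned transformation weights rather than raw frequency, but the generate-then-rank shape is the same.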
60-64
PDF
14
Survey on: Rule Based Phonetic Search for Slavic Surnames
-
Janki.B.Pardeshi,Prof.B.R.Nandwalkar
Abstract
Searching for surnames plays an important role in various NLP (Natural Language Processing) applications. This paper surveys solutions to surname-searching algorithms for the databases of communications service providers, person registries, social networks, and genealogy. It addresses the problem of a phonetic algorithm for surnames in Slovak and its (territorially) neighbouring languages (Czech, Polish, Ukrainian, Russian, German, Hungarian, Yiddish). This solution provides high precision and recall for searching surnames in these languages.
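A rule-based phonetic key of the kind surveyed can be sketched as ordered rewrite rules followed by vowel dropping and letter collapsing. The rules below are illustrative assumptions only; a real algorithm for Slavic surnames uses a far richer, language-specific rule set:

```python
import re

# Illustrative rewrite rules; order matters (longer patterns first).
RULES = [
    ("tsch", "c"), ("sch", "s"), ("cz", "c"), ("sz", "s"),
    ("ch", "h"), ("w", "v"), ("ck", "k"), ("ph", "f"),
]

def phonetic_key(surname):
    """Normalize a surname to a phonetic key: lowercase, apply the
    ordered rewrite rules, drop vowels after the first letter, and
    collapse runs of repeated letters."""
    s = surname.lower()
    for pattern, repl in RULES:
        s = s.replace(pattern, repl)
    s = s[0] + re.sub(r"[aeiouy]", "", s[1:])   # keep leading letter
    return re.sub(r"(.)\1+", r"\1", s)          # collapse repeats
```

Spelling variants that sound alike then hash to the same key, which is what makes the search robust across transliterations.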
Keywords: Phonetic Algorithm, Rules, Natural Language Processing.
65-67
PDF
15
Performance Analysis of Noise Reduction Technologies in Brain MRI Image
-
Sheela.V.K,Dr.S.Suresh Babu
Abstract
Rapid advances in image-based analysis for treating diseases of the internal organs of the human body have made medical image processing an important technique among the available methods of analysis. Among the imaging modalities, magnetic resonance imaging is extensively used for the analysis and diagnosis of diseases in soft tissue. An MRI image provides insight into the anatomical structure of the body, and the accuracy with which a target inside the body can be reconstructed depends on the overall imaging process. The quality of the MRI image determines the effectiveness of feature extraction, analysis, recognition, and quantitative measurement. The primary factors that decrease the visibility of structures are blurring and noise. This leads to the need to remove noise from MRI images as part of the pre-processing stage; noise filters are usually employed for this purpose. This paper analyzes the operation of different noise filters.
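Among the filters such a comparison typically covers, the median filter is the standard remedy for salt-and-pepper noise, since an isolated extreme pixel never survives as the window median. A minimal pure-Python sketch (a real pipeline would use an image-processing library; the list-of-lists image format is an illustrative assumption):

```python
def median_filter(img, size=3):
    """Median filter on a 2D grayscale image given as a list of
    lists; edge pixels use a window clamped to the image bounds."""
    h, w = len(img), len(img[0])
    r = size // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            window = [
                img[yy][xx]
                for yy in range(max(0, y - r), min(h, y + r + 1))
                for xx in range(max(0, x - r), min(w, x + r + 1))
            ]
            window.sort()
            out[y][x] = window[len(window) // 2]
    return out
```

Unlike a mean filter, this is non-linear: one corrupted pixel in a uniform neighbourhood is replaced outright instead of being averaged in.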
Keywords: Magnetic resonance image, Noise, Gaussian noise, Salt-and-pepper noise, Analog filters, Non-linear filters, Wiener filter.
69-74
PDF
16
Optimized Blurred Object Tracking Using ANFIS
-
Rajaprabha,Sugadev Mani
Abstract
Motion blur is a very common issue in real videos, and various factors degrade image quality in a video. In a real video sequence, tracking a severely blurred image is a challenging task. In this paper we propose a method to detect blurred images in a video sequence. Our existing method can handle blurred images but cannot track severely blurred ones, and it has drawbacks such as low speed, unreliability, and complexity. Here we discuss blurred-image tracking in real-time video. The features of an image are extracted using SFTA (Segmentation based Feature Training Algorithm); this extraction improves the performance and robustness of tracking. A set of images is stored in the training set, and using ANFIS (Adaptive Neuro-Fuzzy Inference System) we compare the input blurred image with the training dataset images. The system then identifies and tracks severely blurred images in real video, improving tracking speed and performance.
Keywords: Blur image, SFTA, ANFIS
75-79
PDF
17
Comparing Forensic Blueprint Sketches with Headshots
-
Dipanshu Pathak,Amit Kumar
Abstract
The continuous development of biometric technology has provided criminal investigators additional tools to discover the identity of criminals. In addition to DNA and circumstantial evidence, if a latent fingerprint is found at a crime scene, or a surveillance camera captures an image or footage of a suspect's face, these clues may be used to discover the culprit's identity through automated biometric identification. However, many crimes occur where this information is not available but an eyewitness is present. In these situations a forensic artist often works with the witness or the victim to draw blueprints that depict the facial features of the culprit according to the verbal description; these blueprints are known as forensic sketches. The problem of comparing a forensic sketch to a gallery of headshot images is addressed here using a robust framework called local feature-based discriminant analysis (LFDA).
Keywords: Headshots, Forensic blueprints, Forensic sketches, Local feature-based discriminant analysis, Feature-based approach, Texture descriptors, Feature descriptors.
80-84
PDF
18
Survey on Machine Learning Based Mining Attribute Based Access Control Policies
-
Sonali V.Sapkale,B.R.Nandwalkar

19
A Review of Big Data Processing in Geo-Location Based Cost Minimization
-
Namami Vyas,Deepak Tomar
Abstract
Cost minimization is an important issue in processing big data across geo-located data centers. To minimize cost, a switched network group is used to estimate the nearest data center. Selecting the nearest data center for storage and retrieval, however, produces a huge amount of traffic on the local server, and operating these servers requires large expenditure on maintenance such as power backup and cooling systems. To optimize cost, the processing of data is divided into three sections: task assignment, data placement, and data movement. Task assignment is optimized by different optimization algorithms for the proper selection of a path for data delivery; both independent and hybrid algorithms are used in the optimization process. This paper presents a review of the cost optimization process and the functioning of big data over data centers.
Keywords: Big data, MapReduce, Cost optimization
90-94
PDF
20
Improved Multiobjective Binary Biogeography Based Optimization using CVM for Feature Selection Using Gene Expression Data
-
Sruthika.S,Dr.N.Tajunisha
Abstract
Gene expression data play an important role in the development of efficient cancer diagnosis and classification. The genes identified are subsequently used to classify independent test-set samples. Different feature selection methods are investigated, and the features selected most frequently across all methods are retained. This paper provides gene selection strategies for multi-class classification that reach high prediction accuracies with a small number of selected genes. A multi-objective biogeography-based optimization method is proposed to select the small subset of informative genes relevant to the classification. In the proposed algorithm, the KNN (k-nearest neighbour) algorithm is first used to choose the top 60 genes from the expression data. Secondly, to make biogeography-based optimization suitable for this discrete problem, binary biogeography-based optimization (BBBO) is proposed, based on a binary migration model and a binary mutation model. A Core Vector Machine (CVM) is then introduced by integrating the non-dominated sorting method and the crowding-distance method into the BBBO framework. To show the effectiveness and efficiency of the algorithm, it is tested on ten gene expression benchmark datasets. Experimental results demonstrate that the proposed method is better than, or at least comparable with, previous particle swarm optimization (PSO) and support vector machine (SVM) approaches from the literature when considering the quality of the solutions obtained.
95-101
PDF
IJCTA © Copyright 2015 | All Rights Reserved.

This work is licensed under a Creative Commons Attribution 2.5 India License.