IJCTA-Volume 7 Issue 3 / May-June 2016
S.No
Title/Author Name
Page No
1
Effectiveness of Usability Attributes in Domains of Virtual Environment
-
Kirti Muley,Maya Ingle
Abstract
In recent years, Virtual Environment (VE) technology has gained immense attention from researchers. VE has applications in almost all domains of software development. At the same time, it is worthwhile to incorporate software usability attributes when developing an effective VE-based application. A wide range of usability attributes may be used for different application areas and environments; hence, the importance of a usability attribute varies from domain to domain and from application to application. In this paper, we attempt to identify the effectiveness of usability attributes specifically in the domains of VE. An algorithm, BayesPost, is proposed to evaluate the effectiveness of the usability attributes of interest in various VE domains. For execution of the algorithm, we selected five domains, namely Simulation, Educational Websites, Banking Sector, Medical and Entertainment, covering 19 usability attributes. On the basis of this study, it may be stated that the probability of existence of the usability attribute Interactivity is higher in domains associated with learning. In contrast, the usability attribute Readability has a lower probability of existence in domains where visual effects dominate textual data. It is also observed that the probability of existence of the usability attribute Active Distraction is subdominant in all domains except the medical domain.
342-348
PDF
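The BayesPost algorithm of entry 1 is not detailed in the abstract; the following is only a minimal sketch of the underlying idea, computing the posterior probability of an attribute's presence in a domain via Bayes' theorem. The function name and the toy counts are illustrative assumptions, not the authors' implementation.

```python
# Toy sketch: posterior probability that a usability attribute is
# present in a given domain, via Bayes' theorem:
#   P(attr | domain) = P(domain | attr) * P(attr) / P(domain)
# All counts below are invented for illustration.

def posterior(count_attr_in_domain, count_domain, count_attr, total):
    """Posterior P(attr | domain) from simple co-occurrence counts."""
    p_domain_given_attr = count_attr_in_domain / count_attr
    p_attr = count_attr / total
    p_domain = count_domain / total
    return p_domain_given_attr * p_attr / p_domain

# e.g. 'Interactivity' observed in 8 of 10 surveyed educational
# applications, and in 12 applications overall, out of 50 surveyed.
p = posterior(count_attr_in_domain=8, count_domain=10, count_attr=12, total=50)
print(round(p, 2))  # 0.8
```

A higher posterior for an attribute in a domain would correspond to the paper's "higher probability of existence" of that attribute there.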
2
Techniques in Facial Expression Recognition
-
Avinash Prakash Pandhare,Umesh Balkrishna Chavan
Abstract
Facial expression recognition is gaining widespread importance as applications involving Human-Computer Interaction increase. This paper surveys the techniques and approaches that have been used in the field of facial expression recognition. Facial expression recognition proceeds in several stages, and these stages have been implemented with various approaches. Viola-Jones for face detection, Gabor filters for feature extraction, SVM classifiers for classification, L1 minimization for sparse-representation-based facial expression recognition, the geometric deformation model, multiple Gabor filters for robust feature extraction, and parallel implementations of Viola-Jones face detection and of the SVM classifier for expression classification are discussed in this paper.
Keywords: Viola-Jones, Gabor filters, L1 minimization, 2DPCA, CERT
349-351
PDF
3
A Novel Correlated Coefficients Vector Chipping Approach of Noise Filtration for Distorted Images
-
Md Ateeq ur Rahman,Abdul Samad Khan
Abstract
In this paper, an improved dynamic noise filtration technique for image denoising is proposed, based on filtering the spectral content of the image. In this approach, a spectral decomposition into multiple frequency bands using multiwavelets is presented, and an enhanced chipping factor is employed to discard additive noise from the extracted frequency-band coefficients of images.
The proposed technique is based on the idea of utilising the spatial relationships of pixels in the noisy image after multiband decomposition. The highly correlated decomposed coefficients constitute the attributes of a vector, and the chipping operation is applied to the entire vector. In our work, an enhanced multivariate chipping technique is proposed that is well suited to denoising two-dimensional noisy images. The efficiency of the proposed approach is evaluated with the PSNR metric on images distorted with Gaussian noise at different levels. The results show that the noise is discarded to a reasonable extent and that the approach considerably surpasses conventional denoising methods both quantitatively and visually.
Keywords- Denoising, Gaussian Noise, Multiwavelets, Chipping, Decomposition.
352-356
PDF
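The "enhanced chipping factor" of entry 3 is the authors' own construction and is not given in the abstract; the sketch below only shows the generic shrinkage step that wavelet-domain denoisers apply to high-frequency band coefficients (standard soft thresholding), as a stand-in for the chipping operation.

```python
# Illustrative soft-thresholding ("chipping") of detail coefficients.
# Coefficients smaller than the threshold are assumed to be noise and
# zeroed; larger ones are shrunk toward zero by the threshold amount.

def soft_threshold(coeffs, t):
    """Apply the soft-threshold rule with threshold t to each coefficient."""
    out = []
    for c in coeffs:
        if abs(c) <= t:
            out.append(0.0)      # small magnitude: treated as noise
        elif c > 0:
            out.append(c - t)    # shrink positive coefficients
        else:
            out.append(c + t)    # shrink negative coefficients
    return out

detail = [0.2, -1.5, 3.0, -0.1, 0.9]
print(soft_threshold(detail, t=0.5))  # [0.0, -1.0, 2.5, 0.0, 0.4]
```

In a full pipeline this step would sit between the multiwavelet decomposition and the inverse transform, applied band by band.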
4
A Comprehensive Review of Volatile Data Forensics
-
Dilpreet Singh Bajwa,Dr. Satish Kumar
Abstract
Volatile data forensics is a new and challenging field that requires attention. It must be considered a part of the digital forensic process rather than an isolated process, as it provides important and crucial information that cannot be acquired by following the traditional forensic process alone. When only the traditional forensic process is followed, important information that resides in volatile memory while the system is running, such as the current state of the system, running processes, open ports, recently established connections, usernames and passwords, unencrypted data and keys, and traces of malware and anti-forensic activities, may be missed. Volatile data is a source of this type of information and can quickly lead an investigation in the right direction. This paper provides a detailed view of the need for volatile data forensics, the challenges, tools and techniques it requires, and the importance of this field in the digital investigation process.
Keywords: Digital Forensics, Computer Forensics, Memory Forensics, Volatile Data Forensics, Cyber Forensics Tools.
357-367
PDF
5
A Control Method for LTI Systems with Sensor Drifts
-
Wataru Hori,Takashi Takimoto
Abstract
We consider the stabilization problem of linear time-invariant systems with a sensor drift, using a dynamic state feedback controller. A sensor drift causes a steady-state bias even if the system is stabilized. Previous research has shown that steady-state blocking zeros of the stabilizing controller play an important role; such a controller is called a washout controller. In this paper, we propose a new washout controller that eliminates the steady-state bias. We then develop a design method that allows the proposed washout controller to be designed from a stabilizing state feedback gain. Moreover, we present numerical simulations that illustrate the effectiveness of the proposed washout control.
368-373
PDF
6
A Review on the Multi Modal Biometric Systems
-
Vedang Ratan Vatsa,Deepak Saini,Ankit Punia,Shruti Vatsa
Abstract
Biometrics is the science of identifying an individual on the basis of his or her physiological or behavioral features. A single biometric factor is often not enough to meet the desired performance requirements, which gives rise to the need for multimodal biometric systems. In this paper, we focus on multimodal biometric systems covering signature identification, facial recognition, DNA identification, speech authentication, hand geometry recognition, iris recognition and fingerprint identification. The performance of such systems, however, easily degrades with changes in external factors. The importance and applications of biometrics are also highlighted.
374-384
PDF
7
Dynamics of Single Species Model with Time Delay in Polluted Environment
-
E.M.Elabbasy,Waleed A.I. Elmorsi,A.A.Elsaadny
Abstract
In this paper we investigate the linear stability of a delayed single-species model that grows logistically in a polluted environment. We show the occurrence of a Hopf bifurcation at the positive equilibrium. To determine the direction of the Hopf bifurcation and the stability of the bifurcating periodic solution, we use the normal form approach and a center manifold theorem.
385-391
PDF
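Entry 7 does not state the model equation in its abstract; a standard delayed single-species logistic model of the kind analyzed in such studies is Hutchinson's equation (any pollutant coupling terms used in the paper are omitted here):

```latex
% Delayed logistic (Hutchinson) growth; r, K, tau > 0.
\frac{dN(t)}{dt} = r\,N(t)\left(1 - \frac{N(t-\tau)}{K}\right)
```

For this baseline equation the positive equilibrium $N^{*} = K$ is asymptotically stable for $r\tau < \pi/2$ and loses stability through a Hopf bifurcation as $r\tau$ crosses $\pi/2$; this is the kind of threshold behaviour that normal form and center manifold computations characterize.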
8
Application of Alternate Energy Efficient Clustering Protocols for Heterogeneous Networks
-
Mohd Waes Siddiqui,Vandana Dubey
Abstract
Owing to their wide range of applications, Wireless Sensor Networks (WSNs) have seen growing use over the past few years and have become a hot research area. Conserving sensor energy, so as to prolong the lifetime of the network, is one of the primary issues in WSNs. Routing protocols developed for other ad hoc networks such as MANETs and VANETs cannot be applied directly to WSNs because of the energy constraints of the nodes. Various clustering-based protocols have been suggested over the years, including LEACH, TEEN and PEGASIS. Most clustering protocols, however, assume the nodes to share the same characteristics, i.e. to be homogeneous, which is a hypothetical case: in practice, the sensors used differ according to the type of usage they are put to. Thus node heterogeneity must be taken into account while designing any network protocol in the WSN paradigm. In this paper, which gives a gist of the research work in this area, the authors analyze various heterogeneous clustering-based routing protocols and suggest a novel approach towards attaining energy efficiency in the form of a hybrid energy-efficient protocol.
Keywords—Cluster Head, HEED, LEACH, Wireless Sensor Networks.
392-396
PDF
9
Personalized Concept-Based Clustering Of Search Engine Queries
-
Rohit Chouhan,Dr.Pramod S. Nair
Abstract
Web search currently faces the problem that search queries are very short and ambiguous and often do not capture exactly what the user wants. To address this, some search engines suggest terms that are semantically related to the submitted query, so that users may select from the suggestions the ones that reflect their information needs. In this paper, we introduce a hybrid approach that takes the user's conceptual preferences into account in order to provide personalized query recommendations. We achieve this goal with two new strategies. First, we develop online techniques that extract concepts from the web snippets of the search results returned for a query and use these concepts to identify queries related to that user query. Second, we propose a new two-phase personalized agglomerative clustering algorithm that is able to generate personalized query clusters. Experiments show that the proposed approach achieves better precision and recall than existing query clustering methods.
Keywords: Search Engine, Personalization, Click-Through, Concept Clustering, Bipartite Graph, Ranking Algorithm.
397-402
PDF
10
Boosting and Meta-Learning Techniques for Distributed Data Mining on Electronic Medical Datasets
-
Dr.M.Nandhini,S.Urmela,Jerry. W. Sangma,Pranjal Saikia
Abstract
Distributed Data Mining (DDM) has evolved into a dominant area of Data Mining in recent times. DDM involves a homogeneous classifier approach, which mines data while maintaining the same set of attributes across distributed geographical sites. Boosting and meta-learning techniques are dominant homogeneous classifier approaches. This paper implements DDM based on boosting (the AdaBoost J48 classifier algorithm) and meta-learning (the k-means algorithm) on hepatitis, hypothyroid and diabetes EHRs.
Keywords – Distributed Data Mining, classifier approach, boosting, meta-learning, Electronic Health Records.
403-410
PDF
11
IPv4 to IPv6 Transition: Optimal Routing Protocol within Dual Stack and Tunneling
-
Aparna Sivaprakash,S.Kayalvizhi
Abstract
The number of global internet users has been growing exponentially, necessitating a much larger number of unique IP addresses for all the connected networking devices. The prevalent IP version 4 is not able to meet the current requirement for IP addresses. To meet the future IP address requirement, a new version, IPv6, has been available since 1999. However, IPv6 is not backward compatible with IPv4. Since it is not possible to migrate all networking devices to IPv6 in a single day, IPv4 and IPv6 are going to be used in parallel for some time. As an interim solution to enable communication between IPv4 and IPv6, several techniques have been developed, the most widely used being Dual Stack and Tunneling. This project examines all possible combinations of the two main routing protocols, namely RIPng and OSPFv3, within Dual Stack and Tunneling, to find the most optimal combination and enhance the existing system. The efficiency of each combination is assessed on the following parameters: latency, throughput, packet loss and convergence time.
Keywords- IPv4, IPv6, RIP, OSPF, Tunneling, Dual Stack.
411-416
PDF
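Entry 11 does not name a router platform; assuming Cisco IOS for illustration, a dual-stack interface running RIPng alongside a manual IPv6-over-IPv4 tunnel running OSPFv3 might be configured roughly as follows (all addresses are documentation-range placeholders):

```
ipv6 unicast-routing
!
interface GigabitEthernet0/0
 ip address 192.0.2.1 255.255.255.0     ! IPv4 side of the dual stack
 ipv6 address 2001:DB8:1::1/64           ! IPv6 side of the dual stack
 ipv6 rip RIPNG enable                   ! RIPng on this interface
!
interface Tunnel0
 tunnel source GigabitEthernet0/1
 tunnel destination 203.0.113.2          ! remote tunnel endpoint (IPv4)
 tunnel mode ipv6ip                      ! IPv6 carried over IPv4
 ipv6 address 2001:DB8:FFFF::1/64
 ipv6 ospf 1 area 0                      ! OSPFv3 across the tunnel
!
ipv6 router ospf 1
 router-id 1.1.1.1
```

Swapping the protocol enabled per interface yields the RIPng/OSPFv3 combinations whose latency, throughput, packet loss and convergence time the project compares.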
12
Architecture and Design Strategies for creating a Radio Player Application for the Android
-
Rushabh Shroff
Abstract
Android™ is the world's most popular mobile platform [1]. It powers hundreds of millions of mobile devices in more than 190 countries around the world [2]. It has the largest installed base of any mobile platform and is growing fast: every day another million users power up their Android devices for the first time and start looking for apps, games, and other digital content. Android gives us a software platform for creating apps and games for Android users everywhere, as well as an open marketplace for distributing to them instantly.
A radio is a type of music player where the tracks to be played are decided on the fly and played on the device in a queue fashion, i.e. one after the other. This paper does not discuss the algorithm behind song selection, but rather the architecture of an Android application for playing the tracks inside the Android ecosystem.
417-419
PDF
13
Slang Word Identification on Twitter
-
Rushabh Shroff,Amitash Rames
Abstract
It is commonly known that people use different words to refer to the same things. We aim to find such similar words and classify their usage by location. We believe this can have multiple real-world uses. Such analysis could help linguistic scientists and aid in linguistic training. If a beverage-selling company needed to advertise its product, it could tailor its advertisements to use words drawn from such a map to connect better with the audience, so this data can be used in targeted marketing. On a higher level, if we measure this data over time, we can capture the resilience of words in particular areas. We can also map the evolution of words over time: some words may fade out of usage, some may spread to other areas, and some may even give way to new words. We think this can be very interesting for businesses, as they can measure the evolution of their brand names.
420-426
PDF
14
Clustering Based Automatic Fuzzy Partitioning of Numerical Attributes
-
Swati Ramdasi,Dr.Shailaja Shirwaikar,Dr.Vilas Kharat
Abstract
Partitioning of quantitative attributes is essential for mining association rules from quantitative data, and the fuzzy approach solves the sharp-boundary problem, giving fuzzy association rules with high interpretability and rich applicability. This paper presents automated partitioning of numerical data into fuzzy sets based on the k-means clustering algorithm. It can be used as a pre-processing step for numerical data and as the first step of the Fuzzy Apriori algorithm used to generate fuzzy association rules. The algorithm also requires definitions of the fuzzy support and fuzzy cardinality of the dataset, which are defined and used in validating the proposed technique over standard datasets.
427-431
PDF
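Entry 14's exact membership construction is not given in the abstract; one common way to turn 1-D k-means centres into a fuzzy partition is triangular sets anchored at neighbouring centres, sketched below. The function, the attribute name and the centre values are illustrative assumptions.

```python
# Sketch: turn sorted 1-D cluster centres into triangular fuzzy sets,
# one set per centre, with shoulder sets at both ends of the range.

def triangular_memberships(x, centres):
    """Membership degree of value x in each fuzzy set (sorted centres)."""
    c = sorted(centres)
    mems = []
    for i, ci in enumerate(c):
        left = c[i - 1] if i > 0 else ci
        right = c[i + 1] if i < len(c) - 1 else ci
        if x == ci:
            m = 1.0                       # full membership at the centre
        elif left < x < ci:
            m = (x - left) / (ci - left)  # rising edge from left centre
        elif ci < x < right:
            m = (right - x) / (right - ci)  # falling edge to right centre
        elif x <= ci and i == 0:
            m = 1.0                       # left shoulder set
        elif x >= ci and i == len(c) - 1:
            m = 1.0                       # right shoulder set
        else:
            m = 0.0
        mems.append(m)
    return mems

# Centres from a hypothetical k-means run on an 'age' attribute:
print(triangular_memberships(30, [20, 40, 60]))  # [0.5, 0.5, 0.0]
```

With this construction, any interior value's memberships in its two neighbouring sets sum to 1, which keeps the fuzzy support computations of a Fuzzy Apriori pass well behaved.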
15
An Efficient Load Balanced Clustering Method for Mobile Data Gathering in WSN
-
Soumya Gyanapnor,Swathi C
Abstract
In WSN applications, sensors are generally densely deployed, randomly scattered over a sensing field and left unattended after deployment, which makes it difficult to recharge or replace their batteries. After the sensors form autonomous organizations, those near the data sink typically deplete their batteries much faster than others because they relay more traffic. When sensors around the data sink deplete their energy, network connectivity and coverage may no longer be guaranteed. Given these constraints, it is crucial to design an energy-efficient data collection scheme that consumes energy uniformly across the sensing field to achieve a long network lifetime. Here, we propose a three-layer framework, LBC-DDU, for mobile data collection in wireless sensor networks, comprising the sensor layer, the cluster head layer, and the mobile collector (called SenCar) layer.
Keywords: Clustering, Load Balanced Clustering, SenCar, Wireless Sensor Networks
432-437
PDF
16
A Model for Identifying Source of Packet Drops and Forgery Attacks in WSN
-
Ambika,Nirupama S
Abstract
Large-scale sensor networks are deployed in numerous application domains, and the data they collect are used in decision-making for critical infrastructures. Data are streamed from multiple sources through intermediate processing nodes that aggregate information. A malicious adversary may introduce additional nodes into the network or compromise existing ones; therefore, assuring high data trustworthiness is crucial for correct decision-making. This paper proposes a new lightweight scheme to securely transmit provenance along with sensor data. In-packet Bloom filters are used to encode provenance with the sensor data, and the base station then performs provenance verification and reconstruction. In addition, the provenance scheme is extended to detect packet drop attacks staged by malicious data-forwarding nodes.
Keywords: Bloom Filters, Packet Drop, Provenance Decoding, Provenance Encoding
438-442
PDF
17
Using Cooperative Contact and Standing-based Watchdogs Recognizing Selfish Nodes in MANET
-
Rupesh,Sangameshwar Kawdi
Abstract
In mobile ad hoc networks, generating and maintaining anonymity for any node is challenging because of node mobility, the dynamic network topology and the cooperative nature of the network. Existing techniques based on cryptosystems and broadcasting cannot easily be adapted to MANETs because of their extensive cryptographic computation and/or large communication overhead. Mobile ad hoc networks (MANETs) assume that mobile nodes voluntarily cooperate in order to work properly. This cooperation is a cost-intensive activity, and some nodes can refuse to cooperate, leading to selfish node behaviour; thus, the overall network performance can be seriously affected. The use of watchdogs is a well-known mechanism for detecting selfish nodes. However, the detection process performed by watchdogs can fail, generating false positives and false negatives that can induce wrong operations. Moreover, relying on local watchdogs alone can lead to poor performance when detecting selfish nodes, in terms of precision and speed.
Keywords: Local Watchdog, MANET, Performance Evaluation, Selfish Node
443-447
PDF
18
Individual Document Keyword Extraction for Tamil
-
T.Vaishnavi,Roxanna Samuel
Abstract
Keyword extraction is an important technique for summarization, document clustering, Web page retrieval, document retrieval, text mining, and so on. By extracting significant keywords, we can easily identify a document's content and understand the relationships among documents. Keyword extraction is considered one of the core technologies for all automatic processing of text materials. This paper employs Conditional Random Fields (CRF) for the task of extracting effective keywords that uniquely identify a Tamil document, using machine learning techniques. Keyword extraction includes POS tagging and chunking, which are the elementary processing steps for any language processing task. Part-of-speech (POS) tagging is the procedure of labelling each word in the corpus with its syntactic category. Chunking is the process of identifying and splitting the text into syntactically correlated word groups; the chunking process employs a Conditional Random Field to segment the sentences. We have developed our own tagset for annotating the corpus, which is used for training and testing the POS tag generator and the chunker. Results show that the POS-tag-enhanced keyword extraction model can indeed assist in automatic keyword assignment and in fact performs significantly better than the original state-of-the-art keyword extractor.
Keywords: Keyword Extraction, POS Tagging, NP Chunking, SVM, CRF
448-452
PDF
19
Stemming in Tamil for Affix Stripping
-
S.Mohanapriya,E.Surya
Abstract
Stemming is one of the most important steps in many Natural Language Processing tasks. Stemming reduces inflected words to a common stem/root word. Stemming has mainly been carried out for English, whereas the Tamil language is more complex in structure and has intricate grammatical rules. Tamil is a Dravidian language, mainly spoken by the Tamil people, and Tamil words have more derivational forms than most other languages; words get inflected into different forms based on number, person, gender and tense. Taking this complexity into account, the present attempt proposes an effective stemmer for the Tamil language. The stemmer finds the stem/root word for a given Tamil input and is built on an affix stripping algorithm, which reduces over-stemming and under-stemming errors. The affix stripping algorithm removes the prefix and/or suffix from the given Tamil input based on a set of rules. The input is Tamil text. The raw input goes through normalization, feature extraction and tokenization, which remove special characters and stop words and divide the input into tokens; the tokens are then fed into the stemmer for stemming. Finally, the collection of stemmed words is displayed as output, and the performance and efficiency of the stemmer are evaluated.
Keywords: NLP, Stemming, Affix Stripping, Normalization, Feature Extraction, Tokenization.
453-457
PDF
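The affix stripping of entry 19 can be sketched as longest-match suffix removal. The tiny transliterated suffix list and the sample words below are placeholders; a real Tamil stemmer needs the full, ordered inventory of case, plural and tense suffixes in Tamil script, plus the sandhi rules the paper alludes to.

```python
# Sketch of rule-based longest-match suffix stripping.
SUFFIXES = ["kal", "ai", "in", "il"]  # illustrative only, not exhaustive

def strip_suffix(word, min_stem=3):
    """Remove the longest matching suffix, keeping at least min_stem chars."""
    for suf in sorted(SUFFIXES, key=len, reverse=True):
        if word.endswith(suf) and len(word) - len(suf) >= min_stem:
            return word[:-len(suf)]
    return word  # no rule applied: treat the word as already a stem

print(strip_suffix("puthakamkal"))  # 'puthakam' (hypothetical plural form)
print(strip_suffix("veedu"))        # 'veedu' (unchanged)
```

The `min_stem` guard is one simple way to curb over-stemming: a suffix is only stripped if a plausibly long stem remains.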
20
Secured Document Search and Retrieval using Visual Cryptography Scheme in Cloud Environment
-
Priyanka.K,V.Mercy Rajaselvi
Abstract
Cloud storage provides flexible storage at an economical cost, but data privacy is a major concern that prevents end-users from storing confidential data on the cloud. This paper focuses on security for documents that are uploaded to and stored on the cloud. In order to avoid threats like breaches and malware attacks, a Visual Cryptography Scheme (VCS) provides high-level security for the data stored on the cloud. The VCS is implemented by separately sending a source image to the cloud and a key image to the provider. When the source image and the key image are overlaid on each other, a key is generated for encrypting the document file. The document file is then encrypted using the Advanced Encryption Standard (AES). When a document file is uploaded to the cloud, an OPE (Order Preserving Encryption) key is generated by the cloud provider. Along with the encrypted file, the OPE password and the source image are sent to the cloud. When the user requests a file, the cloud provider authenticates the user's identity and then sends the OPE password and the key image to the user. The user sends the OPE password to the cloud to retrieve files. The cloud verifies the OPE password against the one sent by the user and, if they match, sends back the document file and the source image. The source image and the key image are overlaid to generate the decryption key, with which the user can decrypt the file.
Keywords: Visual Cryptography Scheme, Order Preserving Encryption, Advanced Encryption Standard.
458-464
PDF
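The core idea of entry 20, that the encryption key emerges only when the source-image share and the key-image share are combined, can be sketched as a 2-out-of-2 secret sharing scheme. Here XOR stands in for the visual overlay, and hashing the combined shares stands in for key generation; the AES encryption itself would use a crypto library and is omitted. All function names are illustrative assumptions.

```python
import hashlib
import secrets

def make_shares(n_bytes=32):
    """Split a random secret into two XOR shares (2-out-of-2 scheme)."""
    source_share = secrets.token_bytes(n_bytes)   # the "source image" share
    secret = secrets.token_bytes(n_bytes)
    # The "key image" share: XOR of the secret with the source share.
    key_share = bytes(a ^ b for a, b in zip(secret, source_share))
    return source_share, key_share

def overlay(share_a, share_b):
    """Recombine the shares and hash the result into a 256-bit key."""
    combined = bytes(a ^ b for a, b in zip(share_a, share_b))
    return hashlib.sha256(combined).digest()

s1, s2 = make_shares()
assert overlay(s1, s2) == overlay(s2, s1)  # overlay order is irrelevant
print(len(overlay(s1, s2)))  # 32 bytes, i.e. a 256-bit AES-sized key
```

Either share alone is uniformly random and reveals nothing about the key, which mirrors the paper's rationale for storing the two images with different parties.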
IJCTA © Copyright 2015 | All Rights Reserved.

This work is licensed under a Creative Commons Attribution 2.5 India License.