IJCTA-Volume 7 Issue 6 / November-December 2016
S.No
Title/Author Name
Page No
1
Dynamic KNN-Query Processing for Location Based Services
-Pushpa Latha.N,Divya Sreenivasanaidu Shikaripura
Abstract
Location Based Services (LBS) are services for which the query point must be known; once the query location is provided, high-quality services can be delivered. LBS applications have become a reality thanks to technology innovations in the real world. When location based queries reach an application, the application needs to return accurate results, which is essential because there are many routes to reach a destination. Monitoring road traffic is also essential to save time when answering location based queries, and online route APIs are available for this purpose. In this paper we propose and implement an LBS system based on a transport API. We built a prototype application to demonstrate the proof of concept. The empirical results are encouraging, as query processing efficiency is improved by an API that saves routes.
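As an illustration only (not the paper's implementation), the core k-nearest-neighbour query over candidate locations can be sketched as follows; the station coordinates, the Euclidean metric and the function name `knn_query` are assumptions:

```python
import math

def knn_query(query, points, k):
    """Return the k points nearest to the query location (Euclidean distance)."""
    ranked = sorted(points, key=lambda p: math.dist(query, p))
    return ranked[:k]

# Hypothetical points of interest around a user's query location.
stations = [(1.0, 1.0), (4.0, 4.0), (1.5, 0.5), (9.0, 9.0)]
nearest_two = knn_query((1.0, 0.0), stations, 2)
```

A real LBS system would replace the linear scan with a spatial index (e.g. an R-tree) and road-network distances rather than straight-line ones.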
Index Terms – Data mining, query processing, LBS, spatial data
748-752
PDF
2
Knowledge Discovery from Deployed Mobile Apps for Identifying Ranking Fraud
-Ch.V.V Narasimha Raju,B.Himabindu
Abstract
Mobile applications have become very popular in the recent past, and application owners or vendors promote their applications with different kinds of strategies. One such strategy is giving a ranking to a mobile application. However, it is reported that many are misusing rankings or giving misleading rankings to influence the general public to use their applications; in other words, they promote sales of their mobile apps using fraudulent ranking. As people from all walks of life use mobile phones and prefer mobile applications for various purposes, it is essential to have a mechanism that identifies ranking fraud and provides correct rankings in order to promote a genuine and healthy environment. In this paper we propose a framework that takes a mobile app's workflow and historical knowledge as inputs and performs ranking fraud detection. The detection technique uses leading sessions, collects ranking-based, review-based and rating-based evidences, and aggregates them in order to identify fraud. We built a prototype application that demonstrates the proof of concept besides supporting local and global anomaly detection as part of finding ranking fraud. The empirical results revealed that the proposed system is useful.
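The evidence-aggregation step described above can be sketched minimally; the weights, threshold and function name below are illustrative assumptions, not the framework's actual parameters:

```python
def aggregate_evidence(ranking_score, rating_score, review_score,
                       weights=(0.4, 0.3, 0.3), threshold=0.5):
    """Combine three evidence scores (each assumed to lie in [0, 1]) into one
    fraud score by weighted average; flag the session if it crosses the threshold."""
    score = (weights[0] * ranking_score
             + weights[1] * rating_score
             + weights[2] * review_score)
    return score, score >= threshold

# Hypothetical evidence scores for one leading session of an app.
score, is_fraud = aggregate_evidence(0.9, 0.7, 0.6)
```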
Keywords – Ranking, mobile apps, ranking fraud, evidence aggregation
753-758
PDF
3
Analysis of various Security Models in Cloud Computing
-Mehak Jain,Rachna Jain,Deepika Kumar
Abstract
Cloud Computing is an upcoming computing technology wherein users can access various services such as IaaS, PaaS and SaaS in a pay-as-you-go model. Over the years, the number of users of cloud services has increased exponentially owing to the benefits it offers: universal access, unlimited storage, scalability of resources and increased collaboration. The cloud has slowly revolutionized the way commercial computing works. However, it has its fair share of drawbacks, the most concerning being the security of data stored on the cloud. Users may store sensitive data such as credit card details, log information for a company and personal information (e.g. social security numbers) on the cloud; this data can be accessed by a third party and used for actions with malicious intent. Various security issues, such as data integrity, unauthorized access and privacy, are the foremost concerns of every cloud user. In this paper, we analyse a large number of proposed and implemented cloud security techniques, ranging from two- and three-tier architecture models to various frameworks. The paper discusses the key features, advantages and disadvantages of these techniques, and a comparative analysis further highlights their benefits and drawbacks.
Keywords - Cloud Computing, Security, Data Integrity, Encryption
759-767
PDF
4
An Effective Outlier Detection-Based Data Aggregation for Wireless Sensor Networks
-Dr Ashwini K B,Dr Usha J
Abstract
Data aggregation protocols are essential for wireless sensor networks to reduce energy consumption and prolong network lifetime. However, for wireless sensor networks, not only the energy consumption of sensor nodes but also the correctness of the data aggregation results is critical. As wireless sensor networks are usually deployed in harsh and hostile environments, malfunctioning and compromised sensor nodes negatively affect the correctness of the aggregation results. This paper presents a data aggregation scheme that first eliminates outliers; it then determines the sensor nodes that have distinct sensed data and collects from only one sensor node per distinct value, while the data aggregator rejects data from any other sensor nodes. This process ensures that (i) no outlier data is included in the aggregated data and (ii) the data aggregator holds no redundant data. The simulation results show that the proposed scheme reduces the number of false data transmissions, thereby increasing the data aggregation accuracy.
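The two steps the abstract describes, outlier elimination followed by deduplicated aggregation, can be sketched as below. The z-score test, the cutoff value and the use of the mean as the aggregate are assumptions for illustration, not the paper's scheme:

```python
import statistics

def aggregate(readings, z_cutoff=2.0):
    """Drop outlier readings (simple z-score test), keep one reading per
    distinct sensed value, then aggregate the remainder with the mean."""
    mu = statistics.mean(readings)
    sigma = statistics.pstdev(readings)
    kept = [r for r in readings
            if sigma == 0 or abs(r - mu) / sigma <= z_cutoff]
    distinct = sorted(set(kept))      # one reading per distinct value
    return sum(distinct) / len(distinct)

# Hypothetical temperature readings; 80.0 comes from a faulty node.
clean_mean = aggregate([21.0, 21.5, 21.0, 22.0, 21.2, 20.8, 21.6, 80.0])
```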
768-772
PDF
5
Optimized Route Technique for DSR Routing Protocol in MANET
-Dr.K.Santhi,Dr.G.Kalpana
Abstract
Mobile ad hoc network (MANET) is a collection of portable devices which communicate with each other without the help of any fixed base station or access point. Each node in a MANET experiences dynamic topology and limited transmission range, bandwidth and battery power, all of which affect routing. The critical issue of routing in a MANET is to select an optimal and stable route. Link failures occur due to high mobility, congestion and limited battery power, which degrade the performance of the routing protocol; such problems make a routing protocol ineffective and unreliable. To make a routing protocol effective and reliable, this paper proposes an Optimized Routing Technique (ORT) using a Modified Combined Weight Function (MCWF) mechanism that accounts for signal strength, energy level, load and distance between nodes. Based on the MCWF, the routes are arranged so that routes with minimum length and traffic load and maximum energy level and signal strength are listed first in the route table. The path is established over the route with the maximum MCWF. The benefit of this mechanism is the selection of a stable and optimal path to the destination. It is implemented in NS-2 and minimizes end-to-end delay, overhead and energy consumption while maximizing the packet delivery ratio.
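A minimal sketch of the route-selection idea, under the assumption that each metric has been pre-normalised to [0, 1]; the particular weight values and field names are illustrative, not the paper's MCWF formula:

```python
def mcwf(route, w=(0.3, 0.3, 0.2, 0.2)):
    """Combined weight: favour high signal strength and energy level,
    penalise traffic load and distance (all metrics assumed in [0, 1])."""
    return (w[0] * route["signal"] + w[1] * route["energy"]
            - w[2] * route["load"] - w[3] * route["distance"])

def select_route(routes):
    """Pick the candidate route with the maximum combined weight."""
    return max(routes, key=mcwf)

# Two hypothetical candidate routes discovered by DSR.
routes = [
    {"id": "A", "signal": 0.9, "energy": 0.8, "load": 0.2, "distance": 0.3},
    {"id": "B", "signal": 0.6, "energy": 0.9, "load": 0.5, "distance": 0.2},
]
best = select_route(routes)
```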
Keywords: DSR, MANET, Modified Combined Weight Function, stable route, link failure
773-779
PDF
6
Design and Development of a Robust Algorithm for Information Extraction using K-Means and AGNES
-Nancy,Arvind Kaur
Abstract
Today, data mining has become a burning research issue in computer and information science, with the goal of discovering knowledge from large datasets. A modern document contains not only text but also audio, video and images. Several tools, techniques and algorithms are available for extracting knowledge from a dataset. In this paper, a comparative analysis of various clustering techniques, along with their features, pros and cons, gives a detailed insight into these techniques. A hybrid algorithm is proposed that merges the advantages of two approaches, k-means and AGNES, so that various disadvantages of both can be removed. The clustering algorithms are implemented in the versatile tool MATLAB (MATrix LABoratory) and compared on the basis of parameters such as accuracy, precision, recall, F-score, true positives, true negatives, false positives and false negatives for prediction on the Iris dataset.
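One common way to combine the two approaches is to use AGNES-style bottom-up merging to seed k-means with good initial centroids. This is a pure-Python sketch of that idea on one-dimensional data (the paper's own implementation is in MATLAB, and its exact hybridisation may differ):

```python
def agnes_init(points, k):
    """AGNES-style merging: start with singleton clusters and repeatedly
    merge the two whose centroids are closest, until k clusters remain."""
    clusters = [[p] for p in points]
    while len(clusters) > k:
        cent = [sum(c) / len(c) for c in clusters]
        i, j = min(((a, b) for a in range(len(clusters))
                    for b in range(a + 1, len(clusters))),
                   key=lambda ab: abs(cent[ab[0]] - cent[ab[1]]))
        clusters[i] += clusters.pop(j)
    return [sum(c) / len(c) for c in clusters]   # initial centroids

def kmeans(points, centroids, iters=10):
    """Standard k-means refinement starting from the AGNES centroids."""
    for _ in range(iters):
        groups = [[] for _ in centroids]
        for p in points:
            groups[min(range(len(centroids)),
                       key=lambda i: abs(p - centroids[i]))].append(p)
        centroids = [sum(g) / len(g) if g else c
                     for g, c in zip(groups, centroids)]
    return sorted(centroids)

data = [1.0, 1.2, 0.8, 5.0, 5.3, 4.9]        # two obvious groups
centroids = kmeans(data, agnes_init(data, 2))
```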
780-787
PDF
7
Component Retrieval using Local Repository by K-Mean and Cosine Similarity
-Sonal Mehta,Vijay Nagpal
Abstract
Software reuse mainly consists of making use of existing information, elements or products when designing and implementing a new system or product. It means reusing a segment of source code, with some alteration, to add new functionality. Replication of an entire software program does not count as reuse. Reuse of assets depends on both the similarities and differences among the applications in which a component is used. We build a system for software component retrieval based on matching, k-means clustering and cosine similarity.
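The cosine-similarity matching step can be sketched as follows; the term-frequency vectors, repository contents and function names are hypothetical, intended only to show how a query is ranked against stored components:

```python
import math

def cosine(a, b):
    """Cosine similarity between two term-frequency vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def retrieve(query_vec, repository):
    """Rank repository components by cosine similarity to the query."""
    return sorted(repository, key=lambda c: cosine(query_vec, c["vec"]),
                  reverse=True)

# Hypothetical local repository with per-component term-frequency vectors.
repo = [{"name": "sort_util",  "vec": [1, 0, 2]},
        {"name": "login_form", "vec": [0, 3, 1]}]
best = retrieve([1, 0, 1], repo)[0]["name"]
```

In the paper's pipeline, k-means would first cluster the repository so that only the closest cluster needs to be scanned at query time.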
Keywords— k-mean, clustering, matching, cosine
788-797
PDF
8
Internet of Things-Architecture and Enabling Technologies
-Prof.T.Venkat Narayana Rao,Akshit Mandala,Shayideep Sangam
Abstract
The phrase Internet of Things (IoT) stands out as a vision of the future Internet. Through this network, interconnected objects ranging from tiny materials such as paper to huge automobiles are left to interact with one another and exchange information. The productivity and efficiency of the physical world can then be used to innovate services and automate things. This paper aims to survey the current state of the Internet of Things and exhibits the key technological drivers, potential challenges and future research areas of what could be the next industrial revolution. IoT would improve the accurate monitoring of objects around us and also improve our quality of life.
Index Terms— Cloud Computing, Electronic Product Code (EPC), Internet of Things (IOT), Radio Frequency Identification (RFID), Wireless Sensor Network (WSN), Smart Environments Security, Systemic Approach
798-804
PDF
9
Stochastic Equation Based Modeling on Multi Release Incorporating Learning Effect and Two Types of Imperfect Debugging on Faults of Different Severity
-A.R.Prasanan,Suneeta Bhati,Javinder Singh,Yogesh Sharma
Abstract
Software reliability refers to the likelihood that software functions without failure for a specified duration of time under specified conditions. Reliability is very imperative to amplify software persistence and longevity. Software testing is the process that ferrets out faults in developed computer software. Entire eradication of faults from software is not viable due to software intricacy and the caliber of the testing team; this phenomenon is termed imperfect debugging. Error generation is the process in which faults are imperfectly removed and additional faults emanate from the existing ones. The inbuilt flexibility of the proposed model takes care of different environments, ranging from the exponential and s-shaped models to the three-stage Erlang model. As the size of a software system grows large and the number of faults detected during the testing phase becomes large, the change in the number of faults detected and removed through each debugging becomes sufficiently small compared with the initial fault content at the beginning of the testing phase. In such a situation, the software fault detection process can be modelled as a stochastic process with a continuous state space. In this paper, we propose a new SDE-based software reliability growth model (SRGM), built on an Itô-type stochastic differential equation, which considers the learning effect and the experience gained by the testing team as testing progresses, in the presence of two types of imperfect debugging.
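A representative Itô-type SDE of the kind the abstract describes (this is the generic form used in standard SDE-based SRGM formulations; the symbols and the exact drift/noise structure of the paper's model may differ):

```latex
dN(t) = b(t)\,\bigl(a - N(t)\bigr)\,dt \;+\; \sigma\,\bigl(a - N(t)\bigr)\,d\gamma(t)
```

where $N(t)$ is the number of faults detected by time $t$, $a$ the initial fault content, $b(t)$ a (possibly learning-dependent) fault-detection rate, $\sigma$ the magnitude of the irregular fluctuation, and $\gamma(t)$ standard Brownian motion.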
Key words: NHPP, Itô-type SDE, imperfect debugging, error generation, multi up-gradation, severity of faults
805-814
PDF
10
Intra Cloud Trust Management Technique
-N Ambika,Dr.M Sujaritha
Abstract
Hatman is a decentralized trust-management system that uses replication in the cloud and guarantees computation integrity. To evaluate reputation-based trust management in a realistic cloud environment, we extend a full-scale, production-level data processing cloud (Hadoop MapReduce) with a reputation-based trust management implementation based on EigenTrust. Consistencies and inconsistencies between nodes constitute feedback in the form of agreements and disagreements. These form a trust matrix whose eigenvector encodes the global reputations of all nodes in the cloud. The global trust vector is consulted when choosing between differing replica responses, with the most trustworthy response delivered to the user as the job result. To achieve high scalability and low overhead, we show that job replication, result consistency checking and trust management can all be formulated as highly parallelized MapReduce computations. The security offered by the cloud thus scales with its computational power. The trust management framework is centralized only in that master nodes maintain a small, trusted store of trust and reputation data; all computation is decentralized in that trust matrix calculations and user-submitted job code are dispatched to slave nodes.
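The EigenTrust core, computing global reputations as the principal eigenvector of the normalised local-trust matrix by power iteration, can be sketched as follows; the matrix values are hypothetical and this omits EigenTrust's pre-trusted-peer damping:

```python
def eigen_trust(C, iters=50):
    """Power iteration t <- C^T t on the row-normalised local-trust
    matrix C; converges to the eigenvector of global reputations."""
    n = len(C)
    t = [1.0 / n] * n                  # start from uniform trust
    for _ in range(iters):
        t = [sum(C[i][j] * t[i] for i in range(n)) for j in range(n)]
        s = sum(t)
        t = [x / s for x in t]         # keep the vector normalised
    return t

# Row i holds node i's normalised trust in the other nodes (hypothetical).
C = [[0.0, 0.5, 0.5],
     [0.9, 0.0, 0.1],
     [0.8, 0.2, 0.0]]
trust = eigen_trust(C)
```

In Hatman's setting, the entries of C would come from agreement/disagreement feedback between replica responses, and the resulting vector decides which replica's answer is returned to the user.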
Keywords: Cloud computing, Cloud security, Hadoop MapReduce, Cloud Trusting
815-825
PDF
IJCTA © Copyright 2015 | All Rights Reserved.

This work is licensed under a Creative Commons Attribution 2.5 India License.