IJCTA-Volume 6 Issue 6/ November-December 2015
A Web Based Automated Data Ordering System for Multiple Satellite Vendors
-JayaSudha Tigiripalli, Sonu Singh Tomar, B. Radhika, Manju Sarma, B. Gopalakrishna
Demand for high-resolution data for various geospatial applications is increasing day by day. The need arises to procure data from foreign vendors to meet user needs and expand the capacity of data supply. The National Remote Sensing Centre (NRSC), being the nodal agency, has the responsibility of procuring data from different foreign vendors on behalf of Indian users. Streamlining and automating the process workflow to enable easy ordering, and monitoring the order status to enhance the user ordering experience, is vital. Communicating workflow tasks at regular intervals through instant messaging is the demand of the day. Integration into the existing multi-mission workflow is essential to improve resource reuse, performance and turnaround time. Differences in the order processing of IRS data and foreign high-resolution data arise due to the heterogeneous and foreign nature of the data processing. User-friendly, interactive web-based online services become the need of the hour to cater to user demands.
This paper describes the design and implementation aspects of a foreign-satellite high-resolution data ordering system developed using an Oracle database for the data repository and RIA technologies for a user-friendly interactive interface. It provides interlinking of vendors, users and processes to ensure continuity.
The behavior of Chip Firing Game and related Model: A Comprehensive Survey
Le Manh Ha
In this paper, we give a survey of known results concerning the presence of order structure and lattices in the context of discrete dynamical models derived from studies of Chip Firing Games.
Index Terms—Chip Firing Game, Conflicting Chip Firing Game, Discrete dynamical system, order and lattice structure, Petri net, Rotor router.
A Secure Self-Adaptation Mechanism for Service-Based Cloud Applications
Purushottam S. Chavan, Prof. B. R. Nandwalkar
Self-adaptation, as a concept, has been around for many years in several domains such as business. Self-adaptivity in computer-based systems is a relatively recent technique. Cloud computing, with its promise of (almost) unlimited storage, computation, and bandwidth, is increasingly becoming the infrastructure of choice for many organizations. As cloud offerings mature, service-based applications need to dynamically self-adapt to changing QoS requirements. In this paper, we present a decentralized mechanism for such self-adaptation, and its security analysis, using market-based heuristics. We use a continuous double auction (CDA) to allow applications to decide which services to choose among the many on offer. We view an application as a multi-agent system and the cloud as a metropolis where many such applications coexist and adapt. To achieve the required quality of service, we use the Gale-Shapley algorithm to allocate hardware resources efficiently.
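The Gale-Shapley step mentioned in the abstract can be sketched as a stable matching between applications and service providers. The names, preference lists and the two-sided setup below are illustrative assumptions, not details taken from the paper.

```python
def gale_shapley(app_prefs, provider_prefs):
    """Match each application to one provider such that no application and
    provider would both rather be matched to each other (stability)."""
    free = list(app_prefs)                      # applications not yet matched
    next_choice = {a: 0 for a in app_prefs}     # index into each app's list
    match = {}                                  # provider -> application
    # Precompute each provider's ranking of applications for O(1) comparisons.
    rank = {p: {a: i for i, a in enumerate(prefs)}
            for p, prefs in provider_prefs.items()}
    while free:
        app = free.pop()
        provider = app_prefs[app][next_choice[app]]
        next_choice[app] += 1
        if provider not in match:
            match[provider] = app               # provider was free: accept
        elif rank[provider][app] < rank[provider][match[provider]]:
            free.append(match[provider])        # provider prefers the newcomer
            match[provider] = app
        else:
            free.append(app)                    # rejected: propose again later
    return {a: p for p, a in match.items()}

# Two applications competing for two (hypothetical) service providers.
apps = {"app1": ["vmA", "vmB"], "app2": ["vmA", "vmB"]}
vms  = {"vmA": ["app2", "app1"], "vmB": ["app1", "app2"]}
print(gale_shapley(apps, vms))
```

Both applications prefer `vmA`, but `vmA` prefers `app2`, so the stable outcome assigns `vmA` to `app2` and `vmB` to `app1`.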
Using Conceptual Dependency Theory to represent Marathi text
Madhuri M. Deshpande, Dr. Sharad D. Gore
The paper aims at using Conceptual Dependency (CD) Theory [4] (Schank, 1975) to represent dependency amongst words in Marathi sentences. Context is the set of circumstances (time, place, background and environment) within which something takes place. Text has sense and words have meanings. The way a word is used within the text determines its sense. Context sensitivity is inherent in natural languages. A word or group of words in a sentence will always convey a concept depending on the context in which it is used. Hence, if one is able to identify the event under which a statement is made, ambiguities in translation can be resolved more easily. Conceptual Dependency Theory focuses on concepts and understanding of a concept instead of on syntax and structure.
Redundant Network Traffic Elimination Techniques: A comprehensive Survey
Abdullah Al Mamun, Hassan Ali, Sultan Anwar
A large amount of popular content is transferred across networks for different purposes. This transferred content contains a large amount of duplication that consumes bandwidth inefficiently and degrades network performance. Redundant Traffic Elimination (RTE), also known as data de-duplication, detects and removes repeated chunks of data across network flows, protocols, and applications, with the purpose of reducing bandwidth usage. In this paper we survey state-of-the-art RTE techniques (WAN optimization, protocol-independent RTE, PACK, SMART-RE, end-to-end elimination, etc.) implemented in various types of networks. We categorize the techniques from a problem-solution perspective, critically analyze them, and identify gaps that need consideration in the future. Results show that the RTE techniques implemented so far considerably reduce repeated content and improve network performance. However, there are still gaps, especially in mobile networks, wireless and cloud computing, that need attention in the future.
Index Terms—Redundant traffic elimination (RTE), WAN Optimization, Protocol Independent, Data de-duplication
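The core de-duplication loop these techniques share can be sketched as chunking a byte stream, fingerprinting each chunk, and sending a short reference in place of any chunk seen before. Fixed-size chunking and the reference format below are simplifying assumptions; real RTE systems typically use content-defined chunk boundaries.

```python
import hashlib

def deduplicate(stream, chunk_size=8):
    """Split a byte stream into chunks, fingerprint each with SHA-256,
    and transmit a chunk only the first time its fingerprint is seen;
    repeats are replaced by a short reference to the cached copy."""
    cache, wire = set(), []
    for i in range(0, len(stream), chunk_size):
        chunk = stream[i:i + chunk_size]
        fp = hashlib.sha256(chunk).digest()
        if fp in cache:
            wire.append(("ref", fp[:4]))   # send a tiny reference instead
        else:
            cache.add(fp)
            wire.append(("data", chunk))   # first occurrence: send in full
    return wire

msg = b"ABCDEFGHABCDEFGHXYZ"              # middle 8 bytes repeat the first 8
out = deduplicate(msg)
print(sum(1 for kind, _ in out if kind == "ref"))  # → 1 redundant chunk elided
```

The bandwidth saving is the difference between the chunk size and the reference size, multiplied by how often popular content repeats on the wire.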
A Survey: An Introduction to Wireless Sensor Networks
Rupendra Kumar, Nilesh Chandra
Wireless sensor networks (WSNs) have important applications such as remote environment monitoring and target tracking. A WSN is a collection of a large number of sensor nodes which sense data and transmit it to a base station. The base station is the central hub for wireless communication between the network and computers. The sensor nodes are equipped with wireless interfaces through which they communicate with one another to form a network. In this survey, we present an outline of wireless sensor networks and their application domains, including the challenges that should be kept in mind in order to push the technology further.
In this survey we try to bring out recent developments in WSNs beyond what is stated in pre-existing surveys. We base our survey on the leading research projects, journals, standards, technologies, and platforms. This paper tries to help new researchers entering the domain of WSNs by providing a comprehensive and elaborate survey of recent developments. [2]
Keywords: Sensor nodes, Sink node, Wireless Sensor Network.
A Survey of Nature inspired Computing for Energy Optimization in Wireless Sensor Network
Soumitra Das, Dr. Barani S, Dr. Sanjeev Wagh, Dr. S. S. Sonavane
Wireless sensor networks have become increasingly popular due to their extensive array of applications. The lifetime of a sensor node is constrained by its limited battery power. Therefore, designing an effective wireless sensor network that maximizes the lifetime of sensor nodes becomes one of the salient performance goals. Various energy-aware routing protocols are described in the literature for achieving optimal routing. This paper aims to provide a detailed review of existing energy optimization in the context of nature-inspired techniques, taking into consideration parameters related to energy efficiency as the core objective. Finally, we summarize by presenting the broad deficiencies of the various protocols that have been used for energy-efficient routing with respect to nature-inspired computing.
Keywords: Wireless Sensor Networks; energy optimization; Network Lifetime; Optimal Routing; Nature inspired computing; Metaheuristics.
Comparative study on Authenticated Sub Graph Similarity Search in Outsourced Graph Database
Today, security is very important in database systems. Advanced database systems face a great challenge raised by the emergence of massive, complex structural data in bioinformatics, chem-informatics, and many other applications. Since exact matching is often too restrictive, similarity search of complex structures becomes a vital operation that must be supported efficiently. Subgraph similarity search is used in graph databases to retrieve graphs whose subgraphs are similar to a given query graph. It has proven successful in a wide range of applications, including bioinformatics and chem-informatics. Due to the cost of providing efficient similarity-search services on ever-increasing graph data, database outsourcing is an appealing solution for database owners. In this paper, we study authentication techniques that follow the popular filtering-and-verification framework, built around an authentication-friendly metric index called GMTree. Specifically, the similarity search is transformed into a search in a graph metric space, and small verification objects (VOs) are derived to be transmitted to query clients. To further optimize GMTree, we study a sampling-based pivot-selection method and an authenticated version of MCS computation.
Keywords—Graph Database, Sub graph similarity search, query authentication, outsourced database.
CW Optimization for Low Density 802.11 Networks
-Iwona Dolińska, Antoni Masiukiewicz, Grzegorz Rządkowski, Mariusz Jakubowski
The distributed coordination function (DCF) scheme is the basic MAC-layer mechanism in the 802.11 standards. The element which controls access to the channel is the value of the Contention Window (CW). This value is actively managed, and it is very important from the point of view of transmission quality-of-service parameters, especially the average throughput. Although CW has no direct influence on the instantaneous throughput, the situation is different if we consider the average throughput per time unit. If high values of CW are drawn, the result is a long backoff time (TBO), and the average throughput in a normalized time unit will decrease, since less time is available for data transfer. The throughput depends significantly on the available transmission speed; however, this speed becomes less critical if more time is available for data transfer.
In the paper, the authors analyze the influence of the CWmin value on both the collision rate and the TBO value. Decreasing CWmin increases the collision rate while simultaneously decreasing the TBO value and the dead time. Results of the performed simulations show that for small Wi-Fi networks, decreasing CWmin can increase the amount of time available for data transmission.
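The CWmin/TBO relationship described above can be illustrated with a minimal Monte Carlo sketch: a station draws its backoff uniformly from [0, CWmin] slots, so the expected backoff is roughly CWmin/2 slots. The 9 µs slot time is the standard OFDM value; everything else is an illustrative simulation, not the authors' model.

```python
import random

SLOT_TIME_US = 9  # OFDM slot time in 802.11a/g/n, in microseconds

def mean_backoff_us(cw_min, trials=100_000, seed=1):
    """Estimate the average backoff time T_BO for a first transmission
    attempt: draw a slot count uniformly from [0, CWmin] and wait that
    many slot times before transmitting."""
    rng = random.Random(seed)
    total = sum(rng.randint(0, cw_min) for _ in range(trials))
    return total / trials * SLOT_TIME_US

# Halving CWmin roughly halves the dead time spent in backoff,
# leaving more of each time unit for data transfer.
for cw in (15, 7, 3):
    print(cw, round(mean_backoff_us(cw), 1))
```

The trade-off the paper studies is visible here only in one direction: the sketch shows the dead-time reduction, while the accompanying rise in collision rate requires simulating multiple contending stations.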
A Score Point based Email Spam Filtering Genetic Algorithm
-Preeti Trivedi, Sudhir Singh
E-mail is one of the most essential means of communication over the internet today. However, each day we spend several minutes deleting spam advertising products, offering loans at low interest rates, drugs, etc. Though spam filters are capable of identifying spam mails, spammers are constantly evolving newer methods to send spam messages to more and more people. With the advent of technology, mobile devices and other portable electronic devices are now Wi-Fi enabled, and internet telephony VoIP (voice over internet protocol) has made communicating across the world easier and inexpensive. Social networks like Twitter, Facebook, MySpace and Orkut are very common means of connecting with friends across the globe. However, this has opened a newer audience for spammers to exploit. Spam is not just limited to e-mail anymore; it appears on VoIP in the form of unsolicited marketing or advertising phone calls, and as marketing, advertising and pornography links on social networks. Spam is everywhere. This paper presents a genetic-algorithm-based spam filtering technique whose fitness function is based on the score point. We have shown that the considered algorithm provides a good recognition rate of 84% at an FPR of 0.001.
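A score-point fitness function of the kind the abstract mentions can be sketched as follows: a chromosome assigns a weight to each trigger word, an email's score is the sum of the weights of the words it contains, and mail is flagged as spam when the score crosses a threshold. The word list, samples, and GA parameters below are made-up illustrations, not the paper's configuration.

```python
import random

WORDS = ["loan", "free", "drugs", "meeting"]
SAMPLES = [("free loan low interest", 1),     # 1 = spam, 0 = ham
           ("cheap drugs free offer", 1),
           ("project meeting at noon", 0),
           ("meeting notes attached", 0)]

def score(chromosome, text):
    """Score point of a message: sum of weights of matched trigger words."""
    return sum(w for word, w in zip(WORDS, chromosome) if word in text)

def fitness(chromosome, threshold=1.0):
    """Fraction of sample messages classified correctly by the score rule."""
    hits = sum((score(chromosome, t) >= threshold) == bool(label)
               for t, label in SAMPLES)
    return hits / len(SAMPLES)

def evolve(pop_size=20, gens=30, seed=0):
    """Tiny GA: truncation selection plus Gaussian mutation of weights."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in WORDS] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = [[g + rng.gauss(0, 0.1) for g in rng.choice(parents)]
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(round(fitness(best), 2))
```

A real filter would evolve weights over hundreds of features and validate on held-out mail; the structure (score, threshold, fitness, selection, mutation) is what carries over.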
Privacy Concealment and Data Security over Outsourced Cloud Environments
-Sunit Ranjan Poddar, Sanika Sarang Kamtekar, Ameya Ravindra Sawant, Diksha PawanKumar Jangid, Prof. Mandar Mokashi
In an era of wireless internet, the use of mobile devices and cell phones has grown to a great extent. In this new digital world, the availability and accessibility of data over the internet plays an important role. Dynamic storage systems are preferred for storing huge amounts of diverse data. Cloud servers are capable of processing and storing any amount of data required; furthermore, they are flexible and scalable to a great extent. The information on these servers is of huge importance to the data owners and thus requires protective strategies. We therefore propose a system consisting of a group of techniques that together provide secure data storage and data integrity. We intend to use the RSA algorithm for encrypting and decrypting the data and for digital signatures. We also intend to use SHA-1 for generating hash keys. These hash keys support our goals of redundancy checking as well as maintaining data integrity. Our system also provides de-duplication of user data, which promises efficient bandwidth utilization without harming the privacy of the user. Load balancing is another feature that we include in our system for better performance and storage management. We strongly believe that the existing systems do not provide the features included in our proposed framework. In addition, our system promises improved time and space complexities.
Keywords: Data Integrity, De-duplication, Load Balancing, Hash-keys, RSA, Message Digest algorithm, Secure Hash algorithm, Third Party Auditor, Public Auditing
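The dual role of the hash keys (de-duplication key and integrity fingerprint) can be sketched as below. The `CloudStore` class and its methods are hypothetical illustrations; the RSA encryption and digital-signature steps of the proposed system are omitted here.

```python
import hashlib

def hash_key(data: bytes) -> str:
    """SHA-1 digest used both as a de-duplication key and as an
    integrity fingerprint for the stored bytes."""
    return hashlib.sha1(data).hexdigest()

class CloudStore:
    def __init__(self):
        self.blobs = {}           # hash key -> stored bytes

    def upload(self, data: bytes):
        key = hash_key(data)
        fresh = key not in self.blobs
        if fresh:                 # duplicate uploads cost no extra storage
            self.blobs[key] = data
        return key, fresh         # fresh=False means bandwidth was saved

    def verify(self, key: str) -> bool:
        """Integrity check: stored bytes must still hash to their key."""
        return hash_key(self.blobs[key]) == key

store = CloudStore()
k1, fresh1 = store.upload(b"report.pdf contents")
k2, fresh2 = store.upload(b"report.pdf contents")  # duplicate upload
print(fresh1, fresh2, store.verify(k1))            # → True False True
```

Note that SHA-1 is no longer collision-resistant; a deployment today would typically substitute SHA-256 with the same structure.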
Agent-Based Approaches for Behavioural Modelling in Military Simulations
-Gaurav Chaudhary
Behavioral modeling of combat entities in military simulations, by creating synthetic agents to satisfy various battle scenarios, is an important problem. Conventional modeling tools are not always sufficient to handle complex situations requiring adaptation. To deal with this, Agent-Based Modeling (ABM) is employed, as agents exhibit autonomous behavior by adapting and varying their behavior during the course of the simulation while achieving their goals. Creating synthetic agents by means of Computer Generated Forces (CGF) is a relatively recent approach to modeling the behavior of combat entities for more realistic training and effective military planning. CGFs, sometimes also referred to as Semi-Automated Forces (SAF), enable the creation of high-fidelity simulations. Agents are used to control and augment the behavior of CGF entities, converting them into Intelligent CGF (ICGF). The intelligent agents can be modeled to exhibit cognitive abilities.
For this review paper, an extensive set of papers on the state of the art in agent-based modeling approaches and applications was surveyed. The paper assimilates the issues involved in ABM, with CGF as an important component of it. It reviews modeling aspects with respect to the inter-relationship between ABM and CGF, which is required to carry out behavioral modeling. Important CGFs have been examined, and a list with their significant features is given. Another issue reviewed is how synthetic agents having different capabilities are implemented at different battle levels. A brief overview of state-of-the-art integrated cognitive architectures is given, together with a list of significant cognitive applications based on them and their features. At the same time, the maturity of ABM in agent-based applications has also been considered.
Keywords: ABM, CGF, Behavior Modeling
A Novel Threshold Estimation Based Face Recognition Technique
-Aparna Tiwari, Ram Kripal Mishra
In recent times, biometric-based authentication has gained a lot of attention due to its advantages. The traditional approaches are PIN or password based, which are no longer so secure due to hacking; moreover, passwords and PINs can be stolen. To counteract such problems, a biometric identifier can be used which is unique to each user. In this context, face and fingerprint are the most preferred biometrics. In this paper, a face recognition algorithm based on Principal Component Analysis (PCA), which is very popular in face recognition, is discussed. This method is based on eigenvalues and eigenvectors. Only the principal components are considered, and components away from the principal axes are discarded; thus the dimensionality reduces significantly. A major problem with biometrics is the selection of a proper threshold, which is in general chosen heuristically. In this paper a formula based on the eigenvalues is derived, and the obtained results improve significantly.
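The PCA pipeline described above (keep the eigenvectors with the largest eigenvalues, project into the reduced space, match by nearest neighbour, accept or reject against a threshold) can be sketched as follows. The gallery data is random, and the eigenvalue-based threshold here is an illustrative stand-in, not the formula derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "gallery": 6 flattened face images of 16 pixels each (rows = images).
gallery = rng.normal(size=(6, 16))

# 1. Centre the data and take eigenvectors of the covariance matrix.
mean_face = gallery.mean(axis=0)
centred = gallery - mean_face
eigvals, eigvecs = np.linalg.eigh(centred.T @ centred)

# 2. Keep only the k principal components (largest eigenvalues);
#    components far from the principal axes are discarded.
k = 3
components = eigvecs[:, -k:]              # eigh sorts eigenvalues ascending
projected = centred @ components          # gallery in the reduced space

def recognize(probe, threshold):
    """Nearest-gallery match in eigenspace; reject if distance > threshold."""
    p = (probe - mean_face) @ components
    dists = np.linalg.norm(projected - p, axis=1)
    best = int(np.argmin(dists))
    return best if dists[best] <= threshold else None

# Illustrative eigenvalue-derived threshold (not the paper's formula).
threshold = np.sqrt(eigvals[-k:].sum())

print(recognize(gallery[2], threshold))   # → 2 (matches itself)
```

The dimensionality reduction is the point: matching happens in k dimensions instead of the full pixel count, and the threshold decides between "recognized" and "unknown face".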
Big Data: Leveraging Hadoop platform to process Semi and Unstructured data
-Pratiba D, Dr. Shobha G, Vishwas C N
The use of the internet has led to the generation of large amounts of data, which is termed big data. Hadoop is a tool widely used for processing big data. Big data comprises complex data which may be in a completely unstructured or semi-structured format. In this paper we discuss how semi-structured data can be processed in Hadoop. We discuss the approaches involved and also how the MapReduce model helps in this processing.
Keywords— Big data; Hadoop; Semi-structured data; MapReduce; Unstructured data
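The MapReduce flow over semi-structured input can be sketched in-process: each record is a JSON line, the mapper emits (key, value) pairs, the shuffle groups by key, and the reducer aggregates each group. The field names ("category", "views") are illustrative, and a real Hadoop job would distribute these phases across the cluster.

```python
import json
from collections import defaultdict

lines = [
    '{"category": "sports", "views": 10}',
    '{"category": "news",   "views": 7}',
    '{"category": "sports", "views": 5}',
]

def mapper(line):
    rec = json.loads(line)                # parse the semi-structured record
    yield rec["category"], rec["views"]   # emit (key, value)

def reducer(key, values):
    return key, sum(values)               # aggregate one key's group

# Shuffle phase: group mapper output by key.
groups = defaultdict(list)
for line in lines:
    for key, value in mapper(line):
        groups[key].append(value)

result = dict(reducer(k, vs) for k, vs in groups.items())
print(result)  # → {'sports': 15, 'news': 7}
```

Because each record carries its own structure, the mapper can parse fields on the fly, which is exactly why MapReduce suits semi-structured data that a fixed relational schema would reject.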
Preserving Users Privacy in Personalized Web Search
-Kalyani R. Kshirsagar, Prof. N. R. Wankhade
Web search engines are a very important portal for ordinary people looking for useful information on the web. A generic search engine cannot identify the user's search goals behind a query if the user enters improper or ambiguous keywords, or lacks the ability to express exactly what they need. Personalized web search overcomes these problems. Personalized web search (PWS) is an effective search technique that aims to provide customized search results and also improves search quality. PWS is the ability to identify the different needs of users who issue the same query, carrying out data retrieval according to each user's interests. Users hesitate to disclose their private preference information to search engines, which has become a major issue in personalized web search. Thus, a balance must be struck between the user's privacy and the quality of search. In this paper we survey user attitudes towards privacy protection, and we provide a methodology for securing rich user profiles while transferring the query to the server side using the cryptographic Random4 algorithm. User profiles summarize a user's specific interests into a hierarchical organisation according to particular interests. Two parameters, minDetail and expRatio, are used for specifying the privacy requirements, to help the user choose the content and degree of detail of the profile information exposed to the search engine.
Keywords: personalized web search, generic search, user profile, user search behaviour
Qualitative and Quantitative Evaluations of L2RLS Algorithm
Online object tracking is a major challenge in computer vision due to the difficulty of accounting for appearance changes of the object. For efficient and robust tracking, we propose a fast tracking algorithm in which object tracking is achieved by solving l2-regularized least squares (l2-RLS) problems in a Bayesian inference framework. Compared with complex l1-based algorithms, it is very fast without loss of accuracy in handling the tracking problem. In order to reduce tracking drift, this work presents a method that takes occlusion and motion blur into account rather than simply including image observations for model update, and compares the proposed method with several existing methods (PCA, SRPCA, MIL, PN, VTD, Frag, l1-minimization) using the overlap-rate and center-error parameters.
Keywords: l2-RLS, PCA, Frag Tracker, l1/l2 sparse coding, object representation, object tracking, sparse prototype, overlap rate, center error.
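The speed claim rests on the fact that l2-regularized least squares has a closed-form solution, one linear solve, whereas l1-regularized coding needs an iterative solver. A minimal sketch of that solve (the appearance-model framing in the comments is an illustrative assumption, not the paper's exact formulation):

```python
import numpy as np

def l2_rls(A, y, lam=0.1):
    """Closed-form l2-regularized least squares:
    minimizes ||A x - y||^2 + lam * ||x||^2 via a single linear solve."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

# Toy appearance model: columns of A are template "prototypes",
# y is the observed target patch, x gives reconstruction coefficients.
rng = np.random.default_rng(0)
A = rng.normal(size=(20, 5))
x_true = np.array([1.0, 0.0, -2.0, 0.5, 0.0])
y = A @ x_true
x = l2_rls(A, y, lam=1e-6)
print(np.round(x, 3))   # close to x_true for small lam
```

In a tracker this solve is run per candidate patch per frame, so replacing an iterative l1 solver with one `np.linalg.solve` is where the speedup comes from.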
Preserving Privacy in Social Tagging
-Pradnya M. Deshmane, Prof. N. R. Wankhade
Our system works on the collaborative tagging technique, which is very popular in online and social networking systems, and addresses the bottleneck areas of some previous tagging methods. Our system contains a module that extends the capacity and features of the tagging functionality. It includes a policy layer that analyzes collaborative tags before they come into action. This layer is required to consider user preferences, deliberately defining sources on the basis of the tag categories associated with them and other parameters concerning their trustworthiness. Along with this, our system tries to preserve the user's sensitive information, which is predictable from his feedback and comments; hence our system also provides a privacy-protection layer. There is also an issue in some social networking sites, such as Facebook, which use a user's profile to analyze and extract behavioral aspects of the user. Such a profile may sometimes contain sensitive information. Our system avoids misuse of such profiles by using a technique called tag suppression. Hence our system first filters tags using the policy layer, then applies the privacy layer to the user's interests, and finally shares the respective tag derived from those interests. We contribute a phase with which a user can view the social-network status/reputation of a particular user. Based on this status/reputation, we can decide whether to allow such a user or not. It is a completely preventive approach that helps the user decide whether or not to block a particular user.
Keywords - Tagging, Collaborative Tagging, Policy Layer, Privacy Layer
Maximum Matched Pattern-based Topic Model in Information Filtering
-S. K. Thakare, M. G. Bhandare
Users always need information, and that information is gathered from collections of documents. From a collection of documents, the information useful to a user is generated by using information filtering. Many methods are used for information filtering, but these suffer from drawbacks, so in this paper we propose a Maximum Matched Pattern-based Topic Model. Important features of the model are that information is generated in terms of multiple topics and that user information is represented in terms of patterns. Using the topic model, patterns are generated and organized by their statistical and taxonomic features. The most characteristic and representative pattern is our maximum matched pattern. This maximum matched pattern is used to find relevant documents for the user's information need; it is used for information filtering and separates relevant documents from irrelevant ones.
Key Words: Topic model, information filtering, pattern mining, relevance ranking, user interest model
Automatic Web Scraping using Visual Selectors
-Rashmi Bhosale, Prashant Jawalkar
The amount of information that is currently available on the net grows at a very fast pace; thus the web can be considered the largest knowledge repository ever developed and made available to the public. A web data extraction system is a system that extracts data from web pages automatically. Web data analysis applications, such as extracting mutual fund information from a website or extracting the daily opening and closing prices of a stock from a web page, involve web data extraction. Early techniques constructed wrappers to visit those sites and collect data, which is time-consuming. Thus a technique called Automatic Web Scraping Using Visual Selectors (AWSUVS) is proposed. For selected data sections, AWSUVS discovers extraction patterns automatically. AWSUVS uses visual cues to identify data records while ignoring noise items such as advertisements and navigation bars.
Smart Grid Architecture for Comprehensive Dynamic Pricing for PHEVs
Plug-in Hybrid Electric Vehicles (PHEVs) are vehicles that depend on energy from the power grid. When such vehicles are deployed at large scale, there will be great demand for energy during peak hours, and there will be different levels of usage based on time. Toward this end, optimizing the supply-demand balance and the pricing for PHEVs is a very challenging problem to be addressed. Recently, a solution named Distributed Dynamic Pricing (D2P) was proposed. The solution was based on a smart-grid architecture in order to optimize the energy usage patterns exhibited by PHEVs. Home microgrid energy and foreign microgrid energy are the two kinds of services rendered, and the pricing differs based on whether the vehicles are in their home location or roaming. The aim of the method was to achieve cost-effective charging and discharging of power for PHEVs. In this paper we build a prototype application to demonstrate the proof of concept. The empirical results are encouraging.
Index Terms – Smart grid, electric-grid, PHEV, dynamic pricing
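The two-tier pricing idea (peak-hour demand plus home versus foreign microgrid service) can be illustrated with a deliberately simple rate function. All rates and the function shape below are made-up example numbers, not values from the D2P scheme.

```python
PEAK_HOURS = range(17, 21)        # 5 pm - 9 pm, illustrative peak window

def price_per_kwh(hour, roaming, base=0.10, peak_markup=0.05,
                  roaming_markup=0.03):
    """Per-kWh price for a PHEV charging session: a base rate, marked up
    during peak hours, with a further surcharge when the vehicle charges
    in a foreign microgrid while roaming."""
    p = base
    if hour in PEAK_HOURS:
        p += peak_markup          # discourage peak-hour charging
    if roaming:
        p += roaming_markup       # foreign-microgrid service costs more
    return round(p, 2)

print(price_per_kwh(18, roaming=True))   # → 0.18
print(price_per_kwh(3, roaming=False))   # → 0.1
```

A dynamic scheme like D2P would set these markups from observed demand rather than from constants, but the interface (time and location in, price out) is the same.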
IJCTA © Copyrights 2015| All Rights Reserved.

This work is licensed under a Creative Commons Attribution 2.5 India License.