ABSTRACT

The use of cloud computing has increased rapidly in many organizations. Cloud
computing provides many benefits in terms of low cost and accessibility of data.
Ensuring the security of cloud computing is a major factor in the cloud computing
environment, as users often store sensitive information with cloud storage
providers but these providers may be untrusted. Dealing with “single cloud”
providers is predicted to become less popular with customers due to risks of service
availability failure and the possibility of malicious insiders in the single cloud. A
movement towards “multi-clouds”, or in other words, “interclouds” or “cloud-of-clouds”,
has emerged recently.
This paper surveys recent research related to single and multi-cloud security and
addresses possible solutions. It is found that research into the use of multi-cloud
providers to maintain security has received less attention from the research
community than the use of single clouds. This work aims to promote the use of
multi-clouds due to their ability to reduce the security risks that affect
cloud computing users.



ABSTRACT

Data deduplication is one of the most important data compression techniques for eliminating duplicate copies of repeating data, and it has been widely used in cloud storage to reduce the amount of storage space and save bandwidth. To protect the confidentiality of sensitive data while supporting deduplication, the convergent encryption technique has been proposed to encrypt the data before outsourcing. To better protect data security, this paper makes the first attempt to formally address the problem of authorized data deduplication. Different from traditional deduplication systems, the differential privileges of users are further considered in the duplicate check besides the data itself. We also present several new deduplication constructions supporting authorized duplicate check in a hybrid cloud architecture. Security analysis demonstrates that our scheme is secure in terms of the definitions specified in the proposed security model. As a proof of concept, we implement a prototype of our proposed authorized duplicate check scheme and conduct testbed experiments using our prototype. We show that our proposed authorized duplicate check scheme incurs minimal overhead compared to normal operations.
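As a rough illustration of the convergent encryption step mentioned above, the Python sketch below derives the key from the plaintext itself, so identical files produce identical ciphertexts and duplicate-check tags. The XOR keystream is a toy stand-in for a real block cipher such as AES, and the function names are illustrative, not the paper's API.

```python
import hashlib

def _keystream(key: bytes, length: int) -> bytes:
    # Derive a keystream by hashing key || counter (toy stand-in for AES-CTR).
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def convergent_encrypt(message: bytes):
    # Convergent key: derived deterministically from the plaintext itself,
    # so identical files yield identical ciphertexts (enabling deduplication).
    key = hashlib.sha256(message).digest()
    ciphertext = bytes(m ^ k for m, k in zip(message, _keystream(key, len(message))))
    # Duplicate-check tag: hash of the ciphertext.
    tag = hashlib.sha256(ciphertext).hexdigest()
    return key, ciphertext, tag

def convergent_decrypt(key: bytes, ciphertext: bytes) -> bytes:
    return bytes(c ^ k for c, k in zip(ciphertext, _keystream(key, len(ciphertext))))
```

Because two uploads of the same file yield the same tag, the storage server can detect a duplicate without ever seeing the plaintext.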



ABSTRACT

Wireless body area network (WBAN) has been recognized as one of the promising wireless sensor technologies for improving healthcare service, thanks to its capability of seamlessly and continuously exchanging medical information in real time. However, the lack of a clear in-depth defense line in such a new networking paradigm would make its potential users worry about the leakage of their private information, especially to those unauthenticated or even malicious adversaries. In this paper, we present a pair of efficient and light-weight authentication protocols to enable remote WBAN users to anonymously enjoy healthcare service. In particular, our authentication protocols are rooted with a novel certificateless signature (CLS) scheme, which is computational, efficient, and provably secure against existential forgery on adaptively chosen message attack in the random oracle model. Also, our designs ensure that application or service providers have no privilege to disclose the real identities of users. Even the network manager, which serves as private key generator in the authentication protocols, is prevented from impersonating legitimate users. The performance of our designs is evaluated through both theoretic analysis and experimental simulations, and the comparative studies demonstrate that they outperform the existing schemes in terms of better trade-off betweendesirable security properties and computational overhead, nicely meeting the needs of WBANs.



ABSTRACT

Load balancing in the cloud computing environment has an important impact on performance. Good load balancing makes cloud computing more efficient and improves user satisfaction. This article introduces a better load balancing model for the public cloud based on the cloud partitioning concept, with a switch mechanism to choose different strategies for different situations. The algorithm applies game theory to the load balancing strategy to improve efficiency in the public cloud environment.
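The switch mechanism described above might be sketched as follows. The load thresholds and the per-status strategies are assumptions for illustration, not the paper's actual parameters.

```python
def partition_status(loads, idle_t=0.4, overload_t=0.8):
    # Classify a partition by its average node load (thresholds assumed).
    avg = sum(loads) / len(loads)
    if avg < idle_t:
        return "idle"
    if avg > overload_t:
        return "overloaded"
    return "normal"

def assign(loads, rr_counter):
    # Switch mechanism: pick a strategy according to the partition's status.
    status = partition_status(loads)
    if status == "overloaded":
        return None, rr_counter          # forward the job to another partition
    if status == "idle":
        node = rr_counter % len(loads)   # cheap round robin is good enough
        return node, rr_counter + 1
    # "normal": a best-response style choice, send to the least-loaded node
    return min(range(len(loads)), key=loads.__getitem__), rr_counter
```

Returning `None` signals the central controller to route the job to a different partition.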



ABSTRACT

     PRISON MANAGEMENT SYSTEM is an efficient application for maintaining prisoner information. More than a data storage program, it helps you manage the prisoners. It offers a wide variety of reports that give you exactly the information you need. Users can add new prisoner details and details about new cases, and the system enables tracking of every prisoner and their activities. There are three kinds of users: Data Administrator, Police Officials, and Administrator. The Administrator is considered the super user and has fuller access and rights over the system than anybody else. The Administrator can view the details of prisoners, cases, the release diary, the parole register, and interview requests, as well as the in-out register. Police Officials have rights to view the nominal rolls, cases, parole, and interview requests; their role is to check the nominal roll, which comprises the complete details of prisoners, and to fix dates for court hearings as per the court’s advice. Finally, the Data Administrator prepares reports and enters data on behalf of the Administrator. They have the privilege to add and update data as per the Administrator’s concern. Interview requests from relatives are routed through the Data Administrator to the Administrator, and the status of reports is reflected appropriately.



ABSTRACT

     AUTOMATION OF COLLECTORATE is an efficient application for performing the tasks and services provided by a district collector office. The District Collector is the administrative head of a district in a state of India. The administrator is considered the super user of this system, which is indeed the collector or a higher official. Members of the public and end users are provided with an online service for applying for forms such as community, income, death, and patta certificates. The schemes and services available in the collectorate are also presented in full, with every detail of each scheme mentioned in this system. A user can register with the system and, once registered, can sign in to apply for forms and participate in tender offers announced by the administrator. An online petition filing system is available to every end user, even people who have not registered with the system. Online petition filing is also available in the registered users’ interface, where they can check the status of their petition: whether action has been taken on it or it has been rejected for some reason.



ABSTRACT

Cheap ubiquitous computing enables the collection of massive amounts of personal data in a wide variety of domains. Many organizations aim to share such data while obscuring features that could disclose personally identifiable information. Much of this data exhibits weak structure (e.g., text), such that machine learning approaches have been developed to detect and remove identifiers from it. While learning is never perfect, and relying on such approaches to sanitize data can leak sensitive information, a small risk is often acceptable. Our goal is to balance the value of published data and the risk of an adversary discovering leaked identifiers. We model data sanitization as a game between 1) a publisher who chooses a set of classifiers to apply to data and publishes only instances predicted as non-sensitive and 2) an attacker who combines machine learning and manual inspection to uncover leaked identifying information. We introduce a fast iterative greedy algorithm for the publisher that ensures a low utility for a resource-limited adversary. Moreover, using five text data sets we illustrate that our algorithm leaves virtually no automatically identifiable sensitive instances for a state-of-the-art learning algorithm, while sharing over 93% of the original data, and completes after at most 5 iterations.
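A much-simplified sketch of the iterative greedy publishing loop is shown below. In the paper, each iteration trains a fresh classifier on the still-published data; here a fixed set of keyword detectors stands in for the learned classifiers, and all names are illustrative.

```python
def sanitize(instances, detectors, max_iters=5):
    # Greedy iterative publishing: repeatedly drop any instance flagged as
    # sensitive by the current detector set until a pass flags nothing
    # (the abstract reports convergence within at most 5 iterations).
    published = list(instances)
    for _ in range(max_iters):
        flagged = [x for x in published if any(d(x) for d in detectors)]
        if not flagged:
            break
        published = [x for x in published if x not in flagged]
    return published
```

Only the instances surviving every pass are released, bounding what a resource-limited adversary can recover.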



ABSTRACT

Jammers can severely disrupt communications in wireless networks, and jammers’ position information allows the defender to actively eliminate jamming attacks. Thus, in this paper, we aim to design a framework that can localize one or multiple jammers with high accuracy. Most existing jammer-localization schemes utilize indirect measurements (e.g., hearing ranges) affected by jamming attacks, which makes it difficult to localize jammers accurately. Instead, we exploit a direct measurement: the strength of jamming signals (JSS). Estimating JSS is challenging, as jamming signals may be embedded in other signals. As such, we devise an estimation scheme based on the ambient noise floor and validate it with real-world experiments. To further reduce estimation errors, we define an evaluation feedback metric to quantify the estimation errors and formulate jammer localization as a nonlinear optimization problem, whose global optimal solution is close to the jammers’ true positions. We explore several heuristic search algorithms for approaching the global optimal solution, and our simulation results show that our error-minimizing framework achieves better performance than the existing schemes. In addition, our error-minimizing framework can utilize indirect measurements to obtain a better location estimate compared with prior work.
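The error-minimizing formulation can be illustrated with a toy sketch: given JSS readings at known monitor positions and a log-distance path-loss model, search for the candidate position that minimizes the squared residuals. The exhaustive grid search below stands in for the heuristic search algorithms, and the path-loss parameters are assumed values.

```python
import math

def predicted_jss(jammer, monitor, p0=30.0, n=2.0):
    # Log-distance path-loss model (assumed parameters p0, n).
    d = max(math.dist(jammer, monitor), 1e-6)
    return p0 - 10.0 * n * math.log10(d)

def localize(monitors, jss_readings, grid=50, size=100.0):
    # Exhaustive grid search standing in for a heuristic search; it minimizes
    # the squared error between measured and model-predicted JSS.
    best, best_err = None, float("inf")
    for i in range(grid + 1):
        for j in range(grid + 1):
            cand = (i * size / grid, j * size / grid)
            err = sum((predicted_jss(cand, m) - r) ** 2
                      for m, r in zip(monitors, jss_readings))
            if err < best_err:
                best, best_err = cand, err
    return best
```

With noise-free readings generated by the same model, the minimizer coincides with the true jammer position.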



ABSTRACT

In distributed transactional database systems deployed over cloud servers, entities cooperate to form proofs of authorization that are justified by collections of certified credentials. These proofs and credentials may be evaluated and collected over extended time periods, under the risk of having the underlying authorization policies or the user credentials in inconsistent states. It therefore becomes possible for policy-based authorization systems to make unsafe decisions that might threaten sensitive resources. In this paper, we highlight the criticality of the problem. We then define the notion of trusted transactions when dealing with proofs of authorization. Accordingly, we propose several increasingly stringent levels of policy consistency constraints, and present different enforcement approaches to guarantee the trustworthiness of transactions executing on cloud servers. We propose a Two-Phase Validation Commit protocol as a solution, a modified version of the basic Two-Phase Commit protocol. We finally analyze the different approaches presented, using both analytical evaluation of the overheads and simulations, to guide decision makers as to which approach to use.
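A minimal sketch of the validation idea is given below, assuming a simple participant interface (prepare/commit/abort) and a policy authority that reports the latest policy version. This illustrates the concept only and is not the paper's full protocol.

```python
def two_phase_validation_commit(participants, policy_authority):
    # Phase 1: each participant votes on the transaction AND reports the
    # version of the authorization policy it used for its proofs.
    votes, versions = [], []
    for p in participants:
        vote, version = p.prepare()
        votes.append(vote)
        versions.append(version)
    # Validation: all must vote yes AND every proof must rest on the latest
    # (consistent) policy version known to the policy authority.
    latest = policy_authority.latest_version()
    decision = all(votes) and all(v == latest for v in versions)
    # Phase 2: broadcast the global commit/abort decision.
    for p in participants:
        p.commit() if decision else p.abort()
    return decision
```

A transaction whose proofs were formed under a stale policy is thus aborted even if every participant voted to commit.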



ABSTRACT

This feature is made available to the public for indirect interaction with the police. This system registers complaints from people online and is helpful to the police department in identifying criminals. In this system, any person can register their complaint online. The aim of this project is to develop an E-cops reporting and management system which is easily accessible to the public, the police department, and the administrative department.

Generally, many crimes seen by the public do not reach the police for reasons such as fear, lack of time, or ignorance. As a result, many cases are never even reported to the police station. Even when some cases are registered, they are not investigated properly due to lack of evidence and public cooperation. This project helps the public report crimes to the police without any fear and in good time.

This is also helpful for higher authorities of police to have an overview about the progress of the investigation.




ABSTRACT

With the wide deployment of public cloud computing infrastructures, using clouds to host data query services has become an appealing solution for its advantages in scalability and cost saving. However, some data might be so sensitive that the data owner does not want to move it to the cloud unless data confidentiality and query privacy are guaranteed. On the other hand, a secured query service should still provide efficient query processing and significantly reduce the in-house workload to fully realize the benefits of cloud computing. We propose the RASP data perturbation method to provide secure and efficient range query and kNN query services for protected data in the cloud. The RASP data perturbation method combines order preserving encryption, dimensionality expansion, random noise injection, and random projection to provide strong resilience to attacks on the perturbed data and queries. It also preserves multidimensional ranges, which allows existing indexing techniques to be applied to speed up range query processing. The kNN-R algorithm is designed to work with the RASP range query algorithm to process kNN queries. We have carefully analyzed the attacks on data and queries under a precisely defined threat model and realistic security assumptions. Extensive experiments have been conducted to show the advantages of this approach in efficiency and security.
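The dimensionality-expansion and noise-injection steps might be sketched as below. The order-preserving encryption component is omitted, and the secret matrix A is simply assumed to be invertible and known only to the data owner; this is a simplification for illustration, not the RASP construction itself.

```python
import random

def perturb(x, A, noise_scale=0.01):
    # Dimensionality expansion: append a random value and the constant 1,
    # then inject small random noise and apply the secret matrix A.
    v = list(x) + [random.random(), 1.0]
    v = [xi + random.uniform(-noise_scale, noise_scale) for xi in v]
    # Plain matrix-vector product: y = A * v (A must be invertible).
    return [sum(a * b for a, b in zip(row, v)) for row in A]
```

Without knowledge of A, the perturbed vectors reveal neither the original values nor their dimensionality.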



ABSTRACT

Successful deployment of Electronic Health Records helps improve patient safety and quality of care, but it has the prerequisite of interoperability between the Health Information Systems (HIS) at different hospitals. The Clinical Document Architecture (CDA) developed by HL7 is a core document standard to ensure such interoperability, and propagation of this document format is critical for achieving it. Unfortunately, hospitals are reluctant to adopt an interoperable HIS due to its deployment cost, except in a handful of countries. A problem arises even when more hospitals start using the CDA document format, because the data scattered across different documents are hard to manage. In this paper, we describe our cloud-based CDA document generation and integration Open API service, through which hospitals can conveniently generate CDA documents without having to purchase proprietary software. Our CDA document integration system integrates multiple CDA documents per patient into a single CDA document, so that physicians and patients can browse the clinical data in chronological order. Our system for CDA document generation and integration is based on cloud computing, and the service is offered as an Open API. Developers using different platforms can thus use our system to enhance interoperability.
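A hypothetical flavor of such programmatic document generation is sketched below. This is not a conformant HL7 CDA document (a real one carries many more required header elements such as templateId, code, and confidentialityCode); it only illustrates building the document skeleton in code rather than with proprietary software.

```python
import xml.etree.ElementTree as ET

def build_cda_skeleton(patient_id: str, note_text: str) -> str:
    # Illustrative skeleton only -- a conformant HL7 CDA document requires
    # many more header elements than are shown here.
    root = ET.Element("ClinicalDocument", xmlns="urn:hl7-org:v3")
    record_target = ET.SubElement(root, "recordTarget")
    patient_role = ET.SubElement(record_target, "patientRole")
    ET.SubElement(patient_role, "id", extension=patient_id)
    body = ET.SubElement(ET.SubElement(root, "component"), "structuredBody")
    section = ET.SubElement(ET.SubElement(body, "component"), "section")
    ET.SubElement(section, "text").text = note_text
    return ET.tostring(root, encoding="unicode")
```

An integration service could merge several such per-visit documents for one patient into a single document with sections ordered chronologically.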



ABSTRACT

Offering strong data protection to cloud users while enabling rich applications is a challenging task. We explore a new cloud platform architecture called Data Protection as a Service, which dramatically reduces the per-application development effort required to offer data protection, while still allowing rapid development and maintenance.



ABSTRACT

Trust management is one of the most challenging issues for the adoption and growth of cloud computing. The highly dynamic, distributed, and non-transparent nature of cloud services introduces several challenging issues such as privacy, security, and availability. Preserving consumers’ privacy is not an easy task due to the sensitive information involved in the interactions between consumers and the trust management service. Protecting cloud services against their malicious users (e.g., such users might give misleading feedback to disadvantage a particular cloud service) is a difficult problem. Guaranteeing the availability of the trust management service is another significant challenge because of the dynamic nature of cloud environments. In this article, we describe the design and implementation of CloudArmor, a reputation-based trust management framework that provides a set of functionalities to deliver Trust as a Service (TaaS), which includes i) a novel protocol to prove the credibility of trust feedbacks and preserve users’ privacy, ii) an adaptive and robust credibility model for measuring the credibility of trust feedbacks to protect cloud services from malicious users and to compare the trustworthiness of cloud services, and iii) an availability model to manage the availability of the decentralized implementation of the trust management service. The feasibility and benefits of our approach have been validated by a prototype and experimental studies using a collection of real-world trust feedbacks on cloud services.
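At its simplest, credibility-weighted aggregation of trust feedback can be expressed as a weighted mean, where credibility weights discount feedback from suspected malicious users. This sketch is an illustration of the general idea, not CloudArmor's actual credibility model.

```python
def trust_score(feedbacks):
    # feedbacks: list of (rating, credibility) pairs; ratings in [0, 1].
    # Low-credibility (suspected malicious) feedback contributes little.
    num = sum(r * c for r, c in feedbacks)
    den = sum(c for _, c in feedbacks)
    return num / den if den else 0.0
```

A burst of misleading ratings from low-credibility users then barely moves a service's aggregate trust score.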



ABSTRACT

We propose a new design for large-scale multimedia content protection systems. Our design leverages cloud infrastructures to provide cost efficiency, rapid deployment, scalability, and elasticity to accommodate varying workloads. The proposed system can be used to protect different multimedia content types, including 2-D videos, 3-D videos, images, audio clips, songs, and music clips. The system can be deployed on private and/or public clouds. Our system has two novel components: (i) a method to create signatures of 3-D videos, and (ii) a distributed matching engine for multimedia objects. The signature method creates robust and representative signatures of 3-D videos that capture the depth signals in these videos; it is computationally efficient to compute and compare, and it requires little storage. The distributed matching engine achieves high scalability and is designed to support different multimedia objects. We implemented the proposed system and deployed it on two clouds: the Amazon cloud and our private cloud. Our experiments with more than 11,000 3-D videos and 1 million images show the high accuracy and scalability of the proposed system. In addition, we compared our system to the protection system used by YouTube; our results show that the YouTube protection system fails to detect most copies of 3-D videos, while our system detects more than 98% of them. This comparison shows the need for the proposed 3-D signature method, since the state-of-the-art commercial system was not able to handle 3-D videos.



ABSTRACT

CITS is a powerful human resource tool for maintaining employee and company information. More than a data storage program, CITS helps you manage your employees. CITS offers a wide variety of reports that give you exactly the information you need. View payroll information by department, or find everyone who is receiving company benefits. CITS gives you the power of information with different report categories.

 

CITS allows you to add and remove employees from the program and provides access to all employee information categories, from address history to work information. Organization files keep track of your company information. From this screen you can create, modify, and remove company data. You can adjust data for company benefits, departments, evaluation categories, and positions. It is a good idea to define your departments and positions before adding employees. You must also set up your company benefits and evaluations before adding them to your employee files. When you create a new category, such as an additional department or position, it is immediately available for selection in every applicable employee screen. Checklists assist you in office management by creating a list of items that need to be completed for a particular event. For example, you may want to make a checklist of everything that needs to be done when someone is hired.

 

CITS allows you to preview and print different reports that range from individual work history to department headcounts. Each report screen has different options. You can change the name of the report by editing the Report Title field. This will not change the name of the report in the drop-down box, only the name as it appears on the report. The header includes report information such as the report title and date. Your company name (if selected) and report title will appear in every header. If you choose to have your company logo displayed on your report, it will appear in the top left corner. The date range of the report is displayed below the title. The date of the report is in the top right corner, directly above the page number. The header may also contain information such as the employee name, department, and selection criteria.



ABSTRACT

In recent years, the boundaries between e-commerce and social networking have become increasingly blurred. Many e-commerce websites support the mechanism of social login, where users can sign on to the websites using their social network identities such as their Facebook or Twitter accounts. Users can also post their newly purchased products on microblogs with links to the e-commerce product web pages. In this paper, we propose a novel solution for cross-site cold-start product recommendation, which aims to recommend products from e-commerce websites to users at social networking sites in “cold-start” situations, a problem which has rarely been explored before. A major challenge is how to leverage knowledge extracted from social networking sites for cross-site cold-start product recommendation. We propose to use the linked users across social networking sites and e-commerce websites (users who have social networking accounts and have made purchases on e-commerce websites) as a bridge to map users’ social networking features to another feature representation for product recommendation. Specifically, we propose learning both users’ and products’ feature representations (called user embeddings and product embeddings, respectively) from data collected from e-commerce websites using recurrent neural networks, and then apply a modified gradient boosting trees method to transform users’ social networking features into user embeddings. We then develop a feature-based matrix factorization approach which can leverage the learnt user embeddings for cold-start product recommendation. Experimental results on a large dataset constructed from the largest Chinese microblogging service, SINA WEIBO, and the largest Chinese B2C e-commerce website, JINGDONG, have shown the effectiveness of our proposed framework.
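At prediction time, a pipeline like the one described reduces to transforming social-network features into the user-embedding space and taking an inner product with a product embedding. The sketch below uses a plain linear map as a stand-in for the learned gradient-boosting transform; all names and shapes are illustrative.

```python
def transform_user(social_features, weight_matrix):
    # Stand-in for the learned transform: a linear map from social-network
    # features to the e-commerce user-embedding space.
    return [sum(w * f for w, f in zip(row, social_features))
            for row in weight_matrix]

def score(user_embedding, product_embedding):
    # Feature-based matrix factorization reduces, at prediction time, to an
    # inner product between user and product embeddings.
    return sum(u * p for u, p in zip(user_embedding, product_embedding))
```

A cold-start user, who has no purchase history, can thus still be scored against every product via their social features alone.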



ABSTRACT

Cloud storage services have become commercially popular due to their overwhelming advantages. To provide ubiquitous always-on access, a cloud service provider (CSP) maintains multiple replicas of each piece of data on geographically distributed servers. A key problem of using the replication technique in clouds is that it is very expensive to achieve strong consistency on a worldwide scale. In this paper, we first present a novel consistency as a service (CaaS) model, which consists of a large data cloud and multiple small audit clouds. In the CaaS model, a data cloud is maintained by a CSP, and a group of users that constitute an audit cloud can verify whether the data cloud provides the promised level of consistency or not. We propose a two-level auditing architecture, which only requires a loosely synchronized clock in the audit cloud. Then, we design algorithms to quantify the severity of violations with two metrics: the commonality of violations, and the staleness of the value of a read. Finally, we devise a heuristic auditing strategy (HAS) to reveal as many violations as possible. Extensive experiments were performed using a combination of simulations and real cloud deployments to validate HAS.
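The staleness metric for a read might be illustrated as follows: replaying one user's operation trace, a read that returns an older version than one previously observed violates monotonic-read consistency, and the version gap quantifies how stale it was. This toy check is a simplification of the two-level auditing architecture, not its actual algorithm.

```python
def audit_reads(log):
    # log: ordered list of ("read"/"write", version) ops from one user's trace.
    # Local (user-level) audit: a read that returns an older version than a
    # previously observed one violates monotonic-read consistency.
    violations = []
    seen = 0
    for i, (op, version) in enumerate(log):
        if op == "write":
            seen = max(seen, version)
        else:  # read
            if version < seen:
                violations.append((i, seen - version))  # (position, staleness)
            seen = max(seen, version)
    return violations
```

Global (audit-cloud-level) auditing would then combine such traces from all users to measure how common the violations are.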



ABSTRACT

The project titled “Online Crimefile Management” is a web-based application. This software provides facilities for reporting crimes online, filing complaints, reporting missing persons, and viewing most-wanted person details, with mailing as well as chatting. Any number of clients can connect to the server. Each user first logs in to the server to show their availability. The server can be any web server. An SMTP server must be maintained for temporary storage of emails, and chat JAR files enable the chatting facilities. The main modules in this project



ABSTRACT

To develop a CSRS system for a network, which should study hacker behaviour and store it in a database.

 

 

                                    CSRS can play a key role in a defensive strategy. While there are many studies and sources of information, there is no single source that discusses the major strategic issues concerning CSRS. The main attraction of a CSRS is not limited to what you can learn but how you can learn it. As a result, a CSRS fits into the defensive plan as a way of

 

  • Studying blackhat activity,
  • Developing a reasoned response to the threat,
  • Testing and developing new responses and tactics to a given threat, and
  • Slowing down another attack, thus allowing time to develop a countermeasure.

 

 

A CSRS helps develop a reasoned, proactive response to a threat. In addition, it facilitates the building of contingencies, thus contributing to the need to know what you don’t know in good project management practice. A CSRS is no panacea: there are significant risks and exposures to the organization if objectives are not well defined, so do not implement a CSRS just for its own sake.

The primary issues to be addressed fall into two categories: Administrative/Policy and Technical.

The Administrative/Policy category covers legal, liability, and misuse-of-data issues, while the Technical category relates to the choice of building or buying a CSRS, placement of the CSRS, and support issues.

We have seen that the use of policy and procedures to manage these issues can go a long way towards solving them. Some liability issues, such as ‘Uplink Liability’, can be managed by sound configuration of the technical environment.

When configuring a CSRS, there are three areas of data management to consider: Data Control, Data Capture, and Data Collection.

Consideration must be given to both of these major categories before proceeding to implement a CSRS; otherwise, there is a significant risk of exposing the organization to both monetary and legal sanctions. Resolving these issues will build a better CSRS and help in its ongoing management.



ABSTRACT

We propose a new decentralized access control scheme for secure data storage in clouds that supports anonymous authentication. In the proposed scheme, the cloud verifies the authenticity of the user without knowing the user’s identity before storing data. Our scheme also has the added feature of access control, in which only valid users are able to decrypt the stored information. The scheme prevents replay attacks and supports creation, modification, and reading of data stored in the cloud. We also address user revocation. Moreover, our authentication and access control scheme is decentralized and robust, unlike other access control schemes designed for clouds, which are centralized. The communication, computation, and storage overheads are comparable to centralized approaches.



ABSTRACT

With 20 million installs a day, third-party apps are a major reason for the popularity and addictiveness of Facebook. Unfortunately, hackers have realized the potential of using apps for spreading malware and spam. The problem is already significant, as we find that at least 13% of apps in our dataset are malicious. So far, the research community has focused on detecting malicious posts and campaigns. In this paper, we ask the question: Given a Facebook application, can we determine if it is malicious? Our key contribution is in developing FRAppE—Facebook’s Rigorous Application Evaluator—arguably the first tool focused on detecting malicious apps on Facebook. To develop FRAppE, we use information gathered by observing the posting behavior of 111K Facebook apps seen across 2.2 million users on Facebook. First, we identify a set of features that help us distinguish malicious apps from benign ones. For example, we find that malicious apps often share names with other apps, and they typically request fewer permissions than benign apps. Second, leveraging these distinguishing features, we show that FRAppE can detect malicious apps with 99.5% accuracy, with no false positives and a high true positive rate (95.9%). Finally, we explore the ecosystem of malicious Facebook apps and identify mechanisms that these apps use to propagate. Interestingly, we find that many apps collude and support each other; in our dataset, we find 1584 apps enabling the viral propagation of 3723 other apps through their posts. Long term, we see FRAppE as a step toward creating an independent watchdog for app assessment and ranking, so as to warn Facebook users before installing apps.
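A toy scorer over two of the reported discriminative features (name collisions with other apps and unusually few requested permissions) is sketched below. The thresholds and feature encoding are invented for illustration and are not FRAppE's actual model.

```python
def malicious_score(app, known_names):
    # Toy scorer over two features the abstract reports as discriminative.
    # app: {"name": str, "permissions": [str, ...]}; known_names: all app names.
    score = 0
    if known_names.count(app["name"]) > 1:  # shares its name with other apps
        score += 1
    if len(app["permissions"]) <= 1:        # requests very few permissions
        score += 1
    return score
```

A real classifier would combine many such features and learn the weights from labeled benign and malicious apps.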



ABSTRACT

Placing critical data in the hands of a cloud provider should come with the guarantee of security and availability for data at rest, in motion, and in use. Several alternatives exist for storage services, while data confidentiality solutions for the database as a service paradigm are still immature. We propose a novel architecture that integrates cloud database services with data confidentiality and the possibility of executing concurrent operations on encrypted data. This is the first solution supporting geographically distributed clients to connect directly to an encrypted cloud database, and to execute concurrent and independent operations including those modifying the database structure. The proposed architecture has the further advantage of eliminating intermediate proxies that limit the elasticity, availability, and scalability properties that are intrinsic in cloud-based solutions. The efficacy of the proposed architecture is evaluated through theoretical analyses and extensive experimental results based on a prototype implementation subject to the TPC-C standard benchmark for different numbers of clients and network latencies.



ABSTRACT

In this paper, we propose a dynamic audit service for verifying the integrity of untrusted, outsourced storage. Our audit service is constructed from the techniques of fragment structure, random sampling, and an index-hash table, supporting provable updates to outsourced data and timely anomaly detection. In addition, we propose a method based on probabilistic query and periodic verification for improving the performance of audit services. Our experimental results not only validate the effectiveness of our approaches, but also show that our audit system verifies integrity with lower computation overhead while requiring less extra storage for audit metadata.
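The random-sampling idea can be sketched as follows: the owner records a digest per fragment in an index-hash table before outsourcing, and the verifier later challenges a random sample of fragments rather than downloading the whole file. This omits the provable-update machinery and is purely illustrative.

```python
import hashlib
import random

def make_index_hash_table(fragments):
    # Owner-side: record a digest per data fragment before outsourcing.
    return {i: hashlib.sha256(f).hexdigest() for i, f in enumerate(fragments)}

def audit(stored_fragments, table, sample_size=3, seed=None):
    # Verifier-side: challenge a random sample of fragment indices and compare
    # digests, instead of downloading and checking the whole file.
    rng = random.Random(seed)
    indices = rng.sample(sorted(table), min(sample_size, len(table)))
    return all(hashlib.sha256(stored_fragments[i]).hexdigest() == table[i]
               for i in indices)
```

Sampling keeps the per-audit cost low while still giving a high probability of catching corruption as the number of challenges grows.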
 



ABSTRACT

The migration from wired to wireless networks has been a global trend in the past few decades. The mobility and scalability brought by wireless networks have made them viable in many applications. Among all the contemporary wireless networks, the Mobile Ad hoc NETwork (MANET) is one of the most important and unique applications. In contrast to the traditional network architecture, a MANET does not require a fixed network infrastructure; every single node works as both a transmitter and a receiver. Nodes communicate directly with each other when they are within the same communication range. Otherwise, they rely on their neighbors to relay messages. The self-configuring ability of nodes in a MANET has made it popular among critical mission applications such as military use or emergency recovery. However, the open medium and wide distribution of nodes make MANETs vulnerable to malicious attackers. It is therefore crucial to develop efficient intrusion-detection mechanisms to protect MANETs from attacks. With improvements in the technology and cuts in hardware costs, we are witnessing a current trend of expanding MANETs into industrial applications. To adjust to such a trend, we strongly believe that it is vital to address their potential security issues. In this paper, we propose and implement a new intrusion-detection system named Enhanced Adaptive ACKnowledgment (EAACK), specially designed for MANETs. Compared to contemporary approaches, EAACK demonstrates higher malicious-behavior-detection rates in certain circumstances while not greatly affecting network performance.



ABSTRACT

This work addresses the problem of enabling efficient data query in a Mobile Ad-hoc SOcial Network (MASON), formed by mobile users who share similar interests and connect with one another over Bluetooth and/or WiFi connections. Data query in MASONs faces several unique challenges, including opportunistic link connectivity, autonomous computing and storage, and unknown or inaccurate data providers. Our goal is to determine an optimal transmission strategy that supports the desired query rate within a delay budget while minimizing the total communication cost. To this end, we propose a centralized optimization model that offers useful theoretical insights and develop a distributed data query protocol for practical applications. To demonstrate the feasibility and efficiency of the proposed scheme and to gain useful empirical insights, we carry out a test bed experiment using 25 off-the-shelf Dell Streak tablets over a period of 15 days. Moreover, extensive simulations are carried out to study the performance trend under various network settings that are not practical to build and evaluate in laboratories.



ABSTRACT

E-learning is a form of distance learning in which education and training courses are delivered using computer technology. Typically, this means that courses are delivered either via the Internet or over computer networks (linked computers). With the increased availability of PCs and Internet access, e-learning is becoming more and more popular. This E-Learning system is a web application built using JSP.

This online application enables end users to register online, select a subject, read the course material, and take the exam online. Exam results are declared immediately after the test. A candidate must take the tests in a particular sequence and can attempt the next test only after completing the previous papers. The correct answers to the questions are displayed after the exam. The registration date, exam date, test results, etc. are stored in the database.



ABSTRACT

Vehicular Ad Hoc Networks (VANETs) adopt the Public Key Infrastructure (PKI) and Certificate Revocation Lists (CRLs) for their security. In any PKI system, the authentication of a received message is performed by checking whether the certificate of the sender is included in the current CRL, and verifying the authenticity of the certificate and signature of the sender. In this paper, we propose an Expedite Message Authentication Protocol (EMAP) for VANETs, which replaces the time-consuming CRL checking process with an efficient revocation checking process. The revocation check process in EMAP uses a keyed Hash Message Authentication Code (HMAC), where the key used in calculating the HMAC is shared only among non-revoked On-Board Units (OBUs). In addition, EMAP uses a novel probabilistic key distribution, which enables non-revoked OBUs to securely share and update a secret key. EMAP can significantly decrease the message loss ratio due to message verification delay compared with conventional authentication methods employing CRLs. Through security analysis and performance evaluation, EMAP is demonstrated to be secure and efficient.
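The revocation-check idea can be illustrated with a minimal sketch (the group-key handling and names below are our assumptions, not the protocol's exact message format):

```python
import hashlib
import hmac
import os

# Sketch of an EMAP-style revocation check: a secret key is shared only
# among non-revoked OBUs, so a valid HMAC on a message shows the sender
# has not been revoked -- no CRL lookup is needed.
group_key = os.urandom(32)          # distributed only to non-revoked OBUs

def tag(message: bytes, key: bytes) -> bytes:
    return hmac.new(key, message, hashlib.sha256).digest()

def revocation_check(message: bytes, mac: bytes, key: bytes) -> bool:
    return hmac.compare_digest(tag(message, key), mac)

msg = b"beacon: position, speed"
assert revocation_check(msg, tag(msg, group_key), group_key)
stale_key = os.urandom(32)          # a revoked OBU lacks the updated key
assert not revocation_check(msg, tag(msg, stale_key), group_key)
```

A single HMAC computation replaces a search through a potentially long CRL, which is where the claimed reduction in verification delay comes from.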



ABSTRACT

Cloud computing plays an important role in supporting data storage, processing, and management in the Internet of Things (IoT). To preserve cloud data confidentiality and user privacy, cloud data are often stored in an encrypted form. However, duplicated data that are encrypted under different encryption schemes could be stored in the cloud, which greatly decreases the utilization rate of storage resources, especially for big data. Several data deduplication schemes have recently been proposed. However, most of them suffer from security weakness and lack of flexibility to support secure data access control. Therefore, few can be deployed in practice. This article proposes a scheme based on attribute-based encryption (ABE) to deduplicate encrypted data stored in the cloud while also supporting secure data access control. The authors evaluate the scheme's performance based on analysis and implementation. Results show the efficiency, effectiveness, and scalability of the scheme for potential practical deployment.



ABSTRACT

Hashing is a popular and efficient method for nearest neighbor search in large-scale data spaces, embedding high-dimensional feature descriptors into a similarity-preserving Hamming space of low dimension. For most hashing methods, retrieval performance heavily depends on the choice of the high-dimensional feature descriptor. Furthermore, a single type of feature cannot be descriptive enough for different images when used for hashing. Thus, how to combine multiple representations to learn effective hashing functions is a pressing task. In this paper, we present a novel unsupervised multiview alignment hashing approach based on regularized kernel nonnegative matrix factorization, which can find a compact representation uncovering the hidden semantics while respecting the joint probability distribution of the data. In particular, we seek a matrix factorization that effectively fuses the multiple information sources while discarding feature redundancy. Since the resulting problem is nonconvex and discrete, our objective function is optimized in an alternating fashion with relaxation and converges to a locally optimal solution. After finding the low-dimensional representation, the hashing functions are finally obtained through multivariable logistic regression. The proposed method is systematically evaluated on three data sets: 1) Caltech-256; 2) CIFAR-10; and 3) CIFAR-20, and the results show that our method significantly outperforms the state-of-the-art multiview hashing techniques.



ABSTRACT

Cloud computing has been envisioned as the next-generation architecture of IT enterprise. In contrast to traditional solutions, where IT services are under proper physical, logical, and personnel controls, cloud computing moves the application software and databases to large data centers, where the management of the data and services may not be fully trustworthy. This unique attribute, however, poses many new security challenges which have not been well understood. In this article, we focus on cloud data storage security, which has always been an important aspect of quality of service. To ensure the correctness of users' data in the cloud, we propose an effective and flexible distributed scheme with two salient features, in contrast to its predecessors. By utilizing the homomorphic token with distributed verification of erasure-coded data, our scheme achieves the integration of storage correctness insurance and data error localization, i.e., the identification of misbehaving server(s). Unlike most prior works, the new scheme further supports secure and efficient dynamic operations on data blocks, including data update, delete, and append. Extensive security and performance analysis shows that the proposed scheme is highly efficient and resilient against Byzantine failure, malicious data modification attacks, and even server colluding attacks.



ABSTRACT

Cloud computing enables highly scalable services to be easily consumed over the Internet on an as-needed basis. A major feature of cloud services is that users' data are usually processed remotely on unknown machines that users do not own or operate. While enjoying the convenience brought by this emerging technology, users' fears of losing control of their own data (particularly financial and health data) can become a significant barrier to the wide adoption of cloud services. To address this problem, we propose a novel, highly decentralized information accountability framework to keep track of the actual usage of users' data in the cloud. In particular, we propose an object-centered approach that encloses our logging mechanism together with users' data and policies. We leverage the programmable capabilities of JARs both to create a dynamic and traveling object and to ensure that any access to users' data will trigger authentication and automated logging local to the JARs. To strengthen users' control, we also provide distributed auditing mechanisms. We provide extensive experimental studies that demonstrate the efficiency and effectiveness of the proposed approaches.



ABSTRACT

With virtual machine (VM) technology becoming increasingly mature, compute resources in cloud systems can be partitioned at fine granularity and allocated on demand. We make three contributions in this paper: 1) We formulate a deadline-driven resource allocation problem based on a cloud environment facilitated with VM resource isolation technology, and propose a novel polynomial-time solution that minimizes users' payment with respect to their expected deadlines. 2) By analyzing the upper bound of task execution length under possibly inaccurate workload prediction, we further propose an error-tolerant method to guarantee a task's completion within its deadline. 3) We validate the method's effectiveness over a real VM-facilitated cluster environment under different levels of competition. In our experiments, by tuning the algorithmic input deadline based on our derived bound, task execution length can always be kept within the deadline in the sufficient-supply situation, and the mean execution length still stays at about 70 percent of the user-specified deadline under severe competition. Under the original-deadline-based solution, about 52.5 percent of tasks complete within 0.95-1.0 times their deadlines, which still conforms to the deadline-guarantee requirement. Only 20 percent of tasks violate their deadlines, and most of these (17.5 percent) still finish within 1.05 times their deadlines.
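The deadline-tuning idea can be sketched as follows (the scaling rule and the error factor `eps` are our illustration of bound-based tuning, not the paper's exact formula):

```python
# Toy sketch: if the workload predictor can err by at most a factor
# (1 + eps), feed the scheduler a tightened deadline so the real finish
# time stays within the user's deadline even under worst-case error.
def tuned_deadline(user_deadline: float, eps: float) -> float:
    return user_deadline / (1.0 + eps)

def finishes_on_time(actual_len, predicted_len, user_deadline, eps):
    # the scheduler sizes the allocation so predicted work fits the
    # tightened deadline; actual work is at most (1 + eps) * predicted
    rate = predicted_len / tuned_deadline(user_deadline, eps)
    return actual_len / rate <= user_deadline

# prediction off by 15% (within the assumed 20% bound): still on time
assert finishes_on_time(actual_len=115, predicted_len=100,
                        user_deadline=60, eps=0.2)
```

The tightening wastes some capacity when the prediction is accurate, which is the price of the error tolerance described above.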



ABSTRACT

Cloud computing promises to significantly change the way we use computers and access and store our personal and business information. With these new computing and communications paradigms arise new data security challenges. Existing data protection mechanisms such as encryption have failed in preventing data theft attacks, especially those perpetrated by an insider to the cloud provider. We propose a different approach for securing data in the cloud using offensive decoy technology. We monitor data access in the cloud and detect abnormal data access patterns. When unauthorized access is suspected and then verified using challenge questions, we launch a disinformation attack by returning large amounts of decoy information to the attacker. This protects against the misuse of the user's real data. Experiments conducted in a local file setting provide evidence that this approach may provide unprecedented levels of user data security in a cloud environment.
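The monitor-then-deceive flow can be sketched as a toy illustration (the threshold, challenge handling, and all names are assumptions, not the authors' detection model):

```python
# Toy sketch of decoy-based defense: profile normal access volume, and
# when a session looks anomalous and fails a challenge question, serve
# plausible decoy content instead of the real document.
NORMAL_MAX_READS_PER_MIN = 20   # assumed profile of legitimate behavior

def serve(document, reads_this_minute, passed_challenge):
    anomalous = reads_this_minute > NORMAL_MAX_READS_PER_MIN
    if anomalous and not passed_challenge:
        # disinformation: the attacker cannot tell decoys from real data
        return {"decoy": True, "body": "plausible-but-fake " + document}
    return {"decoy": False, "body": document}

assert serve("q3-report", 5, passed_challenge=False)["decoy"] is False
assert serve("q3-report", 500, passed_challenge=False)["decoy"] is True
assert serve("q3-report", 500, passed_challenge=True)["decoy"] is False
```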



ABSTRACT

Infrastructure as a Service (IaaS) cloud computing has transformed the way we think about acquiring resources by introducing a simple change: allowing users to lease computational resources from the cloud provider's datacenter for a short time by deploying virtual machines (VMs) on those resources. This new model raises new challenges in the design and development of IaaS middleware. One such challenge is the need to deploy a large number (hundreds or even thousands) of VM instances simultaneously. Once the VM instances are deployed, another challenge is to simultaneously take snapshots of many images and transfer them to persistent storage to support management tasks such as suspend-resume and migration. With datacenters growing rapidly and configurations becoming heterogeneous, it is important to enable efficient concurrent deployment and snapshotting that are at the same time hypervisor-independent and ensure maximum compatibility with different configurations. This paper addresses these challenges by proposing a virtual file system specifically optimized for virtual machine image storage. It is based on a lazy transfer scheme coupled with object versioning that handles snapshotting transparently in a hypervisor-independent fashion, ensuring high portability across different configurations.



ABSTRACT

We address the problem of resource management for a large-scale cloud environment that hosts sites. Our contribution centers around outlining a distributed middleware architecture and presenting one of its key elements, a gossip protocol that meets our design goals: fairness of resource allocation with respect to hosted sites, efficient adaptation to load changes and scalability in terms of both the number of machines and sites. We formalize the resource allocation problem as that of dynamically maximizing the cloud utility under CPU and memory constraints. While we can show that an optimal solution without considering memory constraints is straightforward (but not useful), we provide an efficient heuristic solution for the complete problem instead. We evaluate the protocol through simulation and find its performance to be well aligned with our design goals.



ABSTRACT

The objective of this project is to automate the management of the insurance activities of an insurance company. The purpose is to design a system through which users can perform all activities related to the insurance company.



ABSTRACT

Media streaming applications have recently attracted a large number of users on the Internet. With the advent of these bandwidth-intensive applications, it is economically inefficient to provide streaming distribution with guaranteed QoS relying only on central resources at a media content provider. Cloud computing offers an elastic infrastructure that media content providers (e.g., Video on Demand (VoD) providers) can use to obtain streaming resources that match demand. Media content providers are charged for the amount of resources allocated (reserved) in the cloud. Most existing cloud providers employ a pricing model for reserved resources that is based on non-linear time-discount tariffs (e.g., Amazon CloudFront and Amazon EC2). Such a pricing scheme offers discount rates that depend non-linearly on the period of time for which the resources are reserved in the cloud. In this case, an open problem is to decide on both the right amount of resources to reserve in the cloud and their reservation time, such that the financial cost to the media content provider is minimized. We propose a simple, easy-to-implement algorithm for resource reservation that maximally exploits the discounted rates offered in the tariffs, while ensuring that sufficient resources are reserved in the cloud. Based on the prediction of demand for streaming capacity, our algorithm is carefully designed to reduce the risk of making wrong resource allocation decisions. The results of our numerical evaluations and simulations show that the proposed algorithm significantly reduces the monetary cost of resource allocation in the cloud compared to other conventional schemes.
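The effect of a nonlinear time-discount tariff can be illustrated with a toy cost calculation (the rates and the exhaustive comparison below are assumptions for illustration, not the paper's reservation algorithm):

```python
# Hypothetical tariff: longer reservation periods earn deeper per-hour
# discounts, mimicking non-linear time-discount pricing.
TARIFF = {1: 1.00, 24: 0.90, 24 * 30: 0.75, 24 * 365: 0.60}  # $/unit-hour

def cheapest_plan(hours_needed: int):
    """Return (period, cost): pick the reservation period whose total
    cost for the needed hours is lowest. Longer commitments may force
    over-reservation, so the deepest discount is not always cheapest."""
    best = None
    for period, rate in TARIFF.items():
        blocks = -(-hours_needed // period)   # ceiling division
        cost = blocks * period * rate         # pay for whole blocks
        if best is None or cost < best[1]:
            best = (period, cost)
    return best

period, cost = cheapest_plan(24 * 40)   # need ~40 days of capacity
assert period == 24                     # daily blocks beat monthly here
```

Here the monthly rate is cheaper per hour, but a 40-day need would force paying for 60 days, so the daily tariff wins: this is exactly the reservation-time trade-off the abstract describes.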



ABSTRACT

Data sharing is an important functionality in cloud storage. In this article, we show how to securely, efficiently, and flexibly share data with others in cloud storage. We describe new public-key cryptosystems that produce constant-size ciphertexts such that efficient delegation of decryption rights for any set of ciphertexts is possible. The novelty is that one can aggregate any set of secret keys and make them as compact as a single key, yet encompassing the power of all the keys being aggregated. In other words, the secret key holder can release a constant-size aggregate key for flexible choices of ciphertext set in cloud storage, while the other encrypted files outside the set remain confidential. This compact aggregate key can be conveniently sent to others or stored in a smart card with very limited secure storage. We provide formal security analysis of our schemes in the standard model. We also describe other applications of our schemes. In particular, our schemes give the first public-key patient-controlled encryption for flexible hierarchy, which was previously unknown.



ABSTRACT

Wireless Sensor Networks (WSNs) are increasingly used in data-intensive applications such as micro-climate monitoring, precision agriculture, and audio/video surveillance. A key challenge faced by data-intensive WSNs is to transmit all the data generated within an application’s lifetime to the base station despite the fact that sensor nodes have limited power supplies. We propose using low-cost disposable mobile relays to reduce the energy consumption of data-intensive WSNs. Our approach differs from previous work in two main aspects. First, it does not require complex motion planning of mobile nodes, so it can be implemented on a number of low-cost mobile sensor platforms. Second, we integrate the energy consumption due to both mobility and wireless transmissions into a holistic optimization framework. We present efficient distributed implementations for each algorithm that require only limited, localized synchronization. Because we do not necessarily compute an optimal topology, our final routing tree is not necessarily optimal. However, our simulation results show that our algorithms significantly outperform the best existing solutions.



ABSTRACT

To provide fault tolerance for cloud storage, recent studies propose striping data across multiple cloud vendors. However, if a cloud suffers a permanent failure and loses all its data, we need to repair the lost data with the help of the other surviving clouds to preserve data redundancy. We present NCCloud, a proxy-based storage system for fault-tolerant multiple-cloud storage that achieves cost-effective repair after a permanent single-cloud failure. NCCloud is built on top of a network-coding-based storage scheme called functional minimum-storage regenerating (FMSR) codes, which maintain the same fault tolerance and data redundancy as traditional erasure codes (e.g., RAID-6), but use less repair traffic and hence incur less monetary cost for data transfer. One key design feature of our FMSR codes is that we relax the encoding requirement on storage nodes during repair, while preserving the benefits of network coding in repair. We implement a proof-of-concept prototype of NCCloud and deploy it atop both local and commercial clouds. We validate that FMSR codes provide significant monetary cost savings in repair over RAID-6 codes, while offering comparable response time in normal cloud storage operations such as upload/download.
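The repair-traffic saving can be checked with back-of-envelope arithmetic (a sketch following the published FMSR analysis for n clouds tolerating two failures, i.e., k = n - 2; traffic is expressed as a fraction of the file size M):

```python
# Repair traffic after one permanent cloud failure, as a fraction of M.
def conventional_repair_traffic(n: int) -> float:
    # conventional erasure codes (e.g., RAID-6): download k chunks,
    # each of size M/k, to rebuild the lost node -> one whole file
    return 1.0

def fmsr_repair_traffic(n: int) -> float:
    # FMSR: each cloud keeps 2 chunks of size M/(2(n-2)); repair pulls
    # one chunk from each of the n - 1 surviving clouds
    return (n - 1) / (2 * (n - 2))

for n in (4, 6, 8, 10):
    assert fmsr_repair_traffic(n) < conventional_repair_traffic(n)
# e.g., n = 4 clouds: 0.75 M instead of 1.0 M (25% less repair traffic,
# approaching a 50% saving as n grows)
assert fmsr_repair_traffic(4) == 0.75
```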



ABSTRACT

This project reduces human effort by providing an online shopping service. The system enables users to view and purchase products as they wish. Critical activities that can be performed with this system include viewing the product list, rates, and descriptions; booking products online from home; and participating in product auctions. The shop administrator adds products and their rates to the system. He also has full rights to view registered customers and to update and delete products. A new customer must register in order to purchase online; registered users can log in with their credentials. Product auctions are implemented in this system: a customer can bid on products that are under auction. Communication between customer and administrator is established through a messaging system.



ABSTRACT

The purpose of the online exam simulator is to conduct online tests efficiently, with no time wasted on checking papers. Its main objective is to evaluate candidates thoroughly through a fully automated system that not only saves a lot of time but also gives fast results. Students can take papers at their own convenience and time, and there is no need for extra materials such as paper and pens.

 



ABSTRACT

We introduce two approaches for improving privacy policy management in online social networks. First, we introduce a mechanism using proven clustering techniques that assists users in grouping their friends for group-based policy management. Second, we introduce a policy management approach that leverages a user's memory and opinion of their friends to set policies for other, similar friends. We refer to this new approach as Same-As Policy Management. To demonstrate the effectiveness of our policy management improvements, we implemented a prototype Facebook application and conducted an extensive user study. Leveraging proven clustering techniques, we demonstrated a 23% reduction in friend-grouping time. In addition, we demonstrated considerable reductions in policy authoring time using Same-As Policy Management over traditional group-based policy management approaches. Finally, we present user perceptions of both improvements, which are very encouraging.



ABSTRACT

With cloud storage services, it is commonplace for data to be not only stored in the cloud, but also shared across multiple users. However, public auditing of such shared data, while preserving identity privacy, remains an open challenge. In this paper, we propose the first privacy-preserving mechanism that allows public auditing of shared data stored in the cloud. In particular, we exploit ring signatures to compute the verification information needed to audit the integrity of shared data. With our mechanism, the identity of the signer of each block in the shared data is kept private from a third-party auditor (TPA), who is still able to verify the integrity of the shared data without retrieving the entire file. Our experimental results demonstrate the effectiveness and efficiency of the proposed mechanism when auditing shared data.



ABSTRACT

In this paper, we present PACK (Predictive ACKs), a novel end-to-end traffic redundancy elimination (TRE) system designed for cloud computing customers. Cloud-based TRE needs to apply a judicious use of cloud resources so that the bandwidth cost reduction, combined with the additional cost of TRE computation and storage, is optimized. PACK's main advantage is its capability to offload the cloud-server TRE effort to end clients, thus minimizing the processing costs induced by the TRE algorithm. Unlike previous solutions, PACK does not require the server to continuously maintain clients' status. This makes PACK very suitable for pervasive computation environments that combine client mobility and server migration to maintain cloud elasticity. PACK is based on a novel TRE technique, which allows the client to use newly received chunks to identify previously received chunk chains, which in turn can be used as reliable predictors of future transmitted chunks. We present a fully functional PACK implementation, transparent to all TCP-based applications and network devices. Finally, we analyze PACK's benefits for cloud users, using traffic traces from various sources.
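The chunk-chain prediction idea can be sketched as follows (fixed-size chunking and the names below are our simplifications; PACK itself uses content-based chunking and signatures over chains):

```python
import hashlib

CHUNK = 8  # fixed-size chunks for the sketch

def chunks(data: bytes):
    return [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]

def sig(chunk: bytes) -> str:
    return hashlib.sha1(chunk).hexdigest()

def build_chain_store(old_stream: bytes) -> dict:
    """Client side: remember which chunk followed which (chunk chains)."""
    cs = chunks(old_stream)
    return {sig(a): b for a, b in zip(cs, cs[1:])}

def predicted_bytes(new_stream: bytes, store: dict) -> int:
    """Count bytes the client could predict (and so need not re-receive)
    because a known chunk is followed by the same successor as before."""
    saved = 0
    cs = chunks(new_stream)
    for a, b in zip(cs, cs[1:]):
        if store.get(sig(a)) == b:
            saved += len(b)
    return saved

old = b"AAAAAAAABBBBBBBBCCCCCCCC"
new = b"AAAAAAAABBBBBBBBDDDDDDDD"
# the B-chunk follows the A-chunk exactly as before, so it is predictable
assert predicted_bytes(new, build_chain_store(old)) == CHUNK
```

The prediction state lives entirely at the client, which is why the server need not track per-client status.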



ABSTRACT

PRISON MANAGEMENT SYSTEM is an efficient application for maintaining prisoner information. More than a data storage program, it helps you manage the prisoners and offers a wide variety of reports that give you exactly the information you need. It allows adding new prisoner details and new cases, and enables tracking of every prisoner and their activities. There are three kinds of users: the data administrator, police officials, and the administrator. The administrator is considered the super user and has fuller access and rights over the system than anybody else. The administrator can view the details of prisoners, cases, the release diary, the parole register, and interview requests, as well as the in-out register. Police officials have rights to view the nominal rolls, cases, parole, and interview requests; their function is to check the nominal roll, which comprises the complete details of prisoners, and to fix dates for court hearings as per the court's advice. Finally, the data administrator prepares reports and enters data on behalf of the administrator. Data administrators have the privilege to add and update data as per the administrator's instructions. Interview requests from relatives are routed through the data administrator to the administrator, and the status of these requests is reflected in reports accordingly.



ABSTRACT

Current approaches to enforcing fine-grained access control on confidential data hosted in the cloud are based on fine-grained encryption of the data. Under such approaches, data owners are in charge of encrypting the data before uploading them to the cloud and re-encrypting the data whenever user credentials or authorization policies change. Data owners thus incur high communication and computation costs. A better approach should delegate the enforcement of fine-grained access control to the cloud, so as to minimize the overhead at the data owners while assuring data confidentiality from the cloud. We propose an approach, based on two layers of encryption, that addresses this requirement. Under our approach, the data owner performs a coarse-grained encryption, whereas the cloud performs a fine-grained encryption on top of the owner-encrypted data. A challenging issue is how to decompose access control policies (ACPs) such that the two-layer encryption can be performed. We show that this problem is NP-complete and propose novel optimization algorithms. We utilize an efficient group key management scheme that supports expressive ACPs. Our system assures the confidentiality of the data and preserves the privacy of users from the cloud while delegating most of the access control enforcement to the cloud.
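The layering itself can be illustrated with a toy sketch (a SHA-256 keystream stands in for a real cipher here; this is not the paper's construction and must never be used as actual encryption):

```python
import hashlib
import os

# Toy two-layer encryption: the owner applies a coarse-grained layer,
# the cloud applies a fine-grained layer on top; decryption needs both.
def keystream(key: bytes, n: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def xor_layer(data: bytes, key: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

owner_key = os.urandom(32)   # coarse-grained layer, applied by the owner
cloud_key = os.urandom(32)   # fine-grained layer, applied by the cloud

plaintext = b"patient record #17"
outer = xor_layer(xor_layer(plaintext, owner_key), cloud_key)

# a user authorized under both layers peels them off
assert xor_layer(xor_layer(outer, cloud_key), owner_key) == plaintext
# the cloud alone (one key) cannot recover the plaintext
assert xor_layer(outer, cloud_key) != plaintext
```

When a policy changes, only the outer (cloud) layer must be redone, which is the cost the owner avoids in the two-layer approach.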



ABSTRACT

A key approach to secure cloud computing is for the data owner to store encrypted data in the cloud and issue decryption keys to authorized users. Then, when a user is revoked, the data owner issues re-encryption commands to the cloud to re-encrypt the data, preventing the revoked user from decrypting it, and generates new decryption keys for valid users so that they can continue to access the data. However, since a cloud computing environment comprises many cloud servers, such commands may not be received and executed by all of the cloud servers due to unreliable network communications. In this paper, we solve this problem by proposing a time-based re-encryption scheme, which enables cloud servers to automatically re-encrypt data based on their internal clocks. Our solution is built on top of a new encryption scheme, attribute-based encryption, to allow fine-grained access control, and does not require perfect clock synchronization for correctness.
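The clock-driven rotation can be sketched as follows (the HMAC-based per-period key derivation is our illustration; the paper builds on attribute-based encryption instead):

```python
import hashlib
import hmac

# Sketch: every server derives the current period's key from a shared
# master secret and its own clock, so no re-encryption command needs to
# reach it over the network.
PERIOD = 3600  # rotate hourly; tolerates loosely synchronized clocks

def period_key(master: bytes, t: float) -> bytes:
    slot = int(t // PERIOD)
    return hmac.new(master, slot.to_bytes(8, "big"), hashlib.sha256).digest()

master = b"shared-master-secret"
now = 1_700_000_000
k1 = period_key(master, now)
assert period_key(master, now + 5) == k1       # clocks off by seconds agree
assert period_key(master, now + PERIOD) != k1  # next period, new key
```

Clock skew only matters near period boundaries, which is why perfect synchronization is not required for correctness.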



ABSTRACT

Wireless Body Area Networks (WBANs) are expected to play a major role in patient-health monitoring in the near future and have gained tremendous attention among researchers in recent years. One of the challenges is to establish a secure communication architecture between sensors and users while addressing the prevalent security and privacy concerns. In this paper, we propose a communication architecture for BANs and design a scheme to secure data communications between implanted/wearable sensors and the data sink/data consumers (doctors or nurses) by employing Ciphertext-Policy Attribute-Based Encryption (CP-ABE) [1] and signatures to store the data in ciphertext format at the data sink, hence ensuring data security. Our scheme achieves role-based access control by employing an access control tree defined over the attributes of the data. We also design two protocols to securely retrieve sensitive data from a BAN and to instruct the sensors in a BAN. We analyze the proposed scheme and argue that it provides message authenticity and collusion resistance and is efficient and feasible. We also evaluate its performance in terms of energy consumption and communication/computation overhead.
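The access-control-tree evaluation can be sketched independently of the cryptography (a toy policy check; the attribute names and the tuple encoding are assumptions, and the actual CP-ABE key machinery is omitted):

```python
# CP-ABE-style access policy tree: leaves are attributes, internal nodes
# are (threshold, children) gates; a user's attribute set satisfies the
# tree iff at least `threshold` children are satisfied.
def satisfies(node, attrs):
    if isinstance(node, str):            # leaf: a single attribute
        return node in attrs
    threshold, children = node           # gate: k-of-n over children
    return sum(satisfies(c, attrs) for c in children) >= threshold

# policy: doctor OR (nurse AND cardiology-ward)
policy = (1, ["doctor", (2, ["nurse", "cardiology-ward"])])

assert satisfies(policy, {"doctor"})
assert satisfies(policy, {"nurse", "cardiology-ward"})
assert not satisfies(policy, {"nurse"})
```

In CP-ABE proper, a ciphertext encrypted under such a tree can only be decrypted by keys whose attributes satisfy it, giving the role-based access control described above.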



ABSTRACT

Cloud computing provides a flexible and convenient way for data sharing, which brings various benefits to both society and individuals. But there is a natural resistance for users to directly outsource shared data to the cloud server, since the data often contain valuable information. Thus, it is necessary to place cryptographically enhanced access control on the shared data. Identity-based encryption is a promising cryptographic primitive for building a practical data sharing system. However, access control is not static. That is, when some user's authorization expires, there should be a mechanism that can remove him/her from the system, so that the revoked user can access neither previously nor subsequently shared data. To this end, we propose a notion called revocable-storage identity-based encryption (RS-IBE), which provides forward/backward security of ciphertext by introducing the functionalities of user revocation and ciphertext update simultaneously. Furthermore, we present a concrete construction of RS-IBE and prove its security in the defined security model. The performance comparisons indicate that the proposed RS-IBE scheme has advantages in terms of functionality and efficiency, and is thus feasible for a practical and cost-effective data sharing system. Finally, we provide implementation results of the proposed scheme to demonstrate its practicability.



ABSTRACT

Cloud storage is an application of clouds that liberates organizations from establishing in-house data storage systems. However, cloud storage gives rise to security concerns. In the case of group-shared data, the data face both cloud-specific and conventional insider threats. Secure data sharing among a group that counters insider threats from legitimate yet malicious users is an important research issue. In this paper, we propose the Secure Data Sharing in Clouds (SeDaSC) methodology that provides: 1) data confidentiality and integrity; 2) access control; 3) data sharing (forwarding) without using compute-intensive re-encryption; 4) insider threat security; and 5) forward and backward access control. The SeDaSC methodology encrypts a file with a single encryption key. Two different key shares are generated for each user, with the user getting only one share. The possession of only a single share of the key allows the SeDaSC methodology to counter insider threats. The other key share is stored by a trusted third party, called the cryptographic server. The SeDaSC methodology is applicable to conventional and mobile cloud computing environments. We implement a working prototype of the SeDaSC methodology and evaluate its performance based on the time consumed during various operations. We formally verify the working of SeDaSC by using high-level Petri nets, the Satisfiability Modulo Theories Library, and a Z3 solver. The results are encouraging and show that SeDaSC has the potential to be used effectively for secure data sharing in the cloud.
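The two-share key splitting can be sketched with a simple XOR construction (an illustration of the idea only; SeDaSC's actual key management and share derivation may differ):

```python
import secrets

# Sketch of SeDaSC-style key splitting: the file key K is never held
# whole by any single party. The user gets one share, the trusted
# cryptographic server keeps the other, and K = share_user XOR share_cs.
def split_key(key: bytes):
    share_user = secrets.token_bytes(len(key))
    share_cs = bytes(a ^ b for a, b in zip(key, share_user))
    return share_user, share_cs

def recombine(share_user: bytes, share_cs: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(share_user, share_cs))

key = secrets.token_bytes(32)
share_user, share_cs = split_key(key)
assert recombine(share_user, share_cs) == key   # both shares recover K
assert share_user != key and share_cs != key    # one share alone is useless
```

Because each share alone is uniformly random, a single insider (user or server) learns nothing about the file key, which is how the insider threat is countered.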



ABSTRACT

A complete reference guide to medical professions, hospitals, and pharmacies across the nation that offers online discussion (by phone) and online appointments to patients when necessary.

You can look up any of the listed medical careers and find out their roles and responsibilities, facilities, qualification requirements (for physicians, nurses, lab technicians, and hospitals), and associated affiliations with your career option.

Throughout our hospitals and our company, we are focused on two key areas: quality patient care and customer service. Every Sparsh facility works to make patient care and customer satisfaction its top priority.

We also provide links for finding quality physicians and hospitals, as well as links to financial advisory sites that can help make your career choice a reality.

To find and inquire about a qualified hospital, select your location and run the search. Not all hospitals provide public postings of their openings; however, all hospitals welcome general inquiries.



ABSTRACT

We propose a novel approach for steganography using reversible texture synthesis. A texture synthesis process resamples a smaller texture image to synthesize a new texture image with a similar local appearance and an arbitrary size. We weave the texture synthesis process into steganography to conceal secret messages. In contrast to using an existing cover image to hide messages, our algorithm conceals the source texture image and embeds secret messages through the process of texture synthesis. This allows us to extract both the secret messages and the source texture from a stego synthetic texture. Our approach offers three distinct advantages. First, our scheme offers an embedding capacity proportional to the size of the stego texture image. Second, a steganalytic algorithm is not likely to defeat our steganographic approach. Third, the reversibility inherent in our scheme allows recovery of the source texture. Experimental results verify that our proposed algorithm can provide a range of embedding capacities, produce visually plausible texture images, and recover the source texture.
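The embedding idea can be sketched as choosing among candidate patches at each synthesis step, with the message bit steering the choice. This is a deliberately simplified toy (placeholder patch IDs, one bit per step); the actual algorithm operates on ranked source-texture patches and is considerably more involved.

```python
def embed(bits, candidates_per_step):
    """At each synthesis step, select the candidate patch whose rank
    equals the next message bit. Capacity grows with the number of
    steps, i.e., with the size of the synthesized stego texture."""
    return [cands[bit] for bit, cands in zip(bits, candidates_per_step)]

def extract(stego_patches, candidates_per_step):
    """Reverse the choice made at every step to recover the bits,
    illustrating the scheme's reversibility."""
    return [cands.index(p) for p, cands in zip(stego_patches, candidates_per_step)]

# Two visually similar candidate patches per step (placeholder IDs).
candidates = [["p0a", "p0b"], ["p1a", "p1b"], ["p2a", "p2b"]]
message = [1, 0, 1]
stego = embed(message, candidates)
assert extract(stego, candidates) == message
```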



ABSTRACT

The flexibility and mobility of Mobile Ad hoc Networks (MANETs) have made them increasingly popular in a wide range of use cases. To protect these networks, security protocols have been developed to protect routing and application data. However, these protocols protect either routes or communication, not both; both secure routing and communication security protocols must be implemented to provide full protection. Moreover, communication security protocols originally developed for wireline and WiFi networks can place a heavy burden on the limited network resources of a MANET. To address these issues, this paper presents SUPERMAN, a novel security framework for MANETs. The framework is designed to allow existing network and routing protocols to perform their functions, whilst providing node authentication, access control, and communication security mechanisms. Simulation results comparing SUPERMAN with IPsec, SAODV, and SOLSR demonstrate the proposed framework's suitability for wireless communication security.



ABSTRACT

Attribute-based encryption, especially ciphertext-policy attribute-based encryption, can fulfill the functionality of fine-grained access control in cloud storage systems. Since users' attributes may be issued by multiple attribute authorities, multi-authority ciphertext-policy attribute-based encryption is an emerging cryptographic primitive for enforcing attribute-based access control on outsourced data. However, most existing multi-authority attribute-based systems are either insecure against attribute-level revocation or inefficient in communication overhead and computation cost. In this paper, we propose an attribute-based access control scheme with two-factor protection for multi-authority cloud storage systems. In our proposed scheme, a user can recover the outsourced data if and only if that user holds sufficient attribute secret keys with respect to the access policy and the authorization key for the outsourced data. In addition, the proposed scheme enjoys the properties of constant-size ciphertext and small computation cost. Besides supporting attribute-level revocation, our proposed scheme allows the data owner to carry out user-level revocation.
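The two-factor decryption condition can be sketched as a simple predicate. An AND-of-attributes policy is assumed here for illustration only; real CP-ABE supports richer policies, and the actual enforcement happens through pairing-based cryptography rather than a plaintext check.

```python
def can_decrypt(user_attributes: set, policy: set, holds_auth_key: bool) -> bool:
    """Two-factor check: the user's attribute secret keys must satisfy
    the ciphertext's access policy AND the user must hold the
    authorization key for this particular piece of outsourced data."""
    return policy.issubset(user_attributes) and holds_auth_key

# A cardiologist holding the authorization key can decrypt...
assert can_decrypt({"doctor", "cardiology"}, {"doctor", "cardiology"}, True)
# ...but not after user-level revocation withdraws the authorization key,
# even though the attribute keys still satisfy the policy.
assert not can_decrypt({"doctor", "cardiology"}, {"doctor", "cardiology"}, False)
```

Separating the two factors is what lets the scheme offer both attribute-level revocation (change the policy side) and user-level revocation (withdraw the authorization key).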



ABSTRACT

Ad hoc low-power wireless networks are an exciting research direction in sensing and pervasive computing. Prior security work in this area has focused primarily on denial of communication at the routing or medium access control levels. This paper explores resource depletion attacks at the routing protocol layer, which permanently disable networks by quickly draining nodes' battery power. These "Vampire" attacks are not tied to any specific protocol, but rather rely on the properties of many popular classes of routing protocols. We find that all examined protocols are susceptible to Vampire attacks, which are devastating, difficult to detect, and easy to carry out using as few as one malicious insider sending only protocol-compliant messages.
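One attack in this class, the "carousel" attack, stretches a source route into a loop so that a single protocol-compliant packet drains the same forwarders repeatedly. A toy drain model (node names and the one-transmission-per-hop energy cost are illustrative assumptions):

```python
from collections import Counter

def transmissions_per_node(route):
    """Count how many times each node transmits a single packet; with
    a fixed per-transmission energy cost this is a proxy for battery
    drain. The final hop (the destination) does not forward."""
    return Counter(route[:-1])

honest   = ["A", "B", "C", "D"]
carousel = ["A", "B", "C", "A", "B", "C", "A", "B", "C", "D"]  # looped source route

print(transmissions_per_node(honest))    # each forwarder transmits once
print(transmissions_per_node(carousel))  # the same forwarders transmit 3x for one packet
```

Every message in the looped route is individually protocol-compliant, which is why such attacks are hard to detect at the protocol level.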



ABSTRACT

Recently, OSNs have come under Sybil attacks [1]. In this attack, a malicious user creates multiple fake identities, known as Sybils [1], to unfairly increase their power and influence within a target community. Researchers have observed Sybils forwarding spam and malware on Renren [2], Facebook [3], and Twitter [4]. To defend against Sybils, prior Sybil defenses [5]–[9] leverage the positive trust relationships among users and rely on the key assumption that Sybils can befriend only a few real accounts [10]. Unfortunately, we find that people in real OSNs still have a non-zero probability of accepting friend requests from strangers, leaving room for Sybils to connect to real users by sending a large number of requests. In this paper, we further explore the negative distrust relationships (e.g., in the form of rejected friend requests) among users, as Sybils have more distrust relationships than trust relationships with real users. However, this feature cannot be directly applied because attackers could hide their Sybils from the detector by generating many fake trust relationships among Sybils. To prune the fake relationships, we model the friend invitation interactions among users as a signed, directed network, with an edge directed from the sender to the receiver and a sign (+1/−1) indicating whether the friend request was accepted.
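The signed, directed model can be sketched directly. The data below is a toy example; a real detector would additionally prune the fake intra-Sybil trust edges before ranking accounts.

```python
# Each friend request becomes a directed, signed edge:
# (sender, receiver, +1 accepted / -1 rejected).
edges = [
    ("sybil1", "alice",  -1),
    ("sybil1", "bob",    -1),
    ("sybil1", "sybil2", +1),   # fake trust edge between colluding Sybils
    ("carol",  "alice",  +1),
    ("carol",  "bob",    +1),
]

def distrust_fraction(sender: str, edges) -> float:
    """Fraction of a sender's outgoing friend requests that were
    rejected. Sybils that mass-mail strangers accumulate far more
    distrust edges than real users do."""
    signs = [sign for s, _, sign in edges if s == sender]
    return signs.count(-1) / len(signs)

assert distrust_fraction("sybil1", edges) > distrust_fraction("carol", edges)
```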



ABSTRACT

The goal of the E-KNOWLEDGE WIRELESS HEALTH CARE SYSTEM is to improve the efficiency of the healthcare system by reducing the overall time and cost required to create documents and retrieve information.



Address

IEEEProjectz Bangalore:
#14/2, 2nd Floor, Next to Total Gaz, Near GT mall, Magadi main road, Bangalore-23.

IEEEProjectz Mysore:
#1, 2nd Floor, Namratha Complex, Near Saraswathi Theatre, Kamakshi Hospital Road, Saraswathipuram, Mysore-570009.

IEEEProjectz Mangalore:
#32, 1st Floor, Near Westline Signature, Nanthoor, Kadri, Mangalore-575005.

IEEEProjectz Hassan:
#87, 3rd Floor, Above SBI ATM, Near Mission Hospital, RC Road, Hassan-573201.