IEEE 2017-2018 Network Security Projects in Java

Abstract:

Photo sharing is an attractive feature that popularizes online social networks (OSNs). Unfortunately, it may leak users' privacy if they are allowed to post, comment on, and tag a photo freely. In this paper, we attempt to address this issue and study the scenario in which a user shares a photo containing individuals other than himself/herself (termed a co-photo for short). To prevent possible privacy leakage of a photo, we design a mechanism that enables each individual in a photo to be aware of the posting activity and to participate in the decision making on the photo posting. For this purpose, we need an efficient facial recognition (FR) system that can recognize everyone in the photo. However, more demanding privacy settings may limit the number of photos publicly available to train the FR system. To deal with this dilemma, our mechanism utilizes users' private photos to build a personalized FR system specifically trained to differentiate possible photo co-owners without leaking their privacy. We also develop a distributed consensus-based method to reduce the computational complexity and protect the private training set. We show that our system is superior to other possible approaches in terms of recognition ratio and efficiency. Our mechanism is implemented as a proof-of-concept Android application on Facebook's platform.
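The distributed consensus-based training above can be illustrated with a minimal average-consensus iteration, in which each party repeatedly averages its local value with its neighbors' so that all parties converge to the global mean without pooling their raw data in one place. The class name, topology, and values below are illustrative choices, not the paper's actual training procedure.

```java
// Minimal sketch of average consensus: each node averages its value with its
// neighbors' values; repeated steps drive all nodes toward the global mean.
public class Consensus {
    // One consensus step: node i replaces its value with the local average
    // over itself and its neighbors.
    public static double[] step(double[] x, int[][] neighbors) {
        double[] next = new double[x.length];
        for (int i = 0; i < x.length; i++) {
            double sum = x[i];
            for (int j : neighbors[i]) sum += x[j];
            next[i] = sum / (neighbors[i].length + 1);
        }
        return next;
    }

    public static void main(String[] args) {
        double[] x = {1.0, 5.0, 9.0};             // each node's private value
        int[][] graph = {{1, 2}, {0, 2}, {0, 1}}; // fully connected, 3 nodes
        for (int k = 0; k < 20; k++) x = step(x, graph);
        System.out.printf("%.4f %.4f %.4f%n", x[0], x[1], x[2]); // all at the mean 5.0
    }
}
```

On a fully connected graph the local average already equals the global mean after one step; on sparser graphs more iterations are needed.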

Abstract:

Information credibility on Twitter has been a topic of interest among researchers in the fields of both computer and social sciences, primarily because of the recent growth of this platform as a tool for information dissemination. Twitter has made it increasingly possible to offer near-real-time transfer of information in a very cost-effective manner. It is now being used as a source of news among a wide array of users around the globe. The beauty of this platform is that it delivers timely content in a tailored manner that makes it possible for users to obtain news regarding their topics of interest. Consequently, the development of techniques that can verify information obtained from Twitter has become a challenging and necessary task. In this paper, we propose a new credibility analysis system for assessing information credibility on Twitter to prevent the proliferation of fake or malicious information. The proposed system consists of four integrated components: a reputation-based component, a credibility classifier engine, a user experience component, and a feature-ranking algorithm. The components operate together in an algorithmic form to analyze and assess the credibility of Twitter tweets and users. We tested the performance of our system on two different datasets drawn from 489,330 unique Twitter accounts. We applied 10-fold cross-validation over four machine learning algorithms. The results reveal that a significant balance between recall and precision was achieved for the tested datasets.

Abstract:

Privacy and integrity have been the main roadblocks to the application of two-tiered sensor networks. The storage nodes, which act as a middle tier between the sensors and the sink, could be compromised and allow attackers to learn sensitive data and manipulate query results. Prior schemes for secure query processing are weak because they reveal non-negligible information, and therefore attackers can statistically estimate the data values using domain knowledge and the history of query results. In this paper, we propose the first top-k query processing scheme that protects the privacy of sensor data and the integrity of query results. To preserve privacy, we build an index for each sensor-collected data item using a pseudo-random hash function and Bloom filters, and transform top-k queries into top-range queries. To preserve integrity, we propose a data partition algorithm that partitions each data item into an interval and attaches the partition information to the data. The attached information ensures that the sink can verify the integrity of query results. We formally prove that our scheme is secure under the IND-CKA security model. Our experimental results on real-life data show that our approach is accurate and practical for large network sizes.
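The privacy-preserving index described above can be sketched with a plain Bloom filter over bucketed sensor values, so that a top-k query becomes a descending probe over value ranges. This is a toy illustration under assumed names (`BloomIndex`, the bucket width) and a simple keyed hash mix, not the paper's actual construction.

```java
import java.util.BitSet;

// Illustrative Bloom-filter index: sensor readings are bucketed into ranges,
// and bucket ids are inserted into the filter. A top-k query can then be
// answered by probing buckets from the top of the value domain downward.
public class BloomIndex {
    private final BitSet bits;
    private final int size;
    private final int hashes;

    public BloomIndex(int size, int hashes) {
        this.bits = new BitSet(size);
        this.size = size;
        this.hashes = hashes;
    }

    // Simple pseudo-random hash family: mix a per-hash seed into the item.
    private int hash(int item, int seed) {
        int h = item * 0x9E3779B9 + seed * 0x85EBCA6B;
        h ^= h >>> 16;
        return Math.floorMod(h, size);
    }

    public void add(int item) {
        for (int i = 0; i < hashes; i++) bits.set(hash(item, i));
    }

    // May return false positives, but never false negatives.
    public boolean mightContain(int item) {
        for (int i = 0; i < hashes; i++)
            if (!bits.get(hash(item, i))) return false;
        return true;
    }

    public static void main(String[] args) {
        int bucketWidth = 10;                     // reading 57 falls in bucket 5
        BloomIndex idx = new BloomIndex(1 << 12, 4);
        int[] readings = {57, 83, 91, 12};
        for (int r : readings) idx.add(r / bucketWidth);
        System.out.println(idx.mightContain(9)); // bucket of 91: true
        System.out.println(idx.mightContain(4)); // empty bucket: likely false
    }
}
```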

Abstract:

In recent years, there has been increasing interest in using path identifiers (PIDs) as inter-domain routing objects. However, the PIDs used in existing approaches are static, which makes it easy for attackers to launch distributed denial-of-service (DDoS) flooding attacks. To address this issue, in this paper, we present the design, implementation, and evaluation of dynamic PID (D-PID), a framework that uses PIDs negotiated between neighboring domains as inter-domain routing objects. In D-PID, the PID of an inter-domain path connecting two domains is kept secret and changes dynamically. We describe in detail how neighboring domains negotiate PIDs and how to maintain ongoing communications when PIDs change. We build a 42-node prototype comprising six domains to verify D-PID's feasibility and conduct extensive simulations to evaluate its effectiveness and cost. The results from both simulations and experiments show that D-PID can effectively prevent DDoS attacks.
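One simple way to realize a secret, dynamically changing identifier (a sketch in the spirit of D-PID, not its actual negotiation protocol) is for two neighboring domains that share a negotiated secret to derive the current PID with a keyed hash over the path name and an epoch counter; the PID then rotates automatically whenever the epoch advances. All names and parameters here are illustrative assumptions.

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

// Illustrative epoch-keyed PID derivation: both endpoints of a path can
// compute the same identifier, while outsiders without the secret cannot,
// and the identifier changes every epoch.
public class DynamicPid {
    public static byte[] derivePid(byte[] sharedSecret, String pathName, long epoch) {
        try {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(sharedSecret, "HmacSHA256"));
            mac.update(pathName.getBytes(StandardCharsets.UTF_8));
            mac.update(Long.toString(epoch).getBytes(StandardCharsets.UTF_8));
            return mac.doFinal();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        byte[] secret = "negotiated-secret".getBytes(StandardCharsets.UTF_8);
        byte[] pidNow  = derivePid(secret, "domainA->domainB", 1000L);
        byte[] pidNext = derivePid(secret, "domainA->domainB", 1001L);
        System.out.println(Arrays.equals(pidNow, pidNext)); // false: PID rotates per epoch
    }
}
```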

Abstract:

In this paper we focus on the problem of data aggregation using two aggregators in a data center network, where the source racks are allowed to split their data and send it to the aggregators over multiple paths. We show that the problem of finding a topology that minimizes aggregation time is NP-hard for k = 2, 3, 4, where k is the maximum degree of each ToR switch (the number of uplinks in a top-of-rack switch) in the data center. We also show that the problem becomes solvable in polynomial time for k = 5 and 6, and conjecture the same for k > 6. Experimental results show that, for k = 6, our topology optimization algorithm reduces the aggregation time by as much as 83.32% and reduces total network traffic by as much as 99.5% relative to the torus heuristic proposed in [1], demonstrating the significant performance improvement achieved by the proposed algorithm.

Abstract:

Live migration is a key technique for virtual machine (VM) management in data center networks, which enables flexibility in resource optimization, fault tolerance, and load balancing. Despite its usefulness, live migration still introduces performance degradation during the migration process. Thus, there have been continuous efforts to reduce the migration time in order to minimize the impact. From the network's perspective, the migration time is determined by the amount of data to be migrated and the available bandwidth used for the transfer. In this paper, we examine the problem of how to schedule the migrations and how to allocate network resources for migration when multiple VMs need to be migrated at the same time. We consider the problem in the Software-Defined Networking (SDN) context, since SDN provides flexible control over routing. More specifically, we propose a method that computes the optimal migration sequence and the network bandwidth used for each migration. We formulate this problem as a mixed integer program, which is NP-hard. To make it computationally feasible for large-scale data centers, we propose an approximation scheme via linear approximation plus a fully polynomial time approximation, and obtain its theoretical performance bound. Through extensive simulations, we demonstrate that our fully polynomial time approximation (FPTA) algorithm performs well compared with the optimal solution and two state-of-the-art algorithms. That is, our proposed FPTA algorithm approaches the optimal solution with less than 10% deviation and much less computation time. Meanwhile, it reduces the total migration time and the service downtime by up to 40% and 20%, respectively, compared with the state-of-the-art algorithms.
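To give a feel for why migration ordering matters, here is a deliberately simplified stand-in for the paper's optimization: when several migrations share one bottleneck link and run sequentially, scheduling the smallest memory footprints first (classic shortest-job-first) minimizes the average completion time. The class name and the sizes below are illustrative; the actual formulation in the paper is a mixed integer program over both sequencing and bandwidth allocation.

```java
import java.util.Arrays;

// Toy migration scheduler: sequential transfers over one shared link,
// ordered shortest-job-first to minimize average completion time.
public class MigrationScheduler {
    // Average completion time (seconds) when transfers of the given sizes
    // (GB) run one after another over a link of `bandwidthGbps` GB/s.
    public static double avgCompletionTime(double[] sizesGb, double bandwidthGbps) {
        double[] order = sizesGb.clone();
        Arrays.sort(order);                 // shortest-job-first
        double clock = 0, total = 0;
        for (double size : order) {
            clock += size / bandwidthGbps;  // this VM finishes migrating here
            total += clock;
        }
        return total / order.length;
    }

    public static void main(String[] args) {
        double[] vms = {16, 2, 8, 4};       // VM memory sizes in GB (hypothetical)
        System.out.printf("avg completion: %.1f s%n", avgCompletionTime(vms, 1.0)); // 13.0 s
    }
}
```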

Abstract:

Legacy networks are often designed to operate with simple single-path routing, such as shortest-path, which is known to be throughput-suboptimal. On the other hand, previously proposed throughput-optimal policies (e.g., backpressure) require every device in the network to make dynamic routing decisions. In this work, we study an overlay architecture for dynamic routing in which only a subset of devices (overlay nodes) need to make dynamic routing decisions. We determine the essential collection of nodes that must bifurcate traffic to achieve the maximum multicommodity network throughput. We apply our optimal node placement algorithm to several graphs, and the results show that a small fraction of overlay nodes is sufficient to achieve maximum throughput. Finally, we propose a heuristic policy (OBP), which dynamically controls traffic bifurcations at overlay nodes. In all studied simulation scenarios, OBP not only achieves full throughput, but also reduces delay in comparison to throughput-optimal backpressure routing.

Abstract:

As the Internet takes an increasingly central role in our communications infrastructure, the slow convergence of routing protocols after a network failure becomes a growing problem. To assure fast recovery from link and node failures in IP networks, we present a new recovery scheme called Multiple Routing Configurations (MRC). Our proposed scheme guarantees recovery in all single-failure scenarios, using a single mechanism to handle both link and node failures, and without knowing the root cause of the failure. MRC is strictly connectionless and assumes only destination-based hop-by-hop forwarding. MRC is based on keeping additional routing information in the routers, and allows packet forwarding to continue on an alternative output link immediately after the detection of a failure. It can be implemented with only minor changes to existing solutions. In this paper we present MRC and analyze its performance with respect to scalability, backup path lengths, and load distribution after a failure. We also show how an estimate of the traffic demands in the network can be used to improve the distribution of the recovered traffic, and thus reduce the chances of congestion when MRC is used.
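The core forwarding idea can be sketched as a router holding several precomputed next-hop tables (one per configuration) and, on detecting a failed neighbor, immediately forwarding via the first configuration that avoids it. The class name, topology, and tables below are toy assumptions; MRC's actual configuration-construction algorithm is not shown.

```java
import java.util.List;
import java.util.Map;
import java.util.Set;

// Minimal sketch of configuration fallback: try the primary next-hop table
// first, then the backups, skipping any next hop known to have failed.
public class MrcRouter {
    private final List<Map<String, String>> configs; // next-hop table per configuration
    private final Set<String> failedNeighbors;

    public MrcRouter(List<Map<String, String>> configs, Set<String> failedNeighbors) {
        this.configs = configs;
        this.failedNeighbors = failedNeighbors;
    }

    public String nextHop(String destination) {
        for (Map<String, String> table : configs) {
            String hop = table.get(destination);
            if (hop != null && !failedNeighbors.contains(hop)) return hop; // usable immediately
        }
        return null; // no configuration avoids the failure
    }

    public static void main(String[] args) {
        MrcRouter r = new MrcRouter(
                List.of(Map.of("D", "B"),   // primary config routes to D via B
                        Map.of("D", "C")),  // backup config routes to D via C
                Set.of("B"));               // link to B has failed
        System.out.println(r.nextHop("D")); // prints C
    }
}
```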

Abstract:

Energy efficiency has become one of the major concerns for today's cloud datacenters. Dynamic virtual machine (VM) consolidation is a promising approach for improving the resource utilization and energy efficiency of datacenters. However, the live migration technology that VM consolidation relies on is costly in itself, and this migration cost is usually heterogeneous, as is the datacenter. This paper investigates the following bi-objective optimization problem: how to pay limited migration costs to save as much energy as possible via dynamic VM consolidation in a heterogeneous cloud datacenter. To capture these two conflicting objectives, a consolidation score function is designed for an overall evaluation, on the basis of a migration cost estimation method and an upper-bound estimation method for the maximal saved power. To optimize the consolidation score, a greedy heuristic and a swap operation are introduced, and an improved grouping genetic algorithm (IGGA) based on them is proposed. Lastly, empirical studies are performed, and the evaluation results show that IGGA outperforms existing VM consolidation methods.
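A score function in the spirit of the paper's bi-objective formulation might reward the power saved by emptying a host while penalizing the cost of moving its VMs, with a weight trading the two off. The weight `alpha`, the method names, and the numbers below are hypothetical, not the paper's calibrated model.

```java
// Illustrative consolidation score: saved power minus weighted migration cost.
// A greedy step consolidates only when the score is positive.
public class ConsolidationScore {
    public static double score(double savedPowerWatts, double migrationCostWatts, double alpha) {
        return savedPowerWatts - alpha * migrationCostWatts;
    }

    public static boolean worthConsolidating(double savedPowerWatts, double migrationCostWatts, double alpha) {
        return score(savedPowerWatts, migrationCostWatts, alpha) > 0;
    }

    public static void main(String[] args) {
        System.out.println(worthConsolidating(120.0, 30.0, 2.0)); // 120 - 60 > 0: true
        System.out.println(worthConsolidating(120.0, 80.0, 2.0)); // 120 - 160 < 0: false
    }
}
```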

Abstract:

Wireless sensor networks are employed in many applications, such as health care, environmental sensing, and industrial monitoring. An important research issue is the design of efficient medium access control (MAC) protocols, which play an essential role in the reliability, latency, throughput, and energy efficiency of communication, especially as communication is typically one of the most energy-consuming tasks. Therefore, analytical models providing a clear understanding of the fundamental limitations of the different MAC schemes, as well as a convenient way to investigate their performance and optimize their parameters, are required. In this paper, we propose a generic framework for modeling MAC protocols, which focuses on energy consumption, latency, and reliability. The framework is based on absorbing Markov chains, and can be used to compare different schemes and evaluate new approaches. The different steps required to model a specific MAC using the proposed framework are illustrated through a case study. Moreover, to exemplify how the proposed framework can be used to evaluate new MAC paradigms, the novel pure-asynchronous approach, enabled by emerging ultra-low-power wake-up receivers, is evaluated using the proposed framework. Experimental measurements on real hardware were performed to set the framework parameters with accurate energy consumption and latency values, to validate the framework, and to support our results.
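The central computation in an absorbing-Markov-chain model is standard: given the transient-to-transient transition matrix Q, the expected number of steps to absorption from each transient state is t = (I − Q)⁻¹·1. The toy 2-state chain below (states and probabilities are made up, not the paper's case study) keeps the 2×2 algebra explicit.

```java
// Toy absorbing Markov chain: expected steps to absorption for a chain with
// two transient states, via the fundamental matrix N = (I - Q)^(-1).
public class AbsorbingChain {
    public static double[] expectedSteps(double[][] q) {
        double a = 1 - q[0][0], b = -q[0][1];
        double c = -q[1][0],    d = 1 - q[1][1];
        double det = a * d - b * c;                // invert I - Q (2x2)
        double n00 = d / det,  n01 = -b / det;
        double n10 = -c / det, n11 = a / det;
        return new double[]{n00 + n01, n10 + n11}; // row sums of N = N * 1
    }

    public static void main(String[] args) {
        // State 0 = "backoff", state 1 = "transmit attempt"; absorption = success.
        double[][] q = {{0.5, 0.3},   // from backoff: stay 0.5, attempt 0.3, succeed 0.2
                        {0.2, 0.1}};  // from attempt: back off 0.2, retry 0.1, succeed 0.7
        double[] t = expectedSteps(q);
        System.out.printf("E[steps] from backoff: %.3f%n", t[0]);
    }
}
```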

Abstract:

Due to the increasing vulnerabilities in cyberspace, security alone is not enough to prevent a breach; cyber forensics or cyber intelligence is also required to prevent future attacks or to identify the potential attacker. The unobtrusive and covert nature of biometric data collection via keystroke dynamics gives it high potential for use in cyber forensics or cyber intelligence. In this paper, we investigate the usefulness of keystroke dynamics for establishing a person's identity. We propose three schemes for identifying a person by how they type on a keyboard. We use various machine learning algorithms in combination with the proposed pairwise user coupling technique, and show the performance of each separate technique as well as the performance when combining two or more together. In particular, we show that pairwise user coupling in a bottom-up tree-structure scheme gives the best performance, in terms of both accuracy and time complexity. The proposed techniques are validated using keystroke data; however, they could equally well be applied to other pattern identification problems. We have also investigated the optimal feature set for person identification using keystroke dynamics. Finally, we examined the performance of the identification system when a user, contrary to his or her normal behaviour, types with only one hand, and we show that, as expected, performance then degrades.
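The bottom-up tree structure can be sketched as a tournament: candidate users are compared two at a time by a pairwise classifier, each winner advances to the next level, and the last survivor is the identification result. The pairwise "classifier" below is a stand-in lambda over hypothetical model-fit scores; in the paper it is a trained model on keystroke features.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.BinaryOperator;

// Sketch of pairwise user coupling as a bottom-up tournament tree.
public class PairwiseTournament {
    public static String identify(List<String> candidates, BinaryOperator<String> pairwiseWinner) {
        List<String> round = new ArrayList<>(candidates);
        while (round.size() > 1) {                // one tree level per iteration
            List<String> next = new ArrayList<>();
            for (int i = 0; i + 1 < round.size(); i += 2)
                next.add(pairwiseWinner.apply(round.get(i), round.get(i + 1)));
            if (round.size() % 2 == 1)            // odd candidate gets a bye
                next.add(round.get(round.size() - 1));
            round = next;
        }
        return round.get(0);
    }

    public static void main(String[] args) {
        // Hypothetical per-user keystroke-model fit scores.
        java.util.Map<String, Double> fit = java.util.Map.of("alice", 0.9, "bob", 0.4, "carol", 0.7);
        String who = identify(List.of("bob", "carol", "alice"),
                (u, v) -> fit.get(u) >= fit.get(v) ? u : v);
        System.out.println(who); // alice wins every pairwise comparison
    }
}
```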

Abstract:

Mobile communications (e.g., emails, Snapchat, and Facebook) over wireless connections are the norm in our Internet-connected society. Ensuring the security of communications between devices is an ongoing challenge. A number of authenticated key exchange (AKE) protocols have been proposed to verify the authenticity of a user and the integrity of messages sent over an insecure wireless communication channel. Recently, Tsai et al. proposed two AKE protocols designed for wireless network systems. In this paper, we demonstrate that, contrary to their claims, their protocols are vulnerable to off-line password guessing attacks, by presenting concrete attacks.
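The general shape of an off-line password guessing attack can be illustrated as follows: once a protocol transcript gives the attacker a value against which guesses can be tested locally, a dictionary attack needs no further interaction with the victim. The unsalted SHA-256 "verifier" below is a deliberately weak stand-in, not the actual value exposed by the analyzed protocols, and all names are illustrative.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.List;

// Illustrative offline dictionary attack: hash each guess locally and
// compare against a captured verifier value.
public class OfflineGuess {
    static byte[] sha256(String s) {
        try {
            return MessageDigest.getInstance("SHA-256").digest(s.getBytes(StandardCharsets.UTF_8));
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static String crack(byte[] capturedVerifier, List<String> dictionary) {
        for (String guess : dictionary)
            if (MessageDigest.isEqual(sha256(guess), capturedVerifier)) return guess; // offline hit
        return null; // password not in dictionary
    }

    public static void main(String[] args) {
        byte[] verifier = sha256("letmein"); // value an eavesdropper could test against
        System.out.println(crack(verifier, List.of("123456", "password", "letmein"))); // prints letmein
    }
}
```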