IEEE 2019-2020 Network Security Projects in Java
Abstract: As networks expand in size and complexity, they pose greater administrative and management challenges. Software-defined networks (SDNs) offer a promising approach to meeting some of these challenges. In this paper, we propose a policy-driven security architecture for securing end-to-end services across multiple SDN domains. We develop a language-based approach to designing security policies that are relevant for securing SDN services and communications. We describe the policy language and its use in specifying security policies to control the flow of information in a multi-domain SDN. We demonstrate the specification of fine-grained security policies based on a variety of attributes, such as parameters associated with users and devices/switches, context information such as location and routing information, services accessed in the SDN, and the security attributes associated with the switches and controllers in different domains. An important feature of our architecture is its ability to specify path- and flow-based security policies, which are significant for securing end-to-end services in SDNs. We describe the design and implementation of our proposed policy-based security architecture and demonstrate its use in scenarios involving both intra- and inter-domain communications with multiple SDN controllers. We analyze the performance characteristics of our architecture and discuss how it counteracts various security attacks. The key contributions of this paper are the dynamic security policy-based approach and the intelligent distribution of the corresponding security capabilities as a service layer, which enables flow-based security enforcement and protects a multitude of network devices against attacks.
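To give a flavor of the attribute-based, flow-level policy evaluation the architecture describes, the sketch below matches a flow request against an ordered policy list with a deny-by-default rule. The `Policy`/`FlowRequest` fields, the wildcard convention, and first-match semantics are illustrative assumptions, not the paper's actual policy language.

```java
import java.util.List;

public class FlowPolicyCheck {
    // Hypothetical attribute sets; a real policy would also carry path,
    // context, and controller/switch security attributes.
    public record FlowRequest(String userRole, String srcDomain, String dstDomain, String service) {}
    public record Policy(String role, String srcDomain, String dstDomain, String service, boolean allow) {}

    // Deny-by-default: the first matching policy decides; "*" matches any value.
    public static boolean isAllowed(List<Policy> policies, FlowRequest req) {
        for (Policy p : policies) {
            if (matches(p.role(), req.userRole())
                    && matches(p.srcDomain(), req.srcDomain())
                    && matches(p.dstDomain(), req.dstDomain())
                    && matches(p.service(), req.service())) {
                return p.allow();
            }
        }
        return false;  // no policy matched: deny
    }

    private static boolean matches(String pattern, String value) {
        return pattern.equals("*") || pattern.equals(value);
    }
}
```

In a multi-domain deployment, each controller would evaluate such rules against flows crossing its domain boundary before installing forwarding state.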
Abstract: Constrained image splicing detection and localization (CISDL), which investigates two input suspected images and identifies whether one image has suspected regions pasted from the other, is a newly proposed challenging task for image forensics. In this paper, we propose a novel adversarial learning framework to learn a deep matching network for CISDL. Our framework mainly consists of three building blocks. First, a deep matching network based on atrous convolution (DMAC) aims to generate two high-quality candidate masks, which indicate suspected regions of the two input images. In DMAC, atrous convolution is adopted to extract features with rich spatial information, a correlation layer based on a skip architecture is proposed to capture hierarchical features, and atrous spatial pyramid pooling is constructed to localize tampered regions at multiple scales. Second, a detection network is designed to rectify inconsistencies between the two corresponding candidate masks. Finally, a discriminative network drives the DMAC network to produce masks that are hard to distinguish from ground-truth ones. The detection network and the discriminative network collaboratively supervise the training of DMAC in an adversarial way. In addition, a sliding window-based matching strategy is investigated for high-resolution image matching. Extensive experiments, conducted on five groups of datasets, demonstrate the effectiveness of the proposed framework and the superior performance of DMAC.
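The core operation behind DMAC's feature extractor is atrous (dilated) convolution, which enlarges the receptive field by spacing the kernel taps without downsampling the input. The 1-D sketch below illustrates the idea; zero padding at the borders and the specific kernel are assumptions for illustration, not the paper's network configuration.

```java
public class AtrousConv1D {
    // 1-D atrous convolution: tap positions are spaced `dilation` samples
    // apart, so a 3-tap kernel with dilation 2 covers a span of 5 samples.
    public static double[] convolve(double[] x, double[] kernel, int dilation) {
        int half = kernel.length / 2;
        double[] y = new double[x.length];
        for (int i = 0; i < x.length; i++) {
            double sum = 0.0;
            for (int k = 0; k < kernel.length; k++) {
                int j = i + (k - half) * dilation;   // dilated tap position
                if (j >= 0 && j < x.length) sum += kernel[k] * x[j];  // zero-pad borders
            }
            y[i] = sum;
        }
        return y;
    }
}
```

With dilation 1 this reduces to ordinary convolution; increasing the dilation widens the spatial context seen per output without adding parameters, which is what lets DMAC keep rich spatial information at full resolution.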
Abstract: To counter rapidly developing cyber-attacks, numerous collaborative security schemes, in which multiple security entities can exchange their observations and other relevant data to reach more effective security decisions, have been proposed and developed in the literature. However, the security-related information shared among the security entities may contain some sensitive information, and such information exchange can raise privacy concerns, especially when these entities belong to different organizations. With such considerations, the interplay between the attacker and the collaborative entities is formulated as Quantitative Information Flow (QIF) games, in which QIF theory is adapted to measure the collaboration gain and the privacy loss of the entities in the information sharing process. In particular, three games are considered, each corresponding to one possible scenario of interest in practice. Based on the game-theoretic analysis, the expected behaviors of both the attacker and the security entities are obtained. In addition, simulation results are presented to validate the analysis.
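A standard QIF measure that could back such gain/loss calculations is min-entropy leakage: for a channel matrix C (rows are secrets, columns are observations, rows sum to 1) under a uniform prior, the leakage in bits is log2 of the sum over observations of the column-wise maximum. The sketch below computes this textbook quantity; it is a generic QIF illustration, not the paper's specific game formulation.

```java
public class QifLeakage {
    // Min-entropy leakage (bits) of channel C under a uniform prior:
    // L = log2( sum over outputs y of max over secrets x of C[x][y] ).
    // 0 means the observation reveals nothing; log2(#secrets) means it
    // reveals the secret completely.
    public static double minEntropyLeakage(double[][] channel) {
        int outputs = channel[0].length;
        double sum = 0.0;
        for (int y = 0; y < outputs; y++) {
            double colMax = 0.0;
            for (double[] row : channel) colMax = Math.max(colMax, row[y]);
            sum += colMax;
        }
        return Math.log(sum) / Math.log(2);
    }
}
```

For example, the identity channel over two secrets leaks exactly 1 bit, while a channel whose rows are identical leaks 0 bits, which is the kind of trade-off the entities weigh before sharing observations.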
Abstract: Copy-move forgery on very short speech segments, followed by post-processing operations to eliminate traces of the forgery, presents a great challenge to forensic detection. In this paper, we propose a robust method for detecting and locating a speech copy-move forgery. We found that pitch and formants can be used as features representing a voiced speech segment, and these two features are very robust against commonly used post-processing operations. In the proposed algorithm, we first divide the speech recording into voiced and unvoiced speech segments. We then extract the pitch sequence and the first two formant sequences as the feature set of each voiced speech segment. Dynamic time warping is applied to compute the similarities between feature sets. By comparing the similarities with a threshold, we can detect and locate copy-move forgeries in a speech recording. Extensive experiments show that the proposed method is very effective in detecting and locating copy-move forgeries, even on a forged speech segment as short as one voiced speech segment. The proposed method is also robust against several kinds of commonly used post-processing operations and background noise, which highlights the promising potential of the proposed method as a speech copy-move forgery localization tool in practical forensics applications.
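The similarity computation here rests on classic dynamic time warping, which aligns two feature sequences (e.g. pitch contours) of different lengths before summing local costs. The sketch below is the standard O(n·m) DTW recurrence with absolute difference as the local cost; the paper's exact cost function and thresholding are not specified here.

```java
import java.util.Arrays;

public class DtwDistance {
    // Dynamic time warping distance between two 1-D feature sequences,
    // e.g. two pitch sequences extracted from voiced segments.
    public static double dtw(double[] a, double[] b) {
        int n = a.length, m = b.length;
        double[][] d = new double[n + 1][m + 1];
        for (double[] row : d) Arrays.fill(row, Double.POSITIVE_INFINITY);
        d[0][0] = 0.0;
        for (int i = 1; i <= n; i++) {
            for (int j = 1; j <= m; j++) {
                double cost = Math.abs(a[i - 1] - b[j - 1]);  // local cost
                // Extend the cheapest of match, insertion, deletion.
                d[i][j] = cost + Math.min(d[i - 1][j - 1], Math.min(d[i - 1][j], d[i][j - 1]));
            }
        }
        return d[n][m];
    }
}
```

Because DTW tolerates local stretching, a copied segment still yields a near-zero distance to its source even after mild time-scale post-processing, which is what makes a simple threshold on this distance workable.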
Abstract: With cloud storage services, users can remotely store their data in the cloud and share it with others. Remote data integrity auditing is proposed to guarantee the integrity of the data stored in the cloud. In some common cloud storage systems, such as electronic health record systems, the cloud file might contain some sensitive information. The sensitive information should not be exposed to others when the cloud file is shared. Encrypting the whole shared file can hide the sensitive information, but makes the shared file unusable by others. How to realize data sharing with sensitive information hiding in remote data integrity auditing has not yet been explored. To address this problem, we propose a remote data integrity auditing scheme that realizes data sharing with sensitive information hiding. In this scheme, a sanitizer sanitizes the data blocks corresponding to the file's sensitive information and transforms these data blocks' signatures into valid ones for the sanitized file. These signatures are used to verify the integrity of the sanitized file in the integrity auditing phase. As a result, our scheme allows the file stored in the cloud to be shared and used by others while the sensitive information is hidden, and remote data integrity auditing can still be efficiently executed. Meanwhile, the proposed scheme is based on identity-based cryptography, which simplifies the complicated certificate management. The security analysis and the performance evaluation show that the proposed scheme is secure and efficient.
Abstract: The detection of rescaling operations represents an important task in multimedia forensics. While many effective heuristics have been proposed, there is no theory of forensic detectability revealing the conditions under which detection is more or less reliable. We study the problem of discriminating 1D and 2D genuine signals from signals that have been downscaled, with the goal of quantifying the statistical distinguishability between these two hypotheses. This is done by assuming known signal models and deriving expressions for statistical distances linked to hypothesis testing theory, namely the symmetrized form of the Kullback-Leibler divergence known as the Jeffreys divergence, and the Bhattacharyya divergence. The analysis is performed for varying parameters of both the genuine signal model (variance and one-step correlation) and the rescaling process (rescaling factor, interpolation kernel, grid shift, and anti-alias filter), thus revealing insights into their influence and interplay. In addition to the signal itself, we consider the signal transformations (prefilter and covariance matrix estimators) that are often involved in practical rescaling detectors, showing that they yield similar results in terms of distinguishability. Numerical tests on synthetic and real signals confirm the main observations from the theoretical analysis.
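For intuition on the two distances used, the sketch below evaluates their closed forms for the simplest case of two univariate Gaussians N(μ1, σ1²) and N(μ2, σ2²); the paper works with full covariance models, so this scalar case is only a minimal illustration.

```java
public class GaussianDivergences {
    // Jeffreys divergence J(p,q) = KL(p||q) + KL(q||p) for two 1-D Gaussians,
    // using variances (not standard deviations) as inputs.
    public static double jeffreys(double mu1, double var1, double mu2, double var2) {
        double d2 = (mu1 - mu2) * (mu1 - mu2);
        return (var1 + d2) / (2 * var2) + (var2 + d2) / (2 * var1) - 1.0;
    }

    // Bhattacharyya distance for the same pair of 1-D Gaussians.
    public static double bhattacharyya(double mu1, double var1, double mu2, double var2) {
        double d2 = (mu1 - mu2) * (mu1 - mu2);
        return 0.25 * d2 / (var1 + var2)
             + 0.5 * Math.log((var1 + var2) / (2 * Math.sqrt(var1 * var2)));
    }
}
```

Both distances are zero exactly when the two hypotheses coincide and grow as the genuine and rescaled signal models separate, which is what ties them to the attainable detection performance.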
Abstract: Adaptive multi-rate (AMR), a popular audio compression standard, is widely used in mobile communication and mobile Internet applications and has become a novel carrier for hiding information. To improve the statistical security, this paper presents a steganographic scheme in the AMR fixed codebook (FCB) domain based on the pulse distribution model (PDM-AFS), which is obtained from the distribution characteristics of the FCB value in the cover audio. The pulse positions in stego audio are controlled by message encoding and random masking to make the statistical distribution of the FCB parameters close to that of the cover audio. The experimental results show that the statistical security of the proposed scheme is better than that of the existing schemes. Furthermore, the hiding capacity is maintained compared with the existing schemes. The average hiding capacity can reach 2.06 kbps at an audio compression rate of 12.2 kbps, and the auditory concealment is good. To the best of our knowledge, this is the first secure AMR FCB steganographic scheme that improves the statistical security based on the distribution model of the cover audio. This scheme can be extended to other audio compression codecs under the principle of algebraic code excited linear prediction (ACELP), such as G.723.1 and G.729.
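As a toy illustration of the generic idea of mapping message bits onto codebook pulse positions, the sketch below forces each pulse position's parity to carry one bit. This is emphatically NOT the PDM-AFS scheme, which shapes the pulse distribution statistically; it only shows the embed/extract round trip that any FCB-domain method must provide.

```java
public class PulseParityEmbed {
    // Toy embedding: adjust each pulse position's least significant bit to
    // equal the message bit (moving the pulse by at most one position).
    public static int[] embed(int[] positions, int[] bits) {
        int[] stego = positions.clone();
        for (int i = 0; i < bits.length && i < stego.length; i++) {
            if ((stego[i] & 1) != bits[i]) stego[i] ^= 1;  // flip parity
        }
        return stego;
    }

    // Extraction reads the parity of the first n pulse positions.
    public static int[] extract(int[] stego, int n) {
        int[] bits = new int[n];
        for (int i = 0; i < n; i++) bits[i] = stego[i] & 1;
        return bits;
    }
}
```

A scheme like PDM-AFS additionally constrains which positions may change, using the cover's pulse distribution model plus random masking, so that the stego FCB statistics stay close to those of unmodified audio.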
Abstract: The forensic investigation of JPEG compression generally relies on the analysis of first-order statistics based on the image histogram. JPEG compression detection methods based on such methodology can be effortlessly circumvented by adopting anti-forensic attacks. This paper presents a counter JPEG anti-forensic method based on second-order statistical analysis using co-occurrence matrices (CMs). The proposed framework comprises three stages: selection of the target difference image, evaluation of CMs, and generation of a second-order statistical feature based on CMs. In the first stage, we explore the effects of the dithering operation of JPEG anti-forensics by analyzing the variance inconsistencies along the diagonals. Afterward, CMs are evaluated in the second stage to highlight the effects of the grainy noise introduced during the dithering operation. The third stage is devoted to generating an optimal second-order statistical feature, which is fed to an SVM classifier. Experimental results based on the uncompressed color image database and BOSSBase dataset images demonstrate that the proposed CM-based forensic detector is very efficient even in the presence of anti-forensic attacks. Moreover, the experimental results also confirm the competency of the proposed method in countering median filtering and contrast enhancement anti-forensics. The proposed scheme also provides satisfactory results in detecting other image processing operations such as mean filtering, Gaussian filtering, Wiener filtering, scaling, and rotation, thereby revealing its multi-purpose nature.
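The second-order statistic at the heart of this pipeline is the gray-level co-occurrence matrix, which counts how often pairs of values occur at a fixed spatial offset. The sketch below computes a GLCM for a horizontal offset of one pixel; the offset, quantization, and which difference image it is applied to are assumptions, not the paper's exact configuration.

```java
public class CoOccurrence {
    // Gray-level co-occurrence matrix for offset (0, +1): cm[a][b] counts how
    // often value a is immediately left of value b. `levels` is the number of
    // quantized gray levels; all entries of `img` must lie in [0, levels).
    public static int[][] glcm(int[][] img, int levels) {
        int[][] cm = new int[levels][levels];
        for (int r = 0; r < img.length; r++) {
            for (int c = 0; c + 1 < img[r].length; c++) {
                cm[img[r][c]][img[r][c + 1]]++;
            }
        }
        return cm;
    }
}
```

Grainy dithering noise perturbs neighboring pixels independently, which spreads mass off the GLCM diagonal; features summarizing that spread are what the SVM classifier separates.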
Abstract: The proliferation of the Internet of Things (IoT) is reshaping our lifestyle. With IoT sensors and devices communicating with each other via the Internet, people can customize automation rules to meet their needs. Unless carefully defined, however, such rules can easily become points of security failure as the number of devices and the complexity of rules increase. Device owners may end up unintentionally providing access or revealing private information to unauthorized entities due to complex chain reactions among devices. Prior work on trigger-action programming either focuses on conflict resolution or usability issues, or fails to accurately and efficiently detect such attack chains. This paper explores the security vulnerabilities that arise when users have the freedom to customize automation rules using trigger-action programming. We define two broad classes of attack, privilege escalation and privacy leakage, and present a practical model-checking-based system called SafeChain that detects hidden attack chains exploiting combinations of rules. Built upon existing model-checking techniques, SafeChain identifies attack chains by modeling the IoT ecosystem as a finite-state machine. To improve practicability, SafeChain avoids the need to accurately model an environment by frequently rechecking the automation rules given the current states, and employs rule-aware optimizations to further reduce overhead. Our comparative analysis shows that SafeChain can efficiently and accurately identify attack chains, and our prototype implementation of SafeChain can verify 100 rules in less than 1 s with no false positives.
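The essence of an attack chain is reachability: one rule's action fires another rule's trigger until a sensitive action is reached. The sketch below finds such transitive chains with a breadth-first search over rules; the flat event names and rule shape are illustrative assumptions, far simpler than SafeChain's finite-state-machine model.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class AttackChainFinder {
    // A trigger-action rule: when triggerEvent occurs, actionEvent is caused.
    public record Rule(String triggerEvent, String actionEvent) {}

    // BFS over chained rules: can `initial` transitively cause `target`?
    public static boolean canReach(List<Rule> rules, String initial, String target) {
        Deque<String> frontier = new ArrayDeque<>(List.of(initial));
        Set<String> seen = new HashSet<>(List.of(initial));
        while (!frontier.isEmpty()) {
            String event = frontier.poll();
            if (event.equals(target)) return true;
            for (Rule r : rules) {
                if (r.triggerEvent().equals(event) && seen.add(r.actionEvent())) {
                    frontier.add(r.actionEvent());
                }
            }
        }
        return false;
    }
}
```

A real checker must additionally track device state and environment conditions, which is why SafeChain uses model checking with rule-aware optimizations rather than plain graph reachability.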
Abstract: Combined with many different attack forms, advanced persistent threats (APTs) are becoming a major threat to cyber security. Existing security protection works typically either focus on the one-shot case or separate detection from response decisions. Such practices lead to tractable analysis, but miss the key inherent APT properties of persistence and risk heterogeneity. To this end, we propose a Lyapunov-based security-aware defense mechanism backed by threat intelligence, where robust defense strategy-making is based on acquired heterogeneity knowledge. By exploring the temporal evolution of risk levels, we introduce priority-aware virtual queues, which, together with attack queues, enable security-aware response among hosts. Specifically, a long-term time-average profit maximization problem is formulated. We first develop a risk admission control policy to accommodate hosts' risk tolerance and response capacity. Under multiple attacker resources, the defense control policy is implemented through two-stage decisions, involving proportional fair resource allocation and host-attack assignment. In particular, a distributed auction-based assignment algorithm is designed to capture uncertainty in the number of resolved attacks, where high-risk host-attack pairs are prioritized over others. We theoretically prove that our mechanism guarantees bounded queue backlogs, profit optimality, the no-underflow condition, and robustness to detection errors. Simulations on a real-world data set corroborate the theoretical analysis and reveal the importance of security awareness.
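Lyapunov-based mechanisms of this kind rest on a simple per-slot queue dynamic: each (virtual or attack) queue grows with new arrivals and shrinks by the service granted that slot, never going negative. The one-line update below is the standard textbook dynamic assumed here for illustration, not the paper's full drift-plus-penalty controller.

```java
public class VirtualQueue {
    // One-slot queue update: Q(t+1) = max(Q(t) - service, 0) + arrivals.
    // For a risk virtual queue, `arrivals` would be newly accumulated risk
    // and `service` the response capacity allocated to the host this slot.
    public static double update(double q, double arrivals, double service) {
        return Math.max(q - service, 0.0) + arrivals;
    }
}
```

Proving that these backlogs stay bounded under the proposed admission and assignment policies is exactly what yields the mechanism's stability and profit-optimality guarantees.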