Cloud Security

05 April 2017, 09:00, Track 1

Session chair: Dieter Gollmann, Hamburg University of Technology, Germany

DoS Attacks on Your Memory in Cloud

Tianwei Zhang, Yinqian Zhang, Ruby B. Lee

Abstract

In cloud computing, network Denial of Service (DoS) attacks are well studied and defenses have been implemented, but severe DoS attacks on a victim’s working memory by a single hostile VM are not well understood. Memory DoS attacks are Denial of Service (or Degradation of Service) attacks caused by contention for hardware memory resources on a cloud server. Despite the strong memory isolation enforced for virtual machines (VMs) by the software virtualization layer in cloud servers, the underlying hardware memory layers are still shared by the VMs and can be exploited by a clever attacker in a hostile VM co-located on the same server as the victim VM, denying the victim the working memory it needs. We first show quantitatively the severity of contention on different memory resources. We then show that a malicious cloud customer can mount low-cost attacks that cause severe performance degradation for a Hadoop distributed application and a 38X delay in response time for an e-commerce website in the Amazon EC2 cloud. We then design an effective, new defense against these memory DoS attacks, using a statistical metric to detect their existence and execution throttling to mitigate the attack damage. We achieve this by a novel re-purposing of existing hardware performance counters and duty cycle modulation for security, rather than for improving performance or managing power consumption. We implement a full prototype on the OpenStack cloud system. Our evaluations show that this defense system can effectively defeat memory DoS attacks with negligible performance overhead.
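The detect-then-throttle idea in this abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the counter values are simulated, and `throttle_duty_cycle` is a hypothetical hook standing in for programming the hardware clock-modulation mechanism that the paper re-purposes.

```python
import statistics

# Hedged sketch: flag a phase whose memory behavior deviates sharply
# from a baseline, then throttle the suspected attacker. A real system
# would read hardware performance counters (e.g., LLC misses per
# interval) and apply duty cycle modulation to the offending cores.

def detect_contention(baseline, current, threshold=3.0):
    """Return True if `current` is a statistical outlier vs `baseline`."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    if stdev == 0:
        return current != mean
    z = abs(current - mean) / stdev
    return z > threshold

def throttle_duty_cycle(level):
    """Hypothetical throttling hook: on real hardware this would write
    the clock-modulation control so cores run at `level`/16 speed."""
    return f"cores throttled to {level}/16 duty cycle"

# Simulated per-interval LLC-miss counts for a victim VM.
baseline = [120, 115, 130, 125, 118, 122]   # normal execution phases
under_attack = 900                          # contention spike

if detect_contention(baseline, under_attack):
    action = throttle_duty_cycle(4)
```

The z-score test here is only one possible statistical metric; the point is that detection uses existing counters rather than new hardware.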

Side Channels in Deduplication: Trade-offs between Leakage and Efficiency

Frederik Armknecht, Colin Boyd, Gareth T. Davies, Kristian Gjøsteen, Mohsen Toorani

Abstract

Deduplication removes redundant copies of files or data blocks stored in the cloud. Client-side deduplication, where the client only uploads the file upon the request of the server, provides major storage and bandwidth savings, but introduces a number of security concerns. Harnik et al. (2010) showed how cross-user client-side deduplication inherently gives the adversary access to a (noisy) side channel that may divulge whether or not a particular file is stored on the server, leading to leakage of user information. We provide formal definitions for deduplication strategies and their security in terms of adversarial advantage. Using these definitions, we provide a criterion for designing good strategies and then prove a bound characterizing the necessary trade-off between security and efficiency.
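One well-known countermeasure in this line of work (proposed by Harnik et al.) is a randomized upload threshold, which illustrates the leakage/efficiency trade-off the paper formalizes: the server only switches to client-side deduplication after a secret, randomly chosen number of uploads, so observing "upload skipped" no longer directly reveals that the file already existed. A minimal sketch, with illustrative class and method names:

```python
import random

# Sketch of a randomized-threshold deduplication strategy. For each
# file the server draws a secret threshold t in [1, B]; the first t
# uploads of the file are accepted in full, and only afterwards does
# the server deduplicate (tell the client to skip the upload). Larger
# B means less leakage but more redundant bandwidth -- the trade-off.

class DedupServer:
    def __init__(self, max_threshold=10, rng=None):
        self.max_threshold = max_threshold
        self.rng = rng or random.Random()
        self.copies = {}      # file_id -> number of upload attempts seen
        self.threshold = {}   # file_id -> secret per-file threshold

    def upload_needed(self, file_id):
        """Return True if the client must actually send the file."""
        if file_id not in self.threshold:
            self.threshold[file_id] = self.rng.randint(1, self.max_threshold)
        self.copies[file_id] = self.copies.get(file_id, 0) + 1
        # Deduplicate only once the secret threshold has been reached.
        return self.copies[file_id] <= self.threshold[file_id]
```

The paper's contribution is to make this kind of strategy, and its adversarial advantage, formally comparable; this sketch only shows the mechanism being analyzed.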

Sharing Proofs of Retrievability across Tenants

Frederik Armknecht, Jens-Matthias Bohli, David Froelicher, Ghassan Karame

Abstract

Proofs of Retrievability (POR) are cryptographic proofs which provide assurance to a single tenant (who creates tags using his secret material) that his files can be retrieved in their entirety. However, POR schemes completely ignore storage-efficiency concepts, such as multi-tenancy and data deduplication, which are widely utilized by existing cloud storage providers. Namely, in deduplicated storage systems, existing POR schemes would incur an additional overhead for storing tenants’ tags which grows linearly with the number of users deduplicating the same file. This overhead clearly reduces the (economic) incentives of cloud providers to integrate existing POR/PDP solutions in their offerings. In this paper, we propose a novel storage-efficient POR, dubbed SPORT, which transparently supports multi-tenancy and data deduplication. More specifically, SPORT enables tenants to securely share the same POR tags in order to verify the integrity of their deduplicated files. By doing so, SPORT considerably reduces the storage overhead borne by cloud providers when storing the tags of different tenants deduplicating the same content. We show that SPORT resists malicious tenants/cloud providers (and collusion among a subset of the tenants and the cloud). Finally, we implement a prototype based on SPORT and evaluate its performance in a realistic cloud setting. Our evaluation results show that our proposal incurs tolerable computational overhead on the tenants and the cloud provider.
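The linear tag overhead motivating this work can be made concrete with back-of-the-envelope arithmetic. The tag size below is an illustrative assumption, not a figure from the paper:

```python
# With conventional per-tenant POR tags, a file deduplicated by n
# tenants still needs n independent sets of tags (each bound to one
# tenant's secret key); with shared tags (the SPORT idea) a single
# set suffices regardless of how many tenants deduplicate the file.

def per_tenant_tag_storage(n_tenants, tag_bytes_per_file):
    # Overhead grows linearly with the number of deduplicating users.
    return n_tenants * tag_bytes_per_file

def shared_tag_storage(n_tenants, tag_bytes_per_file):
    # Overhead stays constant: all tenants verify against one tag set.
    return tag_bytes_per_file

tag_size = 64 * 1024  # assumed: 64 KiB of tags for one file
print(per_tenant_tag_storage(100, tag_size))  # 100x the shared cost
print(shared_tag_storage(100, tag_size))
```

For 100 tenants deduplicating one file, per-tenant tags cost 100 times the storage of a single shared set, which is the provider-side incentive gap the paper addresses.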

Cache-Based Application Detection in the Cloud Using Machine Learning

Berk Gulmezoglu, Thomas Eisenbarth, Berk Sunar

Abstract

Cross-VM attacks have emerged as a major threat on commercial clouds. These attacks commonly exploit hardware-level leakages on shared physical servers. A machine can readily sense the presence of a co-located instance with a heavy computational load through performance degradation due to contention on shared resources. Shared cache architectures such as the last-level cache (LLC) have become a popular leakage source for mounting cross-VM attacks. By exploiting LLC leakages, researchers have already shown that it is possible to recover fine-grained information such as cryptographic keys from popular software libraries. This makes it essential to verify implementations that handle sensitive data across the many versions and numerous target platforms, a task too complicated, error-prone and costly to be handled by humans. Here we propose a machine-learning-based technique to classify applications according to their cache access profiles. We show that, with minimal and simple manual processing steps, feature vectors can be used to train models using support vector machines to classify the applications with a high degree of success. The profiling and training steps are completely automated and do not require any inspection or study of the code to be classified. In native execution, we achieve a successful classification rate as high as 98% (L1 cache) and 78% (LLC) over 40 benchmark applications in the Phoronix suite with mild training. In the cross-VM setting on the noisy Amazon EC2 cloud, the success rate drops to 60% for a suite of 25 applications. With this initial study we demonstrate that it is possible to train meaningful models to successfully predict applications running in co-located instances.
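The profile-then-classify pipeline in this abstract can be sketched in a few lines. The paper trains support vector machines on real cache traces; to stay dependency-free, this sketch swaps in a nearest-centroid classifier and uses synthetic "cache-miss-rate" feature vectors, so every signature and name below is an assumption for illustration only.

```python
import math
import random

# Sketch: turn (synthetic) cache-access profiles into feature vectors,
# learn one centroid per application, and classify new profiles by
# nearest centroid. A real attack would use measured cache timings and
# an SVM, as in the paper.

def centroid(vectors):
    return [sum(xs) / len(xs) for xs in zip(*vectors)]

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def train(labeled_profiles):
    """labeled_profiles: {app_name: [feature_vector, ...]}"""
    return {app: centroid(vs) for app, vs in labeled_profiles.items()}

def classify(model, profile):
    return min(model, key=lambda app: distance(model[app], profile))

# Synthetic miss-rate signatures (one value per probed cache set);
# the app names and numbers are made up for the demo.
rng = random.Random(42)
def noisy(base):
    return [x + rng.gauss(0, 0.02) for x in base]

aes_like = [0.9, 0.1, 0.8, 0.2]
rsa_like = [0.2, 0.9, 0.1, 0.7]

model = train({
    "aes": [noisy(aes_like) for _ in range(20)],
    "rsa": [noisy(rsa_like) for _ in range(20)],
})
print(classify(model, noisy(aes_like)))  # → aes
```

The key property the paper exploits survives even in this toy version: distinct applications leave distinguishable cache footprints, so no inspection of the target's code is needed to label it.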