The Malware Menace: How Does It Find Its Way to Our Computers?
Mustaque Ahamad, Georgia Institute of Technology, USA • Wednesday April 5, 16:35
Most modern malware infections occur via the browser, typically due to social engineering or drive-by download attacks. A key question is what users do that exposes their computers to such malware infections. This talk will explore the “origin” of malware download attacks experienced by real network users, with the objective of improving malware download defenses. Specifically, we will discuss results of a study of web paths followed by users who eventually fall victim to different types of malware downloads. This study was enabled by a novel incident investigation system, named WebWitness, that was developed and deployed on a large academic network, where we collected and categorized thousands of live malicious download paths. An analysis of this labeled data allowed us to design a new defense against drive-by downloads which can reduce the infection rate of certain drive-by downloads almost six-fold on average, compared with existing URL blacklisting approaches. This system also allowed us to gain a better understanding of the tactics used by social engineering malware downloads, which can help us educate users against such attacks.
This work was done jointly with Terry Nelms, Roberto Perdisci and Manos Antonakakis.
Dr. Mustaque Ahamad is a professor of computer science at the Georgia Institute of Technology, where he has served on the faculty of the College of Computing since 1985. From 2012 to 2016, he also served as a global professor of engineering at New York University Abu Dhabi. He is chief scientist of Pindrop Security, which he co-founded in 2011. Dr. Ahamad was director of the Georgia Tech Information Security Center (GTISC) from 2004 to 2012. As director of GTISC, he helped develop several major research thrusts in areas that include security of converged communication networks, identity and access management, and security of healthcare information technology. Currently, he leads Georgia Tech’s educational programs in cyber security as associate director of its Institute for Information Security and Privacy. His research interests span distributed systems, computer security and dependable systems. He has published over one hundred research papers in these areas. Dr. Ahamad received his Ph.D. in computer science from the State University of New York at Stony Brook in 1985. He received his undergraduate degree in electrical and electronics engineering from the Birla Institute of Technology and Science, Pilani, India.
CoverUp: Privacy Through “Forced” Participation in Anonymous Communication Networks
Srdjan Capkun, ETH, Switzerland • Monday April 3, 13:30
The privacy guarantees of anonymous communication networks (ACNs) are bounded by the number of participants, who produce cover traffic for each other. As a consequence, an ACN can only achieve strong privacy guarantees if it succeeds in attracting a large number of active users. Conversely, weak privacy guarantees render an ACN unattractive, leading to a low number of users.
We build two applications on top of CoverUp: an anonymous feed and chat. We show that both achieve practical performance and strong privacy guarantees. To a network-level attacker, CoverUp makes voluntary and involuntary participants indistinguishable, thereby providing an anonymity set that includes both groups (i.e., all website visitors). Given this, CoverUp provides even more than mere anonymity: the voluntary participants can hide the very intention to use the ACN. As the concept of forced participation raises ethical and legal concerns, we discuss these concerns and describe how they can be addressed.
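The indistinguishability argument can be sketched in a few lines of Python (a toy model with invented names, not CoverUp’s actual protocol): all packets have the same fixed size, and since stream-cipher output is pseudorandom, a network observer cannot tell a voluntary participant’s encrypted message from an involuntary visitor’s random cover bytes.

```python
import hashlib
import secrets

PACKET_SIZE = 256  # every packet has the same size on the wire


def keystream(key: bytes, n: int) -> bytes:
    """Expand a key into n pseudorandom bytes (toy PRG for illustration;
    a real deployment would use an established stream cipher)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]


def cover_packet() -> bytes:
    """An involuntary participant (any site visitor) emits uniformly
    random bytes as cover traffic."""
    return secrets.token_bytes(PACKET_SIZE)


def real_packet(message: bytes, key: bytes) -> bytes:
    """A voluntary participant pads and encrypts a real message; without
    the key, the result is indistinguishable from cover traffic."""
    padded = message.ljust(PACKET_SIZE, b"\x00")
    ks = keystream(key, PACKET_SIZE)
    return bytes(a ^ b for a, b in zip(padded, ks))
```

Both packet types are byte-for-byte identical in size and statistically indistinguishable to an observer without the key, which is what yields the anonymity set of all website visitors.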
Srdjan Capkun (Srđan Čapkun) is a Full Professor in the Department of Computer Science, ETH Zurich and Director of the Zurich Information Security and Privacy Center (ZISC). He was born in Split, Croatia. He received his Dipl.-Ing. degree in Electrical Engineering / Computer Science from the University of Split in 1998, and his Ph.D. degree in Communication Systems from EPFL in 2004. Prior to joining ETH Zurich in 2006 he was a postdoctoral researcher in the Networked & Embedded Systems Laboratory (NESL), University of California Los Angeles and an Assistant Professor in the Informatics and Mathematical Modelling Department, Technical University of Denmark (DTU). His research interests are in system and network security. One of his main focus areas is wireless security. He is a co-founder of 3db Access, a company focusing on secure distance measurement and proximity-based access control, and of Sound-Proof, a spin-off focusing on usable online authentication. In 2016 he received an ERC Consolidator Grant for a project on securing positioning in wireless networks.
Security and Privacy Challenges for Aviation Networks
Ivan Martinovic, Oxford University, UK • Monday April 3, 17:15
In this talk we will discuss the security impact of wireless technologies used in the aviation sector. The ongoing move from traditional air traffic control systems such as radar and voice towards enhanced surveillance and communications systems using modern data networks causes a marked shift in the security of the aviation environment. Implemented through the European SESAR and the US NextGen programmes, several new air traffic control and communication protocols are currently being rolled out. Unfortunately, realistic threat models were not taken into account during their development: as digital avionics communication technologies become more widely accessible, traditional electronic warfare threat models are fast becoming obsolete.
Ivan is an Associate Professor at the Department of Computer Science, University of Oxford. Before coming to Oxford he was a postdoctoral researcher at the Security Research Lab, UC Berkeley and at the Secure Computing and Networking Centre, UC Irvine. From 2009 until 2011 he enjoyed a Carl-Zeiss Foundation Fellowship and was an associate lecturer at TU Kaiserslautern, Germany. He obtained his PhD from TU Kaiserslautern under the supervision of Prof. Jens B. Schmitt, and his MSc from TU Darmstadt, Germany.
The Case for System Command Encryption
David Naccache, ENS, France • Tuesday April 4, 11:40
In several popular standards (e.g. ISO 7816, ISO 14443 or ISO 11898) and IoT applications, a node (transponder, terminal) sends commands and data to another node (transponder, card) to accomplish an applicative task (e.g. a payment or a measurement).
Most standards encrypt and authenticate the data. However, as an application of Kerckhoffs’ principle, system designers usually consider that commands are part of the system specifications and must hence be transmitted in the clear, while the data that these commands process is encrypted and signed. While this assumption holds in systems representable by relatively simple state machines, leaking command information is undesirable when the addressed nodes offer the caller a large “toolbox” of commands that the addressing node can activate in many different orders to accomplish different applicative goals.
This work proposes protections that allow encrypting and protecting not only the data but also the commands associated with them. The practical implementation of this idea raises a number of difficulties. The first is that of defining a clear adversarial model, a question that we will not address in this paper. The difficulty comes from the application-specific nature of the harm that may stem from leaking the command sequence, as well as from the modeling of the observations that the attacker can make of the target node’s behavior (is a transaction accepted? is a door opened? is a packet routed? etc.).

This paper proposes a collection of empirical protection techniques allowing the sender to hide the sequence of commands sent; we discuss the advantages and shortcomings of each proposed method. Besides the evident use of nonces (or other internal system state) to make the encryption of identical commands vary over time, we also discuss the introduction of random delays between commands (to prevent inferring the next command from the time elapsed since the previous one), the splitting of a command followed by n data bytes into a collection of encrypted sub-commands conveying the n bytes in chunks of random sizes, and the appending of a random number of useless bytes to each packet. Independent commands can be permuted in time, or sent ahead of time and buffered. Another practically useful countermeasure consists in masking the number of commands by adding useless “null” command packets. In its best implementation, the flow of commands is sent in packets in which, at times, the sending node addresses several data and command chunks belonging to different successive commands in the sequence.
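Several of these countermeasures (fresh nonces, random-size chunking, fixed packet sizes and null packets) can be sketched together in a few lines of Python. All names and the toy cipher below are illustrative assumptions, not the paper’s concrete scheme; a real system would use an AEAD such as AES-GCM.

```python
import hashlib
import secrets

NULL_OPCODE = 0x00  # "useless" packet the receiver silently discards


def _toy_encrypt(key: bytes, nonce: bytes, body: bytes) -> bytes:
    """Toy nonce-based stream encryption (illustration only): the fresh
    nonce makes two encryptions of the same command differ, and padding
    to a fixed length hides real payload sizes."""
    ks = hashlib.sha256(key + nonce).digest()          # 32-byte keystream
    padded = body.ljust(len(ks), b"\x00")[:len(ks)]
    return nonce + bytes(a ^ b for a, b in zip(padded, ks))


def hide_command(key: bytes, opcode: int, data: bytes) -> list:
    """Emit one command as encrypted sub-command packets: the data is
    split into random-size chunks, each packet carries a fresh nonce,
    and null packets are interleaved to mask the command count."""
    packets, i = [], 0
    while i < len(data):
        n = secrets.randbelow(8) + 1                   # random chunk size
        body = bytes([opcode]) + data[i:i + n]
        packets.append(_toy_encrypt(key, secrets.token_bytes(8), body))
        i += n
        if secrets.randbelow(3) == 0:                  # occasional null packet
            packets.append(_toy_encrypt(key, secrets.token_bytes(8),
                                        bytes([NULL_OPCODE])))
    return packets
```

Every packet has the same wire size, so an observer learns neither the opcode, the chunk boundaries, nor the true number of commands sent.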
From an applicative standpoint, the protected node must be designed in a way that prevents, as much as possible, the external observation of the effects of a command. For instance, if a command must have an effect expressible within a time interval [ta,tb] then the system should randomly trigger the command’s effect between ta and tb. All the above recommendations will be summarized and listed as “prudent applicative design principles” that system designers could use and adapt when building new applications.
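The randomized-trigger recommendation above can be sketched as a small helper (hypothetical name, for illustration): the command’s externally visible effect fires at a uniformly random instant within the allowed interval, so its exact timing reveals nothing about which command was received.

```python
import random
import time


def trigger_within(effect, ta: float, tb: float) -> float:
    """Fire `effect` at a uniformly random instant in [ta, tb] seconds
    from now, preventing an observer from inferring the command from
    the precise moment its effect becomes visible."""
    delay = random.uniform(ta, tb)
    time.sleep(delay)
    effect()
    return delay
```

For example, a door-lock node whose specification allows opening the door within two seconds would call `trigger_within(open_door, 0.0, 2.0)` rather than acting immediately.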
A starting point for a theoretical framework for reasoning about generic command learning attacks may be the following. Let A(f, x) denote the set of states through which the machine passed while performing a run of protocol f on input x. Let H denote entropy and define discretion(f) = H(A(f, x) | x). We say that protocol f is more discreet than f’ if discretion(f) > discretion(f’). Discretion is a very interesting metric because it is independent of any specific attack; put differently, increasing discretion secures a system against attacks that haven’t been invented yet. Denote by f ~ f’ two alternative protocols computing the same output y, and let O(f) denote the time complexity of f. f is optimally discreet if, for every f’ such that f’ ~ f and f’ ∈ O(f), discretion(f) ≥ discretion(f’). Hence, a prudent system engineer must always use optimally discreet protocols. Classifying existing network protocols according to this criterion is, to the best of our knowledge, new. We propose to examine with Huawei several popular protocols and evaluate their discretion. A very interesting line of research will be the design of parameterized algorithms fw where both O(fw) and discretion(fw) are increasing functions of w. This would concretely measure the price at which extra calculations buy extra security (discretion); the marginality ratio discretion(fw)/log(O(fw)) reflects the number of discretion bits bought at the price of one work-factor bit.
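The discretion metric can be estimated empirically by sampling protocol runs. The sketch below is a hypothetical helper, assuming `f(x)` returns a hashable trace of visited states; it estimates the conditional entropy H(A(f, x) | x) by averaging the per-input trace entropy over a set of inputs treated as uniform.

```python
import math
from collections import Counter


def discretion(f, inputs, runs: int = 200) -> float:
    """Estimate discretion(f) = H(A(f, x) | x): for each input x, run
    the (possibly randomized) protocol f many times, measure the
    entropy of the observed state traces, and average over inputs."""
    total = 0.0
    for x in inputs:
        counts = Counter(f(x) for _ in range(runs))
        h = -sum((c / runs) * math.log2(c / runs)
                 for c in counts.values())
        total += h
    return total / len(inputs)
```

A deterministic protocol has discretion 0 (its trace is fully predictable from x), while a protocol that flips a fair coin between two execution paths approaches one bit of discretion, matching the intuition that randomized state evolution hides the command sequence.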
D. Naccache heads the ENS’ Information Security Group. His research areas are code security and the automated and manual detection of vulnerabilities. Before joining ENS he was a professor at UP2. He previously worked for 15 years for Gemalto, Oberthur & Technicolor. He is a forensic expert appointed by several courts, and the incumbent of the Law and IT Forensics chair at EOGN. ens-paris.fr.
Control-Flow Hijacking: Are We Making Progress?
Mathias Payer, Purdue, US • Tuesday April 4, 10:45
Memory corruption errors in C/C++ programs remain the most common source of security vulnerabilities in today’s systems. Over the last 10+ years we have deployed several defenses. Data Execution Prevention (DEP) protects against code injection – eradicating this attack vector. Yet, control-flow hijacking and code reuse remain challenging despite wide deployment of Address Space Layout Randomization (ASLR) and stack canaries. These defenses are probabilistic and rely on information hiding.
The deployed defenses complicate attacks, yet control-flow hijack attacks (redirecting execution to a location that would not be reached in a benign execution) are still prevalent. Attacks reuse existing gadgets (short sequences of code), often leveraging information disclosures to learn the location of the desired gadgets. Strong defense mechanisms have not yet been widely deployed due to (i) the time it takes to roll out a security mechanism, (ii) incompatibility with specific features, and (iii) performance overhead. In the meantime, only a set of low-overhead but incomplete mitigations has been deployed in practice.
Control-Flow Integrity (CFI) and Code-Pointer Integrity (CPI) are two promising upcoming defense mechanisms, protecting against control-flow hijacking. CFI guarantees that the runtime control flow follows the statically determined control-flow graph. An attacker may reuse any of the valid transitions at any control-flow transfer. We compare a broad range of CFI mechanisms using a unified nomenclature based on (i) a qualitative discussion of the conceptual security guarantees, (ii) a quantitative security evaluation, and (iii) an empirical evaluation of their performance in the same test environment. For each mechanism, we evaluate (i) protected types of control-flow transfers, (ii) the precision of the protection for forward and backward edges. For open-source compiler-based implementations, we additionally evaluate (iii) the generated equivalence classes and target sets, and (iv) the runtime performance. CPI on the other hand is a dynamic property that enforces selective memory safety through bounds checks for code pointers by separating code pointers from regular data.
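Conceptually, a forward-edge CFI check reduces to validating each indirect transfer against a statically computed target set (equivalence class). The Python sketch below models that idea with hand-written names; real CFI is enforced by the compiler at the machine-code level, and the target sets come from static analysis rather than a dictionary.

```python
class CFIViolation(Exception):
    """Raised when an indirect transfer leaves the static CFG."""


def deposit(amount):
    return f"deposit {amount}"


def withdraw(amount):
    return f"withdraw {amount}"


def admin_shell(amount):
    return "shell!"  # must never be reachable from the dispatch site


# Stand-in for the statically determined control-flow graph: each
# indirect call site may only transfer to its equivalence class.
CALL_SITE_TARGETS = {
    "transaction_dispatch": {deposit, withdraw},
}


def checked_call(site: str, fn, *args):
    """Runtime CFI check: abort if fn is not a valid target for site,
    even if an attacker corrupted the function pointer."""
    if fn not in CALL_SITE_TARGETS[site]:
        raise CFIViolation(f"{fn.__name__} is not a valid target at {site}")
    return fn(*args)
```

The precision question discussed in the talk maps directly onto the size of these sets: a coarse-grained policy puts many functions in one class (so `admin_shell` might slip in), while a precise policy keeps each class minimal.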
Mathias Payer is a security researcher and an assistant professor in computer science at Purdue University, leading the HexHive group. His research focuses on protecting applications in the presence of vulnerabilities, with a focus on memory corruption. He is interested in system security, binary exploitation, software-based fault isolation, binary translation/recompilation, and (application) virtualization.
Before joining Purdue in 2014, he spent two years as a postdoc in Dawn Song’s BitBlaze group at UC Berkeley. He graduated from ETH Zurich with a Dr. sc. ETH in 2012, focusing on low-level binary translation and security. He analyzed different exploit techniques and wondered how we can enforce integrity for a subset of data (e.g., code pointers). All prototype implementations are open source. In 2014, he founded the b01lers Purdue CTF team.
Security in Personal Genomics: Lest We Forget
Gene Tsudik, UC Irvine, US • Monday April 3, 14:30
Genomic privacy has attracted much attention from the research community, mainly because its risks are unique and breaches can leak highly personal and sensitive information. The much less explored topic of genomic security concerns mitigating the threat of a digitized genome being altered by its owner or an outside party, which can have dire consequences, especially in medical or legal settings. At the same time, many anticipated genomic applications (with varying degrees of trust) require only small amounts of genomic data. Supporting such applications requires a careful balance between security and privacy. Furthermore, the genome’s size raises performance concerns.
We argue that genomic security must be taken seriously and explored as a research topic in its own right. To this end, we discuss the problem space, identify the stakeholders, discuss assumptions about them, and outline several simple approaches based on common cryptographic techniques, including signature variants and authenticated data structures. We also present some extensions and identify opportunities for future research.
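One of the common cryptographic techniques the abstract mentions, authenticated data structures, can be illustrated with a Merkle tree over variant records: a verifier authenticates a small subset of the genome against a (signed) root without receiving the rest, matching the observation that many applications need only small amounts of genomic data. This is a minimal sketch with a hypothetical record format, not the authors’ actual construction.

```python
import hashlib


def _h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()


def _pad(hashes):
    """Pad leaf hashes to a power of two with a fixed filler hash."""
    n = 1
    while n < len(hashes):
        n *= 2
    return hashes + [_h(b"pad")] * (n - len(hashes))


def merkle_root(records):
    """Root hash committing to the full list of variant records; in
    practice a trusted party (e.g. a sequencing lab) would sign it."""
    level = _pad([_h(r) for r in records])
    while len(level) > 1:
        level = [_h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]


def merkle_proof(records, index):
    """Sibling hashes proving records[index] lies under the root,
    without revealing any other record."""
    level, proof = _pad([_h(r) for r in records]), []
    while len(level) > 1:
        proof.append(level[index ^ 1])
        level = [_h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
        index //= 2
    return proof


def merkle_verify(root, record, index, proof):
    """Check one record against the signed root using its proof."""
    node = _h(record)
    for sib in proof:
        node = _h(node + sib) if index % 2 == 0 else _h(sib + node)
        index //= 2
    return node == root
```

Any alteration of a record, by the genome’s owner or an outsider, invalidates its proof against the signed root, which is exactly the integrity property genomic security requires.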
Gene Tsudik is a Chancellor’s Professor of Computer Science at the University of California, Irvine (UCI). He obtained his PhD in Computer Science from USC in 1991. Before coming to UCI in 2000, he was at IBM Zurich Research Laboratory (1991-1996) and USC/ISI (1996-2000). Over the years, his research interests have included numerous topics in security and applied cryptography. Gene Tsudik is a Fulbright Scholar, a Fulbright Specialist, a fellow of the ACM, IEEE and AAAS, as well as a member of Academia Europaea. From 2009 to 2015 he was the Editor-in-Chief of ACM Transactions on Information and System Security (TISSEC).