Crypto APIs and JVM byte types

In a previous post, I talked about crypto API tradeoffs. In this post, I'll go into a specific API design case in caesium, my cryptographic library for Clojure, a language that runs on the Java Virtual Machine.

JVM byte types

The JVM has several standard byte types. For one-shot cryptographic APIs, the two most relevant ones are byte arrays (also known as byte[]) and java.nio.ByteBuffer. Unfortunately, they have different pros and cons, so there is no unambiguously superior choice.

ByteBuffer can produce slices of byte arrays and other byte buffers with zero-copy semantics. This makes it a useful tool when you want to place an encrypted message in a pre-allocated binary format. One example of this is my experimental NMR suite. Another use case is generating more than one key out of a single call to a key derivation function. The call produces one (long) output, and ByteBuffer lets you slice it into different keys.
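
To make the slicing use case concrete, here's a minimal Java sketch. The 64-byte KDF output and the 32-byte key sizes are made-up numbers for illustration; the point is that the slices are views into the same memory, with no copying.

import java.nio.ByteBuffer;

public class KdfSlicing {
    public static void main(String[] args) {
        // Pretend these 64 bytes came out of a single KDF call.
        byte[] kdfOutput = new byte[64];

        // Wrap the array without copying it.
        ByteBuffer buf = ByteBuffer.wrap(kdfOutput);

        // The first 32 bytes become one key...
        ByteBuffer encryptionKey = (ByteBuffer) buf.slice().limit(32);

        // ...and the last 32 bytes become another. Both are views into
        // the same underlying memory; nothing was copied.
        buf.position(32);
        ByteBuffer macKey = buf.slice();

        System.out.println(encryptionKey.remaining()); // 32
        System.out.println(macKey.remaining());        // 32
    }
}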

Byte arrays are easily serializable, but ByteBuffer is not. Even if you teach your serialization library about ByteBuffer, this usually results in extra copying during serialization.

Byte arrays are constant length, and that length is stored with the array, so it's cheap to access. Figuring out how much to read from a ByteBuffer requires a (trivial) amount of math by calling remaining. This is because the ByteBuffer is a view, and it can be looking at a different part of the underlying memory at different times. For a byte array, this is all fixed: a byte array's starting and stopping points remain constant. Computing the remaining length of a ByteBuffer may not always be constant time, although it probably is. Even if it isn't, it's probably not in a way that is relevant to the security of the scheme (in caesium, only cryptographic hashes, detached signatures and detached MACs don't publicly specify the message length).
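
A tiny sketch of that difference, using nothing beyond the standard library: the array's length never changes, while remaining depends on where the view currently points.

import java.nio.ByteBuffer;

public class RemainingDemo {
    public static void main(String[] args) {
        byte[] arr = new byte[16];
        System.out.println(arr.length);      // always 16

        ByteBuffer buf = ByteBuffer.wrap(arr);
        System.out.println(buf.remaining()); // 16: the view covers the whole array

        buf.position(4).limit(12);           // move the view around
        System.out.println(buf.remaining()); // 8: same memory, smaller window
    }
}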

ByteBuffer has a public API for allocating direct buffers. This means they are not managed by the JVM. Therefore they won't be copied around by the garbage collector, and memory pinning is free. "Memory pinning" means that you notify the JVM that some external C code is using this object, so it should not be moved around or garbage collected until that code is done using that buffer. You can't pass "regular" (non-direct) buffers to C code directly; when you do, the buffer is first copied under the hood. Directly allocated buffers let you securely manage the entire lifecycle of the buffer. For example, they can be securely zeroed out after use. Directly allocated ByteBuffer instances might have underlying arrays; this is explicitly unspecified. Therefore, going back to an array might be zero-copy. In my experiments, these byte buffers never have underlying arrays, so copying is always required. I have not yet done further research to determine if this is generally the case. In addition to ByteBuffer, the sun.misc.Unsafe class does have options for allocating memory directly, but it's pretty clear that use of that class is strongly discouraged. Outside of the JDK, the Pointer API in jnr-ffi works identically to ByteBuffer.
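
Here's a minimal sketch of that lifecycle control, assuming nothing beyond java.nio; the 32-byte size is arbitrary.

import java.nio.ByteBuffer;

public class DirectBuffers {
    public static void main(String[] args) {
        // Directly allocated: the memory lives outside the garbage-collected
        // heap, so native code can use it without an extra copy.
        ByteBuffer key = ByteBuffer.allocateDirect(32);

        // In my experiments, direct buffers report no backing array, so
        // getting a byte[] back out means copying.
        System.out.println(key.hasArray()); // typically false for direct buffers

        // ... hand the buffer to native code, use the key ...

        // Because we control the lifecycle, we can zero the memory when done.
        key.clear();
        while (key.hasRemaining()) {
            key.put((byte) 0);
        }
    }
}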

Design decisions

As a brief recap from my previous post, it's important that we design an API that makes common things easy and hard things possible while remaining secure and performant. For the cryptographic APIs in caesium, there are a number of variables to consider:

  • Are the return types and arguments ByteBuffer instances, byte arrays ([B), Pointer instances, or something else?
  • Is the return type fixed per exposed function, or is the return type based on the input types, the way Clojure's empty works?
  • Are the APIs "C style" (which passes in the output buffer as an argument) or "functional style" (which allocates the output buffer for you)?
  • Does the implementation convert to the appropriate type (which might involve copying), does it use reflection to find the appropriate type, does it explicitly dispatch on argument types, or does it assume you give it some specific types?

Many of these choices are orthogonal, meaning we can choose them independently. With dozens of exposed functions, half a dozen or so arguments per function with 2-4 argument types each, two function styles, four argument conversion styles, and two ways of picking the return type, this easily turns into a combinatorial explosion of many thousands of exposed functions.

All of these choices pose trade-offs. We've already discussed the differences between the different byte types, so I won't repeat them here. Having the function manage the output buffer for you is the most convenient option, but it also precludes using direct byte buffers effectively. Type conversion is most convenient, but type dispatch is faster, and statically resolvable dispatching to the right implementation is faster still. The correct return value depends on context. Trying to divine what the user really wanted is tricky, and, as we discussed before, the differences between those types are significant.
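
To make the conversion option concrete, here's a minimal sketch (not caesium's actual code) of coercing whatever byte sequence the caller passed into the byte[] a binding might want. The copy in the ByteBuffer branch is exactly the overhead that dispatching on types, or statically resolving the right implementation, avoids.

import java.nio.ByteBuffer;

public class ArgumentHandling {
    // Conversion: always produce the type the binding wants, copying if needed.
    static byte[] toBytes(Object arg) {
        if (arg instanceof byte[]) {
            return (byte[]) arg; // already the right type: no copy
        }
        if (arg instanceof ByteBuffer) {
            ByteBuffer buf = ((ByteBuffer) arg).duplicate();
            byte[] copy = new byte[buf.remaining()];
            buf.get(copy); // a copy we'd rather avoid for large messages
            return copy;
        }
        throw new IllegalArgumentException("unsupported byte sequence type");
    }
}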

The functions exposed in caesium live on the inside of a bigger system, in the same sense that IO libraries like Twisted and manifold live on the edges. Something gives you some bytes, you perform some cryptographic operations on them, and then the resulting bytes go somewhere else. This is important, because it reduces the number of contexts in which people end up with particular types.

Implementing the API

One easy decision is that the underlying binding should support every permutation, regardless of what the API exposes. This would most likely involve annoying code generation in a regular Java/jnr-ffi project, but caesium is written in Clojure. The information on how to bind libsodium is a Clojure data structure that gets compiled into an interface, which is what jnr-ffi consumes. This makes it easy to expose every permutation, since it's just some code that operates on a value. You can see this at work in the caesium.binding namespace. As a consequence, an expert implementer (who knows exactly which underlying function they want to call with no "smart" APIs or performance overhead) can always just drop down to the binding layer.

Another easy call is that all APIs should raise exceptions, instead of returning success codes. Success codes make sense for a C API, because there's no reasonable exception mechanism available. However, problems like failed decryption should definitely just raise exceptions.

It gets tricky when we compare APIs that take an output buffer versus APIs that build the output buffer for you. The latter are clearly the easiest to use, but the former are necessary for explicit buffer life cycle management. You can also easily build the managed version from the unmanaged version, but you can't do the converse. As a consequence, we should expose both.
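
As a sketch of why exposing both is cheap, consider a hypothetical secretbox-style API; the names and the 16-byte overhead are made up and are not caesium's actual functions. The version that manages the output for you is a thin wrapper around the version that takes an output buffer, while the converse isn't possible.

import java.nio.ByteBuffer;

interface SecretBox {
    // Caller-managed output: needed for direct buffers and explicit
    // lifecycle control.
    void sealInto(ByteBuffer out, byte[] message, byte[] nonce, byte[] key);

    // The managed version is just a thin wrapper around the unmanaged one.
    default byte[] seal(byte[] message, byte[] nonce, byte[] key) {
        byte[] out = new byte[message.length + 16]; // made-up overhead
        sealInto(ByteBuffer.wrap(out), message, nonce, key);
        return out;
    }
}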

Having to expose both has the downside that we haven't put a dent in that combinatorial explosion of APIs yet. Let's consider the cases in which someone might have a byte buffer:

  • They're using them as a slice of memory, where the underlying memory could be another byte buffer (direct or indirect) or a byte array -- usually a byte buffer wrapping a byte array.
  • They're managing their own (presumably direct) output buffers.

In the former case, the byte buffers primarily act as inputs. In the latter, they exclusively act as outputs. Because both byte buffers and byte arrays can act as inputs, any API should be flexible in what it accepts. However, this asymmetry in how the types are used, and how they can be converted, has consequences for APIs where the caller manages the output buffer versus APIs that manage it for you.

When the API manages the output buffer for you, the most reasonable return type is a byte array. There is no difference between byte arrays created by the API and those created by the caller, and there's no reasonable way to reuse them. If you really do need a byte buffer for some reason, wrapping that output array is simple and cheap. Conversely, APIs where the caller manages the output buffer should use output byte buffers. Callers who are managing their own buffers need to call an API that supports that, and there's nothing to be gained from managing your own byte arrays (only direct byte buffers). This is fine for internal use within caesium: the byte-array-producing API can just wrap its array in a byte buffer view.

This means we've reduced the surface significantly: APIs with caller-managed buffers output to ByteBuffer, and APIs that manage it themselves return byte arrays. This takes care of the output types, but not the input types.

Keys, salts, nonces, messages et cetera will usually be byte arrays, since they're typically just read directly from a file or made on the spot. However rare it may be, there can be good reasons for having any of these as byte buffers. For example, a key might have been generated from a different key using a key derivation function; a nonce might be synthetically generated (as with deterministic or nonce-misuse resistant schemes); either might be randomly generated, but into a pre-existing buffer.

The easiest way for this to work by default is reflection. That mostly works, until it doesn't. Firstly, reflection can be brittle. For example, if all of your byte sequence types are known but the type of a buffer length isn't, Clojure's reflection will fail to find the appropriate method, even if it is unambiguous. Secondly, unannotated Clojure fns always take boxed objects, not the primitives we want to pass when calling into C. Annotating is imperfect, too, because it moves the onus of producing a primitive to the caller. These aren't really criticisms of Clojure. At this point we're well into weird edge case territory which this system wasn't designed for.

We can't do static dispatch for the public API, because we've established that we should be flexible in our input types. We can work around the unknown type problems with reflection using explicitly annotated call sites. That means we're dispatching on types, which comes with its own set of issues. In the next blog post, I'll go into more detail on how that works, with a bunch of benchmarks. Stay tuned!

Tradeoffs in cryptographic API design

Producing cryptographic software is a difficult and specialized endeavor. One of the pitfalls is that getting it wrong looks exactly like getting it right. Much like a latent memory corruption bug or a broken distributed consensus algorithm, a piece of cryptographic software can appear to be functioning perfectly, while being subtly broken in a way that only comes to light years later. As the adage goes, attacks never get worse; they only get better. Implementation concerns like timing attacks can be fiendishly complicated to solve, involving problems like division instructions on modern Intel CPUs taking a variable number of cycles depending on the size of the input. Implementation concerns aren't the only problem; just designing the APIs themselves is a complex task as well.

Like all API design, cryptographic API design is a user experience exercise. It doesn't matter how strong or fast your cryptographic software is if no one uses it. The people who end up with ECB mode didn't end up with it because they understood what that meant. They got stuck with it because it was the default and it didn't require thinking about scary parameters like IVs, nonces, salts and tweaks. Even if someone ended up with CTR or CBC, these APIs are still precarious; they'll still be vulnerable to issues like nonce reuse, fixed IV, key-as-IV, unauthenticated encryption...

User experience design always means deep consideration of who your users are. A particular API might be necessary for a cryptographic engineer to build new protocols, but that API is probably not a reasonable default encryption API. An explicit-nonce encryption scheme is great for a record layer protocol between two peers like TLS, but it's awful for someone trying to encrypt a session cookie. We can't keep complaining about people getting it wrong when we keep giving them no chances at getting it right. This is why I'm building educational material like Crypto 101 and why I care about constructions like nonce-misuse resistant schemes that are easier to use correctly. (The blog post on my new nonce-misuse resistant schemes for libsodium is coming soon, I promise!)

Before you can make your API easy to use, first you have to worry about getting it to work at all.

An underlying cryptographic library might expose an unfortunate API. It might be unwieldy because of historical reasons, backwards compatibility, language limitations, or even simple oversight. Regardless of why the API is the way it is, even minute changes to it—a nicer type, an implied parameter—might have subtle but catastrophic consequences for the security of the final product. Figuring out if an arbitrary-length integer in your programming language is interchangeable with other representations, like the implementation in your crypto library or a char *, has many complex facets. It doesn't just have to be true under some conditions; ideally, it's true for every platform your users will run your software on, in perpetuity.

There might be an easy workaround to an annoying API. C APIs often take a char * together with a length parameter, because C doesn't have a standard way of passing a byte sequence together with its length. Most higher level languages, including Java and Python, have byte sequence types that know their own length. Therefore, you can specify the char * and its associated length in a single parameter on the high-level side. That's just the moral equivalent of building a small C struct that holds both. (Whether or not you can trust C compilers to get anything right at all is a point of contention.)
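
As a sketch of what that looks like from the JVM side, imagine a C function shaped like int crypto_hash(unsigned char *out, const unsigned char *in, unsigned long long inlen). A hypothetical Java-level binding (the names and the 64-byte output size below are made up) takes a single byte[] and derives the length itself:

public class LengthWrapping {
    // Stand-in for a generated native binding; not a real jnr-ffi interface.
    interface NativeLib {
        int nativeHash(byte[] out, byte[] in, long inLen);
    }

    // The high-level wrapper passes the byte[] and its length together,
    // the moral equivalent of a small struct holding pointer plus length.
    static byte[] hash(NativeLib lib, byte[] message) {
        byte[] out = new byte[64];
        int rc = lib.nativeHash(out, message, message.length);
        if (rc != 0) {
            throw new RuntimeException("hash failed with return code " + rc);
        }
        return out;
    }
}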

These problems compound when you are binding libraries in languages and environments with wildly different semantics. For example, your runtime might have a relocating garbage collector. Pointers in C and objects in CPython stay put, but objects move around all the time in environments like the JVM (HotSpot) or PyPy. That implies copying to or from a buffer whenever you call C code, unless the underlying virtual machine supports "memory pinning": forcing the object to stay put for the duration of the call.

Programmers normally operate in a drastically simplified model of the world. We praise programming designs for their ability to separate concerns, so that programmers can deal with one problem at a time. The modern CPU your code runs on is always an intricate beast, but you don't worry about cache lines when you're writing a Python program. Only a fraction of programmers ever has to worry about them at all. Those that do typically only do so after the program already works so they can still focus on one part of the problem.

When designing cryptographic software, these simplified models we normally program in don't generally work. A cryptographic engineer often needs to worry about concerns all the way up and down the stack simultaneously: from application layer concerns, to runtime semantics like the Java Language Specification, to FFI semantics and the C ABI on all relevant platforms, to the underlying CPU, to the mathematical underpinnings themselves. The engineer has to manage all of those, often while being hamstrung by flawed designs like TLS' MAC-then-pad-then-encrypt mess.

In future blog posts, I'll go into more detail about particular cryptographic API design concerns, starting with JVM byte types. If you're interested, you should follow me on Twitter or subscribe to my blog's feed.

Footnote: I'm happy to note that cffi now also has support for memory pinning since PyPy will support it in the upcoming 5.2 release, although that means I'll no longer be able to make Paul Kehrer of PyCA fame jealous with the pinning support in caesium.

Nonce misuse resistance 101

This post is an introduction to nonce-misuse resistant cryptosystems and why I think they matter. The first part of this post is about nonce-based authenticated encryption schemes: how they work, and how they fail. If you're already familiar with them, you can skip to the section on protocol design. If you're completely new to cryptography, you might like my free introductory course to cryptography, Crypto 101. In a future blog post, I'll talk about some nonce-misuse resistant schemes I've implemented using libsodium.

Many stream ciphers and stream cipher-like constructions such as CTR, GCM, (X)Salsa20... take a nonce. You can think of it as a pointer that lets you jump to a particular point in the keystream. This makes these ciphers "seekable", meaning that you can decrypt a small part of a big ciphertext, instead of having to decrypt everything up to that point first. (That ends up being trickier than it seems, because you still want to authenticate that small chunk of ciphertext, but that's a topic for another time.)

The critical security property of a nonce is that it's never repeated under the same key. You can remember this by the mnemonic that a nonce is a "number used once". If you were to repeat the nonce, the keystream would also repeat. That means that an attacker can take the two ciphertexts and XOR them to compute the XOR of the plaintexts. If C_n are ciphertexts, P_n plaintexts, K_n keystreams, and ^ is bitwise exclusive or:

C_1 = K_1 ^ P_1
C_2 = K_2 ^ P_2

The attacker just XORs C_1 and C_2 together:

C_1 ^ C_2 = K_1 ^ P_1 ^ K_2 ^ P_2

Since XOR is commutative (you can rearrange the order of the terms), and since the repeated nonce means K_1 = K_2, XOR'ing the two equal keystreams cancels them out:

C_1 ^ C_2 = P_1 ^ P_2

That tells an attacker a lot about the plaintext, especially if part of one of the plaintexts is predictable. If the attacker has access to an encryption oracle, meaning that they can get encryptions for plaintexts of their choosing, they can even get perfect decryptions. That is not an unrealistic scenario. For example, if you're encrypting session cookies that contain the user name and e-mail, I can register using a name and e-mail address that has a lot of Z characters, and then I know that just XORing with Z will reveal most of the plaintext. For an idea of the state of the art in attacking two-time pads (the usual term for two ciphertexts with a reused keystream), see Mason06.
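
Here's a small, self-contained sketch of that cancellation, with a hard-coded stand-in for the repeated keystream:

public class TwoTimePad {
    static byte[] xor(byte[] a, byte[] b) {
        byte[] out = new byte[a.length];
        for (int i = 0; i < a.length; i++) {
            out[i] = (byte) (a[i] ^ b[i]);
        }
        return out;
    }

    public static void main(String[] args) {
        // The same keystream, because the same key and nonce were used twice.
        byte[] keystream = "KEYSTREAMKEYSTRE".getBytes();
        byte[] p1 = "attack at dawn!!".getBytes();
        byte[] p2 = "retreat at noon!".getBytes();

        byte[] c1 = xor(keystream, p1);
        byte[] c2 = xor(keystream, p2);

        // The attacker only sees c1 and c2, yet recovers P_1 ^ P_2.
        boolean leaks = java.util.Arrays.equals(xor(c1, c2), xor(p1, p2));
        System.out.println(leaks); // true
    }
}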

Protocol design

For many on-line protocols like TLS, the explicit nonce provides a convenient way to securely send many messages under a per-session key. Because the critical security property for a nonce is that it is never repeated with the same key, it's safe to use a counter. In protocols where both peers send messages to each other, you can just have one peer use odd nonces and have the other use even ones. There are some caveats here: for example, if the nonce size is sufficiently small, an attacker might try to make that counter overflow, resulting in a repeated nonce.
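
A minimal sketch of the odd/even counter idea, assuming an 8-byte nonce (as with Salsa20) and ignoring the overflow caveat above:

import java.nio.ByteBuffer;

public class CounterNonces {
    private long counter;

    // One peer starts at 0 (even nonces), the other at 1 (odd nonces),
    // and both increment by 2, so the two sides can never collide.
    CounterNonces(boolean initiator) {
        this.counter = initiator ? 0 : 1;
    }

    byte[] next() {
        byte[] nonce = new byte[8];
        ByteBuffer.wrap(nonce).putLong(counter);
        counter += 2;
        return nonce;
    }
}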

For off-line (or at-rest) protocols, it's a little trickier. You don't have a live communication channel to negotiate a new ephemeral key over, so you're stuck with longer-term keys or keys derived from them. If multiple systems are participating, you need to decide ahead of time which systems own which nonces. Even then, systems need to keep track of which nonces they've used. That doesn't work well, especially not in a distributed system where nodes and connections can fail at any time. This is why some cryptosystems like Fernet provide an API that doesn't require you to specify anything besides a key and a message.

One solution is to use randomized nonces. Since nonces can't repeat, random nonces should be large: if they're too small, you might randomly select the same nonce twice, per the birthday bound. That is the only difference between Salsa20 and XSalsa20: Salsa20 has a 64 bit nonce, whereas XSalsa20 has a 192 bit nonce. That change exists explicitly to make random nonces secure.

Picking a random nonce and just prepending it to the secretbox ciphertext is secure, but there are a few problems with this approach. It's not clear to practitioners that that's a secure construct. Doing this may seem obvious to a cryptographer, but not to someone who just wants to encrypt a message. Prepending a nonce doesn't feel much different from e.g. appending a MAC. A somewhat knowledgeable practitioner knows that there's plenty of ways to use MACs that are insecure, and they don't immediately see that the prefix-nonce construction is secure. Not wanting to design your own cryptosystems is a good reflex which we should be encouraging.
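
For what it's worth, the construction itself is tiny. Here's a sketch built around hypothetical secretboxSeal/secretboxOpen stand-ins (not any real library's API); the 24-byte nonce size matches XSalsa20.

import java.nio.ByteBuffer;
import java.security.SecureRandom;

public abstract class PrefixNonce {
    static final int NONCE_BYTES = 24;
    static final SecureRandom RNG = new SecureRandom();

    // Hypothetical stand-ins for an XSalsa20-Poly1305-style secretbox.
    abstract byte[] secretboxSeal(byte[] message, byte[] nonce, byte[] key);
    abstract byte[] secretboxOpen(byte[] box, byte[] nonce, byte[] key);

    byte[] encrypt(byte[] message, byte[] key) {
        byte[] nonce = new byte[NONCE_BYTES];
        RNG.nextBytes(nonce);                 // random nonce, sent in the clear
        byte[] box = secretboxSeal(message, nonce, key);
        return ByteBuffer.allocate(nonce.length + box.length)
                         .put(nonce).put(box).array();
    }

    byte[] decrypt(byte[] blob, byte[] key) {
        ByteBuffer buf = ByteBuffer.wrap(blob);
        byte[] nonce = new byte[NONCE_BYTES];
        buf.get(nonce);                       // peel the nonce back off
        byte[] box = new byte[buf.remaining()];
        buf.get(box);
        return secretboxOpen(box, nonce, key);
    }
}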

Random nonces also mean that any system sending messages needs access to high-quality random number generators while they're sending a message. That's often, but not always true. Bugs around random number generation, especially userspace CSPRNGs, keep popping up. This is often a consequence of poor programming practice, but it can also be a consequence of poorly-configured VMs or limitations of embedded hardware.

Nonce-misuse resistant systems

To recap, not all protocols have the luxury of an obvious nonce choice, and through circumstances or poor practices, nonces might repeat anyway. Regardless of how cryptographers feel about how important nonce misuse is, we can anecdotally and empirically verify that such issues are real and common. This is true even for systems like TLS where there is an "obvious" nonce available (Böck et al, 2016). It's easy to point fingers, but it's better to produce cryptosystems that fail gracefully.

Rogaway and Shrimpton (2006) defined a new model called nonce-misuse resistance. Informally, nonce-misuse resistant schemes ensure that a repeated nonce doesn't result in plaintext compromise. In the case of a broken system where the attacker can cause repeated nonces, an attacker will only be able to discern if a particular message repeated, but they will not be able to decrypt the message.

Rogaway and Shrimpton also later developed a mode of operation called SIV (synthetic IV), which Gueron and Lindell refined into GCM-SIV, a SIV-like mode that takes advantage of fast GCM hardware implementations. Those two authors are currently working with Adam Langley to standardize the AES-GCM-SIV construction through CFRG. AEZ and HS1-SIV, two entries in the CAESAR competition, also feature nonce-misuse resistance. CAESAR is an ongoing competition, and GCM-SIV is not officially finished yet, so this is clearly a field that is still evolving.

There are parallels between nonce-misuse resistance and length extension attacks. Both address issues that arguably only affected systems that were doing it wrong to begin with. (Note, however, in the embedded case above, it might not be a software design flaw but a hardware limitation.) Fortunately, the SHA-3 competition showed that you can have increased performance and still be immune to a class of problems. I'm hopeful that CAESAR will consider nonce-misuse resistance an important property of an authenticated encryption standard.

Repeated messages

Repeated messages are suboptimal, and in some protocols they might be unacceptable. However, they're a fail-safe failure mode for nonce misuse. You're not choosing to have a repeated ciphertext, you're just getting a repeated ciphertext instead of a plaintext disclosure (where the attacker would also know that you repeated a message). In the case of a secure random nonce, a nonce-misuse resistant scheme is just as secure, at the cost of a performance hit.

In a context where attackers can see individual messages to detect repeated ciphertexts, it makes sense to also consider a model where attackers can replay messages. If replaying messages (which presumably have side effects) is a problem, a common approach is to add a validity timestamp. This is a feature of Fernet, for example. A device that doesn't have access to sufficient entropy will still typically have access to a reasonably high-resolution clock, which is still more than good enough to make sure the synthetic IVs don't repeat either.

OK, but how does it work?

Being able to trade plaintext disclosure for the attacker merely detecting repeated messages sounds like magic, but it makes sense once you realize how these schemes work. As demonstrated at the start of this post, nonce reuse normally allows an attacker to have two keystreams cancel out. That only works if two distinct messages are encrypted using the same (key, nonce) pair. NMR schemes solve this by making the nonce also depend on the message itself. Informally, it means that a nonce should never repeat for two distinct messages. Therefore, an attacker can't cancel out the keystreams without cancelling out the messages themselves as well.
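
A deliberately simplified sketch of that idea, in the spirit of SIV: derive the IV from the message with a PRF (HMAC here), then encrypt under that IV. This illustrates the structure only; it is not a real or secure construction, and streamEncrypt is a hypothetical stand-in.

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public abstract class SivSketch {
    // Hypothetical stand-in for a stream cipher keyed by encKey and iv.
    abstract byte[] streamEncrypt(byte[] message, byte[] iv, byte[] encKey);

    byte[] syntheticIv(byte[] macKey, byte[] message) throws Exception {
        Mac hmac = Mac.getInstance("HmacSHA256");
        hmac.init(new SecretKeySpec(macKey, "HmacSHA256"));
        return hmac.doFinal(message); // the "nonce" depends on the message
    }

    byte[] seal(byte[] macKey, byte[] encKey, byte[] message) throws Exception {
        byte[] iv = syntheticIv(macKey, message);
        byte[] ciphertext = streamEncrypt(message, iv, encKey);
        // Identical messages give identical output; distinct messages give
        // distinct IVs (with overwhelming probability), so keystreams never
        // cancel out. The IV is sent along with the ciphertext.
        byte[] out = new byte[iv.length + ciphertext.length];
        System.arraycopy(iv, 0, out, 0, iv.length);
        System.arraycopy(ciphertext, 0, out, iv.length, ciphertext.length);
        return out;
    }
}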

This model does imply off-line operation, in that the entire message has to be scanned before the nonce can be computed. For some protocols, that may not be acceptable, although plenty of protocols work around this assumption by simply making individual messages sufficiently small.

Thanks to Aaron Zauner and Kurt Griffiths for proofreading this post.

Supersingular isogeny Diffie-Hellman 101

Craig Costello, Patrick Longa and Michael Naehrig, three cryptographers at Microsoft Research, recently published a paper on supersingular isogeny Diffie-Hellman. This paper garnered a lot of interest in the security community and even made it to the front page of Hacker News. Most of the discussion around it seemed to be how no one understands isogenies, even within cryptography-literate communities. This article aims to give you a high-level understanding of what this cryptosystem is and why it works.

This post assumes that you already know how Diffie-Hellman works in the abstract, and that you know elliptic curves are a mathematical construct that you can use to perform Diffie-Hellman operations, just like you can with the integers mod p (that would be "regular" Diffie-Hellman). If that was gibberish to you and you'd like to know more, check out Crypto 101, my free introductory book on cryptography. You don't need a math background to understand those concepts at a high level. The main difference is that Crypto 101 sticks to production cryptography, while this is still experimental.

It's not surprising that isogeny-based cryptography is so confusing. Up until recently, it was unambiguously in the realm of research, not even close to being practically applicable. Its mathematical underpinnings are much more complex than regular elliptic curves, let alone integers mod p. It also looks superficially similar to elliptic curve Diffie-Hellman, which only adds to the confusion.

With that, let's begin!

What is this paper about?

Supersingular isogeny Diffie-Hellman (SIDH) is one of a handful of "post-quantum" cryptosystems. Those are cryptosystems that will remain secure even if the attacker has access to a large quantum computer. This has nothing to do with quantum cryptography (for example, quantum key distribution) beyond their shared quantum mechanical underpinning.

Why should I care about quantum computers?

Quantum computers are not useful as general-purpose computing devices, but they can solve some problems much faster than classical computers. Classical computers can emulate quantum computers, but only with exponential slowdown. A sufficiently large quantum computer could break most production cryptography, including cryptosystems based on the difficulty of factoring large numbers (like RSA), taking discrete logs over the integers mod p (like regular DH), or taking discrete logs over elliptic curves (like ECDH and ECDSA). To quantify that, consider the following table:

[Table: quantum attack cost (qubits, time) versus classical attack time, as a function of n]

In this table, n refers to the modulus size for RSA, and the field size for ECC. Look at the rightmost column, which represents time taken by the classical algorithm, and compare it to the "time" columns, which represent how much a quantum computer would take. As n increases, the amount of time the quantum computer would take stays in the same ballpark, whereas, for a classical computer, it increases (almost) exponentially. Therefore, increasing n is an effective strategy for keeping up with ever-faster classical computers, but it is ineffective at increasing the run time for a quantum computer.

Aah! Why isn't everyone panicking about this?!

The good news is that these large quantum computers don't exist yet.

If you look at the qubits column, you'll see that these attacks require large universal quantum computers. The state of the art in those only has a handful of qubits. In 2011, IBM successfully factored 143 using a 4-qubit quantum computer. Scaling the number of qubits up is troublesome. In that light, larger key sizes may prove effective after all; we simply don't know yet how hard it is to build quantum computers that big.

D-Wave, a quantum computing company, has produced computers with 128 and 512 qubits, and even more than 1000 qubits. While there is some debate over whether D-Wave machines provide quantum speedup, or are even real quantum computers at all, there is no debate that they are not universal quantum computers. Specifically, they only claim to perform one particular kind of computation, called quantum annealing. The 1000-qubit D-Wave 2X cannot factor RSA moduli of ~512 bits or solve discrete logs on curves of ~120 bits.

The systems at risk implement asymmetric encryption, signatures, and Diffie-Hellman key exchanges. That's no accident: all post-quantum alternatives are asymmetric algorithms. Post-quantum secure symmetric cryptography is easier: we can just use bigger key sizes, which are still small enough to be practical and result in fast primitives. Quantum computers simply halve the security level, so all we need to do to maintain a 128 bit security level is to use ciphers with 256 bit keys, like Salsa20.

Quantum computers also have an advantage against SIDH, but both are still exponential in the field size. The SIDH scheme in the new paper has 192 bits of security against a classical attacker, but still has 128 bits of security against a quantum attacker. That's in the same ballpark as most symmetric cryptography, and better than the 2048-bit RSA certificates that underpin the security of the Internet.

What makes this paper special?

Post-quantum cryptography has been firmly in the realm of academic research and experiments. This paper makes significant advancements in how practically applicable SIDH is.

Being future-proof sounds good. If this makes it practical, why don't we start using it right now?

SIDH is a young cryptosystem in a young field, and hasn't had the same level of scrutiny as some of the other post-quantum cryptosystems, let alone the "regular" cryptosystems we use daily. Attacks only get better, they never get worse. It's possible that SIDH is insecure, and we just don't know how to break it yet. It does have a good argument for why quantum algorithms wouldn't be able to crack it (more on that later), but that's a hypothesis, not a proof.

The new performance figures from this paper are impressive, but this system is still much slower than the ones we use today. Key generation and key exchange take a good 50 million cycles or so each. That's about a thousand times slower than Curve25519, a curve designed about 10 years ago. Key sizes are also much larger: SIDH public keys are 751 bytes, whereas Curve25519 keys are only 32 bytes. For on-line protocols like HTTPS operating over TCP, that's a significant cost.

Finally, there are issues with implementing SIDH safely. Systems like Diffie-Hellman over integers mod p are much less complex than elliptic curve Diffie-Hellman (ECDH), let alone SIDH. With ECDH and ECC in general, we've seen new implementation difficulties, especially with early curves. Point addition formulas would work, unless you were adding a point to itself. You have to check that input points are on the curve, or leak the secret key modulo some small order. These are real implementation problems, even though we know how to solve them.

This is nothing compared to the difficulties implementing SIDH. Currently, SIDH security arguments rely on honest peers. A peer that gives you a pathological input can utterly break the security of the scheme. To make matters worse, while we understand how to verify inputs for elliptic curve Diffie-Hellman, we don't have a way to verify inputs for isogeny-based cryptography at all. We don't have much research to fall back on here either. This isn't a SIDH-specific problem; post-quantum cryptography isn't mature enough yet to have implementation issues like these nailed down yet. (For an example from lattice-based cryptography, see the recent paper by Bindel et al.)

I don't want to diminish the importance of this paper in any way! Just because it's not something that your browser is going to be doing tomorrow doesn't mean it's not an impressive accomplishment. It's just a step on the path that might lead to production crypto one day.

OK, fine. Why is this so different from elliptic curve Diffie-Hellman?

While SIDH and ECDH both use elliptic curves, they're different beasts. SIDH generates new curves to perform a DH exchange, whereas ECDH uses points on one fixed curve. These supersingular curves also have different properties from regular curves. Using a supersingular curve for regular elliptic curve operations would be horribly insecure. If you have some background in elliptic curves: supersingular curves have a tiny embedding degree, meaning that solving the ECDLP over F(p) can easily be transformed into solving the DLP over F(p^n) where n is that small embedding degree. Most curves have large embedding degrees, meaning that solving the ECDLP directly is easier than translating it into a DLP and then solving that. You generally have to go out of your way to find a curve with a small embedding degree. That is only done in specialized systems, like for pairing-based cryptography, or, as in this case, supersingular isogeny-based Diffie-Hellman.

Let's recap ECDH. Public keys are points on a curve, and secret keys are numbers. Alice and Bob agree on the parameters of the exchange ahead of time, such as the curve E and a generator point P on that curve. Alice picks a secret integer a and computes her public key aP. Bob picks a secret integer b and computes his public key bP. Alice and Bob send each other their public keys, and multiply their secret key by the other peer's public key. Since abP = baP, they compute the same secret. Since an attacker has neither secret key, they can't compute the shared secret.
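
If you want to see that "apply your secret to the other side's public value and land in the same place" property in code, here's a toy demonstration with integers mod p instead of curve points. The parameters are tiny and wildly insecure; this is purely to show the algebra.

import java.math.BigInteger;

public class ToyDiffieHellman {
    public static void main(String[] args) {
        BigInteger p = BigInteger.valueOf(2147483647); // a toy prime modulus
        BigInteger g = BigInteger.valueOf(7);          // a toy generator

        BigInteger a = BigInteger.valueOf(123456);     // Alice's secret
        BigInteger b = BigInteger.valueOf(654321);     // Bob's secret

        BigInteger A = g.modPow(a, p);                 // Alice's public value
        BigInteger B = g.modPow(b, p);                 // Bob's public value

        // Each peer applies their own secret to the other's public value.
        BigInteger aliceShared = B.modPow(a, p);
        BigInteger bobShared = A.modPow(b, p);

        System.out.println(aliceShared.equals(bobShared)); // true
    }
}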

SIDH is different. Secret keys are isogenies...

Whoa whoa whoa. What the heck are isogenies?

An isogeny between elliptic curves is a function from one elliptic curve to another that preserves the base point (the point at infinity). It takes points on one curve and returns points on the other curve. Multiple input points may map to the same output point, but every point on the output curve is hit; formally speaking, the isogeny is surjective. An isogeny is also a homomorphism. That is, it preserves the structure of the curve: for any two points P and Q, phi(P + Q) = phi(P) + phi(Q).

We have a bunch of formulas for generating isogenies from a curve and a point. You might remember that the set of values a function takes is its "domain", and the set of values it returns is called its "codomain". The domain of such an isogeny is the curve you give it; its codomain might be the same curve, or it might be a different one. In general, for SIDH, we care about the case where it produces a new curve.

OK, so explain how SIDH works again.

Roughly speaking, a secret key is an isogeny, and a public key is an elliptic curve. By "mixing" their isogeny with the peer's public curve, each peer generates a secret curve. The two peers will generally generate different curves, but those curves will have the same j-invariant.

Wait, what's a j-invariant?

The j-invariant is a number you can compute for a particular curve. Perhaps the best analogy would be the discriminant of a quadratic equation, which you might remember from high school math; it's a single number that tells you something interesting about the underlying curve. There are different formulas for curves in different forms. For example, for a curve in short Weierstrass form y^2 = x^3 + ax + b, the j-invariant is:

j(E) = (1728 * 4a^3)/(4a^3 + 27b^2)

The j-invariant has a few cool properties. For example, while this is the formula for the short Weierstrass form, the value of j doesn't change if you put the same curve in a different form. Also, all curves with the same j-invariant are isomorphic. However, for SIDH you don't really care about these properties; you just care that the j-invariant is a number you can compute, and it'll be the same for the two secret curves that are generated by the DH exchange.
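
If you want to convince yourself that it really is just a number you can compute, here's a small sketch for a short Weierstrass curve over F_p. The curve parameters are arbitrary toy values; division mod p becomes multiplication by a modular inverse.

import java.math.BigInteger;

public class JInvariant {
    // j(E) = 1728 * 4a^3 / (4a^3 + 27b^2), computed mod p.
    static BigInteger jInvariant(BigInteger a, BigInteger b, BigInteger p) {
        BigInteger fourACubed = a.pow(3).multiply(BigInteger.valueOf(4)).mod(p);
        BigInteger twentySevenBSquared = b.pow(2).multiply(BigInteger.valueOf(27)).mod(p);
        BigInteger denominator = fourACubed.add(twentySevenBSquared).mod(p);
        return BigInteger.valueOf(1728)
                .multiply(fourACubed)
                .multiply(denominator.modInverse(p))
                .mod(p);
    }

    public static void main(String[] args) {
        BigInteger p = BigInteger.valueOf(1019); // a small prime
        System.out.println(jInvariant(BigInteger.valueOf(2), BigInteger.valueOf(3), p));
    }
}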

OK, try explaining SIDH again.

The protocol fixes a supersingular curve E and four points on that curve: P_A, Q_A, P_B, Q_B.

Alice picks two random integers, m_A and n_A. She takes a linear combination of those two integers with P_A and Q_A to produce a random point R_A, so:

R_A = n_A * P_A + m_A * Q_A

That random point defines Alice's secret isogeny through the isogeny formulas I talked about above. The codomain of that isogeny forms Alice's public curve. Alice transforms points P_B and Q_B with the isogeny. She sends Bob her public curve and the two transformed points.

Bob does the same thing, except with A and B swapped.

Once Alice gets Bob's public key, she applies m_A and n_A again to the corresponding transformed points she got from Bob. She generates a new isogeny phiBA from the resulting point just like she did before to generate her private key. That isogeny's codomain will be an elliptic curve E_BA.

When Bob performs his side of the exchange, he'll produce a different isogeny and a different elliptic curve E_AB; but it will have the same j-invariant as the curve Alice computed. That j-invariant is the shared key.

I've compiled a transcript of a Diffie-Hellman exchange using Sage so you can see a (toy!) demo in action.

I know a little about elliptic curves. I thought they were always non-singular. What's a supersingular elliptic curve but a contradiction in terms?

You're right! Supersingular elliptic curves are somewhat confusingly named. Supersingular elliptic curves are still elliptic curves, and they are non-singular just like all other elliptic curves. The "supersingular" refers to the singular values of the j-invariant. Equivalently, the Hasse invariant will be 0.

So, why does it matter that the curve is supersingular?

Firstly, computing the isogeny is much easier on supersingular curves than on ordinary (not supersingular) elliptic curves. Secondly, if the curve is ordinary, the scheme can be broken in subexponential time by a quantum attacker.

Isogeny-based cryptography using ordinary curves was considered as a post-quantum secure cryptosystem before SIDH. However, Childs et al. showed a subexponential quantum algorithm in 2010. This paper appeared to have ended isogeny-based cryptography: it was already slower than other post-quantum systems, and now it was shown that it wasn't even post-quantum secure.

Because supersingular curves are rare, they had not previously been considered for isogeny-based cryptography. However, the paper itself suggested that supersingular curves might be worth examining, so it ended up pushing research in a new direction rather than ending it.

Explaining why the supersingular case makes the problem quantum-hard is tricky unless you're thoroughly familiar with isogenies and quantum computing. If you're really interested, the Childs paper explains how the quantum attack in the ordinary case works. Informally, in the ordinary case, there is a group action (the isogeny star operator) of the ideal class group on the set of isomorphism classes of isogenous curves with the same endomorphism ring. That can be shown to be a special case of the abelian group hidden shift problem, which can be solved quickly on a quantum computer. In the supersingular case, there is no such group action to exploit. (If you're trying to work this out at home: this is why SIDH needs to define the four points P_A, P_B, Q_A, Q_B.)

I would like to thank Thomas Ptacek for reviewing this blog post and bearing with me as I struggle through trying to come up with human-readable explanations for all of this stuff; Sean Devlin for reminding me that Sage is an excellent educational tool; and Watson Ladd for pointing out a correction w.r.t the Hasse invariant (the Hasse-Witt matrix is undefined, not singular.). Finally, I'd like to thank all the people who reviewed drafts of this post, including (in no particular order) Bryan Geraghty, Shane Wilton, Sean Devlin, Thomas Ptacek, Tanner Prynn, Glyph Lefkowitz and Chris Wolfe.

Introducing Teleport

I'm happy to introduce Teleport, a new open source platform for managing SSH infrastructure. Teleport is built by Gravitational, a Y Combinator company that ships SaaS on any platform. While I'm not a part of Gravitational, I have been advising them on the Teleport project.

Most teams don't have a great authentication story. Some rely on passing passwords around haphazardly, while others rely on copying everyone's ~/.ssh/id_rsa.pub to every new box. More complex homegrown systems quickly become unwieldy. These methods are problematic both operationally and from a security perspective: when security and usability are at odds, security tends to lose out. For a lot of teams, a single compromised key off of a developer machine spells disaster, on-boarding new team members is painful, and key rotation doesn't happen.

In the last few years, strong multi-factor authentication has become the norm. Tokens are only valid for a brief period of time, use challenge-response protocols, or both. Teleport helps bring the same level of sophistication to infrastructure. It helps system administrators leverage the security benefits of short-lived certificates, while keeping the operational benefits of decoupling server authentication from user authentication. It lets you run isolated clusters, so that a compromise of staging credentials doesn't lead to a compromise in production. It automatically maintains clear audit logs: who logged in, when and where they logged in, and what they did once they got there.

Teleport comes with a beautiful, usable UI, making it easy to visualize different clusters and the available machines within them. The UI is optional: many system administrators will prefer to use their existing SSH client, and Teleport supports that natively. Because it speaks the ssh-agent protocol, integrating it into your current CLI workflow is a simple matter of setting a single environment variable (SSH_AUTH_SOCK).

As someone with an open-source background, I'm glad to see this software released and developed out in the open. A decent SSH key management story should be available to everyone, and that's what Teleport does. I believe making this technology more accessible is good for everyone, including commercial vendors. Democratizing a decent DIY story helps turn their product into the battle-hardened and commercially supported version of industry best practice; and as such, I hope this helps grow that market. As a principal engineer at Rackspace Managed Security, I'm excited to start working towards better authentication stories, both internally and for our customers, with Teleport as the new baseline.

Releasing early and often is also an important part of open source culture. That can be at odds with doing due diligence when releasing security-critical systems like Teleport, especially when those systems have non-trivial cryptographic components. We feel Teleport is ready to show to the public now. To make sure we act as responsibly as possible, I've helped the Teleport team to join forces with a competent independent third-party auditor. We're not recommending that you bet the farm on Teleport by running it in production as your only authentication method just yet, but we do think it's ready for motivated individuals to start experimenting with it.

Some people might feel that a better SSH story means you're solving the wrong problem. It seems at odds with the ideas behind immutable infrastructure and treating servers as cattle, not pets. I don't think that's true. Firstly, even with immutable infrastructure, being able to SSH into a box to debug and monitor is still incredibly important. Being able to rapidly deploy a bunch of fixed images quickly may be good, but you still have to know what to fix first. Secondly, existing systems don't always work that way. It may not be possible, let alone economically rational, to "port" them effectively. It's easy to think of existing systems as legacy eyesores that only exist until you can eradicate them, but they do exist, they're typically here to stay, and they need a real security story, too.

Teleport is still in its early stages. It's usable today, and I'm convinced it has a bright future ahead of it. It's written in a beautiful, hackable Go codebase, and available on Github starting today.

Don't expose the Docker socket (not even to a container)

Docker primarily works as a client that communicates with a daemon process (dockerd). The client talks to the daemon over a socket, typically a UNIX domain socket called /var/run/docker.sock. That daemon is highly privileged, effectively having root access. Any process that can write to the dockerd socket also effectively has root access.

This is no big secret. Docker clearly documents this in a bunch of places, including the introductory documentation. It's an excellent reason to use Docker Machine for development purposes, even on Linux. If your regular user can write to the dockerd socket, then every code execution vulnerability comes with a free privilege escalation.

The warnings around the Docker socket typically come with a (sometimes implicit) context of being on the host to begin with. Write access to the socket as an unprivileged user on the host may mean privileged access to the host, but there seems to be some confusion about what happens when you get write access to the socket from a container.

The two most common misconceptions seem to be that it either doesn't grant elevated privileges at all, or that it only grants you privileged access within the container (and without a way to break out). Both are false; write access to the Docker socket is root on the host, regardless of where that write comes from. This is different from Jérôme Petazzoni's dind, which gives you Docker-in-Docker; we're talking about access to the host's Docker socket.

The process works like this:

  1. The Docker container gets a Docker client of its own, pointed at /var/run/docker.sock.
  2. The Docker container launches a new container, mounting / on /host. This mounts the host's root filesystem, not the first container's.
  3. The second container chroots to /host, and is now effectively root on the host. (There are a few differences between this and a clean login shell; for example, /proc/self/cgroups will still show Docker cgroups. However, the attacker has all of the permissions necessary to work around this.)

This is identical to the process you'd use to escalate from outside of a container. Write access to the Docker socket is root on the host, full stop; who's writing, or where they're writing from, doesn't matter.

Unfortunately, there are plenty of development teams unaware of this property. I recently came across one, and ended up making a screencast to unambiguously demonstrate the flaw in their setup (which involved a container with write access to the Docker socket).

This isn't new; it's been a known property of the way Docker works ever since the (unfortunately trivially cross-site scriptable) REST API listening on a local TCP port was replaced with the /var/run/docker.sock UNIX domain socket.

querySelectorAll from an element probably doesn't do what you think it does

Modern browsers have APIs called querySelector and querySelectorAll. They find one or more elements matching a CSS selector. I'm assuming basic familiarity with CSS selectors: how you select elements, classes and ids. If you haven't used them, the Mozilla Developer Network has an excellent introduction.

Imagine the following HTML page:

<!DOCTYPE html>
<html>
<body>
    <img id="outside">
    <div id="my-id">
        <img id="inside">
        <div class="lonely"></div>
        <div class="outer">
            <div class="inner"></div>
        </div>
    </div>
</body>
</html>

document.querySelectorAll("div") returns a NodeList of all of the <div> elements on the page. document.querySelector("div.lonely") returns that single lonely div.

document supports both querySelector and querySelectorAll, letting you find elements in the entire document. Elements themselves also support both querySelector and querySelectorAll, letting you query for elements that are descendants of that element. For example, the following expression will find images that are descendants of #my-id:

document.querySelector("#my-id").querySelectorAll("img")

In the sample HTML page above, it will find <img id="inside"> but not <img id="outside">.

With that in mind, what do these two expressions do?

document.querySelectorAll("#my-id div div");
document.querySelector("#my-id").querySelectorAll("div div");

You might reasonably expect them to be equivalent. After all, one asks for div elements inside div elements inside #my-id, and the other asks for div elements inside div elements that are descendants of #my-id. However, when you look at this JSbin, you'll see that they produce very different results:

document.querySelectorAll("#my-id div div").length === 1;
document.querySelector("#my-id").querySelectorAll("div div").length === 3;

What is going on here?

It turns out that element.querySelectorAll doesn't match elements starting from element. Instead, it matches elements matching the query that are also descendants of element. Therefore, we're seeing three div elements: div.lonely, div.outer, div.inner. We're seeing them because they all match the div div selector and are all descendants of #my-id.

The trick to remembering this is that CSS selectors are absolute. They are not relative to any particular element, not even the element you're calling querySelectorAll on.

This even works with elements outside the element you're calling querySelectorAll on. For example, this selector:

document.querySelector("#my-id").querySelector("div div div")

... matches div.inner in this snippet (JSbin):

<!DOCTYPE html>
<html>
  <body>
    <div>
      <div id="my-id">
        <div class="inner"></div>
      </div>
    </div>
  </body>
</html>

I think this API is surprising, and the front-end engineers I've asked seem to agree with me. This is, however, not a bug. It's how the spec defines it to work, and browsers consistently implement it that way. John Resig commented that he and others felt this behavior was quite confusing back when the spec came out.

If you can't easily rewrite the selector to be absolute like we did above, there are two alternatives: the :scope CSS pseudo-selector, and query/queryAll.

The :scope pseudo-selector matches against the current scope. The name comes from the CSS scoping, which limits the scope of styles to part of the document. The element we're calling querySelectorAll on also counts as a scope, so this expression only matches div.inner:

document.querySelector("#my-id").querySelectorAll(":scope div div");

Unfortunately, browser support for scoped CSS and the :scope pseudo-selector is extremely limited. Only recent versions of Firefox support it by default. Blink-based browsers like Chrome and Opera require the well-hidden experimental features flag to be turned on. Safari has a buggy implementation. Internet Explorer doesn't support it at all.

The other alternative is element.query/queryAll. These are alternative methods to querySelector and querySelectorAll that exist on DOM parent nodes. They also take selectors, except these selectors are interpreted relative to the element being queried from. Unfortunately, these methods are even more obscure: they are not referenced on MDN or caniuse.com, and are missing from the current DOM4 working draft, dated 18 June 2015. They were still present in an older version, dated 4 February 2014, as well as in the WHATWG Living Document version of the spec. They have also been implemented by at least two polyfills.

In conclusion, the DOM spec doesn't always necessarily do the most obvious thing. It's important to know pitfalls like these, because they're difficult to discover from just the behavior. Fortunately, you can often rewrite your selector so that it isn't a problem. If you can't, there's always a polyfill to give you the modern API you want. Alternatively, libraries like jQuery can also help you get a consistent, friendly interface for querying the DOM.

Today's OpenSSL bug (for techies without infosec chops)

What happened?

OpenSSL 1.0.1n+ and 1.0.2b+ had a new feature that allows finding an alternative certificate chain when the first one fails. The logic in that feature had a bug in it, such that it didn't properly verify if the certificates in the alternative chain had the appropriate permissions; specifically, it didn't check if those certificates are certificate authorities.

In practice, this means that an attacker who has a valid certificate for any domain can use that certificate to produce new certificates. Those normally wouldn't be accepted, but the algorithm for finding the alternative trust chain doesn't check whether the valid certificate can act as a certificate authority.

What's a certificate (chain)?

A certificate is a bit like an ID card: it has some information about you (like your name), and is authenticated by a certificate authority (in the case of an ID, usually your government).

What's a certificate authority?

A certificate authority is an entity that's allowed to authenticate certificates. Your computer typically ships with the identity of those certificate authorities, so it knows how to recognize certificates authorized by them.

In the ID analogy, your computer knows how to recognize photo IDs issued by e.g. California.

The issue here is that in some cases, OpenSSL was willing to accept signatures authenticated by certificates that don't have certificate authority powers. In the analogy, it would mean that it accepted CostCo cards as valid ID, too.

Why did they say it wouldn't affect most users?

This basically means "we're assuming most users are using OpenSSL for vanilla servers", which is probably true. Most servers do use OpenSSL, and most clients (browsers) don't.

The bug affects anyone trying to authenticate their peer. That includes regular clients, and servers doing client authentication. Regular servers aren't affected, because they don't authenticate their peer.

Servers doing client authentication are fairly rare. The biggest concern is with clients. While browsers typically don't use OpenSSL, a lot of API clients do. For those few people affected by the bug and with clients that use OpenSSL, the bug is catastrophic.

What's client authentication?

The vast majority of TLS connections only authenticate the server. When the client opens the connection, the server sends its certificate. The client checks the certificate chain against the list of certificate authorities that it knows about. The client is typically authenticated, but over the protocol spoken inside of TLS (usually HTTP), not at a TLS level.

That isn't the only way TLS can work. TLS also supports authenticating clients with certificates, just like it authenticates servers. This is called mutually authenticated TLS, because both peers authenticate each other. At Rackspace Managed Security, we use this for all communication between internal nodes. We also operate our own certificate authority to sign all of those certificates.

What's TLS?

TLS is what SSL has been called for way over a decade. The old name stuck (particularly in the name "OpenSSL"), but you should probably stop using it when you're talking about the secure protocol, since all of the versions of the protocol that were called "SSL" have crippling security bugs.

Why wasn't this found by automated testing?

I'm not sure. I wish automated testing this stuff was easier. Since I'm both a user and a big fan of client authentication, which is a pretty rare feature, I hope to spend more time in the future creating easy-to-use automated testing tools for this kind of scenario.

How big is the window?

1.0.1n and 1.0.2b were both released on 11 Jun 2015. The fixes, 1.0.1p and 1.0.2d, were released today, on 9 Jul 2015.

The "good news" is that the bad releases are recent. Most people who have an affected version will be updating regularly, so the number of people affected is small.

The bug affected the following platforms (non-exhaustive list):

  • It did not affect stock OS X, because Apple still ships 0.9.8. However, the bug does affect the stable version shipped through Homebrew (1.0.2c).
  • Ubuntu is mostly not affected. The only affected version is the unreleased 15.10 (Wily). Ubuntu has already released an update for it.
  • The bug affects stable releases of Fedora. I previously mistakenly reported the contrary, but that information was based on their package version numbers, which did not match upstream. Fedora backported the faulty logic to their version of 1.0.1k, which was available in Fedora 21 and 22. They have since released patches; see this ticket for details. Thanks to Major Hayden for the correction!
  • The bug does not affect Debian stable, but it does affect testing and unstable.
  • The bug affects ArchLinux testing.

In conclusion

The bug is disastrous, but affects few people. If you're running stable versions of your operating system, you're almost certainly safe.

The biggest concern is with software developers using OS X. That audience uses HTTPS APIs frequently, and the clients they use to connect to those APIs typically use OpenSSL. OS X comes with 0.9.8zf by default now, which is a recent revision of an ancient branch. Therefore, people have a strong motivation to get their OpenSSL from a third-party source. The most popular source is Homebrew, which up until earlier this morning shipped 1.0.2c. The bug affects that version. If you installed OpenSSL through Homebrew, you should go update right now.

HTTPS requests with client certificates in Clojure

As discussed above, the vast majority of TLS connections only authenticate the server: the client checks the server's certificate chain against the certificate authorities it knows about, and is itself authenticated (if at all) over the inner HTTP connection, not at the TLS level. TLS also supports authenticating clients with certificates, just like it authenticates servers; this is called mutually authenticated TLS. At Rackspace Managed Security, we use this for all communication between internal nodes, and we operate our own certificate authority to sign all of those certificates.

One major library, http-kit, makes use of Java's javax.net.ssl, notably SSLContext and SSLEngine. These Java APIs are exhaustive, and very... Java. It's easy to make fun of them, but most other development environments leave you using OpenSSL, whose APIs are patently misanthropic. Even so, they leave something to be desired, and aphyr has done a lot of the hard work of making them more palatable with less-awful-ssl. That gives you an SSLContext. Request methods in http-kit take an opts map with an :sslengine key. Given an SSLContext, you just need to call (.createSSLEngine ctx) to get the engine object you want.
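
Concretely, that combination looks something like the sketch below. The paths and URL are made up, and you should double-check less-awful-ssl's README for the exact argument order of ssl-context.

    (require '[org.httpkit.client :as http]
             '[less.awful.ssl :as ssl])

    ;; less-awful-ssl builds an SSLContext from the client key, the client
    ;; certificate, and the CA certificate (all paths here are hypothetical).
    (def ctx (ssl/ssl-context "client.key" "client.crt" "ca.crt"))

    (defn fetch []
      (let [engine (.createSSLEngine ctx)]
        ;; http-kit wants an SSLEngine; make sure it's in client mode.
        (.setUseClientMode engine true)
        @(http/get "https://internal.example.com/" {:sslengine engine})))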

Another major library, clj-http, uses lower-level APIs. Specifically, it requires KeyStore instances for its :key-store and :trust-store options. That requires diving deep into Java's cryptographic APIs, which, as mentioned before, might be something you want to avoid. While clj-http is probably the most popular library, if you want to do fancy TLS tricks, you probably want to use http-kit instead for now.
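
If you do want to go that route, the ceremony looks roughly like this (the file name and password are invented): you load a PKCS12 bundle into a KeyStore by hand, and that KeyStore is what you'd then hand to clj-http.

    (require '[clojure.java.io :as io])
    (import 'java.security.KeyStore)

    ;; Load a client certificate and key from a PKCS12 file into a KeyStore.
    (defn load-keystore [path password]
      (with-open [stream (io/input-stream path)]
        (doto (KeyStore/getInstance "PKCS12")
          (.load stream (char-array password)))))

    (def client-keystore (load-keystore "client.p12" "changeit"))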

My favorite HTTP library is aleph by Zach Tellman. It uses Netty instead of the usual Java IO components. Fortunately, Netty's API is at least marginally friendlier than the one in javax.net.ssl. Unfortunately, there's no less-awful-ssl for aleph. Plus, I'm using sente for asynchronous client-server communication, and it doesn't support aleph yet. So I'm comfortably stuck with http-kit for now.

In conclusion, API design is UX design. The library that "won" for us was simply the one that was easiest to use.

For a deeper dive in how TLS and its building blocks work, you should watch my talk, Crypto 101, or the matching book. It's free! Oh, and if you're looking for information security positions (that includes entry-level!) in an inclusive and friendly environment that puts a heavy emphasis on teaching and personal development, you should get in touch with me at [email protected].

Call for proposal proposals

I'm excited to announce that I was invited to speak at PyCon PL. Hence, I'm preparing to freshen up my arsenal of talks for the coming year. The organizers have very generously given me a lot of freedom regarding what to talk about.

I'd like to do more security talks as well as shift focus towards a more technical audience, going more in-depth and touching on more advanced topics.

Candidates

Object-capability systems

Capabilities are a better way of thinking about authorization. A capability ("cap") gives you the authority to perform some action, without giving you any other authority. Unlike role-based access control (RBAC) systems, capability-based systems nearly always fail closed: if you don't have the capability, you simply don't have enough information to perform the action. Contrast this with RBAC systems, where authorization constraints are enforced with pinky swears, and therefore often subverted.
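
As a toy illustration of that fail-closed property (not how icecap or shimmer actually work), you can think of a capability as an unforgeable reference: whoever holds it can do exactly that one thing, and nobody else can.

    ;; A toy capability: holding the function *is* the authority to delete one
    ;; specific blob. There is no ambient role to check (or forget to check).
    (defn make-delete-cap [storage blob-id]
      (fn [] (swap! storage dissoc blob-id)))

    (def storage (atom {:report "secret quarterly numbers"}))
    (def delete-report! (make-delete-cap storage :report))

    ;; Anyone without delete-report! simply has no way to name this action.
    (delete-report!)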

I think I can make an interesting case for capability systems to any technical audience with some professional experience. Just talk about secret management, and how it's nearly always terrifying! This gives me an opportunity to talk about icecap (docs) and shimmer (blog), my favorite pastimes.

Putting a backdoor in RDRAND

I've blogged about this before, but I think I could turn it into a talk. The short version is that Linux's PRNG mixes in entropy from RDRAND in a way that would allow a malicious implementation to control the output of the PRNG, in ways that would be indistinguishable to a (motivated) observer.
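
A toy version of the argument (an illustration of the XOR trick, not the kernel's actual code): if the last mixing step XORs an RDRAND word into the pool output, and the hardware can observe the other input, it can choose its word so that the "random" output is whatever it wants.

    ;; If output = pool-word XOR rdrand-word, a malicious RDRAND that can see
    ;; pool-word can pick rdrand-word to force any output it likes.
    (let [pool-word   0x3a94d1c2                 ; observed by the backdoor
          desired     0xdeadbeef                 ; what the backdoor wants out
          rdrand-word (bit-xor desired pool-word)]
      (= desired (bit-xor pool-word rdrand-word))) ; => true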

As a proof of concept, I'd love to demo the attack, either in software (for example, with QEMU) or even in hardware with an open core. I could also go into the research that's been done regarding hiding stuff on-die. Unfortunately, the naysayers so far have relied on moving the goalposts continuously, so I'm not sure that would convince them this is a real issue.

Retroreflection

An opportunity to get in touch with my languishing inner electrical engineer! It turns out that when you zap radio waves at most hardware, the reflection gets modulated based on what that hardware is doing at the time. The concept became known as TEMPEST, an NSA program. So far, there's little public research on how feasible it is for your average motivated hacker. This is essentially van Eck phreaking, with 2015 tools. There's probably some interesting data to pick off of USB HIDs, and undoubtedly a myriad of interesting devices controlled by low-speed RS-232. Perhaps wireless JTAG debugging?

The unfinished draft bin

Underhanded curve selection

Another underhanded-cryptography talk I've considered would be about underhanded elliptic curve selection. Unfortunately, bringing the audience up to speed on the math needed to get something out of it would be impossible in one talk slot. People already familiar with the math are also almost certainly familiar with the argument for rigid curves.

Web app authentication

Some folks asked for a tutorial on how to authenticate to web apps. I'm not sure I can turn that into a great talk. There's a lot of general stuff that's reasonably obvious, and then there's highly framework-specific stuff. I don't really see how I can provide a lot of value for people's time.

Feedback

David Reid and Dwayne Litzenberger made similar, excellent points. They both recommended talking about object-capability systems: unlike the other two candidates, it will (hopefully) actually help people build secure software, whereas the other two would just make people feel sad. I feel like those points generalize to all attack talks; are they just not that useful?