NCW

Foreshortened

Close-up views of Nicholas Wilson

About


I am a software programmer working at RealVNC with many interests including singing, and am a Christian. Many of my friends remember me as a Part III mathematician at Peterhouse. Find out more or contact me!

Visual Studio C Run-Time: I’ve had enough of it

After an entire week struggling against the Visual Studio CRT, I’ve had enough. I’m sick of it. I just can’t take it any longer. How hard would it be for Microsoft to provide a C library that works?

Someone needs to do it for them. There needs to be a project offering a BSD-licensed replacement for the VS CRT that's source-compatible with Unix applications.

I've posted the README at least; the source won't follow for a bit, until I can clean it up. (There's not enough interest at work for me to work on it there, so I'll have to do it at home, or I'll never be able to make it BSD-licensed or accept contributions.)

Mozilla vs Google on user privacy: WebSockets

There’s a disagreement going on at the moment about user privacy. Fundamentally, Mozilla and Google have a different vision of what a browser does. Chrome is “thin”, and Google has explained that the minimal UI around the web content is part of their vision of simply wrapping other content and running it (securely). Firefox is “fat”, an all-singing, all-dancing application that’s meant to do everything users need, so it’s there for web authors to use but not extend. For Chrome, every web page is an app, something you can pin in an app tab and make a start menu shortcut too. They want each page to be like a desktop app. Mozilla absolutely stands against that. A page is a page, and can’t upstage the browser, and can’t act like a desktop app.

Here’s the sticking point for Mozilla: navigate to https://www.websocket.org/echo.html and try and connect. No luck in Firefox unless you select TLS, but in Chrome it works fine.

The general principle is that Mozilla does not trust any web page to guard a user’s privacy, but Chrome believes in a different world of webapps where it’s absolutely OK to run a web page as you would a ‘proper’ application.

Suppose you could make a non-TLS WebSocket connection from a TLS page. The users might be entering sensitive information on this secure page, and as far as the browser’s concerned, the information is flying off the page in the clear. Worse, the page is compromised because unverified, insecure data is flowing into the page from outside. From the vendor’s point of view though, it might be different: suppose you implement a nice secure protocol using ECDH+RSA auth+AES-GCM (calling into OpenSSL/NSS crypto routines via the WebCrypto JavaScript wrappers). The connection could be totally secure, but the browser can’t verify it.

Why not just make a TLS connection? Because browsers only implement the CA model for checking identity. That’s a big namespace of peers, covering all domain names, but you might want to connect to a different sort of entity (a Facebook peer, an entity inside a different namespace). That’s a big problem for Mozilla, because they can’t verify identities in other namespaces, and won’t give webapps the power to implement these things themselves.

Mozilla may have a point after all: until a few years ago, Facebook sent login cookies out in the clear. How is a web user to know whom to trust to implement the right security?

So, we’re stuck. We have a product that works in Chrome, but not Firefox, so we can’t offer it as a webapp. Instead web authors are forced to fall back to asking users to download a small standalone native binary to run. That’s a shame, and we want a way around that.

I should close by pointing out there may be a way ahead, using WebRTC. It's still unclear whether it meets our use-case, but it could potentially work for us too. Unfortunately the only bit I'm interested in, data connections, hasn't been fully implemented by anyone yet, so it's too early to say.

When someone you love does something that isn't the best for them, you're never angry, just sad. If you're angry, it means you're more concerned with your plan for how they'll live than their actual happiness. Helping someone love themselves is never about control. (But I think there are occasions for anger at things a loved one does to others or to us.)

What a relief! I'll never again have a quarrel or issue with Elspeth we can't resolve. Before we were married, there was always the chance something would come up that we couldn't fix, and we might have had to break up and leave it unresolved forever. Now we're together, it's nice to know that can't happen.

Algebraic attack on NTRU using Witt vectors and Gröbner bases

Towards quantum-resistant cryptosystems from supersingular elliptic curve isogenies

How should I create a secure connection?

Use an existing protocol?

If you opt to use an existing protocol like SSH-TRANS, TLS, or IPSec, you’re actually being pretty sensible. There are plenty of good reasons why you’d want to roll your own though:

  1. Simple. Too much choice of ciphers is bad for you. Do something simple that uses good primitives and have done with it.
  2. Understandable. Generic transports can be too generalised, with dangerous features you don't know about. Libraries implementing TLS are very hard to use, for example: I have more confidence in my ability to write correct code invoking some AES routines than in my ability to invoke TLS routines correctly. There are just too many options and quirky behaviours to understand: renegotiation, stored sessions, complex certificate models, close behaviour, and more.
  3. Minimal. Generic transports may include features that just don't make sense for you. Transport-level compression? Padding modes? You might need something simple that you can implement everywhere with minimal dependencies.

Use ephemeral encryption for forward security

This is the #1 thing we got wrong in the 90s. Many protocols didn’t use ephemeral encryption (in fact, did any?). The idea was perhaps that hardware was too slow to support it. In any case, we can do it now, so any new connection layer should do it.

The idea is that if someone records your session, then waits a year until you throw away your phone, and recovers your private key, he can now decrypt the session. He doesn't need to go after the key that's safe in your datacentre: either end's private key is enough. This is very unfortunate. It's not an attack that needs crypto cleverness: you just steal the key and use off-the-shelf shrinkwrapped software from your vendor (e.g. stock RealVNC or OpenSSH) to decrypt the saved session.

When a session ends, both parties should scrub the key used for encryption, and the knowledge of it passes out of the world. Later key compromise won’t let someone decrypt connections made prior to the key’s theft.

Never use long-lived keys (which identify a party) to do encryption as well.

How do we do this? There’s basically only one system these days worth using: Diffie-Hellman. Traditional DH is a bit slow, but forward security is trending now because of ECDH, which is really rather fast. There’s now no excuse. Use ECDH for every connection you ever make.
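As a sketch of what ephemeral key agreement buys you, here's toy finite-field Diffie-Hellman in pure Python. The prime and generator below are hypothetical toy values, far too small for real use (a real protocol would use X25519 or an RFC 3526 group from a vetted library); the point is the shape: a fresh secret per session, scrubbed afterwards.

```python
import hashlib
import secrets

# Toy Diffie-Hellman parameters: a 64-bit prime (2**64 - 59) and a small
# generator, purely for illustration -- NOT a secure group.
P = 0xFFFFFFFFFFFFFFC5
G = 5

def ephemeral_keypair():
    # A brand-new secret for every connection: this is what buys forward
    # secrecy, independent of any long-lived identity key.
    x = secrets.randbelow(P - 2) + 1
    return x, pow(G, x, P)

# Each side generates a fresh keypair per session...
a_priv, a_pub = ephemeral_keypair()
b_priv, b_pub = ephemeral_keypair()

# ...exchanges only the public values, and derives the same shared secret.
a_secret = pow(b_pub, a_priv, P)
b_secret = pow(a_pub, b_priv, P)
assert a_secret == b_secret

# Hash the shared secret down to a symmetric session key.
session_key = hashlib.sha256(a_secret.to_bytes(8, "big")).digest()

# When the session ends, scrub the ephemeral values. Stealing either
# party's long-lived identity key later reveals nothing about this session.
a_priv = b_priv = a_secret = b_secret = None
```

The long-lived keys never touch the encryption at all; they would only sign the ephemeral public values to authenticate the exchange.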

How do I identify someone?

Use RSA signatures. See “What signature algorithm do I use?”.

How do I then encrypt data?

The obvious answer is Rijndael (AES). AES isn’t ideal, but it’s so widely used you need a good reason to strike out and use something else.

There are some good reasons, in fact. AES is notoriously hard to implement in software in constant time. If you can run on the same processor where encryption is being done with a key you want to steal, just watching the timeslices leaks enough information to recover the key eventually. Getting a slice on Rackspace next to the VMs where your enemy runs his infrastructure could, in theory, be enough.

Timing attacks are not easy, however. I'm not concerned (maybe I should be), but any crypto system is doing well if that's its biggest worry.

Further, AES can be done with dedicated instructions on most Intel processors since 2010, and on ARMv8. These are constant-time.

Finally, AES is approved for sale into military and government markets. The competitors aren't, at high security levels (3DES is approved only up to 80 bits of security).

For an overview of alternatives to AES, see Matthew Green, “So you want to use an alternative cipher…”. The general wisdom is that Salsa20 is one of the best replacements for AES, and certainly my favourite, but at around 3 cycles/byte on x86 it's no quicker than hardware-accelerated (AES-NI) AES, although it soundly beats software AES (around 15 cycles/byte on i586). Secondly, there are no widely-approved ways of adding message authentication to a stream cipher. Check back in a decade.

Finally, how do we authenticate data?

Or, what mode of operation do I use for AES? There are a lot of different opinions here. (A few representative ones: Colin Percival says “only use CTR-HMAC”; plenty of people don’t like GCM because it’s too hard to understand; some people don’t like EAX because it was rejected by NIST…)

Some absolutes: CBC is too hard to get right, so don't do it. HMAC-CTR (MAC-then-encrypt) is a mistake: if you're going to MAC, do encrypt-then-MAC rather than MAC-then-encrypt, because practical breaches have been demonstrated against protocols that did the wrong thing, like SSH.
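A minimal sketch of the encrypt-then-MAC ordering, assuming a toy SHA-256 counter-mode keystream in place of a real cipher (the `seal`/`open_` names and the keystream construction are my own illustration, not any library's API):

```python
import hashlib
import hmac
import secrets

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy keystream: SHA-256 in counter mode. Illustrative only; real
    # code would use AES-CTR or ChaCha20 from a vetted library.
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def seal(enc_key, mac_key, nonce, plaintext):
    # Encrypt first...
    ct = bytes(a ^ b for a, b in zip(plaintext, keystream(enc_key, nonce, len(plaintext))))
    # ...then MAC the ciphertext (plus the nonce): encrypt-then-MAC.
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    return ct, tag

def open_(enc_key, mac_key, nonce, ct, tag):
    # Verify the MAC before touching the ciphertext, so no
    # attacker-controlled bytes are ever decrypted.
    expected = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("bad MAC")
    return bytes(a ^ b for a, b in zip(ct, keystream(enc_key, nonce, len(ct))))

enc_key, mac_key = secrets.token_bytes(32), secrets.token_bytes(32)
nonce = secrets.token_bytes(16)
ct, tag = seal(enc_key, mac_key, nonce, b"attack at dawn")
assert open_(enc_key, mac_key, nonce, ct, tag) == b"attack at dawn"
```

Note the two independent keys: one for encryption, one for authentication, and that `open_` rejects before decrypting.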

The choices are then CTR-HMAC, OCB, EAX, or GCM. I’m assuming we know what those are and the arguments.

My own opinion is to use GCM. I have some idiosyncratic reasons for that preference, as well as standard ones:

  1. It's NIST-approved. Marketing/sales potential. This wins over EAX.
  2. It's fast. It's the only mode that will get hardware acceleration in phones. Mobile is important for the future of your product, even if you don't think so. Fast AES followed by slow authentication is no-one's idea of a good time.
  3. It's unencumbered (beating OCB).
  4. It's easy to use, if not to implement (beating CTR-HMAC, which has some nasty gotchas regarding the IV).
  5. We now have some roughly constant-time software implementations, if that's a concern.
  6. My favourite: it's got browser support. If you want to port your application to run in a browser, and you should, then you'll want to use the Web Cryptography API. WebCrypto offers access to fast native primitives from JavaScript. In traditional software, if your platform doesn't ship with certain routines you can supply them yourself and not lose out (except on automatic updates); it's OK for a desktop product to pick a maverick cipher and ship the code for it. A browser-based product needs browser support to be fast, though. Using a cipher that might be dropped by browsers, or that risks not having 100% browser support in the future, is a risk for your protocol! GCM is the only authenticated AES mode I can see being supported by all browsers forever, because it's enshrined in TLS over and above CTR-HMAC and EAX. Mozilla and IE ship it already.
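One of the nastier IV gotchas alluded to above is nonce reuse in counter mode: reuse a nonce under one key and the two keystreams cancel, handing an eavesdropper the XOR of the plaintexts with no key needed at all. A toy sketch (the SHA-256 keystream here is my own stand-in for a real counter-mode cipher):

```python
import hashlib

def keystream(key, nonce, n):
    # Toy counter-mode keystream (SHA-256 based, illustrative only).
    out, i = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + i.to_bytes(8, "big")).digest()
        i += 1
    return out[:n]

def ctr_encrypt(key, nonce, pt):
    return bytes(a ^ b for a, b in zip(pt, keystream(key, nonce, len(pt))))

key = b"k" * 32
nonce = b"n" * 16  # the bug: the same nonce used twice under one key

p1 = b"my secret pin 1234"
p2 = b"hello hello hello!"
c1 = ctr_encrypt(key, nonce, p1)
c2 = ctr_encrypt(key, nonce, p2)

# The keystreams cancel: c1 XOR c2 equals p1 XOR p2, key not required.
xor_of_plaintexts = bytes(a ^ b for a, b in zip(p1, p2))
assert bytes(a ^ b for a, b in zip(c1, c2)) == xor_of_plaintexts
```

GCM inherits the same counter-mode fragility (nonce reuse there also breaks authentication), which is one reason "easy to use" still means "get the nonce handling right".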

What signature algorithm do I use?

Answer: RSA.

RSA is great. It's well understood, and we now have padding schemes (OAEP, PSS) that really work. If you do DH, you can in fact avoid needing to encrypt at all in most protocols, and just do a signing operation. Since PSS is clearly robust against padding attacks (clear even to an uninitiate like me), while OAEP is merely "not yet cracked", signing should be preferred to encrypting where possible. The point is that the traditional thorn in RSA's side is the fear of padding attacks, but hopefully we're past that now. Avoid encrypting unless you're sure you need to, and even there, RSA is believed to be more than adequate.
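The core sign/verify operation can be sketched with toy numbers. This is textbook RSA with no padding at all and a 12-bit modulus, so it illustrates only the arithmetic; anything real must use PSS and 2048-bit or larger keys (the primes and exponent below are hypothetical toy values):

```python
import hashlib

# Textbook RSA with tiny primes: illustration of the raw operation only.
p, q = 61, 53
n = p * q                    # public modulus (3233)
phi = (p - 1) * (q - 1)
e = 17                       # public exponent
d = pow(e, -1, phi)          # private exponent (modular inverse; Python 3.8+)

def sign(message: bytes) -> int:
    # Hash, reduce into the modulus, then apply the private exponent.
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)

def verify(message: bytes, sig: int) -> bool:
    # Apply the public exponent and compare against the message hash.
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(sig, e, n) == h

sig = sign(b"hello")
assert verify(b"hello", sig)
```

PSS wraps exactly this private-key operation, but randomises and structures the hash input so that the padding-attack games played against raw and ad-hoc-padded RSA don't apply.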

Regarding ECDSA

I think DSA is not worth using, and ECDSA is just DSA over a new group. Basically, if your random numbers are even slightly biased, every signature leaks a tiny bit of information about your private key! Majorly bad. With RSA, a dodgy random number reduces the security of the one connection that used it, but other connections are independent. I think this is a deal-breaker for DSA.
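The extreme case of that leak, a fully repeated nonce, can be shown in a few lines. This toy DSA uses hypothetical miniature parameters (p = 607, q = 101) and made-up message hashes, but the algebra that recovers the key is identical at real sizes:

```python
# Toy DSA over a tiny group (q divides p - 1 = 606 = 2 * 3 * 101),
# showing why a repeated (or biased) nonce k is fatal: two signatures
# sharing k let any observer solve for the private key.
p, q = 607, 101
g = pow(2, (p - 1) // q, p)  # generator of the order-q subgroup

x = 57                       # private key
y = pow(g, x, p)             # public key

h1, h2 = 11, 73              # stand-ins for two message hashes, reduced mod q

def sign(h: int, k: int):
    r = pow(g, k, p) % q
    s = pow(k, -1, q) * (h + x * r) % q   # pow(k, -1, q) needs Python 3.8+
    return r, s

k = 38                       # the fatal mistake: one k for two messages
r1, s1 = sign(h1, k)
r2, s2 = sign(h2, k)
assert r1 == r2              # same k gives the same r, visible to anyone

# Recover k, then x, from the two signatures alone:
k_rec = (h1 - h2) * pow(s1 - s2, -1, q) % q
x_rec = (s1 * k_rec - h1) * pow(r1, -1, q) % q
assert k_rec == k and x_rec == x
```

With a merely biased (rather than repeated) nonce the recovery takes more signatures and lattice techniques, but the end state is the same: the private key falls out.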

I hate to say it, but even though ECDSA is faster than RSA for most operations, you should consider it obsolete junk (in a modern package) unless you're very careful.

For ephemeral keys it's OK, but you don't normally sign using ephemeral keys. There's one application of this though: using an existing secure connection to bootstrap another. In this case, you can make an ephemeral key, exchange it over the side-channel, and then the peer can authenticate you using it. Make sure you only ever use it once, and you're OK: you don't need to worry whether someone has ever used the key before for another purpose. In this scenario, the ECDSA key is a slightly more expensive, but much more comforting, form of shared-secret authentication.

Lattice methods

One day, we'll replace RSA with something else. Lattice methods seem to be the most promising way forward: the underlying problem is clearly not easy, and is thought to be resistant to quantum computers. Unfortunately, this is the only cryptographic primitive where I don't have the background to read the underlying results, but my gut feeling is that we'll get a widely-recognised good lattice signature scheme fairly soon.

NTRUSign, the most popular one so far, is moderately convincing to me, but it has some difficulties. I expect it's likely to be cracked eventually (that is, found to have a good few powers of two knocked off its security; "broken" doesn't necessarily mean a practical attack exists). I think we need to wait twenty years before using these sorts of things in production, although the company that holds the NTRU patent is desperately trying to get people to buy!

A comparison of different elliptic curves: which to use

Links

oidrelay

oidrelay - A Java servlet to act as a public relay for OpenID to Relying Parties behind firewalls


OpenID Indirect Relying Parties

This spec lets an OpenID Relying Party work from behind a firewall or NAT. The associated GitHub project implements the relay server as a Java Servlet.

Please, anyone interested, comment on the spec in the wiki or the bug tracker. Or look at the implementation!

Csmith

More from the group at Utah

taviso/ScaleWindow.c

Very cute! Oops. I plan to conduct a rather thorough search for overflow bugs in our codebase soon (most importantly undefined behaviour flaws).

Embedded in Academia : A Guide to Undefined Behavior in C and C++, Part 1

Another helpful summary article from John Regehr explaining the current position regarding what compilers implement these days.

Integer security, a chapter from Seacord

Very interesting. You’ll probably also want to read John Regehr, “Overflows in SafeInt”, which looks to be now a very good helper library, and Ian Lance Taylor, “Signed Overflow”.