Gripes with Google Groups

If you’re like me, you think of Google Groups as the Usenet client turned mailing list manager. If you’re a GCP user, or maybe one of a handful of SAML users, you probably know Google Groups as an access control mechanism. The bad news is we’re both right.

This can blow up if permissions on those groups aren't set right. Your groups were probably created by a sleep-deprived founder long before anyone was worried about access control, lovingly handcrafted and never audited since. Let’s say their configuration is, uh, “inconsistent”. If an administrator adds people to the right groups as part of their onboarding, it’s not obvious when group membership is secretly self-service. Even if someone can't join a group, they might still be able to read it.

You don’t even need anything that uses group membership as access control for this to go south. The simplest way is a password reset email. (Having a list of all of your vendors feels like a dorky compliance requirement, but it's underrated. Being able to audit which ones have multi-factor authentication is awesome.)

Some example scenarios:

Scenario 1: You get your first few customers and start seeing fraud. You create a mailing list with the few folks who want to talk about that topic. Nobody imagined that dinky mailing list would grow into a full-fledged team, let alone one with permissions to a third-party analytics suite that has access to all your raw data.

Scenario 2: The engineering team treats its mailing list as open access for the entire company. Ops deals with ongoing incidents candidly and has had bad experiences with nosy managers looking for scapegoats. That’s great until someone in ops extends an access control check in some custom software that gates on ops@ to also include engineering@.

Scenario 3: board@ gets a new investor who insists on using their existing email address. An administrator confuses the Google Groups setting for allowing out-of-domain addresses with the one allowing out-of-domain registration. Now everyone on the Internet can read the cap table for your next funding round.

This is a mess. It bites teams that otherwise have their ducks in a row, and cleaning it up only gets harder down the line. Get in front of it now and you probably won’t have to worry about it until someone makes you audit it, which is probably 2-3 years from now.

Google Groups has some default configurations for new groups these days:

  • Public (Anyone in ${DOMAIN} can join, post messages, view the members list, and read the archives.)
  • Team (Only managers can invite new members, but anyone in ${DOMAIN} can post messages, view the members list, and read the archives.)
  • Announcement-only (Only managers can post messages and view the members list, but anyone in ${DOMAIN} can join and read the archives.)
  • Restricted (Only managers can invite new members. Only members can post messages, view the members list, and read the archives. Messages to the group do not appear in search results.)

This is good but doesn't mean you're out of the woods:

  • These are just defaults for access control settings. Once a group is created, you get to deal with the combinatorial explosion of options, most of which don't really make sense. And you probably won't know when someone changes a group's settings after the fact.
  • People rarely document intent in the group description (or anywhere, for that matter). When a group's settings deviate, you have no idea whether they were supposed to.
  • "Team" lets anyone in the domain read. That doesn't cover "nosy manager" or "password reset" scenarios.

Auditing this is kind of a pain. The UI is slow, and the relevant controls are spread across multiple pages. Even smallish companies end up with dozens of groups. The only way we've found to make this not suck is the GSuite Admin SDK, and that's a liberal definition of "not suck".
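
If you go that route, the audit itself is only a page of Python. A sketch (the service account file, admin address, and domain are placeholders; you need domain-wide delegation for these scopes):

from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = [
    "https://www.googleapis.com/auth/admin.directory.group.readonly",
    "https://www.googleapis.com/auth/apps.groups.settings",
]
creds = service_account.Credentials.from_service_account_file(
    "sa.json", scopes=SCOPES
).with_subject("admin@example.com")
directory = build("admin", "directory_v1", credentials=creds)
settings = build("groupssettings", "v1", credentials=creds)

# Page through every group in the domain.
groups, token = [], None
while True:
    page = directory.groups().list(domain="example.com", pageToken=token).execute()
    groups.extend(page.get("groups", []))
    token = page.get("nextPageToken")
    if not token:
        break

# Dump the settings that control who can join, read, and post.
for group in groups:
    cfg = settings.groups().get(groupUniqueId=group["email"]).execute()
    print(group["email"], cfg.get("whoCanJoin"), cfg.get("whoCanViewGroup"),
          cfg.get("whoCanPostMessage"))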

You should have a few archetypes of groups. Put the archetype name in the group's name itself: that way the expected audience and access controls are obvious to users and auditors alike. Here are some archetypes we've found (with a sketch for auditing them after the list):

  • Team mailing lists should be called xyzzy-team@${DOMAIN}. Team members only, no external members, no self-service membership.
  • Internal-facing mailing lists should be called xyzzy-corp@${DOMAIN}. Public self-serve access for employees, no external members, posting limited to domain members or list members. These are often associated with a team, but unlike -team lists, anyone in the domain can join them.
  • External-facing lists. Example: contracts-inbound@${DOMAIN}. No self-serve access, no external members, but anyone can post.
  • External member lists (e.g. boards, investors): board-ext@${DOMAIN}. No self-serve access, external members allowed, and either members or anyone at the domain can post.
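
Once the archetype is in the name, drift detection is mechanical. A hypothetical sketch, reusing the cfg dicts the Groups Settings API hands back (the archetype table is illustrative, not an official schema):

# Expected settings per naming suffix; values follow the Groups
# Settings API conventions (booleans are strings).
ARCHETYPES = {
    "-team": {"whoCanJoin": "INVITED_CAN_JOIN", "allowExternalMembers": "false"},
    "-corp": {"whoCanJoin": "ALL_IN_DOMAIN_CAN_JOIN", "allowExternalMembers": "false"},
    "-ext":  {"whoCanJoin": "INVITED_CAN_JOIN", "allowExternalMembers": "true"},
}

def audit(email, cfg):
    """Yield a message for each setting that doesn't match the archetype."""
    local = email.split("@", 1)[0]
    for suffix, expected in ARCHETYPES.items():
        if local.endswith(suffix):
            for key, want in expected.items():
                if cfg.get(key) != want:
                    yield f"{email}: {key} is {cfg.get(key)!r}, wanted {want!r}"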

PS: Groups can let some users post as the group. I haven't run a phishing exercise that way, but I'm guessing an email appearing to legitimately come from [email protected] is going to be pretty effective.

Cryptographic right answers

Compared to earlier versions of this document, we’re less interested in empowering developers and a lot more pessimistic about the prospects of getting this stuff right.

There are, in the literature and in the most sophisticated modern systems, “better” answers for many of these items. If you’re building for low-footprint embedded systems, you can use STROBE and build a sound, modern, authenticated encryption stack entirely out of a single SHA-3-like sponge construction. You can use Noise to build a secure transport protocol with its own AKE. Speaking of AKEs, there are, like, 30 different password AKEs you could choose from.

But if you’re a developer and not a cryptography engineer, you shouldn’t do any of that. You should keep things simple and conventional and easy to analyze; “boring”, as the Google TLS people would say.

(This content has been developed and updated by different people over a decade. We've kept what Colin Percival originally said in 2009, what Thomas Ptacek said in 2015, and what we're saying in 2018 for comparison. If you're designing something today, just use the 2018 Latacora recommendation.)

Encrypting Data

  • Percival, 2009: AES-CTR with HMAC.
  • Ptacek, 2015: (1) NaCl/libsodium’s default, (2) ChaCha20-Poly1305, or (3) AES-GCM.
  • Latacora, 2018: KMS or XSalsa20+Poly1305.

You care about this if: you're hiding information from users or the network.

If you are in a position to use KMS, Amazon’s (or Google’s) Hardware Security Module time share, use KMS. If you could use KMS but encrypting is just a fun weekend project and you might be able to save some money by minimizing your KMS usage, use KMS. If you’re just encrypting secrets like API tokens for your application at startup, use SSM Parameter Store, which is KMS. You don’t have to understand how KMS works.
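
In Python, the API-token case is a couple of lines of boto3 (a sketch; the parameter name is a placeholder):

import boto3

# SSM decrypts SecureString parameters through KMS server-side;
# WithDecryption=True hands you back the plaintext value.
ssm = boto3.client("ssm")
resp = ssm.get_parameter(Name="/myapp/api-token", WithDecryption=True)
api_token = resp["Parameter"]["Value"]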

Otherwise, what you want ideally is “AEAD”: authenticated encryption with additional data (the option for plaintext authenticated headers).

The mainstream way to get authenticated encryption is to use a stream cipher (usually: AES in CTR mode) composed with a polynomial MAC (a cryptographic CRC).

The problem you’ll run into with all those mainstream options is nonces: they want you to come up with a unique (usually random) number for each stream which can never be reused. It’s simplest to generate nonces from a secure random number generator, so you want a scheme that makes that easy.

Nonces are particularly important for AES-GCM, which is the most popular mode of encryption. Unfortunately, they're also particularly tricky with AES-GCM: its standard nonce is only 96 bits, which is just-barely-but-maybe-not-quite on the border of safe for random nonces.

So we recommend you use XSalsa20-Poly1305. This is a species of “ChaPoly” constructions, which, put together, are the most common encryption constructions outside of AES-GCM. Get XSalsa20-Poly1305 from libsodium or NaCl.

The advantage of XSalsa20 over ChaCha20 and Salsa20 is that XSalsa supports an extended 192-bit nonce; it’s big enough that you can simply generate a big long random nonce for every stream and not worry about how many streams you’re encrypting.
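
With PyNaCl (one of the libsodium bindings), here's what the whole construction looks like; a minimal sketch with placeholder data:

from nacl.secret import SecretBox
from nacl.utils import random as nacl_random

# SecretBox is XSalsa20-Poly1305. encrypt() generates a fresh random
# 24-byte nonce and prepends it to the ciphertext; decrypt() peels it off.
key = nacl_random(SecretBox.KEY_SIZE)  # 32 bytes; keep this somewhere safe
box = SecretBox(key)

ciphertext = box.encrypt(b"attack at dawn")
assert box.decrypt(ciphertext) == b"attack at dawn"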

There are “NMR” or “MRAE” schemes in the pipeline that promise some degree of security even if nonces are mishandled; these include GCM-SIV (all the SIVs, really) and CAESAR-contest-finalist Deoxys-II. They’re interesting, but nobody really supports or uses them yet, and with an extended nonce, the security win is kind of marginal. They’re not boring. Stay boring for now.

Avoid: AES-CBC; AES-CTR by itself; block ciphers with 64-bit blocks (most especially Blowfish, which is inexplicably popular); OFB mode. Don't ever use RC4, which is comically broken.

Symmetric key length

  • Percival, 2009: Use 256-bit keys.
  • Ptacek, 2015: Use 256-bit keys.
  • Latacora, 2018: Go ahead and use 256-bit keys.

You care about this if: you're using cryptography.

But remember: your AES key is far less likely to be broken than your public key pair, so the latter key size should be larger if you're going to obsess about this.

Avoid: constructions with huge keys, cipher "cascades", key sizes under 128 bits.

Symmetric “Signatures”

  • Percival, 2009: Use HMAC.
  • Ptacek, 2015: Yep, use HMAC.
  • Latacora, 2018: Still HMAC.

You care about this if: you're securing an API, encrypting session cookies, or are encrypting user data but, against medical advice, not using an AEAD construction.

If you're authenticating but not encrypting, as with API requests, don't do anything complicated. There is a class of crypto implementation bugs that arises from how you feed data to your MAC, so, if you're designing a new system from scratch, Google "crypto canonicalization bugs". Also, use a secure compare function.
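
Here's a minimal sketch of both halves with Python's standard library (the key is a placeholder, and a real system would sign a canonicalized request, not a raw body):

import hashlib
import hmac

KEY = b"shared-secret-from-somewhere-safe"

def sign(body: bytes) -> str:
    return hmac.new(KEY, body, hashlib.sha256).hexdigest()

def verify(body: bytes, tag: str) -> bool:
    # hmac.compare_digest is the secure compare: it doesn't leak where
    # the first mismatching byte is through timing.
    return hmac.compare_digest(sign(body), tag)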

If you use HMAC, people will feel the need to point out that SHA-3 (and the truncated SHA-2 hashes) can do “KMAC”, which is to say you can just concatenate the key and data and hash them and be secure. This means that in theory HMAC is doing unnecessary extra work with SHA-3 or truncated SHA-2. But who cares? Think of HMAC as cheap insurance for your design, in case someone switches to non-truncated SHA-2.

Avoid: custom "keyed hash" constructions, HMAC-MD5, HMAC-SHA1, complex polynomial MACs, encrypted hashes, CRC.

Hashing algorithm

  • Percival, 2009: Use SHA256 (SHA-2).
  • Ptacek, 2015: Use SHA-2.
  • Latacora, 2018: Still SHA-2.

You care about this if: you always care about this.

If you can get away with it: use SHA-512/256, which truncates its output and sidesteps length extension attacks.
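
Python's hashlib exposes it when the underlying OpenSSL does (a sketch; check hashlib.algorithms_available if unsure):

import hashlib

# Note: SHA-512/256 is not the same as truncating sha512() yourself;
# it uses different initial values.
digest = hashlib.new("sha512_256", b"hello world").hexdigest()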

We still think it's less likely that you'll upgrade from SHA-2 to SHA-3 than it is that you'll upgrade from SHA-2 to something faster than SHA-3, and SHA-2 still looks great, so get comfortable and cuddly with SHA-2.

Avoid: SHA-1, MD5, MD6.

Random IDs

  • Percival, 2009: Use 256-bit random numbers.
  • Ptacek, 2015: Use 256-bit random numbers.
  • Latacora, 2018: Use 256-bit random numbers.

You care about this if: you always care about this.

From /dev/urandom.
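
In Python, the secrets module is the stdlib front door to the OS CSPRNG (sketch):

import secrets

# 32 bytes = 256 bits, hex-encoded, straight from the OS CSPRNG.
token = secrets.token_hex(32)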

Avoid: userspace random number generators, the OpenSSL RNG, haveged, prngd, egd, /dev/random.

Password handling

  • Percival, 2009: scrypt or PBKDF2.
  • Ptacek, 2015: In order of preference, use scrypt, bcrypt, and then if nothing else is available PBKDF2.
  • Latacora, 2018: In order of preference, use scrypt, argon2, bcrypt, and then if nothing else is available PBKDF2.

You care about this if: you accept passwords from users or, anywhere in your system, have human-intelligible secret keys.

But, seriously: you can throw a dart at a wall to pick one of these. Technically, argon2 and scrypt are materially better than bcrypt, which is much better than PBKDF2. In practice, it mostly matters that you use a real secure password hash, and not as much which one you use.
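
For example, with the argon2-cffi package (a sketch; the other hashes have equally small APIs):

from argon2 import PasswordHasher
from argon2.exceptions import VerifyMismatchError

ph = PasswordHasher()  # sane default time/memory/parallelism costs
stored = ph.hash("correct horse battery staple")  # store this string

try:
    ph.verify(stored, "correct horse battery staple")
except VerifyMismatchError:
    print("bad password")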

Don’t build elaborate password-hash-agility schemes.

Avoid: SHA-3, naked SHA-2, SHA-1, MD5.

Asymmetric encryption

  • Percival, 2009: Use RSAES-OAEP with SHA256 and MGF1+SHA256 bzzrt pop ffssssssst exponent 65537.
  • Ptacek, 2015: Use NaCl/libsodium (box / crypto_box).
  • Latacora, 2018: Use NaCl/libsodium (box / crypto_box).

You care about this if: you need to encrypt the same kind of message to many different people, some of them strangers, and they need to be able to accept the message asynchronously, like it was store-and-forward email, and then decrypt it offline. It's a pretty narrow use case.

Of all the cryptographic "right answers", this is the one you're least likely to get right on your own. Don't freelance public key encryption, and don't use a low-level crypto library like OpenSSL or BouncyCastle.

Here are several reasons you should stop using RSA and switch to elliptic curve:

  • RSA (and DH) drag you towards "backwards compatibility" (ie: downgrade-attack compatibility) with insecure systems.
  • RSA begs implementors to encrypt directly with its public key primitive, which is usually not what you want to do.
  • RSA has too many knobs. In modern curve systems, like Curve25519, everything is pre-set for security.

NaCl uses Curve25519 (the most popular modern curve, carefully designed to eliminate several classes of attacks against the NIST standard curves) in conjunction with a ChaPoly AEAD scheme. Your language will have bindings (or, in the case of Go, its own library implementation) to NaCl/libsodium; use them. Don’t try to assemble this yourself. Libsodium has a list.
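
For illustration, here's crypto_box through PyNaCl; a sketch, with placeholder messages:

from nacl.public import Box, PrivateKey

# Box is Curve25519 key agreement plus an XSalsa20-Poly1305 AEAD.
sender = PrivateKey.generate()
recipient = PrivateKey.generate()

# encrypt() picks a random nonce and prepends it, as with SecretBox.
ciphertext = Box(sender, recipient.public_key).encrypt(b"the cap table")
assert Box(recipient, sender.public_key).decrypt(ciphertext) == b"the cap table"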

Don't use RSA.

Avoid: Systems designed after 2015 that use RSA, RSA-PKCS1v15, RSA, ElGamal, I don't know, Merkle-Hellman knapsacks? Just avoid RSA.

Asymmetric signatures

  • Percival, 2009: Use RSASSA-PSS with SHA256 then MGF1+SHA256 in tricolor systemic silicate orientation.
  • Ptacek, 2015: Use NaCl, Ed25519, or RFC6979.
  • Latacora, 2018: Use NaCl or Ed25519.

You care about this if: you're designing a new cryptocurrency. Or, a system to sign Ruby Gems or Vagrant images, or a DRM scheme, where the authenticity of a series of files arriving at random times needs to be checked offline against the same secret key. Or, you're designing an encrypted message transport.

The allegations from the previous answer are incorporated herein as if stated in full.

The two dominating use cases within the last 10 years for asymmetric signatures are cryptocurrencies and forward-secret key agreement, as with ECDHE-TLS. The dominating algorithms for these use cases are all elliptic-curve based. Be wary of new systems that use RSA signatures.

In the last few years there has been a major shift away from conventional DSA signatures and towards misuse-resistant "deterministic" signature schemes, of which EdDSA and RFC6979 are the best examples. You can think of these schemes as "user-proofed" responses to the PlayStation 3 ECDSA flaw, in which reuse of a random number leaked secret keys. Use deterministic signatures in preference to any other signature scheme.

Ed25519, the NaCl/libsodium default, is by far the most popular public key signature scheme outside of Bitcoin. It’s misuse-resistant and carefully designed in other ways as well. You shouldn’t freelance this either; get it from NaCl.
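
Through PyNaCl, signing looks like this (sketch):

from nacl.signing import SigningKey

signing_key = SigningKey.generate()
signed = signing_key.sign(b"contents of the artifact being released")

# verify() raises nacl.exceptions.BadSignatureError on a forgery.
signing_key.verify_key.verify(signed)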

Avoid: RSA-PKCS1v15, RSA, ECDSA, DSA; really, especially avoid conventional DSA and ECDSA.

Diffie-Hellman

  • Percival, 2009: Operate over the 2048-bit Group #14 with a generator of 2.
  • Ptacek, 2015: Probably still DH-2048, or NaCl.
  • Latacora, 2018: Probably nothing. Or use Curve25519.

You care about this if: you're designing an encrypted transport or messaging system that will be used someday by a stranger, and so static AES keys won't work.

The 2015 version of this document confused the hell out of everyone.

Part of the problem is that our “Right Answers” are a response to Colin Percival’s “Right Answers”, and his included a “Diffie-Hellman” answer, as if “Diffie-Hellmanning” was a thing developers routinely do. In reality, developers simply shouldn’t freelance their own encrypted transports. To get a sense of the complexity of this issue, read the documentation for the Noise Protocol Framework. If you’re doing a key-exchange with DH, you probably want an authenticated key exchange (AKE) that resists key compromise impersonation (KCI), and so the primitive you use for DH is not the only important security concern.

But whatever.

It remains the case: if you can just use NaCl, use NaCl. You don't even have to care what NaCl does. That’s the point of NaCl.

Otherwise: use Curve25519. There are libraries for virtually every language. In 2015, we were worried about encouraging people to write their own Curve25519 libraries, with visions of Javascript bignum implementations dancing in our heads. But really, part of the point of Curve25519 is that the entire curve was carefully chosen to minimize implementation errors. Don’t write your own! But really, just use Curve25519.
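
For reference, a raw X25519 exchange through the pyca/cryptography package (a sketch only: run the output through a KDF before using it as a key, and remember that bare DH is not an AKE):

from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

alice = X25519PrivateKey.generate()
bob = X25519PrivateKey.generate()

# Each side combines its own private key with the peer's public key
# and arrives at the same shared secret.
assert alice.exchange(bob.public_key()) == bob.exchange(alice.public_key())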

Don’t do ECDH with the NIST curves, where you’ll have to carefully verify elliptic curve points before computing with them to avoid leaking secrets. That attack is very simple to implement, easier than a CBC padding oracle, and far more devastating.

The 2015 document included a clause about using DH-1024 in preference to sketchy curve libraries. You know what? That’s still a valid point. Valid and stupid. The way to solve the “DH-1024 vs. sketchy curve library” problem is the same as the way to solve the “should I use Blowfish or IDEA?” problem: don’t have that problem. Use Curve25519.

Avoid: conventional DH, SRP, J-PAKE, handshakes and negotiation, elaborate key negotiation schemes that only use block ciphers, srand(time()).

Website security

  • Percival, 2009: Use OpenSSL.
  • Ptacek, 2015: Remains: OpenSSL, or BoringSSL if you can. Or just use AWS ELBs.
  • Latacora, 2018: Use AWS ALB/ELB or OpenSSL, with LetsEncrypt.

You care about this if: you have a website.

If you can pay AWS not to care about this problem, we recommend you do that.

Otherwise, there was a dark period between 2010 and 2016 where OpenSSL might not have been the right answer, but that time has passed. OpenSSL has gotten better, and, more importantly, OpenSSL is on-the-ball with vulnerability disclosure and response.

Using anything besides OpenSSL will drastically complicate your system for little, no, or even negative security benefit. So just keep it simple.

Speaking of simple: LetsEncrypt is free and automated. Set up a cron job to re-fetch certificates regularly, and test it.

Avoid: offbeat TLS libraries like PolarSSL, GnuTLS, and MatrixSSL.

Client-server application security

  • Percival, 2009: Distribute the server's public RSA key with the client code, and do not use SSL.
  • Ptacek, 2015: Use OpenSSL, or BoringSSL if you can. Or just use AWS ELBs.
  • Latacora, 2018: Use AWS ALB/ELB or OpenSSL, with LetsEncrypt.

You care about this if: the previous recommendations about public-key crypto were relevant to you.

It seems a little crazy to recommend TLS given its recent history:

  • The Logjam DH negotiation attack
  • The FREAK export cipher attack
  • The POODLE CBC oracle attack
  • The RC4 fiasco
  • The CRIME compression attack
  • The Lucky13 CBC padding oracle timing attack
  • The BEAST CBC chained IV attack
  • Heartbleed
  • Renegotiation
  • Triple Handshakes
  • Compromised CAs
  • DROWN (though personally we’re warped and an opportunity to play with attacks like DROWN would be in our “pro” column)

Here's why you should still use TLS for your custom transport problem:

  • In custom protocols, you don’t have to (and shouldn’t) depend on 3rd-party CAs. You don’t even have to use CAs at all (though it’s not hard to set up your own); you can just use a whitelist of self-signed certificates, which is approximately what SSH does by default, and what you’d come up with on your own (a minimal pinning sketch follows this list).
  • Since you’re doing a custom protocol, you can use the best possible TLS cipher suites: TLS 1.2+, Curve25519, and ChaPoly. That eliminates most attacks on TLS. The reason everyone doesn’t do this is that they need backwards-compatibility, but in custom protocols you don’t need that.
  • Many of these attacks only work against browsers, because they rely on the victim accepting and executing attacker-controlled Javascript in order to generate repeated known/chosen plaintexts.
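
To make the first bullet concrete, here's a sketch of pinning a single self-signed certificate with Python's ssl module (the file name and host are placeholders):

import socket
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
ctx.load_verify_locations("server.pem")  # trust exactly this certificate
ctx.check_hostname = False               # the pin replaces hostname checks

with socket.create_connection(("internal.example.com", 8443)) as sock:
    with ctx.wrap_socket(sock) as tls:
        tls.sendall(b"hello")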

Avoid: designing your own encrypted transport, which is a genuinely hard engineering problem; using TLS but in a default configuration, like, with "curl"; using "curl", IPSEC.

Online backups

  • Percival, 2009: Use Tarsnap.
  • Ptacek, 2015: Use Tarsnap.
  • Latacora, 2018: Store PMAC-SIV-encrypted arc files to S3 and save fingerprints of your backups to an ERC20-compatible blockchain.

You care about this if: you bother backing things up.

Just kidding. You should still use Tarsnap.

(This post was syndicated on the Latacora blog.)

Smaller Clojure Docker builds with multi-stage builds

A common pattern in Docker is to use a separate build environment from the runtime environment. Many platforms have different requirements when you're generating a runnable artifact than when you're running it.

In languages like Go, Rust or C, where the most common implementations produce native binaries, the resulting artifact may require nothing from the environment at all, or perhaps as little as a C standard library. Even in languages like Python that don't typically have a build step, you might indirectly use code that still requires compilation. Common examples include OpenSSL with pyca/cryptography or NETLIB and other numerical libraries with numpy/scipy.

In Clojure, you can easily build "uberjars" with both lein and boot. These are jars (the standard JVM deployable artifact) that come with all dependencies prepackaged, requiring nothing beyond what's in the Java standard library (rt.jar). While this still requires a JRE to run, that is still much smaller than the full development environment.

There are a few advantages to separating environments, and they all boil down to environments not containing anything they don't need. That has clear performance advantages, although Docker has historically mitigated this problem with layered pulls. It can have security benefits as well: you can't have bugs in software you don't ship. Even software that isn't directly used in the build process matters: build environments often contain plenty of software that is never used, but would normally carry over into your production environment.

Historically, most users of Docker haven't bothered. Even if there are advantages, they aren't worth the hassle of having separate Docker environments and ferrying data between them. While different ways of effectively sharing data between containers have been available for years, people who wanted a shared build step have mostly had to write their own tooling. For example, my icecap project has a batch file with an embedded Dockerfile that builds libsodium debs.

Docker recently added support for a new feature called multi-stage builds, which makes this pattern much simpler: Dockerfiles themselves know about your precursor environments, and later stages have full access to earlier stages for copying build artifacts around. This requires Docker 17.05 or newer.

Here's an example Dockerfile that builds an uberjar from a standard lein-based app, and puts it in a new JRE image:

FROM clojure AS build-env
WORKDIR /usr/src/myapp
# Copy project.clj by itself first, so the (slow) dependency fetch is
# cached as its own layer and only reruns when dependencies change.
COPY project.clj /usr/src/myapp/
RUN lein deps
COPY . /usr/src/myapp
# Build the uberjar and give it a predictable name; lein prints
# "Created /path/to/...-standalone.jar", which the sed captures.
RUN mv "$(lein uberjar | sed -n 's/^Created \(.*standalone\.jar\)/\1/p')" myapp-standalone.jar

# Second stage: start from a small JRE image and copy in just the jar.
FROM openjdk:8-jre-alpine
WORKDIR /myapp
COPY --from=build-env /usr/src/myapp/myapp-standalone.jar /myapp/myapp.jar
ENTRYPOINT ["java", "-jar", "/myapp/myapp.jar"]

This captures the uberjar name from the lein uberjar output. If your uberjar name doesn't end in -standalone.jar, that won't work. You can change the name of the uberjar with the :uberjar-name setting in project.clj. If you set it to myapp-standalone.jar, you don't need the gnarly sed expression at all, and can just call lein uberjar. (Thanks to Łukasz Korecki for the suggestion!)

The full clojure base image is a whopping 629MB (according to docker images), whereas openjdk:8-jre-alpine clocks in at 81.4MB. That's a little bit of an unfair comparison: clojure also has an alpine-based image. However, this still illustrates the savings compared to the most commonly used Docker image.

There are still good reasons for not using multi-stage builds. In the icecap example above, the entire point is to use Docker as a build system to produce a deb artifact outside of Docker. However, that's a pretty exotic use case: for most people this will hopefully make smaller Docker images an easy reality.

Edited: The original blog post said that the Docker version to support this feature was in beta at time of writing. That was/is correct, but it's since been released, so I updated the post.

Edited: Łukasz Korecki pointed out that project.clj has an :uberjar-name parameter which can be used to avoid the gnarly sed expression. Thanks Łukasz!

2016 rMBP caveats

I bought the 2016 15" retina MacBook Pro as soon as it became available. I've had it for a week now, and there have been some issues you might want to be aware of if you'd like to get one.

(There are a bunch of links to Amazon in this article. They're not affiliate links.)

System Integrity Protection is often disabled

I noticed via Twitter that some people were reporting that System Integrity Protection (SIP) was disabled by default on their Macs. SIP is a mechanism via which macOS protects critical system files from being overwritten.

You can check if SIP is enabled on your system by running csrutil status in a terminal. Sure enough, SIP was disabled on both my and my wife's new rMBPs. To enable SIP, boot into recovery mode (hold ⌘-R while booting), open a terminal, type csrutil enable, and reboot.

Perhaps unrelatedly, different out-of-the-box rMBPs appear to have different builds of OS X Sierra 10.12.1.

Thunderbolt 2 dongle doesn't work with external screens

I have a Dell 27" 4K monitor (P2715Q). I used it with my previous-generation rMBP via a DisplayPort-to-mDP2 cable connected to its Thunderbolt 2 port. When I was buying my new laptop, the store suggested I get a Thunderbolt 3 to Thunderbolt 2 dongle, so I was expecting to get a Thunderbolt 2 port like the one on my previous Mac. When I plugged it in to my monitor, the monitor told me that there was a cable plugged in, but no signal coming from the computer.

My understanding was that the Thunderbolt spec implies PCIe lanes and other protocols over the same port. Specifically, Thunderbolt 2 means 4 PCI Express 2.0 lanes with DisplayPort 1.2; at a cursory glance, Wikipedia agrees. (Thunderbolt 3 adds HDMI 2.0 and USB 3.1 gen 2.)

I spent about an hour and a half on the phone with AppleCare folks. The Apple support people were very friendly. (I'm guessing their instructions tell them to never, under any circumstances, interrupt a customer. It was a little weird.) I was redirected a few times. They had a variety of suggestions, including:

  • Changing my monitor to MST mode. This shouldn't be necessary for devices supporting DisplayPort 1.2, and it did nothing except make my monitor stop working with my old rMBP too. Fortunately I was able to recover via HDMI on my old laptop.
  • Buying the Apple Digital AV Adapter instead. That adapter used HDMI instead of mDP2. That's a significant downgrade; my use of DisplayPort was intentional, because DisplayPort 1.2 is the only way I can power the 4K display at 60Hz. (The new adapter does not support HDMI 2.0, which is necessary for 4K@60Hz.)
  • Buying a third-party DisplayPort adapter or dock. This is precarious at best. Most existing devices don't work with the new rMBP, because they use a previous-generation TI chip. There are plenty of docks that won't work, by StarTech, Dell, Kensington and Plugable. I found one dock by CalDigit that will ostensibly work with the new rMBP, but doesn't supply enough power to charge it.

Eventually, we found a KB article that spells out that the Thunderbolt dongle doesn't work for DisplayPort displays:

The Thunderbolt 3 (USB-C) to Thunderbolt 2 Adapter doesn't support connections to these devices:

  • Apple DisplayPort display
  • DisplayPort devices or accessories, such as Mini DisplayPort to HDMI or Mini DisplayPort to VGA adapters
  • 4K Mini DisplayPort displays that don’t have Thunderbolt

I'm a little vindicated by the Mac Store review page for the dongle; apparently I wasn't the only person to expect that. (I was unable to see the reviews before my purchase, because I purchased it with my Mac, which doesn't show reviews. Also, the product was brand new at the time, and didn't have these reviews yet.)

Belkin and OWC will be shipping docks that allegedly work with the new rMBP, but Belkin's is currently unavailable with no ship date mentioned, and OWC claims February 2017.

WiFi failing with USB-C devices plugged in

Just as I was going to start writing this post, I noticed that I wasn't able to sync my blog repository from GitHub:

Get https://api.github.com/repos/lvh/lvh.github.io: dial tcp 192.30.253.116:443: connect: network is unreachable

It didn't click at first what was going on. I restarted my router, connected to different networks, tried a different machine -- all telling me it was this laptop that was misbehaving. I started trying everything, and realized I had recently plugged in my WD backup drive, from which I was copying over an SSH key. It's a USB 3.0 drive that I'm connecting via an AUKEY USB 3 to USB-C converter. I removed the drive, and my WiFi started working again. Plugging it back in doesn't break WiFi instantly, but it does eventually.

After searching, I was able to find someone with the same problem. It is unclear to me if this issue is related to the first-gen TI chip issue mentioned above. In that video, the authors are also using a USB 3.0 to USB-C plug, albeit a different one from mine. I don't have a reference USB-C machine that isn't a new 2016 rMBP to test with. However, this seems plausible, because the USB 3.0 dongle I purchased from Apple ostensibly works fine.

This does not seem like a reasonable failure mode.

The escape key, and the new keyboard

I spend most of my day in Emacs. I'm perfectly happy with the new keyboard. I've also used the regular MacBook butterfly keyboard, and the new version is significantly better. I've never had a problem with not having an escape key; every app where I would've cared to press it had an escape key drawn on the new Touch Bar. However, not having tactile feedback for the escape key is annoying. When I was setting up my box and quickly editing a file in vim, I successfully pressed Escape to exit insert mode -- but I ended up pressing it five times because I thought I didn't hit it. Apparently the visual feedback vim gives me that I've exited insert mode is not, actually, what my brain relies on. I'll let you know if I get used to it.

Charging

I'll miss the safety of MagSafe, but being able to plug in your charger on either side is an unexpectedly nice benefit.

Conclusion

I was ready to accept a transition period of dongles; I bought into it, literally and figuratively. However, most of the dongles don't actually work, and that sucks. So, maybe wait for the refresh, or at least until the high-quality docks are available.