Are Your Web Server Configs 2018 Compliant?

Setting Web Server Options for 2018

Welcome to Infrastructure Week July 2018! New articles and tools every day this week.

Over the past couple of years, web servers have added support for new operating system features, protocols, and encryption systems. Are you using the best options for 2018, or are you stuck with the features you learned how to configure years ago?

nginx

2016 Feature: Simultaneous RSA and EC Certificates

nginx added support for using multiple TLS certificates per server block in 1.11.0 released 2016-05-24. You should be using both RSA and EC certs on all your domains now.

If not, get your RSA and EC certs using my Let’s Encrypt automation tool, then configure nginx to use both kinds of certificates and keys:

nginx lets you add both certificates by repeating the same ssl_certificate{_key} directives. It works out internally which certificate is RSA and which is EC, and serves each connecting client the kind it supports.
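In config terms, that looks something like this (paths are placeholders for your own certificate locations):

```nginx
server {
    listen 443 ssl;
    server_name example.com;

    # RSA certificate and key
    ssl_certificate     /etc/ssl/example.com/rsa-fullchain.pem;
    ssl_certificate_key /etc/ssl/example.com/rsa-privkey.pem;

    # EC certificate and key (same directives, just repeated)
    ssl_certificate     /etc/ssl/example.com/ec-fullchain.pem;
    ssl_certificate_key /etc/ssl/example.com/ec-privkey.pem;
}
```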

2012 Feature: OCSP Stapling

Staple

OCSP is the Online Certificate Status Protocol, which basically lets anyone request a CA-signed status for a certificate. With stapling, your servers fetch that CA-signed validation of your own certificate and “staple” it inline with all new TLS connections.

OCSP Stapling is a voluntary opt-in by your web servers (no mail servers seem to support it yet) so your clients don’t have to send their own blocking and privacy-leaking OCSP requests to validate your certificates.
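Enabling the voluntary version in nginx is a few directives (the resolver address and chain path here are assumptions for illustration):

```nginx
server {
    listen 443 ssl;

    ssl_stapling        on;   # fetch and cache OCSP responses, then staple them
    ssl_stapling_verify on;   # verify the OCSP response signature before stapling

    # nginx needs a resolver to reach the CA's OCSP responder,
    # plus the issuer chain to verify responses against
    resolver 127.0.0.1;
    ssl_trusted_certificate /etc/ssl/example.com/chain.pem;
}
```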

Must-Staple

OCSP also has a mandatory counterpart: the new(ish) Must-Staple option. You can create certificates declaring they must be served with an attached live OCSP response, or else clients must refuse to connect. Many developers don’t recognize the difference between opportunistic OCSP Stapling and OCSP Must-Staple, which leads to many broken servers in the wild (cough nginx developers cough).
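If your CA supports it, you can request a Must-Staple certificate yourself: certbot has a --must-staple flag, or you can add the TLS feature extension to a CSR by hand. A sketch with OpenSSL (1.1.0+, which understands the tlsfeature extension; the names and subject are placeholders):

```shell
# Build a CSR requesting the Must-Staple (status_request) TLS feature.
cat > must-staple.cnf <<'EOF'
[req]
distinguished_name = dn
req_extensions = ext
prompt = no
[dn]
CN = example.com
[ext]
tlsfeature = status_request
EOF

# Generate a key and a CSR carrying the Must-Staple extension
openssl req -new -newkey rsa:2048 -nodes -config must-staple.cnf \
  -keyout example.key -out example.csr
```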

Must-Staple sounds like a great feature, right? Except there’s some infrastructure downsides:

  • if your web server doesn’t attach the OCSP response, clients refuse to connect to your server since having a bundled OCSP response is a required condition of your certificate
  • many kinds of TLS servers don’t support even voluntary OCSP Stapling, much less Must-Staple, so you can’t use Must-Staple certificates on things like mail servers or internal database servers
  • nginx refuses to properly implement OCSP support, so if you use Must-Staple with nginx, your clients will not be able to reliably connect to your TLS/http2 servers.

nginx fails OCSP Must-Staple support in two ways:

  • nginx requests OCSP staples async when the first client connects to a server after reload
    • meaning: after you restart/reload your server, the first clients to connect will not get OCSP Staples until the cache populates.
    • obviously, with Must-Staple certificates, all those connections will fail on the client side, since they don’t include the required stapled OCSP response.
    • the typical “workaround” for this problem is manually populating an OCSP Staple file yourself to avoid the nginx async lookup behavior
      • you must manually update your on-disk OCSP Staple cache regularly so it doesn’t expire
      • but even that workaround runs into an innate design problem, considering:
  • since nginx added multiple certificate support (and they don’t give a shit about correct OCSP stapling behavior), you can’t provide your own hard-coded OCSP Responses in a server block because nginx only supports one ssl_stapling_file directive per server even when the server block has multiple certificates.
    • nginx developers continue to call OCSP just an “optimization” when it clearly isn’t. They think it’s a temporary feature they can ignore until the world stops using it, but the world is moving towards the feature more and more.

The problem of nginx developers refusing to understand OCSP has existed for years, and nginx can’t seem to be bothered to reliably serve clients connecting to server blocks requiring Must-Staple. As of July 2018, you can’t use OCSP Must-Staple with nginx¹.

2013 Feature: TCP_FASTOPEN

A (relatively) new TCP feature lets repeated TCP connections to the same host skip the full 3-way handshake: the server hands out a cookie on the first connection, and later connections from the same client can carry data in the initial SYN.

nginx added support for TCP_FASTOPEN in 1.5.8 released 2013-12-17.

TCP_FASTOPEN requires two steps to enable:

  • first, you have to enable it for incoming connections under Linux:
    • sysctl net.ipv4.tcp_fastopen=3
      • obviously configure the option to persist too
      • more usage details available at TCP Fast Open
  • then you have to enable it on each of your nginx hosts by adding fastopen=LIMIT to your listen directives (e.g. fastopen=64 or fastopen=4096 etc) where LIMIT prevents server resource exhaustion.
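Put together, the two steps look roughly like this (the queue limit of 256 is just an example):

```nginx
# Kernel side, run once and persisted in /etc/sysctl.d/:
#   sysctl net.ipv4.tcp_fastopen=3    # 1 = client, 2 = server, 3 = both

# nginx side: allow up to 256 pending fast-open connections on this listener
server {
    listen 443 ssl fastopen=256;
}
```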

Many articles say you must compile nginx with -DTCP_FASTOPEN=23 defined, but the latest nginx source I checked enables fastopen via automatic build-time feature detection. So, if your system supports TCP_FASTOPEN, nginx is built with it without extra flags.

2015 Feature: HTTP/2

nginx implemented spdy replacement HTTP/2 as of 1.9.5 released 2015-09-22 (then spent 18 months fixing leaks, crashes, and icky bugs, but it works better now). http2 requires TLS encrypted connections and enables other performance improvements over the legacy HTTP/1.1 text protocol.

Implement support for http2 by adding ssl http2 to each listen directive on port 443 (which also obviously requires setting up proper TLS keys and certificates).
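A minimal example, assuming your certificates are already in place:

```nginx
server {
    listen 443 ssl http2;

    ssl_certificate     /etc/ssl/example.com/fullchain.pem;
    ssl_certificate_key /etc/ssl/example.com/privkey.pem;
}
```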

2015 Feature: SO_REUSEPORT

Linux added SO_REUSEPORT, the ability to act as an in-kernel load balancer among multiple processes all listening on the same address and port.

nginx added support for SO_REUSEPORT and it benchmarked favorably.

Enabling SO_REUSEPORT is as simple as adding reuseport to your listen directives, but you will want to monitor your performance before and after (if you are a high traffic site) because not everything is puppies and roses in the kernel.
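The directive itself is tiny (note: reuseport may appear on only one listen directive per address:port pair):

```nginx
server {
    # the kernel distributes new connections across nginx worker processes
    listen 443 ssl http2 reuseport;
}
```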

2014 Feature: SSL Session Tickets

nginx added ssl_session_tickets support in 1.5.9 released 2014-01-22 to enable faster and less computationally intensive resumption of TLS sessions.

You enable it with simple config options, but some viewpoints recommend not using the feature if you don’t manage your servers carefully.

Basically, web servers (nginx, apache, haproxy, anything using OpenSSL APIs) never expire the key protecting the session ticket cache. Since tickets are encrypted with that long-lived symmetric key, your TLS forward secrecy is broken if the server never rotates it.

Fortunately, nginx does generate a new random in-memory key every time it starts or reloads. So, all you have to do is create a cron job or timer to nginx -s reload your web server every 6, 12, or 24 hours depending on your needs. For larger server deployments, you can generate a shared ticket key file and distribute it to all your servers (in tmpfs please, don’t let it persist), then rotate the shared ticket key file every 6-20 hours.
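A crontab sketch of the simple single-server rotation (the binary path and 12-hour interval are examples):

```cron
# force nginx to generate a fresh in-memory session ticket key every 12 hours
0 */12 * * * /usr/sbin/nginx -s reload
```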

You can verify your server did rotate its ticket cache by running this script against a server. Remember the first line of output, then reload your server and run the script again. The first line of output should change. If it changed, your server generated a new ssl session ticket cache key:
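If you don’t have the script handy, a rough equivalent using openssl against a live host looks like this (a sketch only; the hostname is a placeholder):

```shell
# Capture the session ticket a server hands out; the ticket bytes should
# differ after the server's ticket key rotates (e.g. after nginx -s reload).
HOST=example.com
openssl s_client -connect "$HOST:443" -servername "$HOST" \
  -sess_out /tmp/ticket.pem </dev/null 2>/dev/null
openssl sess_id -in /tmp/ticket.pem -text -noout
```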

2018 Feature: HTTP/2 Push

HTTP/2 allows web servers to async push extra files to clients any time they want. You can use this feature to, for example, push style sheets or certain images to a client when they request a page to reduce the number of round trips required.

nginx has a nice writeup on the feature. For config details, see the http2_push directive.
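As a sketch, pushing a stylesheet alongside page requests looks like this (paths are placeholders):

```nginx
location = /index.html {
    # proactively push the stylesheet the page will request anyway
    http2_push /css/site.css;
}
```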

haproxy

As of 2018, haproxy now supports all those new web features too:

  • HTTP/2 (via adding alpn h2,http/1.1 to your bind options)
    • (and obviously implies full TLS support as well)
      • further implying you must add ssl crt <certfile> to your bind to enable encryption for your h2 connections
  • TCP_FASTOPEN (via tfo)
  • SO_REUSEPORT (since 2007)
  • Serving both RSA and EC certs from one host (see “cert bundles” as of 1.7)
    • previous to 1.7, there was a very convoluted configuration hack to obtain the same result.
  • Unlike nginx, haproxy does support proper OCSP Stapling configurations for unlimited numbers of certificates, but you must refresh the OCSP status yourself (via cron/timer) then notify haproxy so it can start serving the new OCSP response.
  • TLS session keys (see tls-ticket-keys, but you must create your own random keys and replace them every 6-24 hours).
  • and a bonus feature: haproxy now supports caching small objects in its frontend for things like favicons so it doesn’t have to continually bother static backends where responses are never changing anyway.
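An illustrative haproxy frontend pulling several of those features together (paths and names are assumptions):

```haproxy
frontend https-in
    # TLS with a cert bundle, HTTP/2 via ALPN, and TCP Fast Open
    bind :443 ssl crt /etc/haproxy/certs/example.com.pem alpn h2,http/1.1 tfo
    default_backend app

# OCSP refresh happens out-of-band: fetch a new response via cron/timer,
# then feed it to haproxy over the runtime API, e.g.
#   echo "set ssl ocsp-response <base64 payload>" | socat stdio /var/run/haproxy.sock
```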

Plus, haproxy fixed its multi-year “problem” of dropping connections during reloads (which turned out, technically, to be a Linux SMP locking issue), described in the great writeup: No More Hacks!

Environments

On top of all the changes and improvements above, underlying fads and best practices are always changing too.

You can pick your TLS cipher suites using the Mozilla TLS Config Tool. Enjoy deciding how restrictive or permissive you want to be about which clients you support.

Also, these days, instead of burning potentially hours of time generating “custom” DH params, you should use the well-known, vetted groups described at pre-defined DHE groups.
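With nginx, that means pointing ssl_dhparam at one of the published groups instead of a locally generated file (the path here is a placeholder):

```nginx
# ffdhe2048 from the pre-defined DHE groups (RFC 7919): download it, don't generate it
ssl_dhparam /etc/ssl/ffdhe2048.pem;
```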

Plus, since you are now always running 100% TLS HTTP/2 forever, make sure you set a long HSTS max-age on all responses.
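In nginx that’s a single header (a two-year max-age shown here; only add includeSubDomains once you’re sure every subdomain is TLS-only):

```nginx
add_header Strict-Transport-Security "max-age=63072000; includeSubDomains" always;
```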

Conclusion

Everything changes. In the past four years we’ve seen the rise of public EC certificates, wider use of SO_REUSEPORT, the birth of HTTP/2 from the guts of spdy, OS vendors deploying newer kernels with support for things like TCP_FASTOPEN (it takes OS vendors 2-5 years to ship new kernel features after they hit mainline), and all server projects rushing to integrate and implement all the various features all at once.

Keep on configurin’

-Matt (@mattsta)


Stay tuned for more Infrastructure Week July 2018! New articles and tools every day this week.


  1. though, some people suggest hacks where you create an nginx wrapper start/restart/reload script that immediately connects to all your domains after startup to “prime the cache,” but even that isn’t reliable because “live” clients can still get through before responses are cached. For a reliable solution you would have to:

    • add firewall rule disabling nginx access to the public
    • restart/reload nginx
    • script connections to each of your domains over TLS for each of their RSA and EC certs so nginx attempts to populate OCSP caches
    • script new connections in a loop to verify nginx is including OCSP responses with each of your certificates now
    • re-enable public nginx access through your firewall

    Nobody is going to do that properly. They’ll restart nginx, let the first clients fail, and let retries eventually get through once the caches populate.