HTTP/2 for Apache httpd


So, Apache 2.4.24 is about to get tagged and tested Real Soon Now, and I managed to find some time to work on proxy setups and mod_h2. Which was needed badly. The result - so far - is the shiny v1.7.5, which you can drop into an Apache 2.4.23.

If you still run the plain vanilla mod_h2 that comes with Apache 2.4.23, you will get several benefits from upgrading (be it from here or via the next Apache release). I describe those in more detail below:


Connection timeouts

Connection timeouts are only active while doing IO. When the backend is happily chewing on a request, mod_http2 will wait very patiently for it to produce something. Only when request processing is blocked on a read or write does the timeout apply. This also means that ProxyTimeout is treated separately from Timeout, as it should be.

Example: you have timely clients and set a Timeout of 10 seconds. So when reading request DATA or sending response DATA is blocked for more than 10 seconds, the request fails. But in the backend, you have a slow, old application server, and requests can be quite complex, so you are willing to let it chew on a response for 5 minutes before you suspect that something is wrong. You set the ProxyTimeout to 300 seconds.

Before it got fixed, such a setup was not possible, since the 10 seconds would always trigger, even while waiting on the backend to produce something. Not so now.
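In the configuration, the setup from the example above would read roughly like this (the backend URL is illustrative):

```apache
# client-facing IO may stall at most 10 seconds
Timeout 10
# but the backend gets 5 minutes to produce a response
ProxyTimeout 300
ProxyPass "/app/" "http://backend.example.com/app/"
```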


Unique CGI ids

There was a CGI-related problem. To be more exact, it was mod_cgid-related. You see, mod_cgid talks to a daemon and needs some kind of identifier for each CGI process it triggers (so that it can kill a process it is not happy with, for example). Since several httpd processes might talk to the same daemon, such an identifier needs to be unique across the whole server.

Now, connections have a server-wide unique ID, and since there is only one request per connection, let's use the connection id to identify our CGI request. And this works nicely for HTTP/1.1. In HTTP/2, however, there are several requests per connection at the same time. And up till now, when two or more of them were CGIs, mod_cgid assigned them the same id. And maybe cancelled the wrong one. Yikes!

Since this only messed up CGIs on the same connection, it's not a security risk. You cannot mess with CGIs from someone else.

Happily, connection ids are of type long. That means on 64-bit systems, and with the way Apache generates these ids, it is possible to squeeze the HTTP/2 stream id into it and have no collisions. On 32-bit systems, though, this needs more work. And the final result is not in yet...

Needless to say, mod_http2 smoke tests now include those that run 50 uploads on 2 connections each against a CGI script and check the completeness of all results.

Expecting a 100-continue

Many clients prefer the 'Expect: 100-continue' request header when performing uploads. This worked for simple setups, but did not do the necessary things when httpd was acting as an HTTP proxy. It generated the '100 Continue' response way too early and suppressed the header when talking to the backend server.

This has been fixed now. Expect headers are passed on to the backend, and only when the backend sends the 100 will mod_http2 forward it to the client. So now HTTP/2 clients, too, can be sure that 100 Continue means "I am willing to take in your data, please send it".
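The corrected flow now looks roughly like this (a sketch of the exchange, not a literal wire trace):

```
client                        httpd (proxy)                 backend
  | POST + Expect: 100-continue  |                             |
  | ---------------------------> | POST + Expect: 100-continue |
  |                              | --------------------------> |
  |                              |              100 Continue   |
  |             100 Continue     | <-------------------------- |
  | <--------------------------- |                             |
  | request DATA                 |                             |
  | ---------------------------> | request DATA                |
  |                              | --------------------------> |
```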

Graceful restarts

Graceful server restarts used to lead to early request terminations. Now the module knows better and will process ongoing requests to the end before shutting down the connection. And it will notify the client that it is no longer willing to accept new ones.

HTTP/2 Proxy Health

As I added more and more upload tests, they also executed in the mod_proxy_http2 setups and - to my misery - exhibited spurious failures now and then. What was going on?

A closer look (and tons of trace outputs) showed that the failed requests were all sending data to the server. mod_proxy_http2 did send it on to the backend, and when the backend failed, no one was in possession of the data any longer and the request had to fail.

But why did the backend fail? Turns out, it was no real failure: the backend was just closing its connection after a keepalive timeout expired. The Apache proxy infrastructure keeps connections around for future reuse, so mod_proxy_http2 often gets a connection where an HTTP/2 session has already been established and used. It can fire new requests right away and everyone is happy.

But when the backend closed the connection, it sent a last GOAWAY frame, which sits in the connection buffers and makes the connection look fine and healthy. The connection is handed to mod_proxy_http2, which immediately sends requests over it, then reads only to find that the other side has already left the party.

Originally, I thought to fix this by letting mod_proxy_http2 read from a live connection first. If it finds a GOAWAY, it can chuck the connection and create a new one. But what if the GOAWAY has not yet arrived, but is already on its way? The change would not help here.

What is implemented now is safe even in those cases: mod_proxy_http2 will continue to send off requests right away, but will suspend sending request DATA until it is certain that the connection is still good. This means that request processing can start without unnecessary delays, but sending DATA carries a penalty of 1 RTT - if the connection was "old", i.e. did not receive frames for more than a second. This seems quite ok, since the RTT to backends is usually small, and on busy servers, where backend connections stay busy, no delays are necessary.

Hope to have a new Apache release by the end of the month. If you cannot wait, grab the latest HTTP/2 support here. And let me know how it worked for you!

Münster, 18.10.2016,

Stefan Eissing, greenbytes GmbH

Copyright (C) 2016 greenbytes GmbH

Copying and distribution of this file, with or without modification, are permitted in any medium without royalty provided the copyright notice and this notice are preserved. This file is offered as-is, without warranty of any kind. See LICENSE for details.