How to create an NGINX module that hooks into upstream routing Saturday, 26. November 2011
(Originally published at Efficientcloud. I decided to x-post it here for my archives.)
Over the last few weeks, we have been working on an NGINX load-balancing module that needs to ask another server where to route requests.
As you know, NGINX is asynchronous, so we can't just block for a few hundred milliseconds and keep all other requests waiting. So we patched the server to add a "preconnect" hook in ngx_http_upstream_module, which allows us to overwrite the target server at the last minute.
In this article, I'll introduce the basic building blocks you'll need to create a load-balancing module and give hints on using our patch for NGINX 1.0.5, which enables us to make asynchronous calls in exactly that phase.
NGX module dev basics
Before you start developing, read the Debugging Page on the NGINX Wiki.
For easier debugging, you can configure the server to be single-process:
daemon off;
master_process off;
so you can launch the program with gdb and get immediate access to backtraces etc.
(This of course does not help with shared-memory and concurrency issues, but we won't touch those here anyway.)
You can also do printf-style debugging by writing to the debug log with the ngx_log_debug macros.
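For instance (assuming NGINX was built with --with-debug and the error_log directive is set to the debug level), the ngx_log_debug0 through ngx_log_debug8 macros write to the debug log; the trailing digit is the number of format arguments. A sketch, with "my_module" as a placeholder name:

```c
/* inside any handler that has a request r */
ngx_log_debug0(NGX_LOG_DEBUG_HTTP, r->connection->log, 0,
               "my_module: entering peer init");

/* %V prints an ngx_str_t *, %ui an ngx_uint_t */
ngx_log_debug2(NGX_LOG_DEBUG_HTTP, r->connection->log, 0,
               "my_module: picked backend %V, %ui tries left",
               pc->name, pc->tries);
```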
Since NGINX' source code is a brilliant but rather undocumented mess, we're going to have to look at a few things here:
- http load-balancing module basics
- how to create connections to third-party servers from your module
However, you'll be well-advised to also read Evan Miller's great NGINX Modules Guides before reading on.
It's good to keep in mind that
- nginx is brilliant
- if you think you need a library, look twice. nginx provides a lot of things for you
- if you still think you need a library, make sure it won't block
Load-Balancing Module Basics
So we're sure you've read Evan Miller's Guides, but let's refresh the main lessons concerning Load Balancing modules:
The registration function
is referenced by the configuration command.
has access to
    ngx_http_upstream_srv_conf_t *uscf;
by calling
    uscf = ngx_http_conf_get_module_srv_conf(cf, ngx_http_upstream_module);
sets
    uscf->peer.init_upstream  /* to your module's upstream initialization function */
    uscf->flags               /* to whatever your module requires; see Emiller's guide for acceptable flags */
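Put together, a minimal registration function might look like this sketch (the module and function names are placeholders):

```c
static char *
ngx_http_my_balancer(ngx_conf_t *cf, ngx_command_t *cmd, void *conf)
{
    ngx_http_upstream_srv_conf_t  *uscf;

    uscf = ngx_http_conf_get_module_srv_conf(cf, ngx_http_upstream_module);

    /* run our init function when the upstream block is processed */
    uscf->peer.init_upstream = ngx_http_my_balancer_init;

    /* which per-server parameters (weight=, ...) we accept */
    uscf->flags = NGX_HTTP_UPSTREAM_CREATE | NGX_HTTP_UPSTREAM_WEIGHT;

    return NGX_CONF_OK;
}
```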
The upstream initialization function
was set in the registration function.
important params
    ngx_http_upstream_srv_conf_t *us
sets
    us->peer.init  /* the peer initialization function */
    us->peer.data  /* usually a struct containing information about all your peers;
                      you'll probably want to access us->servers and get their IPs from there.
                      Again, Emiller's guide has a nice example. */
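As a sketch (my_balancer_peers_t is a hypothetical struct of ours that holds all backends):

```c
static ngx_int_t
ngx_http_my_balancer_init(ngx_conf_t *cf, ngx_http_upstream_srv_conf_t *us)
{
    ngx_http_upstream_server_t  *server;
    my_balancer_peers_t         *peers;

    peers = ngx_pcalloc(cf->pool, sizeof(my_balancer_peers_t));
    if (peers == NULL) {
        return NGX_ERROR;
    }

    server = us->servers->elts;
    /* ... copy server[i].addrs into peers ... */

    us->peer.init = ngx_http_my_balancer_peer_init;  /* per-request setup */
    us->peer.data = peers;

    return NGX_OK;
}
```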
The peer initialization function
was set in the upstream initialization function, is called once per request and sets up data structures that are persistent across backend retries.
important params
    ngx_http_request_t *r
    ngx_http_upstream_srv_conf_t *us
has access to
    us->peer.data            /* set earlier */
should set
    r->upstream->peer.data   /* the persistent struct */
    r->upstream->peer.free   /* peer release function; tracks failures */
    r->upstream->peer.get    /* load-balancing function */
    r->upstream->peer.tries  /* how many retries */
btw,
    r->upstream->peer        /* is of type ngx_peer_connection_t * */
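A sketch of such a peer initialization function (my_balancer_peer_data_t is a hypothetical per-request struct):

```c
static ngx_int_t
ngx_http_my_balancer_peer_init(ngx_http_request_t *r,
    ngx_http_upstream_srv_conf_t *us)
{
    my_balancer_peer_data_t  *pd;

    pd = ngx_palloc(r->pool, sizeof(my_balancer_peer_data_t));
    if (pd == NULL) {
        return NGX_ERROR;
    }

    pd->peers = us->peer.data;   /* shared backend list from the init function */
    pd->current = 0;

    r->upstream->peer.data = pd;
    r->upstream->peer.get = ngx_http_my_balancer_get_peer;
    r->upstream->peer.free = ngx_http_my_balancer_free_peer;
    r->upstream->peer.tries = pd->peers->number;

    return NGX_OK;
}
```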
The load-balancing function
was set in the peer initialization function; it has access to the backend-retry-persistent data structure and chooses which of the possible peers to connect to.
important params
    ngx_peer_connection_t *pc
    void *data       /* formerly r->upstream->peer.data */
should set
    pc->sockaddr
    pc->socklen
    pc->name
It should probably track the number of connections in your data structure, so it tries a different upstream next time it's called.
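A naive round-robin version, continuing the hypothetical structs from above, could look roughly like this:

```c
static ngx_int_t
ngx_http_my_balancer_get_peer(ngx_peer_connection_t *pc, void *data)
{
    my_balancer_peer_data_t  *pd = data;
    my_balancer_peer_t       *peer;

    /* rotate through the peers stored at init time, so a retry
       after a failure lands on a different backend */
    peer = &pd->peers->peer[pd->current];
    pd->current = (pd->current + 1) % pd->peers->number;

    pc->sockaddr = peer->sockaddr;
    pc->socklen  = peer->socklen;
    pc->name     = &peer->name;

    return NGX_OK;
}
```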
With this knowledge, you should be able to understand other modules' source code. It's a good idea to read the round-robin and/or hash load-balancing modules' source to get a more complete view of "how the professionals do it".
Instead of raw sockets, you'll want to use
ngx_event_connect_peer with an ngx_peer_connection_t *c
First, alloc the structure and set the following things
c->log       /* likely to r->connection->log */
c->name      /* likely to the IP or hostname you're connecting to; as ngx_str_t */
c->sockaddr
c->socklen
c->get       /* likely to ngx_event_get_peer, which is a dummy method. You could also
                skip setting sockaddr & socklen and set them in a function you define here. */
Then you can
ngx_event_connect_peer(c)      /* initialize the connection */

/* and set callbacks: */
c->connection->data            /* whatever data you'll need in the read and write handlers;
                                  this should likely include references to buffers of things
                                  you should send and expect to receive */
c->connection->read->handler   /* read handler callback */
c->connection->write->handler  /* write handler callback */

/* you'll also want to set timeouts: */
ngx_add_timer(c->connection->read, timeout)
ngx_add_timer(c->connection->write, timeout)

/* and tell the request that there's another connection it depends on: */
r->main->count++;
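A sketch of how these pieces fit together; all my_* names are placeholders, and error handling is abbreviated:

```c
static ngx_int_t
my_balancer_start_side_request(ngx_http_request_t *r, my_side_ctx_t *ctx)
{
    ngx_peer_connection_t  *pc = &ctx->peer;
    ngx_int_t               rc;

    pc->log = r->connection->log;
    pc->name = &ctx->router_name;        /* name of the routing server */
    pc->sockaddr = ctx->router_sockaddr;
    pc->socklen = ctx->router_socklen;
    pc->get = ngx_event_get_peer;        /* dummy; sockaddr is already set */

    rc = ngx_event_connect_peer(pc);
    if (rc == NGX_ERROR || rc == NGX_DECLINED || rc == NGX_BUSY) {
        return NGX_ERROR;
    }

    pc->connection->data = ctx;
    pc->connection->read->handler = my_side_read_handler;
    pc->connection->write->handler = my_side_write_handler;

    ngx_add_timer(pc->connection->read, 500);   /* ms */
    ngx_add_timer(pc->connection->write, 500);

    r->main->count++;   /* the request now also depends on this connection */

    return NGX_OK;
}
```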
You'll notice that there are no separate close or error handlers; you'll have to handle most of the failure modes in the read and write callbacks.
The handlers are functions that
receive
    ngx_event_t *e
have access to
    c = e->data   /* the connection; ngx_connection_t * */
    c->data       /* contains whatever you set earlier as c->connection->data */
In these handlers, you'll probably want to ...
/* check for timeouts */
e->timedout

/* send or receive data */
n = ngx_send(…)   /* in the write handler */
n = ngx_recv(…)   /* in the read handler */
/* Keep in mind that incoming data can be fragmented due to network issues. */

ngx_close_connection(c)   /* when you're done */

/* interesting functions for parsing messages: */
ngx_inet_addr(u_char *, size_t)
ngx_atoi
htons
Putting it together
Our patch adds a field to ngx_peer_connection_t called preconnect and makes the upstream handler call that function, if it is set, instead of actually connecting to the peer.
In the peer initialization function, you can therefore set r->upstream->peer.preconnect to a function of your choosing.
The preconnect function
Should set up whatever asynchronous operation you'd like to execute.
It receives the same arguments as
ngx_http_upstream_connect, which are
ngx_http_request_t *r
ngx_http_upstream_t *u
You'll need both of them, so you should create a structure that includes them and make sure later functions will receive it (via connection->data pointers, for example).
You can then do again whatever you'd do in the load-balancing function.
When you're done, the functions available to you are
ngx_http_upstream_connect_real(r, u)   /* to continue the connection */
ngx_http_upstream_next(r, u)           /* to indicate that this attempt was unsuccessful */
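A rough sketch of a preconnect function under these assumptions (my_side_ctx_t is again a hypothetical struct of ours; the two functions above come from the patch):

```c
static void
my_balancer_preconnect(ngx_http_request_t *r, ngx_http_upstream_t *u)
{
    my_side_ctx_t  *ctx;

    ctx = ngx_pcalloc(r->pool, sizeof(my_side_ctx_t));
    if (ctx == NULL) {
        ngx_http_upstream_next(r, u);   /* give up on this attempt */
        return;
    }

    /* remember both, so the side-channel handlers can resume the upstream */
    ctx->request = r;
    ctx->upstream = u;

    /* open the side-channel connection as described above
       (ngx_event_connect_peer, read/write handlers, timers, r->main->count++).
       When the reply arrives, the read handler overwrites
       r->upstream->peer.sockaddr/socklen/name with the routing server's
       answer and calls ngx_http_upstream_connect_real(r, u) to resume. */
}
```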
Cleaning Up after yourself
If you don't want to spend hours debugging weird segfaults (as I did), you'll want to clean up after yourself. Resist the temptation to use the request's memory pool cleanup hook; register an http-level cleanup handler with ngx_http_cleanup_add instead.
In the cleanup function, you'll want to close your outgoing connection if it's still open. This will also unregister your read and write handlers and thus keep them from accessing the then-unusable request object and whatever depends on it.
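Assuming the cleanup is registered as an http cleanup handler via ngx_http_cleanup_add, a minimal version might look like this (my_* names are placeholders):

```c
/* registration, e.g. in the preconnect function: */
ngx_http_cleanup_t  *cln;

cln = ngx_http_cleanup_add(r, 0);
if (cln == NULL) {
    /* handle allocation failure */
}
cln->handler = my_balancer_cleanup;
cln->data = ctx;    /* our side-channel context */

/* the handler itself: */
static void
my_balancer_cleanup(void *data)
{
    my_side_ctx_t  *ctx = data;

    if (ctx->peer.connection) {
        /* closes the fd and removes the read/write events */
        ngx_close_connection(ctx->peer.connection);
        ctx->peer.connection = NULL;
    }
}
```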
You should now be able to create nginx upstream modules that can ask another server where to connect to. Good luck!