Return {ok, Req, State, hibernate}
or {reply, Data, Req, State, hibernate} to hibernate the websocket
process and save memory and CPU. You should hibernate processes
that receive few messages, which is probably most of them.
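A minimal sketch of what this looks like in a handler, assuming the
websocket_handle/3 callback shape and text frames; the ping/pong
exchange is just an illustration:

    websocket_handle({text, <<"ping">>}, Req, State) ->
        %% Reply, then hibernate until the next message arrives.
        {reply, {text, <<"pong">>}, Req, State, hibernate};
    websocket_handle(_Frame, Req, State) ->
        %% Nothing to send back; still hibernate to reclaim memory.
        {ok, Req, State, hibernate}.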
It removes all the non-essential data from the HTTP request record.
This allows some applications to make better use of their memory,
for example websockets, which do not need to keep all the header
information and can simply discard it using this function.
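A hedged sketch of the intended usage, assuming the function is
exposed as cowboy_http_req:compact/1 and called from a
websocket_init/3 callback (both names are assumptions here):

    websocket_init(_TransportName, Req, _Opts) ->
        %% Drop request data the long-lived websocket process will
        %% never look at again (function name assumed).
        Req2 = cowboy_http_req:compact(Req),
        {ok, Req2, undefined}.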
Following mochiweb and misultin's example here, even though I'm not
too thrilled about starting them without ever stopping them. It is
optional anyway, and the application's author can still start/stop
them as normal.
The formatted date is generated and kept up to date at regular
intervals by a gen_server process that stores it in the cowboy_clock
ets table. Other processes then retrieve it simply by reading the table.
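A minimal sketch of the read path, assuming a named public ets table
called cowboy_clock and a key named rfc1123 (the key name is an
assumption):

    rfc1123_date() ->
        %% No message passing: any process can read the current value
        %% straight from the ets table.
        [{rfc1123, Date}] = ets:lookup(cowboy_clock, rfc1123),
        Date.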
Limits the number of requests processed in parallel.
Waiting requests are kept in the accept queue.
The limit is not strictly enforced, but the number of concurrent
requests should stay around the given value at any time. Defaults to 1024.
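A sketch of how the limit might be set, assuming the
cowboy:start_listener/6 API of that era and that the option is named
max_connections and passed with the transport options (both are
assumptions); my_handler is a placeholder module:

    start() ->
        Dispatch = [{'_', [{'_', my_handler, []}]}],
        cowboy:start_listener(my_http_listener, 100,
            cowboy_tcp_transport, [{port, 8080}, {max_connections, 2048}],
            cowboy_http_protocol, [{dispatch, Dispatch}]).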
Thanks to ostinelli's benchmark for pointing out the issue, hopefully
solved by this and the previous commit.
The dispatcher now accepts '...' as the leading segment of a Host rule
and the trailing segment of a Path rule; this special atom matches any
number of remaining segments.
When given "cowboy.bugs.dev-extend.eu", host rule ['...', <<"dev-extend">>,
<<"eu">>] matches and fills host_info with [<<"cowboy">>, <<"bugs">>].
When given "/a/b/c/d", path rule [<<"a">>, <<"b">>, '...'] matches and fills
path_info with [<<"c">>, <<"d">>].
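A minimal sketch combining both rules from the examples above,
assuming the [{HostRule, [{PathRule, Handler, Opts}]}] dispatch
structure of that era; my_handler is a placeholder module:

    Dispatch = [
        %% "cowboy.bugs.dev-extend.eu" matches here and host_info
        %% becomes [<<"cowboy">>, <<"bugs">>].
        {['...', <<"dev-extend">>, <<"eu">>], [
            %% "/a/b/c/d" matches here and path_info becomes
            %% [<<"c">>, <<"d">>].
            {[<<"a">>, <<"b">>, '...'], my_handler, []}
        ]}
    ].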
The server now does a single recv (or more, but only if needed);
the received data is then fed to erlang:decode_packet/3 as many times
as necessary. Since most requests are smaller than the default MTU on
many platforms, we benefit greatly from this.
In the case of requests with a body, the server has usually read at
least part of the body on the first recv. This data is buffered
properly and used when later retrieving the body.
In the case of pipelined requests, we can end up reading many
requests in a single recv, which are then handled properly using
only the buffer containing the received data.
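A simplified sketch of the idea, not cowboy's actual parsing code:
recv once on a passive socket, feed the buffer to
erlang:decode_packet/3 until it asks for more data, and keep whatever
bytes are left over for the body or the next pipelined request.
Parsing starts with parse(http_bin, <<>>, Socket, []).

    parse(Type, Buffer, Socket, Acc) ->
        case erlang:decode_packet(Type, Buffer, []) of
            {ok, http_eoh, Rest} ->
                %% End of headers. Rest may already contain part of the
                %% body or the next pipelined request; keep it around.
                {lists:reverse(Acc), Rest};
            {ok, Packet, Rest} ->
                %% After the request line, headers use the httph_bin type.
                parse(httph_bin, Rest, Socket, [Packet|Acc]);
            {more, _Length} ->
                %% Only recv again when decode_packet needs more bytes.
                {ok, Data} = gen_tcp:recv(Socket, 0),
                parse(Type, <<Buffer/binary, Data/binary>>, Socket, Acc);
            {error, Reason} ->
                {error, Reason}
        end.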
This fixes issues with the http_load benchmark tool. OTP's default
backlog value only queues up to 5 connections, which is way too low for
a fast-responding server.
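For illustration, a raised backlog at the gen_tcp level looks like
this; the value 1024 is just an example, not necessarily the one
cowboy picks:

    {ok, LSocket} = gen_tcp:listen(8080,
        [binary, {packet, raw}, {active, false},
         {reuseaddr, true}, {backlog, 1024}]).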
Issue initially found thanks to DeadZen bugging me to test cowboy with
http_load. Fix found thanks to ostinelli's misultin already having the
backlog option, which was the one thing it did differently from cowboy.