Import cowboy.
Cowboy is a complete and lightweight Erlang web server, used to serve the dudeswave blog. URL: https://github.com/ninenines/cowboy
commit 9fdf706bcb (parent 8ebac63dac)

@@ -0,0 +1,334 @@
= Contributing

This document is a guide on how to best contribute to this project.

== Definitions

*SHOULD* describes optional steps. *MUST* describes mandatory steps.

*SHOULD NOT* and *MUST NOT* describe pitfalls to avoid.

_Your local copy_ refers to the copy of the repository that you have
on your computer. _origin_ refers to your fork of the project. _upstream_
refers to the official repository for this project.
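
For example, once the remotes are set up as described in the Cloning
section below, `git remote -v` might show something like the following
(the user name and URLs here are placeholders):

[source,bash]
$ git remote -v
origin    git@github.com:yourname/cowboy.git (fetch)
origin    git@github.com:yourname/cowboy.git (push)
upstream  https://github.com/ninenines/cowboy.git (fetch)
upstream  https://github.com/ninenines/cowboy.git (push)
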
== Discussions

For general discussion about this project, please open a ticket.
Feedback is always welcome and may turn into tasks to improve
the project, so having the discussion start there is a plus.

Alternatively you may try the #ninenines IRC channel on Freenode,
or, if you need the discussion to stay private, you can send an
email to contact@ninenines.eu.

== Support

Free support is generally not available. The rule is that free
support is only given if doing so benefits most users. In practice
this means that free support will only be given if the issues are
due to a fault in the project itself or its documentation.

Paid support is available for all price ranges. Please send an
email to contact@ninenines.eu for more information.

== Bug reports

You *SHOULD* open a ticket for every bug you encounter, regardless
of the version you use. A ticket not only helps the project ensure
that bugs are squashed, it also helps other users who later run
into this issue. You *SHOULD* give as much information as possible,
including what commit/branch, what OS/version and so on.

You *SHOULD NOT* open a ticket if another already exists for the
same issue. You *SHOULD* instead either add more information by
commenting on it, or simply comment to inform the maintainer that
you are also affected. The maintainer *SHOULD* reply to every
new ticket when it is opened. If the maintainer hasn't said
anything after a few days, you *SHOULD* write a new comment asking
for more information.

You *SHOULD* provide a reproducible test case, either in the
ticket or by sending a pull request and updating the test suite.

When you have a fix ready, you *SHOULD* open a pull request,
even if the code does not fit the requirements discussed below.
Providing a fix, even a dirty one, can help other users and/or
at least get the maintainer on the right track.

You *SHOULD* try to relax and be patient. Some tickets are merged
or fixed quickly, others aren't. There are no real rules around that.
You can become a paying customer if you need something fast.

== Security reports

You *SHOULD* open a ticket when you identify a DoS vulnerability
in this project. You *SHOULD* include the resources needed to
DoS the project; every project can be brought down if you have
the necessary resources.

You *SHOULD* send an email to contact@ninenines.eu when you
identify a security vulnerability. If the vulnerability originates
from code inside Erlang/OTP itself, you *SHOULD* also consult
with the OTP team directly to get the problem fixed upstream.

== Feature requests

Feature requests are always welcome. To be accepted, however, they
must be well defined, make sense in the context of the project and
benefit most users.

Feature requests not benefiting most users may only be accepted
when accompanied by a proper pull request.

You *MUST* open a ticket to explain what the new feature is, even
if you are going to submit a pull request for it.

All these conditions are meant to ensure that the project stays
lightweight and maintainable.

== Documentation submissions

You *SHOULD* follow the code submission guidelines to submit
documentation.

The documentation is available in the 'doc/src/' directory. There
are three kinds of documentation: manual, guide and tutorials. The
format for the documentation is Asciidoc.

You *SHOULD* follow the same style as the surrounding documentation
when editing existing files.

You *MUST* include the source when providing media.

== Examples submissions

You *SHOULD* follow the code submission guidelines to submit examples.

The examples are available in the 'examples/' directory.

You *SHOULD* focus on exactly one thing per example.

== Code submissions

You *SHOULD* open a pull request to submit code.

You *SHOULD* open a ticket to discuss backward incompatible changes
before you submit code. This step ensures that you do not work on
a large change that will then be rejected.

You *SHOULD* send your code submission using a pull request on GitHub.
If you can't, please send an email to contact@ninenines.eu with your
patch.

The following sections explain the normal GitHub workflow.

=== Cloning

You *MUST* fork the project's repository on GitHub by clicking on the
_Fork_ button.

On the right side of your fork's page is a field named _SSH clone URL_.
Its contents will be identified as `$ORIGIN_URL` in the following snippet.

On the right side of the project's repository page is a similar field.
Its contents will be identified as `$UPSTREAM_URL`.

Finally, `$PROJECT` is the name of this project.

To set up your clone and be able to rebase when requested, run the
following commands:

[source,bash]
$ git clone $ORIGIN_URL
$ cd $PROJECT
$ git remote add upstream $UPSTREAM_URL

=== Branching

You *SHOULD* base your branch on _master_, unless your patch applies
to a stable release, in which case you need to base your branch on
the stable branch, for example _1.0.x_.

The first step is therefore to check out the branch in question:

[source,bash]
$ git checkout 1.0.x

The next step is to update the branch to the current version from
_upstream_. In the following snippet, replace _1.0.x_ with _master_
if you are patching _master_.

[source,bash]
$ git fetch upstream
$ git rebase upstream/1.0.x

This last command may fail and ask you to stash your changes. When
that happens, run the following sequence of commands:

[source,bash]
$ git stash
$ git rebase upstream/1.0.x
$ git stash pop

The final step is to create a new branch you can work in. The name
of the new branch is up to you, there is no particular requirement.
Replace `$BRANCH` with the branch name you came up with:

[source,bash]
$ git checkout -b $BRANCH

_Your local copy_ is now ready.

=== Source editing

There are very few rules with regard to source code editing.

You *MUST* use horizontal tabs for indentation. Use one tab
per indentation level.

You *MUST NOT* align code. You can only add or remove one
indentation level compared to the previous line.
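
As an illustration only (this is not code from the project, just a
hypothetical handler clause), one tab per indentation level and no
alignment looks like this:

[source,erlang]
handle(Req, State) ->
	case fetch(Req) of
		{ok, Value} ->
			{reply, Value, State};
		{error, Reason} ->
			{error, Reason, State}
	end.
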
You *SHOULD NOT* write lines more than about a hundred
characters. There is no hard limit; just try to keep it
as readable as possible.

You *SHOULD* write small functions when possible.

You *SHOULD* avoid an overly deep hierarchy of case clauses inside
a single function.

You *SHOULD* add tests to make sure your code works.

=== Committing

You *SHOULD* run Dialyzer and the test suite while working on
your patch, and you *SHOULD* ensure that no additional tests
fail when you finish.

You can use the following command to run Dialyzer:

[source,bash]
$ make dialyze

You have two options to run tests. You can either run tests
across all supported Erlang versions, or just on the version
you are currently using.

To test across all supported Erlang versions:

[source,bash]
$ make -k ci

To test using the current version:

[source,bash]
$ make tests

You can then open Common Test logs in 'logs/all_runs.html'.

By default Cowboy excludes a few test suites that take too
long to complete. For example all the examples are built and
tested, and one Websocket test suite is very extensive. In
order to run everything, do:

[source,bash]
$ make tests FULL=1

Once all tests pass (or at least, no new tests are failing),
you can commit your changes.

First you need to add your changes:

[source,bash]
$ git add src/file_you_edited.erl

If you want an interactive session, allowing you to filter
out changes that have nothing to do with this commit:

[source,bash]
$ git add -p

You *MUST* put all related changes inside a single commit. The
general rule is that all commits must pass tests. Fix one bug
per commit. Add one feature per commit. Split a feature into
multiple commits only if smaller parts of the feature make
sense on their own.

Finally, once all changes are added, you can commit. This
command will open the editor of your choice where you can
put a proper commit title and message.

[source,bash]
$ git commit

Do not use the `-m` option as it makes it easy to break the
following rules:

You *MUST* write a proper commit title and message. The commit
title is the first line and *MUST* be at most 72 characters.
The second line *MUST* be left blank. Everything after that is
the commit message. You *SHOULD* write a detailed commit
message. The lines of the message *MUST* be at most 80 characters.
You *SHOULD* explain what the commit does, what references you
used and any other information that helps in understanding why
this commit exists. You *MUST NOT* include commands to close
GitHub tickets automatically.
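
For illustration, a commit message that follows these rules could look
like the following (the ticket number and wording are made up):

[source,text]
----
Fix typo in the routing chapter of the guide

The example route in the getting started section used a path
that does not match the accompanying handler module. Update the
example so both agree. See ticket #1234 for the original report.
----
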
=== Cleaning the commit history

If you create a new commit every time you make a change, however
insignificant, you *MUST* consolidate those commits before
sending the pull request.

This is done through _rebasing_. The easiest way to do so is
to use interactive rebasing, which allows you to choose which
commits to keep, squash, edit and so on. To rebase, you need
to give the commit that comes just before your changes. If
you only made two commits, you can use the shortcut form `HEAD^^`:

[source,bash]
$ git rebase -i HEAD^^

=== Submitting the pull request

You *MUST* push your branch to your fork on GitHub. Replace
`$BRANCH` with your branch name:

[source,bash]
$ git push origin $BRANCH

You can then submit the pull request using the GitHub interface.
You *SHOULD* provide an explanatory message and refer to any
previous ticket related to this patch. You *MUST NOT* include
commands to close other tickets automatically.

=== Updating the pull request

Sometimes the maintainer will ask you to change a few things.
Other times you will notice problems with your submission and
want to fix them on your own.

In either case you do not need to close the pull request. You
can just push your changes again and, if needed, force the push.
This will update the pull request automatically.

[source,bash]
$ git push -f origin $BRANCH

=== Merging

This is an open source project maintained by independent developers.
Please be patient when your changes aren't merged immediately.

All pull requests run through a Continuous Integration service
to ensure nothing gets broken by the changes submitted.

Bug fixes will be merged immediately when all tests pass.
The maintainer may make style changes in the merge commit if
the submitter is not available. The maintainer *MUST* open
a new ticket if the solution could still be improved.

New features and backward incompatible changes will be merged
when all tests pass and all other requirements are fulfilled.

@@ -0,0 +1,13 @@
Copyright (c) 2011-2024, Loïc Hoguin <essen@ninenines.eu>

Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.

THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

@@ -0,0 +1,7 @@
.PHONY: all clean

all:
	${MAKE} -C src

clean:
	${MAKE} -C src clean

@@ -0,0 +1,37 @@
= Cowboy

Cowboy is a small, fast and modern HTTP server for Erlang/OTP.

== Goals

Cowboy aims to provide a *complete* HTTP stack in a *small* code base.
It is optimized for *low latency* and *low memory usage*, in part
because it uses *binary strings*.

Cowboy provides *routing* capabilities, selectively dispatching requests
to handlers written in Erlang.
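
As a minimal sketch (the listener name, port and `hello_handler` module
are made up; see the user guide for the full API), an embedded listener
with a single route can be started like this:

[source,erlang]
----
start() ->
	%% Map the root path to a hypothetical hello_handler module.
	Dispatch = cowboy_router:compile([
		{'_', [{"/", hello_handler, []}]}
	]),
	%% Start an HTTP listener on port 8080 with that routing table.
	{ok, _} = cowboy:start_clear(my_http_listener,
		[{port, 8080}],
		#{env => #{dispatch => Dispatch}}),
	ok.
----
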
Because it uses Ranch for managing connections, Cowboy can easily be
*embedded* in any other application.

Cowboy is *clean* and *well tested* Erlang code.

== Online documentation

* https://ninenines.eu/docs/en/cowboy/2.12/guide[User guide]
* https://ninenines.eu/docs/en/cowboy/2.12/manual[Function reference]

== Offline documentation

* While still online, run `make docs`
* User guide available in `doc/` in PDF and HTML formats
* Function reference man pages available in `doc/man3/` and `doc/man7/`
* Run `make install-docs` to install man pages on your system
* Full documentation in Asciidoc available in `doc/src/`
* Examples available in `examples/`

== Getting help

* https://github.com/ninenines/cowboy/issues[Issues tracker]
* https://ninenines.eu/services[Commercial Support]
* https://github.com/sponsors/essen[Sponsor me!]

@@ -0,0 +1,10 @@
{application, 'cowboy', [
	{description, "Small, fast, modern HTTP server."},
	{vsn, "2.12.0"},
	{modules, ['cowboy','cowboy_app','cowboy_bstr','cowboy_children','cowboy_clear','cowboy_clock','cowboy_compress_h','cowboy_constraints','cowboy_decompress_h','cowboy_handler','cowboy_http','cowboy_http2','cowboy_loop','cowboy_metrics_h','cowboy_middleware','cowboy_req','cowboy_rest','cowboy_router','cowboy_static','cowboy_stream','cowboy_stream_h','cowboy_sub_protocol','cowboy_sup','cowboy_tls','cowboy_tracer_h','cowboy_websocket']},
	{registered, [cowboy_sup,cowboy_clock]},
	{applications, [kernel,stdlib,crypto,cowlib,ranch]},
	{optional_applications, []},
	{mod, {cowboy_app, []}},
	{env, []}
]}.

@@ -0,0 +1,26 @@
.PHONY: all
.SUFFIXES: .erl .beam

ERLC?= erlc -server
ERLFLAGS+= -I ../include -pa ../../cowlib -pa ../../ranch/src -pa ${PWD}
ERLOPTS+= +warn_missing_spec +warn_untyped_record

OBJS= cowboy_stream.beam cowboy_middleware.beam cowboy_sub_protocol.beam
OBJS+= cowboy_children.beam cowboy_clear.beam cowboy_clock.beam
OBJS+= cowboy_compress_h.beam cowboy_constraints.beam
OBJS+= cowboy_decompress_h.beam cowboy_handler.beam
OBJS+= cowboy_http.beam cowboy_http2.beam cowboy_loop.beam
OBJS+= cowboy_metrics_h.beam cowboy_bstr.beam
OBJS+= cowboy_req.beam cowboy_rest.beam cowboy_router.beam
OBJS+= cowboy_static.beam cowboy_stream_h.beam
OBJS+= cowboy_sup.beam cowboy_tls.beam cowboy_app.beam
OBJS+= cowboy_tracer_h.beam cowboy_websocket.beam cowboy.beam

all: ${OBJS}

.erl.beam:
	${ERLC} ${ERLOPTS} ${ERLFLAGS} $<

clean:
	rm -f *.beam

@@ -0,0 +1,118 @@
%% Copyright (c) 2011-2024, Loïc Hoguin <essen@ninenines.eu>
%%
%% Permission to use, copy, modify, and/or distribute this software for any
%% purpose with or without fee is hereby granted, provided that the above
%% copyright notice and this permission notice appear in all copies.
%%
%% THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
%% WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
%% MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
%% ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
%% WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
%% ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
%% OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

-module(cowboy).

-export([start_clear/3]).
-export([start_tls/3]).
-export([stop_listener/1]).
-export([get_env/2]).
-export([get_env/3]).
-export([set_env/3]).

%% Internal.
-export([log/2]).
-export([log/4]).

-type opts() :: cowboy_http:opts() | cowboy_http2:opts().
-export_type([opts/0]).

-type fields() :: [atom()
	| {atom(), cowboy_constraints:constraint() | [cowboy_constraints:constraint()]}
	| {atom(), cowboy_constraints:constraint() | [cowboy_constraints:constraint()], any()}].
-export_type([fields/0]).

-type http_headers() :: #{binary() => iodata()}.
-export_type([http_headers/0]).

-type http_status() :: non_neg_integer() | binary().
-export_type([http_status/0]).

-type http_version() :: 'HTTP/2' | 'HTTP/1.1' | 'HTTP/1.0'.
-export_type([http_version/0]).

-spec start_clear(ranch:ref(), ranch:opts(), opts())
	-> {ok, pid()} | {error, any()}.
start_clear(Ref, TransOpts0, ProtoOpts0) ->
	TransOpts1 = ranch:normalize_opts(TransOpts0),
	{TransOpts, ConnectionType} = ensure_connection_type(TransOpts1),
	ProtoOpts = ProtoOpts0#{connection_type => ConnectionType},
	ranch:start_listener(Ref, ranch_tcp, TransOpts, cowboy_clear, ProtoOpts).

-spec start_tls(ranch:ref(), ranch:opts(), opts())
	-> {ok, pid()} | {error, any()}.
start_tls(Ref, TransOpts0, ProtoOpts0) ->
	TransOpts1 = ranch:normalize_opts(TransOpts0),
	SocketOpts = maps:get(socket_opts, TransOpts1, []),
	TransOpts2 = TransOpts1#{socket_opts => [
		{alpn_preferred_protocols, [<<"h2">>, <<"http/1.1">>]}
	|SocketOpts]},
	{TransOpts, ConnectionType} = ensure_connection_type(TransOpts2),
	ProtoOpts = ProtoOpts0#{connection_type => ConnectionType},
	ranch:start_listener(Ref, ranch_ssl, TransOpts, cowboy_tls, ProtoOpts).

ensure_connection_type(TransOpts=#{connection_type := ConnectionType}) ->
	{TransOpts, ConnectionType};
ensure_connection_type(TransOpts) ->
	{TransOpts#{connection_type => supervisor}, supervisor}.

-spec stop_listener(ranch:ref()) -> ok | {error, not_found}.
stop_listener(Ref) ->
	ranch:stop_listener(Ref).

-spec get_env(ranch:ref(), atom()) -> any().
get_env(Ref, Name) ->
	Opts = ranch:get_protocol_options(Ref),
	Env = maps:get(env, Opts, #{}),
	maps:get(Name, Env).

-spec get_env(ranch:ref(), atom(), any()) -> any().
get_env(Ref, Name, Default) ->
	Opts = ranch:get_protocol_options(Ref),
	Env = maps:get(env, Opts, #{}),
	maps:get(Name, Env, Default).

-spec set_env(ranch:ref(), atom(), any()) -> ok.
set_env(Ref, Name, Value) ->
	Opts = ranch:get_protocol_options(Ref),
	Env = maps:get(env, Opts, #{}),
	Opts2 = maps:put(env, maps:put(Name, Value, Env), Opts),
	ok = ranch:set_protocol_options(Ref, Opts2).

%% Internal.

-spec log({log, logger:level(), io:format(), list()}, opts()) -> ok.
log({log, Level, Format, Args}, Opts) ->
	log(Level, Format, Args, Opts).

-spec log(logger:level(), io:format(), list(), opts()) -> ok.
log(Level, Format, Args, #{logger := Logger})
		when Logger =/= error_logger ->
	_ = Logger:Level(Format, Args),
	ok;
%% We use error_logger by default. Because error_logger does
%% not have all the levels we accept we have to do some
%% mapping to error_logger functions.
log(Level, Format, Args, _) ->
	Function = case Level of
		emergency -> error_msg;
		alert -> error_msg;
		critical -> error_msg;
		error -> error_msg;
		warning -> warning_msg;
		notice -> warning_msg;
		info -> info_msg;
		debug -> info_msg
	end,
	error_logger:Function(Format, Args).

@@ -0,0 +1,27 @@
%% Copyright (c) 2011-2024, Loïc Hoguin <essen@ninenines.eu>
%%
%% Permission to use, copy, modify, and/or distribute this software for any
%% purpose with or without fee is hereby granted, provided that the above
%% copyright notice and this permission notice appear in all copies.
%%
%% THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
%% WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
%% MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
%% ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
%% WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
%% ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
%% OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

-module(cowboy_app).
-behaviour(application).

-export([start/2]).
-export([stop/1]).

-spec start(_, _) -> {ok, pid()}.
start(_, _) ->
	cowboy_sup:start_link().

-spec stop(_) -> ok.
stop(_) ->
	ok.

@@ -0,0 +1,123 @@
%% Copyright (c) 2011-2024, Loïc Hoguin <essen@ninenines.eu>
%%
%% Permission to use, copy, modify, and/or distribute this software for any
%% purpose with or without fee is hereby granted, provided that the above
%% copyright notice and this permission notice appear in all copies.
%%
%% THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
%% WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
%% MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
%% ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
%% WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
%% ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
%% OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

-module(cowboy_bstr).

%% Binary strings.
-export([capitalize_token/1]).
-export([to_lower/1]).
-export([to_upper/1]).

%% Characters.
-export([char_to_lower/1]).
-export([char_to_upper/1]).

%% The first letter and all letters after a dash are capitalized.
%% This is the form seen for header names in the HTTP/1.1 RFC and
%% others. Note that using this form isn't required, as header names
%% are case insensitive, and it is only provided for use with eventual
%% badly implemented clients.
-spec capitalize_token(B) -> B when B::binary().
capitalize_token(B) ->
	capitalize_token(B, true, <<>>).
capitalize_token(<<>>, _, Acc) ->
	Acc;
capitalize_token(<< $-, Rest/bits >>, _, Acc) ->
	capitalize_token(Rest, true, << Acc/binary, $- >>);
capitalize_token(<< C, Rest/bits >>, true, Acc) ->
	capitalize_token(Rest, false, << Acc/binary, (char_to_upper(C)) >>);
capitalize_token(<< C, Rest/bits >>, false, Acc) ->
	capitalize_token(Rest, false, << Acc/binary, (char_to_lower(C)) >>).

-spec to_lower(B) -> B when B::binary().
to_lower(B) ->
	<< << (char_to_lower(C)) >> || << C >> <= B >>.

-spec to_upper(B) -> B when B::binary().
to_upper(B) ->
	<< << (char_to_upper(C)) >> || << C >> <= B >>.

-spec char_to_lower(char()) -> char().
char_to_lower($A) -> $a;
char_to_lower($B) -> $b;
char_to_lower($C) -> $c;
char_to_lower($D) -> $d;
char_to_lower($E) -> $e;
char_to_lower($F) -> $f;
char_to_lower($G) -> $g;
char_to_lower($H) -> $h;
char_to_lower($I) -> $i;
char_to_lower($J) -> $j;
char_to_lower($K) -> $k;
char_to_lower($L) -> $l;
char_to_lower($M) -> $m;
char_to_lower($N) -> $n;
char_to_lower($O) -> $o;
char_to_lower($P) -> $p;
char_to_lower($Q) -> $q;
char_to_lower($R) -> $r;
char_to_lower($S) -> $s;
char_to_lower($T) -> $t;
char_to_lower($U) -> $u;
char_to_lower($V) -> $v;
char_to_lower($W) -> $w;
char_to_lower($X) -> $x;
char_to_lower($Y) -> $y;
char_to_lower($Z) -> $z;
char_to_lower(Ch) -> Ch.

-spec char_to_upper(char()) -> char().
char_to_upper($a) -> $A;
char_to_upper($b) -> $B;
char_to_upper($c) -> $C;
char_to_upper($d) -> $D;
char_to_upper($e) -> $E;
char_to_upper($f) -> $F;
char_to_upper($g) -> $G;
char_to_upper($h) -> $H;
char_to_upper($i) -> $I;
char_to_upper($j) -> $J;
char_to_upper($k) -> $K;
char_to_upper($l) -> $L;
char_to_upper($m) -> $M;
char_to_upper($n) -> $N;
char_to_upper($o) -> $O;
char_to_upper($p) -> $P;
char_to_upper($q) -> $Q;
char_to_upper($r) -> $R;
char_to_upper($s) -> $S;
char_to_upper($t) -> $T;
char_to_upper($u) -> $U;
char_to_upper($v) -> $V;
char_to_upper($w) -> $W;
char_to_upper($x) -> $X;
char_to_upper($y) -> $Y;
char_to_upper($z) -> $Z;
char_to_upper(Ch) -> Ch.

%% Tests.

-ifdef(TEST).
capitalize_token_test_() ->
	Tests = [
		{<<"heLLo-woRld">>, <<"Hello-World">>},
		{<<"Sec-Websocket-Version">>, <<"Sec-Websocket-Version">>},
		{<<"Sec-WebSocket-Version">>, <<"Sec-Websocket-Version">>},
		{<<"sec-websocket-version">>, <<"Sec-Websocket-Version">>},
		{<<"SEC-WEBSOCKET-VERSION">>, <<"Sec-Websocket-Version">>},
		{<<"Sec-WebSocket--Version">>, <<"Sec-Websocket--Version">>},
		{<<"Sec-WebSocket---Version">>, <<"Sec-Websocket---Version">>}
	],
	[{H, fun() -> R = capitalize_token(H) end} || {H, R} <- Tests].
-endif.

@@ -0,0 +1,192 @@
%% Copyright (c) 2017-2024, Loïc Hoguin <essen@ninenines.eu>
%%
%% Permission to use, copy, modify, and/or distribute this software for any
%% purpose with or without fee is hereby granted, provided that the above
%% copyright notice and this permission notice appear in all copies.
%%
%% THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
%% WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
%% MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
%% ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
%% WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
%% ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
%% OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

-module(cowboy_children).

-export([init/0]).
-export([up/4]).
-export([down/2]).
-export([shutdown/2]).
-export([shutdown_timeout/3]).
-export([terminate/1]).
-export([handle_supervisor_call/4]).

-record(child, {
	pid :: pid(),
	streamid :: cowboy_stream:streamid() | undefined,
	shutdown :: timeout(),
	timer = undefined :: undefined | reference()
}).

-type children() :: [#child{}].
-export_type([children/0]).

-spec init() -> [].
init() ->
	[].

-spec up(Children, pid(), cowboy_stream:streamid(), timeout())
	-> Children when Children::children().
up(Children, Pid, StreamID, Shutdown) ->
	[#child{
		pid=Pid,
		streamid=StreamID,
		shutdown=Shutdown
	}|Children].

-spec down(Children, pid())
	-> {ok, cowboy_stream:streamid() | undefined, Children} | error
	when Children::children().
down(Children0, Pid) ->
	case lists:keytake(Pid, #child.pid, Children0) of
		{value, #child{streamid=StreamID, timer=Ref}, Children} ->
			_ = case Ref of
				undefined -> ok;
				_ -> erlang:cancel_timer(Ref, [{async, true}, {info, false}])
			end,
			{ok, StreamID, Children};
		false ->
			error
	end.

%% We ask the processes to shutdown first. This gives
%% a chance to processes that are trapping exits to
%% shut down gracefully. Others will exit immediately.
%%
%% @todo We currently fire one timer per process being
%% shut down. This is probably not the most efficient.
%% A more efficient solution could be to maintain a
%% single timer and decrease the shutdown time of all
%% processes when it fires. This is however much more
%% complex, and there aren't that many processes that
%% will need to be shutdown through this function, so
%% this is left for later.
-spec shutdown(Children, cowboy_stream:streamid())
	-> Children when Children::children().
shutdown(Children0, StreamID) ->
	[
		case Child of
			#child{pid=Pid, streamid=StreamID, shutdown=Shutdown} ->
				exit(Pid, shutdown),
				Ref = erlang:start_timer(Shutdown, self(), {shutdown, Pid}),
				Child#child{streamid=undefined, timer=Ref};
			_ ->
				Child
		end
	|| Child <- Children0].

-spec shutdown_timeout(children(), reference(), pid()) -> ok.
shutdown_timeout(Children, Ref, Pid) ->
	case lists:keyfind(Pid, #child.pid, Children) of
		#child{timer=Ref} ->
			exit(Pid, kill),
			ok;
		_ ->
			ok
	end.

-spec terminate(children()) -> ok.
terminate(Children) ->
	%% For each child, either ask for it to shut down,
	%% or cancel its shutdown timer if it already is.
	%%
	%% We do not need to flush stray timeout messages out because
	%% we are either terminating or switching protocols,
	%% and in the latter case we flush all messages.
	_ = [case TRef of
		undefined -> exit(Pid, shutdown);
		_ -> erlang:cancel_timer(TRef, [{async, true}, {info, false}])
	end || #child{pid=Pid, timer=TRef} <- Children],
	before_terminate_loop(Children).

before_terminate_loop([]) ->
	ok;
before_terminate_loop(Children) ->
	%% Find the longest shutdown time.
	Time = longest_shutdown_time(Children, 0),
	%% We delay the creation of the timer if one of the
	%% processes has an infinity shutdown value.
	TRef = case Time of
		infinity -> undefined;
		_ -> erlang:start_timer(Time, self(), terminate)
	end,
	%% Loop until that time or until all children are dead.
	terminate_loop(Children, TRef).

terminate_loop([], TRef) ->
	%% Don't forget to cancel the timer, if any!
	case TRef of
		undefined ->
			ok;
		_ ->
			_ = erlang:cancel_timer(TRef, [{async, true}, {info, false}]),
			ok
	end;
terminate_loop(Children, TRef) ->
	receive
		{'EXIT', Pid, _} when TRef =:= undefined ->
			{value, #child{shutdown=Shutdown}, Children1}
				= lists:keytake(Pid, #child.pid, Children),
			%% We delayed the creation of the timer. If a process with
			%% infinity shutdown just ended, we might have to start that timer.
			case Shutdown of
				infinity -> before_terminate_loop(Children1);
				_ -> terminate_loop(Children1, TRef)
			end;
		{'EXIT', Pid, _} ->
			terminate_loop(lists:keydelete(Pid, #child.pid, Children), TRef);
		{timeout, TRef, terminate} ->
			%% Brutally kill any remaining children.
			_ = [exit(Pid, kill) || #child{pid=Pid} <- Children],
			ok
	end.

longest_shutdown_time([], Time) ->
	Time;
longest_shutdown_time([#child{shutdown=ChildTime}|Tail], Time) when ChildTime > Time ->
	longest_shutdown_time(Tail, ChildTime);
longest_shutdown_time([_|Tail], Time) ->
	longest_shutdown_time(Tail, Time).

-spec handle_supervisor_call(any(), {pid(), any()}, children(), module()) -> ok.
handle_supervisor_call(which_children, {From, Tag}, Children, Module) ->
	From ! {Tag, which_children(Children, Module)},
	ok;
handle_supervisor_call(count_children, {From, Tag}, Children, _) ->
	From ! {Tag, count_children(Children)},
	ok;
%% We disable start_child since only incoming requests
%% end up creating a new process.
handle_supervisor_call({start_child, _}, {From, Tag}, _, _) ->
	From ! {Tag, {error, start_child_disabled}},
	ok;
%% All other calls refer to children. We act in a similar way
%% to a simple_one_for_one so we never find those.
handle_supervisor_call(_, {From, Tag}, _, _) ->
	From ! {Tag, {error, not_found}},
	ok.

-spec which_children(children(), module()) -> [{module(), pid(), worker, [module()]}].
which_children(Children, Module) ->
	[{Module, Pid, worker, [Module]} || #child{pid=Pid} <- Children].

-spec count_children(children()) -> [{atom(), non_neg_integer()}].
count_children(Children) ->
	Count = length(Children),
	[
		{specs, 1},
		{active, Count},
		{supervisors, 0},
		{workers, Count}
	].

@@ -0,0 +1,62 @@
%% Copyright (c) 2016-2024, Loïc Hoguin <essen@ninenines.eu>
%%
%% Permission to use, copy, modify, and/or distribute this software for any
%% purpose with or without fee is hereby granted, provided that the above
%% copyright notice and this permission notice appear in all copies.
%%
%% THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
%% WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
%% MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
%% ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
%% WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
%% ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
%% OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

-module(cowboy_clear).
-behavior(ranch_protocol).

-export([start_link/3]).
-export([start_link/4]).
-export([connection_process/4]).

%% Ranch 1.
-spec start_link(ranch:ref(), inet:socket(), module(), cowboy:opts()) -> {ok, pid()}.
start_link(Ref, _Socket, Transport, Opts) ->
	start_link(Ref, Transport, Opts).

%% Ranch 2.
-spec start_link(ranch:ref(), module(), cowboy:opts()) -> {ok, pid()}.
start_link(Ref, Transport, Opts) ->
	Pid = proc_lib:spawn_link(?MODULE, connection_process,
		[self(), Ref, Transport, Opts]),
	{ok, Pid}.

-spec connection_process(pid(), ranch:ref(), module(), cowboy:opts()) -> ok.
connection_process(Parent, Ref, Transport, Opts) ->
	ProxyInfo = get_proxy_info(Ref, Opts),
	{ok, Socket} = ranch:handshake(Ref),
	%% Use cowboy_http2 directly only when 'http' is missing.
	%% Otherwise switch to cowboy_http2 from cowboy_http.
	%%
	%% @todo Extend this option to cowboy_tls and allow disabling
	%% the switch to cowboy_http2 in cowboy_http. Also document it.
	Protocol = case maps:get(protocols, Opts, [http2, http]) of
		[http2] -> cowboy_http2;
		[_|_] -> cowboy_http
	end,
	init(Parent, Ref, Socket, Transport, ProxyInfo, Opts, Protocol).

init(Parent, Ref, Socket, Transport, ProxyInfo, Opts, Protocol) ->
	_ = case maps:get(connection_type, Opts, supervisor) of
		worker -> ok;
		supervisor -> process_flag(trap_exit, true)
	end,
	Protocol:init(Parent, Ref, Socket, Transport, ProxyInfo, Opts).

get_proxy_info(Ref, #{proxy_header := true}) ->
	case ranch:recv_proxy_header(Ref, 1000) of
		{ok, ProxyInfo} -> ProxyInfo;
		{error, closed} -> exit({shutdown, closed})
	end;
get_proxy_info(_, _) ->
	undefined.

@@ -0,0 +1,221 @@
%% Copyright (c) 2011-2024, Loïc Hoguin <essen@ninenines.eu>
%%
%% Permission to use, copy, modify, and/or distribute this software for any
%% purpose with or without fee is hereby granted, provided that the above
%% copyright notice and this permission notice appear in all copies.
%%
%% THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
%% WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
%% MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
%% ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
%% WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
%% ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
%% OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

%% While a gen_server process runs in the background to update
%% the cache of formatted dates every second, all API calls are
%% local and directly read from the ETS cache table, providing
%% fast time and date computations.
-module(cowboy_clock).
-behaviour(gen_server).

%% API.
-export([start_link/0]).
-export([stop/0]).
-export([rfc1123/0]).
-export([rfc1123/1]).

%% gen_server.
-export([init/1]).
-export([handle_call/3]).
-export([handle_cast/2]).
-export([handle_info/2]).
-export([terminate/2]).
-export([code_change/3]).

-record(state, {
	universaltime = undefined :: undefined | calendar:datetime(),
	rfc1123 = <<>> :: binary(),
	tref = undefined :: undefined | reference()
}).

%% API.

-spec start_link() -> {ok, pid()}.
start_link() ->
	gen_server:start_link({local, ?MODULE}, ?MODULE, [], []).

-spec stop() -> stopped.
stop() ->
	gen_server:call(?MODULE, stop).

%% When the ets table doesn't exist, either because of a bug
%% or because Cowboy is being restarted, we perform in a
%% slightly degraded state and build a new timestamp for
%% every request.
-spec rfc1123() -> binary().
rfc1123() ->
	try
		ets:lookup_element(?MODULE, rfc1123, 2)
	catch error:badarg ->
		rfc1123(erlang:universaltime())
	end.

-spec rfc1123(calendar:datetime()) -> binary().
rfc1123(DateTime) ->
	update_rfc1123(<<>>, undefined, DateTime).

%% gen_server.

-spec init([]) -> {ok, #state{}}.
init([]) ->
	?MODULE = ets:new(?MODULE, [set, protected,
		named_table, {read_concurrency, true}]),
	T = erlang:universaltime(),
	B = update_rfc1123(<<>>, undefined, T),
	TRef = erlang:send_after(1000, self(), update),
	ets:insert(?MODULE, {rfc1123, B}),
	{ok, #state{universaltime=T, rfc1123=B, tref=TRef}}.

-type from() :: {pid(), term()}.
-spec handle_call
	(stop, from(), State) -> {stop, normal, stopped, State}
	when State::#state{}.
handle_call(stop, _From, State) ->
	{stop, normal, stopped, State};
handle_call(_Request, _From, State) ->
	{reply, ignored, State}.

-spec handle_cast(_, State) -> {noreply, State} when State::#state{}.
handle_cast(_Msg, State) ->
	{noreply, State}.

-spec handle_info(any(), State) -> {noreply, State} when State::#state{}.
handle_info(update, #state{universaltime=Prev, rfc1123=B1, tref=TRef0}) ->
	%% Cancel the timer in case an external process sent an update message.
	_ = erlang:cancel_timer(TRef0),
	T = erlang:universaltime(),
	B2 = update_rfc1123(B1, Prev, T),
	ets:insert(?MODULE, {rfc1123, B2}),
	TRef = erlang:send_after(1000, self(), update),
	{noreply, #state{universaltime=T, rfc1123=B2, tref=TRef}};
handle_info(_Info, State) ->
	{noreply, State}.

-spec terminate(_, _) -> ok.
terminate(_Reason, _State) ->
	ok.

-spec code_change(_, State, _) -> {ok, State} when State::#state{}.
code_change(_OldVsn, State, _Extra) ->
	{ok, State}.

%% Internal.

-spec update_rfc1123(binary(), undefined | calendar:datetime(),
	calendar:datetime()) -> binary().
update_rfc1123(Bin, Now, Now) ->
	Bin;
update_rfc1123(<< Keep:23/binary, _/bits >>,
		{Date, {H, M, _}}, {Date, {H, M, S}}) ->
	<< Keep/binary, (pad_int(S))/binary, " GMT" >>;
update_rfc1123(<< Keep:20/binary, _/bits >>,
		{Date, {H, _, _}}, {Date, {H, M, S}}) ->
	<< Keep/binary, (pad_int(M))/binary, $:, (pad_int(S))/binary, " GMT" >>;
update_rfc1123(<< Keep:17/binary, _/bits >>, {Date, _}, {Date, {H, M, S}}) ->
	<< Keep/binary, (pad_int(H))/binary, $:, (pad_int(M))/binary,
		$:, (pad_int(S))/binary, " GMT" >>;
update_rfc1123(<< _:7/binary, Keep:10/binary, _/bits >>,
		{{Y, Mo, _}, _}, {Date = {Y, Mo, D}, {H, M, S}}) ->
	Wday = calendar:day_of_the_week(Date),
	<< (weekday(Wday))/binary, ", ", (pad_int(D))/binary, Keep/binary,
		(pad_int(H))/binary, $:, (pad_int(M))/binary,
		$:, (pad_int(S))/binary, " GMT" >>;
update_rfc1123(<< _:11/binary, Keep:6/binary, _/bits >>,
		{{Y, _, _}, _}, {Date = {Y, Mo, D}, {H, M, S}}) ->
	Wday = calendar:day_of_the_week(Date),
	<< (weekday(Wday))/binary, ", ", (pad_int(D))/binary, " ",
		(month(Mo))/binary, Keep/binary,
		(pad_int(H))/binary, $:, (pad_int(M))/binary,
		$:, (pad_int(S))/binary, " GMT" >>;
update_rfc1123(_, _, {Date = {Y, Mo, D}, {H, M, S}}) ->
	Wday = calendar:day_of_the_week(Date),
	<< (weekday(Wday))/binary, ", ", (pad_int(D))/binary, " ",
		(month(Mo))/binary, " ", (integer_to_binary(Y))/binary,
		" ", (pad_int(H))/binary, $:, (pad_int(M))/binary,
		$:, (pad_int(S))/binary, " GMT" >>.

%% Following suggestion by MononcQc on #erlounge.
-spec pad_int(0..59) -> binary().
pad_int(X) when X < 10 ->
	<< $0, ($0 + X) >>;
pad_int(X) ->
	integer_to_binary(X).

-spec weekday(1..7) -> <<_:24>>.
weekday(1) -> <<"Mon">>;
weekday(2) -> <<"Tue">>;
weekday(3) -> <<"Wed">>;
weekday(4) -> <<"Thu">>;
weekday(5) -> <<"Fri">>;
weekday(6) -> <<"Sat">>;
weekday(7) -> <<"Sun">>.

-spec month(1..12) -> <<_:24>>.
month( 1) -> <<"Jan">>;
month( 2) -> <<"Feb">>;
month( 3) -> <<"Mar">>;
month( 4) -> <<"Apr">>;
month( 5) -> <<"May">>;
month( 6) -> <<"Jun">>;
month( 7) -> <<"Jul">>;
month( 8) -> <<"Aug">>;
month( 9) -> <<"Sep">>;
month(10) -> <<"Oct">>;
month(11) -> <<"Nov">>;
month(12) -> <<"Dec">>.

%% Tests.

-ifdef(TEST).
update_rfc1123_test_() ->
	Tests = [
		{<<"Sat, 14 May 2011 14:25:33 GMT">>, undefined,
			{{2011, 5, 14}, {14, 25, 33}}, <<>>},
		{<<"Sat, 14 May 2011 14:25:33 GMT">>, {{2011, 5, 14}, {14, 25, 33}},
			{{2011, 5, 14}, {14, 25, 33}}, <<"Sat, 14 May 2011 14:25:33 GMT">>},
		{<<"Sat, 14 May 2011 14:25:34 GMT">>, {{2011, 5, 14}, {14, 25, 33}},
			{{2011, 5, 14}, {14, 25, 34}}, <<"Sat, 14 May 2011 14:25:33 GMT">>},
		{<<"Sat, 14 May 2011 14:26:00 GMT">>, {{2011, 5, 14}, {14, 25, 59}},
			{{2011, 5, 14}, {14, 26, 0}}, <<"Sat, 14 May 2011 14:25:59 GMT">>},
		{<<"Sat, 14 May 2011 15:00:00 GMT">>, {{2011, 5, 14}, {14, 59, 59}},
			{{2011, 5, 14}, {15, 0, 0}}, <<"Sat, 14 May 2011 14:59:59 GMT">>},
		{<<"Sun, 15 May 2011 00:00:00 GMT">>, {{2011, 5, 14}, {23, 59, 59}},
			{{2011, 5, 15}, { 0, 0, 0}}, <<"Sat, 14 May 2011 23:59:59 GMT">>},
		{<<"Wed, 01 Jun 2011 00:00:00 GMT">>, {{2011, 5, 31}, {23, 59, 59}},
			{{2011, 6, 1}, { 0, 0, 0}}, <<"Tue, 31 May 2011 23:59:59 GMT">>},
		{<<"Sun, 01 Jan 2012 00:00:00 GMT">>, {{2011, 5, 31}, {23, 59, 59}},
			{{2012, 1, 1}, { 0, 0, 0}}, <<"Sat, 31 Dec 2011 23:59:59 GMT">>}
	],
	[{R, fun() -> R = update_rfc1123(B, P, N) end} || {R, P, N, B} <- Tests].

pad_int_test_() ->
	Tests = [
		{ 0, <<"00">>}, { 1, <<"01">>}, { 2, <<"02">>}, { 3, <<"03">>},
		{ 4, <<"04">>}, { 5, <<"05">>}, { 6, <<"06">>}, { 7, <<"07">>},
		{ 8, <<"08">>}, { 9, <<"09">>}, {10, <<"10">>}, {11, <<"11">>},
		{12, <<"12">>}, {13, <<"13">>}, {14, <<"14">>}, {15, <<"15">>},
		{16, <<"16">>}, {17, <<"17">>}, {18, <<"18">>}, {19, <<"19">>},
		{20, <<"20">>}, {21, <<"21">>}, {22, <<"22">>}, {23, <<"23">>},
		{24, <<"24">>}, {25, <<"25">>}, {26, <<"26">>}, {27, <<"27">>},
		{28, <<"28">>}, {29, <<"29">>}, {30, <<"30">>}, {31, <<"31">>},
		{32, <<"32">>}, {33, <<"33">>}, {34, <<"34">>}, {35, <<"35">>},
		{36, <<"36">>}, {37, <<"37">>}, {38, <<"38">>}, {39, <<"39">>},
		{40, <<"40">>}, {41, <<"41">>}, {42, <<"42">>}, {43, <<"43">>},
		{44, <<"44">>}, {45, <<"45">>}, {46, <<"46">>}, {47, <<"47">>},
		{48, <<"48">>}, {49, <<"49">>}, {50, <<"50">>}, {51, <<"51">>},
		{52, <<"52">>}, {53, <<"53">>}, {54, <<"54">>}, {55, <<"55">>},
		{56, <<"56">>}, {57, <<"57">>}, {58, <<"58">>}, {59, <<"59">>}
	],
	[{I, fun() -> O = pad_int(I) end} || {I, O} <- Tests].
-endif.

@@ -0,0 +1,267 @@
%% Copyright (c) 2017-2024, Loïc Hoguin <essen@ninenines.eu>
%%
%% Permission to use, copy, modify, and/or distribute this software for any
%% purpose with or without fee is hereby granted, provided that the above
%% copyright notice and this permission notice appear in all copies.
%%
%% THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
%% WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
%% MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
%% ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
%% WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
%% ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
%% OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

-module(cowboy_compress_h).
-behavior(cowboy_stream).

-export([init/3]).
-export([data/4]).
-export([info/3]).
-export([terminate/3]).
-export([early_error/5]).

-record(state, {
	next :: any(),
	threshold :: non_neg_integer() | undefined,
	compress = undefined :: undefined | gzip,
	deflate = undefined :: undefined | zlib:zstream(),
	deflate_flush = sync :: none | sync
}).

-spec init(cowboy_stream:streamid(), cowboy_req:req(), cowboy:opts())
	-> {cowboy_stream:commands(), #state{}}.
init(StreamID, Req, Opts) ->
	State0 = check_req(Req),
	CompressThreshold = maps:get(compress_threshold, Opts, 300),
	DeflateFlush = buffering_to_zflush(maps:get(compress_buffering, Opts, false)),
	{Commands0, Next} = cowboy_stream:init(StreamID, Req, Opts),
	fold(Commands0, State0#state{next=Next,
		threshold=CompressThreshold,
		deflate_flush=DeflateFlush}).

-spec data(cowboy_stream:streamid(), cowboy_stream:fin(), cowboy_req:resp_body(), State)
	-> {cowboy_stream:commands(), State} when State::#state{}.
data(StreamID, IsFin, Data, State0=#state{next=Next0}) ->
	{Commands0, Next} = cowboy_stream:data(StreamID, IsFin, Data, Next0),
	fold(Commands0, State0#state{next=Next}).

-spec info(cowboy_stream:streamid(), any(), State)
	-> {cowboy_stream:commands(), State} when State::#state{}.
info(StreamID, Info, State0=#state{next=Next0}) ->
	{Commands0, Next} = cowboy_stream:info(StreamID, Info, Next0),
	fold(Commands0, State0#state{next=Next}).

-spec terminate(cowboy_stream:streamid(), cowboy_stream:reason(), #state{}) -> any().
terminate(StreamID, Reason, #state{next=Next, deflate=Z}) ->
	%% Clean the zlib:stream() in case something went wrong.
	%% In the normal scenario the stream is already closed.
	case Z of
		undefined -> ok;
		_ -> zlib:close(Z)
	end,
	cowboy_stream:terminate(StreamID, Reason, Next).

-spec early_error(cowboy_stream:streamid(), cowboy_stream:reason(),
	cowboy_stream:partial_req(), Resp, cowboy:opts()) -> Resp
	when Resp::cowboy_stream:resp_command().
early_error(StreamID, Reason, PartialReq, Resp, Opts) ->
	cowboy_stream:early_error(StreamID, Reason, PartialReq, Resp, Opts).

%% Internal.

%% Check if the client supports decoding of gzip responses.
%%
%% A malformed accept-encoding header is ignored (no compression).
check_req(Req) ->
	try cowboy_req:parse_header(<<"accept-encoding">>, Req) of
		%% Client doesn't support any compression algorithm.
		undefined ->
			#state{compress=undefined};
		Encodings ->
			%% We only support gzip so look for it specifically.
			%% @todo A recipient SHOULD consider "x-gzip" to be
			%% equivalent to "gzip". (RFC7230 4.2.3)
			case [E || E={<<"gzip">>, Q} <- Encodings, Q =/= 0] of
				[] ->
					#state{compress=undefined};
				_ ->
					#state{compress=gzip}
			end
	catch
		_:_ ->
			#state{compress=undefined}
	end.

%% Do not compress responses that contain the content-encoding header.
check_resp_headers(#{<<"content-encoding">> := _}, State) ->
	State#state{compress=undefined};
%% Do not compress responses that contain the etag header.
check_resp_headers(#{<<"etag">> := _}, State) ->
	State#state{compress=undefined};
check_resp_headers(_, State) ->
	State.

fold(Commands, State=#state{compress=undefined}) ->
	fold_vary_only(Commands, State, []);
fold(Commands, State) ->
	fold(Commands, State, []).

fold([], State, Acc) ->
	{lists:reverse(Acc), State};
%% We do not compress full sendfile bodies.
fold([Response={response, _, _, {sendfile, _, _, _}}|Tail], State, Acc) ->
	fold(Tail, State, [vary_response(Response)|Acc]);
%% We compress full responses directly, unless they are lower than
%% the configured threshold or we find we are not able to by looking at the headers.
fold([Response0={response, _, Headers, Body}|Tail],
		State0=#state{threshold=CompressThreshold}, Acc) ->
	case check_resp_headers(Headers, State0) of
		State=#state{compress=undefined} ->
			fold(Tail, State, [vary_response(Response0)|Acc]);
		State1 ->
			BodyLength = iolist_size(Body),
			if
				BodyLength =< CompressThreshold ->
					fold(Tail, State1, [vary_response(Response0)|Acc]);
				true ->
					{Response, State} = gzip_response(Response0, State1),
					fold(Tail, State, [vary_response(Response)|Acc])
			end
	end;
%% Check headers and initiate compression...
fold([Response0={headers, _, Headers}|Tail], State0, Acc) ->
	case check_resp_headers(Headers, State0) of
		State=#state{compress=undefined} ->
			fold(Tail, State, [vary_headers(Response0)|Acc]);
		State1 ->
			{Response, State} = gzip_headers(Response0, State1),
			fold(Tail, State, [vary_headers(Response)|Acc])
	end;
%% then compress each data commands individually.
fold([Data0={data, _, _}|Tail], State0=#state{compress=gzip}, Acc) ->
	{Data, State} = gzip_data(Data0, State0),
	fold(Tail, State, [Data|Acc]);
%% When trailers are sent we need to end the compression.
%% This results in an extra data command being sent.
fold([Trailers={trailers, _}|Tail], State0=#state{compress=gzip}, Acc) ->
	{{data, fin, Data}, State} = gzip_data({data, fin, <<>>}, State0),
	fold(Tail, State, [Trailers, {data, nofin, Data}|Acc]);
|
||||||
|
%% All the options from this handler can be updated for the current stream.
|
||||||
|
%% The set_options command must be propagated as-is regardless.
|
||||||
|
fold([SetOptions={set_options, Opts}|Tail], State=#state{
|
||||||
|
threshold=CompressThreshold0, deflate_flush=DeflateFlush0}, Acc) ->
|
||||||
|
CompressThreshold = maps:get(compress_threshold, Opts, CompressThreshold0),
|
||||||
|
DeflateFlush = case Opts of
|
||||||
|
#{compress_buffering := CompressBuffering} ->
|
||||||
|
buffering_to_zflush(CompressBuffering);
|
||||||
|
_ ->
|
||||||
|
DeflateFlush0
|
||||||
|
end,
|
||||||
|
fold(Tail, State#state{threshold=CompressThreshold, deflate_flush=DeflateFlush},
|
||||||
|
[SetOptions|Acc]);
|
||||||
|
%% Otherwise, we have an unrelated command or compression is disabled.
|
||||||
|
fold([Command|Tail], State, Acc) ->
|
||||||
|
fold(Tail, State, [Command|Acc]).
|
||||||
|
|
||||||
|
fold_vary_only([], State, Acc) ->
|
||||||
|
{lists:reverse(Acc), State};
|
||||||
|
fold_vary_only([Response={response, _, _, _}|Tail], State, Acc) ->
|
||||||
|
fold_vary_only(Tail, State, [vary_response(Response)|Acc]);
|
||||||
|
fold_vary_only([Response={headers, _, _}|Tail], State, Acc) ->
|
||||||
|
fold_vary_only(Tail, State, [vary_headers(Response)|Acc]);
|
||||||
|
fold_vary_only([Command|Tail], State, Acc) ->
|
||||||
|
fold_vary_only(Tail, State, [Command|Acc]).
|
||||||
|
|
||||||
|
buffering_to_zflush(true) -> none;
|
||||||
|
buffering_to_zflush(false) -> sync.
|
||||||
|
|
||||||
|
gzip_response({response, Status, Headers, Body}, State) ->
|
||||||
|
%% We can't call zlib:gzip/1 because it does an
|
||||||
|
%% iolist_to_binary(GzBody) at the end to return
|
||||||
|
%% a binary(). Therefore the code here is largely
|
||||||
|
%% a duplicate of the code of that function.
|
||||||
|
Z = zlib:open(),
|
||||||
|
GzBody = try
|
||||||
|
%% 31 = 16+?MAX_WBITS from zlib.erl
|
||||||
|
%% @todo It might be good to allow them to be configured?
|
||||||
|
zlib:deflateInit(Z, default, deflated, 31, 8, default),
|
||||||
|
Gz = zlib:deflate(Z, Body, finish),
|
||||||
|
zlib:deflateEnd(Z),
|
||||||
|
Gz
|
||||||
|
after
|
||||||
|
zlib:close(Z)
|
||||||
|
end,
|
||||||
|
{{response, Status, Headers#{
|
||||||
|
<<"content-length">> => integer_to_binary(iolist_size(GzBody)),
|
||||||
|
<<"content-encoding">> => <<"gzip">>
|
||||||
|
}, GzBody}, State}.
|
||||||
|
|
||||||
|
gzip_headers({headers, Status, Headers0}, State) ->
|
||||||
|
Z = zlib:open(),
|
||||||
|
%% We use the same arguments as when compressing the body fully.
|
||||||
|
%% @todo It might be good to allow them to be configured?
|
||||||
|
zlib:deflateInit(Z, default, deflated, 31, 8, default),
|
||||||
|
Headers = maps:remove(<<"content-length">>, Headers0),
|
||||||
|
{{headers, Status, Headers#{
|
||||||
|
<<"content-encoding">> => <<"gzip">>
|
||||||
|
}}, State#state{deflate=Z}}.
|
||||||
|
|
||||||
|
vary_response({response, Status, Headers, Body}) ->
|
||||||
|
{response, Status, vary(Headers), Body}.
|
||||||
|
|
||||||
|
vary_headers({headers, Status, Headers}) ->
|
||||||
|
{headers, Status, vary(Headers)}.
|
||||||
|
|
||||||
|
%% We must add content-encoding to vary if it's not already there.
|
||||||
|
vary(Headers=#{<<"vary">> := Vary}) ->
|
||||||
|
try cow_http_hd:parse_vary(iolist_to_binary(Vary)) of
|
||||||
|
'*' -> Headers;
|
||||||
|
List ->
|
||||||
|
case lists:member(<<"accept-encoding">>, List) of
|
||||||
|
true -> Headers;
|
||||||
|
false -> Headers#{<<"vary">> => [Vary, <<", accept-encoding">>]}
|
||||||
|
end
|
||||||
|
catch _:_ ->
|
||||||
|
%% The vary header is invalid. Probably empty. We replace it with ours.
|
||||||
|
Headers#{<<"vary">> => <<"accept-encoding">>}
|
||||||
|
end;
|
||||||
|
vary(Headers) ->
|
||||||
|
Headers#{<<"vary">> => <<"accept-encoding">>}.
|
||||||
|
|
||||||
|
%% It is not possible to combine zlib and the sendfile
|
||||||
|
%% syscall as far as I can tell, because the zlib format
|
||||||
|
%% includes a checksum at the end of the stream. We have
|
||||||
|
%% to read the file in memory, making this not suitable for
|
||||||
|
%% large files.
|
||||||
|
gzip_data({data, nofin, Sendfile={sendfile, _, _, _}},
|
||||||
|
State=#state{deflate=Z, deflate_flush=Flush}) ->
|
||||||
|
{ok, Data0} = read_file(Sendfile),
|
||||||
|
Data = zlib:deflate(Z, Data0, Flush),
|
||||||
|
{{data, nofin, Data}, State};
|
||||||
|
gzip_data({data, fin, Sendfile={sendfile, _, _, _}}, State=#state{deflate=Z}) ->
|
||||||
|
{ok, Data0} = read_file(Sendfile),
|
||||||
|
Data = zlib:deflate(Z, Data0, finish),
|
||||||
|
zlib:deflateEnd(Z),
|
||||||
|
zlib:close(Z),
|
||||||
|
{{data, fin, Data}, State#state{deflate=undefined}};
|
||||||
|
gzip_data({data, nofin, Data0}, State=#state{deflate=Z, deflate_flush=Flush}) ->
|
||||||
|
Data = zlib:deflate(Z, Data0, Flush),
|
||||||
|
{{data, nofin, Data}, State};
|
||||||
|
gzip_data({data, fin, Data0}, State=#state{deflate=Z}) ->
|
||||||
|
Data = zlib:deflate(Z, Data0, finish),
|
||||||
|
zlib:deflateEnd(Z),
|
||||||
|
zlib:close(Z),
|
||||||
|
{{data, fin, Data}, State#state{deflate=undefined}}.
|
||||||
|
|
||||||
|
read_file({sendfile, Offset, Bytes, Path}) ->
|
||||||
|
{ok, IoDevice} = file:open(Path, [read, raw, binary]),
|
||||||
|
try
|
||||||
|
_ = case Offset of
|
||||||
|
0 -> ok;
|
||||||
|
_ -> file:position(IoDevice, {bof, Offset})
|
||||||
|
end,
|
||||||
|
file:read(IoDevice, Bytes)
|
||||||
|
after
|
||||||
|
file:close(IoDevice)
|
||||||
|
end.
|
|
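A minimal usage sketch for the handler above, assuming a hypothetical my_handler route and listener name; compression is turned on by listing cowboy_compress_h in the stream_handlers protocol option, and compress_threshold matches the default read in init/3.

-module(compress_example).
-export([start/0]).

start() ->
    Dispatch = cowboy_router:compile([{'_', [{"/", my_handler, []}]}]),
    {ok, _} = cowboy:start_clear(compress_example_http, [{port, 8080}], #{
        env => #{dispatch => Dispatch},
        %% The compress handler runs in front of the default stream handler.
        stream_handlers => [cowboy_compress_h, cowboy_stream_h],
        %% Only bodies larger than this many bytes are compressed.
        compress_threshold => 300
    }).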
@@ -0,0 +1,174 @@
%% Copyright (c) 2014-2024, Loïc Hoguin <essen@ninenines.eu>
%%
%% Permission to use, copy, modify, and/or distribute this software for any
%% purpose with or without fee is hereby granted, provided that the above
%% copyright notice and this permission notice appear in all copies.
%%
%% THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
%% WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
%% MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
%% ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
%% WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
%% ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
%% OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

-module(cowboy_constraints).

-export([validate/2]).
-export([reverse/2]).
-export([format_error/1]).

-type constraint() :: int | nonempty | fun().
-export_type([constraint/0]).

-type reason() :: {constraint(), any(), any()}.
-export_type([reason/0]).

-spec validate(binary(), constraint() | [constraint()])
    -> {ok, any()} | {error, reason()}.
validate(Value, Constraints) when is_list(Constraints) ->
    apply_list(forward, Value, Constraints);
validate(Value, Constraint) ->
    apply_list(forward, Value, [Constraint]).

-spec reverse(any(), constraint() | [constraint()])
    -> {ok, binary()} | {error, reason()}.
reverse(Value, Constraints) when is_list(Constraints) ->
    apply_list(reverse, Value, Constraints);
reverse(Value, Constraint) ->
    apply_list(reverse, Value, [Constraint]).

-spec format_error(reason()) -> iodata().
format_error({Constraint, Reason, Value}) ->
    apply_constraint(format_error, {Reason, Value}, Constraint).

apply_list(_, Value, []) ->
    {ok, Value};
apply_list(Type, Value0, [Constraint|Tail]) ->
    case apply_constraint(Type, Value0, Constraint) of
        {ok, Value} ->
            apply_list(Type, Value, Tail);
        {error, Reason} ->
            {error, {Constraint, Reason, Value0}}
    end.

%% @todo {int, From, To}, etc.
apply_constraint(Type, Value, int) ->
    int(Type, Value);
apply_constraint(Type, Value, nonempty) ->
    nonempty(Type, Value);
apply_constraint(Type, Value, F) when is_function(F) ->
    F(Type, Value).

%% Constraint functions.

int(forward, Value) ->
    try
        {ok, binary_to_integer(Value)}
    catch _:_ ->
        {error, not_an_integer}
    end;
int(reverse, Value) ->
    try
        {ok, integer_to_binary(Value)}
    catch _:_ ->
        {error, not_an_integer}
    end;
int(format_error, {not_an_integer, Value}) ->
    io_lib:format("The value ~p is not an integer.", [Value]).

nonempty(Type, <<>>) when Type =/= format_error ->
    {error, empty};
nonempty(Type, Value) when Type =/= format_error, is_binary(Value) ->
    {ok, Value};
nonempty(format_error, {empty, Value}) ->
    io_lib:format("The value ~p is empty.", [Value]).

-ifdef(TEST).

validate_test() ->
    F = fun(_, Value) ->
        try
            {ok, binary_to_atom(Value, latin1)}
        catch _:_ ->
            {error, not_a_binary}
        end
    end,
    %% Value, Constraints, Result.
    Tests = [
        {<<>>, [], <<>>},
        {<<"123">>, int, 123},
        {<<"123">>, [int], 123},
        {<<"123">>, [nonempty, int], 123},
        {<<"123">>, [int, nonempty], 123},
        {<<>>, nonempty, error},
        {<<>>, [nonempty], error},
        {<<"hello">>, F, hello},
        {<<"hello">>, [F], hello},
        {<<"123">>, [F, int], error},
        {<<"123">>, [int, F], error},
        {<<"hello">>, [nonempty, F], hello},
        {<<"hello">>, [F, nonempty], hello}
    ],
    [{lists:flatten(io_lib:format("~p, ~p", [V, C])), fun() ->
        case R of
            error -> {error, _} = validate(V, C);
            _ -> {ok, R} = validate(V, C)
        end
    end} || {V, C, R} <- Tests].

reverse_test() ->
    F = fun(_, Value) ->
        try
            {ok, atom_to_binary(Value, latin1)}
        catch _:_ ->
            {error, not_an_atom}
        end
    end,
    %% Value, Constraints, Result.
    Tests = [
        {<<>>, [], <<>>},
        {123, int, <<"123">>},
        {123, [int], <<"123">>},
        {123, [nonempty, int], <<"123">>},
        {123, [int, nonempty], <<"123">>},
        {<<>>, nonempty, error},
        {<<>>, [nonempty], error},
        {hello, F, <<"hello">>},
        {hello, [F], <<"hello">>},
        {123, [F, int], error},
        {123, [int, F], error},
        {hello, [nonempty, F], <<"hello">>},
        {hello, [F, nonempty], <<"hello">>}
    ],
    [{lists:flatten(io_lib:format("~p, ~p", [V, C])), fun() ->
        case R of
            error -> {error, _} = reverse(V, C);
            _ -> {ok, R} = reverse(V, C)
        end
    end} || {V, C, R} <- Tests].

int_format_error_test() ->
    {error, Reason} = validate(<<"string">>, int),
    Bin = iolist_to_binary(format_error(Reason)),
    true = is_binary(Bin),
    ok.

nonempty_format_error_test() ->
    {error, Reason} = validate(<<>>, nonempty),
    Bin = iolist_to_binary(format_error(Reason)),
    true = is_binary(Bin),
    ok.

fun_format_error_test() ->
    F = fun
        (format_error, {test, <<"value">>}) ->
            formatted;
        (_, _) ->
            {error, test}
    end,
    {error, Reason} = validate(<<"value">>, F),
    formatted = format_error(Reason),
    ok.

-endif.
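A short sketch of how these constraints behave, using only functions defined in this module; the concrete values are illustrative.

-module(constraints_example).
-export([demo/0]).

demo() ->
    %% Constraints apply left to right; int converts the binary to an integer.
    {ok, 123} = cowboy_constraints:validate(<<"123">>, [nonempty, int]),
    %% A failing constraint reports which constraint failed, why, and on what value.
    {error, Reason} = cowboy_constraints:validate(<<"abc">>, int),
    Message = cowboy_constraints:format_error(Reason),
    %% reverse/2 goes from the converted value back to a binary.
    {ok, <<"123">>} = cowboy_constraints:reverse(123, int),
    io:format("~s~n", [Message]).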
@@ -0,0 +1,257 @@
%% Copyright (c) 2024, jdamanalo <joshuadavid.agustin@manalo.ph>
%% Copyright (c) 2024, Loïc Hoguin <essen@ninenines.eu>
%%
%% Permission to use, copy, modify, and/or distribute this software for any
%% purpose with or without fee is hereby granted, provided that the above
%% copyright notice and this permission notice appear in all copies.
%%
%% THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
%% WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
%% MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
%% ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
%% WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
%% ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
%% OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

-module(cowboy_decompress_h).
-behavior(cowboy_stream).

-export([init/3]).
-export([data/4]).
-export([info/3]).
-export([terminate/3]).
-export([early_error/5]).

-record(state, {
    next :: any(),
    enabled = true :: boolean(),
    ratio_limit :: non_neg_integer() | undefined,
    compress = undefined :: undefined | gzip,
    inflate = undefined :: undefined | zlib:zstream(),
    is_reading = false :: boolean(),

    %% We use a list of binaries to avoid doing unnecessary
    %% memory allocations when inflating. We convert to binary
    %% when we propagate the data. The data must be reversed
    %% before converting to binary or inflating: this is done
    %% via the buffer_to_binary/buffer_to_iovec functions.
    read_body_buffer = [] :: [binary()],
    read_body_is_fin = nofin :: nofin | {fin, non_neg_integer()}
}).

-spec init(cowboy_stream:streamid(), cowboy_req:req(), cowboy:opts())
    -> {cowboy_stream:commands(), #state{}}.
init(StreamID, Req0, Opts) ->
    Enabled = maps:get(decompress_enabled, Opts, true),
    RatioLimit = maps:get(decompress_ratio_limit, Opts, 20),
    {Req, State} = check_and_update_req(Req0),
    Inflate = case State#state.compress of
        undefined ->
            undefined;
        gzip ->
            Z = zlib:open(),
            zlib:inflateInit(Z, 31),
            Z
    end,
    {Commands, Next} = cowboy_stream:init(StreamID, Req, Opts),
    fold(Commands, State#state{next=Next, enabled=Enabled,
        ratio_limit=RatioLimit, inflate=Inflate}).

-spec data(cowboy_stream:streamid(), cowboy_stream:fin(), cowboy_req:resp_body(), State)
    -> {cowboy_stream:commands(), State} when State::#state{}.
data(StreamID, IsFin, Data, State=#state{next=Next0, inflate=undefined}) ->
    {Commands, Next} = cowboy_stream:data(StreamID, IsFin, Data, Next0),
    fold(Commands, State#state{next=Next, read_body_is_fin=IsFin});
data(StreamID, IsFin, Data, State=#state{next=Next0, enabled=false, read_body_buffer=Buffer}) ->
    {Commands, Next} = cowboy_stream:data(StreamID, IsFin,
        buffer_to_binary([Data|Buffer]), Next0),
    fold(Commands, State#state{next=Next, read_body_is_fin=IsFin});
data(StreamID, IsFin, Data, State0=#state{next=Next0, ratio_limit=RatioLimit,
        inflate=Z, is_reading=true, read_body_buffer=Buffer}) ->
    case inflate(Z, RatioLimit, buffer_to_iovec([Data|Buffer])) of
        {error, ErrorType} ->
            zlib:close(Z),
            Status = case ErrorType of
                data_error -> 400;
                size_error -> 413
            end,
            Commands = [
                {error_response, Status, #{<<"content-length">> => <<"0">>}, <<>>},
                stop
            ],
            fold(Commands, State0#state{inflate=undefined, read_body_buffer=[]});
        {ok, Inflated} ->
            State = case IsFin of
                nofin ->
                    State0;
                fin ->
                    zlib:close(Z),
                    State0#state{inflate=undefined}
            end,
            {Commands, Next} = cowboy_stream:data(StreamID, IsFin, Inflated, Next0),
            fold(Commands, State#state{next=Next, read_body_buffer=[],
                read_body_is_fin=IsFin})
    end;
data(_, IsFin, Data, State=#state{read_body_buffer=Buffer}) ->
    {[], State#state{read_body_buffer=[Data|Buffer], read_body_is_fin=IsFin}}.

-spec info(cowboy_stream:streamid(), any(), State)
    -> {cowboy_stream:commands(), State} when State::#state{}.
info(StreamID, Info, State=#state{next=Next0, inflate=undefined}) ->
    {Commands, Next} = cowboy_stream:info(StreamID, Info, Next0),
    fold(Commands, State#state{next=Next});
info(StreamID, Info={CommandTag, _, _, _, _}, State=#state{next=Next0, read_body_is_fin=IsFin})
        when CommandTag =:= read_body; CommandTag =:= read_body_timeout ->
    {Commands0, Next1} = cowboy_stream:info(StreamID, Info, Next0),
    {Commands, Next} = data(StreamID, IsFin, <<>>, State#state{next=Next1, is_reading=true}),
    fold(Commands ++ Commands0, Next);
info(StreamID, Info={set_options, Opts}, State0=#state{next=Next0,
        enabled=Enabled0, ratio_limit=RatioLimit0, is_reading=IsReading}) ->
    Enabled = maps:get(decompress_enabled, Opts, Enabled0),
    RatioLimit = maps:get(decompress_ratio_limit, Opts, RatioLimit0),
    {Commands, Next} = cowboy_stream:info(StreamID, Info, Next0),
    %% We can't change the enabled setting after we start reading,
    %% otherwise the data becomes garbage. Changing the setting
    %% is not treated as an error, it is just ignored.
    State = case IsReading of
        true -> State0;
        false -> State0#state{enabled=Enabled}
    end,
    fold(Commands, State#state{next=Next, ratio_limit=RatioLimit});
info(StreamID, Info, State=#state{next=Next0}) ->
    {Commands, Next} = cowboy_stream:info(StreamID, Info, Next0),
    fold(Commands, State#state{next=Next}).

-spec terminate(cowboy_stream:streamid(), cowboy_stream:reason(), #state{}) -> any().
terminate(StreamID, Reason, #state{next=Next, inflate=Z}) ->
    case Z of
        undefined -> ok;
        _ -> zlib:close(Z)
    end,
    cowboy_stream:terminate(StreamID, Reason, Next).

-spec early_error(cowboy_stream:streamid(), cowboy_stream:reason(),
    cowboy_stream:partial_req(), Resp, cowboy:opts()) -> Resp
    when Resp::cowboy_stream:resp_command().
early_error(StreamID, Reason, PartialReq, Resp, Opts) ->
    cowboy_stream:early_error(StreamID, Reason, PartialReq, Resp, Opts).

%% Internal.

%% Check whether the request needs content decoding, and if it does
%% whether it fits our criteria for decoding. We also update the
%% Req to indicate whether content was decoded.
%%
%% We always set the content_decoded value in the Req because it
%% indicates whether content decoding was attempted.
%%
%% A malformed content-encoding header results in no decoding.
check_and_update_req(Req=#{headers := Headers}) ->
    ContentDecoded = maps:get(content_decoded, Req, []),
    try cowboy_req:parse_header(<<"content-encoding">>, Req) of
        %% We only automatically decompress when gzip is the only
        %% encoding used. Since it's the only encoding used, we
        %% can remove the header entirely before passing the Req
        %% forward.
        [<<"gzip">>] ->
            {Req#{
                headers => maps:remove(<<"content-encoding">>, Headers),
                content_decoded => [<<"gzip">>|ContentDecoded]
            }, #state{compress=gzip}};
        _ ->
            {Req#{content_decoded => ContentDecoded},
                #state{compress=undefined}}
    catch _:_ ->
        {Req#{content_decoded => ContentDecoded},
            #state{compress=undefined}}
    end.

buffer_to_iovec(Buffer) ->
    lists:reverse(Buffer).

buffer_to_binary(Buffer) ->
    iolist_to_binary(lists:reverse(Buffer)).

fold(Commands, State) ->
    fold(Commands, State, []).

fold([], State, Acc) ->
    {lists:reverse(Acc), State};
fold([{response, Status, Headers0, Body}|Tail], State=#state{enabled=true}, Acc) ->
    Headers = add_accept_encoding(Headers0),
    fold(Tail, State, [{response, Status, Headers, Body}|Acc]);
fold([{headers, Status, Headers0} | Tail], State=#state{enabled=true}, Acc) ->
    Headers = add_accept_encoding(Headers0),
    fold(Tail, State, [{headers, Status, Headers}|Acc]);
fold([Command|Tail], State, Acc) ->
    fold(Tail, State, [Command|Acc]).

add_accept_encoding(Headers=#{<<"accept-encoding">> := AcceptEncoding}) ->
    try cow_http_hd:parse_accept_encoding(iolist_to_binary(AcceptEncoding)) of
        List ->
            case lists:keyfind(<<"gzip">>, 1, List) of
                %% gzip is excluded but this handler is enabled; we replace.
                {_, 0} ->
                    Replaced = lists:keyreplace(<<"gzip">>, 1, List, {<<"gzip">>, 1000}),
                    Codings = build_accept_encoding(Replaced),
                    Headers#{<<"accept-encoding">> => Codings};
                {_, _} ->
                    Headers;
                false ->
                    case lists:keyfind(<<"*">>, 1, List) of
                        %% Others are excluded along with gzip; we add.
                        {_, 0} ->
                            WithGzip = [{<<"gzip">>, 1000} | List],
                            Codings = build_accept_encoding(WithGzip),
                            Headers#{<<"accept-encoding">> => Codings};
                        {_, _} ->
                            Headers;
                        false ->
                            Headers#{<<"accept-encoding">> => [AcceptEncoding, <<", gzip">>]}
                    end
            end
    catch _:_ ->
        %% The accept-encoding header is invalid. Probably empty. We replace it with ours.
        Headers#{<<"accept-encoding">> => <<"gzip">>}
    end;
add_accept_encoding(Headers) ->
    Headers#{<<"accept-encoding">> => <<"gzip">>}.

%% @todo From cowlib, maybe expose?
qvalue_to_iodata(0) -> <<"0">>;
qvalue_to_iodata(Q) when Q < 10 -> [<<"0.00">>, integer_to_binary(Q)];
qvalue_to_iodata(Q) when Q < 100 -> [<<"0.0">>, integer_to_binary(Q)];
qvalue_to_iodata(Q) when Q < 1000 -> [<<"0.">>, integer_to_binary(Q)];
qvalue_to_iodata(1000) -> <<"1">>.

%% @todo Should be added to Cowlib.
build_accept_encoding([{ContentCoding, Q}|Tail]) ->
    Weight = iolist_to_binary(qvalue_to_iodata(Q)),
    Acc = <<ContentCoding/binary, ";q=", Weight/binary>>,
    do_build_accept_encoding(Tail, Acc).

do_build_accept_encoding([{ContentCoding, Q}|Tail], Acc0) ->
    Weight = iolist_to_binary(qvalue_to_iodata(Q)),
    Acc = <<Acc0/binary, ", ", ContentCoding/binary, ";q=", Weight/binary>>,
    do_build_accept_encoding(Tail, Acc);
do_build_accept_encoding([], Acc) ->
    Acc.

inflate(Z, RatioLimit, Data) ->
    try
        {Status, Output} = zlib:safeInflate(Z, Data),
        Size = iolist_size(Output),
        do_inflate(Z, Size, iolist_size(Data) * RatioLimit, Status, [Output])
    catch
        error:data_error ->
            {error, data_error}
    end.

do_inflate(_, Size, Limit, _, _) when Size > Limit ->
    {error, size_error};
do_inflate(Z, Size0, Limit, continue, Acc) ->
    {Status, Output} = zlib:safeInflate(Z, []),
    Size = Size0 + iolist_size(Output),
    do_inflate(Z, Size, Limit, Status, [Output | Acc]);
do_inflate(_, _, _, finished, Acc) ->
    {ok, iolist_to_binary(lists:reverse(Acc))}.
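A minimal sketch of enabling request body decompression on a listener, assuming a hypothetical upload_h handler; the two option names are the ones read in init/3 and info/3 above.

-module(decompress_example).
-export([start/0]).

start() ->
    Dispatch = cowboy_router:compile([{'_', [{"/upload", upload_h, []}]}]),
    {ok, _} = cowboy:start_clear(decompress_example_http, [{port, 8080}], #{
        env => #{dispatch => Dispatch},
        %% Inflate gzip request bodies before they reach the request process.
        stream_handlers => [cowboy_decompress_h, cowboy_stream_h],
        decompress_enabled => true,
        %% Reject bodies that expand more than 20x (results in a 413 response).
        decompress_ratio_limit => 20
    }).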
@@ -0,0 +1,57 @@
%% Copyright (c) 2011-2024, Loïc Hoguin <essen@ninenines.eu>
%%
%% Permission to use, copy, modify, and/or distribute this software for any
%% purpose with or without fee is hereby granted, provided that the above
%% copyright notice and this permission notice appear in all copies.
%%
%% THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
%% WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
%% MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
%% ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
%% WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
%% ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
%% OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

%% Handler middleware.
%%
%% Execute the handler given by the <em>handler</em> and <em>handler_opts</em>
%% environment values. The result of this execution is added to the
%% environment under the <em>result</em> value.
-module(cowboy_handler).
-behaviour(cowboy_middleware).

-export([execute/2]).
-export([terminate/4]).

-callback init(Req, any())
    -> {ok | module(), Req, any()}
    | {module(), Req, any(), any()}
    when Req::cowboy_req:req().

-callback terminate(any(), map(), any()) -> ok.
-optional_callbacks([terminate/3]).

-spec execute(Req, Env) -> {ok, Req, Env}
    when Req::cowboy_req:req(), Env::cowboy_middleware:env().
execute(Req, Env=#{handler := Handler, handler_opts := HandlerOpts}) ->
    try Handler:init(Req, HandlerOpts) of
        {ok, Req2, State} ->
            Result = terminate(normal, Req2, State, Handler),
            {ok, Req2, Env#{result => Result}};
        {Mod, Req2, State} ->
            Mod:upgrade(Req2, Env, Handler, State);
        {Mod, Req2, State, Opts} ->
            Mod:upgrade(Req2, Env, Handler, State, Opts)
    catch Class:Reason:Stacktrace ->
        terminate({crash, Class, Reason}, Req, HandlerOpts, Handler),
        erlang:raise(Class, Reason, Stacktrace)
    end.

-spec terminate(any(), Req | undefined, any(), module()) -> ok when Req::cowboy_req:req().
terminate(Reason, Req, State, Handler) ->
    case erlang:function_exported(Handler, terminate, 3) of
        true ->
            Handler:terminate(Reason, Req, State);
        false ->
            ok
    end.
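A minimal plain handler satisfying the init/2 callback above; returning {ok, Req, State} makes cowboy_handler call the optional terminate/3 (when exported) and store the result in the middleware environment. The module name is illustrative.

-module(hello_h).
-export([init/2]).

init(Req0, State) ->
    Req = cowboy_req:reply(200, #{
        <<"content-type">> => <<"text/plain">>
    }, <<"Hello world!">>, Req0),
    {ok, Req, State}.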
File diff suppressed because it is too large
File diff suppressed because it is too large
@@ -0,0 +1,117 @@
%% Copyright (c) 2011-2024, Loïc Hoguin <essen@ninenines.eu>
%%
%% Permission to use, copy, modify, and/or distribute this software for any
%% purpose with or without fee is hereby granted, provided that the above
%% copyright notice and this permission notice appear in all copies.
%%
%% THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
%% WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
%% MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
%% ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
%% WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
%% ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
%% OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

-module(cowboy_loop).
-behaviour(cowboy_sub_protocol).

-export([upgrade/4]).
-export([upgrade/5]).
-export([loop/5]).

-export([system_continue/3]).
-export([system_terminate/4]).
-export([system_code_change/4]).

%% From gen_server.
-define(is_timeout(X), ((X) =:= infinity orelse (is_integer(X) andalso (X) >= 0))).

-callback init(Req, any())
    -> {ok | module(), Req, any()}
    | {module(), Req, any(), any()}
    when Req::cowboy_req:req().

-callback info(any(), Req, State)
    -> {ok, Req, State}
    | {ok, Req, State, hibernate}
    | {stop, Req, State}
    when Req::cowboy_req:req(), State::any().

-callback terminate(any(), cowboy_req:req(), any()) -> ok.
-optional_callbacks([terminate/3]).

-spec upgrade(Req, Env, module(), any())
    -> {ok, Req, Env} | {suspend, ?MODULE, loop, [any()]}
    when Req::cowboy_req:req(), Env::cowboy_middleware:env().
upgrade(Req, Env, Handler, HandlerState) ->
    loop(Req, Env, Handler, HandlerState, infinity).

-spec upgrade(Req, Env, module(), any(), hibernate | timeout())
    -> {suspend, ?MODULE, loop, [any()]}
    when Req::cowboy_req:req(), Env::cowboy_middleware:env().
upgrade(Req, Env, Handler, HandlerState, hibernate) ->
    suspend(Req, Env, Handler, HandlerState);
upgrade(Req, Env, Handler, HandlerState, Timeout) when ?is_timeout(Timeout) ->
    loop(Req, Env, Handler, HandlerState, Timeout).

-spec loop(Req, Env, module(), any(), timeout())
    -> {ok, Req, Env} | {suspend, ?MODULE, loop, [any()]}
    when Req::cowboy_req:req(), Env::cowboy_middleware:env().
%% @todo Handle system messages.
loop(Req=#{pid := Parent}, Env, Handler, HandlerState, Timeout) ->
    receive
        %% System messages.
        {'EXIT', Parent, Reason} ->
            terminate(Req, Env, Handler, HandlerState, Reason);
        {system, From, Request} ->
            sys:handle_system_msg(Request, From, Parent, ?MODULE, [],
                {Req, Env, Handler, HandlerState, Timeout});
        %% Calls from supervisor module.
        {'$gen_call', From, Call} ->
            cowboy_children:handle_supervisor_call(Call, From, [], ?MODULE),
            loop(Req, Env, Handler, HandlerState, Timeout);
        Message ->
            call(Req, Env, Handler, HandlerState, Timeout, Message)
    after Timeout ->
        call(Req, Env, Handler, HandlerState, Timeout, timeout)
    end.

call(Req0, Env, Handler, HandlerState0, Timeout, Message) ->
    try Handler:info(Message, Req0, HandlerState0) of
        {ok, Req, HandlerState} ->
            loop(Req, Env, Handler, HandlerState, Timeout);
        {ok, Req, HandlerState, hibernate} ->
            suspend(Req, Env, Handler, HandlerState);
        {ok, Req, HandlerState, NewTimeout} when ?is_timeout(NewTimeout) ->
            loop(Req, Env, Handler, HandlerState, NewTimeout);
        {stop, Req, HandlerState} ->
            terminate(Req, Env, Handler, HandlerState, stop)
    catch Class:Reason:Stacktrace ->
        cowboy_handler:terminate({crash, Class, Reason}, Req0, HandlerState0, Handler),
        erlang:raise(Class, Reason, Stacktrace)
    end.

suspend(Req, Env, Handler, HandlerState) ->
    {suspend, ?MODULE, loop, [Req, Env, Handler, HandlerState, infinity]}.

terminate(Req, Env, Handler, HandlerState, Reason) ->
    Result = cowboy_handler:terminate(Reason, Req, HandlerState, Handler),
    {ok, Req, Env#{result => Result}}.

%% System callbacks.

-spec system_continue(_, _, {Req, Env, module(), any(), timeout()})
    -> {ok, Req, Env} | {suspend, ?MODULE, loop, [any()]}
    when Req::cowboy_req:req(), Env::cowboy_middleware:env().
system_continue(_, _, {Req, Env, Handler, HandlerState, Timeout}) ->
    loop(Req, Env, Handler, HandlerState, Timeout).

-spec system_terminate(any(), _, _, {Req, Env, module(), any(), timeout()})
    -> {ok, Req, Env} when Req::cowboy_req:req(), Env::cowboy_middleware:env().
system_terminate(Reason, _, _, {Req, Env, Handler, HandlerState, _}) ->
    terminate(Req, Env, Handler, HandlerState, Reason).

-spec system_code_change(Misc, _, _, _) -> {ok, Misc}
    when Misc::{cowboy_req:req(), cowboy_middleware:env(), module(), any()}.
system_code_change(Misc, _, _, _) ->
    {ok, Misc}.
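A minimal loop handler sketch: init/2 switches to cowboy_loop with a timeout (supported by upgrade/5 above), and info/3 replies once a {reply, Body} message arrives. The module name and message shape are assumptions for illustration.

-module(notify_h).
-export([init/2]).
-export([info/3]).

init(Req, State) ->
    %% Switch to the cowboy_loop sub protocol; give up after 60 seconds of silence.
    {cowboy_loop, Req, State, 60000}.

info({reply, Body}, Req0, State) ->
    Req = cowboy_req:reply(200, #{}, Body, Req0),
    {stop, Req, State};
info(_Msg, Req, State) ->
    {ok, Req, State, hibernate}.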
@@ -0,0 +1,331 @@
%% Copyright (c) 2017-2024, Loïc Hoguin <essen@ninenines.eu>
%%
%% Permission to use, copy, modify, and/or distribute this software for any
%% purpose with or without fee is hereby granted, provided that the above
%% copyright notice and this permission notice appear in all copies.
%%
%% THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
%% WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
%% MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
%% ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
%% WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
%% ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
%% OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

-module(cowboy_metrics_h).
-behavior(cowboy_stream).

-export([init/3]).
-export([data/4]).
-export([info/3]).
-export([terminate/3]).
-export([early_error/5]).

-type proc_metrics() :: #{pid() => #{
    %% Time at which the process spawned.
    spawn := integer(),

    %% Time at which the process exited.
    exit => integer(),

    %% Reason for the process exit.
    reason => any()
}}.

-type informational_metrics() :: #{
    %% Informational response status.
    status := cowboy:http_status(),

    %% Headers sent with the informational response.
    headers := cowboy:http_headers(),

    %% Time when the informational response was sent.
    time := integer()
}.

-type metrics() :: #{
    %% The identifier for this listener.
    ref := ranch:ref(),

    %% The pid for this connection.
    pid := pid(),

    %% The streamid also indicates the total number of requests on
    %% this connection (StreamID div 2 + 1).
    streamid := cowboy_stream:streamid(),

    %% The terminate reason is always useful.
    reason := cowboy_stream:reason(),

    %% A filtered Req object or a partial Req object
    %% depending on how far the request got to.
    req => cowboy_req:req(),
    partial_req => cowboy_stream:partial_req(),

    %% Response status.
    resp_status := cowboy:http_status(),

    %% Filtered response headers.
    resp_headers := cowboy:http_headers(),

    %% Start/end of the processing of the request.
    %%
    %% This represents the time from this stream handler's init
    %% to terminate.
    req_start => integer(),
    req_end => integer(),

    %% Start/end of the receiving of the request body.
    %% Begins when the first packet has been received.
    req_body_start => integer(),
    req_body_end => integer(),

    %% Start/end of the sending of the response.
    %% Begins when we send the headers and ends on the final
    %% packet of the response body. If everything is sent at
    %% once these values are identical.
    resp_start => integer(),
    resp_end => integer(),

    %% For early errors all we get is the time we received it.
    early_error_time => integer(),

    %% Start/end of spawned processes. This is where most of
    %% the user code lies, excluding stream handlers. On a
    %% default Cowboy configuration there should be only one
    %% process: the request process.
    procs => proc_metrics(),

    %% Informational responses sent before the final response.
    informational => [informational_metrics()],

    %% Length of the request and response bodies. This does
    %% not include the framing.
    req_body_length => non_neg_integer(),
    resp_body_length => non_neg_integer(),

    %% Additional metadata set by the user.
    user_data => map()
}.
-export_type([metrics/0]).

-type metrics_callback() :: fun((metrics()) -> any()).
-export_type([metrics_callback/0]).

-record(state, {
    next :: any(),
    callback :: fun((metrics()) -> any()),
    resp_headers_filter :: undefined | fun((cowboy:http_headers()) -> cowboy:http_headers()),
    req :: map(),
    resp_status :: undefined | cowboy:http_status(),
    resp_headers :: undefined | cowboy:http_headers(),
    ref :: ranch:ref(),
    req_start :: integer(),
    req_end :: undefined | integer(),
    req_body_start :: undefined | integer(),
    req_body_end :: undefined | integer(),
    resp_start :: undefined | integer(),
    resp_end :: undefined | integer(),
    procs = #{} :: proc_metrics(),
    informational = [] :: [informational_metrics()],
    req_body_length = 0 :: non_neg_integer(),
    resp_body_length = 0 :: non_neg_integer(),
    user_data = #{} :: map()
}).

-spec init(cowboy_stream:streamid(), cowboy_req:req(), cowboy:opts())
    -> {[{spawn, pid(), timeout()}], #state{}}.
init(StreamID, Req=#{ref := Ref}, Opts=#{metrics_callback := Fun}) ->
    ReqStart = erlang:monotonic_time(),
    {Commands, Next} = cowboy_stream:init(StreamID, Req, Opts),
    FilteredReq = case maps:get(metrics_req_filter, Opts, undefined) of
        undefined -> Req;
        ReqFilter -> ReqFilter(Req)
    end,
    RespHeadersFilter = maps:get(metrics_resp_headers_filter, Opts, undefined),
    {Commands, fold(Commands, #state{
        next=Next,
        callback=Fun,
        resp_headers_filter=RespHeadersFilter,
        req=FilteredReq,
        ref=Ref,
        req_start=ReqStart
    })}.

-spec data(cowboy_stream:streamid(), cowboy_stream:fin(), cowboy_req:resp_body(), State)
    -> {cowboy_stream:commands(), State} when State::#state{}.
data(StreamID, IsFin=fin, Data, State=#state{req_body_start=undefined}) ->
    ReqBody = erlang:monotonic_time(),
    do_data(StreamID, IsFin, Data, State#state{
        req_body_start=ReqBody,
        req_body_end=ReqBody,
        req_body_length=byte_size(Data)
    });
data(StreamID, IsFin=fin, Data, State=#state{req_body_length=ReqBodyLen}) ->
    ReqBodyEnd = erlang:monotonic_time(),
    do_data(StreamID, IsFin, Data, State#state{
        req_body_end=ReqBodyEnd,
        req_body_length=ReqBodyLen + byte_size(Data)
    });
data(StreamID, IsFin, Data, State=#state{req_body_start=undefined}) ->
    ReqBodyStart = erlang:monotonic_time(),
    do_data(StreamID, IsFin, Data, State#state{
        req_body_start=ReqBodyStart,
        req_body_length=byte_size(Data)
    });
data(StreamID, IsFin, Data, State=#state{req_body_length=ReqBodyLen}) ->
    do_data(StreamID, IsFin, Data, State#state{
        req_body_length=ReqBodyLen + byte_size(Data)
    }).

do_data(StreamID, IsFin, Data, State0=#state{next=Next0}) ->
    {Commands, Next} = cowboy_stream:data(StreamID, IsFin, Data, Next0),
    {Commands, fold(Commands, State0#state{next=Next})}.

-spec info(cowboy_stream:streamid(), any(), State)
    -> {cowboy_stream:commands(), State} when State::#state{}.
info(StreamID, Info={'EXIT', Pid, Reason}, State0=#state{procs=Procs}) ->
    ProcEnd = erlang:monotonic_time(),
    P = maps:get(Pid, Procs),
    State = State0#state{procs=Procs#{Pid => P#{
        exit => ProcEnd,
        reason => Reason
    }}},
    do_info(StreamID, Info, State);
info(StreamID, Info, State) ->
    do_info(StreamID, Info, State).

do_info(StreamID, Info, State0=#state{next=Next0}) ->
    {Commands, Next} = cowboy_stream:info(StreamID, Info, Next0),
    {Commands, fold(Commands, State0#state{next=Next})}.

fold([], State) ->
    State;
fold([{spawn, Pid, _}|Tail], State0=#state{procs=Procs}) ->
    ProcStart = erlang:monotonic_time(),
    State = State0#state{procs=Procs#{Pid => #{spawn => ProcStart}}},
    fold(Tail, State);
fold([{inform, Status, Headers}|Tail],
        State=#state{informational=Infos}) ->
    Time = erlang:monotonic_time(),
    fold(Tail, State#state{informational=[#{
        status => Status,
        headers => Headers,
        time => Time
    }|Infos]});
fold([{response, Status, Headers, Body}|Tail],
        State=#state{resp_headers_filter=RespHeadersFilter}) ->
    Resp = erlang:monotonic_time(),
    fold(Tail, State#state{
        resp_status=Status,
        resp_headers=case RespHeadersFilter of
            undefined -> Headers;
            _ -> RespHeadersFilter(Headers)
        end,
        resp_start=Resp,
        resp_end=Resp,
        resp_body_length=resp_body_length(Body)
    });
fold([{error_response, Status, Headers, Body}|Tail],
        State=#state{resp_status=RespStatus}) ->
    %% The error_response command only results in a response
    %% if no response was sent before.
    case RespStatus of
        undefined ->
            fold([{response, Status, Headers, Body}|Tail], State);
        _ ->
            fold(Tail, State)
    end;
fold([{headers, Status, Headers}|Tail],
        State=#state{resp_headers_filter=RespHeadersFilter}) ->
    RespStart = erlang:monotonic_time(),
    fold(Tail, State#state{
        resp_status=Status,
        resp_headers=case RespHeadersFilter of
            undefined -> Headers;
            _ -> RespHeadersFilter(Headers)
        end,
        resp_start=RespStart
    });
%% @todo It might be worthwhile to keep the sendfile information around,
%% especially if these frames ultimately result in a sendfile syscall.
fold([{data, nofin, Data}|Tail], State=#state{resp_body_length=RespBodyLen}) ->
    fold(Tail, State#state{
        resp_body_length=RespBodyLen + resp_body_length(Data)
    });
fold([{data, fin, Data}|Tail], State=#state{resp_body_length=RespBodyLen}) ->
    RespEnd = erlang:monotonic_time(),
    fold(Tail, State#state{
        resp_end=RespEnd,
        resp_body_length=RespBodyLen + resp_body_length(Data)
    });
fold([{set_options, SetOpts}|Tail], State0=#state{user_data=OldUserData}) ->
    State = case SetOpts of
        #{metrics_user_data := NewUserData} ->
            State0#state{user_data=maps:merge(OldUserData, NewUserData)};
        _ ->
            State0
    end,
    fold(Tail, State);
fold([_|Tail], State) ->
    fold(Tail, State).

-spec terminate(cowboy_stream:streamid(), cowboy_stream:reason(), #state{}) -> any().
terminate(StreamID, Reason, #state{next=Next, callback=Fun,
        req=Req, resp_status=RespStatus, resp_headers=RespHeaders, ref=Ref,
        req_start=ReqStart, req_body_start=ReqBodyStart,
        req_body_end=ReqBodyEnd, resp_start=RespStart, resp_end=RespEnd,
        procs=Procs, informational=Infos, user_data=UserData,
        req_body_length=ReqBodyLen, resp_body_length=RespBodyLen}) ->
    Res = cowboy_stream:terminate(StreamID, Reason, Next),
    ReqEnd = erlang:monotonic_time(),
    Metrics = #{
        ref => Ref,
        pid => self(),
        streamid => StreamID,
        reason => Reason,
        req => Req,
        resp_status => RespStatus,
        resp_headers => RespHeaders,
        req_start => ReqStart,
        req_end => ReqEnd,
        req_body_start => ReqBodyStart,
        req_body_end => ReqBodyEnd,
        resp_start => RespStart,
        resp_end => RespEnd,
        procs => Procs,
        informational => lists:reverse(Infos),
        req_body_length => ReqBodyLen,
        resp_body_length => RespBodyLen,
        user_data => UserData
    },
    Fun(Metrics),
    Res.

-spec early_error(cowboy_stream:streamid(), cowboy_stream:reason(),
    cowboy_stream:partial_req(), Resp, cowboy:opts()) -> Resp
    when Resp::cowboy_stream:resp_command().
early_error(StreamID, Reason, PartialReq=#{ref := Ref}, Resp0, Opts=#{metrics_callback := Fun}) ->
    Time = erlang:monotonic_time(),
    Resp = {response, RespStatus, RespHeaders, RespBody}
        = cowboy_stream:early_error(StreamID, Reason, PartialReq, Resp0, Opts),
    %% As far as metrics go we are limited in what we can provide
    %% in this case.
    Metrics = #{
        ref => Ref,
        pid => self(),
        streamid => StreamID,
        reason => Reason,
        partial_req => PartialReq,
        resp_status => RespStatus,
        resp_headers => RespHeaders,
        early_error_time => Time,
        resp_body_length => resp_body_length(RespBody)
    },
    Fun(Metrics),
    Resp.

resp_body_length({sendfile, _, Len, _}) ->
    Len;
resp_body_length(Data) ->
    iolist_size(Data).
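A sketch of wiring a metrics_callback into a listener. The callback below only logs the response status and total duration for completed requests and ignores early-error metrics, which carry a smaller map; the module and listener names are assumptions.

-module(metrics_example).
-export([start/0]).
-export([log_metrics/1]).

log_metrics(#{resp_status := Status, req_start := Start, req_end := End}) ->
    DurationMs = erlang:convert_time_unit(End - Start, native, millisecond),
    io:format("status=~p duration_ms=~p~n", [Status, DurationMs]);
log_metrics(_EarlyErrorMetrics) ->
    ok.

start() ->
    Dispatch = cowboy_router:compile([{'_', [{"/", hello_h, []}]}]),
    {ok, _} = cowboy:start_clear(metrics_example_http, [{port, 8080}], #{
        env => #{dispatch => Dispatch},
        stream_handlers => [cowboy_metrics_h, cowboy_stream_h],
        metrics_callback => fun ?MODULE:log_metrics/1
    }).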
@@ -0,0 +1,24 @@
%% Copyright (c) 2013-2024, Loïc Hoguin <essen@ninenines.eu>
%%
%% Permission to use, copy, modify, and/or distribute this software for any
%% purpose with or without fee is hereby granted, provided that the above
%% copyright notice and this permission notice appear in all copies.
%%
%% THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
%% WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
%% MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
%% ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
%% WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
%% ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
%% OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

-module(cowboy_middleware).

-type env() :: #{atom() => any()}.
-export_type([env/0]).

-callback execute(Req, Env)
    -> {ok, Req, Env}
    | {suspend, module(), atom(), [any()]}
    | {stop, Req}
    when Req::cowboy_req:req(), Env::env().
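A minimal middleware sketch implementing the execute/2 callback above: it stops the chain with a 401 when no authorization header is present, and otherwise continues unchanged. The module name and header check are illustrative only.

-module(example_auth_middleware).
-behaviour(cowboy_middleware).
-export([execute/2]).

execute(Req, Env) ->
    case cowboy_req:header(<<"authorization">>, Req) of
        undefined ->
            Req1 = cowboy_req:reply(401, #{}, <<>>, Req),
            {stop, Req1};
        _ ->
            {ok, Req, Env}
    end.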
File diff suppressed because it is too large
File diff suppressed because it is too large
@ -0,0 +1,603 @@
|
||||||
|
%% Copyright (c) 2011-2024, Loïc Hoguin <essen@ninenines.eu>
|
||||||
|
%%
|
||||||
|
%% Permission to use, copy, modify, and/or distribute this software for any
|
||||||
|
%% purpose with or without fee is hereby granted, provided that the above
|
||||||
|
%% copyright notice and this permission notice appear in all copies.
|
||||||
|
%%
|
||||||
|
%% THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
|
||||||
|
%% WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
|
||||||
|
%% MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
|
||||||
|
%% ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
|
||||||
|
%% WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
|
||||||
|
%% ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
|
||||||
|
%% OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
|
||||||
|
|
||||||
|
%% Routing middleware.
|
||||||
|
%%
|
||||||
|
%% Resolve the handler to be used for the request based on the
|
||||||
|
%% routing information found in the <em>dispatch</em> environment value.
|
||||||
|
%% When found, the handler module and associated data are added to
|
||||||
|
%% the environment as the <em>handler</em> and <em>handler_opts</em> values
|
||||||
|
%% respectively.
|
||||||
|
%%
|
||||||
|
%% If the route cannot be found, processing stops with either
|
||||||
|
%% a 400 or a 404 reply.
-module(cowboy_router).
-behaviour(cowboy_middleware).

-export([compile/1]).
-export([execute/2]).

-type bindings() :: #{atom() => any()}.
-type tokens() :: [binary()].
-export_type([bindings/0]).
-export_type([tokens/0]).

-type route_match() :: '_' | iodata().
-type route_path() :: {Path::route_match(), Handler::module(), Opts::any()}
	| {Path::route_match(), cowboy:fields(), Handler::module(), Opts::any()}.
-type route_rule() :: {Host::route_match(), Paths::[route_path()]}
	| {Host::route_match(), cowboy:fields(), Paths::[route_path()]}.
-type routes() :: [route_rule()].
-export_type([routes/0]).

-type dispatch_match() :: '_' | <<_:8>> | [binary() | '_' | '...' | atom()].
-type dispatch_path() :: {dispatch_match(), cowboy:fields(), module(), any()}.
-type dispatch_rule() :: {Host::dispatch_match(), cowboy:fields(), Paths::[dispatch_path()]}.
-opaque dispatch_rules() :: [dispatch_rule()].
-export_type([dispatch_rules/0]).

-spec compile(routes()) -> dispatch_rules().
compile(Routes) ->
	compile(Routes, []).

compile([], Acc) ->
	lists:reverse(Acc);
compile([{Host, Paths}|Tail], Acc) ->
	compile([{Host, [], Paths}|Tail], Acc);
compile([{HostMatch, Fields, Paths}|Tail], Acc) ->
	HostRules = case HostMatch of
		'_' -> '_';
		_ -> compile_host(HostMatch)
	end,
	PathRules = compile_paths(Paths, []),
	Hosts = case HostRules of
		'_' -> [{'_', Fields, PathRules}];
		_ -> [{R, Fields, PathRules} || R <- HostRules]
	end,
	compile(Tail, Hosts ++ Acc).

compile_host(HostMatch) when is_list(HostMatch) ->
	compile_host(list_to_binary(HostMatch));
compile_host(HostMatch) when is_binary(HostMatch) ->
	compile_rules(HostMatch, $., [], [], <<>>).

compile_paths([], Acc) ->
	lists:reverse(Acc);
compile_paths([{PathMatch, Handler, Opts}|Tail], Acc) ->
	compile_paths([{PathMatch, [], Handler, Opts}|Tail], Acc);
compile_paths([{PathMatch, Fields, Handler, Opts}|Tail], Acc)
		when is_list(PathMatch) ->
	compile_paths([{iolist_to_binary(PathMatch),
		Fields, Handler, Opts}|Tail], Acc);
compile_paths([{'_', Fields, Handler, Opts}|Tail], Acc) ->
	compile_paths(Tail, [{'_', Fields, Handler, Opts}] ++ Acc);
compile_paths([{<<"*">>, Fields, Handler, Opts}|Tail], Acc) ->
	compile_paths(Tail, [{<<"*">>, Fields, Handler, Opts}|Acc]);
compile_paths([{<< $/, PathMatch/bits >>, Fields, Handler, Opts}|Tail],
		Acc) ->
	PathRules = compile_rules(PathMatch, $/, [], [], <<>>),
	Paths = [{lists:reverse(R), Fields, Handler, Opts} || R <- PathRules],
	compile_paths(Tail, Paths ++ Acc);
compile_paths([{PathMatch, _, _, _}|_], _) ->
	error({badarg, "The following route MUST begin with a slash: "
		++ binary_to_list(PathMatch)}).

compile_rules(<<>>, _, Segments, Rules, <<>>) ->
	[Segments|Rules];
compile_rules(<<>>, _, Segments, Rules, Acc) ->
	[[Acc|Segments]|Rules];
compile_rules(<< S, Rest/bits >>, S, Segments, Rules, <<>>) ->
	compile_rules(Rest, S, Segments, Rules, <<>>);
compile_rules(<< S, Rest/bits >>, S, Segments, Rules, Acc) ->
	compile_rules(Rest, S, [Acc|Segments], Rules, <<>>);
%% Colon on path segment start is special, otherwise allow.
compile_rules(<< $:, Rest/bits >>, S, Segments, Rules, <<>>) ->
	{NameBin, Rest2} = compile_binding(Rest, S, <<>>),
	Name = binary_to_atom(NameBin, utf8),
	compile_rules(Rest2, S, Segments, Rules, Name);
compile_rules(<< $[, $., $., $., $], Rest/bits >>, S, Segments, Rules, Acc)
		when Acc =:= <<>> ->
	compile_rules(Rest, S, ['...'|Segments], Rules, Acc);
compile_rules(<< $[, $., $., $., $], Rest/bits >>, S, Segments, Rules, Acc) ->
	compile_rules(Rest, S, ['...', Acc|Segments], Rules, Acc);
compile_rules(<< $[, S, Rest/bits >>, S, Segments, Rules, Acc) ->
	compile_brackets(Rest, S, [Acc|Segments], Rules);
compile_rules(<< $[, Rest/bits >>, S, Segments, Rules, <<>>) ->
	compile_brackets(Rest, S, Segments, Rules);
%% Open bracket in the middle of a segment.
compile_rules(<< $[, _/bits >>, _, _, _, _) ->
	error(badarg);
%% Missing an open bracket.
compile_rules(<< $], _/bits >>, _, _, _, _) ->
	error(badarg);
compile_rules(<< C, Rest/bits >>, S, Segments, Rules, Acc) ->
	compile_rules(Rest, S, Segments, Rules, << Acc/binary, C >>).

%% Everything past $: until the segment separator ($. for hosts,
%% $/ for paths) or $[ or $] or end of binary is the binding name.
compile_binding(<<>>, _, <<>>) ->
	error(badarg);
compile_binding(Rest = <<>>, _, Acc) ->
	{Acc, Rest};
compile_binding(Rest = << C, _/bits >>, S, Acc)
		when C =:= S; C =:= $[; C =:= $] ->
	{Acc, Rest};
compile_binding(<< C, Rest/bits >>, S, Acc) ->
	compile_binding(Rest, S, << Acc/binary, C >>).

compile_brackets(Rest, S, Segments, Rules) ->
	{Bracket, Rest2} = compile_brackets_split(Rest, <<>>, 0),
	Rules1 = compile_rules(Rest2, S, Segments, [], <<>>),
	Rules2 = compile_rules(<< Bracket/binary, Rest2/binary >>,
		S, Segments, [], <<>>),
	Rules ++ Rules2 ++ Rules1.

%% Missing a close bracket.
compile_brackets_split(<<>>, _, _) ->
	error(badarg);
%% Make sure we don't confuse the closing bracket we're looking for.
compile_brackets_split(<< C, Rest/bits >>, Acc, N) when C =:= $[ ->
	compile_brackets_split(Rest, << Acc/binary, C >>, N + 1);
compile_brackets_split(<< C, Rest/bits >>, Acc, N) when C =:= $], N > 0 ->
	compile_brackets_split(Rest, << Acc/binary, C >>, N - 1);
%% That's the right one.
compile_brackets_split(<< $], Rest/bits >>, Acc, 0) ->
	{Acc, Rest};
compile_brackets_split(<< C, Rest/bits >>, Acc, N) ->
	compile_brackets_split(Rest, << Acc/binary, C >>, N).

-spec execute(Req, Env)
	-> {ok, Req, Env} | {stop, Req}
	when Req::cowboy_req:req(), Env::cowboy_middleware:env().
execute(Req=#{host := Host, path := Path}, Env=#{dispatch := Dispatch0}) ->
	Dispatch = case Dispatch0 of
		{persistent_term, Key} -> persistent_term:get(Key);
		_ -> Dispatch0
	end,
	case match(Dispatch, Host, Path) of
		{ok, Handler, HandlerOpts, Bindings, HostInfo, PathInfo} ->
			{ok, Req#{
				host_info => HostInfo,
				path_info => PathInfo,
				bindings => Bindings
			}, Env#{
				handler => Handler,
				handler_opts => HandlerOpts
			}};
		{error, notfound, host} ->
			{stop, cowboy_req:reply(400, Req)};
		{error, badrequest, path} ->
			{stop, cowboy_req:reply(400, Req)};
		{error, notfound, path} ->
			{stop, cowboy_req:reply(404, Req)}
	end.

%% Internal.

%% Match hostname tokens and path tokens against dispatch rules.
%%
%% It is typically used for matching tokens for the hostname and path of
%% the request against a global dispatch rule for your listener.
%%
%% Dispatch rules are a list of <em>{Hostname, PathRules}</em> tuples, with
%% <em>PathRules</em> being a list of <em>{Path, HandlerMod, HandlerOpts}</em>.
%%
%% <em>Hostname</em> and <em>Path</em> are match rules and can be either the
%% atom <em>'_'</em>, which matches everything, `<<"*">>', which matches the
%% wildcard path, or a list of tokens.
%%
%% Each token can be either a binary, the atom <em>'_'</em>,
%% the atom '...' or a named atom. A binary token must match exactly,
%% <em>'_'</em> matches everything for a single token, <em>'...'</em> matches
%% everything for the rest of the tokens and a named atom will bind the
%% corresponding token value and return it.
%%
%% The list of hostname tokens is reversed before matching. For example, if
%% we were to match "www.ninenines.eu", we would first match "eu", then
%% "ninenines", then "www". This means that in the context of hostnames,
%% the <em>'...'</em> atom matches properly the lower levels of the domain
%% as would be expected.
%%
%% When a result is found, this function will return the handler module and
%% options found in the dispatch list, a key-value list of bindings and
%% the tokens that were matched by the <em>'...'</em> atom for both the
%% hostname and path.
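%%
%% For example (illustrative values), with the compiled rule
%%	{[<<"eu">>, <<"ninenines">>, '...'], [], [{[<<"users">>, id], [], h, o}]}
%% a request to host "dev.ninenines.eu" with path "/users/42" returns
%%	{ok, h, o, #{id => <<"42">>}, [<<"dev">>], undefined}
%% where [<<"dev">>] is the HostInfo matched by '...' and PathInfo is
%% undefined because the path rule contains no '...'.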
-spec match(dispatch_rules(), Host::binary() | tokens(), Path::binary())
	-> {ok, module(), any(), bindings(),
		HostInfo::undefined | tokens(),
		PathInfo::undefined | tokens()}
	| {error, notfound, host} | {error, notfound, path}
	| {error, badrequest, path}.
match([], _, _) ->
	{error, notfound, host};
%% If the host is '_' then there can be no constraints.
match([{'_', [], PathMatchs}|_Tail], _, Path) ->
	match_path(PathMatchs, undefined, Path, #{});
match([{HostMatch, Fields, PathMatchs}|Tail], Tokens, Path)
		when is_list(Tokens) ->
	case list_match(Tokens, HostMatch, #{}) of
		false ->
			match(Tail, Tokens, Path);
		{true, Bindings, HostInfo} ->
			HostInfo2 = case HostInfo of
				undefined -> undefined;
				_ -> lists:reverse(HostInfo)
			end,
			case check_constraints(Fields, Bindings) of
				{ok, Bindings2} ->
					match_path(PathMatchs, HostInfo2, Path, Bindings2);
				nomatch ->
					match(Tail, Tokens, Path)
			end
	end;
match(Dispatch, Host, Path) ->
	match(Dispatch, split_host(Host), Path).

-spec match_path([dispatch_path()],
	HostInfo::undefined | tokens(), binary() | tokens(), bindings())
	-> {ok, module(), any(), bindings(),
		HostInfo::undefined | tokens(),
		PathInfo::undefined | tokens()}
	| {error, notfound, path} | {error, badrequest, path}.
match_path([], _, _, _) ->
	{error, notfound, path};
%% If the path is '_' then there can be no constraints.
match_path([{'_', [], Handler, Opts}|_Tail], HostInfo, _, Bindings) ->
	{ok, Handler, Opts, Bindings, HostInfo, undefined};
match_path([{<<"*">>, _, Handler, Opts}|_Tail], HostInfo, <<"*">>, Bindings) ->
	{ok, Handler, Opts, Bindings, HostInfo, undefined};
match_path([_|Tail], HostInfo, <<"*">>, Bindings) ->
	match_path(Tail, HostInfo, <<"*">>, Bindings);
match_path([{PathMatch, Fields, Handler, Opts}|Tail], HostInfo, Tokens,
		Bindings) when is_list(Tokens) ->
	case list_match(Tokens, PathMatch, Bindings) of
		false ->
			match_path(Tail, HostInfo, Tokens, Bindings);
		{true, PathBinds, PathInfo} ->
			case check_constraints(Fields, PathBinds) of
				{ok, PathBinds2} ->
					{ok, Handler, Opts, PathBinds2, HostInfo, PathInfo};
				nomatch ->
					match_path(Tail, HostInfo, Tokens, Bindings)
			end
	end;
match_path(_Dispatch, _HostInfo, badrequest, _Bindings) ->
	{error, badrequest, path};
match_path(Dispatch, HostInfo, Path, Bindings) ->
	match_path(Dispatch, HostInfo, split_path(Path), Bindings).

check_constraints([], Bindings) ->
	{ok, Bindings};
check_constraints([Field|Tail], Bindings) when is_atom(Field) ->
	check_constraints(Tail, Bindings);
check_constraints([Field|Tail], Bindings) ->
	Name = element(1, Field),
	case Bindings of
		#{Name := Value0} ->
			Constraints = element(2, Field),
			case cowboy_constraints:validate(Value0, Constraints) of
				{ok, Value} ->
					check_constraints(Tail, Bindings#{Name => Value});
				{error, _} ->
					nomatch
			end;
		_ ->
			check_constraints(Tail, Bindings)
	end.

-spec split_host(binary()) -> tokens().
split_host(Host) ->
	split_host(Host, []).

split_host(Host, Acc) ->
	case binary:match(Host, <<".">>) of
		nomatch when Host =:= <<>> ->
			Acc;
		nomatch ->
			[Host|Acc];
		{Pos, _} ->
			<< Segment:Pos/binary, _:8, Rest/bits >> = Host,
			false = byte_size(Segment) == 0,
			split_host(Rest, [Segment|Acc])
	end.

%% Following RFC2396, this function may return path segments containing any
%% character, including <em>/</em> if, and only if, a <em>/</em> was escaped
%% and part of a path segment.
-spec split_path(binary()) -> tokens() | badrequest.
split_path(<< $/, Path/bits >>) ->
	split_path(Path, []);
split_path(_) ->
	badrequest.

split_path(Path, Acc) ->
	try
		case binary:match(Path, <<"/">>) of
			nomatch when Path =:= <<>> ->
				remove_dot_segments(lists:reverse([cow_uri:urldecode(S) || S <- Acc]), []);
			nomatch ->
				remove_dot_segments(lists:reverse([cow_uri:urldecode(S) || S <- [Path|Acc]]), []);
			{Pos, _} ->
				<< Segment:Pos/binary, _:8, Rest/bits >> = Path,
				split_path(Rest, [Segment|Acc])
		end
	catch error:_ ->
		badrequest
	end.

remove_dot_segments([], Acc) ->
	lists:reverse(Acc);
remove_dot_segments([<<".">>|Segments], Acc) ->
	remove_dot_segments(Segments, Acc);
remove_dot_segments([<<"..">>|Segments], Acc=[]) ->
	remove_dot_segments(Segments, Acc);
remove_dot_segments([<<"..">>|Segments], [_|Acc]) ->
	remove_dot_segments(Segments, Acc);
remove_dot_segments([S|Segments], Acc) ->
	remove_dot_segments(Segments, [S|Acc]).

-ifdef(TEST).
remove_dot_segments_test_() ->
	Tests = [
		{[<<"a">>, <<"b">>, <<"c">>, <<".">>, <<"..">>, <<"..">>, <<"g">>], [<<"a">>, <<"g">>]},
		{[<<"mid">>, <<"content=5">>, <<"..">>, <<"6">>], [<<"mid">>, <<"6">>]},
		{[<<"..">>, <<"a">>], [<<"a">>]}
	],
	[fun() -> R = remove_dot_segments(S, []) end || {S, R} <- Tests].
-endif.

-spec list_match(tokens(), dispatch_match(), bindings())
	-> {true, bindings(), undefined | tokens()} | false.
%% Atom '...' matches any trailing path, stop right now.
list_match(List, ['...'], Binds) ->
	{true, Binds, List};
%% Atom '_' matches anything, continue.
list_match([_E|Tail], ['_'|TailMatch], Binds) ->
	list_match(Tail, TailMatch, Binds);
%% Both values match, continue.
list_match([E|Tail], [E|TailMatch], Binds) ->
	list_match(Tail, TailMatch, Binds);
%% Bind E to the variable name V and continue,
%% unless V was already defined and E isn't identical to the previous value.
list_match([E|Tail], [V|TailMatch], Binds) when is_atom(V) ->
	case Binds of
		%% @todo This isn't right, the constraint must be applied FIRST
		%% otherwise we can't check for example ints in both host/path.
		#{V := E} ->
			list_match(Tail, TailMatch, Binds);
		#{V := _} ->
			false;
		_ ->
			list_match(Tail, TailMatch, Binds#{V => E})
	end;
%% Match complete.
list_match([], [], Binds) ->
	{true, Binds, undefined};
%% Values don't match, stop.
list_match(_List, _Match, _Binds) ->
	false.

%% Tests.

-ifdef(TEST).
compile_test_() ->
	Tests = [
		%% Match any host and path.
		{[{'_', [{'_', h, o}]}],
			[{'_', [], [{'_', [], h, o}]}]},
		{[{"cowboy.example.org",
				[{"/", ha, oa}, {"/path/to/resource", hb, ob}]}],
			[{[<<"org">>, <<"example">>, <<"cowboy">>], [], [
				{[], [], ha, oa},
				{[<<"path">>, <<"to">>, <<"resource">>], [], hb, ob}]}]},
		{[{'_', [{"/path/to/resource/", h, o}]}],
			[{'_', [], [{[<<"path">>, <<"to">>, <<"resource">>], [], h, o}]}]},
		% Cyrillic from a latin1 encoded file.
		{[{'_', [{[47,208,191,209,131,209,130,209,140,47,208,186,47,209,128,
				208,181,209,129,209,131,209,128,209,129,209,131,47], h, o}]}],
			[{'_', [], [{[<<208,191,209,131,209,130,209,140>>, <<208,186>>,
				<<209,128,208,181,209,129,209,131,209,128,209,129,209,131>>],
				[], h, o}]}]},
		{[{"cowboy.example.org.", [{'_', h, o}]}],
			[{[<<"org">>, <<"example">>, <<"cowboy">>], [], [{'_', [], h, o}]}]},
		{[{".cowboy.example.org", [{'_', h, o}]}],
			[{[<<"org">>, <<"example">>, <<"cowboy">>], [], [{'_', [], h, o}]}]},
		% Cyrillic from a latin1 encoded file.
		{[{[208,189,208,181,208,186,208,184,208,185,46,209,129,208,176,
				208,185,209,130,46,209,128,209,132,46], [{'_', h, o}]}],
			[{[<<209,128,209,132>>, <<209,129,208,176,208,185,209,130>>,
				<<208,189,208,181,208,186,208,184,208,185>>],
				[], [{'_', [], h, o}]}]},
		{[{":subdomain.example.org", [{"/hats/:name/prices", h, o}]}],
			[{[<<"org">>, <<"example">>, subdomain], [], [
				{[<<"hats">>, name, <<"prices">>], [], h, o}]}]},
		{[{"ninenines.:_", [{"/hats/:_", h, o}]}],
			[{['_', <<"ninenines">>], [], [{[<<"hats">>, '_'], [], h, o}]}]},
		{[{"[www.]ninenines.eu",
				[{"/horses", h, o}, {"/hats/[page/:number]", h, o}]}], [
			{[<<"eu">>, <<"ninenines">>], [], [
				{[<<"horses">>], [], h, o},
				{[<<"hats">>], [], h, o},
				{[<<"hats">>, <<"page">>, number], [], h, o}]},
			{[<<"eu">>, <<"ninenines">>, <<"www">>], [], [
				{[<<"horses">>], [], h, o},
				{[<<"hats">>], [], h, o},
				{[<<"hats">>, <<"page">>, number], [], h, o}]}]},
		{[{'_', [{"/hats/:page/:number", h, o}]}], [{'_', [], [
			{[<<"hats">>, page, number], [], h, o}]}]},
		{[{'_', [{"/hats/[page/[:number]]", h, o}]}], [{'_', [], [
			{[<<"hats">>], [], h, o},
			{[<<"hats">>, <<"page">>], [], h, o},
			{[<<"hats">>, <<"page">>, number], [], h, o}]}]},
		{[{"[...]ninenines.eu", [{"/hats/[...]", h, o}]}],
			[{[<<"eu">>, <<"ninenines">>, '...'], [], [
				{[<<"hats">>, '...'], [], h, o}]}]},
		%% Path segment containing a colon.
		{[{'_', [{"/foo/bar:blah", h, o}]}], [{'_', [], [
			{[<<"foo">>, <<"bar:blah">>], [], h, o}]}]}
	],
	[{lists:flatten(io_lib:format("~p", [Rt])),
		fun() -> Rs = compile(Rt) end} || {Rt, Rs} <- Tests].

split_host_test_() ->
	Tests = [
		{<<"">>, []},
		{<<"*">>, [<<"*">>]},
		{<<"cowboy.ninenines.eu">>,
			[<<"eu">>, <<"ninenines">>, <<"cowboy">>]},
		{<<"ninenines.eu">>,
			[<<"eu">>, <<"ninenines">>]},
		{<<"ninenines.eu.">>,
			[<<"eu">>, <<"ninenines">>]},
		{<<"a.b.c.d.e.f.g.h.i.j.k.l.m.n.o.p.q.r.s.t.u.v.w.x.y.z">>,
			[<<"z">>, <<"y">>, <<"x">>, <<"w">>, <<"v">>, <<"u">>, <<"t">>,
			<<"s">>, <<"r">>, <<"q">>, <<"p">>, <<"o">>, <<"n">>, <<"m">>,
			<<"l">>, <<"k">>, <<"j">>, <<"i">>, <<"h">>, <<"g">>, <<"f">>,
			<<"e">>, <<"d">>, <<"c">>, <<"b">>, <<"a">>]}
	],
	[{H, fun() -> R = split_host(H) end} || {H, R} <- Tests].

split_path_test_() ->
	Tests = [
		{<<"/">>, []},
		{<<"/extend//cowboy">>, [<<"extend">>, <<>>, <<"cowboy">>]},
		{<<"/users">>, [<<"users">>]},
		{<<"/users/42/friends">>, [<<"users">>, <<"42">>, <<"friends">>]},
		{<<"/users/a%20b/c%21d">>, [<<"users">>, <<"a b">>, <<"c!d">>]}
	],
	[{P, fun() -> R = split_path(P) end} || {P, R} <- Tests].

match_test_() ->
	Dispatch = [
		{[<<"eu">>, <<"ninenines">>, '_', <<"www">>], [], [
			{[<<"users">>, '_', <<"mails">>], [], match_any_subdomain_users, []}
		]},
		{[<<"eu">>, <<"ninenines">>], [], [
			{[<<"users">>, id, <<"friends">>], [], match_extend_users_friends, []},
			{'_', [], match_extend, []}
		]},
		{[var, <<"ninenines">>], [], [
			{[<<"threads">>, var], [], match_duplicate_vars,
				[we, {expect, two}, var, here]}
		]},
		{[ext, <<"erlang">>], [], [
			{'_', [], match_erlang_ext, []}
		]},
		{'_', [], [
			{[<<"users">>, id, <<"friends">>], [], match_users_friends, []},
			{'_', [], match_any, []}
		]}
	],
	Tests = [
		{<<"any">>, <<"/">>, {ok, match_any, [], #{}}},
		{<<"www.any.ninenines.eu">>, <<"/users/42/mails">>,
			{ok, match_any_subdomain_users, [], #{}}},
		{<<"www.ninenines.eu">>, <<"/users/42/mails">>,
			{ok, match_any, [], #{}}},
		{<<"www.ninenines.eu">>, <<"/">>,
			{ok, match_any, [], #{}}},
		{<<"www.any.ninenines.eu">>, <<"/not_users/42/mails">>,
			{error, notfound, path}},
		{<<"ninenines.eu">>, <<"/">>,
			{ok, match_extend, [], #{}}},
		{<<"ninenines.eu">>, <<"/users/42/friends">>,
			{ok, match_extend_users_friends, [], #{id => <<"42">>}}},
		{<<"erlang.fr">>, '_',
			{ok, match_erlang_ext, [], #{ext => <<"fr">>}}},
		{<<"any">>, <<"/users/444/friends">>,
			{ok, match_users_friends, [], #{id => <<"444">>}}},
		{<<"any">>, <<"/users//friends">>,
			{ok, match_users_friends, [], #{id => <<>>}}}
	],
	[{lists:flatten(io_lib:format("~p, ~p", [H, P])), fun() ->
		{ok, Handler, Opts, Binds, undefined, undefined}
			= match(Dispatch, H, P)
	end} || {H, P, {ok, Handler, Opts, Binds}} <- Tests].

match_info_test_() ->
	Dispatch = [
		{[<<"eu">>, <<"ninenines">>, <<"www">>], [], [
			{[<<"pathinfo">>, <<"is">>, <<"next">>, '...'], [], match_path, []}
		]},
		{[<<"eu">>, <<"ninenines">>, '...'], [], [
			{'_', [], match_any, []}
		]}
	],
	Tests = [
		{<<"ninenines.eu">>, <<"/">>,
			{ok, match_any, [], #{}, [], undefined}},
		{<<"bugs.ninenines.eu">>, <<"/">>,
			{ok, match_any, [], #{}, [<<"bugs">>], undefined}},
		{<<"cowboy.bugs.ninenines.eu">>, <<"/">>,
			{ok, match_any, [], #{}, [<<"cowboy">>, <<"bugs">>], undefined}},
		{<<"www.ninenines.eu">>, <<"/pathinfo/is/next">>,
			{ok, match_path, [], #{}, undefined, []}},
		{<<"www.ninenines.eu">>, <<"/pathinfo/is/next/path_info">>,
			{ok, match_path, [], #{}, undefined, [<<"path_info">>]}},
		{<<"www.ninenines.eu">>, <<"/pathinfo/is/next/foo/bar">>,
			{ok, match_path, [], #{}, undefined, [<<"foo">>, <<"bar">>]}}
	],
	[{lists:flatten(io_lib:format("~p, ~p", [H, P])), fun() ->
		R = match(Dispatch, H, P)
	end} || {H, P, R} <- Tests].

match_constraints_test() ->
	Dispatch0 = [{'_', [],
		[{[<<"path">>, value], [{value, int}], match, []}]}],
	{ok, _, [], #{value := 123}, _, _} = match(Dispatch0,
		<<"ninenines.eu">>, <<"/path/123">>),
	{ok, _, [], #{value := 123}, _, _} = match(Dispatch0,
		<<"ninenines.eu">>, <<"/path/123/">>),
	{error, notfound, path} = match(Dispatch0,
		<<"ninenines.eu">>, <<"/path/NaN/">>),
	Dispatch1 = [{'_', [],
		[{[<<"path">>, value, <<"more">>], [{value, nonempty}], match, []}]}],
	{ok, _, [], #{value := <<"something">>}, _, _} = match(Dispatch1,
		<<"ninenines.eu">>, <<"/path/something/more">>),
	{error, notfound, path} = match(Dispatch1,
		<<"ninenines.eu">>, <<"/path//more">>),
	Dispatch2 = [{'_', [], [{[<<"path">>, username],
		[{username, fun(_, Value) ->
			case cowboy_bstr:to_lower(Value) of
				Value -> {ok, Value};
				_ -> {error, not_lowercase}
			end end}],
		match, []}]}],
	{ok, _, [], #{username := <<"essen">>}, _, _} = match(Dispatch2,
		<<"ninenines.eu">>, <<"/path/essen">>),
	{error, notfound, path} = match(Dispatch2,
		<<"ninenines.eu">>, <<"/path/ESSEN">>),
	ok.

match_same_bindings_test() ->
	Dispatch = [{[same, same], [], [{'_', [], match, []}]}],
	{ok, _, [], #{same := <<"eu">>}, _, _} = match(Dispatch,
		<<"eu.eu">>, <<"/">>),
	{error, notfound, host} = match(Dispatch,
		<<"ninenines.eu">>, <<"/">>),
	Dispatch2 = [{[<<"eu">>, <<"ninenines">>, user], [],
		[{[<<"path">>, user], [], match, []}]}],
	{ok, _, [], #{user := <<"essen">>}, _, _} = match(Dispatch2,
		<<"essen.ninenines.eu">>, <<"/path/essen">>),
	{ok, _, [], #{user := <<"essen">>}, _, _} = match(Dispatch2,
		<<"essen.ninenines.eu">>, <<"/path/essen/">>),
	{error, notfound, path} = match(Dispatch2,
		<<"essen.ninenines.eu">>, <<"/path/notessen">>),
	Dispatch3 = [{'_', [], [{[same, same], [], match, []}]}],
	{ok, _, [], #{same := <<"path">>}, _, _} = match(Dispatch3,
		<<"ninenines.eu">>, <<"/path/path">>),
	{error, notfound, path} = match(Dispatch3,
		<<"ninenines.eu">>, <<"/path/to">>),
	ok.
-endif.

@@ -0,0 +1,418 @@
%% Copyright (c) 2013-2024, Loïc Hoguin <essen@ninenines.eu>
%% Copyright (c) 2011, Magnus Klaar <magnus.klaar@gmail.com>
%%
%% Permission to use, copy, modify, and/or distribute this software for any
%% purpose with or without fee is hereby granted, provided that the above
%% copyright notice and this permission notice appear in all copies.
%%
%% THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
%% WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
%% MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
%% ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
%% WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
%% ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
%% OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

-module(cowboy_static).

-export([init/2]).
-export([malformed_request/2]).
-export([forbidden/2]).
-export([content_types_provided/2]).
-export([charsets_provided/2]).
-export([ranges_provided/2]).
-export([resource_exists/2]).
-export([last_modified/2]).
-export([generate_etag/2]).
-export([get_file/2]).

-type extra_charset() :: {charset, module(), function()} | {charset, binary()}.
-type extra_etag() :: {etag, module(), function()} | {etag, false}.
-type extra_mimetypes() :: {mimetypes, module(), function()}
	| {mimetypes, binary() | {binary(), binary(), [{binary(), binary()}]}}.
-type extra() :: [extra_charset() | extra_etag() | extra_mimetypes()].
-type opts() :: {file | dir, string() | binary()}
	| {file | dir, string() | binary(), extra()}
	| {priv_file | priv_dir, atom(), string() | binary()}
	| {priv_file | priv_dir, atom(), string() | binary(), extra()}.
-export_type([opts/0]).

-include_lib("kernel/include/file.hrl").

-type state() :: {binary(), {direct | archive, #file_info{}}
	| {error, atom()}, extra()}.

%% Resolve the file that will be sent and get its file information.
%% If the handler is configured to manage a directory, check that the
%% requested file is inside the configured directory.
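%%
%% Illustrative route entries using this handler (the application name
%% my_app and all paths below are hypothetical):
%%
%%	{"/", cowboy_static, {priv_file, my_app, "static/index.html"}}
%%	{"/assets/[...]", cowboy_static, {priv_dir, my_app, "static/assets"}}
%%	{"/robots.txt", cowboy_static, {file, "/var/www/robots.txt",
%%		[{mimetypes, {<<"text">>, <<"plain">>, []}}]}}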

-spec init(Req, opts()) -> {cowboy_rest, Req, error | state()} when Req::cowboy_req:req().
init(Req, {Name, Path}) ->
	init_opts(Req, {Name, Path, []});
init(Req, {Name, App, Path})
		when Name =:= priv_file; Name =:= priv_dir ->
	init_opts(Req, {Name, App, Path, []});
init(Req, Opts) ->
	init_opts(Req, Opts).

init_opts(Req, {priv_file, App, Path, Extra}) ->
	{PrivPath, HowToAccess} = priv_path(App, Path),
	init_info(Req, absname(PrivPath), HowToAccess, Extra);
init_opts(Req, {file, Path, Extra}) ->
	init_info(Req, absname(Path), direct, Extra);
init_opts(Req, {priv_dir, App, Path, Extra}) ->
	{PrivPath, HowToAccess} = priv_path(App, Path),
	init_dir(Req, PrivPath, HowToAccess, Extra);
init_opts(Req, {dir, Path, Extra}) ->
	init_dir(Req, Path, direct, Extra).

priv_path(App, Path) ->
	case code:priv_dir(App) of
		{error, bad_name} ->
			error({badarg, "Can't resolve the priv_dir of application "
				++ atom_to_list(App)});
		PrivDir when is_list(Path) ->
			{
				PrivDir ++ "/" ++ Path,
				how_to_access_app_priv(PrivDir)
			};
		PrivDir when is_binary(Path) ->
			{
				<< (list_to_binary(PrivDir))/binary, $/, Path/binary >>,
				how_to_access_app_priv(PrivDir)
			}
	end.

how_to_access_app_priv(PrivDir) ->
	%% If the priv directory is not a directory, it must be
	%% inside an Erlang application .ez archive. We call
	%% how_to_access_app_priv1() to find the corresponding archive.
	case filelib:is_dir(PrivDir) of
		true -> direct;
		false -> how_to_access_app_priv1(PrivDir)
	end.

how_to_access_app_priv1(Dir) ->
	%% We go "up" by one path component at a time and look for a
	%% regular file.
	Archive = filename:dirname(Dir),
	case Archive of
		Dir ->
			%% filename:dirname() returned its argument:
			%% we have reached the root directory. We found no
			%% archive so we return 'direct': the given priv
			%% directory doesn't exist.
			direct;
		_ ->
			case filelib:is_regular(Archive) of
				true -> {archive, Archive};
				false -> how_to_access_app_priv1(Archive)
			end
	end.

absname(Path) when is_list(Path) ->
	filename:absname(list_to_binary(Path));
absname(Path) when is_binary(Path) ->
	filename:absname(Path).

init_dir(Req, Path, HowToAccess, Extra) when is_list(Path) ->
	init_dir(Req, list_to_binary(Path), HowToAccess, Extra);
init_dir(Req, Path, HowToAccess, Extra) ->
	Dir = fullpath(filename:absname(Path)),
	case cowboy_req:path_info(Req) of
		%% When dir/priv_dir are used and there is no path_info
		%% this is a configuration error and we abort immediately.
		undefined ->
			{ok, cowboy_req:reply(500, Req), error};
		PathInfo ->
			case validate_reserved(PathInfo) of
				error ->
					{cowboy_rest, Req, error};
				ok ->
					Filepath = filename:join([Dir|PathInfo]),
					Len = byte_size(Dir),
					case fullpath(Filepath) of
						<< Dir:Len/binary, $/, _/binary >> ->
							init_info(Req, Filepath, HowToAccess, Extra);
						<< Dir:Len/binary >> ->
							init_info(Req, Filepath, HowToAccess, Extra);
						_ ->
							{cowboy_rest, Req, error}
					end
			end
	end.

validate_reserved([]) ->
	ok;
validate_reserved([P|Tail]) ->
	case validate_reserved1(P) of
		ok -> validate_reserved(Tail);
		error -> error
	end.

%% We always reject forward slash, backward slash and NUL as
%% those have special meanings across the supported platforms.
%% We could support the backward slash on some platforms but
%% for the sake of consistency and simplicity we don't.
validate_reserved1(<<>>) ->
	ok;
validate_reserved1(<<$/, _/bits>>) ->
	error;
validate_reserved1(<<$\\, _/bits>>) ->
	error;
validate_reserved1(<<0, _/bits>>) ->
	error;
validate_reserved1(<<_, Rest/bits>>) ->
	validate_reserved1(Rest).

fullpath(Path) ->
	fullpath(filename:split(Path), []).
fullpath([], Acc) ->
	filename:join(lists:reverse(Acc));
fullpath([<<".">>|Tail], Acc) ->
	fullpath(Tail, Acc);
fullpath([<<"..">>|Tail], Acc=[_]) ->
	fullpath(Tail, Acc);
fullpath([<<"..">>|Tail], [_|Acc]) ->
	fullpath(Tail, Acc);
fullpath([Segment|Tail], Acc) ->
	fullpath(Tail, [Segment|Acc]).

init_info(Req, Path, HowToAccess, Extra) ->
	Info = read_file_info(Path, HowToAccess),
	{cowboy_rest, Req, {Path, Info, Extra}}.

read_file_info(Path, direct) ->
	case file:read_file_info(Path, [{time, universal}]) of
		{ok, Info} -> {direct, Info};
		Error -> Error
	end;
read_file_info(Path, {archive, Archive}) ->
	case file:read_file_info(Archive, [{time, universal}]) of
		{ok, ArchiveInfo} ->
			%% The Erlang application archive is fine.
			%% Now check if the requested file is in that
			%% archive. We also need its file_info to merge
			%% with the archive's.
			PathS = binary_to_list(Path),
			case erl_prim_loader:read_file_info(PathS) of
				{ok, ContainedFileInfo} ->
					Info = fix_archived_file_info(
						ArchiveInfo,
						ContainedFileInfo),
					{archive, Info};
				error ->
					{error, enoent}
			end;
		Error ->
			Error
	end.

fix_archived_file_info(ArchiveInfo, ContainedFileInfo) ->
	%% We merge the archive and content #file_info because we are
	%% interested in the timestamps of the archive, but the type and
	%% size of the contained file/directory.
	%%
	%% We reset the access to 'read', because we won't rewrite the
	%% archive.
	ArchiveInfo#file_info{
		size = ContainedFileInfo#file_info.size,
		type = ContainedFileInfo#file_info.type,
		access = read
	}.

-ifdef(TEST).
fullpath_test_() ->
	Tests = [
		{<<"/home/cowboy">>, <<"/home/cowboy">>},
		{<<"/home/cowboy">>, <<"/home/cowboy/">>},
		{<<"/home/cowboy">>, <<"/home/cowboy/./">>},
		{<<"/home/cowboy">>, <<"/home/cowboy/./././././.">>},
		{<<"/home/cowboy">>, <<"/home/cowboy/abc/..">>},
		{<<"/home/cowboy">>, <<"/home/cowboy/abc/../">>},
		{<<"/home/cowboy">>, <<"/home/cowboy/abc/./../.">>},
		{<<"/">>, <<"/home/cowboy/../../../../../..">>},
		{<<"/etc/passwd">>, <<"/home/cowboy/../../etc/passwd">>}
	],
	[{P, fun() -> R = fullpath(P) end} || {R, P} <- Tests].

good_path_check_test_() ->
	Tests = [
		<<"/home/cowboy/file">>,
		<<"/home/cowboy/file/">>,
		<<"/home/cowboy/./file">>,
		<<"/home/cowboy/././././././file">>,
		<<"/home/cowboy/abc/../file">>,
		<<"/home/cowboy/abc/../file">>,
		<<"/home/cowboy/abc/./.././file">>
	],
	[{P, fun() ->
		case fullpath(P) of
			<< "/home/cowboy/", _/bits >> -> ok
		end
	end} || P <- Tests].

bad_path_check_test_() ->
	Tests = [
		<<"/home/cowboy/../../../../../../file">>,
		<<"/home/cowboy/../../etc/passwd">>
	],
	[{P, fun() ->
		error = case fullpath(P) of
			<< "/home/cowboy/", _/bits >> -> ok;
			_ -> error
		end
	end} || P <- Tests].

good_path_win32_check_test_() ->
	Tests = case os:type() of
		{unix, _} ->
			[];
		{win32, _} ->
			[
				<<"c:/home/cowboy/file">>,
				<<"c:/home/cowboy/file/">>,
				<<"c:/home/cowboy/./file">>,
				<<"c:/home/cowboy/././././././file">>,
				<<"c:/home/cowboy/abc/../file">>,
				<<"c:/home/cowboy/abc/../file">>,
				<<"c:/home/cowboy/abc/./.././file">>
			]
	end,
	[{P, fun() ->
		case fullpath(P) of
			<< "c:/home/cowboy/", _/bits >> -> ok
		end
	end} || P <- Tests].

bad_path_win32_check_test_() ->
	Tests = case os:type() of
		{unix, _} ->
			[];
		{win32, _} ->
			[
				<<"c:/home/cowboy/../../secretfile.bat">>,
				<<"c:/home/cowboy/c:/secretfile.bat">>,
				<<"c:/home/cowboy/..\\..\\secretfile.bat">>,
				<<"c:/home/cowboy/c:\\secretfile.bat">>
			]
	end,
	[{P, fun() ->
		error = case fullpath(P) of
			<< "c:/home/cowboy/", _/bits >> -> ok;
			_ -> error
		end
	end} || P <- Tests].
-endif.

%% Reject requests that tried to access a file outside
%% the target directory, or used reserved characters.

-spec malformed_request(Req, State)
	-> {boolean(), Req, State}.
malformed_request(Req, State) ->
	{State =:= error, Req, State}.

%% Directories, files that can't be accessed at all and
%% files with no read flag are forbidden.

-spec forbidden(Req, State)
	-> {boolean(), Req, State}
	when State::state().
forbidden(Req, State={_, {_, #file_info{type=directory}}, _}) ->
	{true, Req, State};
forbidden(Req, State={_, {error, eacces}, _}) ->
	{true, Req, State};
forbidden(Req, State={_, {_, #file_info{access=Access}}, _})
		when Access =:= write; Access =:= none ->
	{true, Req, State};
forbidden(Req, State) ->
	{false, Req, State}.

%% Detect the mimetype of the file.

-spec content_types_provided(Req, State)
	-> {[{binary(), get_file}], Req, State}
	when State::state().
content_types_provided(Req, State={Path, _, Extra}) when is_list(Extra) ->
	case lists:keyfind(mimetypes, 1, Extra) of
		false ->
			{[{cow_mimetypes:web(Path), get_file}], Req, State};
		{mimetypes, Module, Function} ->
			{[{Module:Function(Path), get_file}], Req, State};
		{mimetypes, Type} ->
			{[{Type, get_file}], Req, State}
	end.
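
%% A custom mimetypes callback may be given as {mimetypes, Module, Function},
%% where the function receives the file path; a sketch (the module and
%% function names below are hypothetical):
%%
%%	all_text_plain(_Path) -> {<<"text">>, <<"plain">>, []}.
%%
%%	{"/data/[...]", cowboy_static,
%%		{priv_dir, my_app, "data", [{mimetypes, ?MODULE, all_text_plain}]}}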

%% Detect the charset of the file.

-spec charsets_provided(Req, State)
	-> {[binary()], Req, State}
	when State::state().
charsets_provided(Req, State={Path, _, Extra}) ->
	case lists:keyfind(charset, 1, Extra) of
		%% We simulate the callback not being exported.
		false ->
			no_call;
		{charset, Module, Function} ->
			{[Module:Function(Path)], Req, State};
		{charset, Charset} when is_binary(Charset) ->
			{[Charset], Req, State}
	end.

%% Enable support for range requests.

-spec ranges_provided(Req, State)
	-> {[{binary(), auto}], Req, State}
	when State::state().
ranges_provided(Req, State) ->
	{[{<<"bytes">>, auto}], Req, State}.

%% Assume the resource doesn't exist if it's not a regular file.

-spec resource_exists(Req, State)
	-> {boolean(), Req, State}
	when State::state().
resource_exists(Req, State={_, {_, #file_info{type=regular}}, _}) ->
	{true, Req, State};
resource_exists(Req, State) ->
	{false, Req, State}.

%% Generate an etag for the file.

-spec generate_etag(Req, State)
	-> {{strong | weak, binary()}, Req, State}
	when State::state().
generate_etag(Req, State={Path, {_, #file_info{size=Size, mtime=Mtime}},
		Extra}) ->
	case lists:keyfind(etag, 1, Extra) of
		false ->
			{generate_default_etag(Size, Mtime), Req, State};
		{etag, Module, Function} ->
			{Module:Function(Path, Size, Mtime), Req, State};
		{etag, false} ->
			{undefined, Req, State}
	end.

generate_default_etag(Size, Mtime) ->
	{strong, integer_to_binary(erlang:phash2({Size, Mtime}, 16#ffffffff))}.

%% Return the time of last modification of the file.

-spec last_modified(Req, State)
	-> {calendar:datetime(), Req, State}
	when State::state().
last_modified(Req, State={_, {_, #file_info{mtime=Modified}}, _}) ->
	{Modified, Req, State}.

%% Stream the file.

-spec get_file(Req, State)
	-> {{sendfile, 0, non_neg_integer(), binary()}, Req, State}
	when State::state().
get_file(Req, State={Path, {direct, #file_info{size=Size}}, _}) ->
	{{sendfile, 0, Size, Path}, Req, State};
get_file(Req, State={Path, {archive, _}, _}) ->
	PathS = binary_to_list(Path),
	{ok, Bin, _} = erl_prim_loader:get_file(PathS),
	{Bin, Req, State}.

@@ -0,0 +1,193 @@
%% Copyright (c) 2015-2024, Loïc Hoguin <essen@ninenines.eu>
%%
%% Permission to use, copy, modify, and/or distribute this software for any
%% purpose with or without fee is hereby granted, provided that the above
%% copyright notice and this permission notice appear in all copies.
%%
%% THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
%% WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
%% MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
%% ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
%% WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
%% ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
%% OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

-module(cowboy_stream).

-type state() :: any().
-type human_reason() :: atom().

-type streamid() :: any().
-export_type([streamid/0]).

-type fin() :: fin | nofin.
-export_type([fin/0]).

%% @todo Perhaps it makes more sense to have resp_body in this module?

-type resp_command()
	:: {response, cowboy:http_status(), cowboy:http_headers(), cowboy_req:resp_body()}.
-export_type([resp_command/0]).

-type commands() :: [{inform, cowboy:http_status(), cowboy:http_headers()}
	| resp_command()
	| {headers, cowboy:http_status(), cowboy:http_headers()}
	| {data, fin(), cowboy_req:resp_body()}
	| {trailers, cowboy:http_headers()}
	| {push, binary(), binary(), binary(), inet:port_number(),
		binary(), binary(), cowboy:http_headers()}
	| {flow, pos_integer()}
	| {spawn, pid(), timeout()}
	| {error_response, cowboy:http_status(), cowboy:http_headers(), iodata()}
	| {switch_protocol, cowboy:http_headers(), module(), state()}
	| {internal_error, any(), human_reason()}
	| {set_options, map()}
	| {log, logger:level(), io:format(), list()}
	| stop].
-export_type([commands/0]).

-type reason() :: normal | switch_protocol
	| {internal_error, timeout | {error | exit | throw, any()}, human_reason()}
	| {socket_error, closed | atom(), human_reason()}
	| {stream_error, cow_http2:error(), human_reason()}
	| {connection_error, cow_http2:error(), human_reason()}
	| {stop, cow_http2:frame() | {exit, any()}, human_reason()}.
-export_type([reason/0]).

-type partial_req() :: map(). %% @todo Take what's in cowboy_req with everything? optional.
-export_type([partial_req/0]).

-callback init(streamid(), cowboy_req:req(), cowboy:opts()) -> {commands(), state()}.
-callback data(streamid(), fin(), binary(), State) -> {commands(), State} when State::state().
-callback info(streamid(), any(), State) -> {commands(), State} when State::state().
-callback terminate(streamid(), reason(), state()) -> any().
-callback early_error(streamid(), reason(), partial_req(), Resp, cowboy:opts())
	-> Resp when Resp::resp_command().
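
%% A pass-through stream handler sketch implementing these callbacks
%% (the module name my_noop_stream_h is hypothetical); it forwards every
%% call to the rest of the configured stream handler chain:
%%
%%	-module(my_noop_stream_h).
%%	-behavior(cowboy_stream).
%%	-export([init/3, data/4, info/3, terminate/3, early_error/5]).
%%
%%	init(StreamID, Req, Opts) -> cowboy_stream:init(StreamID, Req, Opts).
%%	data(StreamID, IsFin, Data, Next) -> cowboy_stream:data(StreamID, IsFin, Data, Next).
%%	info(StreamID, Info, Next) -> cowboy_stream:info(StreamID, Info, Next).
%%	terminate(StreamID, Reason, Next) -> cowboy_stream:terminate(StreamID, Reason, Next).
%%	early_error(StreamID, Reason, PartialReq, Resp, Opts) ->
%%		cowboy_stream:early_error(StreamID, Reason, PartialReq, Resp, Opts).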
|
||||||
|
|
||||||
|
%% @todo To optimize the number of active timers we could have a command
|
||||||
|
%% that enables a timeout that is called in the absence of any other call,
|
||||||
|
%% similar to what gen_server does. However the nice thing about this is
|
||||||
|
%% that the connection process can keep a single timer around (the same
|
||||||
|
%% one that would be used to detect half-closed sockets) and use this
|
||||||
|
%% timer and other events to trigger the timeout in streams at their
|
||||||
|
%% intended time.
|
||||||
|
%%
|
||||||
|
%% This same timer can be used to try and send PING frames to help detect
|
||||||
|
%% that the connection is indeed unresponsive.
|
||||||
|
|
||||||
|
-export([init/3]).
|
||||||
|
-export([data/4]).
|
||||||
|
-export([info/3]).
|
||||||
|
-export([terminate/3]).
|
||||||
|
-export([early_error/5]).
|
||||||
|
-export([make_error_log/5]).
|
||||||
|
|
||||||
|
%% Note that this and other functions in this module do NOT catch
|
||||||
|
%% exceptions. We want the exception to go all the way down to the
|
||||||
|
%% protocol code.
|
||||||
|
%%
|
||||||
|
%% OK the failure scenario is not so clear. The problem is
|
||||||
|
%% that the failure at any point in init/3 will result in the
|
||||||
|
%% corresponding state being lost. I am unfortunately not
|
||||||
|
%% confident we can do anything about this. If the crashing
|
||||||
|
%% handler just created a process, we'll never know about it.
|
||||||
|
%% Therefore at this time I choose to leave all failure handling
|
||||||
|
%% to the protocol process.
|
||||||
|
%%
|
||||||
|
%% Note that a failure in init/3 will result in terminate/3
|
||||||
|
%% NOT being called. This is because the state is not available.
|
||||||
|
|
||||||
|
-spec init(streamid(), cowboy_req:req(), cowboy:opts())
|
||||||
|
-> {commands(), {module(), state()} | undefined}.
|
||||||
|
init(StreamID, Req, Opts) ->
|
||||||
|
case maps:get(stream_handlers, Opts, [cowboy_stream_h]) of
|
||||||
|
[] ->
|
||||||
|
{[], undefined};
|
||||||
|
[Handler|Tail] ->
|
||||||
|
%% We call the next handler and remove it from the list of
|
||||||
|
%% stream handlers. This means that handlers that run after
|
||||||
|
%% it have no knowledge it exists. Should user require this
|
||||||
|
%% knowledge they can just define a separate option that will
|
||||||
|
%% be left untouched.
|
||||||
|
{Commands, State} = Handler:init(StreamID, Req, Opts#{stream_handlers => Tail}),
|
||||||
|
{Commands, {Handler, State}}
|
||||||
|
end.
|
||||||
|
|
||||||
|
-spec data(streamid(), fin(), binary(), {Handler, State} | undefined)
|
||||||
|
-> {commands(), {Handler, State} | undefined}
|
||||||
|
when Handler::module(), State::state().
|
||||||
|
data(_, _, _, undefined) ->
|
||||||
|
{[], undefined};
|
||||||
|
data(StreamID, IsFin, Data, {Handler, State0}) ->
|
||||||
|
{Commands, State} = Handler:data(StreamID, IsFin, Data, State0),
|
||||||
|
{Commands, {Handler, State}}.
|
||||||
|
|
||||||
|
-spec info(streamid(), any(), {Handler, State} | undefined)
|
||||||
|
-> {commands(), {Handler, State} | undefined}
|
||||||
|
when Handler::module(), State::state().
|
||||||
|
info(_, _, undefined) ->
|
||||||
|
{[], undefined};
|
||||||
|
info(StreamID, Info, {Handler, State0}) ->
|
||||||
|
{Commands, State} = Handler:info(StreamID, Info, State0),
|
||||||
|
{Commands, {Handler, State}}.
|
||||||
|
|
||||||
|
-spec terminate(streamid(), reason(), {module(), state()} | undefined) -> ok.
|
||||||
|
terminate(_, _, undefined) ->
|
||||||
|
ok;
|
||||||
|
terminate(StreamID, Reason, {Handler, State}) ->
|
||||||
|
_ = Handler:terminate(StreamID, Reason, State),
|
||||||
|
ok.
|
||||||
|
|
||||||
|
-spec early_error(streamid(), reason(), partial_req(), Resp, cowboy:opts())
|
||||||
|
-> Resp when Resp::resp_command().
|
||||||
|
early_error(StreamID, Reason, PartialReq, Resp, Opts) ->
|
||||||
|
case maps:get(stream_handlers, Opts, [cowboy_stream_h]) of
|
||||||
|
[] ->
|
||||||
|
Resp;
|
||||||
|
[Handler|Tail] ->
|
||||||
|
%% This is the same behavior as in init/3.
|
||||||
|
Handler:early_error(StreamID, Reason,
|
||||||
|
PartialReq, Resp, Opts#{stream_handlers => Tail})
|
||||||
|
end.
|
||||||
|
|
||||||
|
-spec make_error_log(init | data | info | terminate | early_error,
|
||||||
|
list(), error | exit | throw, any(), list())
|
||||||
|
-> {log, error, string(), list()}.
|
||||||
|
make_error_log(init, [StreamID, Req, Opts], Class, Exception, Stacktrace) ->
|
||||||
|
{log, error,
|
||||||
|
"Unhandled exception ~p:~p in cowboy_stream:init(~p, Req, Opts)~n"
|
||||||
|
"Stacktrace: ~p~n"
|
||||||
|
"Req: ~p~n"
|
||||||
|
"Opts: ~p~n",
|
||||||
|
[Class, Exception, StreamID, Stacktrace, Req, Opts]};
|
||||||
|
make_error_log(data, [StreamID, IsFin, Data, State], Class, Exception, Stacktrace) ->
|
||||||
|
{log, error,
|
||||||
|
"Unhandled exception ~p:~p in cowboy_stream:data(~p, ~p, Data, State)~n"
|
||||||
|
"Stacktrace: ~p~n"
|
||||||
|
"Data: ~p~n"
|
||||||
|
"State: ~p~n",
|
||||||
|
[Class, Exception, StreamID, IsFin, Stacktrace, Data, State]};
|
||||||
|
make_error_log(info, [StreamID, Msg, State], Class, Exception, Stacktrace) ->
|
||||||
|
{log, error,
|
||||||
|
"Unhandled exception ~p:~p in cowboy_stream:info(~p, Msg, State)~n"
|
||||||
|
"Stacktrace: ~p~n"
|
||||||
|
"Msg: ~p~n"
|
||||||
|
"State: ~p~n",
|
||||||
|
[Class, Exception, StreamID, Stacktrace, Msg, State]};
|
||||||
|
make_error_log(terminate, [StreamID, Reason, State], Class, Exception, Stacktrace) ->
|
||||||
|
{log, error,
|
||||||
|
"Unhandled exception ~p:~p in cowboy_stream:terminate(~p, Reason, State)~n"
|
||||||
|
"Stacktrace: ~p~n"
|
||||||
|
"Reason: ~p~n"
|
||||||
|
"State: ~p~n",
|
||||||
|
[Class, Exception, StreamID, Stacktrace, Reason, State]};
|
||||||
|
make_error_log(early_error, [StreamID, Reason, PartialReq, Resp, Opts],
|
||||||
|
Class, Exception, Stacktrace) ->
|
||||||
|
{log, error,
|
||||||
|
"Unhandled exception ~p:~p in cowboy_stream:early_error(~p, Reason, PartialReq, Resp, Opts)~n"
|
||||||
|
"Stacktrace: ~p~n"
|
||||||
|
"Reason: ~p~n"
|
||||||
|
"PartialReq: ~p~n"
|
||||||
|
"Resp: ~p~n"
|
||||||
|
"Opts: ~p~n",
|
||||||
|
[Class, Exception, StreamID, Stacktrace, Reason, PartialReq, Resp, Opts]}.
|
|
@ -0,0 +1,324 @@
|
||||||
|
%% Copyright (c) 2016-2024, Loïc Hoguin <essen@ninenines.eu>
|
||||||
|
%%
|
||||||
|
%% Permission to use, copy, modify, and/or distribute this software for any
|
||||||
|
%% purpose with or without fee is hereby granted, provided that the above
|
||||||
|
%% copyright notice and this permission notice appear in all copies.
|
||||||
|
%%
|
||||||
|
%% THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
|
||||||
|
%% WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
|
||||||
|
%% MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
|
||||||
|
%% ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
|
||||||
|
%% WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
|
||||||
|
%% ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
|
||||||
|
%% OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
|
||||||
|
|
||||||
|
-module(cowboy_stream_h).
|
||||||
|
-behavior(cowboy_stream).
|
||||||
|
|
||||||
|
-export([init/3]).
|
||||||
|
-export([data/4]).
|
||||||
|
-export([info/3]).
|
||||||
|
-export([terminate/3]).
|
||||||
|
-export([early_error/5]).
|
||||||
|
|
||||||
|
-export([request_process/3]).
|
||||||
|
-export([resume/5]).
|
||||||
|
|
||||||
|
-record(state, {
|
||||||
|
next :: any(),
|
||||||
|
ref = undefined :: ranch:ref(),
|
||||||
|
pid = undefined :: pid(),
|
||||||
|
expect = undefined :: undefined | continue,
|
||||||
|
read_body_pid = undefined :: pid() | undefined,
|
||||||
|
read_body_ref = undefined :: reference() | undefined,
|
||||||
|
read_body_timer_ref = undefined :: reference() | undefined,
|
||||||
|
read_body_length = 0 :: non_neg_integer() | infinity | auto,
|
||||||
|
read_body_is_fin = nofin :: nofin | {fin, non_neg_integer()},
|
||||||
|
read_body_buffer = <<>> :: binary(),
|
||||||
|
body_length = 0 :: non_neg_integer(),
|
||||||
|
stream_body_pid = undefined :: pid() | undefined,
|
||||||
|
stream_body_status = normal :: normal | blocking | blocked
|
||||||
|
}).
|
||||||
|
|
||||||
|
-spec init(cowboy_stream:streamid(), cowboy_req:req(), cowboy:opts())
|
||||||
|
-> {[{spawn, pid(), timeout()}], #state{}}.
|
||||||
|
init(StreamID, Req=#{ref := Ref}, Opts) ->
|
||||||
|
Env = maps:get(env, Opts, #{}),
|
||||||
|
Middlewares = maps:get(middlewares, Opts, [cowboy_router, cowboy_handler]),
|
||||||
|
Shutdown = maps:get(shutdown_timeout, Opts, 5000),
|
||||||
|
Pid = proc_lib:spawn_link(?MODULE, request_process, [Req, Env, Middlewares]),
|
||||||
|
Expect = expect(Req),
|
||||||
|
{Commands, Next} = cowboy_stream:init(StreamID, Req, Opts),
|
||||||
|
{[{spawn, Pid, Shutdown}|Commands],
|
||||||
|
#state{next=Next, ref=Ref, pid=Pid, expect=Expect}}.
|
||||||
|
|
||||||
|
%% Ignore the expect header in HTTP/1.0.
|
||||||
|
expect(#{version := 'HTTP/1.0'}) ->
|
||||||
|
undefined;
|
||||||
|
expect(Req) ->
|
||||||
|
try cowboy_req:parse_header(<<"expect">>, Req) of
|
||||||
|
Expect ->
|
||||||
|
Expect
|
||||||
|
catch _:_ ->
|
||||||
|
undefined
|
||||||
|
end.
|
||||||
|
|
||||||
|
%% If we receive data and stream is waiting for data:
|
||||||
|
%% If we accumulated enough data or IsFin=fin, send it.
|
||||||
|
%% If we are in auto mode, send it and update flow control.
|
||||||
|
%% If not, buffer it.
|
||||||
|
%% If not, buffer it.
|
||||||
|
%%
|
||||||
|
%% We always reset the expect field when we receive data,
|
||||||
|
%% since the client started sending the request body before
|
||||||
|
%% we could send a 100 continue response.
|
||||||
|
|
||||||
|
-spec data(cowboy_stream:streamid(), cowboy_stream:fin(), cowboy_req:resp_body(), State)
    -> {cowboy_stream:commands(), State} when State::#state{}.
%% Stream isn't waiting for data.
data(StreamID, IsFin, Data, State=#state{
        read_body_ref=undefined, read_body_buffer=Buffer, body_length=BodyLen}) ->
    do_data(StreamID, IsFin, Data, [], State#state{
        expect=undefined,
        read_body_is_fin=IsFin,
        read_body_buffer= << Buffer/binary, Data/binary >>,
        body_length=BodyLen + byte_size(Data)
    });
%% Stream is waiting for data using auto mode.
%%
%% There is no buffering done in auto mode.
data(StreamID, IsFin, Data, State=#state{read_body_pid=Pid, read_body_ref=Ref,
        read_body_length=auto, body_length=BodyLen}) ->
    send_request_body(Pid, Ref, IsFin, BodyLen, Data),
    do_data(StreamID, IsFin, Data, [{flow, byte_size(Data)}], State#state{
        read_body_ref=undefined,
        %% @todo This is wrong, it's missing byte_size(Data).
        body_length=BodyLen
    });
%% Stream is waiting for data but we didn't receive enough to send yet.
data(StreamID, IsFin=nofin, Data, State=#state{
        read_body_length=ReadLen, read_body_buffer=Buffer, body_length=BodyLen})
        when byte_size(Data) + byte_size(Buffer) < ReadLen ->
    do_data(StreamID, IsFin, Data, [], State#state{
        expect=undefined,
        read_body_buffer= << Buffer/binary, Data/binary >>,
        body_length=BodyLen + byte_size(Data)
    });
%% Stream is waiting for data and we received enough to send.
data(StreamID, IsFin, Data, State=#state{read_body_pid=Pid, read_body_ref=Ref,
        read_body_timer_ref=TRef, read_body_buffer=Buffer, body_length=BodyLen0}) ->
    BodyLen = BodyLen0 + byte_size(Data),
    ok = erlang:cancel_timer(TRef, [{async, true}, {info, false}]),
    send_request_body(Pid, Ref, IsFin, BodyLen, <<Buffer/binary, Data/binary>>),
    do_data(StreamID, IsFin, Data, [], State#state{
        expect=undefined,
        read_body_ref=undefined,
        read_body_timer_ref=undefined,
        read_body_buffer= <<>>,
        body_length=BodyLen
    }).

do_data(StreamID, IsFin, Data, Commands1, State=#state{next=Next0}) ->
    {Commands2, Next} = cowboy_stream:data(StreamID, IsFin, Data, Next0),
    {Commands1 ++ Commands2, State#state{next=Next}}.

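%% For context, a minimal custom stream handler follows the same delegation
%% pattern as do_data/5 above: it implements the cowboy_stream callbacks and
%% forwards everything to the next handler in the chain. This is a sketch,
%% assuming it is listed in the stream_handlers protocol option; the module
%% name my_noop_stream_h is made up for illustration.
-module(my_noop_stream_h).
-behavior(cowboy_stream).
-export([init/3, data/4, info/3, terminate/3, early_error/5]).

init(StreamID, Req, Opts) ->
    cowboy_stream:init(StreamID, Req, Opts).
data(StreamID, IsFin, Data, Next) ->
    cowboy_stream:data(StreamID, IsFin, Data, Next).
info(StreamID, Info, Next) ->
    cowboy_stream:info(StreamID, Info, Next).
terminate(StreamID, Reason, Next) ->
    cowboy_stream:terminate(StreamID, Reason, Next).
early_error(StreamID, Reason, PartialReq, Resp, Opts) ->
    cowboy_stream:early_error(StreamID, Reason, PartialReq, Resp, Opts).
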
-spec info(cowboy_stream:streamid(), any(), State)
|
||||||
|
-> {cowboy_stream:commands(), State} when State::#state{}.
|
||||||
|
info(StreamID, Info={'EXIT', Pid, normal}, State=#state{pid=Pid}) ->
|
||||||
|
do_info(StreamID, Info, [stop], State);
|
||||||
|
info(StreamID, Info={'EXIT', Pid, {{request_error, Reason, _HumanReadable}, _}},
|
||||||
|
State=#state{pid=Pid}) ->
|
||||||
|
Status = case Reason of
|
||||||
|
timeout -> 408;
|
||||||
|
payload_too_large -> 413;
|
||||||
|
_ -> 400
|
||||||
|
end,
|
||||||
|
%% @todo Headers? Details in body? Log the crash? More stuff in debug only?
|
||||||
|
do_info(StreamID, Info, [
|
||||||
|
{error_response, Status, #{<<"content-length">> => <<"0">>}, <<>>},
|
||||||
|
stop
|
||||||
|
], State);
|
||||||
|
info(StreamID, Exit={'EXIT', Pid, {Reason, Stacktrace}}, State=#state{ref=Ref, pid=Pid}) ->
|
||||||
|
Commands0 = [{internal_error, Exit, 'Stream process crashed.'}],
|
||||||
|
Commands = case Reason of
|
||||||
|
normal -> Commands0;
|
||||||
|
shutdown -> Commands0;
|
||||||
|
{shutdown, _} -> Commands0;
|
||||||
|
_ -> [{log, error,
|
||||||
|
"Ranch listener ~p, connection process ~p, stream ~p "
|
||||||
|
"had its request process ~p exit with reason "
|
||||||
|
"~999999p and stacktrace ~999999p~n",
|
||||||
|
[Ref, self(), StreamID, Pid, Reason, Stacktrace]}
|
||||||
|
|Commands0]
|
||||||
|
end,
|
||||||
|
do_info(StreamID, Exit, [
|
||||||
|
{error_response, 500, #{<<"content-length">> => <<"0">>}, <<>>}
|
||||||
|
|Commands], State);
|
||||||
|
%% Request body, auto mode, no body buffered.
|
||||||
|
info(StreamID, Info={read_body, Pid, Ref, auto, infinity}, State=#state{read_body_buffer= <<>>}) ->
|
||||||
|
do_info(StreamID, Info, [], State#state{
|
||||||
|
read_body_pid=Pid,
|
||||||
|
read_body_ref=Ref,
|
||||||
|
read_body_length=auto
|
||||||
|
});
|
||||||
|
%% Request body, auto mode, body buffered or complete.
|
||||||
|
info(StreamID, Info={read_body, Pid, Ref, auto, infinity}, State=#state{
|
||||||
|
read_body_is_fin=IsFin, read_body_buffer=Buffer, body_length=BodyLen}) ->
|
||||||
|
send_request_body(Pid, Ref, IsFin, BodyLen, Buffer),
|
||||||
|
do_info(StreamID, Info, [{flow, byte_size(Buffer)}],
|
||||||
|
State#state{read_body_buffer= <<>>});
|
||||||
|
%% Request body, body buffered large enough or complete.
|
||||||
|
%%
|
||||||
|
%% We do not send a 100 continue response if the client
|
||||||
|
%% already started sending the body.
|
||||||
|
info(StreamID, Info={read_body, Pid, Ref, Length, _}, State=#state{
|
||||||
|
read_body_is_fin=IsFin, read_body_buffer=Buffer, body_length=BodyLen})
|
||||||
|
when IsFin =:= fin; byte_size(Buffer) >= Length ->
|
||||||
|
send_request_body(Pid, Ref, IsFin, BodyLen, Buffer),
|
||||||
|
do_info(StreamID, Info, [], State#state{read_body_buffer= <<>>});
|
||||||
|
%% Request body, not enough to send yet.
info(StreamID, Info={read_body, Pid, Ref, Length, Period}, State=#state{expect=Expect}) ->
    Commands = case Expect of
        continue -> [{inform, 100, #{}}, {flow, Length}];
        undefined -> [{flow, Length}]
    end,
    TRef = erlang:send_after(Period, self(), {{self(), StreamID}, {read_body_timeout, Ref}}),
    do_info(StreamID, Info, Commands, State#state{
        read_body_pid=Pid,
        read_body_ref=Ref,
        read_body_timer_ref=TRef,
        read_body_length=Length
    });
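%% From a request handler's point of view these read_body messages are
%% produced by cowboy_req:read_body/1,2. A sketch (the length/period values
%% are arbitrary examples):
%%
%%   read_all(Req0, Acc) ->
%%       case cowboy_req:read_body(Req0, #{length => 64000, period => 5000}) of
%%           {more, Data, Req} -> read_all(Req, <<Acc/binary, Data/binary>>);
%%           {ok, Data, Req} -> {<<Acc/binary, Data/binary>>, Req}
%%       end.
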
%% Request body reading timeout; send what we got.
|
||||||
|
info(StreamID, Info={read_body_timeout, Ref}, State=#state{read_body_pid=Pid, read_body_ref=Ref,
|
||||||
|
read_body_is_fin=IsFin, read_body_buffer=Buffer, body_length=BodyLen}) ->
|
||||||
|
send_request_body(Pid, Ref, IsFin, BodyLen, Buffer),
|
||||||
|
do_info(StreamID, Info, [], State#state{
|
||||||
|
read_body_ref=undefined,
|
||||||
|
read_body_timer_ref=undefined,
|
||||||
|
read_body_buffer= <<>>
|
||||||
|
});
|
||||||
|
info(StreamID, Info={read_body_timeout, _}, State) ->
|
||||||
|
do_info(StreamID, Info, [], State);
|
||||||
|
%% Response.
|
||||||
|
%%
|
||||||
|
%% We reset the expect field when a 100 continue response
|
||||||
|
%% is sent or when any final response is sent.
|
||||||
|
info(StreamID, Inform={inform, Status, _}, State0) ->
|
||||||
|
State = case cow_http:status_to_integer(Status) of
|
||||||
|
100 -> State0#state{expect=undefined};
|
||||||
|
_ -> State0
|
||||||
|
end,
|
||||||
|
do_info(StreamID, Inform, [Inform], State);
|
||||||
|
info(StreamID, Response={response, _, _, _}, State) ->
|
||||||
|
do_info(StreamID, Response, [Response], State#state{expect=undefined});
|
||||||
|
info(StreamID, Headers={headers, _, _}, State) ->
|
||||||
|
do_info(StreamID, Headers, [Headers], State#state{expect=undefined});
|
||||||
|
%% Sending data involves the data message, the stream_buffer_full alarm
|
||||||
|
%% and the connection_buffer_full alarm. We stop sending acks when an alarm is on.
|
||||||
|
%%
|
||||||
|
%% We only apply backpressure when the message includes a pid. Otherwise
|
||||||
|
%% it is a message from Cowboy, or the user circumventing the backpressure.
|
||||||
|
%%
|
||||||
|
%% We currently do not support sending data from multiple processes concurrently.
|
||||||
|
info(StreamID, Data={data, _, _}, State) ->
|
||||||
|
do_info(StreamID, Data, [Data], State);
|
||||||
|
info(StreamID, Data0={data, Pid, _, _}, State0=#state{stream_body_status=Status}) ->
|
||||||
|
State = case Status of
|
||||||
|
normal ->
|
||||||
|
Pid ! {data_ack, self()},
|
||||||
|
State0;
|
||||||
|
blocking ->
|
||||||
|
State0#state{stream_body_pid=Pid, stream_body_status=blocked};
|
||||||
|
blocked ->
|
||||||
|
State0
|
||||||
|
end,
|
||||||
|
Data = erlang:delete_element(2, Data0),
|
||||||
|
do_info(StreamID, Data, [Data], State);
|
||||||
|
info(StreamID, Alarm={alarm, Name, on}, State0=#state{stream_body_status=Status})
|
||||||
|
when Name =:= connection_buffer_full; Name =:= stream_buffer_full ->
|
||||||
|
State = case Status of
|
||||||
|
normal -> State0#state{stream_body_status=blocking};
|
||||||
|
_ -> State0
|
||||||
|
end,
|
||||||
|
do_info(StreamID, Alarm, [], State);
|
||||||
|
info(StreamID, Alarm={alarm, Name, off}, State=#state{stream_body_pid=Pid, stream_body_status=Status})
|
||||||
|
when Name =:= connection_buffer_full; Name =:= stream_buffer_full ->
|
||||||
|
_ = case Status of
|
||||||
|
normal -> ok;
|
||||||
|
blocking -> ok;
|
||||||
|
blocked -> Pid ! {data_ack, self()}
|
||||||
|
end,
|
||||||
|
do_info(StreamID, Alarm, [], State#state{stream_body_pid=undefined, stream_body_status=normal});
|
||||||
|
info(StreamID, Trailers={trailers, _}, State) ->
|
||||||
|
do_info(StreamID, Trailers, [Trailers], State);
|
||||||
|
info(StreamID, Push={push, _, _, _, _, _, _, _}, State) ->
|
||||||
|
do_info(StreamID, Push, [Push], State);
|
||||||
|
info(StreamID, SwitchProtocol={switch_protocol, _, _, _}, State) ->
|
||||||
|
do_info(StreamID, SwitchProtocol, [SwitchProtocol], State#state{expect=undefined});
|
||||||
|
%% Convert the set_options message to a command.
|
||||||
|
info(StreamID, SetOptions={set_options, _}, State) ->
|
||||||
|
do_info(StreamID, SetOptions, [SetOptions], State);
|
||||||
|
%% Unknown message, either stray or meant for a handler down the line.
|
||||||
|
info(StreamID, Info, State) ->
|
||||||
|
do_info(StreamID, Info, [], State).
|
||||||
|
|
||||||
|
do_info(StreamID, Info, Commands1, State0=#state{next=Next0}) ->
|
||||||
|
{Commands2, Next} = cowboy_stream:info(StreamID, Info, Next0),
|
||||||
|
{Commands1 ++ Commands2, State0#state{next=Next}}.
|
||||||
|
|
||||||
|
-spec terminate(cowboy_stream:streamid(), cowboy_stream:reason(), #state{}) -> ok.
|
||||||
|
terminate(StreamID, Reason, #state{next=Next}) ->
|
||||||
|
cowboy_stream:terminate(StreamID, Reason, Next).
|
||||||
|
|
||||||
|
-spec early_error(cowboy_stream:streamid(), cowboy_stream:reason(),
|
||||||
|
cowboy_stream:partial_req(), Resp, cowboy:opts()) -> Resp
|
||||||
|
when Resp::cowboy_stream:resp_command().
|
||||||
|
early_error(StreamID, Reason, PartialReq, Resp, Opts) ->
|
||||||
|
cowboy_stream:early_error(StreamID, Reason, PartialReq, Resp, Opts).
|
||||||
|
|
||||||
|
send_request_body(Pid, Ref, nofin, _, Data) ->
    Pid ! {request_body, Ref, nofin, Data},
    ok;
send_request_body(Pid, Ref, fin, BodyLen, Data) ->
    Pid ! {request_body, Ref, fin, BodyLen, Data},
    ok.

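%% The messages above are consumed on the request-process side. A minimal
%% sketch of a matching receive (assumed shape, shown for illustration only;
%% the real consumer lives in cowboy_req):
%%
%%   receive
%%       {request_body, Ref, nofin, Data} -> {more, Data};
%%       {request_body, Ref, fin, _BodyLen, Data} -> {ok, Data}
%%   end.
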
%% Request process.

%% We add the stacktrace to exit exceptions here in order
%% to simplify the debugging of errors. The proc_lib library
%% already adds the stacktrace to other types of exceptions.
-spec request_process(cowboy_req:req(), cowboy_middleware:env(), [module()]) -> ok.
request_process(Req, Env, Middlewares) ->
    try
        execute(Req, Env, Middlewares)
    catch
        exit:Reason={shutdown, _}:Stacktrace ->
            erlang:raise(exit, Reason, Stacktrace);
        exit:Reason:Stacktrace when Reason =/= normal, Reason =/= shutdown ->
            erlang:raise(exit, {Reason, Stacktrace}, Stacktrace)
    end.

execute(_, _, []) ->
    ok;
execute(Req, Env, [Middleware|Tail]) ->
    case Middleware:execute(Req, Env) of
        {ok, Req2, Env2} ->
            execute(Req2, Env2, Tail);
        {suspend, Module, Function, Args} ->
            proc_lib:hibernate(?MODULE, resume, [Env, Tail, Module, Function, Args]);
        {stop, _Req2} ->
            ok
    end.

-spec resume(cowboy_middleware:env(), [module()], module(), atom(), [any()]) -> ok.
resume(Env, Tail, Module, Function, Args) ->
    case apply(Module, Function, Args) of
        {ok, Req2, Env2} ->
            execute(Req2, Env2, Tail);
        {suspend, Module2, Function2, Args2} ->
            proc_lib:hibernate(?MODULE, resume, [Env, Tail, Module2, Function2, Args2]);
        {stop, _Req2} ->
            ok
    end.

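%% The middleware chain walked by execute/3 above defaults to
%% [cowboy_router, cowboy_handler]. A custom middleware is any module
%% implementing cowboy_middleware; a sketch follows (the module name and
%% response header are made up for illustration):
-module(my_request_id_middleware).
-behaviour(cowboy_middleware).
-export([execute/2]).

execute(Req, Env) ->
    %% Attach a per-request identifier to every response, then continue.
    ReqId = integer_to_binary(erlang:unique_integer([positive])),
    {ok, cowboy_req:set_resp_header(<<"x-request-id">>, ReqId, Req), Env}.
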
@ -0,0 +1,24 @@
%% Copyright (c) 2013-2024, Loïc Hoguin <essen@ninenines.eu>
%% Copyright (c) 2013, James Fish <james@fishcakez.com>
%%
%% Permission to use, copy, modify, and/or distribute this software for any
%% purpose with or without fee is hereby granted, provided that the above
%% copyright notice and this permission notice appear in all copies.
%%
%% THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
%% WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
%% MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
%% ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
%% WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
%% ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
%% OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

-module(cowboy_sub_protocol).

-callback upgrade(Req, Env, module(), any())
    -> {ok, Req, Env} | {suspend, module(), atom(), [any()]} | {stop, Req}
    when Req::cowboy_req:req(), Env::cowboy_middleware:env().

-callback upgrade(Req, Env, module(), any(), any())
    -> {ok, Req, Env} | {suspend, module(), atom(), [any()]} | {stop, Req}
    when Req::cowboy_req:req(), Env::cowboy_middleware:env().

@ -0,0 +1,30 @@
%% Copyright (c) 2011-2024, Loïc Hoguin <essen@ninenines.eu>
%%
%% Permission to use, copy, modify, and/or distribute this software for any
%% purpose with or without fee is hereby granted, provided that the above
%% copyright notice and this permission notice appear in all copies.
%%
%% THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
%% WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
%% MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
%% ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
%% WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
%% ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
%% OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

-module(cowboy_sup).
-behaviour(supervisor).

-export([start_link/0]).
-export([init/1]).

-spec start_link() -> {ok, pid()}.
start_link() ->
    supervisor:start_link({local, ?MODULE}, ?MODULE, []).

-spec init([])
    -> {ok, {{supervisor:strategy(), 10, 10}, [supervisor:child_spec()]}}.
init([]) ->
    Procs = [{cowboy_clock, {cowboy_clock, start_link, []},
        permanent, 5000, worker, [cowboy_clock]}],
    {ok, {{one_for_one, 10, 10}, Procs}}.

@ -0,0 +1,58 @@
%% Copyright (c) 2015-2024, Loïc Hoguin <essen@ninenines.eu>
%%
%% Permission to use, copy, modify, and/or distribute this software for any
%% purpose with or without fee is hereby granted, provided that the above
%% copyright notice and this permission notice appear in all copies.
%%
%% THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
%% WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
%% MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
%% ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
%% WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
%% ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
%% OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

-module(cowboy_tls).
-behavior(ranch_protocol).

-export([start_link/3]).
-export([start_link/4]).
-export([connection_process/4]).

%% Ranch 1.
-spec start_link(ranch:ref(), ssl:sslsocket(), module(), cowboy:opts()) -> {ok, pid()}.
start_link(Ref, _Socket, Transport, Opts) ->
    start_link(Ref, Transport, Opts).

%% Ranch 2.
-spec start_link(ranch:ref(), module(), cowboy:opts()) -> {ok, pid()}.
start_link(Ref, Transport, Opts) ->
    Pid = proc_lib:spawn_link(?MODULE, connection_process,
        [self(), Ref, Transport, Opts]),
    {ok, Pid}.

-spec connection_process(pid(), ranch:ref(), module(), cowboy:opts()) -> ok.
connection_process(Parent, Ref, Transport, Opts) ->
    ProxyInfo = get_proxy_info(Ref, Opts),
    {ok, Socket} = ranch:handshake(Ref),
    case ssl:negotiated_protocol(Socket) of
        {ok, <<"h2">>} ->
            init(Parent, Ref, Socket, Transport, ProxyInfo, Opts, cowboy_http2);
        _ -> %% http/1.1 or no protocol negotiated.
            init(Parent, Ref, Socket, Transport, ProxyInfo, Opts, cowboy_http)
    end.

init(Parent, Ref, Socket, Transport, ProxyInfo, Opts, Protocol) ->
    _ = case maps:get(connection_type, Opts, supervisor) of
        worker -> ok;
        supervisor -> process_flag(trap_exit, true)
    end,
    Protocol:init(Parent, Ref, Socket, Transport, ProxyInfo, Opts).

get_proxy_info(Ref, #{proxy_header := true}) ->
    case ranch:recv_proxy_header(Ref, 1000) of
        {ok, ProxyInfo} -> ProxyInfo;
        {error, closed} -> exit({shutdown, closed})
    end;
get_proxy_info(_, _) ->
    undefined.

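%% Usage sketch (listener name, ports and file paths are examples, not taken
%% from this repository): start a TLS listener whose connections are handled
%% by the module above. ALPN decides between cowboy_http2 and cowboy_http in
%% connection_process/4; advertising it is assumed here to be done through the
%% standard ssl transport option.
%%
%%   {ok, _} = cowboy:start_tls(example_tls, [
%%       {port, 8443},
%%       {certfile, "cert.pem"},
%%       {keyfile, "key.pem"},
%%       {alpn_preferred_protocols, [<<"h2">>, <<"http/1.1">>]}
%%   ], #{env => #{dispatch => Dispatch}}).
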
@ -0,0 +1,192 @@
|
||||||
|
%% Copyright (c) 2017-2024, Loïc Hoguin <essen@ninenines.eu>
|
||||||
|
%%
|
||||||
|
%% Permission to use, copy, modify, and/or distribute this software for any
|
||||||
|
%% purpose with or without fee is hereby granted, provided that the above
|
||||||
|
%% copyright notice and this permission notice appear in all copies.
|
||||||
|
%%
|
||||||
|
%% THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
|
||||||
|
%% WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
|
||||||
|
%% MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
|
||||||
|
%% ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
|
||||||
|
%% WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
|
||||||
|
%% ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
|
||||||
|
%% OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
|
||||||
|
|
||||||
|
-module(cowboy_tracer_h).
|
||||||
|
-behavior(cowboy_stream).
|
||||||
|
|
||||||
|
-export([init/3]).
|
||||||
|
-export([data/4]).
|
||||||
|
-export([info/3]).
|
||||||
|
-export([terminate/3]).
|
||||||
|
-export([early_error/5]).
|
||||||
|
|
||||||
|
-export([set_trace_patterns/0]).
|
||||||
|
|
||||||
|
-export([tracer_process/3]).
|
||||||
|
-export([system_continue/3]).
|
||||||
|
-export([system_terminate/4]).
|
||||||
|
-export([system_code_change/4]).
|
||||||
|
|
||||||
|
-type match_predicate()
    :: fun((cowboy_stream:streamid(), cowboy_req:req(), cowboy:opts()) -> boolean()).

-type tracer_match_specs() :: [match_predicate()
    | {method, binary()}
    | {host, binary()}
    | {path, binary()}
    | {path_start, binary()}
    | {header, binary()}
    | {header, binary(), binary()}
    | {peer_ip, inet:ip_address()}
].
-export_type([tracer_match_specs/0]).

-type tracer_callback() :: fun((init | terminate | tuple(), any()) -> any()).
-export_type([tracer_callback/0]).

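%% Configuration sketch: the tracer only activates when this module is added
%% to stream_handlers and both options below are present in the protocol
%% options (see init_tracer/3 further down). The callback and match spec
%% shown are examples; the callback just prints events.
%%
%%   ProtocolOpts = #{
%%       env => #{dispatch => Dispatch},
%%       stream_handlers => [cowboy_tracer_h, cowboy_stream_h],
%%       tracer_callback => fun(Event, State) -> io:format("~p~n", [Event]), State end,
%%       tracer_match_specs => [{path_start, <<"/api">>}]
%%   },
%%   {ok, _} = cowboy:start_clear(example_http, [{port, 8080}], ProtocolOpts).
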
-spec init(cowboy_stream:streamid(), cowboy_req:req(), cowboy:opts())
|
||||||
|
-> {cowboy_stream:commands(), any()}.
|
||||||
|
init(StreamID, Req, Opts) ->
|
||||||
|
init_tracer(StreamID, Req, Opts),
|
||||||
|
cowboy_stream:init(StreamID, Req, Opts).
|
||||||
|
|
||||||
|
-spec data(cowboy_stream:streamid(), cowboy_stream:fin(), cowboy_req:resp_body(), State)
|
||||||
|
-> {cowboy_stream:commands(), State} when State::any().
|
||||||
|
data(StreamID, IsFin, Data, Next) ->
|
||||||
|
cowboy_stream:data(StreamID, IsFin, Data, Next).
|
||||||
|
|
||||||
|
-spec info(cowboy_stream:streamid(), any(), State)
|
||||||
|
-> {cowboy_stream:commands(), State} when State::any().
|
||||||
|
info(StreamID, Info, Next) ->
|
||||||
|
cowboy_stream:info(StreamID, Info, Next).
|
||||||
|
|
||||||
|
-spec terminate(cowboy_stream:streamid(), cowboy_stream:reason(), any()) -> any().
|
||||||
|
terminate(StreamID, Reason, Next) ->
|
||||||
|
cowboy_stream:terminate(StreamID, Reason, Next).
|
||||||
|
|
||||||
|
-spec early_error(cowboy_stream:streamid(), cowboy_stream:reason(),
|
||||||
|
cowboy_stream:partial_req(), Resp, cowboy:opts()) -> Resp
|
||||||
|
when Resp::cowboy_stream:resp_command().
|
||||||
|
early_error(StreamID, Reason, PartialReq, Resp, Opts) ->
|
||||||
|
cowboy_stream:early_error(StreamID, Reason, PartialReq, Resp, Opts).
|
||||||
|
|
||||||
|
%% API.

%% These trace patterns are most likely not suitable for production.
-spec set_trace_patterns() -> ok.
set_trace_patterns() ->
    erlang:trace_pattern({'_', '_', '_'}, [{'_', [], [{return_trace}]}], [local]),
    erlang:trace_pattern(on_load, [{'_', [], [{return_trace}]}], [local]),
    ok.

%% Internal.
|
||||||
|
|
||||||
|
init_tracer(StreamID, Req, Opts=#{tracer_match_specs := List, tracer_callback := _}) ->
|
||||||
|
case match(List, StreamID, Req, Opts) of
|
||||||
|
false ->
|
||||||
|
ok;
|
||||||
|
true ->
|
||||||
|
start_tracer(StreamID, Req, Opts)
|
||||||
|
end;
|
||||||
|
%% When the options tracer_match_specs or tracer_callback
|
||||||
|
%% are not provided we do not enable tracing.
|
||||||
|
init_tracer(_, _, _) ->
|
||||||
|
ok.
|
||||||
|
|
||||||
|
match([], _, _, _) ->
|
||||||
|
true;
|
||||||
|
match([Predicate|Tail], StreamID, Req, Opts) when is_function(Predicate) ->
|
||||||
|
case Predicate(StreamID, Req, Opts) of
|
||||||
|
true -> match(Tail, StreamID, Req, Opts);
|
||||||
|
false -> false
|
||||||
|
end;
|
||||||
|
match([{method, Value}|Tail], StreamID, Req=#{method := Value}, Opts) ->
|
||||||
|
match(Tail, StreamID, Req, Opts);
|
||||||
|
match([{host, Value}|Tail], StreamID, Req=#{host := Value}, Opts) ->
|
||||||
|
match(Tail, StreamID, Req, Opts);
|
||||||
|
match([{path, Value}|Tail], StreamID, Req=#{path := Value}, Opts) ->
|
||||||
|
match(Tail, StreamID, Req, Opts);
|
||||||
|
match([{path_start, PathStart}|Tail], StreamID, Req=#{path := Path}, Opts) ->
|
||||||
|
Len = byte_size(PathStart),
|
||||||
|
case Path of
|
||||||
|
<<PathStart:Len/binary, _/bits>> -> match(Tail, StreamID, Req, Opts);
|
||||||
|
_ -> false
|
||||||
|
end;
|
||||||
|
match([{header, Name}|Tail], StreamID, Req=#{headers := Headers}, Opts) ->
|
||||||
|
case Headers of
|
||||||
|
#{Name := _} -> match(Tail, StreamID, Req, Opts);
|
||||||
|
_ -> false
|
||||||
|
end;
|
||||||
|
match([{header, Name, Value}|Tail], StreamID, Req=#{headers := Headers}, Opts) ->
|
||||||
|
case Headers of
|
||||||
|
#{Name := Value} -> match(Tail, StreamID, Req, Opts);
|
||||||
|
_ -> false
|
||||||
|
end;
|
||||||
|
match([{peer_ip, IP}|Tail], StreamID, Req=#{peer := {IP, _}}, Opts) ->
|
||||||
|
match(Tail, StreamID, Req, Opts);
|
||||||
|
match(_, _, _, _) ->
|
||||||
|
false.
|
||||||
|
|
||||||
|
%% We only start the tracer if one wasn't started before.
|
||||||
|
start_tracer(StreamID, Req, Opts) ->
|
||||||
|
case erlang:trace_info(self(), tracer) of
|
||||||
|
{tracer, []} ->
|
||||||
|
TracerPid = proc_lib:spawn_link(?MODULE, tracer_process, [StreamID, Req, Opts]),
|
||||||
|
%% The default flags are probably not suitable for production.
|
||||||
|
Flags = maps:get(tracer_flags, Opts, [
|
||||||
|
send, 'receive', call, return_to,
|
||||||
|
procs, ports, monotonic_timestamp,
|
||||||
|
%% The set_on_spawn flag is necessary to catch events
|
||||||
|
%% from request processes.
|
||||||
|
set_on_spawn
|
||||||
|
]),
|
||||||
|
erlang:trace(self(), true, [{tracer, TracerPid}|Flags]),
|
||||||
|
ok;
|
||||||
|
_ ->
|
||||||
|
ok
|
||||||
|
end.
|
||||||
|
|
||||||
|
%% Tracer process.
|
||||||
|
|
||||||
|
-spec tracer_process(_, _, _) -> no_return().
|
||||||
|
tracer_process(StreamID, Req=#{pid := Parent}, Opts=#{tracer_callback := Fun}) ->
|
||||||
|
%% This is necessary because otherwise the tracer could stop
|
||||||
|
%% before it has finished processing the events in its queue.
|
||||||
|
process_flag(trap_exit, true),
|
||||||
|
State = Fun(init, {StreamID, Req, Opts}),
|
||||||
|
tracer_loop(Parent, Opts, State).
|
||||||
|
|
||||||
|
tracer_loop(Parent, Opts=#{tracer_callback := Fun}, State0) ->
|
||||||
|
receive
|
||||||
|
Msg when element(1, Msg) =:= trace; element(1, Msg) =:= trace_ts ->
|
||||||
|
State = Fun(Msg, State0),
|
||||||
|
tracer_loop(Parent, Opts, State);
|
||||||
|
{'EXIT', Parent, Reason} ->
|
||||||
|
tracer_terminate(Reason, Opts, State0);
|
||||||
|
{system, From, Request} ->
|
||||||
|
sys:handle_system_msg(Request, From, Parent, ?MODULE, [], {Opts, State0});
|
||||||
|
Msg ->
|
||||||
|
cowboy:log(warning, "~p: Tracer process received stray message ~9999p~n",
|
||||||
|
[?MODULE, Msg], Opts),
|
||||||
|
tracer_loop(Parent, Opts, State0)
|
||||||
|
end.
|
||||||
|
|
||||||
|
-spec tracer_terminate(_, _, _) -> no_return().
|
||||||
|
tracer_terminate(Reason, #{tracer_callback := Fun}, State) ->
|
||||||
|
_ = Fun(terminate, State),
|
||||||
|
exit(Reason).
|
||||||
|
|
||||||
|
%% System callbacks.
|
||||||
|
|
||||||
|
-spec system_continue(pid(), _, {cowboy:opts(), any()}) -> no_return().
|
||||||
|
system_continue(Parent, _, {Opts, State}) ->
|
||||||
|
tracer_loop(Parent, Opts, State).
|
||||||
|
|
||||||
|
-spec system_terminate(any(), _, _, _) -> no_return().
|
||||||
|
system_terminate(Reason, _, _, {Opts, State}) ->
|
||||||
|
tracer_terminate(Reason, Opts, State).
|
||||||
|
|
||||||
|
-spec system_code_change(Misc, _, _, _) -> {ok, Misc} when Misc::any().
|
||||||
|
system_code_change(Misc, _, _, _) ->
|
||||||
|
{ok, Misc}.
|
|
@ -0,0 +1,707 @@
|
||||||
|
%% Copyright (c) 2011-2024, Loïc Hoguin <essen@ninenines.eu>
|
||||||
|
%%
|
||||||
|
%% Permission to use, copy, modify, and/or distribute this software for any
|
||||||
|
%% purpose with or without fee is hereby granted, provided that the above
|
||||||
|
%% copyright notice and this permission notice appear in all copies.
|
||||||
|
%%
|
||||||
|
%% THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
|
||||||
|
%% WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
|
||||||
|
%% MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
|
||||||
|
%% ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
|
||||||
|
%% WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
|
||||||
|
%% ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
|
||||||
|
%% OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
|
||||||
|
|
||||||
|
%% Cowboy supports versions 7 through 17 of the Websocket drafts.
|
||||||
|
%% It also supports RFC6455, the proposed standard for Websocket.
|
||||||
|
-module(cowboy_websocket).
|
||||||
|
-behaviour(cowboy_sub_protocol).
|
||||||
|
|
||||||
|
-export([is_upgrade_request/1]).
|
||||||
|
-export([upgrade/4]).
|
||||||
|
-export([upgrade/5]).
|
||||||
|
-export([takeover/7]).
|
||||||
|
-export([loop/3]).
|
||||||
|
|
||||||
|
-export([system_continue/3]).
|
||||||
|
-export([system_terminate/4]).
|
||||||
|
-export([system_code_change/4]).
|
||||||
|
|
||||||
|
-type commands() :: [cow_ws:frame()
    | {active, boolean()}
    | {deflate, boolean()}
    | {set_options, map()}
    | {shutdown_reason, any()}
].
-export_type([commands/0]).

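%% Handler sketch: a module implementing the callbacks declared below and
%% returning commands() from them. The module name and frame contents are
%% made up for illustration.
-module(my_echo_ws_handler).
-behaviour(cowboy_websocket).
-export([init/2, websocket_init/1, websocket_handle/2, websocket_info/2]).

init(Req, State) ->
    %% Switch to the Websocket sub protocol with a custom idle timeout.
    {cowboy_websocket, Req, State, #{idle_timeout => 60000}}.

websocket_init(State) ->
    {[{text, <<"hello">>}], State}.

websocket_handle({text, Msg}, State) ->
    %% Echo text frames back to the client.
    {[{text, Msg}], State};
websocket_handle(_Frame, State) ->
    {[], State}.

websocket_info({send, Text}, State) ->
    %% Forward Erlang messages from other processes as text frames.
    {[{text, Text}], State};
websocket_info(_Info, State) ->
    {[], State}.
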
-type call_result(State) :: {commands(), State} | {commands(), State, hibernate}.
|
||||||
|
|
||||||
|
-type deprecated_call_result(State) :: {ok, State}
|
||||||
|
| {ok, State, hibernate}
|
||||||
|
| {reply, cow_ws:frame() | [cow_ws:frame()], State}
|
||||||
|
| {reply, cow_ws:frame() | [cow_ws:frame()], State, hibernate}
|
||||||
|
| {stop, State}.
|
||||||
|
|
||||||
|
-type terminate_reason() :: normal | stop | timeout
|
||||||
|
| remote | {remote, cow_ws:close_code(), binary()}
|
||||||
|
| {error, badencoding | badframe | closed | atom()}
|
||||||
|
| {crash, error | exit | throw, any()}.
|
||||||
|
|
||||||
|
-callback init(Req, any())
|
||||||
|
-> {ok | module(), Req, any()}
|
||||||
|
| {module(), Req, any(), any()}
|
||||||
|
when Req::cowboy_req:req().
|
||||||
|
|
||||||
|
-callback websocket_init(State)
|
||||||
|
-> call_result(State) | deprecated_call_result(State) when State::any().
|
||||||
|
-optional_callbacks([websocket_init/1]).
|
||||||
|
|
||||||
|
-callback websocket_handle(ping | pong | {text | binary | ping | pong, binary()}, State)
|
||||||
|
-> call_result(State) | deprecated_call_result(State) when State::any().
|
||||||
|
-callback websocket_info(any(), State)
|
||||||
|
-> call_result(State) | deprecated_call_result(State) when State::any().
|
||||||
|
|
||||||
|
-callback terminate(any(), cowboy_req:req(), any()) -> ok.
|
||||||
|
-optional_callbacks([terminate/3]).
|
||||||
|
|
||||||
|
-type opts() :: #{
|
||||||
|
active_n => pos_integer(),
|
||||||
|
compress => boolean(),
|
||||||
|
deflate_opts => cow_ws:deflate_opts(),
|
||||||
|
idle_timeout => timeout(),
|
||||||
|
max_frame_size => non_neg_integer() | infinity,
|
||||||
|
req_filter => fun((cowboy_req:req()) -> map()),
|
||||||
|
validate_utf8 => boolean()
|
||||||
|
}.
|
||||||
|
-export_type([opts/0]).
|
||||||
|
|
||||||
|
-record(state, {
|
||||||
|
parent :: undefined | pid(),
|
||||||
|
ref :: ranch:ref(),
|
||||||
|
socket = undefined :: inet:socket() | {pid(), cowboy_stream:streamid()} | undefined,
|
||||||
|
transport = undefined :: module() | undefined,
|
||||||
|
opts = #{} :: opts(),
|
||||||
|
active = true :: boolean(),
|
||||||
|
handler :: module(),
|
||||||
|
key = undefined :: undefined | binary(),
|
||||||
|
timeout_ref = undefined :: undefined | reference(),
|
||||||
|
messages = undefined :: undefined | {atom(), atom(), atom()}
|
||||||
|
| {atom(), atom(), atom(), atom()},
|
||||||
|
hibernate = false :: boolean(),
|
||||||
|
frag_state = undefined :: cow_ws:frag_state(),
|
||||||
|
frag_buffer = <<>> :: binary(),
|
||||||
|
utf8_state :: cow_ws:utf8_state(),
|
||||||
|
deflate = true :: boolean(),
|
||||||
|
extensions = #{} :: map(),
|
||||||
|
req = #{} :: map(),
|
||||||
|
shutdown_reason = normal :: any()
|
||||||
|
}).
|
||||||
|
|
||||||
|
%% Because the HTTP/1.1 and HTTP/2 handshakes are so different,
|
||||||
|
%% this function is necessary to figure out whether a request
|
||||||
|
%% is trying to upgrade to the Websocket protocol.
|
||||||
|
|
||||||
|
-spec is_upgrade_request(cowboy_req:req()) -> boolean().
|
||||||
|
is_upgrade_request(#{version := 'HTTP/2', method := <<"CONNECT">>, protocol := Protocol}) ->
|
||||||
|
<<"websocket">> =:= cowboy_bstr:to_lower(Protocol);
|
||||||
|
is_upgrade_request(Req=#{version := 'HTTP/1.1', method := <<"GET">>}) ->
|
||||||
|
ConnTokens = cowboy_req:parse_header(<<"connection">>, Req, []),
|
||||||
|
case lists:member(<<"upgrade">>, ConnTokens) of
|
||||||
|
false ->
|
||||||
|
false;
|
||||||
|
true ->
|
||||||
|
UpgradeTokens = cowboy_req:parse_header(<<"upgrade">>, Req),
|
||||||
|
lists:member(<<"websocket">>, UpgradeTokens)
|
||||||
|
end;
|
||||||
|
is_upgrade_request(_) ->
|
||||||
|
false.
|
||||||
|
|
||||||
|
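%% From an HTTP handler, the switch to this sub protocol is requested by
%% returning the module name from init/2. Sketch (handler module is assumed;
%% the explicit check is optional since upgrade/4,5 validates the request
%% again):
%%
%%   init(Req, State) ->
%%       case cowboy_websocket:is_upgrade_request(Req) of
%%           true -> {cowboy_websocket, Req, State};
%%           false -> {ok, cowboy_req:reply(400, Req), State}
%%       end.
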
%% Stream process.
|
||||||
|
|
||||||
|
-spec upgrade(Req, Env, module(), any())
|
||||||
|
-> {ok, Req, Env}
|
||||||
|
when Req::cowboy_req:req(), Env::cowboy_middleware:env().
|
||||||
|
upgrade(Req, Env, Handler, HandlerState) ->
|
||||||
|
upgrade(Req, Env, Handler, HandlerState, #{}).
|
||||||
|
|
||||||
|
-spec upgrade(Req, Env, module(), any(), opts())
|
||||||
|
-> {ok, Req, Env}
|
||||||
|
when Req::cowboy_req:req(), Env::cowboy_middleware:env().
|
||||||
|
%% @todo Immediately crash if a response has already been sent.
|
||||||
|
upgrade(Req0=#{version := Version}, Env, Handler, HandlerState, Opts) ->
|
||||||
|
FilteredReq = case maps:get(req_filter, Opts, undefined) of
|
||||||
|
undefined -> maps:with([method, version, scheme, host, port, path, qs, peer], Req0);
|
||||||
|
FilterFun -> FilterFun(Req0)
|
||||||
|
end,
|
||||||
|
Utf8State = case maps:get(validate_utf8, Opts, true) of
|
||||||
|
true -> 0;
|
||||||
|
false -> undefined
|
||||||
|
end,
|
||||||
|
State0 = #state{opts=Opts, handler=Handler, utf8_state=Utf8State, req=FilteredReq},
|
||||||
|
try websocket_upgrade(State0, Req0) of
|
||||||
|
{ok, State, Req} ->
|
||||||
|
websocket_handshake(State, Req, HandlerState, Env);
|
||||||
|
%% The status code 426 is specific to HTTP/1.1 connections.
|
||||||
|
{error, upgrade_required} when Version =:= 'HTTP/1.1' ->
|
||||||
|
{ok, cowboy_req:reply(426, #{
|
||||||
|
<<"connection">> => <<"upgrade">>,
|
||||||
|
<<"upgrade">> => <<"websocket">>
|
||||||
|
}, Req0), Env};
|
||||||
|
%% Use a generic 400 error for HTTP/2.
|
||||||
|
{error, upgrade_required} ->
|
||||||
|
{ok, cowboy_req:reply(400, Req0), Env}
|
||||||
|
catch _:_ ->
|
||||||
|
%% @todo Probably log something here?
|
||||||
|
%% @todo Test that we can have 2 /ws 400 status code in a row on the same connection.
|
||||||
|
%% @todo Does this even work?
|
||||||
|
{ok, cowboy_req:reply(400, Req0), Env}
|
||||||
|
end.
|
||||||
|
|
||||||
|
websocket_upgrade(State, Req=#{version := Version}) ->
|
||||||
|
case is_upgrade_request(Req) of
|
||||||
|
false ->
|
||||||
|
{error, upgrade_required};
|
||||||
|
true when Version =:= 'HTTP/1.1' ->
|
||||||
|
Key = cowboy_req:header(<<"sec-websocket-key">>, Req),
|
||||||
|
false = Key =:= undefined,
|
||||||
|
websocket_version(State#state{key=Key}, Req);
|
||||||
|
true ->
|
||||||
|
websocket_version(State, Req)
|
||||||
|
end.
|
||||||
|
|
||||||
|
websocket_version(State, Req) ->
|
||||||
|
WsVersion = cowboy_req:parse_header(<<"sec-websocket-version">>, Req),
|
||||||
|
case WsVersion of
|
||||||
|
7 -> ok;
|
||||||
|
8 -> ok;
|
||||||
|
13 -> ok
|
||||||
|
end,
|
||||||
|
websocket_extensions(State, Req#{websocket_version => WsVersion}).
|
||||||
|
|
||||||
|
websocket_extensions(State=#state{opts=Opts}, Req) ->
|
||||||
|
%% @todo We want different options for this. For example
|
||||||
|
%% * compress everything auto
|
||||||
|
%% * compress only text auto
|
||||||
|
%% * compress only binary auto
|
||||||
|
%% * compress nothing auto (but still enabled it)
|
||||||
|
%% * disable compression
|
||||||
|
Compress = maps:get(compress, Opts, false),
|
||||||
|
case {Compress, cowboy_req:parse_header(<<"sec-websocket-extensions">>, Req)} of
|
||||||
|
{true, Extensions} when Extensions =/= undefined ->
|
||||||
|
websocket_extensions(State, Req, Extensions, []);
|
||||||
|
_ ->
|
||||||
|
{ok, State, Req}
|
||||||
|
end.
|
||||||
|
|
||||||
|
websocket_extensions(State, Req, [], []) ->
|
||||||
|
{ok, State, Req};
|
||||||
|
websocket_extensions(State, Req, [], [<<", ">>|RespHeader]) ->
|
||||||
|
{ok, State, cowboy_req:set_resp_header(<<"sec-websocket-extensions">>, lists:reverse(RespHeader), Req)};
|
||||||
|
%% For HTTP/2 we ARE on the controlling process and do NOT want to update the owner.
|
||||||
|
websocket_extensions(State=#state{opts=Opts, extensions=Extensions},
|
||||||
|
Req=#{pid := Pid, version := Version},
|
||||||
|
[{<<"permessage-deflate">>, Params}|Tail], RespHeader) ->
|
||||||
|
DeflateOpts0 = maps:get(deflate_opts, Opts, #{}),
|
||||||
|
DeflateOpts = case Version of
|
||||||
|
'HTTP/1.1' -> DeflateOpts0#{owner => Pid};
|
||||||
|
_ -> DeflateOpts0
|
||||||
|
end,
|
||||||
|
try cow_ws:negotiate_permessage_deflate(Params, Extensions, DeflateOpts) of
|
||||||
|
{ok, RespExt, Extensions2} ->
|
||||||
|
websocket_extensions(State#state{extensions=Extensions2},
|
||||||
|
Req, Tail, [<<", ">>, RespExt|RespHeader]);
|
||||||
|
ignore ->
|
||||||
|
websocket_extensions(State, Req, Tail, RespHeader)
|
||||||
|
catch exit:{error, incompatible_zlib_version, _} ->
|
||||||
|
websocket_extensions(State, Req, Tail, RespHeader)
|
||||||
|
end;
|
||||||
|
websocket_extensions(State=#state{opts=Opts, extensions=Extensions},
|
||||||
|
Req=#{pid := Pid, version := Version},
|
||||||
|
[{<<"x-webkit-deflate-frame">>, Params}|Tail], RespHeader) ->
|
||||||
|
DeflateOpts0 = maps:get(deflate_opts, Opts, #{}),
|
||||||
|
DeflateOpts = case Version of
|
||||||
|
'HTTP/1.1' -> DeflateOpts0#{owner => Pid};
|
||||||
|
_ -> DeflateOpts0
|
||||||
|
end,
|
||||||
|
try cow_ws:negotiate_x_webkit_deflate_frame(Params, Extensions, DeflateOpts) of
|
||||||
|
{ok, RespExt, Extensions2} ->
|
||||||
|
websocket_extensions(State#state{extensions=Extensions2},
|
||||||
|
Req, Tail, [<<", ">>, RespExt|RespHeader]);
|
||||||
|
ignore ->
|
||||||
|
websocket_extensions(State, Req, Tail, RespHeader)
|
||||||
|
catch exit:{error, incompatible_zlib_version, _} ->
|
||||||
|
websocket_extensions(State, Req, Tail, RespHeader)
|
||||||
|
end;
|
||||||
|
websocket_extensions(State, Req, [_|Tail], RespHeader) ->
|
||||||
|
websocket_extensions(State, Req, Tail, RespHeader).
|
||||||
|
|
||||||
|
-spec websocket_handshake(#state{}, Req, any(), Env)
    -> {ok, Req, Env}
    when Req::cowboy_req:req(), Env::cowboy_middleware:env().
websocket_handshake(State=#state{key=Key},
        Req=#{version := 'HTTP/1.1', pid := Pid, streamid := StreamID},
        HandlerState, Env) ->
    Challenge = base64:encode(crypto:hash(sha,
        << Key/binary, "258EAFA5-E914-47DA-95CA-C5AB0DC85B11" >>)),
    %% @todo We don't want date and server headers.
    Headers = cowboy_req:response_headers(#{
        <<"connection">> => <<"Upgrade">>,
        <<"upgrade">> => <<"websocket">>,
        <<"sec-websocket-accept">> => Challenge
    }, Req),
    Pid ! {{Pid, StreamID}, {switch_protocol, Headers, ?MODULE, {State, HandlerState}}},
    {ok, Req, Env};
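%% Worked example of the challenge computation above, using the sample key
%% from RFC 6455, section 1.3:
%%
%%   1> Key = <<"dGhlIHNhbXBsZSBub25jZQ==">>.
%%   2> base64:encode(crypto:hash(sha,
%%          << Key/binary, "258EAFA5-E914-47DA-95CA-C5AB0DC85B11" >>)).
%%   <<"s3pPLMBiTxaQ9kYGzzhZRbK+xOo=">>
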
%% For HTTP/2 we do not let the process die, we instead keep it
|
||||||
|
%% for the Websocket stream. This is because in HTTP/2 we only
|
||||||
|
%% have a stream, it doesn't take over the whole connection.
|
||||||
|
websocket_handshake(State, Req=#{ref := Ref, pid := Pid, streamid := StreamID},
|
||||||
|
HandlerState, _Env) ->
|
||||||
|
%% @todo We don't want date and server headers.
|
||||||
|
Headers = cowboy_req:response_headers(#{}, Req),
|
||||||
|
Pid ! {{Pid, StreamID}, {switch_protocol, Headers, ?MODULE, {State, HandlerState}}},
|
||||||
|
takeover(Pid, Ref, {Pid, StreamID}, undefined, undefined, <<>>,
|
||||||
|
{State, HandlerState}).
|
||||||
|
|
||||||
|
%% Connection process.
|
||||||
|
|
||||||
|
-record(ps_header, {
|
||||||
|
buffer = <<>> :: binary()
|
||||||
|
}).
|
||||||
|
|
||||||
|
-record(ps_payload, {
|
||||||
|
type :: cow_ws:frame_type(),
|
||||||
|
len :: non_neg_integer(),
|
||||||
|
mask_key :: cow_ws:mask_key(),
|
||||||
|
rsv :: cow_ws:rsv(),
|
||||||
|
close_code = undefined :: undefined | cow_ws:close_code(),
|
||||||
|
unmasked = <<>> :: binary(),
|
||||||
|
unmasked_len = 0 :: non_neg_integer(),
|
||||||
|
buffer = <<>> :: binary()
|
||||||
|
}).
|
||||||
|
|
||||||
|
-type parse_state() :: #ps_header{} | #ps_payload{}.
|
||||||
|
|
||||||
|
-spec takeover(pid(), ranch:ref(), inet:socket() | {pid(), cowboy_stream:streamid()},
|
||||||
|
module() | undefined, any(), binary(),
|
||||||
|
{#state{}, any()}) -> no_return().
|
||||||
|
takeover(Parent, Ref, Socket, Transport, _Opts, Buffer,
|
||||||
|
{State0=#state{handler=Handler}, HandlerState}) ->
|
||||||
|
%% @todo We should have an option to disable this behavior.
|
||||||
|
ranch:remove_connection(Ref),
|
||||||
|
Messages = case Transport of
|
||||||
|
undefined -> undefined;
|
||||||
|
_ -> Transport:messages()
|
||||||
|
end,
|
||||||
|
State = loop_timeout(State0#state{parent=Parent,
|
||||||
|
ref=Ref, socket=Socket, transport=Transport,
|
||||||
|
key=undefined, messages=Messages}),
|
||||||
|
%% We call parse_header/3 immediately because there might be
|
||||||
|
%% some data in the buffer that was sent along with the handshake.
|
||||||
|
%% While it is not allowed by the protocol to send frames immediately,
|
||||||
|
%% we still want to process that data if any.
|
||||||
|
case erlang:function_exported(Handler, websocket_init, 1) of
|
||||||
|
true -> handler_call(State, HandlerState, #ps_header{buffer=Buffer},
|
||||||
|
websocket_init, undefined, fun after_init/3);
|
||||||
|
false -> after_init(State, HandlerState, #ps_header{buffer=Buffer})
|
||||||
|
end.
|
||||||
|
|
||||||
|
after_init(State=#state{active=true}, HandlerState, ParseState) ->
|
||||||
|
%% Enable active,N for HTTP/1.1, and auto read_body for HTTP/2.
|
||||||
|
%% We must do this only after calling websocket_init/1 (if any)
|
||||||
|
%% to give the handler a chance to disable active mode immediately.
|
||||||
|
setopts_active(State),
|
||||||
|
maybe_read_body(State),
|
||||||
|
parse_header(State, HandlerState, ParseState);
|
||||||
|
after_init(State, HandlerState, ParseState) ->
|
||||||
|
parse_header(State, HandlerState, ParseState).
|
||||||
|
|
||||||
|
%% We have two ways of reading the body for Websocket. For HTTP/1.1
|
||||||
|
%% we have full control of the socket and can therefore use active,N.
|
||||||
|
%% For HTTP/2 we are just a stream, and are instead using read_body
|
||||||
|
%% (automatic mode). Technically HTTP/2 will only go passive after
|
||||||
|
%% receiving the next data message, while HTTP/1.1 goes passive
|
||||||
|
%% immediately but there might still be data to be processed in
|
||||||
|
%% the message queue.
|
||||||
|
|
||||||
|
setopts_active(#state{transport=undefined}) ->
|
||||||
|
ok;
|
||||||
|
setopts_active(#state{socket=Socket, transport=Transport, opts=Opts}) ->
|
||||||
|
N = maps:get(active_n, Opts, 100),
|
||||||
|
Transport:setopts(Socket, [{active, N}]).
|
||||||
|
|
||||||
|
maybe_read_body(#state{socket=Stream={Pid, _}, transport=undefined, active=true}) ->
|
||||||
|
%% @todo Keep Ref around.
|
||||||
|
ReadBodyRef = make_ref(),
|
||||||
|
Pid ! {Stream, {read_body, self(), ReadBodyRef, auto, infinity}},
|
||||||
|
ok;
|
||||||
|
maybe_read_body(_) ->
|
||||||
|
ok.
|
||||||
|
|
||||||
|
active(State) ->
|
||||||
|
setopts_active(State),
|
||||||
|
maybe_read_body(State),
|
||||||
|
State#state{active=true}.
|
||||||
|
|
||||||
|
passive(State=#state{transport=undefined}) ->
|
||||||
|
%% Unfortunately we cannot currently cancel read_body.
|
||||||
|
%% But that's OK, we will just stop reading the body
|
||||||
|
%% after the next message.
|
||||||
|
State#state{active=false};
|
||||||
|
passive(State=#state{socket=Socket, transport=Transport, messages=Messages}) ->
|
||||||
|
Transport:setopts(Socket, [{active, false}]),
|
||||||
|
flush_passive(Socket, Messages),
|
||||||
|
State#state{active=false}.
|
||||||
|
|
||||||
|
flush_passive(Socket, Messages) ->
|
||||||
|
receive
|
||||||
|
{Passive, Socket} when Passive =:= element(4, Messages);
|
||||||
|
%% Hardcoded for compatibility with Ranch 1.x.
|
||||||
|
Passive =:= tcp_passive; Passive =:= ssl_passive ->
|
||||||
|
flush_passive(Socket, Messages)
|
||||||
|
after 0 ->
|
||||||
|
ok
|
||||||
|
end.
|
||||||
|
|
||||||
|
before_loop(State=#state{hibernate=true}, HandlerState, ParseState) ->
|
||||||
|
proc_lib:hibernate(?MODULE, loop,
|
||||||
|
[State#state{hibernate=false}, HandlerState, ParseState]);
|
||||||
|
before_loop(State, HandlerState, ParseState) ->
|
||||||
|
loop(State, HandlerState, ParseState).
|
||||||
|
|
||||||
|
-spec loop_timeout(#state{}) -> #state{}.
|
||||||
|
loop_timeout(State=#state{opts=Opts, timeout_ref=PrevRef}) ->
|
||||||
|
_ = case PrevRef of
|
||||||
|
undefined -> ignore;
|
||||||
|
PrevRef -> erlang:cancel_timer(PrevRef)
|
||||||
|
end,
|
||||||
|
case maps:get(idle_timeout, Opts, 60000) of
|
||||||
|
infinity ->
|
||||||
|
State#state{timeout_ref=undefined};
|
||||||
|
Timeout ->
|
||||||
|
TRef = erlang:start_timer(Timeout, self(), ?MODULE),
|
||||||
|
State#state{timeout_ref=TRef}
|
||||||
|
end.
|
||||||
|
|
||||||
|
-spec loop(#state{}, any(), parse_state()) -> no_return().
|
||||||
|
loop(State=#state{parent=Parent, socket=Socket, messages=Messages,
|
||||||
|
timeout_ref=TRef}, HandlerState, ParseState) ->
|
||||||
|
receive
|
||||||
|
%% Socket messages. (HTTP/1.1)
|
||||||
|
{OK, Socket, Data} when OK =:= element(1, Messages) ->
|
||||||
|
State2 = loop_timeout(State),
|
||||||
|
parse(State2, HandlerState, ParseState, Data);
|
||||||
|
{Closed, Socket} when Closed =:= element(2, Messages) ->
|
||||||
|
terminate(State, HandlerState, {error, closed});
|
||||||
|
{Error, Socket, Reason} when Error =:= element(3, Messages) ->
|
||||||
|
terminate(State, HandlerState, {error, Reason});
|
||||||
|
{Passive, Socket} when Passive =:= element(4, Messages);
|
||||||
|
%% Hardcoded for compatibility with Ranch 1.x.
|
||||||
|
Passive =:= tcp_passive; Passive =:= ssl_passive ->
|
||||||
|
setopts_active(State),
|
||||||
|
loop(State, HandlerState, ParseState);
|
||||||
|
%% Body reading messages. (HTTP/2)
|
||||||
|
{request_body, _Ref, nofin, Data} ->
|
||||||
|
maybe_read_body(State),
|
||||||
|
State2 = loop_timeout(State),
|
||||||
|
parse(State2, HandlerState, ParseState, Data);
|
||||||
|
%% @todo We need to handle this case as if it was an {error, closed}
|
||||||
|
%% but not before we finish processing frames. We probably should have
|
||||||
|
%% a check in before_loop to let us stop looping if a flag is set.
|
||||||
|
{request_body, _Ref, fin, _, Data} ->
|
||||||
|
maybe_read_body(State),
|
||||||
|
State2 = loop_timeout(State),
|
||||||
|
parse(State2, HandlerState, ParseState, Data);
|
||||||
|
%% Timeouts.
|
||||||
|
{timeout, TRef, ?MODULE} ->
|
||||||
|
websocket_close(State, HandlerState, timeout);
|
||||||
|
{timeout, OlderTRef, ?MODULE} when is_reference(OlderTRef) ->
|
||||||
|
before_loop(State, HandlerState, ParseState);
|
||||||
|
%% System messages.
|
||||||
|
{'EXIT', Parent, Reason} ->
|
||||||
|
%% @todo We should exit gracefully.
|
||||||
|
exit(Reason);
|
||||||
|
{system, From, Request} ->
|
||||||
|
sys:handle_system_msg(Request, From, Parent, ?MODULE, [],
|
||||||
|
{State, HandlerState, ParseState});
|
||||||
|
%% Calls from supervisor module.
|
||||||
|
{'$gen_call', From, Call} ->
|
||||||
|
cowboy_children:handle_supervisor_call(Call, From, [], ?MODULE),
|
||||||
|
before_loop(State, HandlerState, ParseState);
|
||||||
|
Message ->
|
||||||
|
handler_call(State, HandlerState, ParseState,
|
||||||
|
websocket_info, Message, fun before_loop/3)
|
||||||
|
end.
|
||||||
|
|
||||||
|
parse(State, HandlerState, PS=#ps_header{buffer=Buffer}, Data) ->
|
||||||
|
parse_header(State, HandlerState, PS#ps_header{
|
||||||
|
buffer= <<Buffer/binary, Data/binary>>});
|
||||||
|
parse(State, HandlerState, PS=#ps_payload{buffer=Buffer}, Data) ->
|
||||||
|
parse_payload(State, HandlerState, PS#ps_payload{buffer= <<>>},
|
||||||
|
<<Buffer/binary, Data/binary>>).
|
||||||
|
|
||||||
|
parse_header(State=#state{opts=Opts, frag_state=FragState, extensions=Extensions},
|
||||||
|
HandlerState, ParseState=#ps_header{buffer=Data}) ->
|
||||||
|
MaxFrameSize = maps:get(max_frame_size, Opts, infinity),
|
||||||
|
case cow_ws:parse_header(Data, Extensions, FragState) of
|
||||||
|
%% All frames sent from the client to the server are masked.
|
||||||
|
{_, _, _, _, undefined, _} ->
|
||||||
|
websocket_close(State, HandlerState, {error, badframe});
|
||||||
|
{_, _, _, Len, _, _} when Len > MaxFrameSize ->
|
||||||
|
websocket_close(State, HandlerState, {error, badsize});
|
||||||
|
{Type, FragState2, Rsv, Len, MaskKey, Rest} ->
|
||||||
|
parse_payload(State#state{frag_state=FragState2}, HandlerState,
|
||||||
|
#ps_payload{type=Type, len=Len, mask_key=MaskKey, rsv=Rsv}, Rest);
|
||||||
|
more ->
|
||||||
|
before_loop(State, HandlerState, ParseState);
|
||||||
|
error ->
|
||||||
|
websocket_close(State, HandlerState, {error, badframe})
|
||||||
|
end.
|
||||||
|
|
||||||
|
parse_payload(State=#state{frag_state=FragState, utf8_state=Incomplete, extensions=Extensions},
|
||||||
|
HandlerState, ParseState=#ps_payload{
|
||||||
|
type=Type, len=Len, mask_key=MaskKey, rsv=Rsv,
|
||||||
|
unmasked=Unmasked, unmasked_len=UnmaskedLen}, Data) ->
|
||||||
|
case cow_ws:parse_payload(Data, MaskKey, Incomplete, UnmaskedLen,
|
||||||
|
Type, Len, FragState, Extensions, Rsv) of
|
||||||
|
{ok, CloseCode, Payload, Utf8State, Rest} ->
|
||||||
|
dispatch_frame(State#state{utf8_state=Utf8State}, HandlerState,
|
||||||
|
ParseState#ps_payload{unmasked= <<Unmasked/binary, Payload/binary>>,
|
||||||
|
close_code=CloseCode}, Rest);
|
||||||
|
{ok, Payload, Utf8State, Rest} ->
|
||||||
|
dispatch_frame(State#state{utf8_state=Utf8State}, HandlerState,
|
||||||
|
ParseState#ps_payload{unmasked= <<Unmasked/binary, Payload/binary>>},
|
||||||
|
Rest);
|
||||||
|
{more, CloseCode, Payload, Utf8State} ->
|
||||||
|
before_loop(State#state{utf8_state=Utf8State}, HandlerState,
|
||||||
|
ParseState#ps_payload{len=Len - byte_size(Data), close_code=CloseCode,
|
||||||
|
unmasked= <<Unmasked/binary, Payload/binary>>,
|
||||||
|
unmasked_len=UnmaskedLen + byte_size(Data)});
|
||||||
|
{more, Payload, Utf8State} ->
|
||||||
|
before_loop(State#state{utf8_state=Utf8State}, HandlerState,
|
||||||
|
ParseState#ps_payload{len=Len - byte_size(Data),
|
||||||
|
unmasked= <<Unmasked/binary, Payload/binary>>,
|
||||||
|
unmasked_len=UnmaskedLen + byte_size(Data)});
|
||||||
|
Error = {error, _Reason} ->
|
||||||
|
websocket_close(State, HandlerState, Error)
|
||||||
|
end.
|
||||||
|
|
||||||
|
dispatch_frame(State=#state{opts=Opts, frag_state=FragState, frag_buffer=SoFar}, HandlerState,
|
||||||
|
#ps_payload{type=Type0, unmasked=Payload0, close_code=CloseCode0}, RemainingData) ->
|
||||||
|
MaxFrameSize = maps:get(max_frame_size, Opts, infinity),
|
||||||
|
case cow_ws:make_frame(Type0, Payload0, CloseCode0, FragState) of
|
||||||
|
%% @todo Allow receiving fragments.
|
||||||
|
{fragment, _, _, Payload} when byte_size(Payload) + byte_size(SoFar) > MaxFrameSize ->
|
||||||
|
websocket_close(State, HandlerState, {error, badsize});
|
||||||
|
{fragment, nofin, _, Payload} ->
|
||||||
|
parse_header(State#state{frag_buffer= << SoFar/binary, Payload/binary >>},
|
||||||
|
HandlerState, #ps_header{buffer=RemainingData});
|
||||||
|
{fragment, fin, Type, Payload} ->
|
||||||
|
handler_call(State#state{frag_state=undefined, frag_buffer= <<>>}, HandlerState,
|
||||||
|
#ps_header{buffer=RemainingData},
|
||||||
|
websocket_handle, {Type, << SoFar/binary, Payload/binary >>},
|
||||||
|
fun parse_header/3);
|
||||||
|
close ->
|
||||||
|
websocket_close(State, HandlerState, remote);
|
||||||
|
{close, CloseCode, Payload} ->
|
||||||
|
websocket_close(State, HandlerState, {remote, CloseCode, Payload});
|
||||||
|
Frame = ping ->
|
||||||
|
transport_send(State, nofin, frame(pong, State)),
|
||||||
|
handler_call(State, HandlerState,
|
||||||
|
#ps_header{buffer=RemainingData},
|
||||||
|
websocket_handle, Frame, fun parse_header/3);
|
||||||
|
Frame = {ping, Payload} ->
|
||||||
|
transport_send(State, nofin, frame({pong, Payload}, State)),
|
||||||
|
handler_call(State, HandlerState,
|
||||||
|
#ps_header{buffer=RemainingData},
|
||||||
|
websocket_handle, Frame, fun parse_header/3);
|
||||||
|
Frame ->
|
||||||
|
handler_call(State, HandlerState,
|
||||||
|
#ps_header{buffer=RemainingData},
|
||||||
|
websocket_handle, Frame, fun parse_header/3)
|
||||||
|
end.
|
||||||
|
|
||||||
|
handler_call(State=#state{handler=Handler}, HandlerState,
|
||||||
|
ParseState, Callback, Message, NextState) ->
|
||||||
|
try case Callback of
|
||||||
|
websocket_init -> Handler:websocket_init(HandlerState);
|
||||||
|
_ -> Handler:Callback(Message, HandlerState)
|
||||||
|
end of
|
||||||
|
{Commands, HandlerState2} when is_list(Commands) ->
|
||||||
|
handler_call_result(State,
|
||||||
|
HandlerState2, ParseState, NextState, Commands);
|
||||||
|
{Commands, HandlerState2, hibernate} when is_list(Commands) ->
|
||||||
|
handler_call_result(State#state{hibernate=true},
|
||||||
|
HandlerState2, ParseState, NextState, Commands);
|
||||||
|
%% The following call results are deprecated.
|
||||||
|
{ok, HandlerState2} ->
|
||||||
|
NextState(State, HandlerState2, ParseState);
|
||||||
|
{ok, HandlerState2, hibernate} ->
|
||||||
|
NextState(State#state{hibernate=true}, HandlerState2, ParseState);
|
||||||
|
{reply, Payload, HandlerState2} ->
|
||||||
|
case websocket_send(Payload, State) of
|
||||||
|
ok ->
|
||||||
|
NextState(State, HandlerState2, ParseState);
|
||||||
|
stop ->
|
||||||
|
terminate(State, HandlerState2, stop);
|
||||||
|
Error = {error, _} ->
|
||||||
|
terminate(State, HandlerState2, Error)
|
||||||
|
end;
|
||||||
|
{reply, Payload, HandlerState2, hibernate} ->
|
||||||
|
case websocket_send(Payload, State) of
|
||||||
|
ok ->
|
||||||
|
NextState(State#state{hibernate=true},
|
||||||
|
HandlerState2, ParseState);
|
||||||
|
stop ->
|
||||||
|
terminate(State, HandlerState2, stop);
|
||||||
|
Error = {error, _} ->
|
||||||
|
terminate(State, HandlerState2, Error)
|
||||||
|
end;
|
||||||
|
{stop, HandlerState2} ->
|
||||||
|
websocket_close(State, HandlerState2, stop)
|
||||||
|
catch Class:Reason:Stacktrace ->
|
||||||
|
websocket_send_close(State, {crash, Class, Reason}),
|
||||||
|
handler_terminate(State, HandlerState, {crash, Class, Reason}),
|
||||||
|
erlang:raise(Class, Reason, Stacktrace)
|
||||||
|
end.
|
||||||
|
|
||||||
|
-spec handler_call_result(#state{}, any(), parse_state(), fun(), commands()) -> no_return().
|
||||||
|
handler_call_result(State0, HandlerState, ParseState, NextState, Commands) ->
|
||||||
|
case commands(Commands, State0, []) of
|
||||||
|
{ok, State} ->
|
||||||
|
NextState(State, HandlerState, ParseState);
|
||||||
|
{stop, State} ->
|
||||||
|
terminate(State, HandlerState, stop);
|
||||||
|
{Error = {error, _}, State} ->
|
||||||
|
terminate(State, HandlerState, Error)
|
||||||
|
end.
|
||||||
|
|
||||||
|
commands([], State, []) ->
|
||||||
|
{ok, State};
|
||||||
|
commands([], State, Data) ->
|
||||||
|
Result = transport_send(State, nofin, lists:reverse(Data)),
|
||||||
|
{Result, State};
|
||||||
|
commands([{active, Active}|Tail], State0=#state{active=Active0}, Data) when is_boolean(Active) ->
|
||||||
|
State = if
|
||||||
|
Active, not Active0 ->
|
||||||
|
active(State0);
|
||||||
|
Active0, not Active ->
|
||||||
|
passive(State0);
|
||||||
|
true ->
|
||||||
|
State0
|
||||||
|
end,
|
||||||
|
commands(Tail, State#state{active=Active}, Data);
|
||||||
|
commands([{deflate, Deflate}|Tail], State, Data) when is_boolean(Deflate) ->
|
||||||
|
commands(Tail, State#state{deflate=Deflate}, Data);
|
||||||
|
commands([{set_options, SetOpts}|Tail], State0=#state{opts=Opts}, Data) ->
|
||||||
|
State = case SetOpts of
|
||||||
|
#{idle_timeout := IdleTimeout} ->
|
||||||
|
loop_timeout(State0#state{opts=Opts#{idle_timeout => IdleTimeout}});
|
||||||
|
_ ->
|
||||||
|
State0
|
||||||
|
end,
|
||||||
|
commands(Tail, State, Data);
|
||||||
|
commands([{shutdown_reason, ShutdownReason}|Tail], State, Data) ->
|
||||||
|
commands(Tail, State#state{shutdown_reason=ShutdownReason}, Data);
|
||||||
|
commands([Frame|Tail], State, Data0) ->
|
||||||
|
Data = [frame(Frame, State)|Data0],
|
||||||
|
case is_close_frame(Frame) of
|
||||||
|
true ->
|
||||||
|
_ = transport_send(State, fin, lists:reverse(Data)),
|
||||||
|
{stop, State};
|
||||||
|
false ->
|
||||||
|
commands(Tail, State, Data)
|
||||||
|
end.
|
||||||
|
|
||||||
|
transport_send(#state{socket=Stream={Pid, _}, transport=undefined}, IsFin, Data) ->
|
||||||
|
Pid ! {Stream, {data, IsFin, Data}},
|
||||||
|
ok;
|
||||||
|
transport_send(#state{socket=Socket, transport=Transport}, _, Data) ->
|
||||||
|
Transport:send(Socket, Data).
|
||||||
|
|
||||||
|
-spec websocket_send(cow_ws:frame(), #state{}) -> ok | stop | {error, atom()}.
|
||||||
|
websocket_send(Frames, State) when is_list(Frames) ->
|
||||||
|
websocket_send_many(Frames, State, []);
|
||||||
|
websocket_send(Frame, State) ->
|
||||||
|
Data = frame(Frame, State),
|
||||||
|
case is_close_frame(Frame) of
|
||||||
|
true ->
|
||||||
|
_ = transport_send(State, fin, Data),
|
||||||
|
stop;
|
||||||
|
false ->
|
||||||
|
transport_send(State, nofin, Data)
|
||||||
|
end.
|
||||||
|
|
||||||
|
websocket_send_many([], State, Acc) ->
|
||||||
|
transport_send(State, nofin, lists:reverse(Acc));
|
||||||
|
websocket_send_many([Frame|Tail], State, Acc0) ->
|
||||||
|
Acc = [frame(Frame, State)|Acc0],
|
||||||
|
case is_close_frame(Frame) of
|
||||||
|
true ->
|
||||||
|
_ = transport_send(State, fin, lists:reverse(Acc)),
|
||||||
|
stop;
|
||||||
|
false ->
|
||||||
|
websocket_send_many(Tail, State, Acc)
|
||||||
|
end.
|
||||||
|
|
||||||
|
is_close_frame(close) -> true;
|
||||||
|
is_close_frame({close, _}) -> true;
|
||||||
|
is_close_frame({close, _, _}) -> true;
|
||||||
|
is_close_frame(_) -> false.
|
||||||
|
|
||||||
|
-spec websocket_close(#state{}, any(), terminate_reason()) -> no_return().
|
||||||
|
websocket_close(State, HandlerState, Reason) ->
|
||||||
|
websocket_send_close(State, Reason),
|
||||||
|
terminate(State, HandlerState, Reason).
|
||||||
|
|
||||||
|
websocket_send_close(State, Reason) ->
|
||||||
|
_ = case Reason of
|
||||||
|
Normal when Normal =:= stop; Normal =:= timeout ->
|
||||||
|
transport_send(State, fin, frame({close, 1000, <<>>}, State));
|
||||||
|
{error, badframe} ->
|
||||||
|
transport_send(State, fin, frame({close, 1002, <<>>}, State));
|
||||||
|
{error, badencoding} ->
|
||||||
|
transport_send(State, fin, frame({close, 1007, <<>>}, State));
|
||||||
|
{error, badsize} ->
|
||||||
|
transport_send(State, fin, frame({close, 1009, <<>>}, State));
|
||||||
|
{crash, _, _} ->
|
||||||
|
transport_send(State, fin, frame({close, 1011, <<>>}, State));
|
||||||
|
remote ->
|
||||||
|
transport_send(State, fin, frame(close, State));
|
||||||
|
{remote, Code, _} ->
|
||||||
|
transport_send(State, fin, frame({close, Code, <<>>}, State))
|
||||||
|
end,
|
||||||
|
ok.
|
||||||
|
|
||||||
|
%% Don't compress frames while deflate is disabled.
|
||||||
|
frame(Frame, #state{deflate=false, extensions=Extensions}) ->
|
||||||
|
cow_ws:frame(Frame, Extensions#{deflate => false});
|
||||||
|
frame(Frame, #state{extensions=Extensions}) ->
|
||||||
|
cow_ws:frame(Frame, Extensions).
|
||||||
|
|
||||||
|
-spec terminate(#state{}, any(), terminate_reason()) -> no_return().
|
||||||
|
terminate(State=#state{shutdown_reason=Shutdown}, HandlerState, Reason) ->
|
||||||
|
handler_terminate(State, HandlerState, Reason),
|
||||||
|
case Shutdown of
|
||||||
|
normal -> exit(normal);
|
||||||
|
_ -> exit({shutdown, Shutdown})
|
||||||
|
end.
|
||||||
|
|
||||||
|
handler_terminate(#state{handler=Handler, req=Req}, HandlerState, Reason) ->
|
||||||
|
cowboy_handler:terminate(Reason, Req, HandlerState, Handler).
|
||||||
|
|
||||||
|
%% System callbacks.
|
||||||
|
|
||||||
|
-spec system_continue(_, _, {#state{}, any(), parse_state()}) -> no_return().
|
||||||
|
system_continue(_, _, {State, HandlerState, ParseState}) ->
|
||||||
|
loop(State, HandlerState, ParseState).
|
||||||
|
|
||||||
|
-spec system_terminate(any(), _, _, {#state{}, any(), parse_state()}) -> no_return().
|
||||||
|
system_terminate(Reason, _, _, {State, HandlerState, _}) ->
|
||||||
|
%% @todo We should exit gracefully, if possible.
|
||||||
|
terminate(State, HandlerState, Reason).
|
||||||
|
|
||||||
|
-spec system_code_change(Misc, _, _, _)
|
||||||
|
-> {ok, Misc} when Misc::{#state{}, any(), parse_state()}.
|
||||||
|
system_code_change(Misc, _, _, _) ->
|
||||||
|
{ok, Misc}.
|