
[Bug 62362] New: Proxied content not properly rate-limited


            Bug ID: 62362
           Summary: Proxied content not properly rate-limited
           Product: Apache httpd-2
           Version: 2.4.33
          Hardware: PC
                OS: Mac OS X 10.1
            Status: NEW
          Severity: normal
          Priority: P2
         Component: mod_ratelimit
          Assignee: bugs@xxxxxxxxxxxxxxxx
          Reporter: toscano.luca@xxxxxxxxx
  Target Milestone: ---

The following scenario was reported on users@:

I'm using Apache 2.4.24 on Debian 9 Stable, behind a DSL connection, with an
estimated upload capacity of ~130kB/s.
I'm trying to limit the bandwidth available to my users (a per-connection limit
is fine).
However, it seems to me that the rate-limit parameter is coarse-grained:

- if I set it to 8, users are limited to 8 kB/s
- if I set it to 20, or 30, users are limited to 40 kB/s
- if I set it to 50, 60 or 80, users are limited to my BW, so ~120 kB/s

After following up with the user, it seems that the issue happens with proxied
content. So I set up the following experiment:

- Directory with a 4MB file inside
- A simple Location that proxies content via mod_proxy_http to a Python process
running a web server, which returns the same 4MB file outlined above.

I tested the rate limit using curl's summary (for example, the average Dload speed).

This is what I gathered:

- when httpd serves the file directly, mod_ratelimit's output filter is called
once and the bucket brigade contains all the data in the file. This is
probably due to how bucket brigades work when morphing a file bucket's content?

- when httpd serves the file via mod_proxy, the output filter is called
multiple times, and each time the buckets are at most the size of
ProxyIOBufferSize (8192 by default). I'm still not completely sure about this
one, so please let me know if I am totally wrong :)

The main problem, IIUC, is in the output filter's logic, which does this: it
calculates a chunk size based on the rate-limit set in the httpd
configuration, then splits the bucket brigade, if necessary, into buckets of
that chunk size, interleaving them with FLUSH buckets (and sleeping 200ms
between flushes).
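
The chunking described above can be sketched as a toy Python model (not the
real APR/C code; the 200ms interval and the chunk-size formula are assumptions
derived from this description):

```python
import time

RATE_INTERVAL_MS = 200  # sleep between flushes, per the description above

def ratelimit_filter(data, rate_limit_kib, do_sleep=False):
    # Toy model of the filter: chunk_size is the number of bytes allowed
    # per 200ms interval at the configured rate-limit (in KiB/s).
    speed = rate_limit_kib * 1024                   # bytes per second
    chunk_size = speed * RATE_INTERVAL_MS // 1000   # bytes per interval
    chunks = []
    for offset in range(0, len(data), chunk_size):
        if do_sleep:
            time.sleep(RATE_INTERVAL_MS / 1000.0)
        chunks.append(data[offset:offset + chunk_size])
        # a FLUSH bucket would be emitted here
        do_sleep = True
    return chunks

sizes = [len(c) for c in ratelimit_filter(b"x" * 20000, rate_limit_kib=40)]
# a 40 KiB/s limit gives 8192-byte chunks: sizes == [8192, 8192, 3616]
```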

So a trace of execution with, say, a chunk size of 8192 would be something like:

First call of the filter: 8192 --> FLUSH --> sleep(200ms) --> 8192 --> ... -->
last chunk (either 8192 or something less).

This happens correctly when httpd serves the content directly, but not when it
is served via mod_proxy:

First call of the filter: 8192 --> FLUSH (no sleep, since do_sleep turns to 1
only after the first flush)

Second call of the filter: 8192 --> FLUSH (no sleep)
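
The difference between the two traces can be modeled in a few lines of Python
(again an assumed model, not the real code): `do_sleep` as a variable local to
each filter invocation versus `do_sleep` kept in the per-request ctx.

```python
def sleeps_with_local_do_sleep(brigades, chunk_size):
    # Model of the reported behaviour: do_sleep is reset on every filter
    # invocation, so a proxy delivering one ProxyIOBufferSize-sized
    # brigade per call never triggers a sleep at all.
    sleeps = 0
    for brigade in brigades:
        do_sleep = False                  # reset on each call: the bug
        for offset in range(0, len(brigade), chunk_size):
            if do_sleep:
                sleeps += 1
            do_sleep = True
    return sleeps

def sleeps_with_ctx_do_sleep(brigades, chunk_size):
    # Proposed fix: do_sleep lives in the per-request ctx, so it
    # survives across filter invocations.
    sleeps = 0
    do_sleep = False
    for brigade in brigades:
        for offset in range(0, len(brigade), chunk_size):
            if do_sleep:
                sleeps += 1
            do_sleep = True
    return sleeps

# A 4MB file proxied as 512 brigades of 8192 bytes, chunk size 8192:
brigades = [b"x" * 8192] * 512
# local do_sleep  -> 0 sleeps (no rate limiting at all)
# ctx do_sleep    -> 511 sleeps (one per interval, as intended)
```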


So one way to alleviate this issue is to move do_sleep into the ctx data
structure, so that if the filter is called multiple times it will "remember"
to sleep between flushes (with the assumption that ctx is allocated once per
request). There remains the problem that when the rate-limit speed implies a
chunk size larger than ProxyIOBufferSize (8192 by default), the client will be
rate-limited to the speed dictated by the buffer size instead (for example,
8192 should correspond to ~40KB/s).
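
The ~40KB/s figure follows from the assumption of one flush plus one 200ms
sleep per filter call, with each call capped at ProxyIOBufferSize bytes:

```python
# Back-of-the-envelope check of the residual ceiling described above.
proxy_io_buffer_size = 8192            # bytes per filter invocation
interval_s = 0.2                       # 200ms sleep between flushes
effective_rate = proxy_io_buffer_size / interval_s   # bytes per second
print(effective_rate / 1024)           # -> 40.0 (KiB/s)
```

This also matches the user's observation that limits of 20 or 30 collapsed to
~40 kB/s.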

You are receiving this mail because:
You are the assignee for the bug.
To unsubscribe, e-mail: bugs-unsubscribe@xxxxxxxxxxxxxxxx
For additional commands, e-mail: bugs-help@xxxxxxxxxxxxxxxx