


[Bug 61616] mod_proxy_connect: stall and connection loss on bi-directional traffic


https://bz.apache.org/bugzilla/show_bug.cgi?id=61616

--- Comment #42 from Carsten Wolff <carsten@xxxxxxxxxxxxxxx> ---
Created attachment 35639
  --> https://bz.apache.org/bugzilla/attachment.cgi?id=35639&action=edit
pcaps of a stall in rsync handshake (Test4)

Hello Yann, I wish you a happy new year!

I did some more testing on different combinations of these patches. Here are
the results (I will call patch 35620 "v2" and patch 35623 "v10"):

Test1 (only v10 applied):
    - wget for static contents: OK
    - rsync through SSL+proxy_connect: sometimes still stalls at the very end
of the transfer (just like in comment#38)

Test2 (v10 and v2 applied):
    - wget for static contents: OK
    - rsync through SSL+proxy_connect: often stalls at the beginning of the
transfer (during the rsync protocol handshake)

Test3 (v10 and v2 applied, configuration from comment#24 added):
    - wget for static contents: OK
    - rsync through SSL+proxy_connect: OK (can't reproduce in the scenario I
used so far ... but ...)

Because I suspected that timing plays a role in this code, I repeated Test3
under different network conditions, using netem(1) to change the
characteristics of the virtual network:

Test4 (v10 and v2 applied, configuration from comment#24 added, netem):
    - wget for static contents: OK
    - rsync through SSL+proxy_connect: often stalls(2)(3) at the beginning of
the transfer (during the rsync protocol handshake)

So I think it's safe to say that the bug in attachment 35447 has been fixed by
"v2". But, unfortunately, stalls in SSL+proxy_connect can still be provoked
under certain conditions. Next, I tried without SSL:

Test5 (v10 and v2 applied, configuration from comment#24 added, netem, no SSL): 
    - rsync through proxy_connect: OK (cannot seem to reproduce the stalls)

I suspect I should repeat Tests 1 and 2 without SSL as well, to see whether SSL
is necessary to reproduce the stalls at all. I'll let you know.

(1) netem setup for the virtual network:
tc qdisc add dev vnet0 root handle 1: netem delay 100ms 50ms 25%   # 100ms delay, +/-50ms jitter, 25% correlation
tc qdisc add dev vnet0 parent 1:1 pfifo limit 1000                 # plain FIFO queue, 1000-packet limit

(2) log of stall in Test4
[Thu Dec 28 09:06:01.752182 2017] [proxy_connect:debug] [pid 9674:tid 139811025307392] mod_proxy_connect.c(503): [client 192.168.122.12:33950] client wait writable
[Thu Dec 28 09:06:01.752210 2017] [proxy_connect:debug] [pid 9674:tid 139811025307392] mod_proxy_connect.c(456): [client 192.168.122.12:33950] AH01024: woke from poll(), i=1
[Thu Dec 28 09:06:01.752227 2017] [proxy_connect:debug] [pid 9674:tid 139811025307392] mod_proxy_connect.c(531): [client 192.168.122.12:33950] client was writable
[Thu Dec 28 09:06:01.752316 2017] [core:trace6] [pid 9674:tid 139811025307392] core_filters.c(740): [client 192.168.122.12:33950] writev_nonblocking: 69/69 bytes
[Thu Dec 28 09:06:01.894674 2017] [proxy_connect:debug] [pid 9674:tid 139811025307392] mod_proxy_connect.c(456): [client 192.168.122.12:33950] AH01024: woke from poll(), i=1
[Thu Dec 28 09:06:01.894840 2017] [proxy_connect:debug] [pid 9674:tid 139811025307392] mod_proxy_connect.c(487): [client 192.168.122.12:33950] AH01025: client was readable
[Thu Dec 28 09:06:01.895319 2017] [ssl:trace4] [pid 9674:tid 139811025307392] ssl_engine_io.c(2220): [client 192.168.122.12:33950] OpenSSL: read 5/5 bytes from BIO#7f282c002270 [mem: 7f282c002bc3] (BIO dump follows)
[Thu Dec 28 09:06:01.895393 2017] [ssl:trace4] [pid 9674:tid 139811025307392] ssl_engine_io.c(2220): [client 192.168.122.12:33950] OpenSSL: read 64/64 bytes from BIO#7f282c002270 [mem: 7f282c002bc8] (BIO dump follows)
[Thu Dec 28 09:06:01.895495 2017] [proxy:trace2] [pid 9674:tid 139811025307392] proxy_util.c(3845): [client 192.168.122.12:33950] ap_proxy_transfer_between_connections: output filters full
[Thu Dec 28 09:06:01.895531 2017] [proxy:trace2] [pid 9674:tid 139811025307392] proxy_util.c(3866): (70008)Partial results are valid but processing is incomplete: [client 192.168.122.12:33950] ap_proxy_transfer_between_connections finished
[Thu Dec 28 09:06:01.895557 2017] [proxy_connect:debug] [pid 9674:tid 139811025307392] mod_proxy_connect.c(503): [client 192.168.122.12:33950] backend wait writable
[Thu Dec 28 09:06:01.895737 2017] [proxy_connect:debug] [pid 9674:tid 139811025307392] mod_proxy_connect.c(456): [client 192.168.122.12:33950] AH01024: woke from poll(), i=1
[Thu Dec 28 09:06:01.895785 2017] [proxy_connect:debug] [pid 9674:tid 139811025307392] mod_proxy_connect.c(531): [client 192.168.122.12:33950] backend was writable
[Thu Dec 28 09:06:01.895915 2017] [core:trace6] [pid 9674:tid 139811025307392] core_filters.c(740): [remote ::1:873] writev_nonblocking: 9/9 bytes

(3) bt of stall in Test4
Thread 3 (Thread 0x7f284bfff700 (LWP 9827)):
#0  0x00007f2855f3f0f3 in epoll_wait () at ../sysdeps/unix/syscall-template.S:84
#1  0x00007f2856435201 in impl_pollset_poll (pollset=0x7f2856b1e028, timeout=<optimized out>, num=0x7f284bffca34, descriptors=0x7f284bffca40) at ./poll/unix/epoll.c:266
#2  0x00007f2852d4c4b5 in proxy_connect_handler (r=0x7f2856b1c0a0, worker=<optimized out>, conf=<optimized out>, url=<optimized out>, proxyname=<optimized out>, proxyport=<optimized out>) at mod_proxy_connect.c:446
#3  0x00007f2852f5b3fc in proxy_run_scheme_handler (r=r@entry=0x7f2856b1c0a0, worker=0x7f2856c638d8, conf=conf@entry=0x7f2856c29320, url=0x7f2856b1d706 "localhost:873", proxyhost=proxyhost@entry=0x0, proxyport=proxyport@entry=0) at mod_proxy.c:2888
#4  0x00007f2852f5c371 in proxy_handler (r=0x7f2856b1c0a0) at mod_proxy.c:1230
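For what it's worth, the state visible in log (2) — parking on "wait writable"
for one side while the other side may still have readable data — is the classic
pitfall of a bidirectional non-blocking relay. Here is a minimal, self-contained
sketch of such a relay (a plain Python select loop, emphatically not Apache's
actual code) whose invariant avoids that kind of stall: a socket is polled for
writability only while bytes are actually buffered for it, and readability on
the opposite side is never ignored:

```python
import select
import socket
import threading

def relay(a, b):
    """Bidirectional non-blocking relay between sockets a and b.

    Invariant: poll a socket for writability only while bytes are
    buffered for it, and keep polling the other side for readability
    until EOF. Blocking on one side's writability while ignoring the
    other side's readable data is the kind of state a stalled CONNECT
    tunnel appears to be stuck in.
    """
    a.setblocking(False)
    b.setblocking(False)
    buf = {a: b"", b: b""}      # bytes waiting to be sent *to* that socket
    peer = {a: b, b: a}
    eof = set()
    while True:
        rlist = [s for s in (a, b) if s not in eof]
        wlist = [s for s in (a, b) if buf[s]]
        if not rlist and not wlist:
            break               # both sides hit EOF and buffers drained
        readable, writable, _ = select.select(rlist, wlist, [])
        for s in writable:
            sent = s.send(buf[s])
            buf[s] = buf[s][sent:]
        for s in readable:
            data = s.recv(4096)
            if data:
                buf[peer[s]] += data
            else:
                eof.add(s)

# Tiny demonstration over socketpairs (no network needed):
c_app, c_tun = socket.socketpair()
tun_s, s_app = socket.socketpair()
threading.Thread(target=relay, args=(c_tun, tun_s), daemon=True).start()
c_app.sendall(b"ping")          # client -> server through the relay
print(s_app.recv(1024))         # b'ping'
s_app.sendall(b"pong")          # server -> client through the relay
print(c_app.recv(1024))         # b'pong'
```

Real code additionally has to cope with partial writes into a full output
filter chain, which is exactly where ap_proxy_transfer_between_connections
reports "output filters full" above, so this sketch only illustrates the
polling invariant, not the filter bookkeeping.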
