[Octavia]-Seeking performance numbers on Octavia
As you have mentioned, it is very challenging to get accurate
performance results in cloud environments. There are a large
number (very large, in fact) of factors that can impact the overall
performance of OpenStack and Octavia.
In our OpenDev testing environment, we only have software-emulated
virtual machines available (QEMU running with the TCG engine), which
perform extremely poorly. This means that the testing environment
does not reflect how the software is used in real-world deployments.
As an example, simply booting a VM can take up to ten minutes on
QEMU with TCG, when it takes about twenty seconds on real hardware.
With this resource limitation, we cannot effectively run performance
benchmarking test jobs on the OpenDev environment.
Because of this, we don't publish performance numbers as they will not
reflect what you can achieve in your environment.
Let me try to speak to your bullet points:
1. The Octavia team has never (to my knowledge) claimed the Amphora
driver is "carrier grade". We do consider the Amphora driver to be
"operator grade", which speaks to a cloud operator's perspective
versus the previous offering that did not support high availability,
have appropriate maintenance tooling, upgrade paths, performance, etc.
To me, "carrier grade" has an additional level of requirements
including performance, latency, scale, and availability SLAs. This is
not what the Octavia Amphora driver is currently ready for. That said,
third party provider drivers for Octavia may be able to provide a
"carrier grade" level of load balancing for OpenStack.
2. As for performance tuning, much of this is either automatically
handled by Octavia or is dependent on the application you are load
balancing and your cloud deployment. For example, we have many
configuration settings to tune how many retries we attempt when
interacting with other services. In performant and stable clouds,
these can be tuned down; in others the defaults may be appropriate.
If you would like faster failover, at the expense of slightly more
network traffic, you can tune the health monitoring and
keepalived_vrrp settings. We do not currently have a performance
tuning guide for Octavia, but we would support someone authoring one.
3. We do not currently have a guide for this. I will say that with
the version of HAProxy currently being shipped with the
distributions, going beyond 1 vCPU per amphora does not gain you
much. With the release of HAProxy 2.0 this has changed, and we
expect to add support for vertically scaling the amphorae in future
releases. Disk space is only necessary if you are storing the flow
logs locally, which I would not recommend for a performance load
balancer (see the notes in the log offloading guide).
Finally, the RAM usage is a factor of the number of concurrent
connections and whether you are enabling TLS on the load balancer.
For typical load balancing workloads, the default is usually fine.
However, if you have high connection counts and/or TLS offloading,
you may want to experiment with increasing the available RAM.
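As a rough back-of-envelope illustration of that sizing exercise, you
can estimate amphora RAM from expected concurrency. The base and
per-connection figures below are assumptions for illustration only,
not measured Octavia or HAProxy numbers:

```python
def estimate_ram_mb(concurrent_conns, tls=False,
                    base_mb=512, kb_per_conn=16, tls_extra_kb=48):
    """Back-of-envelope amphora RAM estimate in MB.

    base_mb, kb_per_conn and tls_extra_kb are illustrative
    assumptions, not measured values; benchmark your own workload
    before sizing flavors.
    """
    per_conn_kb = kb_per_conn + (tls_extra_kb if tls else 0)
    return base_mb + concurrent_conns * per_conn_kb / 1024

# 50,000 plain TCP connections at ~16 KB each on a 512 MB base
print(round(estimate_ram_mb(50_000)))             # 1293 (MB)
# the same concurrency with TLS offloading costs noticeably more
print(round(estimate_ram_mb(50_000, tls=True)))   # 3637 (MB)
```

The point is only that memory scales linearly with concurrency and
that TLS session state multiplies the per-connection cost, which is
why the 1 GB default flavor can become the bottleneck first.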
4. The source IP issue is a known issue
(https://storyboard.openstack.org/#!/story/1629066). We have not
prioritized addressing this, as we have not had anyone come forward
to say they needed it in their deployment. If this is an issue
impacting your use case, please comment on the story to that effect
and provide a use case. This will help the team prioritize this work.
Also, patches are welcome! If you are interested in working on this
issue, I can help you with information about how this could be added.
It should also be noted that it is a limitation of 64,000 connections
per-backend server, not per load balancer.
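The reason for that per-backend limit is the TCP connection 4-tuple:
with one source IP talking to one backend IP and port, only the
source port varies, and the usable ephemeral range holds roughly
64,000 ports. A small sketch (the range boundaries are typical Linux
defaults, used here as an assumption):

```python
# A TCP connection is identified by (src_ip, src_port, dst_ip, dst_port).
# With one source IP and one backend ip:port, only src_port varies,
# so concurrency is capped by the ephemeral port range. 1024-65535 is
# an assumed typical range; check /proc/sys/net/ipv4/ip_local_port_range
# on a real system.
EPHEMERAL_LOW, EPHEMERAL_HIGH = 1024, 65535

def max_conns_per_backend(src_ips=1):
    """Upper bound on concurrent connections from src_ips source
    addresses to a single backend ip:port."""
    return src_ips * (EPHEMERAL_HIGH - EPHEMERAL_LOW + 1)

print(max_conns_per_backend())           # 64512 -> the ~64,000 limit
print(max_conns_per_backend(src_ips=2))  # adding source IPs scales it
```

This also shows why the limit is per backend member rather than per
load balancer: each distinct backend ip:port gets its own 4-tuple
space.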
5. The team uses the #openstack-lbaas IRC channel on freenode and is
happy to answer questions, etc.
To date, we have had limited resources (people and equipment)
available to do performance evaluation and tuning. There are
definitely kernel and HAProxy tuning settings we have evaluated and
added to the Amphora driver, but I know there is more work that can
be done. If you are interested in helping us with this work, please
let us know.
P.S. Here are just a few considerations that can/will impact the
performance of an Octavia Amphora load balancer:
Hardware used for the compute nodes
Network Interface Cards (NICs) used in the compute nodes
Number of network ports enabled on the compute hosts
Network switch configurations (Jumbo frames, and so on)
Cloud network topology (leaf-spine, fat-tree, and so on)
The OpenStack Neutron networking configuration (ML2 and L3 drivers)
Tenant networking configuration (VXLAN, VLANS, GRE, and so on)
Colocation of applications and Octavia amphorae
Over subscription of the compute and networking resources
Protocols being load balanced
Configuration settings used when creating the load balancer
(connection limits, and so on)
Version of OpenStack services (nova, neutron, and so on)
Version of OpenStack Octavia
Flavor of the OpenStack Octavia load balancer
OS and hypervisor versions used
Deployed security mitigations (Spectre, Meltdown, and so on)
Customer application performance
Health of the customer application
On Fri, Jul 19, 2019 at 8:52 AM Singh, Prabhjit
<Prabhjit.Singh22 at t-mobile.com> wrote:
> I have been trying to test Octavia with some traffic generators and my tests are inconclusive. Appreciate your inputs on the following
> It would be really nice to have some performance numbers that you guys have been able to achieve for this to be termed as carrier grade.
> Would also appreciate if you could share any inputs on performance tuning Octavia
> Any recommended flavor sizes for spinning up Amphorae, the default size of 1 core, 2 Gb disk and 1 Gig RAM does not seem enough.
> Also I noticed when the Amphorae are spun up, at one time only one master is talking to the backend servers and has one IP that it's using; it has to run out of ports after 64000 TCP concurrent sessions. Is there a way to add more IPs or is this the limitation
> If I needed some help with Octavia and some guidance around performance tuning can someone from the community help
> Thanks & Regards
> Prabhjit Singh