
creating instances, haproxy eats CPU, glance eats RAM


I am bad at containers, just starting to learn them, so I am not sure how
they are resource-limited.

So you are using local hard drives. I guess that is one of the points of
slowdown. I ask my developers to use Heat when creating more than one
instance/resource at a time.
Try checking Ceph speed. As I recall, Ceph has an option to acknowledge a
write as soon as the first copy is committed to disk, and then finish
writing the second and third replicas in the background, which makes the
data less safe for a moment but MUCH faster. I do not remember the exact
option name, so you would need to google for it.
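To measure what the cluster itself can deliver, independent of Nova/Cinder, a quick sketch with rados bench could look like the following. This assumes the Cinder backend pool is named volumes (check cinder.conf or `ceph osd lspools` for the real name); replication behaviour is a per-pool setting:

```
# Benchmark raw write speed for 10 seconds against the pool Cinder uses
# (pool name "volumes" is an assumption -- adjust to your deployment)
rados bench -p volumes 10 write --no-cleanup
rados bench -p volumes 10 seq      # sequential read of the objects written above
rados -p volumes cleanup           # remove the benchmark objects

# "size" is the number of replicas kept; "min_size" is how many replicas
# must be on disk before a write is acknowledged to the client
ceph osd pool get volumes size
ceph osd pool get volumes min_size
```

Comparing the bench numbers against what the instance sees should show whether the bottleneck is Ceph itself or the path above it.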

Sorry, yes, my fault: not domiflist but domblklist:
virsh domblklist instance-00000##
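domblklist shows where each disk of the instance is actually backed, which tells you whether it lives on the local hypervisor disk or in Ceph. Illustrative output only (instance name and paths are made up):

```
$ virsh domblklist instance-0000002a
 Target   Source
------------------------------------------------
 vda      /var/lib/nova/instances/<uuid>/disk

# a file under /var/lib/nova/instances means local storage; a Ceph-backed
# disk would instead show an rbd source such as volumes/volume-<uuid>
```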


Generally, I have the same issue as you, but on an older version of
OpenStack (Mitaka, the Mirantis implementation).
I see it when instance1 on compute1 uses a Ceph-based volume and re-exports
it over NFS to instance2 on compute2: I get around 13 KB/s. If I re-share
from the root drive instead, I get around 30 KB/s, which is still far too
low.
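For numbers that low it is worth measuring each hop separately. A minimal sketch with dd, assuming a hypothetical mount point passed via TESTDIR (it defaults to /tmp so the sketch runs anywhere; point it at the NFS mount for the real measurement):

```shell
# Directory to test; set TESTDIR=/mnt/nfs (hypothetical path) for the NFS hop
TESTDIR="${TESTDIR:-/tmp}"

# Write 64 MB through to storage; oflag=dsync forces each block to be
# committed, so the page cache does not inflate the result.
# dd prints the achieved throughput on its last line.
dd if=/dev/zero of="$TESTDIR/ddtest" bs=1M count=64 oflag=dsync

rm -f "$TESTDIR/ddtest"
```

Running this inside instance1 against the volume, then inside instance2 against the NFS mount, shows which hop loses the bandwidth.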

On Thu, 15 Aug 2019 at 09:35, Gregory Orange <gregory.orange at pawsey.org.au>
wrote:

> Hello Ruslanas and thank you for the response. I didn't see it until now!
> I have given some responses inline...
>
> On 1/8/19 3:57 pm, Ruslanas Gžibovskis wrote:
> > when role separation was introduced in the Newton release, we divided
> > memory-hungry processes into 4 different VMs on 3 physical boxes:
> > 1) Networker: all Neutron agent processes (network throughput)
> > 2) Systemd: all services started by systemd (Neutron)
> > 3) pcs: all services controlled by pcs (Galera + RabbitMQ)
> > 4) horizon
>
> We have separated each control plane service (Glance, Neutron, Cinder,
> etc) onto its own VM. We are considering containers instead of VMs in
> future.
>
>
> > Gregory > do you have local storage for swift and cinder background?
>
> Our Cinder and Glance use Ceph as backend. No Swift installed.
>
>
> > also double check where _base image is located? is it in
> /var/lib/nova/instances/_base/* ? and flavor disks stored in
> /var/lib/nova/instances ? (can check on compute by: virsh domiflist
> instance-00000## )
>
> domiflist shows the VM's interface - how does that help?
>
> Greg.
>


-- 
Ruslanas Gžibovskis
+370 6030 7030