Re: Multi-DC Install: Install Tips?
There are obviously a lot of design decisions to make in this process, but to answer your initial questions:
- Networking design is your biggest challenge. If your lowest-spec server only has two NICs, that becomes your lowest common denominator unless you add more NICs. Your choice here is either to run a single dual-NIC bond for resilience and carry all CloudStack traffic over it, or to run without NIC resilience and split the traffic types across the two NICs.
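For the resilient option, a minimal active-backup bond with a bridge on top (the usual layout for CloudStack on KVM) can be sketched with NetworkManager; interface names (eno1/eno2) and the bridge name cloudbr0 are assumptions for your hardware:

```shell
# Create an active-backup bond from the two onboard NICs
# (eno1/eno2 are placeholders for your interface names)
nmcli con add type bond ifname bond0 con-name bond0 \
      bond.options "mode=active-backup,miimon=100"
nmcli con add type ethernet ifname eno1 master bond0
nmcli con add type ethernet ifname eno2 master bond0
# CloudStack/KVM typically wants a bridge on top of the bond
nmcli con add type bridge ifname cloudbr0 con-name cloudbr0
nmcli con mod bond0 master cloudbr0 slave-type bridge
```

With active-backup you get NIC/switch-path failover without needing LACP support on both top-of-rack switches; if your EX4200s are in a virtual chassis you could use 802.3ad (mode=802.3ad) instead.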
- On the same note, if you have a mix of hardware keep in mind that certain hypervisor operations (mainly live migrations) require hosts in a cluster to share the same CPU family and generation.
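A quick way to check this before you decide which hosts go in which cluster is to compare the CPU model each host reports; the hostnames below are placeholders for your own inventory:

```shell
# Compare CPU models across prospective cluster members
# (kvm-a1 etc. are hypothetical hostnames)
for h in kvm-a1 kvm-a2 kvm-b1; do
    printf '%s: ' "$h"
    ssh "$h" "lscpu | grep 'Model name'"
done
```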
- Storage-wise it looks like you’ll be using local primary storage. If you end up using shared primary storage, keep in mind you need to work out which hypervisor/storage-protocol combination works for you (hint: KVM with iSCSI/block storage is challenging; NFS is easy).
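If you do go the NFS route, it is worth sanity-checking the export from a KVM host before adding the pool in CloudStack; the server name and export path here are assumptions:

```shell
# Verify the export is visible and mountable from the hypervisor
# (nfs01.dc1.example.com and /export/primary are placeholders)
showmount -e nfs01.dc1.example.com
mkdir -p /mnt/nfs-test
mount -t nfs nfs01.dc1.example.com:/export/primary /mnt/nfs-test
# Rough write check, bypassing the page cache
dd if=/dev/zero of=/mnt/nfs-test/probe bs=1M count=100 oflag=direct
rm /mnt/nfs-test/probe
umount /mnt/nfs-test
```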
- How much secondary storage: sorry, but “it depends”. Your secondary storage holds all your templates, ISOs, and volume snapshots, as well as staging space for VM volume uploads and downloads – so even a TB can run out quickly in a larger environment. You can, however, always add more secondary storage pools later on.
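As a back-of-envelope starting point, you can total up your expected templates, ISOs, and snapshots and add headroom for staging; every figure below is an assumption to replace with your own numbers:

```shell
# Rough secondary storage sizing sketch -- all counts/sizes are assumptions
templates_gb=$((10 * 50))    # 10 templates at ~50 GB each
isos_gb=$((20 * 5))          # 20 ISOs at ~5 GB each
snapshots_gb=$((100 * 20))   # 100 volume snapshots at ~20 GB each
# Add ~30% headroom for upload/download staging (integer arithmetic)
total_gb=$(( (templates_gb + isos_gb + snapshots_gb) * 13 / 10 ))
echo "Plan for roughly ${total_gb} GB (~$((total_gb / 1024)) TB) of secondary storage"
```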
- I assume you want to run your two DCs as separate CloudStack availability zones in the same CloudStack instance. If so, the order doesn’t really matter that much, since your main job is network config and hypervisor builds. I would start with one site and get that up and running first, though, and then add the second DC as a new zone later on.
- Keep resilience in mind here too – you will want to run management servers at each DC for failover, as well as master and slave MySQL servers, again distributed across the sites.
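On the database side, the master end of that replication pair needs little more than a server-id and binary logging enabled; a minimal sketch of the my.cnf fragment (the id values are arbitrary, use your own):

```
# /etc/my.cnf fragment on the master (sketch; server-id is arbitrary,
# the slave just needs a different one)
[mysqld]
server-id     = 1
log-bin       = mysql-bin
binlog-format = ROW
```

The slave then points at the master with CHANGE MASTER TO / START SLAVE, and the standby management server reads from the replica if the primary DB site is lost.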
Hope this helps.
On 04/07/2018, 09:28, "Donald Fountain" <don@xxxxxxxxxxxxxxxxxx> wrote:
Have been reading through the documentation, have prepped quite a bit of
hardware (2 full racks in each DC, 10G/40G uplinks), and have decided to
take the plunge and set up CloudStack on it. It's destined to be a
multi-data-center install, but I think it would be best to start one DC
at a time, unless there are considerations against this.
General setup is EX4200s, SRX240s, all local storage on hardware RAID-10
SSDs (except for a few high-storage servers). Some servers have dual
onboard NICs (about half of those also have quad-NIC cards, though),
some have onboard quads, and all have some type of remote management.
Have long-term experience with virtualization
(VMware/Proxmox/self-rolled KVM/LXC/Docker/Kubernetes), as well as the
* Any install tips? Gotchas, or things to watch out for?
* How much secondary storage is recommended per DC?
* Would it be best to do a single location at a time, or to set up the
management servers for all locations first and then proceed to build
out each DC?
* Any tips you have as a longer-term user?