This week I replaced a system board in an HPE DL560 Gen10. The picture doesn’t capture it, but this model is quite a bit more involved when it comes to removing the old board: several chassis components have to be unscrewed and removed just to gain access. Thankfully I didn’t lose any screws, and the replacement board booted right up.
Stumbled across this gem in an inherited server rack
So I found out that when you assign vCPUs to a VM in ESXi you have to consider queuing. Say you have a host with 8 threads, give one VM all 8 vCPUs, and give another VM a single vCPU. If the single-vCPU VM is running at 100%, the 8-vCPU VM has to wait for all 8 threads to become available at the same time. VMware will eventually split the time between them, but it is still a huge performance impact. The common guidance is to keep it around 3 vCPUs per physical core, and the more physical cores the better.
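The scenario above can be sketched as a toy simulation. This models strict “gang” scheduling, where a VM only runs if all of its vCPUs fit at once; real ESXi actually uses relaxed co-scheduling, which can run a subset of a VM’s vCPUs and make the lagging ones catch up later, so this is a worst-case illustration, not how ESXi literally behaves:

```python
# Toy discrete-time model of strict gang scheduling on an 8-thread host.
# A VM runs in a tick only if ALL of its vCPUs fit in the free threads.

def simulate(host_threads, vms, ticks):
    """vms: dict of VM name -> vCPU count. Returns ticks each VM ran."""
    ran = {name: 0 for name in vms}
    names = list(vms)
    for t in range(ticks):
        free = host_threads
        # Rotate priority so each VM gets a turn at the front of the queue
        order = names[t % len(names):] + names[:t % len(names)]
        for name in order:
            if vms[name] <= free:
                free -= vms[name]
                ran[name] += 1
    return ran

print(simulate(8, {"wide": 8, "small": 1}, 100))
# The wide VM only runs in ticks where the small VM isn't holding a
# thread, even though the small VM uses 1 of 8 threads.
```

Under this strict model the two VMs end up alternating: the 1-vCPU VM blocks the 8-vCPU VM from running at all during its ticks, which is the queuing penalty described above.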
I recently came across a neat article on VMware overprovisioning and what’s considered best practice. I had always assumed that as long as the host has CPU capacity in use you can keep handing out vCPUs. It turns out you reach a point where the hypervisor has to queue up workloads, so the VMs end up waiting for a CPU to become available. One thing I wonder about, though, is multi-core VMs: if a VM has 8 vCPUs, does it have to wait for 8 threads to become available at the same time, or can VMware schedule some of them and provide the rest later? Here’s the paper:
[pdf-embedder url="https://serverhobbyist.com/wp-content/uploads/2018/10/Dell-Best-Practices-for-Oversubscription-of-CPU-Memory-and-Storage-in-vSphere-Virtual-Environments_0.pdf" title="Dell Best Practices for Oversubscription of CPU Memory and Storage in vSphere Virtual Environments_0"]
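A quick way to sanity-check oversubscription on a host is to total the assigned vCPUs against the physical cores. This is just a sketch using the rough 3:1 rule of thumb; the VM names and counts are made-up examples, not a real inventory:

```python
# Check a host's vCPU:pCore oversubscription ratio against the ~3:1
# rule of thumb. VM names and vCPU counts here are hypothetical.

def oversub_ratio(physical_cores, vm_vcpus):
    """vm_vcpus: dict of VM name -> assigned vCPU count."""
    return sum(vm_vcpus.values()) / physical_cores

vms = {"web": 4, "db": 8, "dns": 1, "backup": 2}   # 15 vCPUs total
ratio = oversub_ratio(8, vms)                       # on an 8-core host
print(f"{ratio:.2f}:1", "OK" if ratio <= 3 else "over the 3:1 guideline")
```

Anything well under 3:1 is generally comfortable; the real pain shows up as CPU ready time, which you can watch in esxtop or vCenter performance charts.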
A few years ago I bought a Dell R610 to add to my fleet of servers. It did alright, but I had always dreamed of having better storage. I found a Dell direct-attached storage array on eBay for cheap. It had some mismatched drives, but they were all the same size and it seemed to run. I bought the external RAID card for the R610 and hooked it up. I loaded VMware ESXi 6.0 on the server and it immediately saw the storage. I had some questions about RAID on the array but settled on a RAID 5 across all the disks. Between the 512 MB cache on the card and striping across all of the 10k RPM SAS disks, it is fast, and much more difficult to bottleneck.

I learned a couple of good lessons from this. One: that many disks running in your basement will function as a space heater. More importantly, I learned the importance of a good RAID controller. Having that cache really helps keep the disks from getting bogged down when multiple VMs try to read or write at the same time.
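The RAID 5 trade-off above boils down to simple arithmetic: one disk’s worth of capacity goes to parity, while reads stripe across every member. A small sketch, using made-up disk counts and sizes rather than the actual array:

```python
# Rough RAID 5 sizing and read-performance math. The 12 x 600 GB 10k SAS
# figures below are illustrative assumptions, not the array in the post.

def raid5_usable_tb(disk_count, disk_tb):
    # RAID 5 spreads one disk's worth of parity across the array,
    # so usable capacity is (n - 1) * disk size.
    return (disk_count - 1) * disk_tb

def raid5_read_iops(disk_count, iops_per_disk):
    # Random reads are striped across every member, so read IOPS scale
    # roughly with disk count. (Writes pay a parity penalty: each small
    # write costs about 4 back-end I/Os.)
    return disk_count * iops_per_disk

print(raid5_usable_tb(12, 0.6))    # usable TB from 12 x 600 GB disks
print(raid5_read_iops(12, 150))    # ~150 IOPS per 10k SAS disk
```

That read-IOPS scaling, plus the controller cache absorbing write bursts, is why a wide RAID 5 of 10k disks holds up well against several VMs hitting it at once.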