An attempt to use Resource Pools to implement three different service levels
Now all VMs are added to one of the pools according to their requested performance level. The expectation here is that a VM just inherits the share setting of its parent resource pool. But this is not how resource pools work!
In fact we have created three independent pools that are entitled to 1/7 (Low = 14%), 2/7 (Normal = 28%) and 4/7 (High = 57%) of the overall resources, and all the VMs that we add to e.g. the "Normal" pool (probably the majority) have to share only 28% of the CPU and RAM resources (on average)! Let's do the math using an example: We add VMs with 20 vCPUs altogether to the "Low" pool, VMs with 40 vCPUs to the "Normal" pool, and VMs with 10 vCPUs to the "High" pool. What is the resulting percentage of shares per vCPU?
ResPool | Pool %Shares | vCPUs in pool | %Shares per vCPU
Low     | 14           | 20            | 0.7
Normal  | 28           | 40            | 0.7
High    | 57           | 10            | 5.7
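To make the arithmetic explicit, here is a minimal Python sketch of the same calculation (the 1:2:4 weights are the relative values behind the Low/Normal/High share levels, and the vCPU counts are the ones from the example above):

```python
# Relative weights behind the Low/Normal/High share levels (1:2:4).
pool_weights = {"Low": 1, "Normal": 2, "High": 4}

# Number of vCPUs that were placed into each pool in the example.
vcpus_in_pool = {"Low": 20, "Normal": 40, "High": 10}

total_weight = sum(pool_weights.values())  # 7

for pool, weight in pool_weights.items():
    pool_pct = 100.0 * weight / total_weight       # share of the cluster the pool is entitled to
    per_vcpu_pct = pool_pct / vcpus_in_pool[pool]  # share each vCPU inside the pool ends up with
    print(f"{pool:>6}: {pool_pct:4.1f}% of the cluster, {per_vcpu_pct:.2f}% per vCPU")
```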
That means we end up with the VMs in "Low" and "Normal" getting the same CPU shares, while the VMs in "High" get disproportionately high shares. Certainly not what we intended ...
This seeming paradox has been discussed before: There are older blog posts by Duncan Epping and Chris Wahl that describe it in even more detail. So to some of you this might only be a reminder, but an important one!
What are the consequences of this misconception?
In an underutilized environment this setup will probably not cause any noticeable problems, because shares only come into play when there is resource contention and VMs compete for resources.
In highly utilized (or even overloaded) environments, though, you will see high CPU %READY times and memory ballooning where you would not expect them, because the VMs are constrained by their resource pool in a way that you did not intend.
How to do better?
One way to do better is to keep the resource pools, but use custom shares on them that you readjust whenever you provision or power on a new VM, or remove/power off an existing one. This could be automated through a script (see the sketch below), but I find it a bit clumsy ...
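For what it's worth, the adjustment itself is simple proportional arithmetic: each pool's custom share value has to track the number of vCPUs currently in it, multiplied by the pool's intended per-vCPU weight. A minimal Python sketch of that calculation (the base value of 1000 shares per vCPU is an arbitrary assumption; only the ratios matter):

```python
# Intended per-vCPU weighting: a "High" vCPU should get 4x the shares of a "Low" vCPU.
PER_VCPU_WEIGHT = {"Low": 1, "Normal": 2, "High": 4}
BASE_SHARES_PER_VCPU = 1000  # arbitrary scaling factor, only the ratios matter

def custom_pool_shares(vcpus_in_pool):
    """Return the custom share value each pool needs so that every vCPU
    ends up with shares proportional to its pool's weight."""
    return {pool: PER_VCPU_WEIGHT[pool] * count * BASE_SHARES_PER_VCPU
            for pool, count in vcpus_in_pool.items()}

# Re-run (and re-apply) this whenever a VM is provisioned, removed or powered on/off:
print(custom_pool_shares({"Low": 20, "Normal": 40, "High": 10}))
# -> {'Low': 20000, 'Normal': 80000, 'High': 40000}
```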
A better way is to remove the additional pools, keep all VMs in the invisible root resource pool of each cluster, and apply individual shares to them where needed.
If you want to implement different service levels based on shares, then I suggest storing a VM's priority in a vCenter custom attribute and having a script that applies the share settings based on this attribute's value. This way you have an easily visible indication of what class a VM belongs to (the custom attribute) plus an automated process to ensure the correct settings. (Please note: With vSphere 5.1 and the new Web Client you can/should use tags instead of custom attributes. See e.g. this post on the vSphere blog.)
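Just as an illustration, here is a rough pyVmomi sketch of what such a script could look like. The custom attribute name ("ServiceLevel"), the share values per vCPU and the connection details are assumptions made up for this example; a PowerCLI script would of course work just as well:

```python
# Hedged sketch: set custom CPU shares on each VM according to a vCenter
# custom attribute (assumed to be called "ServiceLevel"). Requires pyVmomi.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Assumed share values per vCPU for each service level (1:2:4 ratio).
CPU_SHARES_PER_VCPU = {"Low": 500, "Normal": 1000, "High": 2000}

def apply_service_level(vm, level):
    """Reconfigure a single VM with custom CPU shares matching its service level."""
    shares = vim.SharesInfo(level="custom",
                            shares=CPU_SHARES_PER_VCPU[level] * vm.config.hardware.numCPU)
    spec = vim.vm.ConfigSpec(cpuAllocation=vim.ResourceAllocationInfo(shares=shares))
    vm.ReconfigVM_Task(spec)

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="...", sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    # Resolve custom field keys to names so we can find the "ServiceLevel" attribute.
    field_names = {f.key: f.name for f in content.customFieldsManager.field}
    view = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vim.VirtualMachine], True)
    for vm in view.view:
        for cv in vm.customValue:
            if field_names.get(cv.key) == "ServiceLevel" and cv.value in CPU_SHARES_PER_VCPU:
                apply_service_level(vm, cv.value)
    view.DestroyView()
finally:
    Disconnect(si)
```

The same approach works for memory shares via the memoryAllocation part of the ConfigSpec.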
Resource pools can still be used e.g. to guarantee (or limit!) the resources that are available to a specific group of VMs, but you should then configure them with suitable reservations (or limits). Be aware though that reserving resources somewhat defeats an important benefit of server virtualization: saving resources by sharing them. In enterprise environments that serve lots of different customers you will need to find a good balance between reserving resources for customers who want guaranteed resources on the one hand, and not wasting resources by dedicating them to machines that do not really need them all the time on the other hand.
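If you do use pools with reservations or limits, those settings can be scripted as well. A hedged pyVmomi sketch (retrieving the pool object is omitted; a limit of -1 means "unlimited" in the vSphere API):

```python
from pyVmomi import vim

def set_pool_reservation(pool, cpu_mhz, mem_mb):
    """Give an existing resource pool a fixed CPU (MHz) and memory (MB) reservation."""
    spec = vim.ResourceConfigSpec(
        cpuAllocation=vim.ResourceAllocationInfo(
            reservation=cpu_mhz, limit=-1,   # reserve cpu_mhz, no upper limit
            expandableReservation=False,     # do not borrow from the parent pool
            shares=vim.SharesInfo(level="normal")),
        memoryAllocation=vim.ResourceAllocationInfo(
            reservation=mem_mb, limit=-1,
            expandableReservation=False,
            shares=vim.SharesInfo(level="normal")))
    pool.UpdateConfig(config=spec)
```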