Yes and no
Let's step through it (from memory):
- All 20 of your hosts are running happily and everybody is having a great day (let's say you have 200GB of available physical memory across the hosts)
- Host failures to tolerate is set to 2 (the cluster sets aside/reserves 20GB of memory to allow for guest restarts when hosts fail) NOTE: Active memory is not reserved memory
- Memory actively used is 190GB across the cluster
- Percentage of performance degradation is set to 0 (zero) %
- You go to deploy a new VM to the cluster
- vSphere Availability checks what is currently being consumed within the cluster against the resources the cluster would have if 2 hosts failed (190GB is currently needed, but only 180GB would be available if 2 hosts were to fail)
In the above case you get a warning, as there would likely be a performance issue in an HA event.
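The check above can be sketched as a quick calculation. This is a simplified model of the logic, not VMware's actual implementation; the function name and parameters are illustrative assumptions:

```python
def ha_degradation_warning(total_memory_gb: float,
                           num_hosts: int,
                           failures_to_tolerate: int,
                           active_memory_gb: float,
                           tolerated_degradation_pct: float) -> bool:
    """Return True if the cluster should warn about likely performance
    degradation after the configured number of host failures.
    Assumes hosts are uniformly sized (total memory / host count)."""
    per_host_gb = total_memory_gb / num_hosts
    # Memory remaining if the tolerated number of hosts fail
    capacity_after_failures = total_memory_gb - failures_to_tolerate * per_host_gb
    # How far active memory would exceed the reduced capacity
    shortfall_gb = active_memory_gb - capacity_after_failures
    # How much shortfall the configured degradation percentage allows
    tolerated_shortfall_gb = active_memory_gb * tolerated_degradation_pct / 100
    return shortfall_gb > tolerated_shortfall_gb

# The walkthrough's numbers: 20 hosts, 200GB total, 2 failures to
# tolerate, 190GB active, 0% tolerated degradation
print(ha_degradation_warning(200, 20, 2, 190, 0))  # True -> warning
```

With 190GB active and only 180GB left after two host failures, the 10GB shortfall exceeds the 0% tolerance, so the warning fires; setting the degradation percentage to 10% or more would suppress it in this example.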
Does that help?