
This blog post is adapted from my book, titled VMware vRealize Operations Performance and Capacity Management. It is published by Packt Publishing. You can buy it at Amazon or Packt.

I mentioned earlier that vRealize Operations does not simply regurgitate what vCenter has. Some of the vSphere-specific characteristics are not properly understood by traditional management tools. Partial understanding can lead to misunderstanding. vRealize Operations starts by fully understanding the unique behavior of vSphere, then simplifying it by consolidating and standardizing the counters. For example, vRealize Operations creates derived counters such as Contention and Workload, then applies them to CPU, RAM, disk, and network.

Let's take a look at one example of how partial information can be misleading in a troubleshooting scenario. It is common for customers to invest in ESXi hosts with plenty of RAM; I've seen hosts with 256 to 512 GB of RAM. One reason behind this is the way vCenter displays information. In the following screenshot, vCenter is giving me an alert: the host is running at high memory utilization. I'm not showing the other host, but it has a warning, as its utilization is high too.

I'm using vSphere 5.0 and vCenter Operations 5.x for the screenshots because I want to illustrate the point I made in Chapter 1 of the book about the rapid change of vCloud Suite and the VMware SDDC products. I have verified that the behavior is still the same in vSphere 5.5 Update 2 and vRealize Operations 6.0.

The first step is to check whether someone has modified the alarm by reducing the threshold. The next screenshot shows that utilization above 95% will trigger an alert, while utilization above 90% will trigger a warning. The threshold has to be breached for at least 5 minutes. The alarm is set to a suitably high threshold, so we will assume the alert genuinely indicates high utilization on the host.

Let's verify the memory utilization. I'm checking both the hosts as there are two of them in the cluster. Both are indeed high. The utilization for vmsgesxi006 has gone down in the time taken to review the Alarm Settings tab and move to this view, so both hosts are now in the Warning status.

Now we will look at the vmsgesxi006 specification. From the following screenshot, we can see it has 32 GB of physical RAM, and RAM usage is 30,747 MB. It is at 93.8% utilization.
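If you want to pull the same numbers outside the vSphere Client, here is a minimal sketch using pyVmomi (VMware's Python SDK). This is my own illustration, not from the book; the vCenter address and credentials are placeholders, and it simply reproduces the physical RAM versus consumed RAM figures that vCenter shows.

```python
# Minimal pyVmomi sketch: physical RAM vs. consumed RAM per host.
# vCenter address and credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="***", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
for esxi in view.view:
    total_mb = esxi.hardware.memorySize / 1024 / 1024      # physical RAM, bytes -> MB
    used_mb = esxi.summary.quickStats.overallMemoryUsage    # consumed RAM in MB
    print(f"{esxi.name}: {used_mb} / {total_mb:.0f} MB = {used_mb / total_mb:.1%}")
view.DestroyView()
Disconnect(si)
```

For the host above, 30747 / 32768 works out to the same 93.8% that vCenter reports.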

Since all the numbers shown in the preceding screenshot are refreshed within minutes, we need to check with a longer timeline to make sure this is not a one-time spike. So let's check for the last 24 hours. The next screenshot shows that the utilization was indeed consistently high. For the entire 24-hour period, it has consistently been above 92.5%, and it hits 95% several times. So this ESXi host was indeed in need of more RAM.
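The same 24-hour check can also be done programmatically. Here is a sketch along those lines, assuming the `content` connection object and an `esxi` host object obtained as in the previous sketch, and the standard mem.usage.average counter (vCenter stores percentage counters in hundredths of a percent).

```python
# Sketch: last 24 hours of mem.usage.average for one host, at the
# 5-minute historical interval. Assumes `content` and `esxi` from the
# previous sketch.
from datetime import datetime, timedelta
from pyVmomi import vim

perf = content.perfManager

# Build a "group.name.rollup" -> counter-id map to locate mem.usage.average.
counters = {f"{c.groupInfo.key}.{c.nameInfo.key}.{c.rollupType}": c.key
            for c in perf.perfCounter}
usage_id = counters["mem.usage.average"]

spec = vim.PerformanceManager.QuerySpec(
    entity=esxi,
    metricId=[vim.PerformanceManager.MetricId(counterId=usage_id, instance="")],
    startTime=datetime.now() - timedelta(hours=24),
    endTime=datetime.now(),
    intervalId=300)                                  # 5-minute rollups

for entity_metrics in perf.QueryPerf(querySpec=[spec]):
    for series in entity_metrics.value:
        samples = [v / 100.0 for v in series.value]  # 9380 -> 93.80 %
        print(f"min {min(samples):.1f}%, max {max(samples):.1f}%, "
              f"samples above 92.5%: {sum(s > 92.5 for s in samples)}")
```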

Deciding whether to add more RAM is complex; there are many factors to consider. There will be downtime on the host, and you need to do it for every host in the cluster, since you need to maintain a consistent build cluster-wide. There will be a lot of vMotion activity as each host needs to be shut down. Because the ESXi host is highly utilized, I should increase the RAM significantly so that I can support more VMs or larger VMs. Buying bigger DIMMs may mean throwing away the existing DIMMs, as there are rules restricting the mixing of DIMMs. Mixing DIMMs also increases management complexity. The new DIMMs may require a BIOS update, which may trigger a change request. Alternatively, the larger DIMMs may not be compatible with the existing host, in which case I have to buy a new box. So a RAM upgrade may trigger a host upgrade, which is a larger project.

Before jumping into a procurement cycle to buy more RAM, let's double-check our findings. It is important to ask: what is the host used for, and who is using it?

In this example scenario, we examined a lab environment, the VMware ASEAN lab. Let's check the memory utilization again, this time with that context in mind. The preceding graph shows high memory utilization over a 24-hour period, yet no one was using the lab in the early hours of the morning! I know this because I have been the lab administrator for the past 6+ years!

We will now turn to vCenter Operations for an alternative view. The following screenshot from vCenter Operations 5 tells a different story. CPU, RAM, disk, and network are all in the healthy range. Specifically for RAM, it has 97% utilization but 32% demand. Note that the Memory chart is divided into 2 parts. The upper one is at the ESXi level, while the lower one shows individual VMs in that host.

The upper part is in turn split into 2. The green rectangle (Demand) sits on top of a grey rectangle (Usage). The green rectangle shows a healthy figure at around 10 GB. The grey rectangle is much longer, almost filling the entire area.

The lower part shows the hypervisor and the VMs' memory utilization. Each little green box represents 1 VM.

On the bottom left, note the KEY METRICS section. vCenter Operations 5 shows that Memory | Contention is 0%. This means none of the VMs running on the host is contending for memory. They are all being served well!


I shared earlier that the behavior remains the same in vCenter 5.5. So, let's take a look at how memory utilization is shown in vCenter 5.5.

The next screenshot shows the counters provided by vCenter 5.5. This is from a different ESXi host, as I want to provide you with a second example; it is not related to the first example at all. This host has 48 GB of RAM. About 26 GB has been mapped to VMs or the VMkernel, which is shown by the Consumed counter (the highest line in the chart; notice that its value is almost constant). The Usage counter shows 52% because it is derived from Consumed. The active memory is a lot lower, as you can see from the line at the bottom. Notice that the Active line is not a simple straight line; its value goes up and down, while Usage stays nearly constant. This confirms that the Usage counter is based on the Consumed counter, not on Active.
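To make the relationship between the counters concrete, here is a back-of-the-envelope calculation. This is my own illustration, using rounded numbers that approximate the screenshot; the Active figure is purely illustrative since the chart does not label it.

```python
# Host-level Usage in vCenter is derived from Consumed, not Active.
# Rounded numbers approximating the 48 GB host in the screenshot;
# the Active value is illustrative only.
total_mb    = 48 * 1024      # physical RAM
consumed_mb = 25 * 1024      # roughly the "about 26 GB" mapped to VMs + VMkernel
active_mb   = 6 * 1024       # recently touched pages; much lower and fluctuating

print(f"Usage (Consumed-based): {consumed_mb / total_mb:.0%}")   # ~52%, what vCenter shows
print(f"Usage if Active-based : {active_mb / total_mb:.0%}")     # ~12%, what it would show instead
```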

Back to the chart: notice that ballooning is 0, so there is no memory pressure on this host. Can you explain why? It is because, even though Consumed is high, free host memory is still above the threshold at which ESXi starts reclaiming memory, so the host never needs to balloon.

At this point, some readers might wonder whether that's a bug in vCenter. No, it is not. ESXi is simply taking full advantage of the extra RAM. Windows 7 and Windows Server 2008 behave in a similar way, putting otherwise-idle RAM to use.


There are situations in which you want to use the consumed memory and not the active memory. In fact, some applications may not run properly if you use active memory. I will cover this when we discuss memory counters in further chapters. Also, technically, it is not a bug as the data it gives is correct. It is just that additional data will give a more complete picture since we are at the ESXi level and not at the VM level. vRealize Operations distinguishes between the active memory and consumed memory and provides both types of data. vCenter uses the Consumed counter for utilization for the ESXi host.

As you will see later in this book, vCenter uses the Active counter for a VM's utilization. So the Usage counter has a different formula in vCenter depending on the object. This makes sense, as they are at different levels; in the book I explain these two levels.

vRealize Operations uses the Active counter for utilization. Just because a physical DIMM on the motherboard is mapped to a virtual DIMM in the VM, it does not mean it is used (read or write). You can use that DIMM for other VMs and you will not incur (for practical purposes) performance degradation. It is common for Microsoft Windows to initialize pages upon boot with zeroes, but never use them subsequently.

Having said all the above, none of the counters and information above should be used in isolation. It's a common mistake to look at ESXi RAM counters (be it Consumed, Active, Usage, or Workload) and conclude there is a RAM performance or RAM capacity issue. You should always check the VMs first. Do they have memory contention? If not, there is no RAM performance issue. What about capacity? The same principle applies: take performance into account first, before you look at utilization. See this series of articles on what super metrics you need.
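One way to apply the "check the VMs first" advice without vRealize Operations is to look at per-VM reclamation indicators from vCenter's quick stats: if ballooned and swapped memory are zero across the board, a high host-level Usage number by itself is not evidence of a RAM problem. A rough sketch, again assuming the `esxi` host object from the first sketch:

```python
# Sketch: reclamation indicators for every VM on a host. Non-zero ballooned
# or swapped memory hints at real memory pressure; high Consumed with zero
# ballooning/swapping across all VMs is usually harmless.
for vm in esxi.vm:                                   # VMs registered on this host
    qs = vm.summary.quickStats
    print(f"{vm.name:30} ballooned={qs.balloonedMemory or 0:5} MB  "
          f"swapped={qs.swappedMemory or 0:5} MB  "
          f"active={qs.guestMemoryUsage or 0:6} MB")
```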

For further information on this topic, I recommend reviewing Kit Colbert's presentation on memory in vSphere at VMworld 2012. The content is still relevant for vSphere 5.x. The title is Understanding Virtualized Memory Performance Management and the session ID is INF-VSP1729. If the link has changed, here is the link to the full list of VMworld 2012 sessions.

Not all performance management tools understand this vCenter-specific characteristic. They would simply have recommended buying more RAM.

Now that you know your ESXi host does not need so much RAM, you can consider a single-socket ESXi host.

[20 March 2016: added 3rd example]

User Jengl at VMware Communities posted a question here. He wanted to know why, when "we patched the half of our cluster we had high consumed/usage of memory and the vCenter started to initiate balloon, compress and finally swap on the ESXi Hosts. I was not expecting that, because active memory was only a small percentage of consumed."

He shared his screenshot, which I copied here so I can explain it.

The above screenshot appears to be at the ESXi host level (not the cluster or VM level). We can see that around 10:50 am there was a spike. The Memory Usage counter hit 98.92%. That's high, as that's an average over 5 minutes; it probably touched 100% within that 5-minute window. Memory Workload, which is based on Active, was low at 19%. This is possible: the VMs were not actively using the RAM. Contention rose to 1.82%, which indicates some VMs were contending for RAM.

The whole situation went back to normal. Contention disappeared pretty quickly, and Memory Usage went down. It's hard to see from the chart, but it looks like it dropped below 95%.

Based on the above, I'd expect ballooning to have kicked in at around 10:50 am too, and then taper off. Because the VMs were not actively using the RAM, I'd not expect the balloon value to drop to 0.


Guess what? The screenshot below confirmed that.

I hope that's clear.

A complete list of built-in ESXi RAM and Host SSD Caches

VMware has two categories of SSD Host Caches — those that swap out memory contents to SSD when host memory is full, and those that cache storage requests to host Flash / RAM.

In the former category are the VMware features described below: Host Cache Configuration, Virtual Flash Host Swap Cache, Virtual Machine Swap File Location, and System Swap.

And in the latter category are Virtual Flash Read Cache (VFRC) and Content Based Read Cache (CBRC).

I will start by defining memory swapping in general.

ESXi host memory has two consumers: VMs and ESXi host processes. When host memory starts to run out, ESXi frees up memory by writing some memory contents to an in-host SSD or an SSD-based datastore. This process is called swapping, and it is a standard feature in all operating systems. In ESXi, you can monitor swap usage by looking at the ESXi memory counters 'Swap in to host cache' and 'Swap out of host cache'. Ideally, your host should never swap, since swapping degrades VM performance.1
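If you want to track these counters programmatically, note that the exact counter names vary slightly between vSphere versions, so rather than hard-coding them it is safer to enumerate the memory counters that mention swapping and pick the ones matching 'Swap in to host cache' and 'Swap out of host cache' in your environment. A sketch using pyVmomi (the vCenter address and credentials are placeholders):

```python
# Sketch: list ESXi memory counters related to swapping so you can find the
# exact IDs for 'Swap in to host cache' / 'Swap out of host cache'.
# vCenter address and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="***", sslContext=ctx)
perf = si.RetrieveContent().perfManager

for c in perf.perfCounter:
    if c.groupInfo.key == "mem" and "swap" in c.nameInfo.key.lower():
        print(f"{c.groupInfo.key}.{c.nameInfo.key}.{str(c.rollupType):10} "
              f"[{c.unitInfo.key}]  {c.nameInfo.label}")

Disconnect(si)
```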


VMware's ‘Host Cache Configuration'


Also known as 'Swap to Host Cache' or 'Host Swap Cache'. This feature is not really a host cache; it is an SSD-based datastore configured as a swap location.

Menu location in vCenter GUI: Host > Configure > Storage > Host Cache Configuration.

Available in all ESXi editions.

VMware's 'Virtual Flash Host Swap Cache'

It is the exact same functionality as VMware's 'Host Cache Configuration' above; the only difference is that the swap location is an in-host SSD.

Menu location in vCenter GUI: Host > Configure > Virtual Flash > Virtual Flash Host Swap Cache.

Available in ESXi Enterprise Plus edition only.


If 'Host Cache Configuration' or 'Virtual Flash Host Swap Cache' is configured, it will override the two swapping-related VMware configuration options listed below: 'Virtual Machine Swap File Location' and 'System Swap'.

VMware's ‘Virtual Machine Swap File Location' feature

This is a cluster-level feature that defines the location where all the VMs will swap to. By default, VMs swap to the same datastore where the VM is located. This is a logical choice since it allows for smooth vMotions. You really don't want to change this default configuration.

Menu location in vCenter GUI: The default setting is at the cluster level. Cluster > Configure > General. It can be overridden at the host level. Host > Configure > Virtual Machines > Swap File Location.

Available in all ESXi editions.

VMware's 'System Swap' feature

This is a configuration option that lets ESXi swap out memory contents used by idle ESXi host processes (not VMs) and then assign the newly freed-up memory to VMs. The 'System Swap' location can be any SSD datastore, the VMware 'Host Cache' location, or the datastore specified in 'VM Swap File Location'. System Swap is limited to 1 GB. Because of this constraint, and because swapping of system processes will adversely affect ESXi performance, I recommend not changing the default option.

Menu location in vCenter GUI: Host > Configure > System > System Swap.

Available in all editions.

Now, moving on to the caching features in VMware, I will start by defining what ideal ESXi host caching software should do.

Generic Definition: Caching

This is different from swapping. Good caching software should cache all storage IO (reads and writes) from all VMs and the VMware kernel to in-host SSD or RAM. So caching should do what swapping does and a lot more, regardless of whether memory is under pressure or not. This is the ideal goal, though VMware's VFRC and CBRC features in this area have shortcomings in what they cache (more on that below).

VMware's ‘Virtual Flash Read Cache' / ‘vFlash Read Cache' feature.

VFRC caches read requests from VMs to an in-host SSD. To use this feature, you must first configure ‘Virtual Flash Resource Management' on an in-host SSD, thereafter you divvy up this SSD and assign SSD capacity at the VMDK level for each VM on that host. VFRC will then improve the read performance of VMs by caching frequently used reads to that SSD.

Menu location in vCenter GUI: available in 6.7 or earlier (end-of-life in 7.x). In 6.7: Host > Configure > Virtual Flash Resource Management. Then add SSD capacity to each VM by going to VM > Edit Settings > Virtual Flash Read Cache > add SSD capacity per VMDK.

In 6.7 and earlier ESXi versions, it is available in Enterprise Plus edition only.

VMware's ‘Content Based Read Cache' (CBRC) / ‘Horizon View Storage Accelerator' feature.

CBRC is a configuration option at the host level that is used only by the View Storage Accelerator (VSA) feature in Horizon VDI, so CBRC by itself doesn't do anything. VSA is the feature that exposes CBRC to administrators. VSA caches only reads, and only from the replica/parent VM in Horizon; it doesn't cache reads or writes from the end-user VDI VMs. VSA/CBRC don't work for server VMs. By default, VSA caches to 1 GB of host RAM, but you can increase the cache size to 32 GB of RAM per host. Since CBRC/VSA caches only the parent/replica VM, a cache size greater than the size of the replica VM will be of no use.
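For reference, CBRC surfaces on the host as advanced settings; to the best of my recollection the keys live under the CBRC. prefix (for example CBRC.Enable and CBRC.DCacheMemReserved, in MB), but verify the names in your environment. A read-only pyVmomi sketch that lists whatever CBRC settings the host exposes, assuming an `esxi` HostSystem object obtained as in the sketches earlier in this post:

```python
# Sketch: list CBRC-related advanced settings on a host (read-only).
# Assumes `esxi` is a vim.HostSystem obtained as in the earlier sketches.
from pyVmomi import vim

opt_mgr = esxi.configManager.advancedOption
try:
    for opt in opt_mgr.QueryOptions("CBRC."):
        print(f"{opt.key} = {opt.value}")
except vim.fault.InvalidName:
    print("No CBRC.* advanced settings found on this host")
```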

VirtuCache

This is host-side caching software from us, and it is the only third-party host-side caching software for VMware. It encompasses all the above features from VMware, plus more. In brief, it caches ALL reads and writes from all VMs and the VMware kernel to in-host SSD or RAM. It is extremely easy to use, and it is certified by VMware.

With VirtuCache installed in the host:

  1. If VMware starts to swap out memory contents, VirtuCache will automatically intercept it and write it to the in-host SSD assigned to VirtuCache. To be clear, this is not a special feature that VirtuCache has; it is simply the fact that VirtuCache intercepts all storage IO (reads and writes) from VMware and caches it to the in-host cache media (SSD or RAM) assigned to it. So just let all the VMware swapping features described above remain at their default values.
  2. Don't configure VFRC (it's end-of-lifed anyway in ESXi 7.x). VirtuCache directly competes with it. Some big differences are that we cache reads and writes, VFRC caches only reads; we support VDI, VFRC doesn't. For a complete list of differences between VirtuCache and VFRC, review this link.
  3. Though VirtuCache interoperates with CBRC/VSA, you shouldn't configure CBRC/VSA, since VirtuCache does a lot more. For instance VirtuCache caches all storage IO from VMware, VSA caches only reads from replica VMs in Horizon. CBRC only supports RAM, VirtuCache supports SSD and RAM. For a complete list of differences between VSA/CBRC and VirtuCache, review this link.
Summary:
  1. You should have plenty of memory so there is no swapping.
  2. If you have VirtuCache installed, keep all VMware cache and swap features in their default state, since you don't need them anymore. VirtuCache does a lot more.
Cross-references
1 – Paragraphs 2 and 3 on page 10 of this VMware white paper list two interesting issues with the way VMware does swapping: (1) ESXi has no knowledge of which guest VM pages it is swapping, and (2) double paging. These are the two reasons why you should have adequate memory on the host to ensure no swapping.



