Why?
The new RHEL6-based OpenVZ kernel has a new memory management model which supersedes User BeanCounters. After dragging my feet for a long time, I decided to convert my running OpenVZ containers to the newer VSwap-enabled scheme. Using VSwap makes managing and tracking the old user beancounters a whole lot easier:
- Only the RAM and swap parameters are mandatory in a container's configuration (see the sample excerpt after this list)
- The previous beancounters (container parameters) remain usable but are optional
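For illustration, this is roughly what is left of the memory configuration after conversion: a hypothetical container with 512M of RAM and 1G of swap ends up with just the two VSwap parameters, stored by vzctl as barrier:limit page counts (4 KB pages, so 131072 = 512M and 262144 = 1G; newer vzctl versions may keep human-readable suffixes instead):
# grep -E '^(PHYSPAGES|SWAPPAGES)=' /etc/vz/conf/101.conf
PHYSPAGES="0:131072"
SWAPPAGES="0:262144"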
How-to
Use the following snippet to update the container's config file (it drops the old beancounter limits and sets the new RAM/swap values):
# CTID=101
# RAM=512M
# SWAP=1G
# CFG=/etc/vz/conf/${CTID}.conf
# cp $CFG $CFG.pre-vswap
# grep -Ev '^(KMEMSIZE|LOCKEDPAGES|PRIVVMPAGES|SHMPAGES|NUMPROC|PHYSPAGES|VMGUARPAGES|OOMGUARPAGES|NUMTCPSOCK|NUMFLOCK|NUMPTY|NUMSIGINFO|TCPSNDBUF|TCPRCVBUF|OTHERSOCKBUF|DGRAMRCVBUF|NUMOTHERSOCK|DCACHESIZE|NUMFILE|AVNUMPROC|NUMIPTENT|ORIGIN_SAMPLE|SWAPPAGES)=' $CFG.pre-vswap > $CFG
# vzctl set $CTID --ram $RAM --swap $SWAP --save
# vzctl set $CTID --reset_ub
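A quick sanity check helps before moving on: vzlist can print the new limits directly (the physpages/swappages output fields are available in VSwap-capable vzctl; older versions may lack them), and free inside the container should now report the configured swap:
# vzlist -o ctid,physpages.l,swappages.l $CTID
# vzctl exec $CTID free -m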
Results
Unfortunately the VSwap scheme breaks the vzmemcheck utility: for VSwap containers most of its output becomes unusable, as the memory counters turn into odd decimal values. A more modest AWK script called vzoversell tries to close that gap.
Using the new tool vzoversell (vzmemcheck displays strange values for VSwap-enabled containers):
A first version of the vzoversell utility has been added. This is a proposed vzmemcheck replacement for VSwap mode. Currently it just sums the RAM and swap limits of all VSwap containers and compares them to the RAM and swap available on the host. You can certainly oversell RAM (as long as you have enough swap), but the sum of all RAM+swap limits should not exceed the RAM+swap on the node, and the main purpose of this utility is to check that constraint.
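The constraint itself is easy to check by hand. A minimal sketch of the idea (not the actual vzoversell script, and assuming the limits are written in the usual barrier:limit form as plain page counts rather than "unlimited" or human-readable values):
# awk -F'[":]' '/^(PHYSPAGES|SWAPPAGES)=/ { pages += $3 }
      END { printf "allocated RAM+swap: %.2fG\n", pages * 4 / 1024 / 1024 }' /etc/vz/conf/*.conf
# free -g | awk '/^(Mem|Swap):/ { total += $2 } END { print "available RAM+swap: " total "G" }'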
Pre-VSwap
# vzoversell
      --------- RAM --------- -------- Swap --------- Flags
       used  peak limit fails  used  peak limit fails
  224  250M  511M  512M     - 6.87M 7.05M  256M     -
  108  305M    1G    1G     - 56.8M  202M  512M     -
  105  181M  384M  384M     - 15.5M 22.2M  512M     -
  103  185M  458M    1G     -     -     -  512M     -
  102 1.07G    3G    3G     -  346M  615M    1G     -
  106 76.2M  228M  256M     -     -     -  512M     -
  101  234M  606M    1G     - 6.97M 20.7M  512M     - f
  222  170M  512M  512M     - 34.2M  142M  256M     - f
  104  704M  2.5G  2.5G     -  114M  400M  512M     - f
      ----- ----- ----- ----- ----- ----- ----- -----
TOTAL 3.13G 9.14G 10.1G     -  581M 1.38G  4.5G     -
VSwap enabled
# vzoversell
      --------- RAM --------- -------- Swap --------- Flags
       used  peak limit fails  used  peak limit fails
  102  980M    1G    1G     -  275M  276M    1G     -
  104  345M  511M  512M     - 14.1M 16.1M    1G     -
  108  210M  255M  256M     -  124K  516K  512M     -
  224  216M  242M  256M     -     -     -  512M     -
  222   86M  128M  128M     - 24.1M 25.7M  256M     -
  106 22.3M 26.6M  128M     -     -     -  256M     -
  103  136M  179M  256M     -     -     -  512M     -
  101  107M  152M  256M     -     -     -  512M     -
  105 97.1M  384M  128M     - 47.2M 53.4M  256M     -
      ----- ----- ----- ----- ----- ----- ----- -----
TOTAL 2.15G 2.84G 2.88G     -  361M  371M 4.75G     -
RAM available: 7.67G allocated: 2.88G oversell: 37%
Swap available: 7.81G allocated: 4.75G oversell: 60%
RAM+Swap available: 15.5G allocated: 7.62G oversell: 49%
VM overcommit
There is a new parameter, vm_overcommit, and it works in the following way: if set, it is used as a multiplier on RAM+swap to set privvmpages. In layman's terms, it expresses the ratio between real memory (RAM+swap) and virtual memory (privvmpages). Again, physpages limits RAM, and physpages+swappages limits the real memory used by a container. On the other hand, privvmpages limits the memory allocated by a container. While it depends on the application, generally not all allocated memory is actually used; sometimes allocated memory is 5 or 10 times larger than used memory. What vm_overcommit gives you is a way to set this gap.
For example:
# vzctl set $CTID --ram 2G --swap 4G --vm_overcommit 3 --save
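With these values the container gets 2G of RAM and 4G of swap, and privvmpages becomes (2G + 4G) * 3 = 18G of allowed virtual memory, i.e. roughly 4718592 pages of 4 KB. If your vzlist supports the UBC output fields, the resulting limit can be verified with:
# vzlist -o ctid,privvmpages.l $CTID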
I used it for the following containers:
102.conf:VM_OVERCOMMIT="2"
104.conf:VM_OVERCOMMIT="2"
108.conf:VM_OVERCOMMIT="2"
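The list above is plain grep output; assuming the config directory used earlier in this post, something like this reproduces it:
# cd /etc/vz/conf && grep VM_OVERCOMMIT *.conf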