Saturday, November 21, 2015
vSphere 6.0 - Difference between vSphere 5.0, 5.1, 5.5 and vSphere 6.0
VMware HA Slots Calculation
As per VMware's definition,
“A slot is a logical representation of the memory and CPU resources that satisfy the requirements for any powered-on virtual machine in the cluster.”
If you have configured reservations at the VM level, they influence the HA slot calculation. The highest memory reservation and the highest CPU reservation among the VMs in your cluster determine the slot size for the cluster.
Here is an example:
If the VM with the highest memory reservation in the cluster has 8192 MB (8 GB) reserved and the VM with the highest CPU reservation has 4096 MHz reserved, then the memory slot size for the cluster is 8192 MB and the CPU slot size is 4096 MHz.
If no VM-level reservation is configured, a minimum CPU size of 256 MHz is used as the CPU slot size, and 0 MB plus the largest VM memory overhead is used as the memory slot size.
Calculation for the number of slots in a cluster:
Once we have the memory and CPU slot sizes from the method above, use the calculations below.
Number of CPU slots = total available CPU resources of the ESX host or cluster / CPU slot size
Number of memory slots = (total available memory resources of the ESX host or cluster - memory used by the service console and the ESX system) / memory slot size
Let's take an example.
I have 3 hosts in the cluster and 6 virtual machines running on it, and each host's capacity is as follows:
RAM = 50 GB per host
CPU = 8 x 2.666 GHz per host
Cluster RAM resources = 50 x 3 = 150 GB; subtracting the memory used by the service console and system leaves 143 GB (146432 MB).
Cluster CPU resources = 8 x 2.666 GHz x 3 ≈ 64 GHz of total CPU capacity in the cluster; subtracting the CPU capacity used by the ESX system leaves 60384 MHz.
I don't have any memory or CPU reservations in my cluster, so the default CPU slot size of 256 MHz applies. One of my virtual machines is assigned 8 vCPUs and has a memory overhead of 344.98 MB (the highest overhead among my 6 virtual machines in the cluster), so the memory slot size is roughly 345 MB.
Let's calculate the number of CPU and memory slots.
Number of CPU slots = total available CPU resources of the cluster / CPU slot size in MHz
Number of CPU slots = 60384 MHz / 256 MHz = 235.875, i.e. approximately 235
Number of memory slots = total available memory resources of the cluster / memory slot size in MB
Number of memory slots = 146432 MB / 345 MB ≈ 424
The more restrictive of the CPU and memory slot counts determines the number of slots for the cluster. We have 235 slots available for CPU and 424 slots available for memory, so the more restrictive number is 235.
So, the total number of slots for my cluster is approximately 235.
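To make the arithmetic explicit, here is a minimal Python sketch (purely illustrative, not an official VMware tool) that reproduces the slot calculation using the example values from this post:

# Minimal sketch of the HA slot arithmetic, using the example values above.
cluster_cpu_mhz = 60384    # usable cluster CPU after the ESX system overhead
cluster_mem_mb = 146432    # usable cluster memory after service console/system overhead
cpu_slot_mhz = 256         # default CPU slot size (no CPU reservations configured)
mem_slot_mb = 345          # largest VM memory overhead (344.98 MB), rounded up

cpu_slots = cluster_cpu_mhz // cpu_slot_mhz    # 235
mem_slots = cluster_mem_mb // mem_slot_mb      # 424
total_slots = min(cpu_slots, mem_slots)        # the most restrictive number wins: 235

print(cpu_slots, mem_slots, total_slots)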
Installing ESXi Patches Using the CLI
Prerequisite steps for installing ESXi patches
Download the patches applicable to your ESX/ESXi version manually.
Patches can be installed with the esxcli command over an SSH connection, or via the ESXi shell using a remote console connection such as iLO or DRAC.
Transfer the downloaded patches to a datastore on the ESX/ESXi host.
Implementation steps
1. Log in to your ESXi host using SSH or the ESXi shell with your root credentials.
2. Browse to the patch location in your datastore, verify that the downloaded patches are there, and note down the complete path to the patch.
3. Before installing patches, it is very important to place your ESXi host in maintenance mode.
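On ESXi 5.x and later this can typically be done from the shell itself, for example:
esxcli system maintenanceMode set --enable true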
4. Install the patch bundle by pointing esxcli at the depot ZIP file in your datastore:
esxcli software vib install -d /vmfs/volumes/datastore1/ESXi\ patches/ESXi510-201210001.zip
5. To verify the VIBs installed on your host, execute the command below:
esxcli software vib list
6. Reboot your ESXi host for the changes to take effect, then take it out of maintenance mode.
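After the reboot, the host can typically be taken back out of maintenance mode with:
esxcli system maintenanceMode set --enable false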
Explain the vMotion Background Process
vMotion Background Process
- The virtual machine's memory state is copied over the vMotion network from the source host to the target host. Users continue to access the virtual machine and potentially update pages in memory. A list of modified memory pages is kept in a memory bitmap on the source host.
- After most of the virtual machine's memory has been copied from the source host to the target host, the virtual machine is quiesced and no additional activity occurs on it. During this quiesce period, vMotion transfers the virtual machine's device state and the memory bitmap to the destination host.
- Immediately after the virtual machine is quiesced on the source host, it is initialized and starts running on the target host.
- Users access the virtual machine on the target host instead of the source host.
- The memory pages that the virtual machine was using on the source host are marked as free.
Difference Between ESX and ESXi

Thursday, November 19, 2015
How to Ignore Local Disks when Generating Multipath Devices on a Linux Server
Some machines have local SCSI cards for their internal disks. DM-Multipath is not recommended for these devices.
The following procedure shows how to modify the multipath configuration file to ignore the local disks when configuring multipath.
1. Determine which disks are the internal disks and mark them as the ones to blacklist.
In this example, /dev/sda is the internal disk. Note that, as originally configured in the default multipath configuration file, executing multipath -v2 shows the local disk, /dev/sda, in the multipath map.
[root@test ~]# multipath -v2
create: SIBM-ESXSST336732LC____F3ET0EP0Q000072428BX1
[size=33 GB][features="0"][hwhandler="0"]
\_ round-robin 0
\_ 0:0:0:0 sda 8:0 [---------
device-mapper ioctl cmd 9 failed: Invalid argument
device-mapper ioctl cmd 14 failed: No such device or address
create: 3600a0b80001327d80000006d43621677
[size=12 GB][features="0"][hwhandler="0"]
\_ round-robin 0
\_ 2:0:0:0 sdb 8:16
\_ 3:0:0:0 sdf 8:80
create: 3600a0b80001327510000009a436215ec
[size=12 GB][features="0"][hwhandler="0"]
\_ round-robin 0
\_ 2:0:0:1 sdc 8:32
\_ 3:0:0:1 sdg 8:96
create: 3600a0b80001327d800000070436216b3
[size=12 GB][features="0"][hwhandler="0"]
\_ round-robin 0
\_ 2:0:0:2 sdd 8:48
\_ 3:0:0:2 sdh 8:112
2. To prevent the device mapper from mapping /dev/sda in its multipath maps, edit the blacklist section of the /etc/multipath.conf file to include this device. Although you could blacklist the sda device using a devnode type, that would not be a safe procedure, since /dev/sda is not guaranteed to be the same device on reboot. To blacklist individual devices, use the WWID of the device instead.
Note that in the output of the multipath -v2 command, the WWID of the /dev/sda device is SIBM-ESXSST336732LC____F3ET0EP0Q000072428BX1.
To blacklist this device, include the following in the /etc/multipath.conf file.
blacklist {
wwid SIBM-ESXSST336732LC____F3ET0EP0Q000072428BX1
}
3. After you have updated the /etc/multipath.conf file, you must manually tell the multipathd daemon to reload the file.
The following command reloads the updated /etc/multipath.conf file.
service multipathd reload
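(On newer, systemd-based distributions the equivalent is typically systemctl reload multipathd.)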
4. Run the following commands:
multipath -F
multipath -v2
[root@test~]# multipath -F
[root@test ~]# multipath -v2
create: 3600a0b80001327d80000006d43621677
[size=12 GB][features="0"][hwhandler="0"]
\_ round-robin 0
\_ 2:0:0:0 sdb 8:16
\_ 3:0:0:0 sdf 8:80
create: 3600a0b80001327510000009a436215ec
[size=12 GB][features="0"][hwhandler="0"]
\_ round-robin 0
\_ 2:0:0:1 sdc 8:32
\_ 3:0:0:1 sdg 8:96
create: 3600a0b80001327d800000070436216b3
[size=12 GB][features="0"][hwhandler="0"]
\_ round-robin 0
\_ 2:0:0:2 sdd 8:48
\_ 3:0:0:2 sdh 8:112
Tuesday, November 17, 2015
Explain Multipath Command Output on a Linux Server
When you create, modify, or list a multipath device, you get a printout of the current device setup. The format is as follows.
For each multipath device:
action_if_any: alias (wwid_if_different_from_alias) [size][features][hardware_handler]
For each path group:
\_ scheduling_policy [path_group_priority_if_known] [path_group_status_if_known]
For each path:
\_ host:channel:id:lun devnode major:minor [path_status] [dm_status_if_known]
For example, the output of a multipath command might appear as follows:
mpath1 (3600d0230003228bc000339414edb8101) [size=10 GB][features="0"][hwhandler="0"]
\_ round-robin 0 [prio=1][active]
\_ 2:0:0:6 sdb 8:16 [active][ready]
\_ round-robin 0 [prio=1][enabled]
\_ 3:0:0:6 sdc 8:64 [active][ready]
If the path is up and ready for I/O, the status of the path is ready or active. If the path is down, the status is faulty or failed.
The path status is updated periodically by the multipathd daemon based on the polling interval defined in the /etc/multipath.conf file.
The dm status is similar to the path status, but from the kernel's point of view. The dm status has two states: failed, which is analogous to faulty, and active, which covers all other path states. Occasionally, the path state and the dm state of a device will temporarily disagree.
Friday, November 13, 2015
DM-Multipath includes compiled-in default settings that are suitable for common multipath configurations.
Setting up DM-multipath is often a simple procedure.
The basic procedure for configuring your system with DM-Multipath is as follows:
1. Install the device-mapper-multipath RPM.
Before setting up DM-Multipath on your system, ensure that your system has been updated and includes the device-mapper-multipath package.
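On a RHEL/CentOS-style system, the package can typically be installed with:
yum install device-mapper-multipath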
2. Edit the multipath.conf configuration file:
Edit the /etc/multipath.conf file by commenting out the following lines at the top of the file. This section of the configuration file, in its initial state, blacklists all devices. You must comment it out to enable multipathing.
blacklist {
devnode "*"
}
The default settings for DM-Multipath are compiled in to the system and do not need to be explicitly set in the /etc/multipath.conf file.
The default value of path_grouping_policy is set to failover, so in this example you do not need to change the default value.
The initial defaults section of the configuration file configures your system so that the names of the multipath devices are of the form mpathn; without this setting, the names of the multipath devices would be aliased to the WWID of the device.
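For reference, a minimal defaults stanza that enables this naming behavior typically looks like the following:
defaults {
user_friendly_names yes
}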
Save the configuration file and exit the editor.
3. Start the multipath daemons.
modprobe dm-multipath
service multipathd start
multipath -v2
The multipath -v2 command prints out multipathed paths that show which devices are multipathed. If the command does not print anything out, ensure that all SAN connections are set up properly and the system is multipathed.
4. Execute the following command to ensure that the multipath daemon starts on boot:
chkconfig multipathd on
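(On newer, systemd-based distributions the equivalent is typically systemctl enable multipathd.)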
Since the value of user_friendly_names is set to yes in the configuration file, the multipath devices will be created as /dev/mapper/mpathn.