This blog shares our knowledge and expertise on Linux System Administration and VMware Administration.

Sunday, November 12, 2017

Performance collection tools to gather data for fault analysis in VMware

This article explains how to use performance collection tools to gather data for analysis of faults such as:
    Unresponsive ESX hosts
    Unresponsive virtual machines
    ESX host purple diagnostic screens

Why gather performance data for a fault?

If the diagnostic logs do not help you determine the cause of a fault, you may need to use performance collection tools to gather further data for analysis. Set up the performance collection tools in advance so that data is captured when a fault occurs.

Performance gathering tools

VMware recommends the following tools for gathering performance data: 

top
The top utility provides a list of CPU-intensive tasks for the ESX host Service Console.
For fault troubleshooting, use top in batch mode and direct the output to a file so that it can be reviewed after the fault recurs.


Note: The top command is not available for ESXi.
To run the top utility, run the command:


# top -bc -d <delay in seconds> [-n <iterations>] > output-perf-stats-file.txt

 
Use the information in the output file to identify any trends before the fault. 
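
For example, to capture a sample every 10 seconds for roughly one hour, a command along these lines could be used (the delay, iteration count, and output file name are arbitrary examples):

# top -bc -d 10 -n 360 > /tmp/top-output.txt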


esxtop
The esxtop tool provides performance statistics for the entire ESX/ESXi host. It reports network, storage, CPU, and memory load from the VMkernel perspective, broken down per VMkernel world.
To collect the data over long periods of time, run esxtop in batch mode. Direct the output to a file so that it can be reviewed after the fault.


To run the esxtop tool, run the command:


# esxtop -b -d <delay in seconds> [-n <iterations>] > output-perf-statistics-file.csv
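
For example, to capture esxtop data every 10 seconds for roughly one hour (again, the delay, iteration count, and file name are only illustrative):

# esxtop -b -d 10 -n 360 > /tmp/esxtop-output.csv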

 
Like esxtop, the resxtop tool provides performance statistics, but for a specified ESX host on the network. It returns the same information as esxtop and may be used either after deploying the VMware vSphere Management Assistant (vMA) virtual appliance or after installing the VMware Command-Line Interface (vCLI). 


To run the resxtop tool and collect batch performance data, log into the vMA or open the vCLI, and execute the command:


# resxtop [server] [vihost] [portnumber] [username] -b -d <delay in seconds> [-n <iterations>] > output-perf-statistics-file.csv
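
As an illustration, the command below connects through a vCenter Server to one ESX host and collects an hour of samples; the server names, user name, and output file are placeholders, and the exact connection options may vary with the vCLI version:

# resxtop --server vcenter.example.com --vihost esx01.example.com --username root -b -d 10 -n 360 > /tmp/resxtop-output.csv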


vm-support -s

 
Use the vm-support command with the -s parameter to collect performance statistics, system configuration information, and logs. Submit the file generated by this command to VMware Support for further assistance, if required. 
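
A minimal invocation looks like the line below; depending on the ESX/ESXi version, additional options may be available to control the snapshot duration and interval, so check the host's vm-support help output:

# vm-support -s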


Performance Monitor (PERFMON.EXE)

 
Microsoft's Performance Monitor is a utility that comes with every Windows NT-based operating system. It can be used to monitor local and remote Windows machines, and it can log performance data and display either logged or real-time data.


This utility is useful when reviewing data collected with the esxtop tool and for troubleshooting virtual machine unresponsiveness. When using Performance Monitor to troubleshoot an unresponsive virtual machine, collect the data remotely from another Windows machine so that the utility does not affect the data being gathered.
For more information about Performance Monitor on your specific version of Windows, refer to Microsoft support sites.

Friday, November 10, 2017

Time command in Linux Server - Brief explanation

NAME
       time - time a simple command or give resource usage
      
Format
       time [options] command [arguments...]

The time command runs the specified program command with the given arguments.  When command finishes, time writes a message to standard error giving timing statistics about this program run.  These statistics consist of 


(i) the elapsed real time between invocation and termination,
(ii) the user CPU time (the sum of the tms_utime and tms_cutime values in a struct tms as returned by times(2)), and
(iii) the system CPU time (the sum of the tms_stime and tms_cstime values in a struct tms as returned by times(2)).

real %e
user %U
sys %S

%e - Elapsed real time (in seconds).
%U - Total number of CPU-seconds that the process spent in user mode.
%S - Total number of CPU-seconds that the process spent in kernel mode.
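
If GNU time is installed as /usr/bin/time (as opposed to the shell built-in time), these specifiers can be passed to its -f option to customize the output. A quick illustration, with approximate values:

[root@nsk-linux ~]# /usr/bin/time -f "real %e\nuser %U\nsys %S" sleep 1
real 1.00
user 0.00
sys 0.00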

Ex:

[root@nsk-linux ~]# time route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
10.0.2.0        0.0.0.0         255.255.255.0   U     1      0        0 eth4
0.0.0.0         10.0.2.2        0.0.0.0         UG    0      0        0 eth4

real    0m0.001s
user    0m0.000s
sys     0m0.001s

[root@nsk-linux ~]# time uptime

 08:34:58 up 57 min,  2 users,  load average: 0.04, 0.12, 0.08

real    0m0.003s
user    0m0.002s
sys     0m0.001s

For more options, please read man time.

Thursday, November 9, 2017

How to find files modified within a range of days and delete them?

Follow the steps below to find files that were modified between 20 and 30 days ago and delete them.

Command to find and list the files:

# find / -mtime +20 -mtime -30 -type f -name "test.*" -exec ls -al {} \;

Command to delete the listed files:

# find / -mtime +20 -mtime -30 -type f -name "test.*" -exec rm {} \;

Change the file name pattern to suit your needs.
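
If your find supports the GNU -delete action and you want to limit the search to a single directory, both steps can be combined into one command; the path /tmp below is purely an example:

# find /tmp -mtime +20 -mtime -30 -type f -name "test.*" -delete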

Hope it helps.

Default Queue Depth values for QLogic HBAs for various ESXi/ESX versions

Thursday, November 09, 2017
This table lists the default Queue Depth values for QLogic HBAs for various ESXi/ESX versions:

[Table image: default QLogic HBA Queue Depth values per ESXi/ESX version]
The default Queue Depth value for Emulex adapters has not changed across the ESXi/ESX versions released to date: it is 32 by default, and because 2 buffers are reserved, 30 are available for I/O data.

The default Queue Depth value for Brocade adapters is 32.

Wednesday, November 8, 2017

Enabling Intel VT-x and AMD-V Virtualization Hardware Extensions in the BIOS for ESXi

This section describes how to identify hardware virtualization extensions and enable them in your BIOS if they are disabled. The Intel VT-x extensions can be disabled in the BIOS, whereas the AMD-V virtualization extensions cannot be disabled in the BIOS. 

Procedure for Enabling virtualization extensions in BIOS


1.    Reboot the computer and open the system's BIOS menu. This can usually be done by pressing the Delete key, the F1 key, Alt+F4, or the F10 key, depending on the hardware.


2.    Enable the virtualization extensions in the BIOS:


        a.    Open the Processor submenu. The processor settings menu may be hidden under the Chipset, Advanced CPU Configuration, or Northbridge menus.
         b.    Enable Intel Virtualization Technology (also known as Intel VT-x). AMD-V extensions cannot be disabled in the BIOS and should already be enabled. The virtualization extensions may be labeled Virtualization Extensions, Vanderpool or various other names depending on the OEM and system BIOS.
         c.    Enable Intel VT-d or AMD IOMMU, if the options are available. Intel VT-d and AMD IOMMU are used for PCI device assignment.
         d.    Select Save & Exit.


3.    Reboot the machine.


4.    When the machine has booted, run cat /proc/cpuinfo | grep -E "vmx|svm". Adding --color to grep is optional, but useful if you want the matches highlighted. If the command produces output, the virtualization extensions are enabled. If there is no output, your system may not have the virtualization extensions, or the correct BIOS setting may not be enabled.
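
For example, to search with highlighting (the flags printed will vary from CPU to CPU):

# grep -E --color "vmx|svm" /proc/cpuinfo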

Tuesday, November 7, 2017

How to Determine if Intel Virtualization Technology or AMD Virtualization is enabled in the BIOS without a reboot

When troubleshooting vMotion, Enhanced vMotion Compatibility (EVC), or 64-bit virtual machine performance, you may need to determine whether Intel Virtualization Technology (VT) or AMD Virtualization (AMD-V) is enabled in the BIOS.

This section describes how to check, without rebooting, whether the hardware virtualization extensions are enabled in the BIOS of a running ESX host.

Log in to the ESX host as the root user.

    Run this command:

    esxcfg-info|grep "\----\HV Support"

    The output of this command indicates the type of hypervisor (HV) support available. These are the descriptions for the possible values:

    0 - VT/AMD-V indicates that support is not available for this hardware.
    1 - VT/AMD-V indicates that VT or AMD-V might be available but it is not supported for this hardware.
    2 - VT/AMD-V indicates that VT or AMD-V is available but is currently not enabled in the BIOS.
    3 - VT/AMD-V indicates that VT or AMD-V is enabled in the BIOS and can be used.
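
    On a host where the extensions are enabled and usable, the matching line looks roughly like the one below; the exact formatting and dot padding vary between versions, so treat it as illustrative only:

    |----HV Support..........................................3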

How to check the Listening Ports on Linux Server - Linvirtshell

Tuesday, November 07, 2017

We can check the listening ports on a Linux server in the following ways:

fuser - identify processes using files or sockets (refer to the man page for more information)

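For example, to see which process is using TCP port 22 (the port number is just an illustration; run as root to see processes owned by other users):

# fuser -v -n tcp 22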

netstat - print network connections, routing tables, interface statistics, masquerade connections, and multicast memberships (refer to the man page for more information)

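For example, to list all listening TCP and UDP sockets with numeric ports and the owning process:

# netstat -tulpn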

ssh - OpenSSH SSH client (remote login program) (refer to the man page for more information)

[root@nsk-linux ~]# ssh -vv 10.0.2.15 25

OpenSSH_5.3p1, OpenSSL 1.0.1e-fips 11 Feb 2013
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: Applying options for *
debug2: ssh_connect: needpriv 0
debug1: Connecting to 10.0.2.15 [10.0.2.15] port 22.
debug1: Connection established.
debug1: permanently_set_uid: 0/0
debug1: identity file /root/.ssh/identity type -1
debug1: identity file /root/.ssh/identity-cert type -1
debug1: identity file /root/.ssh/id_rsa type -1
debug1: identity file /root/.ssh/id_rsa-cert type -1
....
...
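
Note that in the run above ssh still connected to port 22; the trailing 25 is interpreted as a remote command rather than a port number. To probe a specific port with ssh, pass it with -p, for example:

# ssh -vv -p 25 10.0.2.15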

lsof - list open files (refer to the man page for more information)

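For example, to list listening sockets with numeric ports and addresses:

# lsof -i -P -n | grep LISTEN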

nmap - network exploration tool and security / port scanner (refer to the man page for more information)

For example, you can scan localhost or substitute any IP address or server name.
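
A typical scan of the first 1024 TCP ports might look like this (the port range and target are only examples):

# nmap -p 1-1024 localhost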

telnet - user interface to the TELNET protocol (refer to the man page for more information)

For example, you can test against localhost or substitute any IP address or server name.

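For instance, to test whether something is listening on TCP port 25 (the port is just an illustration):

# telnet localhost 25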

Monday, November 6, 2017

How to Avoid and Solve Disk Limitation Errors with KVM Guests?

There are some limitations specific to the virtio-blk driver that are discussed in this article. Please note that these are not general limitations of KVM; they are relevant only to cases where virtio-blk is used.

Disks under KVM are para-virtualized block devices when used with the virtio-blk driver. All para-virtualized devices (e.g. disk, network, balloon, etc.) are PCI devices. Presently, guests are limited to a maximum of 32 PCI devices. Of the 32, 4 are required by the guest for minimal baseline functionality and are therefore reserved.


When adding a disk to a KVM guest, the default method assigns a separate virtual PCI controller for every disk to allow hot-plug support (i.e. the ability to add/remove disks from a running VM without downtime). Therefore, if no other PCI devices have been assigned, the max number of hot-pluggable disks is 28.


If a guest requires more disks than the available PCI slots allow, then there are three possible work-arounds.


1. Use PCI pass-through to assign a physical disk controller (i.e. FC HBA, SCSI controller, etc.) to the VM and subsequently use as many devices as that controller supports.
2. Forego the ability to hot-plug and assign the virtual disks using multi-function PCI addressing.
3. Use the virtio-scsi driver, which creates a virtual SCSI HBA that occupies a single PCI address and supports thousands of hot-plug disks.


Here, I have used option 2 to work around this problem.

Multi-function PCI addressing allows up to 8 virtual disks per controller. Therefore you can have n * 8 possible disks, where n is the number of available PCI slots.

On a system with 28 free PCI slots, you can assign up to 224 virtual disks to that VM. However, as previously stated, you will not be able to add or remove the multi-function disks from the guest without rebooting the guest.

Any disks assigned without multifunction can, however, continue to use hot-plug.

The XML config below demonstrates how to configure a multi-function PCI controller:

<disk type='block' device='disk'>
<driver name='qemu' type='raw'/>
<source dev='/dev/rootvg/lvtest01'/>
<target dev='vdb' bus='virtio'/>
<alias name='virtio-disk1'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
</disk>
<disk type='block' device='disk'>
<driver name='qemu' type='raw'/>
<source dev='/dev/rootvg/lvtest02'/>
<target dev='vdc' bus='virtio'/>
<alias name='virtio-disk2'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1' multifunction='on'/>
</disk>


In the above, we used slot 7 as a multi-function controller and attached two disks (vdb and vdc) to it, so only one PCI slot is used. Since a multi-function slot supports up to 8 functions (disks), we could add six more disks to it before having to use a new slot, such as slot 8 (assuming 8 is the next available slot).

You can check a guest's config from the virtualization host by using "virsh dumpxml <guest>". This will show you which slots are in use and, therefore, which are available.
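
For instance, to see the slot numbers already present in a guest's XML (the guest name guest01 is a placeholder):

# virsh dumpxml guest01 | grep "slot="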


To add one or more multi-function controllers, you would use "virsh edit <guest>" and then add the appropriate XML entries, modeled after the example above. Remember, the guest must be rebooted before the changes to the XML config take effect.