This blog is for sharing our knowledge and expertise on Linux System Administration and VMware Administration.

Wednesday, October 28, 2015

A Brief Overview of ESXi Log Files and Their Locations

Working with ESXi log files is important when troubleshooting issues within the virtual environment. The most important ESXi log files and their locations are listed below; a few example commands for viewing them follow the list.
  • /var/log/auth.log: ESXi Shell authentication success and failure attempts.
  • /var/log/dhclient.log: DHCP client log.
  • /var/log/esxupdate.log: ESXi patch and update installation logs.
  • /var/log/hostd.log: Host management service logs, including virtual machine and host Task and Events, communication with the vSphere Client and vCenter Server vpxa agent, and SDK connections.
  • /var/log/shell.log: ESXi Shell usage logs, including enable/disable and every command entered.
  • /var/log/boot.gz: A compressed file that contains boot log information and can be read using zcat /var/log/boot.gz | more.
  • /var/log/syslog.log: Management service initialization, watchdogs, scheduled tasks and DCUI use.
  • /var/log/usb.log: USB device arbitration events, such as discovery and pass-through to virtual machines.
  • /var/log/vob.log: VMkernel Observation events, similar to vob.component.event.
  • /var/log/vmkernel.log: Core VMkernel logs, including device discovery, storage and networking device and driver events, and virtual machine startup.
  • /var/log/vmkwarning.log: A summary of Warning and Alert log messages excerpted from the VMkernel logs.
  • /var/log/vmksummary.log: A summary of ESXi host startup and shutdown, and an hourly heartbeat with uptime, number of virtual machines running, and service resource consumption.
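
These files live on the host itself, so once ESXi Shell or SSH access is enabled they can be read directly from the command line. A minimal illustration, assuming you are logged in as root:

    # tail -f /var/log/vmkernel.log          (follow the VMkernel log in real time)
    # more /var/log/vmkwarning.log           (warnings and alerts only)
    # zcat /var/log/boot.gz | more           (read the compressed boot log)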

How to check the ESXi logs via a web browser?

Start your web browser and connect to the host at:

http://IP_of_Your_ESXi/host

That’s it. Hyperlinks to the different log files are shown (you will be prompted for the host's root credentials). If you scroll all the way down you can see vpxa.log, which is the vCenter agent log file. Another important log file is fdm.log (the Fault Domain Manager agent log), which lets you troubleshoot HA problems.


(Screenshot of the /host log file listing: http://buildvirtual.net/wp-content/uploads/2013/09/log_files3.jpg)
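
The same listing can also be pulled from the command line. A rough sketch using curl, assuming the host accepts HTTP basic authentication for the /host page (the host normally redirects to HTTPS, hence -k for the self-signed certificate); you will be prompted for the root password, and the returned HTML contains the links to the individual log files:

    # curl -k -u root https://IP_of_Your_ESXi/host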

How to change the queue depth configuration of QLogic, Emulex and Brocade HBAs for various VMware ESXi/ESX versions

ESXi 6.0

The default Queue Depth value for QLogic HBAs varies between ESXi/ESX versions; consult the VMware documentation for the default that applies to your release.



The default Queue Depth value for Emulex adapters has not changed across the ESXi/ESX versions released to date: it is 32 by default, and because 2 buffers are reserved, 30 are available for I/O data.

The default Queue Depth value for Brocade adapters is 32.

To adjust the queue depth for an HBA:

    Verify which HBA module is currently loaded by entering one of these commands in the ESXi Shell:
        For QLogic:

        # esxcli system module list | grep qln
        For Emulex:

        # esxcli system module list | grep lpfc
        For Brocade:

        # esxcli system module list | grep bfa
    Run one of these commands:

    Note: The examples show the QLogic and Emulex modules. Use the appropriate module based on the outcome of the previous step.
        For QLogic:

        # esxcli system module parameters set -p qlfxmaxqdepth=64 -m qlnativefc

        For Emulex:

        # esxcli system module parameters set -p lpfc0_lun_queue_depth=64 -m lpfc

        For Brocade:

        # esxcli system module parameters set -p bfa_lun_queue_depth=64 -m bfa

    Notes:
        In these commands, both qlfxmaxqdepth and lpfc0 use the lowercase letter L, "l", and not the numeric digit 1.
        In this case, the HBAs have their LUN queue depths set to 64.
        If all Emulex cards on the host must be updated, apply the global parameter, lpfc_lun_queue_depth instead.
    Reboot your host.
    Run this command to confirm that your changes have been applied:
    # esxcli system module parameters list -m driver

    Where driver is your QLogic, Emulex, or Brocade adapter driver module, such as lpfc, qlnativefc, or bfa.

    The output appears similar to:

    Name                        Type  Value  Description
    --------------------------  ----  -----  --------------------------------------------------
    .....
    ql2xmaxqdepth               int   64     Maximum queue depth to report for target devices.
    .....
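
Once the host is back up, you can also check the effective queue depth on each LUN (rather than just the module parameter) from the storage namespace. A quick check, assuming NAA-style device identifiers (the grep pattern is only an illustration):

    # esxcli storage core device list | grep -E "naa\.|Device Max Queue Depth"

The Device Max Queue Depth field should now report the new value for the affected devices.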


ESXi 5.0, 5.1, and 5.5

To adjust the queue depth for an HBA:


    Verify which HBA module is currently loaded by entering one of these commands in the ESXi Shell:
        For QLogic:

        # esxcli system module list | grep qla
        For ESXi 5.5 QLogic native drivers:

        # esxcli system module list | grep qln
        For Emulex:

        # esxcli system module list | grep lpfc
        For Brocade:

        # esxcli system module list | grep bfa
    Run one of these commands:

    Note: The examples show the QLogic qla2xxx and Emulex lpfc820 modules. Use the appropriate module based on the outcome of the previous step.
        For QLogic:

        # esxcli system module parameters set -p ql2xmaxqdepth=64 -m qla2xxx
        For ESXi 5.5 QLogic native drivers:

        # esxcli system module parameters set -p ql2xmaxqdepth=64 -m qlnativefc
        For Emulex:

        # esxcli system module parameters set -p lpfc0_lun_queue_depth=64 -m lpfc820

        For ESXi 5.5 Emulex native drivers:

        # esxcli system module parameters set -p lpfc0_lun_queue_depth=64 -m lpfc
        For Brocade:

        # esxcli system module parameters set -p bfa_lun_queue_depth=64 -m bfa
    Notes:
        In these commands, both ql2xmaxqdepth and lpfc0 use the lowercase letter L, "l", and not the numeric digit 1.
        In this case, the HBAs represented by ql2x and lpfc0 have their LUN queue depths set to 64.
        If all Emulex cards on the host must be updated, apply the global parameter, lpfc_lun_queue_depth instead.
    Reboot your host.
    Run this command to confirm that your changes have been applied:

    # esxcli system module parameters list -m driver

    Where driver is your QLogic, Emulex, or Brocade adapter driver module, such as lpfc820, qla2xxx, or bfa.

    The output appears similar to:

    Name                        Type  Value  Description
    --------------------------  ----  -----  --------------------------------------------------
    .....
    ql2xmaxqdepth               int   64     Maximum queue depth to report for target devices.
    .....


ESXi/ESX 4.x

To adjust the queue depth for an HBA:


    Verify which HBA module is currently loaded by entering one of these commands on the service console:
        For QLogic:

        # vmkload_mod -l | grep qla
        For Emulex:

        # vmkload_mod -l | grep lpfc
        For Brocade:

        # vmkload_mod -l | grep bfa
    Run one of these commands:

    Note: The examples show the QLogic qla2xxx and Emulex lpfc820 modules. Use the appropriate module based on the outcome of the previous step.
        For QLogic:

        # esxcfg-module -s ql2xmaxqdepth=64 qla2xxx
        For Emulex:

        # esxcfg-module -s 'lpfc0_lun_queue_depth=64' lpfc820
        For Brocade:

        # esxcfg-module -s 'bfa_lun_queue_depth=64' bfa

    In this case, the HBAs represented by ql2x and lpfc0 have their LUN queue depths set to 64.

    Note: For multiple instances of Emulex HBAs being presented to a system, use:

    # esxcfg-module -s 'lpfc0_lun_queue_depth=64 lpfc1_lun_queue_depth=64' lpfc820
    Reboot your host.
    Run this command to confirm that your changes have been applied:

    # esxcfg-module -g driver

    Where driver is your QLogic, Emulex, or Brocade adapter driver module, such as lpfc820, qla2xxx, or bfa.

How to get the ESXi hosts with a specific version by using PowerCLI?

Use the PowerCLI command below to list the ESXi hosts running a specific version.

Get-VMHost | Where-Object { $_.Version -eq "4.1.0" } | Select-Object Name, Version
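
A slightly fuller sketch that first connects to vCenter and also returns the build number (the vCenter server name is a placeholder):

    Connect-VIServer -Server vcenter01.example.com
    Get-VMHost |
        Where-Object { $_.Version -eq "4.1.0" } |
        Select-Object Name, Version, Build |
        Sort-Object Name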

Monday, October 19, 2015

How to fix KVM guest clock synchronization?

These are the instructions to fix a KVM guest whose clock jumps ahead a few hours after it is created or started.
The clock will eventually be corrected once ntpd is running, but the server may run on skewed time for up to half an hour, and the skewed time may cause issues with scheduled jobs.
Update the virtual guest's clock setting as shown below; this will prevent the clock on the virtual guest from jumping forward.

From the KVM host server:
# vi /Path/to/server/configuration_file.xml
Replace the line:
<clock offset='utc'/>
with:
<clock offset='localtime'/>
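
If the guest is defined in libvirt, edits made directly to the XML file can be overwritten, so it is safer to make the change through virsh and then restart the guest. A minimal sketch, with guest1 as a placeholder domain name:

    # virsh edit guest1        (change <clock offset='utc'/> to <clock offset='localtime'/>)
    # virsh shutdown guest1
    # virsh start guest1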

Thursday, October 15, 2015

What is a defunct process in Linux?

  • These are also termed zombie processes.
  • They are processes that have completed execution but still have an entry in the process table.
  • When a process ends, all of the memory and resources associated with it are de-allocated so they can be used by other processes; however, the process-table entry remains until the parent reads the child's exit status with wait().
  • After the zombie is removed, its process identifier (PID) and entry in the process table can be reused.
  • Zombies can be identified in the output of the Unix ps command by the presence of a "Z" in the "STAT" column (see the example below).
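
A quick way to list zombie processes (one of several possible approaches):

    # ps -eo pid,ppid,stat,comm | awk '$3 ~ /Z/'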

What are the performance enhancements in GFS2 as compared to GFS?

GFS2 features:
  • Better performance for heavy usage in a single directory.
  • Faster synchronous I/O operations.
  • Faster cached reads (no locking overhead).
  • Faster direct I/O with preallocated files (provided the I/O size is reasonably large, such as 4M blocks).
  • Faster I/O operations in general.
  • Faster execution of the df command, because of faster statfs calls.
  • Improved atime mode to reduce the number of write I/O operations generated by atime.
  Compared with GFS, GFS2 supports the following additional features:
  • Extended file attributes (xattr), the lsattr() and chattr() attribute settings via standard ioctl() calls, and nanosecond timestamps.
  • GFS2 uses less kernel memory.
  • GFS2 requires no metadata generation numbers.
  • Allocating GFS2 metadata does not require reads. Copies of metadata blocks in multiple journals are managed by revoking blocks from the journal before lock release.
  • GFS2 includes a much simpler log manager that knows nothing about unlinked inodes or quota changes.
  • The gfs2_grow and gfs2_jadd commands use locking to prevent multiple instances running at the same time.
  • The ACL code has been simplified for calls like creat() and mkdir().
  • Unlinked inodes, quota changes, and statfs changes are recovered without remounting the journal.

What is a Quorum Disk in cluster?

Quorum Disk is a disk-based quorum daemon, qdiskd, that provides supplemental heuristics to determine node fitness.
With heuristics you can determine factors that are important to the operation of the node in the event of a network partition.


    For a 3-node cluster, quorum is maintained as long as at least 2 of the 3 nodes are active, i.e. more than half. But what if, for some reason, the 2nd node also stops communicating with the 3rd node? In that case, under a normal architecture, the cluster would lose quorum and stop working.

But for mission-critical environments and such scenarios we use a quorum disk: a small additional shared disk (or partition) is configured so that it is visible to all the nodes running the qdiskd service, and a vote value is assigned to it.

    So suppose, in the case above, the quorum disk is assigned 2 votes (one less than the number of nodes, as is commonly recommended). Even after 2 nodes stop communicating with the 3rd node, the surviving partition still holds 3 votes (2 from the quorum disk + 1 from the 3rd node), which is enough to keep quorum out of the 5 expected votes. The two unresponsive nodes are fenced, and the 3rd node stays up and running as a member of the cluster. A sample configuration is sketched below.
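
A minimal sketch of how this can be set up on a Red Hat cluster (the device path, label, gateway IP, and vote count are illustrative placeholders): initialise the shared partition with mkqdisk, then reference it from /etc/cluster/cluster.conf with a heuristic.

    # mkqdisk -c /dev/sdb1 -l myqdisk

    <quorumd label="myqdisk" votes="2" interval="1" tko="10" min_score="1">
        <heuristic program="ping -c1 -w1 192.168.1.1" score="1" interval="2" tko="3"/>
    </quorumd>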