This blog is where we share our knowledge and expertise on Linux System Administration and VMware Administration

Monday, November 2, 2015

Difference Between RHEL 5, 6, and 7 - System Basics & Basic Configuration




Understanding the Virsh Command in Linux Virtualization

Connecting to a Hypervisor  (Unsupported now)
virsh connect <name>

Where <name> is the machine name of the hypervisor. If you want to initiate a read-only connection, append the above command with --readonly.
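For example, inside an interactive virsh session (the qemu:///system URI is only an illustration; use the URI that matches your hypervisor):

virsh # connect qemu:///system --readonly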

Creating a Virtual Machine
virsh create <path to XML configuration file>

Configuring an XML Dump
virsh dumpxml [domain-id | domain-name | domain-uuid]

This command outputs the domain information (in XML) to stdout. If you save the data to a file, you can use the create option to recreate the virtual machine.
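As a quick sketch (guest1 and the file path below are just placeholders):

virsh dumpxml guest1 > /tmp/guest1.xml
virsh create /tmp/guest1.xml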

Suspending a Virtual Machine
virsh suspend [domain-id | domain-name |domain-uuid]

When a domain is in a suspended state, it still consumes system RAM, but there is no disk or network I/O. This operation is immediate, and the virtual machine must be restarted with the resume option.

Resuming a Virtual Machine
virsh resume [domain-id | domain-name | domain-uuid]

This operation is immediate and the virtual machine parameters are preserved in a suspend and resume cycle.
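A minimal suspend/resume cycle, again using the placeholder domain guest1:

virsh suspend guest1
virsh resume guest1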

Saving a Virtual Machine
virsh save [domain-id | domain-name | domain-uuid] [filename]

This stops the virtual machine you specify and saves its data to a file, which may take some time depending on the amount of memory in use by the virtual machine. You can restore the state of the virtual machine later with the restore option.

Restoring a Virtual Machine
virsh restore [filename]

This restarts the saved virtual machine, which may take some time. The virtual machine's name and UUID are preserved, but it is allocated a new ID.
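For example, with the placeholder domain guest1 and an arbitrary file path:

virsh save guest1 /tmp/guest1.save
virsh restore /tmp/guest1.save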

Shutting Down a Virtual Machine
virsh shutdown [domain-id | domain-name | domain-uuid]

You can control the behavior of the virtual machine as it shuts down by modifying the on_shutdown parameter of the xmdomain.cfg file.

Rebooting a Virtual Machine
virsh reboot [domain-id | domain-name | domain-uuid]

 You can control the behavior of the rebooting virtual machine by modifying the on_reboot parameter of the xmdomain.cfg file.

Terminating a Domain
virsh destroy [domain-name | domain-id | domain-uuid]

This command does an immediate, ungraceful shutdown and stops any guest domain sessions (which could potentially corrupt file systems still in use by the virtual machine). You should use the destroy option only when the virtual machine's operating system is non-responsive. For a paravirtualized virtual machine, use the shutdown option instead.
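A typical sequence, with guest1 as a placeholder name: try a graceful shutdown first, and fall back to destroy only if the guest's OS is hung:

virsh shutdown guest1
# only if the guest never powers off because its OS is unresponsive:
virsh destroy guest1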

Converting a Domain Name to a Domain ID
virsh domid [domain-name | domain-uuid]

Converting a Domain ID to a Domain Name
virsh domname [domain-id | domain-uuid]

Converting a Domain Name to a UUID
virsh domuuid [domain-id | domain-name]
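For example, with a placeholder domain guest1 that currently has ID 13:

virsh domid guest1
virsh domname 13
virsh domuuid guest1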

Displaying Virtual Machine Information
virsh dominfo [domain-id | domain-name | domain-uuid]

Displaying Node Information
virsh nodeinfo

The output displays something similar to:

CPU model:           x86_64
CPU(s):              8
CPU frequency:       2895 MHz
CPU socket(s):       2
Core(s) per socket:  2
Thread(s) per core:  2
NUMA cell(s):        1
Memory size:         1046528 kB

This displays information about the physical node that supports the virtualization process.

Displaying the Virtual Machines
virsh list [--inactive | --all]

The --inactive option lists inactive domains (domains that have been defined but are not currently active).
The --all option lists all domains, whether active or not. Your output should resemble this example:

ID      Name            State
----------------------------------
0       Domain0         running
1       Domain202       paused
2       Domain010       inactive
3       Domain9600      crashed

Here are the six domain states:

running           the domain is currently active on a CPU
blocked           the domain is blocked (not running or runnable, typically waiting on I/O or sleeping)
paused            the domain has been suspended
shutdown          the domain is in the process of shutting down
shutoff           the domain is completely down
crashed           the domain has crashed

Displaying Virtual CPU Information
virsh vcpuinfo [domain-id | domain-name | domain-uuid]

Configuring Virtual CPU Affinity
virsh vcpupin [domain-id | domain-name | domain-uuid] [vcpu] [cpulist]

Where [vcpu] is the virtual CPU number and [cpulist] is a list of physical CPU numbers to bind it to.
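For example, to pin virtual CPU 0 of the placeholder domain guest1 to physical CPUs 0 and 1:

virsh vcpupin guest1 0 0,1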

Configuring Virtual CPU Count
virsh setvcpus [domain-name | domain-id | domain-uuid] [count]

Note that the new count cannot exceed the number of virtual CPUs you specified when you created the virtual machine.
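For example, to set the placeholder domain guest1 to two virtual CPUs:

virsh setvcpus guest1 2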

Configuring Memory Allocation
virsh setmem [domain-id | domain-name] [count]

You must specify the [count] in kilobytes. Note that the new count cannot exceed the amount you specified when you created the virtual machine. Values lower than 64 MB probably won't work. You can adjust the virtual machine memory as necessary.

Configuring Maximum Memory
virsh setmaxmem  [domain-name | domain-id | domain-uuid] [count]

You must specify the [count] in kilobytes. Note that the new count cannot exceed the amount you specified when you created the virtual machine. Values lower than 64 MB probably won't work. The maximum memory does not affect the current use of the virtual machine (unless the new value is lower, in which case memory usage should shrink).
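As a quick illustration with the placeholder domain guest1, remembering that the counts are in kilobytes (1 GB = 1048576 kB, 512 MB = 524288 kB):

virsh setmaxmem guest1 1048576
virsh setmem guest1 524288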

BASIC MANAGEMENT OPTIONS


Resource Management Options

setmem      : changes the allocated memory
setmaxmem   : changes the maximum memory limit
setvcpus    : changes the number of virtual CPUs
vcpuinfo    : displays domain vCPU information
vcpupin     : controls domain vCPU affinity

Monitoring and Troubleshooting Options

version     : shows the version
dumpxml     : outputs domain information in XML
nodeinfo    : outputs node information

virsh command output

The following are example outputs from common virsh commands:
the list command:
virsh # list

Id  Name                 State
----------------------------------
0   Domain-0             running
13  r5b2-mySQL01         blocked

the dominfo domain command:
virsh # dominfo r5b2-mySQL01

Id:             13
Name:           r5b2-mySQL01
UUID:           4a4c59a7-ee3f-c781-96e4-288f2862f011
OS Type:        linux
State:          blocked
CPU(s):         1
CPU time:       11.0s
Max memory:     512000 kB
Used memory:    512000 kB

the domstate domain command:

virsh # domstate r5b2-mySQL01
blocked

the domuuid domain command:

virsh # domuuid r5b2-mySQL01
4a4c59a7-ee3f-c781-96e4-288f2862f011

the vcpuinfo domain command:

virsh # vcpuinfo r5b2-mySQL01
VCPU:           0
CPU:            0
State:          blocked
CPU time:       0.0s
CPU Affinity:   yy

the dumpxml domain command:

virsh # dumpxml r5b2-mySQL01
<domain type='xen' id='13'>
  <name>r5b2-mySQL01</name>
  <uuid>4a4c59a7ee3fc78196e4288f2862f011</uuid>
  <bootloader>/usr/bin/pygrub</bootloader>
  <os>
    <type>linux</type>
    <kernel>/var/lib/xen/vmlinuz.2dgnU_</kernel>
    <initrd>/var/lib/xen/initrd.UQafMw</initrd>
    <cmdline>ro root=/dev/VolGroup00/LogVol00 rhgb quiet</cmdline>
  </os>
  <memory>512000</memory>
  <vcpu>1</vcpu>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <interface type='bridge'>
      <source bridge='xenbr0'/>
      <mac address='00:16:3e:49:1d:11'/>
      <script path='vif-bridge'/>
    </interface>
    <graphics type='vnc' port='5900'/>
    <console tty='/dev/pts/4'/>
  </devices>
</domain>

the version command:

virsh # version
Compiled against library: libvir 0.1.7
Using library: libvir 0.1.7
Using API: Xen 3.0.1
Running hypervisor: Xen 3.0.0

Wednesday, October 28, 2015

A Brief Look at ESXi Log Files and Locations

Working with ESX(i) log files is important when troubleshooting issues within the virtual environment. Here are a few of the important ESXi log files (a quick example of following one of them live appears after the list).
  • /var/log/auth.log: ESXi Shell authentication success and failure attempts.
  • /var/log/dhclient.log: DHCP client log.
  • /var/log/esxupdate.log: ESXi patch and update installation logs.
  • /var/log/hostd.log: Host management service logs, including virtual machine and host Task and Events, communication with the vSphere Client and vCenter Server vpxa agent, and SDK connections.
  • /var/log/shell.log: ESXi Shell usage logs, including enable/disable and every command entered.
  • /var/log/boot.gz: A compressed file that contains boot log information and can be read using zcat /var/log/boot.gz|more.
  • /var/log/syslog.log: Management service initialization, watchdogs, scheduled tasks and DCUI use.
  • /var/log/usb.log: USB device arbitration events, such as discovery and pass-through to virtual machines.
  • /var/log/vob.log: VMkernel Observation events, similar to vob.component.event.
  • /var/log/vmkernel.log: Core VMkernel logs, including device discovery, storage and networking device and driver events, and virtual machine startup.
  • /var/log/vmkwarning.log: A summary of Warning and Alert log messages excerpted from the VMkernel logs.
  • /var/log/vmksummary.log: A summary of ESXi host startup and shutdown, and an hourly heartbeat with uptime, number of virtual machines running, and service resource consumption.
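As a quick example, from the ESXi Shell you can follow one of these logs live while reproducing an issue (vmkernel.log is just one choice):

tail -f /var/log/vmkernel.log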

How to check the ESXi logs via Web browser?

Start your web browser and connect to the host via:

http://IP_of_Your_ESXi/host

That's it. Hyperlinks to the different log files are shown. If you scroll all the way down you can see vpxa.log, which is the vCenter agent log file. Another important log file is fdm.log (the Fault Domain Manager agent log), which helps you troubleshoot HA problems.


(Screenshot of the log file listing in a web browser: http://buildvirtual.net/wp-content/uploads/2013/09/log_files3.jpg)

How to change the queue depth configuration of QLogic, Emulex, and Brocade HBAs for various VMware ESXi/ESX versions

ESXi 6.0

The default Queue Depth values for QLogic HBAs vary by ESXi/ESX version (the original table of per-version values is not reproduced here).

The default Queue Depth value for Emulex adapters has not changed for all versions of ESXi/ESX released to date. The Queue Depth is 32 by default, and because 2 buffers are reserved, 30 are available for I/O data.

The default Queue Depth value for Brocade adapters is 32.
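As a related check (an addition here, not part of the original procedure), you can view the queue depth currently in effect for each device on ESXi 5.x/6.x with:

esxcli storage core device list

and look for the Device Max Queue Depth field in the output.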

To adjust the queue depth for an HBA:

    1. Verify which HBA module is currently loaded by entering one of these commands in the ESXi Shell (or over SSH):
        For QLogic:

        # esxcli system module list | grep qln
        For Emulex:

        # esxcli system module list | grep lpfc
        For Brocade:

        # esxcli system module list | grep bfa
    2. Run one of these commands:

    Note: The examples show the QLogic and Emulex modules. Use the appropriate module based on the outcome of the previous step.
        For QLogic:

        # esxcli system module parameters set -p qlfxmaxqdepth=64 -m qlnativefc

        For Emulex:

        # esxcli system module parameters set -p lpfc0_lun_queue_depth=64 -m lpfc

        For Brocade:

        # esxcli system module parameters set -p bfa_lun_queue_depth=64 -m bfa

    Notes:
        In these commands, both qlfxmaxqdepth and lpfc0 use the lowercase letter L, "l", and not the numeric digit 1.
        In this case, the HBAs have their LUN queue depths set to 64.
        If all Emulex cards on the host must be updated, apply the global parameter, lpfc_lun_queue_depth instead.
    3. Reboot your host.
    4. Run this command to confirm that your changes have been applied:
    # esxcli system module parameters list -m driver

    Where driver is your QLogic, Emulex, or Brocade adapter driver module, such as lpfc, qlnativefc, or bfa.

    The output appears similar to:

    Name                        Type  Value  Description
    --------------------------  ----  -----  --------------------------------------------------
    .....
    ql2xmaxqdepth               int   64     Maximum queue depth to report for target devices.
    .....


ESXi 5.0, 5.1, and 5.5

To adjust the queue depth for an HBA:


    1. Verify which HBA module is currently loaded by entering one of these commands in the ESXi Shell (or over SSH):
        For QLogic:

        # esxcli system module list | grep qla
        For ESXi 5.5 QLogic native drivers:

        # esxcli system module list | grep qln
        For Emulex:

        # esxcli system module list | grep lpfc
        For Brocade:

        # esxcli system module list | grep bfa
    2. Run one of these commands:

    Note: The examples show the QLogic qla2xxx and Emulex lpfc820 modules. Use the appropriate module based on the outcome of the previous step.
        For QLogic:

        # esxcli system module parameters set -p ql2xmaxqdepth=64 -m qla2xxx
        For ESXi 5.5 QLogic native drivers:

        # esxcli system module parameters set -p ql2xmaxqdepth=64 -m qlnativefc
        For Emulex:

        # esxcli system module parameters set -p lpfc0_lun_queue_depth=64 -m lpfc820

        For ESXi 5.5 Emulex native drivers:

        # esxcli system module parameters set -p lpfc0_lun_queue_depth=64 -m lpfc
        For Brocade:

        # esxcli system module parameters set -p bfa_lun_queue_depth=64 -m bfa
    Notes:
        In these commands, both ql2xmaxqdepth and lpfc0 use the lowercase letter L, "l", and not the numeric digit 1.
        In this case, the HBAs represented by ql2x and lpfc0 have their LUN queue depths set to 64.
        If all Emulex cards on the host must be updated, apply the global parameter, lpfc_lun_queue_depth instead.
    3. Reboot your host.
    4. Run this command to confirm that your changes have been applied:

    # esxcli system module parameters list -m driver

    Where driver is your QLogic, Emulex, or Brocade adapter driver module, such as lpfc820, qla2xxx, or bfa.

    The output appears similar to:

    Name                        Type  Value  Description
    --------------------------  ----  -----  --------------------------------------------------
    .....
    ql2xmaxqdepth               int   64     Maximum queue depth to report for target devices.
    .....


ESXi/ESX 4.x

To adjust the queue depth for an HBA:


    1. Verify which HBA module is currently loaded by entering one of these commands on the service console:
        For QLogic:

        # vmkload_mod -l | grep qla
        For Emulex:

        # vmkload_mod -l | grep lpfc
        For Brocade:

        # vmkload_mod -l | grep bfa
    2. Run one of these commands:

    Note: The examples show the QLogic qla2xxx and Emulex lpfc820 modules. Use the appropriate module based on the outcome of the previous step.
        For QLogic:

        # esxcfg-module -s ql2xmaxqdepth=64 qla2xxx
        For Emulex:

        # esxcfg-module -s 'lpfc0_lun_queue_depth=64' lpfc820
        For Brocade:

        # esxcfg-module -s 'bfa_lun_queue_depth=64' bfa

    In this case, the HBAs represented by ql2x and lpfc0 have their LUN queue depths set to 64.

    Note: For multiple instances of Emulex HBAs being presented to a system, use:

    # esxcfg-module -s 'lpfc0_lun_queue_depth=64 lpfc1_lun_queue_depth=64' lpfc820
    3. Reboot your host.
    4. Run this command to confirm that your changes have been applied:

    # esxcfg-module -g driver

    Where driver is your QLogic, Emulex, or Brocade adapter driver module, such as lpfc820, qla2xxx, or bfa.

How to Get ESXi Hosts with a Specific Version Using PowerCLI?

Use the PowerCLI command below to list the ESXi hosts that are running a specific version (4.1.0 in this example):

get-vmhost | where-object { $_.version -eq "4.1.0" } | select name,version

Monday, October 19, 2015

How to do KVM Clock Sync?

These are instructions to fix a KVM guest whose clock has jumped ahead a few hours after it is created or started.
The clock will eventually be corrected once ntpd is running, but the server may run on skewed time for up to half an hour, and that skew can cause issues with scheduled jobs.
Update the virtual guest's clock setting as shown below to prevent the clock from jumping forward.

On the KVM host server:
# vi /Path/to/server/configuration_file.xml
replace line:
<clock offset='utc'/>
with:
<clock offset='localtime'/>
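Alternatively (an option added here for libvirt-managed guests), make the same change through libvirt so it is stored in the domain definition (guestname is a placeholder):

virsh edit guestname
# then change <clock offset='utc'/> to <clock offset='localtime'/> and save

Restart the guest for the new clock setting to take effect.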

Thursday, October 15, 2015

What is a defunct process in Linux?

  • These are also termed zombie processes.
  • These are processes that have completed execution but still have an entry in the process table.
  • When a process ends, all of the memory and resources associated with it are de-allocated so they can be used by other processes.
  • The process table entry remains until the parent process reads the child's exit status with wait(); this is known as "reaping" the zombie.
  • After the zombie is reaped, its process identifier (PID) and entry in the process table can be reused.
  • Zombies can be identified in the output of the Unix ps command by the presence of a "Z" in the "STAT" column (see the example below).
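A quick way to spot them from the shell (a generic example, not tied to any particular distribution):

ps -eo pid,ppid,stat,comm | awk '$3 ~ /^Z/'
# prints the PID, parent PID, state, and command of any defunct (zombie) processes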