Connecting to a Hypervisor (Unsupported now) virsh connect <name>
Where <name> is the machine name of the hypervisor. If you want to initiate a read-only connection, append the above command with --readonly.
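For example, a read-only connection to a local QEMU/KVM host might look like this (the connection URI is only illustrative; substitute your own hypervisor URI):
virsh connect qemu:///system --readonly
The same URI can also be supplied when launching virsh, e.g. virsh -c qemu:///system.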
Creating a Virtual Machine virsh create <path to XML configuration file>
Configuring an XML Dump virsh dumpxml [domain-id | domain-name | domain-uuid]
This command outputs the domain information (in XML) to stdout. If you save the data to a file, you can use the create option to recreate the virtual machine.
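As a quick sketch (the domain name guest1 and the file name are only examples), you could save the XML and later recreate the guest with:
virsh dumpxml guest1 > guest1.xml
virsh create guest1.xml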
Suspending a Virtual Machine virsh suspend [domain-id | domain-name |domain-uuid]
When a domain is in a suspended state, it still consumes system RAM, but there is no disk or network I/O while suspended. This operation is immediate and the virtual machine must be restarted with the resume option.
Resuming a Virtual Machine virsh resume [domain-id | domain-name | domain-uuid]
This operation is immediate and the virtual machine parameters are preserved in a suspend and resume cycle.
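For example (guest1 is a hypothetical domain name):
virsh suspend guest1
virsh resume guest1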
Saving a Virtual Machine virsh save [domain-name | domain-id | domain-uuid] [filename]
This stops the virtual machine you specify and saves the data to a file, which may take some time depending on the amount of memory in use by your virtual machine. You can restore the state of the virtual machine with the restore option.
Restoring a Virtual Machine virsh restore [filename]
This restarts the saved virtual machine, which may take some time. The virtual machine's name and UUID are preserved, but a new ID is allocated.
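As an example (the domain name and file path are only illustrative):
virsh save guest1 /tmp/guest1.save
virsh restore /tmp/guest1.save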
Shutting Down a Virtual Machine virsh shutdown [domain-id | domain-name | domain-uuid]
You can control the behavior of the virtual machine as it shuts down by modifying the on_shutdown parameter of the xmdomain.cfg file.
Rebooting a Virtual Machine virsh reboot [domain-id | domain-name | domain-uuid]
You can control the behavior of the rebooting virtual machine by modifying the on_reboot parameter of the xmdomain.cfg file.
Terminating a Domain virsh destroy [domain-name | domain-id | domain-uuid]
This command does an immediate, ungraceful shutdown and stops any guest domain sessions (which could potentially lead to corrupted file systems still in use by the virtual machine). You should use the destroy option only when the virtual machine's operating system is non-responsive. For a paravirtualized virtual machine, you should use the shutdown option instead.
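For example (guest1 is a hypothetical domain name), a graceful shutdown versus a forced stop:
virsh shutdown guest1
virsh destroy guest1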
Converting a Domain Name to a Domain ID virsh domid [domain-name | domain-uuid]
Converting a Domain ID to a Domain Name virsh domname [domain-id | domain-uuid]
Converting a Domain Name to a UUID virsh domuuid [domain-id | domain-name]
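For example (the domain name guest1 and ID 13 are only illustrative):
virsh domid guest1
virsh domname 13
virsh domuuid guest1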
Displaying Node Information virsh nodeinfo
The output displays something similar to:
CPU model:           x86_64
CPU(s):              8
CPU frequency:       2895 MHz
CPU socket(s):       2
Core(s) per socket:  2
Thread(s) per core:  2
NUMA cell(s):        1
Memory size:         1046528 kB
This displays the node information and the machines that support the virtualization process.
Displaying the Virtual Machines virsh list [ --inactive | --all]
The --inactive option lists inactive domains (domains that have been defined but are not currently active). The --all option lists all domains, whether active or not. Your output should resemble this example:
ID    Name          State
----------------------------------
 0    Domain0       running
 1    Domain202     paused
 2    Domain010     inactive
 3    Domain9600    crashed
Here are the six domain states:
running lists domains currently active on the CPU
blocked lists domains that are blocked
paused lists domains that are suspended
shutdown lists domains that are in the process of shutting down
shutoff lists domains that are completely down
crashed lists domains that have crashed
Displaying Virtual CPU Information virsh vcpuinfo [domain-id | domain-name | domain-uuid]
Modifying the Memory Allocation virsh setmem [domain-id | domain-name | domain-uuid] [count]
You must specify the [count] in kilobytes. Note that the new count cannot exceed the amount you specified when you created the Virtual Machine. Values lower than 64 MB probably won't work. You can adjust the Virtual Machine memory as necessary.
Modifying the Maximum Memory Limit virsh setmaxmem [domain-id | domain-name | domain-uuid] [count]
You must specify the [count] in kilobytes. Note that the new count cannot exceed the amount you specified when you created the Virtual Machine. Values lower than 64 MB probably won't work. The maximum memory doesn't affect the current use of the Virtual Machine (unless the new value is lower, which should shrink memory usage).
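For example (guest1 and the sizes are only illustrative; counts are in kilobytes, so these set a 1 GB ceiling and a 512 MB allocation):
virsh setmaxmem guest1 1048576
virsh setmem guest1 524288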
BASIC MANAGEMENT OPTIONS
Resource Management Options
setmem : changes the allocated memory.
setmaxmem : changes the maximum memory limit.
setvcpus : changes the number of virtual CPUs.
vcpuinfo : displays domain vcpu information.
vcpupin : controls the domain vcpu affinity.
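As a short sketch (guest1 is a hypothetical domain; the CPU numbers are only illustrative), giving a guest two virtual CPUs and pinning vCPU 0 to physical CPU 4:
virsh setvcpus guest1 2
virsh vcpupin guest1 0 4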
Monitoring and Troubleshooting Options
version : show version
dumpxml : domain information in XML
nodeinfo : node information
virsh command output
The following are example outputs from common virsh commands. The list command:
virsh # list
Id   Name           State
----------------------------------
 0   Domain-0       running
13   r5b2-mySQL01   blocked
The dominfo command:
virsh # dominfo r5b2-mySQL01
Id:             13
Name:           r5b2-mySQL01
UUID:           4a4c59a7-ee3f-c781-96e4-288f2862f011
OS Type:        linux
State:          blocked
CPU(s):         1
CPU time:       11.0s
Max memory:     512000 kB
Used memory:    512000 kB
Working with ESXi log files is important when troubleshooting issues within the virtual environment. The following are some of the important ESXi log files:
/var/log/auth.log: ESXi Shell authentication success and failure attempts.
/var/log/dhclient.log: DHCP client log.
/var/log/esxupdate.log: ESXi patch and update installation logs.
/var/log/hostd.log: Host management service logs, including virtual machine and host Task and Events, communication with the vSphere Client and vCenter Server vpxa agent, and SDK connections.
/var/log/shell.log: ESXi Shell usage logs, including enable/disable and every command entered.
/var/log/boot.gz: A compressed file that contains boot log information and can be read using zcat /var/log/boot.gz|more.
/var/log/syslog.log: Management service initialization, watchdogs, scheduled tasks and DCUI use.
/var/log/usb.log: USB device arbitration events, such as discovery and pass-through to virtual machines.
/var/log/vob.log: VMkernel Observation events, similar to vob.component.event.
/var/log/vmkernel.log: Core VMkernel logs, including device discovery, storage and networking device and driver events, and virtual machine startup.
/var/log/vmkwarning.log: A summary of Warning and Alert log messages excerpted from the VMkernel logs.
/var/log/vmksummary.log: A summary of ESXi host startup and shutdown, and an hourly heartbeat with uptime, number of virtual machines running, and service resource consumption.
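If you have SSH or ESXi Shell access to the host, these files can also be read directly; for example, to follow the VMkernel log in real time (the file chosen is only illustrative):
tail -f /var/log/vmkernel.log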
Start your web browser and connect to the host via:
http://IP_of_Your_ESXi/host
That's it. The hyperlinks to the different log files are shown. If you scroll all the way down, you can see vpxa.log, which is a vCenter agent log file. Another important log file is fdm.log (the Fault Domain Manager agent log), which allows you to troubleshoot HA problems.
This table lists the default Queue Depth values for QLogic HBAs for various ESXi/ESX versions:
The default Queue Depth value for Emulex adapters has not changed across the versions of ESXi/ESX released to date. The Queue Depth is 32 by default, and because 2 buffers are reserved, 30 are available for I/O data.
The default Queue Depth value for Brocade adapters is 32.
ESXi 6.0 and later
To adjust the queue depth for an HBA:
Verify which HBA module is currently loaded by entering one of these commands on the service console:
For QLogic:
# esxcli system module list | grep qln
For Emulex:
# esxcli system module list | grep lpfc
For Brocade:
# esxcli system module list | grep bfa
Run one of these commands:
Note: The examples show the QLogic and Emulex modules. Use the appropriate module based on the outcome of the previous step.
For QLogic:
# esxcli system module parameters set -p qlfxmaxqdepth=64 -m qlnativefc
For Emulex:
# esxcli system module parameters set -p lpfc0_lun_queue_depth=64 -m lpfc
For Brocade:
# esxcli system module parameters set -p bfa_lun_queue_depth=64 -m bfa
Notes: In these commands, both qlfxmaxqdepth and lpfc0 use the lowercase letter L, "l", and not the numeric digit 1. In this case, the HBAs have their LUN queue depths set to 64. If all Emulex cards on the host must be updated, apply the global parameter lpfc_lun_queue_depth instead.
Reboot your host.
Run this command to confirm that your changes have been applied:
# esxcli system module parameters list -m driver
Where driver is your QLogic, Emulex, or Brocade adapter driver module, such as lpfc, qlnativefc, or bfa.
The output appears similar to:
Name                        Type  Value  Description
--------------------------  ----  -----  --------------------------------------------------
.....
ql2xmaxqdepth               int   64     Maximum queue depth to report for target devices.
.....
ESXi 5.0, 5.1, and 5.5
To adjust the queue depth for an HBA:
Verify which HBA module is currently loaded by entering one of these commands on the service console:
For QLogic:
# esxcli system module list | grep qla
For ESXi 5.5 QLogic native drivers:
# esxcli system module list | grep qln
For Emulex:
# esxcli system module list | grep lpfc
For Brocade:
# esxcli system module list | grep bfa
Run one of these commands:
Note: The examples show the QLogic qla2xxx and Emulex lpfc820 modules. Use the appropriate module based on the outcome of the previous step.
For QLogic:
# esxcli system module parameters set -p ql2xmaxqdepth=64 -m qla2xxx
For ESXi 5.5 QLogic native drivers:
# esxcli system module parameters set -p ql2xmaxqdepth=64 -m qlnativefc
For Emulex:
# esxcli system module parameters set -p lpfc0_lun_queue_depth=64 -m lpfc820
For ESXi 5.5 Emulex native drivers:
# esxcli system module parameters set -p lpfc0_lun_queue_depth=64 -m lpfc
For Brocade:
# esxcli system module parameters set -p bfa_lun_queue_depth=64 -m bfa
Notes: In these commands, both ql2xmaxqdepth and lpfc0 use the lowercase letter L, "l", and not the numeric digit 1. In this case, the HBAs represented by ql2x and lpfc0 have their LUN queue depths set to 64. If all Emulex cards on the host must be updated, apply the global parameter lpfc_lun_queue_depth instead.
Reboot your host.
Run this command to confirm that your changes have been applied:
# esxcli system module parameters list -m driver
Where driver is your QLogic, Emulex, or Brocade adapter driver module, such as lpfc820, qla2xxx, or bfa.
The output appears similar to:
Name                        Type  Value  Description
--------------------------  ----  -----  --------------------------------------------------
.....
ql2xmaxqdepth               int   64     Maximum queue depth to report for target devices.
.....
ESXi/ESX 4.x
To adjust the queue depth for an HBA:
Verify which HBA module is currently loaded by entering one of these commands on the service console:
For QLogic:
# vmkload_mod -l | grep qla
For Emulex:
# vmkload_mod -l | grep lpfc
For Brocade:
# vmkload_mod -l | grep bfa
Run one of these commands:
Note: The examples show the QLogic qla2xxx and Emulex lpfc820 modules. Use the appropriate module based on the outcome of the previous step.
For QLogic:
# esxcfg-module -s ql2xmaxqdepth=64 qla2xxx
For Emulex:
# esxcfg-module -s 'lpfc0_lun_queue_depth=64' lpfc820
For Brocade:
# esxcfg-module -s 'bfa_lun_queue_depth=64' bfa
In this case, the HBAs represented by ql2x and lpfc0 have their LUN queue depths set to 64.
Note: For multiple instances of Emulex HBAs being presented to a system, use:
# esxcfg-module -s 'lpfc0_lun_queue_depth=64 lpfc1_lun_queue_depth=64' lpfc820
Reboot your host.
Run this command to confirm that your changes have been applied:
# esxcfg-module -g driver
Where driver is your QLogic, Emulex, or Brocade adapter driver module, such as lpfc820, qla2xxx, or bfa.
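For example, to check the option string currently set for the QLogic driver named above:
# esxcfg-module -g qla2xxx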
These are the instructions to fix a KVM guest whose clock jumps ahead a few hours after it is created or started. The clock will eventually get corrected once ntpd is running, but the server may run for up to half an hour on skewed time, which may cause issues with scheduled jobs. Update the virtual guest's clock setting; this will prevent the clock on the virtual guest from jumping forward.
From the Dom-0 (KVM host server):
# vi /Path/to/server/configuration_file.xml
Replace the line:
<clock offset='utc'/>
with:
<clock offset='localtime'/>
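As an alternative sketch (guest1 is a hypothetical domain name), the same change can be made through libvirt so the stored definition is updated, then the guest restarted for it to take effect:
# virsh edit guest1
Change <clock offset='utc'/> to <clock offset='localtime'/>, save, then restart the guest with virsh shutdown guest1 followed by virsh start guest1.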