This Blog is to share our knowledge and expertise on Linux System Administration and VMware Administration


Friday, December 22, 2017

Install KVM in RHEL7


By default, a RHEL 7 system doesn't come with KVM or libvirt preinstalled. They can be installed in three ways:

Through the graphical setup during the system's setup
Via a kickstart installation
Through a manual installation from the command line


To install KVM, you will require at least 6 GB of free disk space, 2 GB of RAM, and an additional core or thread per guest.

Check whether your CPU supports a virtualization flag (such as SVM or VMX). Some hardware vendors disable this in the BIOS, so you may want to check your BIOS as well. Run the following command:

# grep -E 'svm|vmx' /proc/cpuinfo
flags    : ... svm ...
Check whether the hardware virtualization modules (such as kvm_intel and kvm) are loaded in the kernel using the following command:

# lsmod | grep kvm
kvm_intel             155648  0
kvm                      495616  1 kvm_intel
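If the modules are not loaded, you can load them manually; a minimal example, assuming an Intel CPU (use kvm_amd instead on AMD hardware):

# modprobe kvm_intel     (also pulls in the kvm module as a dependency)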

Manual installation
This way of installing KVM is generally used once the base system has been installed by some other means.

Install the software needed to provide an environment to host virtualized guests with the following command:
# yum -y install qemu-kvm qemu-img libvirt

The installation of these packages will include quite a lot of dependencies.

Install additional utilities required to configure libvirt and install virtual machines by running this command:
# yum -y install virt-install libvirt-python python-virthost libvirt-client

By default, the libvirt daemon is marked to autostart on each boot. Check whether it is enabled by executing the following command:
# systemctl status libvirtd

libvirtd.service - Virtualization daemon
   Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled)
   Active: inactive (dead)
     Docs: man:libvirtd(8)
           http://libvirt.org
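
A quicker way to check just the autostart setting is systemctl's is-enabled query:

# systemctl is-enabled libvirtd
enabled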

If for some reason this is not the case, mark it for autostart by executing the following:
# systemctl enable libvirtd
To manually stop/start/restart the libvirt daemon, this is what you'll need to execute:
# systemctl stop libvirtd
# systemctl start libvirtd
# systemctl restart libvirtd
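
Once libvirtd is running, a simple sanity check is to query the hypervisor through libvirt; both of these are standard virsh commands:

# virsh version
# virsh list --all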

Kickstart installation
Installing KVM during kickstart offers you an easy way to automate the installation of KVM hosts.

Add the following package groups to your kickstart file in the %packages section:
@virtualization-hypervisor
@virtualization-client
@virtualization-platform
@virtualization-tools
Start the installation of your host with this kickstart file.
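
If you want to sanity-check the kickstart file beforehand, you can use the ksvalidator tool from the pykickstart package (the file path below is just an example):

# ksvalidator /root/ks.cfg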

Graphical setup during the system's setup
This is probably the least common way of installing a KVM. The only time I used this was during the course of writing this recipe. Here's how you can do this:

1. Boot from the RHEL 7 installation media.
2. Complete all steps besides the Software Selection step.
3. Go to Software Selection to complete the KVM software selection.
4. Select the Virtualization Host radio button in Base Environment, and check the Virtualization Platform checkbox in Add-Ons for Selected Environment.
5. Finalize the installation: on the Installation Summary screen, complete any remaining steps and click on Begin Installation.
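
Whichever installation method you use, you can afterwards verify that the host is correctly set up for virtualization with the virt-host-validate tool that ships with libvirt; each check should report PASS:

# virt-host-validate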

Monday, December 18, 2017

How to reboot a Xen virtual machine that is in a hung state


If the console is not responding, open it from the host (Dom0) so that magic SysRq keys can be sent to the guest:
#xm console xenvm006

From another terminal on the node, run the following commands one by one:
#xm sysrq xenvm006  h      #shows the available SysRq keys
#xm sysrq xenvm006  m      #shows the current memory usage
#xm sysrq xenvm006  t      #shows the current tasks
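
If the guest kernel still responds to SysRq, you can also try to sync its filesystems and trigger an emergency reboot before resorting to a destroy; these are the standard magic SysRq keys, shown here against the same example domain:

#xm sysrq xenvm006  s      #emergency sync of all mounted filesystems
#xm sysrq xenvm006  b      #immediate reboot of the guest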

In this case, the console opened but gave no useful output, and the VM appeared to be out of resources.
Check xentop; if it shows abnormally high CPU usage (such as 200%), the virtual machine needs to be rebooted.

Open one more terminal and run the following command:

#xm destroy  xenvm006

If the virtual machine is running under a cluster, first disable the VM in the cluster to avoid failover:

#clusvcadm -d vm:xenvm006

Then destroy the virtual machine.

Because the VM is in a hung state, clusvcadm -d won't do a clean shutdown, so we need to destroy it manually.

#xm create xenvm006          (starts the VM)
#clusvcadm -e vm:xenvm006    (re-enables the VM in the cluster)
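
To confirm that the cluster service is back in the expected state, you can check with clustat, which comes from the same cluster toolset as clusvcadm:

#clustat | grep xenvm006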

Thursday, December 14, 2017

Understanding Red Hat Virtualization - Log files


Red Hat Virtualization features the xend daemon and the qemu-dm process, two utilities that write multiple log files to the /var/log/xen/ directory:

xend.log is the log file that contains all the data collected by the xend daemon, whether it is a normal system event or an operator-initiated action. All virtual machine operations (such as create, shutdown, destroy, etc.) appear here. The xend.log is usually the first place to look when you track down event or performance problems, as it contains detailed entries for error messages and their conditions.
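
For example, when tracking down a failed domain start, you can watch xend.log live while reproducing the problem:

# tail -f /var/log/xen/xend.log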

xend-debug.log is the log file that contains records of event errors from xend and the Virtualization subsystems (such as framebuffer, Python scripts, etc.)

xen-hotplug.log is the log file that contains data from hotplug events. If a device or a network script does not come online, the event appears here.

qemu-dm.[PID].log is the log file created by the qemu-dm process for each fully virtualized guest. When using this log file, you must retrieve the given qemu-dm process PID by using the ps command to examine the process arguments and isolate the qemu-dm process belonging to the virtual machine. Note that you must replace the [PID] symbol with the actual PID of the qemu-dm process.
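
For example, to locate the right log, first find the qemu-dm PID for the guest with ps and then open the matching file (the guest name xenvm006 below is just an example):

# ps aux | grep qemu-dm | grep xenvm006
# less /var/log/xen/qemu-dm.<PID>.log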

If you encounter any errors with the Virtual Machine Manager, you can review the generated data in the virt-manager.log file that resides in the .virt-manager directory in your home directory. Note that every time you start the Virtual Machine Manager, it overwrites the existing log file contents. Make sure to back up the virt-manager.log file before you restart the Virtual Machine Manager after a system error.

Monday, November 6, 2017

How to Avoid & Solve Disk Limitation Errors with KVM Guests

There are some limitations specific to the virtio-blk driver that are discussed in this article. Please note that these are not general limitations of KVM; they are relevant only to cases where virtio-blk is used.

Disks under KVM are para-virtualized block devices when used with the virtio-blk driver. All para-virtualized devices (e.g. disk, network, balloon, etc.) are PCI devices. Presently, guests are limited to a maximum of 32 PCI devices. Of the 32, 4 are required by the guest for minimal baseline functionality and are therefore reserved.


When adding a disk to a KVM guest, the default method assigns a separate virtual PCI controller for every disk to allow hot-plug support (i.e. the ability to add/remove disks from a running VM without downtime). Therefore, if no other PCI devices have been assigned, the max number of hot-pluggable disks is 28.


If a guest requires more disks than the available PCI slots allow, there are three possible workarounds.


1. Use PCI pass-through to assign a physical disk controller (i.e. FC HBA, SCSI controller, etc.) to the VM and subsequently use as many devices as that controller supports.
2. Forego the ability to hot-plug and assign the virtual disks using multi-function PCI addressing.
3. Use the virtio-scsi driver, which creates a virtual SCSI HBA that occupies a single PCI address and supports thousands of hot-plug disks (a minimal sketch of this follows the list).
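
As an illustration of option 3, a virtio-scsi HBA is defined with a single controller element in the guest XML, and disks attached to it use bus='scsi' instead of bus='virtio'. This is a minimal sketch; the source device /dev/rootvg/lvtest03 is a made-up example:

<controller type='scsi' index='0' model='virtio-scsi'/>
<disk type='block' device='disk'>
  <driver name='qemu' type='raw'/>
  <source dev='/dev/rootvg/lvtest03'/>
  <target dev='sda' bus='scsi'/>
</disk>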


Here, I have used option 2 to correct this problem.

Multi-function PCI addressing allows up to 8 virtual disks per PCI slot. Therefore, you can have n * 8 possible disks, where n is the number of available PCI slots.

On a system with 28 free PCI slots, you can assign up to 224 virtual disks to that VM. However, as previously stated, you will not be able to add or remove the multi-function disks from the guest without a reboot of the guest.

Any disks assigned without multifunction addressing can, however, continue to use hot-plug.

The XML config below demonstrates how to configure a multi-function PCI controller:

<disk type='block' device='disk'>
  <driver name='qemu' type='raw'/>
  <source dev='/dev/rootvg/lvtest01'/>
  <target dev='vdb' bus='virtio'/>
  <alias name='virtio-disk1'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
</disk>
<disk type='block' device='disk'>
  <driver name='qemu' type='raw'/>
  <source dev='/dev/rootvg/lvtest02'/>
  <target dev='vdc' bus='virtio'/>
  <alias name='virtio-disk2'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
</disk>


In the above, we defined a multi-function address in slot 7 and attached two disks (vdb and vdc) to it, so only one PCI slot (slot 7) is used for both disks. Since a multi-function slot provides eight functions, we could add six more disks to slot 7 before having to move to a new slot (assuming slot 8 is the next available slot).

You can check a guest's config from the virtualization host by using "virsh dumpxml <guest>". This will show you which slots are in use and, therefore, which are available.
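
For example, to see the PCI addresses already assigned in a guest's config (the guest name vmtest01 is just a placeholder):

# virsh dumpxml vmtest01 | grep "address type='pci'"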


To add one or more multi-function disk addresses, you would use "virsh edit <guest>" and then add the appropriate XML entries modeled after the example above. Remember, the guest must be rebooted before the changes to the XML config take effect.

Tuesday, October 17, 2017

When will the "Error: Driver 'pcspkr' is already registered" message appear in a virtual machine?


On virtual machines, if you observe the message "Error: Driver 'pcspkr' is already registered" in the /var/log/messages file, you can get rid of it by adding 'blacklist snd-pcsp' to the /etc/modprobe.d/blacklist.conf file:

#echo 'blacklist snd-pcsp' >> /etc/modprobe.d/blacklist.conf
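
The blacklist entry takes effect the next time the module would be loaded (typically at the next boot). To also remove the already-loaded module from the running system, assuming nothing is currently using the sound device, you can run:

#modprobe -r snd-pcsp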