
Thursday, September 7, 2017

How to create Software RAID & How to replace a failed disk?


Now let's create our RAID arrays /dev/md0, /dev/md1, and /dev/md2.
/dev/sdb1 will be added to /dev/md0,
/dev/sdb2 to /dev/md1, and /dev/sdb3 to /dev/md2.

/dev/sda1, /dev/sda2, and /dev/sda3 can't be added right now (because the system is currently running on them), therefore we use the placeholder missing in the following three commands:

mdadm --create /dev/md0 --level=1 --raid-disks=2 missing /dev/sdb1

mdadm --create /dev/md1 --level=1 --raid-disks=2 missing /dev/sdb2

mdadm --create /dev/md2 --level=1 --raid-disks=2 missing /dev/sdb3
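

If you want to double-check what was just created, mdadm can print the details of each array (a quick sanity check, not a required step):

mdadm --detail /dev/md0

mdadm --detail /dev/md1

mdadm --detail /dev/md2

Each array should report a degraded state and list one "removed" slot for the missing disk.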



The command

cat /proc/mdstat

should now show that you have three degraded RAID arrays ([_U] or [U_] means that an array is degraded while [UU] means that the array is ok):

server1:~# cat /proc/mdstat

Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]

md2 : active raid1 sdb3[1]

      4594496 blocks [2/1] [_U]



md1 : active raid1 sdb2[1]

      497920 blocks [2/1] [_U]



md0 : active raid1 sdb1[1]

      144448 blocks [2/1] [_U]



unused devices: <none>



Next we create filesystems on our RAID arrays (ext3 on /dev/md0 and /dev/md2 and swap on /dev/md1):

mkfs.ext3 /dev/md0

mkswap /dev/md1

mkfs.ext3 /dev/md2

Next we must adjust /etc/mdadm/mdadm.conf (which doesn't contain any information about our new RAID arrays yet) to the new situation:

cp /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf_orig

mdadm --examine --scan >> /etc/mdadm/mdadm.conf



At the bottom of the file you should now see details about our three (degraded) RAID arrays:

# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# This file was auto-generated on Mon, 26 Nov 2007 21:22:04 +0100
# by mkconf $Id: mkconf 261 2006-11-09 13:32:35Z madduck $
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=72d23d35:35d103e3:01b5209e:be9ff10a
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=a50c4299:9e19f9e4:01b5209e:be9ff10a
ARRAY /dev/md2 level=raid1 num-devices=2 UUID=99fee3a5:ae381162:01b5209e:be9ff10a



Now on to the GRUB boot loader. Open /boot/grub/menu.lst and add fallback 1 right after default 0:

vi /boot/grub/menu.lst

[...]
default         0
fallback        1
[...]





This ensures that if the first kernel (counting starts with 0, so the first kernel is 0) fails to boot, the second kernel will be booted instead.

In the same file, go to the bottom, where you should find some kernel stanzas. Copy the first of them and paste the copy before the first existing stanza; in the copy, replace root=/dev/sda3 with root=/dev/md2 and root (hd0,0) with root (hd1,0):



[...]
## ## End Default Options ##

title           Debian GNU/Linux, kernel 2.6.18-4-486 RAID (hd1)
root            (hd1,0)
kernel          /vmlinuz-2.6.18-4-486 root=/dev/md2 ro
initrd          /initrd.img-2.6.18-4-486
savedefault

title           Debian GNU/Linux, kernel 2.6.18-4-486
root            (hd0,0)
kernel          /vmlinuz-2.6.18-4-486 root=/dev/sda3 ro
initrd          /initrd.img-2.6.18-4-486
savedefault

title           Debian GNU/Linux, kernel 2.6.18-4-486 (single-user mode)
root            (hd0,0)
kernel          /vmlinuz-2.6.18-4-486 root=/dev/sda3 ro single
initrd          /initrd.img-2.6.18-4-486
savedefault

### END DEBIAN AUTOMAGIC KERNELS LIST



root (hd1,0) refers to /dev/sdb, which is already part of our RAID arrays. We will reboot the system in a few moments; the system will then try to boot from our (still degraded) RAID arrays; if that fails, it will boot from /dev/sda (-> fallback 1).

Next we adjust our ramdisk to the new situation:

update-initramfs -u
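
The copy in the next step assumes that /dev/md0 and /dev/md2 are already mounted on /mnt/md0 and /mnt/md2. If you haven't mounted them yet, create the mount points and mount the arrays first:

mkdir /mnt/md0 /mnt/md2

mount /dev/md0 /mnt/md0

mount /dev/md2 /mnt/md2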

Now we copy the contents of /dev/sda1 and /dev/sda3 to /dev/md0 and /dev/md2 (which are mounted on /mnt/md0 and /mnt/md2):

cp -dpRx / /mnt/md2

cd /boot

cp -dpRx . /mnt/md0
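
Note that /etc/fstab on the copied system must also point at the RAID devices instead of /dev/sda1, /dev/sda2, and /dev/sda3, otherwise the system will keep mounting the old partitions after the reboot. Assuming the layout used in this guide, the relevant entries would look roughly like this:

/dev/md0        /boot   ext3    defaults        0       2
/dev/md1        none    swap    sw              0       0
/dev/md2        /       ext3    defaults,errors=remount-ro      0       1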



Preparing GRUB (Part 1)

Afterwards we must install the GRUB bootloader on the second hard drive /dev/sdb:

grub

On the GRUB shell, type in the following commands:

root (hd0,0)

grub> root (hd0,0)

 Filesystem type is ext2fs, partition type 0x83

grub>

setup (hd0)

grub> setup (hd0)

 Checking if "/boot/grub/stage1" exists... no

 Checking if "/grub/stage1" exists... yes

 Checking if "/grub/stage2" exists... yes

 Checking if "/grub/e2fs_stage1_5" exists... yes

 Running "embed /grub/e2fs_stage1_5 (hd0)"...  15 sectors are embedded.

succeeded

 Running "install /grub/stage1 (hd0) (hd0)1+15 p (hd0,0)/grub/stage2 /grub/menu.lst"... succeeded

Done.



grub>

root (hd1,0)

grub> root (hd1,0)

 Filesystem type is ext2fs, partition type 0xfd



grub>

setup (hd1)

grub> setup (hd1)

 Checking if "/boot/grub/stage1" exists... no

 Checking if "/grub/stage1" exists... yes

 Checking if "/grub/stage2" exists... yes

 Checking if "/grub/e2fs_stage1_5" exists... yes

 Running "embed /grub/e2fs_stage1_5 (hd1)"...  15 sectors are embedded.

succeeded

 Running "install /grub/stage1 (hd1) (hd1)1+15 p (hd1,0)/grub/stage2 /grub/menu.lst"... succeeded

Done.



grub>

quit

Now, back on the normal shell, we reboot the system and hope that it boots ok from our RAID arrays:

reboot



Preparing /dev/sda

If all goes well, you should now find /dev/md0 and /dev/md2 in the output of

df -h

server1:~# df -h

Filesystem            Size  Used Avail Use% Mounted on

/dev/md2              4.4G  730M  3.4G  18% /

tmpfs                 126M     0  126M   0% /lib/init/rw

udev                   10M   68K   10M   1% /dev

tmpfs                 126M     0  126M   0% /dev/shm

/dev/md0              137M   17M  114M  13% /boot



The output of

cat /proc/mdstat

should be as follows:

server1:~# cat /proc/mdstat

Personalities : [raid1]

md2 : active raid1 sdb3[1]

      4594496 blocks [2/1] [_U]



md1 : active raid1 sdb2[1]

      497920 blocks [2/1] [_U]



md0 : active raid1 sdb1[1]

      144448 blocks [2/1] [_U]



unused devices: <none>

server1:~#

Now we must change the partition types of our three partitions on /dev/sda to Linux raid autodetect as well:



fdisk /dev/sda

server1:~# fdisk /dev/sda



Command (m for help): <-- t

Partition number (1-4): <-- 1

Hex code (type L to list codes): <-- fd

Changed system type of partition 1 to fd (Linux raid autodetect)



Command (m for help): <-- t

Partition number (1-4): <-- 2

Hex code (type L to list codes): <-- fd

Changed system type of partition 2 to fd (Linux raid autodetect)



Command (m for help): <-- t

Partition number (1-4): <-- 3

Hex code (type L to list codes): <-- fd

Changed system type of partition 3 to fd (Linux raid autodetect)



Command (m for help): <-- w

The partition table has been altered!



Calling ioctl() to re-read partition table.



WARNING: Re-reading the partition table failed with error 16: Device or resource busy.

The kernel still uses the old table.

The new table will be used at the next reboot.

Syncing disks.

server1:~#
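
The warning about error 16 is harmless here: the partition boundaries did not change, only the type field. If you want the kernel to pick up the change without a reboot and the parted package is installed, you can force a re-read:

partprobe /dev/sda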

Now we can add /dev/sda1, /dev/sda2, and /dev/sda3 to the respective RAID arrays:

mdadm --add /dev/md0 /dev/sda1

mdadm --add /dev/md1 /dev/sda2

mdadm --add /dev/md2 /dev/sda3

Now take a look at

cat /proc/mdstat

... and you should see that the RAID arrays are being synchronized:

server1:~# cat /proc/mdstat

Personalities : [raid1]

md2 : active raid1 sda3[2] sdb3[1]

      4594496 blocks [2/1] [_U]

      [=====>...............]  recovery = 29.7% (1367040/4594496) finish=0.6min speed=85440K/sec



md1 : active raid1 sda2[0] sdb2[1]

      497920 blocks [2/2] [UU]



md0 : active raid1 sda1[0] sdb1[1]

      144448 blocks [2/2] [UU]



unused devices: <none>

server1:~#

(You can run

watch cat /proc/mdstat

to get an ongoing output of the process. To leave watch, press CTRL+C.)

Wait until the synchronization has finished. The output should then look like this:

server1:~# cat /proc/mdstat

Personalities : [raid1]

md2 : active raid1 sda3[0] sdb3[1]

      4594496 blocks [2/2] [UU]



md1 : active raid1 sda2[0] sdb2[1]

      497920 blocks [2/2] [UU]



md0 : active raid1 sda1[0] sdb1[1]

      144448 blocks [2/2] [UU]



unused devices: <none>

server1:~#


Then adjust /etc/mdadm/mdadm.conf to the new situation:

cp /etc/mdadm/mdadm.conf_orig /etc/mdadm/mdadm.conf

mdadm --examine --scan >> /etc/mdadm/mdadm.conf

/etc/mdadm/mdadm.conf should now look something like this:



cat /etc/mdadm/mdadm.conf

# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# This file was auto-generated on Mon, 26 Nov 2007 21:22:04 +0100
# by mkconf $Id: mkconf 261 2006-11-09 13:32:35Z madduck $
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=72d23d35:35d103e3:2b3d68b9:a903a704
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=a50c4299:9e19f9e4:2b3d68b9:a903a704
ARRAY /dev/md2 level=raid1 num-devices=2 UUID=99fee3a5:ae381162:2b3d68b9:a903a704



Preparing GRUB (Part 2)

We are almost done now. We must modify /boot/grub/menu.lst again. Right now it is configured to boot from /dev/sdb (hd1,0). Of course, we still want the system to be able to boot in case /dev/sdb fails. Therefore we copy the first kernel stanza (which contains hd1), paste it below, and replace hd1 with hd0. Furthermore, we comment out all other kernel stanzas so that it looks as follows:

vi /boot/grub/menu.lst

[...]
## ## End Default Options ##

title           Debian GNU/Linux, kernel 2.6.18-4-486 RAID (hd1)
root            (hd1,0)
kernel          /vmlinuz-2.6.18-4-486 root=/dev/md2 ro
initrd          /initrd.img-2.6.18-4-486
savedefault

title           Debian GNU/Linux, kernel 2.6.18-4-486 RAID (hd0)
root            (hd0,0)
kernel          /vmlinuz-2.6.18-4-486 root=/dev/md2 ro
initrd          /initrd.img-2.6.18-4-486
savedefault

#title          Debian GNU/Linux, kernel 2.6.18-4-486
#root           (hd0,0)
#kernel         /vmlinuz-2.6.18-4-486 root=/dev/sda3 ro
#initrd         /initrd.img-2.6.18-4-486
#savedefault

#title          Debian GNU/Linux, kernel 2.6.18-4-486 (single-user mode)
#root           (hd0,0)
#kernel         /vmlinuz-2.6.18-4-486 root=/dev/sda3 ro single
#initrd         /initrd.img-2.6.18-4-486
#savedefault

### END DEBIAN AUTOMAGIC KERNELS LIST

In the same file, there's a kopt line; replace /dev/sda3 with /dev/md2 (don't remove the # at the beginning of the line!):

[...]
# kopt=root=/dev/md2 ro
[...]



Afterwards, update your ramdisk:

update-initramfs -u

... and reboot the system:

reboot

Testing

Now let's simulate a hard drive failure. It doesn't matter if you select /dev/sda or /dev/sdb here. In this example I assume that /dev/sdb has failed.

To simulate the hard drive failure, you can either shut down the system and remove /dev/sdb from the system, or you (soft-)remove it like this:

mdadm --manage /dev/md0 --fail /dev/sdb1

mdadm --manage /dev/md1 --fail /dev/sdb2

mdadm --manage /dev/md2 --fail /dev/sdb3

mdadm --manage /dev/md0 --remove /dev/sdb1

mdadm --manage /dev/md1 --remove /dev/sdb2

mdadm --manage /dev/md2 --remove /dev/sdb3
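
(If you prefer, mdadm lets you fail and remove a device in a single invocation, for example:

mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1

The six separate commands above achieve the same result.)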

Shut down the system:

shutdown -h now

Then put in a new /dev/sdb drive (if you simulate a failure of /dev/sda, you should now put /dev/sdb in /dev/sda's place and connect the new HDD as /dev/sdb!) and boot the system. It should still start without problems.

Now run

cat /proc/mdstat

and you should see that we have a degraded array:

server1:~# cat /proc/mdstat

Personalities : [raid1]

md2 : active raid1 sda3[0]

      4594496 blocks [2/1] [U_]



md1 : active raid1 sda2[0]

      497920 blocks [2/1] [U_]



md0 : active raid1 sda1[0]

      144448 blocks [2/1] [U_]



unused devices: <none>

server1:~#

The output of

fdisk -l

should look as follows:

server1:~# fdisk -l



Disk /dev/sda: 5368 MB, 5368709120 bytes

255 heads, 63 sectors/track, 652 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes



   Device Boot      Start         End      Blocks   Id  System

/dev/sda1   *           1          18      144553+  fd  Linux raid autodetect

/dev/sda2              19          80      498015   fd  Linux raid autodetect

/dev/sda3              81         652     4594590   fd  Linux raid autodetect



Disk /dev/sdb: 5368 MB, 5368709120 bytes

255 heads, 63 sectors/track, 652 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes



Disk /dev/sdb doesn't contain a valid partition table



Disk /dev/md0: 147 MB, 147914752 bytes

2 heads, 4 sectors/track, 36112 cylinders

Units = cylinders of 8 * 512 = 4096 bytes



Disk /dev/md0 doesn't contain a valid partition table



Disk /dev/md1: 509 MB, 509870080 bytes

2 heads, 4 sectors/track, 124480 cylinders

Units = cylinders of 8 * 512 = 4096 bytes



Disk /dev/md1 doesn't contain a valid partition table



Disk /dev/md2: 4704 MB, 4704763904 bytes

2 heads, 4 sectors/track, 1148624 cylinders

Units = cylinders of 8 * 512 = 4096 bytes



Disk /dev/md2 doesn't contain a valid partition table

server1:~#

Now we copy the partition table of /dev/sda to /dev/sdb:

sfdisk -d /dev/sda | sfdisk /dev/sdb

(If you get an error, you can try the --force option:

sfdisk -d /dev/sda | sfdisk --force /dev/sdb

)

server1:~# sfdisk -d /dev/sda | sfdisk /dev/sdb

Checking that no-one is using this disk right now ...

OK



Disk /dev/sdb: 652 cylinders, 255 heads, 63 sectors/track



sfdisk: ERROR: sector 0 does not have an msdos signature

 /dev/sdb: unrecognized partition table type

Old situation:

No partitions found

New situation:

Units = sectors of 512 bytes, counting from 0



   Device Boot    Start       End   #sectors  Id  System

/dev/sdb1   *        63    289169     289107  fd  Linux raid autodetect

/dev/sdb2        289170   1285199     996030  fd  Linux raid autodetect

/dev/sdb3       1285200  10474379    9189180  fd  Linux raid autodetect

/dev/sdb4             0         -          0   0  Empty

Successfully wrote the new partition table



Re-reading the partition table ...



If you created or changed a DOS partition, /dev/foo7, say, then use dd(1)

to zero the first 512 bytes:  dd if=/dev/zero of=/dev/foo7 bs=512 count=1

(See fdisk(8).)

server1:~#
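
Note that the sfdisk dump format used above applies to MBR (msdos) partition tables, as used in this setup. If your disks carried GPT partition tables instead, the equivalent copy could be done with sgdisk from the gdisk package, for example:

sgdisk -R /dev/sdb /dev/sda

sgdisk -G /dev/sdb

(The second command gives /dev/sdb fresh random GUIDs so it does not clash with /dev/sda.)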

Afterwards we remove any remains of a previous RAID array from /dev/sdb...

mdadm --zero-superblock /dev/sdb1

mdadm --zero-superblock /dev/sdb2

mdadm --zero-superblock /dev/sdb3

... and add /dev/sdb to the RAID array:

mdadm -a /dev/md0 /dev/sdb1

mdadm -a /dev/md1 /dev/sdb2

mdadm -a /dev/md2 /dev/sdb3

Now take a look at

cat /proc/mdstat

server1:~# cat /proc/mdstat

Personalities : [raid1]

md2 : active raid1 sdb3[2] sda3[0]

      4594496 blocks [2/1] [U_]

      [======>..............]  recovery = 30.8% (1416256/4594496) finish=0.6min speed=83309K/sec



md1 : active raid1 sdb2[1] sda2[0]

      497920 blocks [2/2] [UU]



md0 : active raid1 sdb1[1] sda1[0]

      144448 blocks [2/2] [UU]



unused devices: <none>

server1:~#

Wait until the synchronization has finished:

server1:~# cat /proc/mdstat

Personalities : [raid1]

md2 : active raid1 sdb3[1] sda3[0]

      4594496 blocks [2/2] [UU]



md1 : active raid1 sdb2[1] sda2[0]

      497920 blocks [2/2] [UU]



md0 : active raid1 sdb1[1] sda1[0]

      144448 blocks [2/2] [UU]



unused devices: <none>

server1:~#

Then run

grub

and install the bootloader on both HDDs:

root (hd0,0)

setup (hd0)

root (hd1,0)

setup (hd1)

quit
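
(On most distributions the same result can be achieved from the normal shell with grub-install instead of the interactive GRUB shell: grub-install /dev/sda followed by grub-install /dev/sdb.)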

That's it. You've just replaced a failed hard drive in your RAID1 array.

Wednesday, September 6, 2017

Linux Interview Questions and Answers for Freshers and Seniors - Linvirtshell

1. What is the maximum number of partitions that can be made on a hard drive?

On a hard disk we can create 4 primary partitions, or alternatively 3 primary partitions and one extended partition.

The extended partition can then be subdivided into logical partitions, up to 63 in total (the exact limit also depends on the MBR partition table).



2. How to extend an LVM logical volume with free physical extents (PEs)?

Free PE / Size 952 / 3.72 GB
In this case, use the free PE count (952) with -l and a + sign; this will assign the entire free space of 3.72 GB:


#sudo lvextend -l +952 /dev/vol_grp1/logical_vol1


After extending the LV, also resize the filesystem with resize2fs.
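
For example, for the logical volume above (assuming an ext3/ext4 filesystem):

resize2fs /dev/vol_grp1/logical_vol1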


3. How to pass a sudo/root password in a shell script?

echo "password" | su - -c "cp -pr /boot/ /home/nsk/"

4. How to update an A record from the command line in DNS?

These steps should be run on the authoritative name server.

Edit the zone file with your favorite command line editor. In this example, we use ‘vi’.


[root@host /var/named/]% vi /var/named/example.com.db
Locate the appropriate line and update the IP address. You will see something like the following:
ftp IN A 192.168.1.100
Update the Zone’s Serial number.
Make BIND aware of your DNS changes by reloading the DNS zone.


[root@host /var/named/]% rndc reload example.com


Test that your changes worked correctly using ‘dig’.
[root@host /var/named/]% dig @localhost ftp.example.com


5. What are the types of DNS servers available?

a. Master
b. Slave
c. Caching only DNS server
d. Forwarding only DNS server

6. DNS default and main configuration file.

The main configuration file for the DNS server is named.conf. By default this file is not created in the /var/named/chroot/etc/ directory. Instead of named.conf, a sample file /var/named/chroot/etc/named.caching-nameserver.conf is created. This file is used to build a caching-only name server. You can also edit this file after renaming it to named.conf to configure a master DNS server, or you can manually create a new named.conf file.

We are using BIND's chroot feature, so all our necessary files will be located in the chroot directory. Set the directory location to /var/named. Next we will set the location of the forward zone and reverse lookup zone files. If you cannot create this file manually, then download it and copy it to /var/named/chroot/etc/.

We have defined two zone files: example.com.zone for the forward zone and 0.168.192.in-addr.arpa for the reverse zone. These files will be stored in the /var/named/chroot/var/named/ location. We will use two sample files for creating these files.
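
As a rough illustration, the zone declarations for these two files in named.conf would look something like this (zone and file names as described above):

zone "example.com" IN {
        type master;
        file "example.com.zone";
};

zone "0.168.192.in-addr.arpa" IN {
        type master;
        file "0.168.192.in-addr.arpa";
};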


7. What is the difference between yum update and yum upgrade?

yum upgrade and yum update perform the same basic function: both update packages to the latest available version.

The difference is that upgrade also deletes obsolete packages, while update preserves them.


8. Explain yum erase and yum remove.


yum erase : removes the RPMs but keeps the configuration files.
yum remove : removes the packages along with all their files.


9. How to set up a SAN boot LUN on Linux.


We can set up a SAN boot LUN to work in a Red Hat Enterprise Linux environment that is using the FC protocol.

Before you begin, verify that your system setup supports SAN boot LUNs. See the Interoperability Matrix.

Steps
Create a LUN on the storage system and map it to the host. This LUN will be the SAN boot LUN.
You should ensure the following:
a. The SAN boot LUN is mapped to the host.
b. Multiple paths to the LUN are available.
c. The LUN is visible to the host during the boot process.
d. Enable the BIOS of the HBA port to which the SAN boot LUN is mapped.
For information about how to enable the HBA BIOS, see your HBA vendor-specific documentation.
e. Configure the paths to the HBA boot BIOS as primary, secondary, tertiary, and so on, on the boot device.
For more information, see your vendor-specific documentation.
f. Save and exit.
g. Reboot the host.
h. Install the operating system on the SAN boot LUN.
Note: For Red Hat Enterprise Linux 5 series, you must specify Boot Option as linux mpath during the operating system installation. When you specify linux mpath, you can see the multipath devices (/dev/mapper/mpathx) as installation devices.
i. Install the Host Utilities.
j. Configure DM-Multipath.


10. How to create a GFS2 file system.


mkfs.gfs2 -p LockProtoName -t clustername:clusterfilesystem -j NumberJournals BlockDevice
mkfs.gfs2 -p lock_dlm -t alpha:mydata1 -j 8 /dev/vg01/lvol0


11. How to extend the GFS file system

gfs2_grow /mygfs2fs


12. How to repair GFS file system
fsck.gfs2 -y /dev/testvg/testlv


13. How to add a journal to a GFS2 file system

gfs2_tool journals /mnt/gfs2      : find out how many journals the GFS2 file system currently contains

gfs2_jadd -j1 /mygfs2  


14. Is my hard drive dying? How to check?


Check the hard disk for errors using the smartctl command:
smartctl -a /dev/sda
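
You can also trigger a short self-test and read back the result once it completes (both are standard smartctl options):

smartctl -t short /dev/sda

smartctl -l selftest /dev/sda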


15. Find out the largest directories or files eating disk space on a Unix-like system:

#du -a /ftpusers/tmp | sort -n -r | head -n 10

#du -cks * | sort -rn | head

Explain about vpxd.cfg Configuration

vpxd.cfg is an XML formatted file which can be modified to alter the native behavior of the VMware vCenter Server.  Sparse references on the internet document the changes that can be made in this environment. 

The vpxd.cfg file is located on the VMware vCenter Server by default at %ALLUSERSPROFILE%\Application Data\VMware\VMware VirtualCenter\vpxd.cfg

  • On Windows Server 2008, this would generally be C:\ProgramData\VMware\VMware VirtualCenter\vpxd.cfg
  • On Windows Server 2003, this would generally be C:\Documents and Settings\All Users\Application Data\VMware\VMware VirtualCenter\vpxd.cfg
This collection of vpxd.cfg settings has been sourced from various places. The parameters generally apply to vCenter Server versions ranging from 2.0 through 4.x; a given parameter can apply to several or even all versions. Remember to restart the VMware VirtualCenter Server service in the Server Manager for changes to vpxd.cfg to take effect.

**Disclaimer**

As with anything found on this site and much of the internet in general, information is provided “as is” without warranty.  Modify settings at your own risk.  I suggest thoroughly researching the changes first and also checking with VMware Support.

Backup and Restore ESXi Configuration with PowerCLI Commands


Now, the process of backing up and restoring the configuration of ESXi is fairly simple.

Backing up and restoring the ESXi configuration with PowerCLI, step by step:

Open PowerCLI, connect to vCenter with the Connect-VIServer cmdlet, and enter this single line to back up the configuration of all hosts attached to the vCenter server:

  Get-VMHost | Get-VMHostFirmware -BackupConfiguration -DestinationPath "C:\temp\anyfolder"

All backup files will be copied to C:\temp\anyfolder.

Now, if we need to restore a host's configuration, we need to put the host into maintenance mode first and then enter the restore command:

Set-VMHostFirmware -VMHost <IP_or_FQDN> -Restore -Force -SourcePath <path_to_backup_bundle>
The host will reboot immediately after we hit Enter, without any prompt.

Reset the root password of an ESXi server using Host Profiles

 Here is an overview of the steps to reset the root password:
  • Select the ESXi server.
  • Put the host in Maintenance mode
  • Open the vSphere Client
  • In Host Profiles, Create a Profile from existing host and select the host, enter the Name
  • Navigate to the security configuration, Administrator Password, and select “Configure a fixed administrator password”
  • Enter the new root password twice

  • Attach the host profile to the host
  • Right-click the host and select Apply Profile
  •  Wait till the Host Profile Compliance status is Compliant. The root password is now changed!
  • Test if it is possible to SSH to the ESXi host
  • Delete the Host Profile
  • Exit maintenance mode
 Enterprise Plus licensing is needed to use Host Profiles; otherwise, use a 60-day trial license of VMware vSphere.

How to change the root password for all esxi hosts in a vcenter using script


The following script will change the root password for all ESXi hosts in a vCenter. Run the script using PowerCLI.

Before running the script, create a scripts folder in the root of your C:\ drive and copy the script there. You will need the current root password, the new root password, and the name of the vCenter server.

The script uses the Set-VMHostAccount cmdlet to change the root account password.

#Change Root Password Script for all hosts in a particular cluster

Copy the code below into a text file and save it as passwd.ps1.

#Prompt user for vCenter server and connect.
$vcenter = Read-Host "Enter vCenter Server: "
$vCenterUser = Read-Host "Enter your vCenter Username: "
$vCenterPw = Read-Host "Enter your vCenter Password: "

Connect-VIServer -Server $vcenter -User $vCenterUser -Password $vCenterPw

Write-Host "Connected to vCenter Server: $vcenter"

#Prompt user for datacenter and cluster
$datacenter = Read-Host "Enter Datacenter: "
$cluster = Read-Host "Enter Cluster: "
#Gather hosts from vCenter for chosen cluster

Write-Host "Getting hosts from datacenter..."

$MyVMHosts = Get-Datacenter $datacenter | Get-Cluster $cluster | Get-VMHost
# If we want to change the password for all ESXi hosts in the vCenter instead:
# $MyVMHosts =Get-VMHost

#Disconnect from vCenter
Disconnect-VIServer -Confirm:$false
Write-Host "Got the hosts.  Next..."

#Prompt user for old root password and new password

$oldpassword = Read-Host "Enter old root password: "
$newpassword = Read-Host "Enter new root password: "
$newpassword2 = Read-Host "Enter new root password again: "

#Connect to hosts and change root password, then disconnect.

if ($newpassword -eq $newpassword2){
    foreach ($line in $MyVMHosts) {
        Connect-VIServer -Server "$line" -User "root" -Password "$oldpassword" -WarningAction SilentlyContinue
        Set-VMHostAccount -UserAccount "root" -Password "$newpassword"
        Disconnect-VIServer -Confirm:$false
        Write-Host "$line...done."
}
}else{
 Write-Host "New passwords do not match!"
}

Tuesday, September 5, 2017

How to backup ESXi configuration using VMA

vMA (vSphere Management Assistant) is a free download and comes with your VMware vSphere.

    First we need to open a console session on vMA as the vi-admin user.
    Then enter this command with the -s switch (s is for save...):
    vicfg-cfgbackup -s -server 195.168.0.10 /tmp/esxi5

Restoring is just as easy: use the same command with the -l switch (as in "load" the configuration). The ESXi server needs to reboot after loading the configuration from backup, so we must reply YES to complete the command. The ESXi host will then reboot.

vicfg-cfgbackup -l -server  195.168.0.10 /tmp/esxi5

After rebooting, the ESXi server comes up with its configuration restored.

ENABLE Copy Paste operation between a Virtual Machine and your local machine Via Powershell

The steps below allow you to enable copy/paste operations between a virtual machine and your local computer.

However, VMware does not recommend this change, in order to avoid and limit exposure of sensitive data copied to the clipboard.

The Enable-VMCopyPaste function allows you to enable copy/paste operations between a virtual machine and your local machine.

When using PowerCLI, this setting can be applied without powering off the VM. However, you'll need to perform a stun/unstun operation (i.e. power off/on, suspend/resume, create/delete a snapshot, or Storage vMotion) for the change to take effect.

Enable-VMCopyPaste -VM "test"

  This will enable copy/paste for the virtual machine named "test".