Hot-Swapping Operating Systems Using Inter-Partition Application Migration

VMware has just submitted a new patent application, which could be a big deal for the future of IT.

For the details, read United States Patent Application 20160210141.


VMUG Tomorrow

The second VMUG meeting will be held tomorrow, in the SZÁMALK auditorium.

The agenda is the following:

  • 9:00-9:40 – Registration
  • 9:40-9:50 – Welcome by the VMUG leader
  • 9:50-10:20 – Eaton
    Speaker: Kókai Róbert, Business Development Manager
    Talk title: Eaton Intelligent Power Manager – power management for virtualized environments
  • 10:20-10:50 – Számalk
    Speaker: Lovas Balázs, VMware Specialist
    Talk title: Using vRealize Orchestrator in everyday work
  • 10:50-11:20 – Coffee break
  • 11:20-11:50 – Veeam
    Speaker: Keszler Mátyás, Territory Manager, Veeam Software
    Talk title: Backup and replication to the cloud
  • 11:50-12:20 – VMware
    Speaker: Czuczumanov Valentin, Systems Engineer, VMware
    Talk title: VSAN is ready for deployment
  • 12:20-13:00 – LUNCH
  • 13:00-13:30 – Community session
    Speaker: Bertalan Bence – Microsoft / VMware sysadmin, Keller Zrt.
    Talk title: Client and application virtualization
  • 13:30-14:00 – Raffle and closing

More information and registration on the official VMware User Group site. Unfortunately I can't attend tomorrow, but if you can, go!




It happened that on about 150 Debian VMs the following error message appeared during boot:

     (i.e., without -a or -p options)


The situation is problematic because, although it can be fixed with fsck in about 2 minutes as the screenshot shows, this can only be done manually: at that point there is no network yet, and not even VMware Tools is running, so we can't do tricks with Invoke-VMScript. With 150 Debian VMs this would be quite time-consuming, but there is a solution.
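Roughly, the manual fix looks like this on the VM console (a sketch only; the device name is an example for illustration, the real one comes from the fsck error message):

```shell
# In the recovery shell that the failed boot drops you into:
fsck -y /dev/sda1   # repair the file system, answering yes to all prompts
reboot              # or exit the recovery shell to continue booting
```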

The problem became really interesting when I noticed that only the VMs on one particular host behaved this way; the other Linux VMs booted fine. What could have such an effect on the guest OS? After some searching I noticed that the time had drifted badly, even though NTP was running.


At power-on the VM takes over this hardware time (even though “Synchronize guest time with host” is disabled), so the situation described above develops on the Linux guests.


I fixed the time on the ESXi host, then after a shutdown and start the Debian VMs booted fine, without any disk inconsistency.
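The host-side fix can be sketched with esxcli (a hedged sketch: it assumes an ESXi release where the `esxcli system time` namespace is available, and the timestamp below is an example; the time can also be set in the vSphere Client under the host's Time Configuration):

```shell
# Check the host's current system time (UTC) and the BIOS/hardware clock
esxcli system time get
esxcli hardware clock get
# Set the correct time manually -- example timestamp, use the real UTC time
esxcli system time set -y 2016 -M 3 -d 25 -H 10 -m 0 -s 0
```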


Managing Pluggable Storage Architecture – Part 2

In the first part I wrote about what the Pluggable Storage Architecture is, along with the NMP, SATP and PSP. In this second part we will configure them.

Configuring Proper SATP and PSP for Your Storage Array

By default, every storage device on an ESXi host is handled by the NMP. This is defined in claim rules, which the PSA uses to determine whether an MPP or the NMP is needed to manage the paths to a device.

~ # esxcli storage core claimrule list


As we can see, the NMP will be used for USB, SATA, IDE, block, unknown and all other devices. Almost all rules are already loaded (runtime). With “MASK_PATH”, devices can be hidden from the ESXi host.
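As a hedged sketch, masking a device with MASK_PATH could look like this (the adapter/channel/target/LUN location and the rule number are examples only, and the device ID is the one used later in this post):

```shell
# Add a MASK_PATH claim rule for the example path vmhba1:C0:T1:L0
esxcli storage core claimrule add -r 200 -t location -A vmhba1 -C 0 -T 1 -L 0 -P MASK_PATH
# Load the new rule into the runtime rule set, then reclaim the device's paths
esxcli storage core claimrule load
esxcli storage core claiming reclaim -d naa.600507630080807a4000000000000002
```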

Sometimes the default PSPs and/or SATPs are not the best choice, or they have changed in the Compatibility Guide, or the storage vendor recommends another plug-in or other settings. These situations can be handled via esxcli or PowerCLI.

Changing the default PSP for a SATP

For example, there were two SATPs (VMW_SATP_ALUA_CX and VMW_SATP_SYMM) which used to have VMW_PSP_FIXED as their default PSP. EMC recommended changing the default claim rules to VMW_PSP_RR. This can be done via esxcli:

~ # esxcli storage nmp satp set -b -P VMW_PSP_RR -s VMW_SATP_ALUA_CX
~ # esxcli storage nmp satp set -b -P VMW_PSP_RR -s VMW_SATP_SYMM


Changing the PSP on a single LUN

Find the device identifier of your LUN, then run the following to get the currently used PSP:

~ # esxcli storage nmp device list -d naa.600507630080807a4000000000000002


The policy is VMW_PSP_MRU, but the IBM V7000 should use VMW_SATP_ALUA with VMW_PSP_RR, so let’s change it to Round Robin:

~ # esxcli storage nmp device set -d naa.600507630080807a4000000000000002 -P VMW_PSP_RR

Check the PSP of this LUN again:

~ # esxcli storage nmp device list -d naa.600507630080807a4000000000000002


Now we have the correct VMW_PSP_RR. This can also be modified in the Web Client.


Changing the PSP on more than one LUN with PowerCLI

Usually we have more than one LUN and more than one ESXi host. That means a lot of PSP/SATP rules, and a lot of manual work if something needs to be changed. With PowerCLI, very sophisticated queries and modifications can be executed.

The following PowerCLI one-liner gets, from a particular ESXi host, all of the IBM LUNs that use MRU and whose device ID begins with the specified part:

PowerCLI C:\> Get-VMHost | Get-ScsiLun -LunType "disk" | where {$_.Vendor -eq "IBM" -and $_.canonicalname -like "*naa.600507630080807a4*" -and $_.Multipathpolicy -eq "MostRecentlyUsed" } |select canonicalname,multipathpolicy | sort canonicalname | ft -autosize


What if we want to change all of these LUNs to Round Robin? That is easy:

PowerCLI C:\> Get-VMHost | Get-ScsiLun -LunType "disk" | where {$_.Vendor -eq "IBM" -and $_.canonicalname -like "*naa.600507630080807a4*" -and $_.Multipathpolicy -eq "MostRecentlyUsed" } | Set-scsiLun -MultipathPolicy "RoundRobin"


All of the filtered LUNs now have the VMW_PSP_RR Path Selection Plug-in.

Defining a new SATP & PSP for a storage device

Most arrays will work well with the default SATP and PSP settings, but some storage vendors have best practices which should be followed to ensure the best performance. In this case a new custom SATP claim rule needs to be added:

~ # esxcli storage nmp satp rule add -s VMW_SATP_ALUA -P VMW_PSP_RR -O iops=1 -c "tpgs_on" -V NewVend -M NewArr -e "NewVend NewArr SATP claimrule"

This creates a new SATP rule for VMW_SATP_ALUA with Round Robin as the default PSP, where the next path is used after every single I/O (iops=1), and option -c makes the rule match devices that have Target Port Group support (tpgs_on) enabled.

Then list all of the SATP rules which use VMW_SATP_ALUA:

~ # esxcli storage nmp satp rule list -s VMW_SATP_ALUA


The new rule is displayed, marked with “user” in the “Rule Group” column.


As you can see, it is very important to know how the PSA works. Verify that the proper SATP and PSP plug-ins are configured on all LUNs, based on the VMware Compatibility Guide. If available, try the MPPs. If a storage vendor has provided best practices for a particular array, ensure that all of them have been applied. Use the latest firmware version for your storage array and the latest ESXi updates. Sometimes with new releases VMware changes the default PSP for a SATP, or other SATP or PSP plug-ins are recommended. For example, from 5.1 U2 to 5.1 U3 the default for the IBM Storwize V7000 changed from VMW_SATP_SVC with VMW_PSP_FIXED to VMW_SATP_ALUA with VMW_PSP_RR. Additionally, a different SATP may need to be set for the same storage depending on whether FC or iSCSI is used (e.g. with the NetApp FAS8000 series).


Where in the World are VCAPs?

Just a quick re-blog of yesterday's post from VMware Edu with the same title. Jill has created a great infographic; click to enlarge it.


Unfortunately Hungary is in the “<100” category along with most of the other EU countries, so we don’t know the exact numbers. I would say around 15-20, maybe. :)


Managing Pluggable Storage Architecture – Part 1

In block-based storage environments (iSCSI, Fibre Channel, FCoE) ESXi hosts can use multiple NICs or HBAs to connect to the LUNs via (FC) switches and Storage Processors (SPs). Because of this physical design, an ESXi host has multiple paths to the storage; this is multipathing. The technique provides failover and load balancing. The following logical diagram from VMware shows a simple multipathing architecture.


With vSphere 4, VMware introduced a redesigned multipathing storage subsystem inside the VMkernel, called the Pluggable Storage Architecture, also known as PSA. This modular framework is a collection of plug-ins and offers Storage APIs so that 3rd-party developers can integrate their storage solutions and capabilities into the VMkernel. It also manages the plug-ins and handles path discovery via scanning.


The Native Multipath Plugin (NMP) is created by VMware. The NMP manages the sub-plug-ins (SATP and PSP), whether created by VMware or by a 3rd party. It provides default support for the storage arrays listed in the VMware Compatibility Guide (aka HCL).

Run the following esxcli command to display the multipathing modules. In this case we have only the NMP:

~ # esxcli storage core plugin list


The NMP has two sub-plug-in types. Both are developed by VMware, but 3rd-party versions are also possible.

Storage Array Type Plugin (SATP)

It monitors each physical path, reports changes and handles path failover. There are pre-defined SATPs for every storage array that VMware supports.

This command shows the currently loaded SATP plug-ins on the ESXi host:

~ # esxcli storage nmp satp list



Every SATP plug-in has one or more claim rules. The following output is a huge list containing claim rules for each supported array; I have filtered it to VMW_SATP_ALUA:

~ # esxcli storage nmp satp rule list -s VMW_SATP_ALUA


For example, the IBM Storwize V7000 (2145 is the Product ID) also has the VMW_SATP_ALUA plug-in assigned, with a specified PSP.

Path Selection Plug-In (PSP)

This is the second module inside the NMP. It is responsible for choosing a path for every I/O request. Based on the loaded SATP plug-in (see esxcli storage nmp satp list above), the PSP is selected automatically by the NMP for each LUN (but this can be overridden). By default, vSphere supports three PSPs:

~ # esxcli storage nmp psp list


  • VMW_PSP_MRU, aka Most Recently Used: The PSP selects the path that was available first. During failover it chooses another working path, and the I/O remains on this new path even if the original becomes available again, so it doesn’t fail back. This is the default policy for most Active / Passive and ALUA arrays.
  • VMW_PSP_RR, aka Round Robin: I/O is rotated through all active, available optimized paths, which provides basic load balancing. By default, Round Robin switches to the next path after 1000 IOPS. It can be used with Active / Active, Active / Passive and ALUA arrays.
  • VMW_PSP_FIXED, aka Fixed: The ESXi host uses a preferred path, which can be selected manually; otherwise it is the first path that was available during ESXi boot. After a failover during an outage it returns to the original path once that path becomes available again. This is the default policy for most Active / Active arrays. Do not use it for Active / Passive arrays, because path thrashing could occur with this PSP.
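Related to the Round Robin default of 1000 IOPS: the per-device limit can be tuned with esxcli. A hedged sketch (the device ID is the example used elsewhere in this series; some vendors recommend iops=1, so check your array's best practices first):

```shell
# Show the current Round Robin configuration of a device
esxcli storage nmp psp roundrobin deviceconfig get -d naa.600507630080807a4000000000000002
# Switch paths after every single I/O instead of the default 1000
esxcli storage nmp psp roundrobin deviceconfig set -d naa.600507630080807a4000000000000002 -t iops -I 1
```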

Storage vendors can provide their own Multipathing Plug-ins (MPPs), which can run in parallel with the NMP. These 3rd-party plug-ins completely replace the NMP, SATP and PSP. They can optimize path failover and load balancing, and can also provide sophisticated path selection based on, e.g., queue depth. Only a few vendors have developed MPPs, such as EMC (PowerPath/VE) or Veritas (Dynamic Multi-Pathing).

In Part 2 we will configure the proper SATP and PSP in the GUI and also with esxcli and PowerCLI.


vSphere (HTML5 Web) Client 6.5 – new fling!

VMware has just announced a new fling, the vSphere HTML5 Web Client. Like the well-known ESXi Embedded Host Client, this solution uses HTML5 and JS instead of Flash, which is very exciting. I hope it will be GA soon (with vSphere 6.5), maybe in August?


Let’s check the limited feature list:

  • VM power operations (common cases)
  • VM Edit Settings (simple CPU, Memory, Disk changes)
  • VM Console
  • VM and Host Summary pages
  • VM Migration (only to a Host)
  • Clone to Template/VM
  • Create VM on a Host (limited)
  • Additional monitoring views: Performance charts, Tasks, Events
  • Global Views: Recent tasks, Alarms (view only)
  • Feedback Tool (new feature to collect feedback from you)


The new client is compatible with vSphere 6.0 only (Windows and VCSA) and is distributed as an OVA (810 MB).


I will go through the installation with VCSA 6.


  • On VCSA6: enable SSH


  • Log in to the VCSA6 via SSH to ensure that bash is the default shell. If you get the following screen after login, you are in appliancesh. If it looks like a normal shell prompt, skip ahead to the H5 appliance login.


  • Enable the shell and launch bash. Then run the following command to change the default shell:
/usr/bin/chsh -s "/bin/bash" root


  • SSH or log in to the H5 client appliance with root/demova


  • Register the appliance against your vCenter server with the following command. It creates the necessary folders and files (which is done with a .bat script if you are using the Windows-based vCenter Server):
/etc/init.d/vsphere-client configure --start yes --user root --vc <IP_Address_Of_vCenter>


  • (optional) Set an NTP server, if you want, with:
/etc/init.d/vsphere-client configure --start yes --user root --vc <IP_Address_Of_vCenter> --ntp <your_NTP_server>
  • Check that the times match on the H5 appliance and the SSO/PSC server; run the check on both sides.
  • (optional) On the VCSA you can change the default shell back from bash to appliancesh:
/usr/bin/chsh -s /bin/appliancesh root
  • Configuration is complete; access the new vSphere H5 client at
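For the time check in the steps above, a minimal sketch (assuming both appliances are Linux VMs; run it on each side and compare the output):

```shell
# Print the current UTC time in a fixed format for easy comparison
date -u +"%Y-%m-%d %H:%M:%S"
```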


It is very fast (tested with Chrome 49). Currently it has limited functionality, but here it is, and that’s the point. Some screenshots are attached below. Buttons and icons are much bigger (I think smaller icons are better). During the short test the H5 client didn’t crash and there weren’t any errors, so it is a great launch.

Read the announcement and try out this fling!

