VMs Inaccessible After iSCSI / NAS Restart: A "Quick Fix"
http://vmdamentals.com/?p=4503
After the iSCSI / NAS restart, a lot of VMs are inaccessible. Actually, only VMs that were powered off at the time show up greyed out; all VMs that were running during the NAS failure hung, but they resumed properly once the NAS came back online. In the example above I had already fixed some of the VMs; originally ALL powered-off VMs were in the “inaccessible” state!
Fixing it the easy way
I figured I needed to somehow tell vSphere that the inaccessible VMs were no longer inaccessible, and force it to reload their configuration. And vimsh (via the vim-cmd command) can do exactly that: it can reload a VM into a host, and that does the trick!
First, access the host as root, either over SSH or from the direct console command line. From there, you can find all inaccessible VMs using the following command:
vim-cmd vmsvc/getallvms 2>&1 | grep -i skip
For example, this will generate output like this:
~ #
~ # vim-cmd vmsvc/getallvms 2>&1 | grep -i skip
Skipping invalid VM '118'
Skipping invalid VM '127'
Skipping invalid VM '147'
Skipping invalid VM '16'
Skipping invalid VM '184'
Skipping invalid VM '185'
Skipping invalid VM '190'
Skipping invalid VM '25'
Skipping invalid VM '92'
Skipping invalid VM '94'
Skipping invalid VM '95'
Skipping invalid VM '97'
These are all the VMs that are currently inaccessible on this host. It is now easily fixed by calling a reload for each “skipped” VM, where [NUMBER] is the VM ID shown between the quotes in the output above:
vim-cmd vmsvc/reload [NUMBER]
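For example, to reload the first VM from the list above:
~ # vim-cmd vmsvc/reload 118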
This triggers a reload action on the host, and the VM should show up as accessible again.
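If many VMs are affected, a small loop can reload them all in one go. The sketch below assumes the “Skipping invalid VM” messages appear in the command output as shown above (stderr is redirected to stdout just in case) and that the value between the single quotes is the VM ID that vmsvc/reload expects:
# Reload every VM that getallvms reports as "Skipping invalid VM '...'"
for vmid in $(vim-cmd vmsvc/getallvms 2>&1 | grep -i skip | awk -F "'" '{print $2}'); do
    vim-cmd vmsvc/reload "$vmid"
done
Afterwards, run the getallvms / grep command again; once it returns nothing, all VMs have been reloaded and should no longer show up as inaccessible.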