Wednesday, August 16, 2017

Unable to Restore Resource Pool Setting for Hosts - vSAN error

After setting up a vSAN cluster I had the following error appear on the ALT-F11 console screen on one of my ESXi 6.0 hosts:

"Unable to restore Resource Pool Settings for hosts/vim/vmvisor/vsanperfsvc it is possible hardware or memory constraints have changed. please verify settings in the vsphere client"

Apparently this is somehow related to DRS.  I was able to resolve it by switching DRS off and back on again and then rebooting the host that had the error.  After the reboot, the host no longer showed the error on the console screen.  Hope this helps someone having the same issue.
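For the reboot itself, if you'd rather do it from the ESXi shell than the vSphere client, something along these lines should work. This is just a sketch; on a vSAN host the maintenance-mode data handling option (--vsanmode below) may be named slightly differently depending on build:

# Enter maintenance mode first; ensureObjectAccessibility keeps vSAN objects
# available without doing a full data evacuation while the host is down.
esxcli system maintenanceMode set --enable true --vsanmode ensureObjectAccessibility

# Reboot (a reason string is required, and the host must be in maintenance mode).
esxcli system shutdown reboot --reason "Clearing vsanperfsvc resource pool error"

Remember to take the host back out of maintenance mode once it comes back up.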

Friday, December 16, 2016

VMware's View Storage Accelerator, just use it....

Originally introduced as Content-Based Read Cache (CBRC) in vSphere 5.0, VMware's View Storage Accelerator is a read-caching feature, supported in VMware View 5.1 and later, that improves performance by caching commonly read virtual desktop image blocks in host RAM.  All of this is completely transparent to the guest and can be used alongside other storage array technologies.

CBRC has been a great addition to the VMware architecture and offers some real benefits to those looking to offload storage requests without additional products or costs.  Because the most commonly read blocks are served from memory rather than requested from the storage infrastructure, overall storage traffic drops, particularly during boot storms.
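As a quick aside: View normally enables CBRC on each host for you when you turn on View Storage Accelerator for a desktop pool, but if you want to confirm a host actually has it on, the advanced setting can be read from the ESXi shell. A minimal check, assuming the option is still named /CBRC/Enable on your build:

# An Int Value of 1 means CBRC (View Storage Accelerator) is enabled on this host
esxcli system settings advanced list -o /CBRC/Enable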

The following test data showing the reduction in IOPS and storage bandwidth was posted by VMware on their End User Computing Blog:

[Chart: reduction in IOPS and storage bandwidth with View Storage Accelerator enabled]

However, View Storage Accelerator does have some limitations:

1.) It's a read-only caching technology.  

Write IOPS still hit the storage just as hard as before.  Outside of boot storms, writes are usually estimated to make up 50-80% of total I/O during steady-state virtual desktop operation.  

2.) It's limited to 2GB of RAM per host.  

It is a dynamic cache, but in today's age of relatively cheap server DRAM it is starting to feel limited.  I'm sure 2GB was at one time determined to be the "sweet spot" of resource usage versus benefit gained.  Since I run primarily non-persistent linked-clone desktops in my environment, though, I wonder how fast a desktop could be if, say, 16GB of the desktops' main replica image could be cached in RAM.  (More on the setting behind this limit below.)
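As far as I can tell, the 2GB ceiling corresponds to the host's /CBRC/DCacheMemReserved advanced setting (the cache reservation in MB), which tops out at 2048 on these releases. A sketch of checking it and raising it to the cap from the ESXi shell, assuming that option name; View's pool-level cache size should be kept in sync with whatever you set here:

# Show the current cache reservation (in MB) along with its allowed min/max
esxcli system settings advanced list -o /CBRC/DCacheMemReserved

# Raise the reservation to the 2048MB maximum
esxcli system settings advanced set -o /CBRC/DCacheMemReserved -i 2048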

Thursday, February 4, 2016

VMware preparing "Enabling the Digital Enterprise" announcements on February 9

Preregister for VMware's February 9 announcements.  This continues the trend they started last year of holding a major product announcement webcast in between VMworlds.

From VMware:

Look for two exciting components: 
1) Enabling the Digital Enterprise: Deliver and Secure Your Digital Workspace
     Pat Gelsinger will be joined by Sanjay Poonen who will present VMware’s digital workspace vision and share exciting announcements that help companies securely deliver and manage any app on any device. 
2) Enabling the Digital Enterprise: Build and Manage Your Hybrid Cloud
     Raghu Raghuram joins Pat Gelsinger to share how VMware’s software-defined approach can help simplify how you build and manage your hybrid cloud.

Friday, January 15, 2016

Teradici CTO clarifies Tera1 Zero Client support in the future

A while back I received a response from Randy Groves of Teradici to my concern that Tera1 Zero Clients were no longer officially supported after Horizon View 6.0.1.  It turns out that they will still work as before (at least through View 6.2) with their final 4.7.1 firmware; they just won't be getting any of the new features that will be added to the Tera2 Zeros.  His response follows:

As the Teradici CTO, let me add a little color to this discussion. When support for Network Appliances (e.g. Cisco, F5,...) was added to the protocol (back in View 5.x), this created a new authentication scheme to allow these devices to be an authorized-man-in-the-middle. We did not have the ability to support both the old and the new authentication scheme in the Tera1 devices so they stayed on the "old scheme". Starting with Horizon 6, VMware is only certifying clients that use the new authentication scheme and have not indicated that they will add any new capabilities nor provide support for clients that use the old scheme (which includes older software clients, too).
Because of the large install base of older clients, I do not expect that they will disable the old authentication mode. Since they are not doing any new feature development in that mode, they are also unlikely to break compatibility. However, fixing any issues that might arise for "old mode" clients is not committed by either Teradici or VMware, which is why we have made this End-of-Life announcement.
Tera1s have been on the market for 8 years and we stopped selling new chips to our partners 3 years ago. You will notice in the KB that we are providing "technical guidance" until December 2016 which will give everyone at least 5 years of guidance even if you were one of the last customers to buy a Tera1. Even as CTO, I have several Tera1 devices in my various offices (in fact I am using one right now). I am running pre-released code that is even beyond Horizon 6.1.1. I fully expect that they will continue to work well beyond December 2016. However, if you need new Horizon features or the new capabilities in Tera2 devices, then you will need to upgrade. If you need a guarantee that your Tera1 devices will continue to work on future Horizon releases, then you should also upgrade. Or, you can just keep using them until a Horizon release comes out that breaks compatibility with the old authentication mode - if and when that occurs.
We just want to be clear about the facts so you can make an informed decision.
Randy Groves - Teradici CTO

Wednesday, December 23, 2015

Slow View Event Database

After upgrading my environment from View 5.3.2 to Horizon View 6.0.1, I started having the following issues once production loads returned to normal:
  • The Event database performance in VMware View 6.0.x is extremely slow when browsing within View
  • High CPU usage on the SQL server hosting the Event database
  • The larger the Event database becomes, the slower the queries run.
This is discussed in VMware Knowledge Base article – The Event database performance in VMware View 6.0.x is extremely slow (2094580)
To resolve this issue, you have to create an index.  Run the following command on your SQL Event database:
CREATE INDEX IX_eventid ON dbo.VDIevent_data (eventid)
Substitute the table name to match your Event database prefix.  In my case the prefix was VDI_, making the table VDI_event_data, so I ran:
CREATE INDEX IX_eventid ON dbo.VDI_event_data (eventid)
Recent events now load in View Administrator in seconds instead of minutes.

Thursday, November 5, 2015

VSAN 5.5 - Verifying jumbo frames

I've been working on a Virtual SAN 5.5 POC (for various reasons, the environment is still on vSphere 5.5u2) and came across an issue following the guide provided by VMware:

Tips for a Successful Virtual SAN 5.5 Evaluation

It states on page 9 to use vmkping -S 9000 to test connectivity between hosts using jumbo frames.

That's where I hit a wall....  I kept getting the following error message:

~ # vmkping -S 9000 192.168.100.14
*** vmkernel stack not configured ***

I kept racking my brain and checking my configuration: MTU 9000 was set on the distributed switch and on the VSAN VMkernel port, and my network switches were set to support jumbo frames on both the physical ports and on my VSAN VLAN (I use VLAN 1001, since VLAN 1000 is my vMotion network).  A quick way to double-check the MTU settings from the host itself is shown at the end of this post.  Then I came across the following post:


Even though that is an old post from well before VSAN existed, it is still entirely relevant.  (As far as I can tell, the capital -S switch selects a TCP/IP stack by name, so vmkping was treating "9000" as a stack name rather than a packet size, hence the "vmkernel stack not configured" error; the lowercase -s is what sets the ICMP payload size.)  Now armed with the correct format of the command:

~ # vmkping -I vmk2 -s 8972 -d 192.168.100.14
PING 192.168.100.14 (192.168.100.14): 8972 data bytes
8980 bytes from 192.168.100.14: icmp_seq=0 ttl=64 time=0.176 ms
8980 bytes from 192.168.100.14: icmp_seq=1 ttl=64 time=0.159 ms
8980 bytes from 192.168.100.14: icmp_seq=2 ttl=64 time=0.157 ms

--- 192.168.100.14 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.157/0.164/0.176 ms

Eureka!  A few notes on the command: vmk2 is my VSAN VMkernel interface, and "-I vmk2" forces the ping out that interface.  The payload size of 8972 is the 9000-byte MTU minus 20 bytes of IP header and 8 bytes of ICMP header, and "-d" sets the don't-fragment bit, so the ping only succeeds if a full-size jumbo frame actually makes it through.
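For completeness, here's how the MTU values can be double-checked from the host itself before reaching for vmkping (standard commands, though the exact output layout varies a little between builds):

# Lists each VMkernel NIC with its MTU (the VSAN vmk should show 9000)
esxcfg-vmknic -l

# The same information via esxcli
esxcli network ip interface list

# MTU configured on the standard/distributed switches the host sees
esxcfg-vswitch -l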


Monday, October 5, 2015

VSAN 5.5 - The case of the missing new disks

I'm setting up a VMware VSAN 5.5 POC (can't upgrade to 6.0 yet for various other reasons) and, even though each Dell R730 host had 16 drives, only around 10 of them were showing up in the vSphere Web Client to be claimed as VSAN disks.

The R730s shipped from Dell with the drives set up in a RAID configuration, and even though I had deleted that configuration and switched the controller to pass-through operation, something must still have been present on some of the drives.  Any remnant of data on a drive seems to keep it from showing up on the VSAN configuration page:


However, the drives were present if I tried to make a traditional datastore out of one of them:



So how to solve this?

You can wipe the disks straight from the ESXi console....

1.) First take note of the naa IDs of the disks in question, then console into the host (a consolidated example session follows step 5).

2.) Then use the "fdisk" command to do the following:

fdisk /dev/disks/naa.XXXXXXXXXXXXX

where XXXXXXXXXXXXX would be the rest of the naa ID.

3.) Ignore the "fdisk command is deprecated" message.

You'll then be prompted for a command.

4.) Type o and press Enter.  This tells fdisk you want to create a new, empty partition table on the drive, but nothing is actually committed to the drive yet.

5.) Type w and press Enter.  This writes the new partition table to the drive and exits.
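Putting the steps together, the console session looks roughly like this (the naa ID is just a placeholder, and the rescan at the end is the one mentioned in the closing paragraph below):

# List the raw devices and note the naa IDs of the drives to wipe
ls /dev/disks/ | grep -v ':'

# Run fdisk against one of them (placeholder naa ID), ignoring the deprecation warning
fdisk /dev/disks/naa.XXXXXXXXXXXXX
#   at the "Command (m for help):" prompt type  o   (new empty partition table)
#   then type                                   w   (write it out and exit)

# Once every drive has been wiped, rescan so ESXi re-reads the now-blank disks
esxcli storage core adapter rescan --all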

Here is a screenshot showing the above process:


Repeat this for all the drives in question.  Rescan the disk controller (the esxcli command in the example session above) and the drives should appear as available to add to VSAN in the vSphere Web Client: