Category Archives: VMware

How to configure vCloud Connector with vCloud Air Virtual Private Cloud OnDemand

This post's title is a bit of a mouthful, but if you want to connect your private 'on-prem' vSphere environment to vCloud Air Virtual Private Cloud OnDemand resources using vCloud Connector, you'll need the following information.

If, like me, you have a small lab environment consisting of a single vCenter Standard appliance/server, and you have access to credit on vCloud Air Virtual Private Cloud OnDemand, then you will need to configure something called vCloud Connector (referred to here as the 'Server' and the 'Node'). These are two separate appliances which you deploy via a simple OVF template and then link together with your vCenter instance (referred to as 'vSphere'). VMware's own documentation is pretty straightforward apart from one specific area which I think needs a little improvement.

First, download and deploy the vCloud Connector Server appliance, followed by the Node. Both of these steps are detailed here in the product documentation and simply require a static IP address, default gateway, DNS and subnet mask during the template deployment.
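
For anyone who prefers the command line to the deploy-OVF wizard, ovftool can push each appliance out in one step. The sketch below uses placeholder names, datastore, network and vCenter inventory path (ovftool will prompt for credentials), and the appliances' network settings can also be adjusted afterwards from their admin web consoles on port 5480:

    # deploy the vCC Server appliance; the Node is deployed the same way from its own OVA
    ovftool --acceptAllEulas --name=vcc-server --diskMode=thin \
            --datastore=datastore1 --network="VM Network" \
            vCloud-Connector-Server.ova \
            'vi://vcenter.lab.local/Datacenter/host/Cluster1/'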

Once the appliances are online, check that the time zone is correct and in agreement between both appliances. Configure the Node first by entering your 'Cloud' details, which in my use case is simply the vCenter server's URL. Once this is complete, configure the Server component by registering the Node you just worked with. This step links the Node to the Server and completes the following relationship:

Private Cloud (vCenter) Node <—> vCloud Connector Server <—> (vCloud Air Node, added in the next step)

The vCloud Connector server maintains a local content repository which you can then use to synchronise content between the vCloud Air service and your own content catalogue (think templates).

The next step is to configure the Server with a connection to vCloud Air’s own Node – we’re lucky here because it’s already deployed as a shared resource within the infrastructure layer at VMware’s datacentre. Go to the Server’s ‘Nodes’ page and add another connection using the ‘Register Node’ button.

This time, you'll need the URL of vCloud Air's 'OnDemand' servers, which are documented at the following location:

http://pubs.vmware.com/vca/index.jsp#com.vmware.vcc.vca.doc/GUID-AD5E9377-7A9E-4EDA-95AD-9DBECEA55787.html

These URLs are different from the ones you are redirected to if you select the "Want to migrate virtual machines?" link in vCloud Air; make sure you use the ones that correspond to the OnDemand service.

[Screenshot vCHS1: "Do you want to migrate virtual machines!?"]

Configure the appropriate URL for the location of your vCloud Air instance and then select the Public checkbox (this is required if there is a firewall/the Internet between you and the datacentre). For some reason I needed to ignore the SSL certificate in order to authenticate correctly, but I'm not too worried about that in a lab environment. The official explanation for this is below:

vCloud Connector nodes in vCloud Air have SSL enabled and certificates from DigiCert installed. If you want to use the certificate, you must add a DigiCert High Assurance CA-3 intermediate certificate to your vCloud Connector server trusted keystore. Obtain the certificate, then see Add CA Root Certificate to Trusted Keystore for information on uploading it.

You should select ‘vCloud Director’ as the cloud type because this is the back-end core of the vCloud Air service, but the rest had me stumped for a little while. The VMware documentation says that you should just go ahead and enter your Organisation ID into the VCD Org Name box. But what is my org ID?

Specifically it says:

Specify the name of your vCloud Air virtual data center. (This is also the Organization name in the underlying vCloud Director instance.) You must use a valid name. vCloud Connector validates the name that you provide.

Luckily I noticed that the information was literally staring me in the face! Look in the URL of your vCloud Air management portal and you will find the GUID in the orgName query parameter, e.g.

https://uk-slough-1-6.vchs.vmware.com/compute/ui/?orgName=63567c98-f839-4632-9df2-b510155fa436&serviceInstanceId=.
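
If you would rather pull the GUID out programmatically than by eye, a quick shell one-liner does the job (using the example URL above):

    # extract the orgName query parameter (the VCD Org Name) from the portal URL
    echo 'https://uk-slough-1-6.vchs.vmware.com/compute/ui/?orgName=63567c98-f839-4632-9df2-b510155fa436&serviceInstanceId=.' \
      | sed -n 's/.*orgName=\([^&]*\).*/\1/p'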

It would have been nice had VMware provided a bit of a nudge here in terms of the field description, but I suppose it’s obvious now after going through the process.

[Screenshot: vCHS2]

Once this is done, enter the username and password you already use to access the vCloud Air portal and you should have a successful connection. If you've performed all of the steps as described, you will now have a local vCloud Connector Server coupled with one Node in your private cloud and another in vCloud Air, looking something like this:

[Screenshot: vCHS3a]

Now that you're done with this, we'll return to the original end-to-end relationship to review the outcome:

Private Cloud (vCenter) Node <—> vCloud Connector Server <—> vCloud Air Node

The two components on the left-hand side belong to you and run on your private cloud infrastructure, whilst the right-hand side connects you to VMware's cloud platform. Once this is achieved we have a new icon displayed within the vSphere Client which allows us to access our content library and begin to upload templates, VMs and vApps to the cloud.

[Screenshot: vCHS4]

Check back for more vCloud fun soon.

Optimising Oracle DB with VMware’s vFlash Read Cache feature

This post is a slightly different one from my usual, simply because it is more notes-based than editorial or comment; however, I hope that the simple steps and data captured here will be useful. It has taken me a while to get this data out, but even though it is about a year old now, the performance improvement should be even better with ESXi 6.x.

In this test we were interested in evaluating whether VMware's Flash Read Cache (vFRC) feature, introduced in ESXi 5.5, would benefit read-heavy virtual workloads such as Oracle DB.

Test scenario:

  • Oracle 11g 11.2.0.1 DB VM with 4 vCPU, 8,192 MB RAM and a 200 GB Oracle ASM disk for the database
  • HP DL380 G7 host with 2 x Intel Xeon X5650 CPUs (6 cores, 2.67 GHz), 128 GB RAM and a locally attached 4 x 7.2K SAS RAID array
  • VMware ESXi 5.5 with an Enterprise Plus licence, which provides the vFlash Read Cache capability

Creating a baseline (before applying vFRC)

Using esxtop to establish typical baseline values:

Typical disk latency across the measured virtual machines was 11.97 ms.
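
For reference, this sort of baseline is easy to capture by running esxtop in batch mode and pulling the latency columns out of the CSV afterwards; a sketch (the interval and sample count here are arbitrary):

    # 30 samples at 10-second intervals, written to CSV for later analysis
    esxtop -b -d 10 -n 30 > vfrc-baseline.csv
    # interactively, press 'v' for the per-VM storage view and watch the read/write latency columns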

Correlation of the baseline latency and commands-per-second values with vCenter Operations Manager:

[Screenshot: vFRC1]

High and low water marks for disk latency sat between 4 ms and 16 ms (using 7.2K RPM drives in a 4-disk RAID 5 array).

[Screenshot: vFRC3]

Disk usage was negligible following VM boot and Oracle DB startup:

[Screenshot: vFRC2]

[Screenshot: vFRC4]

In order to set the vFlash Read Cache block size correctly we need to find out the typical I/O size of the workload (so that small writes do not consume an oversized cache block if the block size is set higher than the typical I/O size).

Using vscsiStats to measure the frequency of different sized I/O commands:

[Screenshot: vFRC5]

The highlighted frequency values (above) show that 4,096-byte I/Os were the most common across both the write and read buckets, and therefore the overall number of operations peaked in the same bucket.
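
For anyone repeating this, the usual vscsiStats sequence for gathering an I/O length histogram looks like the following (world group IDs are per-VM and will differ in your environment):

    vscsiStats -l                              # list running VMs and their world group IDs
    vscsiStats -s -w <worldGroupID>            # start collecting for the Oracle VM
    # ...let the workload run for a while...
    vscsiStats -p ioLength -w <worldGroupID>   # print the I/O length histograms
    vscsiStats -x -w <worldGroupID>            # stop collection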

In order to establish the baseline Oracle performance an I/O calibration script was run several times.
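
The standard way to run such a calibration is Oracle's built-in DBMS_RESOURCE_MANAGER.CALIBRATE_IO procedure; a minimal sketch, run from SQL*Plus as SYSDBA, is shown below (the disk count and latency threshold are assumptions based on the 4-disk array, not necessarily the exact values used in this test):

    SET SERVEROUTPUT ON
    DECLARE
      l_iops    INTEGER;
      l_mbps    INTEGER;
      l_latency INTEGER;
    BEGIN
      -- assumed inputs: 4 physical disks in the array, 20 ms maximum tolerated latency
      DBMS_RESOURCE_MANAGER.CALIBRATE_IO(
        num_physical_disks => 4,
        max_latency        => 20,
        max_iops           => l_iops,
        max_mbps           => l_mbps,
        actual_latency     => l_latency);
      DBMS_OUTPUT.PUT_LINE('max_iops       = ' || l_iops);
      DBMS_OUTPUT.PUT_LINE('max_mbps       = ' || l_mbps);
      DBMS_OUTPUT.PUT_LINE('actual_latency = ' || l_latency);
    END;
    /

Note that CALIBRATE_IO needs TIMED_STATISTICS enabled and asynchronous I/O on the datafiles, and only one calibration run can be active at a time.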

Oracle DB I/O metrics calculation:

[Screenshot: vFRC6]

The maximum IOPS figure was found to lie between 576 and 608 per second using a 200 GB VMDK located on the 4-disk RAID array.

The high water mark for disk latency rose to 28ms during the test, versus 12ms when the instance was idle – indicating contention on the spindles during read/write activity.

[Screenshot: vFRC7]

During the I/O calibration test the high water mark for disk throughput rose to 76,000 KBps, versus 3,450 KBps when the instance was idle. This shows that the array throughput max is around 74MB/s.

[Screenshot: vFRC8]

Having established that the majority of writes during the above test were in fact 8 KB (the screenshot above came from a different, 4 KB, test), vFRC was enabled only on the 200 GB ASM disk, using an arbitrary 50 GB reservation (25% of the total disk size). No reboot was required; VMware inserts the cache in front of the disk storage transparently to the VM.

With Flash Read Cache enabled on 200GB ASM disk

After adding a locally attached 200 GB SATA SSD to the ESXi server and claiming it as Virtual Flash resource capacity, a 50 GB vFRC cache was enabled on the Oracle ASM data disk in the virtual machine's disk settings:

[Screenshot: vFRC9]

[Screenshot: vFRC10]
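
If you want to confirm from the ESXi shell that the SSD has been claimed as a virtual flash resource, the vflash namespace in esxcli covers it; the exact sub-commands below are from memory for ESXi 5.5, so double-check them against `esxcli storage vflash` on your build:

    # confirm the new SATA disk is detected as an SSD
    esxcli storage core device list | grep -iE "Display Name|Is SSD"
    # devices claimed for the host's virtual flash resource (syntax assumed for 5.5)
    esxcli storage vflash device list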

Once the vFRC function was enabled the Oracle I/O calibration script was run again, and surprisingly the first pass was considerably slower than the previous runs (max IOPS 268). This is because the cache had not yet been primed: the initial reads all missed the SSD cache, still had to be served from the SAS array, and incurred the extra work of populating the cache as they went. Because vFRC uses write-through caching, each write is committed to the backing disk and a copy is also placed in the SSD cache, so the cache fills continually and read performance should improve over time:

[Screenshot: vFRC11]

esxcli was used to view the resulting cache efficiency after running the I/O calibration (showing a 29% read hit rate served from the SSD cache versus reads going to the SAS disks):

[Screenshot: vFRC12]
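
The per-cache statistics (hit rate, evictions, mean I/O size and so on) come from the same esxcli namespace; again, the exact syntax here is recalled from 5.5 and should be treated as a guide rather than gospel:

    # find the cache name created for the Oracle ASM VMDK, then pull its statistics
    esxcli storage vflash cache list
    esxcli storage vflash cache stats get -c <cache-name>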

In the example above no blocks have been evicted from the cache yet, meaning that the 50 GB cache assigned to this VMDK still offers room for growth. When all of the cache blocks are exhausted, the ESXi storage stack will begin to evict older blocks in favour of storing more relevant, up-to-date data.

The resulting I/O calibration performance is shown below – both before and after enabling the vFRC feature.

[Screenshot: vFRC13]

In brief conclusion, the vFlash Read Cache feature is an excellent way to add in-line, SSD-based read caching for specific virtual machines and volumes. You enable the option on specific VMs only, and then track their usage and cache effectiveness over time to make sure that you have allocated neither too much nor too little cache. However, once the cache is primed with data there is a marked and positive improvement in read throughput, and a much reduced number of IOPS needs to be dealt with by the physical storage array. For Oracle servers which are read-biased, this should significantly improve performance where non-SSD storage arrays are being used.

Oracle licensing on hyper-converged platforms such as Nutanix, VSAN etc.

I recently posted a comment on Michael Webster of Nutanix's blog about Oracle licensing on VMware clusters and wanted to link back to it here, as it's something I've been involved with several times now.

With VMware vSphere 5.5 the vMotion boundary is defined by the individual datacenter object in vCenter, which means that you cannot move a VM between datacenters without exporting it, removing it from the inventory and re-importing it somewhere else. This currently means that even if you deploy Oracle DB on an ESXi cluster with just two nodes, you could be required by Oracle to license all of the other CPU sockets in the datacenter! This rule stems from Oracle's stance that it does not recognise soft partitioning, or any kind of host or CPU affinity rule, as a means of limiting licensing: if a VM could be made to run on a processor socket through some kind of administrative operation, then that socket should be licensed. This doesn't seem fair, and VMware even suggest that it can be counteracted simply by defining host affinity rules, but let's be clear: the final say has to come from Oracle's licensing agreement and not from whether VMware thinks it should be acceptable.

http://www.vmware.com/files/pdf/techpaper/vmw-understanding-oracle-certification-supportlicensing-environments.pdf

So the only current solution is to build dedicated Oracle clusters, with separate shared storage and a separate vCenter instance, consisting only of Oracle DB servers. This way you can define exactly which CPU sockets have to be licensed: in effect, all of those making up the ESXi cluster(s) within that vCenter datacenter object.

Now, with vSphere ESXi 6 a new feature was introduced called Long Distance vMotion, which makes it possible to migrate a VM between cities, or even continents, even where the source and destination are managed by different vCenter instances. An excellent description of the new features can be found here. This rather complicates the matter, since Oracle will now need to consider how it affects the 'reach' of any particular VM instance, which would now appear to be limited only by the scope of your single sign-on domain rather than by how many hosts or clusters are defined within your datacenter. I will be interested to see how this develops and will certainly post back here if anything moves us further towards clarity on this subject.

Permalink to Michael's original article

Purple screen halt on ESXi 5.5 with Windows Server 2012 R2

Believe it or not, it seems to be possible to crash a clean ESXi 5.5 host right out of the box by installing a Windows Server 2012 R2 virtual machine with an E1000 virtual network adapter and attempting a file copy to another VM located on the same box.

I was recently trying to copy some data from a Windows Server 2003 VM onto a new 2012 R2 VM on the same host. Expecting the file copy to be extremely fast (because the traffic never leaves the host's virtual switch), I was left scratching my head when I saw only a 3-10 MB/s transfer rate. Because I was still running ESXi 5.0, I thought it would be easier to troubleshoot after upgrading to the latest version of the hypervisor, only to find that the second I hit 'paste' to begin the file transfer, the entire hypervisor crashed with a purple screen.

Needless to say, this isn't a fringe case and others appear to have noticed this behaviour too. The fix is simple enough: just swap out the E1000 vNIC on the 2012 R2 server for a VMXNET3 adaptor. But how is such a simple scenario able to take out a whole host?

Thankfully, after swapping the vNIC I was able to achieve 50-60 MB/s throughput continuously, which was more than enough of an improvement given where I started.
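
Normally you would just remove the E1000 NIC and add a VMXNET3 one in the vSphere Client (with VMware Tools installed for the driver), but it can also be done from the ESXi shell by editing the .vmx while the VM is powered off; the sketch below uses placeholder paths and VM IDs:

    # find the VM ID and the datastore path to its .vmx file
    vim-cmd vmsvc/getallvms
    # swap the adapter type in the .vmx (placeholder datastore path; keep a backup copy first)
    cd /vmfs/volumes/datastore1/win2012r2
    cp win2012r2.vmx win2012r2.vmx.bak
    sed 's/^ethernet0.virtualDev = "e1000"$/ethernet0.virtualDev = "vmxnet3"/' win2012r2.vmx.bak > win2012r2.vmx
    # make the host re-read the configuration, then power the VM back on
    vim-cmd vmsvc/reload <vmid>
    vim-cmd vmsvc/power.on <vmid>

Bear in mind that Windows will see the VMXNET3 card as a brand-new adapter, so any static IP has to be reassigned and the old E1000 will linger as a non-present device (see the last post on this page for how to clean that up).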

I'm going to link to the original post I found here, and I'll update this page if I find a known issue somewhere that explains how this behaviour occurs.

Dell MEM, EqualLogic and VMware ESXi, how many iSCSI connections?

I’ve been working on a fairly large cluster recently which has access to a large number of LUNs. All 16 hosts can see all of the available disks, and so the EqualLogic firmware limits have started to present themselves, causing a few datastore disconnections.

As part of the research into the issue I came across several helpful documents, which hopefully should prove essential reading in case you haven’t come across the planning side of this before:

A description of Dell MEM parameters, taken from EqualLogic magazine

Dell EqualLogic PS Arrays – Scalability and Growth in Virtual Environments

EqualLogic iSCSI Volume Connection Count … – Dell Community

Best Practices when implementing VMware vSphere in a Dell …

Configuring and installing Dell MEM for EqualLogic PS series SANs on VMware

If you run into problems with iSCSI connection count then you will need to rethink which hosts are connecting and how many connections they maintain.

These factors are detailed within the documents linked to above but, in brief, you can attempt to resolve the issue in the following ways (a quick way of checking the current session count on a host is shown after the list):

  • Reducing the number of LUNs by increasing datastore sizes
  • Reducing the number of parallel connections to a LUN that MEM initiates
  • Using access control lists to create sub-cluster groups of VMs that can see fewer LUNs
  • Breaking your clusters down further in order to separate different groups of disks from each other, e.g. on a per-storage-pool cluster basis
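
If you just want to see how many iSCSI sessions an individual ESXi host is contributing to the group, the esxcli iscsi namespace will show you; the grep below assumes each session block in the output carries a "Target:" line, so adjust it to whatever your build prints:

    # list every active iSCSI session on this host (one block per session)
    esxcli iscsi session list
    # rough per-host session count (assumes one "Target:" field per session block)
    esxcli iscsi session list | grep -c "Target:"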

How not to behave in a VMware environment

I have to admit, I did something stupid the other day. I manage a system where the Virtual Centre instance for an ESXi cluster runs in a VM, and I very often connect to that server using RDP so that I can run the VC console. All well there, and pretty normal you might think. But there are drawbacks to this arrangement, as I discovered all too well when I mistakenly attached an ISO image to the VC instance that was stored on that machine's own virtual disk. VMware/ESXi did actually present the ISO to the virtual machine and displayed the content of the disc, but when I tried to read from the volume the VM crashed (blue screened).

That would normally be the end of a stupid incident, but since the VM was running in a high availability (HA) cluster the host tried to reset the machine and restart Virtual Centre. At this point I realised that this might not be the best course of action; after all, the VM could not start correctly because the ISO was only available inside the machine that was trying to start. A catch-22 situation if ever I saw one. What was worse, I couldn't control the HA status of the VM, or manage the ESXi cluster, without Virtual Centre.

After nearly two hours of trying to figure out what to do next, the pending task on the host (Reset virtual machine – 95%) eventually changed, without any intervention from me, to Reset firmware and then Powering on virtual machine. Phew! Thanks VMware, we got there in the end, and I learned a valuable lesson about how not to behave in a virtual environment.

Showing non-present devices within VMware guest OS

If you perform a hardware change that affects a VM guest you may find that you have non-present network adaptors still configured with existing IP addresses which no longer load when you boot the machine. When you then try to configure another network device with one of those IP addresses you may be told that the address is already assigned to a network card that is no longer present in the system. Follow this article to show the ghosted, non-present devices so that you can remove them (the key steps are summarised after the link):

http://support.microsoft.com/kb/315539
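
In short, the KB boils down to launching Device Manager from a command prompt on the affected guest with an extra environment variable set so that ghosted devices become visible:

    set devmgr_show_nonpresent_devices=1
    start devmgmt.msc

Then turn on View > Show hidden devices and uninstall the greyed-out network adapter; its IP address is released and can be reused on the new vNIC.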