Upgrading Citrix XenApp 7.x VDA version using PowerShell

With the advent of XenApp 7, and more recently the higher frequency of VDA cumulative updates, I would generally recommend implementing Citrix Machine Creation Services or another imaging mechanism (such as Provisioning Services) when rolling out new versions of the Virtual Delivery Agent (VDA) to a large number of catalogs.

However, what happens when you only require one XA server per catalog, or when each of those servers is handled manually whenever new application code is deployed? This is more common than you might imagine, especially in Citrix deployments which have per-customer or per-application catalogs. The work involved in maintaining a master image can be significant, and its serviceability relies upon someone knowing how to treat image updates in a way that won't introduce problems weeks or months later.

One customer of mine has at least 80 catalogs, each running one or more XenApp VMs, so it simply doesn't make sense to maintain a master image for each, especially when application code updates are delivered frequently. So I set about creating a simple PowerShell script, for use in a VMware environment, which attaches the Citrix upgrade ISO and then runs the setup installer within the context of a remote PowerShell session.

Using this method you can easily carry out a bulk upgrade of tens (possibly hundreds) of statically assigned VDAs, attaching the ISO and installing the update on each one automatically. The advantage of this time-saving approach is that it can even be run in a loop, so that the upgrade is only attempted when a server is idle and not running any sessions.
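In outline, such a loop might look like the sketch below (illustrative only; $targetvda matches the variable used in the full script later in this post, and the upgrade body itself is elided):

# Hypothetical outer loop: re-check every 30 minutes until no powered-on
# server is left running the old VDA version
do {
    $pending = Get-BrokerMachine -DesktopKind Shared |
        Where-Object {($_.AgentVersion -ne $targetvda) -and ($_.PowerState -eq 'On')}
    $idle = $pending | Where-Object {$_.SessionCount -eq 0}
    # ...attach the ISO and run the installer against $idle here...
    Start-Sleep -Seconds 1800
} while ($pending)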

NB – as always, please validate the behaviour of the script in a non-production environment and adjust where necessary to meet your own needs.

Here’s a walkthrough of the script, along with the complete example version included at the end.

  1. The script will load the required Citrix and VMware PowerShell snap-ins (I generally run things like this on the Citrix Delivery Controller, with PowerCLI installed alongside for convenience)
  2. Request credentials and connect to vCenter via a popup
  3. Request credentials for use with WinRM connections to remote Windows servers via a popup
  4. Create a collection of objects (XA servers) which are powered on, do not have any active sessions and don’t already have the target VDA version installed (see $targetvda variable)
  5. For each VM, sequentially:
    1. Attach the specified .iso image file to the VM
    2. Determine the drive letter where the XA ISO file has been mounted
    3. Create a command line for the setup installer, and save the command into c:\upgrade_vda.cmd on the XA server
    4. Connect via PowerShell remoting session to the remote XA server
    5. Adjust the EUEM registry node permissions (as per https://support.citrix.com/article/CTX215992)
    6. Execute the c:\upgrade_vda.cmd upgrade script on remote machine via PS session
    7. Disconnect the PowerShell remote session
    8. Reboot the VM via vCenter in order to restart the XA services

Review the script and edit the following variables to reflect your use-case:

$vcentersrv = "yourvcentersrv.domain.com"
$targetvda = '7.15.4000.653'
$isopath = "[DATASTORE] ParentFolderName\XenApp_and_XenDesktop_7_15_4000.iso"

Edit the selection criteria on the VMs which will be upgraded:

$targetvms = Get-BrokerMachine -DesktopKind Shared | Where-Object {($_.AgentVersion -ne $targetvda) -and ($_.PowerState -eq 'On') -and ($_.HostedMachineName -like 'SRV*')}

All virtual machine names in my example environment begin with SRV, so this line can be adapted according to the VMs which you would like to upgrade, or simply replaced with the actual server names if you want to be more selective:

($_.HostedMachineName -in 'SRV1','SRV2','SRV3')

Finally, consider modifying the following variable from $true to $false in order to actually begin the process of upgrading the selected VMs. I suggest running it with the default $true first in order to validate the selection criteria.

$skiprun = $true

Additional work:

I would additionally like to incorporate disconnecting any previous VDA .iso file from the VM before attempting the upgrade. I have noticed that the volume label search which determines the drive letter, e.g. Get-Volume -FileSystemLabel 'XA and XD*', is too broad, and will erroneously match both the XA_7_15_4000.iso and XA_7_15_2000.iso versions without differentiating between them.
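A starting point might be something like the following (an untested sketch; Set-CDDrive -NoMedia empties the virtual drive so that the new ISO can be attached cleanly):

# Detach any previously mounted ISO from the VM's CD drive first
$cd = Get-VM $i.HostedMachineName | Get-CDDrive
if ($cd.IsoPath -and ($cd.IsoPath -ne $isopath)) {
    $cd | Set-CDDrive -NoMedia -Confirm:$false
}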

I would also like to do further parsing of the installation result codes in order to decide whether to stop or simply carry on; that said, I have used the script on tens of servers without hitting too many roadblocks.
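As a sketch, the installer's exit code could be captured in the remote session and inspected. Note that the code meanings shown are assumptions; check them against the Citrix documentation for your release:

# Run the upgrade script and return its exit code as the last output item
$result = Invoke-Command -Session $s {& c:\upgrade_vda.cmd; $LASTEXITCODE}
switch ($result | Select-Object -Last 1) {
    0       { Write-Host 'Installer reported success' }
    3       { Write-Host 'Installer reported success, reboot required' } # assumed meaning
    default { Write-Warning "Installer returned code $_ - investigate before continuing" }
}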

This script could also be adapted to upgrade XenDesktop VDA versions where statically assigned VMs are provided to users.

Final note:

This script prevents the Citrix installer telemetry from running during the installation, because it requires internet access and generates errors in PowerShell on XenApp servers which can't talk outbound. You can choose to remove this command-line parameter according to your circumstances:

/disableexperiencemetrics

Citrix also optionally collects and uploads anonymised product usage statistics, but again this requires internet access. In order to disable Citrix Telemetry the following setting is used:

/EXCLUDE "Citrix Telemetry Service"

Additionally, the Personal vDisk feature is now deprecated, so the script excludes this item so that it is removed if currently present (be aware of this if you're using PvD):

/EXCLUDE "Personal vDisk"

PowerShell code example:

# Upgrade VDA on remote Citrix servers

if ((Get-PSSnapin -Name "Citrix.Broker.Admin.V2" -ErrorAction SilentlyContinue) -eq $Null){Add-PSSnapin Citrix.Broker.Admin.V2}
if ((Get-PSSnapin -Name "VMware.VimAutomation.Core" -ErrorAction SilentlyContinue) -eq $Null){Add-PSSnapin VMware.VimAutomation.Core}

$vcentersrv = "yourvcentersrv.domain.com"

if ($vmwarecreds -eq $null) {$vmwarecreds = Connect-VIServer -Server $vcentersrv}            # Authenticate with vCenter, you should enter using format DOMAIN\username, then password
if ($creds -eq $null) {$creds = Get-Credential -Message 'Enter Windows network credentials'} # Get Windows network credentials

clear

$targetvda = '7.15.4000.653' #Add the target VDA version number - anything which isn't correct will be upgraded
$isopath = "[DATASTORE] ParentFolderName\XenApp_and_XenDesktop_7_15_4000.iso" #Path to ISO image in VMware
$skiprun = $true #Set this variable to false in order to begin processing all listed VMs

$targetvms = Get-BrokerMachine -DesktopKind Shared | Where-Object {($_.AgentVersion -ne $targetvda) -and ($_.PowerState -eq 'On') -and ($_.HostedMachineName -like 'SRV*')}
Write-Host 'The following XA VMs will be targeted:'
Write-Host $targetvms.HostedMachineName
if ($skiprun -eq $true) {Write-Host 'Skip run is still enabled'; exit}

foreach ($i in $targetvms){

if ($i.AgentVersion -ne $targetvda) {
    Write-Host "Processing $($i.HostedMachineName), found VDA version $($i.AgentVersion)"
    
    if ($i.sessioncount -ne $null) {Write-Host "Processing $($i.HostedMachineName), found $($i.sessioncount) users are logged on"}

    if ($i.sessioncount -eq 0) {#Only continue if there are no logged-on users

        Write-Host "Processing $($i.HostedMachineName), verifying attachment of ISO image"
        $cdstate = Get-VM $i.HostedMachineName | Get-CDDrive
        if (($cdstate.IsoPath -ne $isopath) -or (-not $cdstate.ConnectionState.Connected)) { $cdstate | Set-CDDrive -IsoPath $isopath -Confirm:$false -Connected:$true; Write-Host 'ISO has been attached' } #Attach and connect the ISO if it isn't already

        $s = New-PSSession -ComputerName ($i.MachineName.split('\')[1]) -Credential $creds
            #Create the upgrade command script using correct drive letters
            Write-Host "Processing $($i.HostedMachineName)" -NoNewline
            invoke-command -Session $s {
                $drive = Get-Volume -FileSystemLabel 'XA and XD*'
                $workingdir = ($drive.driveletter + ":\x64\XenDesktop Setup\")
                $switches = " /COMPONENTS VDA /EXCLUDE `"Citrix Telemetry Service`",`"Personal vDisk`" /disableexperiencemetrics /QUIET"
                $cmdscript = "`"$workingdir" + "XenDesktopVDASetup.exe`"" + $switches
                Out-File -FilePath c:\upgrade_vda.cmd -InputObject $cmdscript -Force -Encoding ASCII
                Write-Host " wrote script using path" $workingdir
            }
            
            #Adjust the registry permissions remotely
            Write-Host "Processing $($i.HostedMachineName), updating registry permissions"
            Invoke-Command -Session $s {
                $acl = Get-Acl "HKLM:\SOFTWARE\Wow6432Node\Citrix\EUEM\LoggedEvents"
                $person = [System.Security.Principal.NTAccount]"Creator Owner"
                $access = [System.Security.AccessControl.RegistryRights]"FullControl"
                $inheritance = [System.Security.AccessControl.InheritanceFlags]"ContainerInherit,ObjectInherit"
                $propagation = [System.Security.AccessControl.PropagationFlags]"None"
                $type = [System.Security.AccessControl.AccessControlType]"Allow"}
            Invoke-Command -Session $s {$rule = New-Object System.Security.AccessControl.RegistryAccessRule($person,$access,$inheritance,$propagation,$type)}
            Invoke-Command -Session $s {$acl.AddAccessRule($rule)}
            Invoke-Command -Session $s {$acl |Set-Acl}
                
            #Execute the command script
            Write-Host "Processing $($i.HostedMachineName), executing VDA install script"
            Invoke-Command -Session $s {& c:\upgrade_vda.cmd} # Runs the upgrade script on remote server
            Remove-PSSession $s #Disconnect the remote PS session
            Restart-VMGuest -VM $i.HostedMachineName -Confirm:$false #Restart the server following either a successful or unsuccessful upgrade
            }
        }
    }

Using MSDeploy to migrate between IIS6 and IIS7/8.5

A server replacement task I worked on recently involved migrating content between an IIS6 instance running on Windows Server 2003 R2 and IIS8.5 on Windows Server 2012 R2.

One of the sticking points that arose was that the -source: and -dest: switches take either the webServer60 or webServer provider, according to whether IIS6 or a later version is being targeted.

However, when attempting a cross-version migration it is not possible to sync the content directly using mismatched webServer and webServer60 providers. Instead, it's more appropriate to target the IIS metabase index for the web site using -source:metakey=lm/w3svc/1

Here’s a step by step example of the choices which you would use:

Obtain and install the MSDeploy tool on both source and destination servers, including the remote service.

To check the dependencies on the original IIS6 server
msdeploy -verb:getDependencies -source:webServer60

To check the installed modules on the target IIS8.5 server
msdeploy -verb:getDependencies -source:webServer

To check the dependencies on the original IIS6 metabase
msdeploy -verb:getDependencies -source:metakey=lm/w3svc/1

To check the installed modules on the target IIS8.5 metabase
msdeploy -verb:getDependencies -source:metakey=lm/w3svc/1

Once you have determined which options were previously installed on the source IIS instance, and manually aligned the settings on the target server, you can proceed with the initial synchronisation of the web server content. One way of doing this between servers connected by a slow link is to use a ZIP archive (termed package mode), which should ideally have its credentials encrypted using a password:

To export the files from the source IIS6 server metabase to an encrypted ZIP file, including Application Pools
msdeploy.exe -verb:Sync -source:metakey=lm/w3svc/1 -dest:package=C:\Output\MS_WebDeploy_WebDeploy\servername_6.0_1.zip,encryptPassword=xxx -enableLink:AppPoolExtension

Copy the package file (.zip) to the target server in order to import it.

NB. The includeAcls=True option does not do anything when transferring data using the package option (ZIP archive), but it doesn't generate any error either.

I found that it was necessary to switch the .NET priority order of the migration tool in "C:\Program Files\IIS\Microsoft Web Deploy V3\msdeploy.exe.config" (in my case, by reordering the <supportedRuntime> elements so that the older runtime was listed first) in order to use the earlier .NET version when importing data on the target server.

To import to IIS metabase on target server using the encrypted package
msdeploy -verb:sync -source:package=c:\Output\MS_WebDeploy_WebDeploy\servername_6.0_1.zip,encryptPassword=xxx -enableLink:AppPoolExtension -dest:metakey=lm/w3svc/1

Once you have completed the initial import of the source data on the target server it is often useful to resynchronise the data at a later point, e.g. after applying small changes at the source. This approach can also be used directly, without the package-based method outlined previously, if you are copying content between servers with a high-speed connection.

To import to IIS metabase on target server by pulling from the source machine, including ACLs

msdeploy -verb:sync -source:metakey=lm/w3svc/1,computername=servername,includeAcls=True -enableLink:AppPoolExtension -dest:metakey=lm/w3svc/1,includeAcls=True

These are fairly simple examples but they should be sufficient to migrate simple HTML or Classic ASP pages between the source and destination servers. You can find links to useful documentation below:

https://www.iis.net/downloads/microsoft/web-deploy

https://docs.microsoft.com/en-us/iis/publish/using-web-deploy/migrate-a-web-site-from-iis-60-to-iis-7-or-above

PowerCLI Get-Tag fails with ‘Could not load file or assembly ‘Newtonsoft.Json, Version=10.0.0.0’

Here’s a simple scenario which I came across today. You would like to work with your vSphere environment using the latest PowerCLI but discover that v6.5.1 is the latest downloadable version on VMware’s website. Hearing that the distribution for this code has now moved to the PowerShell Gallery you open a PS prompt and enter:

PS:\> Install-Module VMware.PowerCLI

The modules are downloaded and installed successfully, and you are able to connect to your vCenter environment:

Connect-VIServer -server vcenterserver.com -user 'DOMAIN\username'

But when you attempt to use a simple command such as:

Get-Tag

you receive an error similar to:

get-tag : 11/10/2018 21:06:20   Get-Tag         Could not load file or assembly 'Newtonsoft.Json, Version=10.0.0.0,
Culture=neutral, PublicKeyToken=30ad4fe6b2a6aeed' or one of its dependencies. The system cannot find the file
specified.
At line:1 char:1

In my case I found that other system components on my VM (e.g. the Citrix Virtual Desktop Agent) were using an older version of Newtonsoft.Json.dll which was found in the assembly search path ahead of the PowerShell module's own copy.

Searching for the file conflict using ProcMon I noticed that the Connect-VIServer cmdlet does indeed find and load a version of this .dll during the connection process, e.g. the one located in:

C:\Windows\Microsoft.NET\assembly\GAC_MSIL\Newtonsoft.Json\v4.0_4.5.0.0__30ad4fe6b2a6aeed\Newtonsoft.Json.dll

However this version is 5.0.5.16108 on my Windows Server 2016 platform and we’re looking for 10.0.0.0 or newer.

Work-around

Retrieve the newer version of the file (supplied with the PowerCLI modules), located for instance in:

C:\Users\username\Documents\WindowsPowerShell\Modules\VMware.VimAutomation.Common\net45

and place a copy in somewhere PowerShell is likely to find it, e.g.:

C:\Windows\System32\WindowsPowerShell\v1.0\Newtonsoft.Json.dll
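In PowerShell terms the work-around amounts to something like this (paths are illustrative and will vary with your module version; run from an elevated prompt):

# Check the module's bundled copy is the newer version, then copy it into place
$src = "$env:USERPROFILE\Documents\WindowsPowerShell\Modules\VMware.VimAutomation.Common\net45\Newtonsoft.Json.dll"
$dst = "$env:SystemRoot\System32\WindowsPowerShell\v1.0\Newtonsoft.Json.dll"
(Get-Item $src).VersionInfo.FileVersion  # expect 10.0.x or newer
Copy-Item $src $dst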

This simple work-around proved successful for me, but you should check of course to verify all other functionality which might depend on this file before making a similar change in a production environment.

Should the /psc URL work on both HA Platform Services nodes?

I recently ran into a strange issue following the enablement of two PSC 6.5 nodes in an HA configuration, as part of a larger rolling upgrade from vCenter 5.5.

NB – all URLs shown are internal, in use within my lab environment only.

During the migration of the existing customer's vCenter environment we had to rehearse the externalisation of the PSC from an initial embedded SSO instance. As part of this process the first PSC node in a new site was migrated from the original Windows vCenter 5.5 SSO to PSC 6.5, and subsequently a second new node was joined to the first site in order for replication to be established.

I used a Citrix NetScaler to load balance the configuration, and noticed at some point after the successful HA repointing that I was unable to access the https://hosso01.sbcpureconsult.internal/psc URL.

The second node, https://hosso2.sbcpureconsult.internal/psc, worked correctly, redirecting to the load-balanced address psc-ha-vip.sbcpureconsult.internal for authentication before displaying the PSC client UI.

Irrespective of which node was selected I was able to log in to vCenter, then choose Administration, System Configuration, select a node and then Manage, Settings or CA, without receiving any errors.

If I deliberately dropped the first node out of the load balancing config on the NetScaler I didn’t have any issues when accessing the /psc URL by either host name or load balancer name, but if I tried to connect to the first node by its own DNS name or IP I received an HTTP 400 error and the following entry in:

/storage/log/vmware/psc-client/psc-client.log

[2018-10-08 12:05:20.347] [ERROR] tomcat-http--3 com.vmware.vsphere.client.security.websso.MetadataGeneratorImpl - Error when creating idp metadata.
java.lang.RuntimeException: java.io.IOException: HTTPS hostname wrong:  should be <psc-ha-vip.sbcpureconsult.internal>

It appeared that the HTTP 400 error arose because the psc-client Tomcat application no longer started up correctly on the first node, accompanied by an error in:

/storage/log/vmware/rhttpproxy/rhttpproxy.log

2018-10-08T13:27:10.691Z warning rhttpproxy[7FEA4B941700] [Originator@6876 sub=Default] SSL Handshake failed for stream <SSL(<io_obj p:0x00007fea2c098010, h:27, <TCP '192.168.0.117:443'>, <TCP '192.168.0.121:26417'>>)>: N7Vmacore3Ssl12SSLExceptionE(SSL Exception: error:140000DB:SSL routines:SSL routines:short read)

I repeated in my lab environment the same series of steps I had carried out on the customer site, and was able to confirm the same behaviour. Let me explain at this point that all other vCenter functionality was correct; the issue only affected the /psc URL.

Could this be deemed ‘correct’ behaviour?

If I chose https://psc-ha-vip.sbcpureconsult.internal/psc (the load balancer address) I was initially only able to connect if the second node was online and happened to be selected.

I wanted to confirm, before signing off on the work, whether it should be possible to access the /psc URL on each node deliberately.

After what seemed like a lot of internal dialogue between myself and my inner tech support dept. (sleepless nights!) I was left wondering what could be going wrong, especially as this was the documented procedure from VMware.

Good news, I was able to roll back my lab and re-run the updateSSOConfig.py and UpdateLsEndpoint.py scripts – only to find that the /psc URL did indeed load successfully on both nodes with the NetScaler load balancing in place!

So at least I knew that the correct behaviour is that you should be able to open /psc on both appliances.

By examining my snapshots at different stages I was able to identify a difference between the original migration node and the clean appliance:

When you run the updateSSOconfig.py Python script to repoint the SSO URL to the load balanced address it explains that hostname.txt and server.xml were modified:

# python updateSSOConfig.py --lb-fqdn=psc-ha-vip.sbcpureconsult.internal
script version:1.1.0
executing vmafd-cli command
Modifying hostname.txt
modifying server.xml
Executing StopService --all
Executing StartService --all

I was able to locate hostname.txt files (containing the load balancer address) in:

  • /etc/vmware/service-state/vmidentity/hostname.txt
  • /etc/vmware-sso/keys/hostname.txt (missing on node 2, but contained the local name on node 1)
  • /etc/vmware-sso/hostname.txt

The second of these files was missing on the second node. Why is this? I guess that it is used transiently during the script execution in order to inject the correct value into the server.xml file.

The server XML file is located in the folder:

/usr/lib/vmware-sso/vmware-sts/conf/server.xml

my faulty node contained the following certificate entries under the connector definition:

..store="STS_INTERNAL_SSL_CERT"
certificateKeystoreFile="STS_INTERNAL_SSL_CERT"..

my working node contained:

..store="MACHINE_SSL_CERT"
certificateKeystoreFile="MACHINE_SSL_CERT"..

So I was able to simply copy the server.xml file from the working node (overwriting the original on the faulty node) and also remove the /etc/vmware-sso/keys/hostname.txt file to match the configuration.
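For reference, the fix on the faulty node amounted to something like the following (hostnames and paths as above; take a snapshot or backup first):

cp /usr/lib/vmware-sso/vmware-sts/conf/server.xml /usr/lib/vmware-sso/vmware-sts/conf/server.xml.bak
scp root@hosso02.sbcpureconsult.internal:/usr/lib/vmware-sso/vmware-sts/conf/server.xml /usr/lib/vmware-sso/vmware-sts/conf/
rm /etc/vmware-sso/keys/hostname.txt
reboot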

Following a reboot my first SSO node then responded correctly by redirecting https://hosso01.sbcpureconsult.internal/psc to https://psc-ha-vip.sbcpureconsult.internal/websso to obtain its SAML token before ultimately displaying the PSC client UI.

As a follow up, by examining the STS_INTERNAL_SSL_CERT store I could see that the machine certificate being used was issued by the original Windows vCenter Server 5.5 SSO CA to the subject name:

ssoserver,dc=vsphere,dc=local

This store was not present on the other node, and so the correct load balancing certificate replacement must somehow be omitted by one of the upgrade scripts when this scenario occurs (5.5 SSO to 6.5 PSC).

I hope that this bug gets removed by VMware in due course, particularly as more customers are moving to the appliance based model of vCenter 6.x, but this workaround and method should be considered at least if you run into a similar problem.

NB This post is adapted from a longer discussion on VMware Communities page available under https://communities.vmware.com/thread/598140.

Checking VMware Platform Services Controller 6.5 replication

Following installation of a second Platform Services Controller node in a site, how will you know whether replication is functioning correctly?

Assuming that you've got time to wait 30 seconds for each change to be replicated, you could first try creating a test user on each node within the vsphere.local domain to verify bidirectional communication. But if you prefer to be a little more scientific, or to repeat the process programmatically, you can follow a simple sequence of steps.
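For example, a throwaway account can be created on one node with the dir-cli tool and then looked up on the other a short while later (the syntax below is from memory, so verify it against your build, and delete the account afterwards):

/usr/lib/vmware-vmafd/bin/dir-cli user create --account repltest --first-name repl --last-name test --user-password 'Passw0rd!'
# then, on the partner node, after ~30 seconds:
/usr/lib/vmware-vmafd/bin/dir-cli user find-by-name --account repltest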

The following article from VMware explains the process; however, it omits the leading period (.) character from the Linux commands, so the steps can't be followed verbatim.

https://kb.vmware.com/s/article/2127057

I’ve rewritten the steps that I generally follow below:

Login to the PSC appliance over SSH as the root user

Enter the following commands to change directory and execute the vdcrepadmin tool (bearing in mind here that the administrator user is from the single-sign-on vsphere.local domain)

cd /usr/lib/vmware-vmdir/bin

./vdcrepadmin -f showservers -h hopsc01.xyz.company.com -u administrator -w password

This command lists out all of the PSC nodes which have joined the single-sign-on domain:

cn=hopsc01.xyz.company.com,cn=Servers,cn=HeadOffice,cn=Sites,cn=Configuration,dc=vsphere,dc=local
cn=hopsc02.xyz.company.com,cn=Servers,cn=HeadOffice,cn=Sites,cn=Configuration,dc=vsphere,dc=local

Repeat this step on the second (or additional) PSC nodes:

cn=hopsc01.xyz.company.com,cn=Servers,cn=HeadOffice,cn=Sites,cn=Configuration,dc=vsphere,dc=local
cn=hopsc02.xyz.company.com,cn=Servers,cn=HeadOffice,cn=Sites,cn=Configuration,dc=vsphere,dc=local

Enter the following commands to display the replication partners for each node:

./vdcrepadmin -f showpartners -h hopsc01.xyz.company.com -u administrator -w password

ldap://HOPSC02.xyz.company.com

./vdcrepadmin -f showpartners -h hopsc02.xyz.company.com -u administrator -w password

ldap://hopsc01.xyz.company.com

Enter the following commands to display the replication status of each node with its counterpart replication partners:

./vdcrepadmin -f showpartnerstatus -h hopsc01.xyz.company.com -u administrator -w password

Partner: HOPSC02.xyz.company.com
Host available: Yes
Status available: Yes
My last change number: 4676
Partner has seen my change number: 4676
Partner is 0 changes behind.

./vdcrepadmin -f showpartnerstatus -h hopsc02.xyz.company.com -u administrator -w password

Partner: hopsc01.xyz.company.com
Host available: Yes
Status available: Yes
My last change number: 8986
Partner has seen my change number: 8986
Partner is 0 changes behind.

In these examples the change numbers (unique sequence numbers) are specific to the local host, and will not necessarily match if the nodes were introduced to the site at different times. The important thing to check is whether the replication partner shows any changes that have not yet been communicated, or whether the partner is unavailable.

Bespoke consulting for Virtualisation and Server Based Computing environments

SBC PureConsult is a specialist IT consultancy dedicated to the design and implementation of Citrix XenApp/XenDesktop, VMware vSphere and vCloud based projects. Over the last 18 years we have carried out numerous projects in various industry sectors, and in several cases pioneered custom solutions for implementing other vendors' software upon a virtualised application delivery platform.

We have enjoyed significant success in the Hospitality and Leisure markets and defined the standard for delivering Micros Fidelio's Opera PMS/ORS product using Citrix XenApp/Presentation Server. In fact, we have successfully completed PMS deployment projects for a number of top-tier hotel companies, including delivery of desktop and application environments for several hotels using a private cloud model.

We have also developed specialist knowledge in the Communications Regulations sector surrounding the best-practice implementation of the LStelcom AG SPECTRA series of applications in a Citrix XenApp environment. Following the success gained in deploying the SPECTRA solution using Citrix XenApp, this has become a core strength within our portfolio.

We strongly believe that whilst you can read about our values and core competencies here, talking to us about your specific needs and expectations should be just as rewarding. Please take the time to read through the short introductions to our business and should you have any questions or queries don’t hesitate to discuss them with us.

Click here to view our proposition concerning the implementation of Micros Systems, Inc. Opera using Citrix.

Please contact us to hear about our specialist knowledge working with the LStelcom AG, SPECTRA suite.

Office 365 for Mac, Outlook unread count wrong

I recently received a new MacBook Pro and restored all of my previous applications and data from a Time Machine backup. One small issue that I noticed afterwards was that the Unread mail count (1) was incorrect: even when I set a filter to show only unread items, there were no remaining mails shown. Despite a quick search for the answer online it seems that Office 365 (Outlook 15.0) for Mac is not widely written about yet. The solution I hit upon was quite simple (please be careful to check that your mailbox is correctly synchronised before beginning):

  1. Select the folder which shows the incorrect item count.
  2. Choose Properties on the folder.
  3. Click Empty Cache, in order to remove the local copies of the mailbox folder items (this assumes you’re using the Exchange mailbox as a primary store and not a POP server etc)

All mail items were then immediately removed from the local mailbox cache, following which you can right-click on the folder concerned and choose Synchronise Now.

This simple fix easily resolved my problem.

Locating Personal vDisk with PowerShell script

Dell vRanger is a backup solution for VMware which I've been using for a while to back up a customer's ESXi environment. It's generally OK; however, the vRanger backup configuration wizard does not allow you to specifically exclude Citrix MCS base image disks, which cannot themselves be backed up (.delta disk file types), instead forcing you to define the disks to exclude by the names Hard disk 1, Hard disk 2, etc., which apply identically to every VM in the job.

In this example I DO want to back up the pvDisk but DO NOT want to back up the other two disks, which are deemed unnecessary. The issue I have with this approach is that sometimes (and I don't quite understand why!) the virtual desktops added to the catalog use Hard disk 3 for the user's pvDisk, and others use Hard disk 2.

Perhaps this is just a timing issue with vCenter, but nevertheless I needed a simple way of searching a group of VMs and selecting those which use Hard disk 2 or Hard disk 3, so that I could create separate backup jobs which exclude the non-backup targets, i.e. the delta disk (non-persistent, independent) and the identity disk (persistent, independent).

See below the script which I ended up with after a bit of tinkering. It makes the assumption that the identity disk is less than 1GB in size and that your pvDisk is greater than 1GB (otherwise you may not see anything returned):

#Connect-VIServer -Server vcentersrv1.domain.internal
$VMfilter = 'Win7-XD-C*'
$XenDesktopVMs = Get-VM -Name $VMfilter
Write-Host 'Listing pvDisk names for selected VMs:'
foreach ($vm in $XenDesktopVMs) {
    $hdd = Get-HardDisk -VM $vm | Where-Object {$_.Persistence -eq "Persistent"}
    foreach ($disk in ($hdd | Where-Object {$_.CapacityGB -ge 1})) {
        Write-Host $vm.Name $disk.Name '=' $disk.CapacityGB
    }
}
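To take this a step further and split the VMs into per-slot backup jobs, the same logic can be extended to group the VMs by the disk name holding the pvDisk (a sketch using the variables above):

# Group VMs by which virtual disk slot holds the pvDisk (e.g. Hard disk 2 vs Hard disk 3)
$XenDesktopVMs | ForEach-Object {
    $pvd = Get-HardDisk -VM $_ | Where-Object {$_.Persistence -eq 'Persistent' -and $_.CapacityGB -ge 1}
    [pscustomobject]@{VM = $_.Name; pvDiskSlot = $pvd.Name}
} | Group-Object -Property pvDiskSlot | Format-Table -AutoSize -Property Name,Count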

Repointing vCenter Server to external PSC on load balanced FQDN fails

I have been planning a migration project for a customer for a while which involves moving from an embedded SSO instance on vCenter 5.5 to an external Platform Services Controller instance on 6.5. Suffice it to say, plenty of 'how to' guides exist, alongside the documentation from VMware; however, there is generally only a scant outline of what steps to take when repointing your vCenter to the new load-balanced PSC virtual IP. The topic of this post is what happens when you follow the available load balancing documentation and your VMware Update Manager service fails to start afterwards.

I’ll include the reference articles up front, in case these are the ones which you might also have referred to:

Reference articles:

Configuring HA PSC load balancing on Citrix NetScaler – VMware KB article

Repoint vCenter Server to Another External Platform Services Controller in the Same Domain – VMware KB article

The repoint command:

At the step where you are reminded to repoint your vCenter instances at the new load balanced VIP address you’ll need to use the command:

cmsso-util repoint --repoint-psc psc-ha-vip.sbcpureconsult.internal

However, if you’ve followed the steps precisely, you’re likely to run into the following output when the repoint script attempts to restart the Update Manager service:

What happens:

Validating Provided Configuration …
Validation Completed Successfully.
Executing repointing steps. This will take few minutes to complete.
Please wait …
Stopping all the services …
All services stopped.
Starting all the services …

[… truncated …]

Stderr = Service-control failed. Error Failed to start vmon services.vmon-cli RC=2, stderr=Failed to start updatemgr services. Error: Service crashed while starting

Failed to start all the services. Error {
"resolution": null,
"detail": [
{
"args": [
"Stderr: Service-control failed. Error Failed to start vmon services.vmon-cli RC=2, stderr=Failed to start updatemgr services. Error: Service crashed while starting\n\n"
],
"id": "install.ciscommon.command.errinvoke",
"localized": "An error occurred while invoking external command : 'Stderr: Service-control failed. Error Failed to start vmon services.vmon-cli RC=2, stderr=Failed to start updatemgr services. Error: Service crashed while starting\n\n'",
"translatable": "An error occurred while invoking external command : '%(0)s'"
}
],
"componentKey": null,
"problemId": null
}

Following this issue you might reboot or attempt to start all services directly on the vCenter appliance afterwards and receive:

service-control --start --all

Service-control failed. Error Failed to start vmon services.vmon-cli RC=2, stderr=Failed to start updatemgr services. Error: Service crashed while starting

This again is fairly unhelpful output and doesn't provide any assistance as to the cause of the issue. After much investigation, it turns out that the list of TCP ports covered by the documented load balancing configuration is not complete, which causes the service startup to fail. Because we're not running any other applications on the PSC hosts, it's possible to simplify the configuration on the NetScaler by using wildcard-port services for each server.

NetScaler configuration commands (specific to PSC load balancing):

The following alternative configuration ensures that any PSC service requested by your vCenter Server (or other solutions) will remain persistently connected on a 'per host' basis for up to 1440 minutes, which is the default lifetime of a vCenter Web Client session. This differs from VMware's documented approach, which load balances each service individually but evidently misses out some crucial ports.

add server hosso01.sbcpureconsult.internal 192.168.0.117
add server hosso02.sbcpureconsult.internal 192.168.0.116

add service hosso01.sbcpureconsult.internal_TCP_ANY hosso01.sbcpureconsult.internal TCP * -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -sp OFF -cltTimeout 9000 -svrTimeout 9000 -CKA NO -TCPB NO -CMP NO

add service hosso02.sbcpureconsult.internal_TCP_ANY hosso02.sbcpureconsult.internal TCP * -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -sp OFF -cltTimeout 9000 -svrTimeout 9000 -CKA NO -TCPB NO -CMP NO

add lb vserver lb_hosso01_02_TCP_ANY TCP 192.168.0.122 * -persistenceType SOURCEIP -timeout 1440 -cltTimeout 9000

bind lb vserver lb_hosso01_02_TCP_ANY hosso01.sbcpureconsult.internal_TCP_ANY

bind lb vserver lb_hosso01_02_TCP_ANY hosso02.sbcpureconsult.internal_TCP_ANY

Once this configuration is put in place you’ll find that the vCenter Update Manager service will start correctly and your repoint will be successful.

Edit: Following the above configuration steps to get past the installation issue, I've since improved the list of ports load balanced by the NetScaler, extending the list that VMware published for vCenter in their docs page. By enhancing the original series of ports I think the initial issue can be resolved without resorting to IP-based wildcard load balancing.

I’ve included the full configuration below for reference:

Thanks for reading!

If you find this useful drop me a message via my contact page.

add server hosso01.sbcpureconsult.internal 192.168.0.117
add server hosso02.sbcpureconsult.internal 192.168.0.116
add service hosso01_TCP80 hosso01.sbcpureconsult.internal TCP 80 -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -sp OFF -cltTimeout 9000 -svrTimeout 9000 -CKA NO -TCPB NO -CMP NO
add service hosso01_TCP88 hosso01.sbcpureconsult.internal TCP 88 -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -sp OFF -cltTimeout 9000 -svrTimeout 9000 -CKA NO -TCPB NO -CMP NO
add service hosso01_TCP389 hosso01.sbcpureconsult.internal TCP 389 -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -sp OFF -cltTimeout 9000 -svrTimeout 9000 -CKA NO -TCPB NO -CMP NO
add service hosso01_TCP443 hosso01.sbcpureconsult.internal TCP 443 -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -sp OFF -cltTimeout 9000 -svrTimeout 9000 -CKA NO -TCPB NO -CMP NO
add service hosso01_TCP514 hosso01.sbcpureconsult.internal TCP 514 -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -sp OFF -cltTimeout 9000 -svrTimeout 9000 -CKA NO -TCPB NO -CMP NO
add service hosso01_TCP636 hosso01.sbcpureconsult.internal TCP 636 -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -sp OFF -cltTimeout 9000 -svrTimeout 9000 -CKA NO -TCPB NO -CMP NO
add service hosso01_TCP1514 hosso01.sbcpureconsult.internal TCP 1514 -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -sp OFF -cltTimeout 9000 -svrTimeout 9000 -CKA NO -TCPB NO -CMP NO
add service hosso01_TCP2012 hosso01.sbcpureconsult.internal TCP 2012 -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -sp OFF -cltTimeout 9000 -svrTimeout 9000 -CKA NO -TCPB NO -CMP NO
add service hosso01_TCP2014 hosso01.sbcpureconsult.internal TCP 2014 -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -sp OFF -cltTimeout 9000 -svrTimeout 9000 -CKA NO -TCPB NO -CMP NO
add service hosso01_TCP2015 hosso01.sbcpureconsult.internal TCP 2015 -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -sp OFF -cltTimeout 9000 -svrTimeout 9000 -CKA NO -TCPB NO -CMP NO
add service hosso01_TCP2020 hosso01.sbcpureconsult.internal TCP 2020 -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -sp OFF -cltTimeout 9000 -svrTimeout 9000 -CKA NO -TCPB NO -CMP NO
add service hosso01_TCP5480 hosso01.sbcpureconsult.internal TCP 5480 -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -sp OFF -cltTimeout 9000 -svrTimeout 9000 -CKA NO -TCPB NO -CMP NO
add service hosso01_TCP7444 hosso01.sbcpureconsult.internal TCP 7444 -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -sp OFF -cltTimeout 9000 -svrTimeout 9000 -CKA NO -TCPB NO -CMP NO
add service hosso02_TCP80 hosso02.sbcpureconsult.internal TCP 80 -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -sp OFF -cltTimeout 9000 -svrTimeout 9000 -CKA NO -TCPB NO -CMP NO
add service hosso02_TCP88 hosso02.sbcpureconsult.internal TCP 88 -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -sp OFF -cltTimeout 9000 -svrTimeout 9000 -CKA NO -TCPB NO -CMP NO
add service hosso02_TCP389 hosso02.sbcpureconsult.internal TCP 389 -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -sp OFF -cltTimeout 9000 -svrTimeout 9000 -CKA NO -TCPB NO -CMP NO
add service hosso02_TCP443 hosso02.sbcpureconsult.internal TCP 443 -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -sp OFF -cltTimeout 9000 -svrTimeout 9000 -CKA NO -TCPB NO -CMP NO
add service hosso02_TCP514 hosso02.sbcpureconsult.internal TCP 514 -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -sp OFF -cltTimeout 9000 -svrTimeout 9000 -CKA NO -TCPB NO -CMP NO
add service hosso02_TCP636 hosso02.sbcpureconsult.internal TCP 636 -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -sp OFF -cltTimeout 9000 -svrTimeout 9000 -CKA NO -TCPB NO -CMP NO
add service hosso02_TCP1514 hosso02.sbcpureconsult.internal TCP 1514 -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -sp OFF -cltTimeout 9000 -svrTimeout 9000 -CKA NO -TCPB NO -CMP NO
add service hosso02_TCP2012 hosso02.sbcpureconsult.internal TCP 2012 -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -sp OFF -cltTimeout 9000 -svrTimeout 9000 -CKA NO -TCPB NO -CMP NO
add service hosso02_TCP2014 hosso02.sbcpureconsult.internal TCP 2014 -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -sp OFF -cltTimeout 9000 -svrTimeout 9000 -CKA NO -TCPB NO -CMP NO
add service hosso02_TCP2015 hosso02.sbcpureconsult.internal TCP 2015 -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -sp OFF -cltTimeout 9000 -svrTimeout 9000 -CKA NO -TCPB NO -CMP NO
add service hosso02_TCP2020 hosso02.sbcpureconsult.internal TCP 2020 -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -sp OFF -cltTimeout 9000 -svrTimeout 9000 -CKA NO -TCPB NO -CMP NO
add service hosso02_TCP5480 hosso02.sbcpureconsult.internal TCP 5480 -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -sp OFF -cltTimeout 9000 -svrTimeout 9000 -CKA NO -TCPB NO -CMP NO
add service hosso02_TCP7444 hosso02.sbcpureconsult.internal TCP 7444 -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -sp OFF -cltTimeout 9000 -svrTimeout 9000 -CKA NO -TCPB NO -CMP NO
add lb vserver lb_hosso01_02_80 TCP 192.168.0.122 80 -timeout 1440 -cltTimeout 9000
add lb vserver lb_hosso01_02_88 TCP 192.168.0.122 88 -timeout 1440 -cltTimeout 9000
add lb vserver lb_hosso01_02_389 TCP 192.168.0.122 389 -timeout 1440 -cltTimeout 9000
add lb vserver lb_hosso01_02_443 TCP 192.168.0.122 443 -timeout 1440 -cltTimeout 9000
add lb vserver lb_hosso01_02_514 TCP 192.168.0.122 514 -timeout 1440 -cltTimeout 9000
add lb vserver lb_hosso01_02_636 TCP 192.168.0.122 636 -timeout 1440 -cltTimeout 9000
add lb vserver lb_hosso01_02_1514 TCP 192.168.0.122 1514 -timeout 1440 -cltTimeout 9000
add lb vserver lb_hosso01_02_2012 TCP 192.168.0.122 2012 -timeout 1440 -cltTimeout 9000
add lb vserver lb_hosso01_02_2014 TCP 192.168.0.122 2014 -timeout 1440 -cltTimeout 9000
add lb vserver lb_hosso01_02_2015 TCP 192.168.0.122 2015 -timeout 1440 -cltTimeout 9000
add lb vserver lb_hosso01_02_2020 TCP 192.168.0.122 2020 -timeout 1440 -cltTimeout 9000
add lb vserver lb_hosso01_02_5480 TCP 192.168.0.122 5480 -timeout 1440 -cltTimeout 9000
add lb vserver lb_hosso01_02_7444 TCP 192.168.0.122 7444 -timeout 1440 -cltTimeout 9000
bind lb vserver lb_hosso01_02_80 hosso01_TCP80
bind lb vserver lb_hosso01_02_80 hosso02_TCP80
bind lb vserver lb_hosso01_02_88 hosso01_TCP88
bind lb vserver lb_hosso01_02_88 hosso02_TCP88
bind lb vserver lb_hosso01_02_389 hosso01_TCP389
bind lb vserver lb_hosso01_02_389 hosso02_TCP389
bind lb vserver lb_hosso01_02_443 hosso01_TCP443
bind lb vserver lb_hosso01_02_443 hosso02_TCP443
bind lb vserver lb_hosso01_02_514 hosso01_TCP514
bind lb vserver lb_hosso01_02_514 hosso02_TCP514
bind lb vserver lb_hosso01_02_636 hosso01_TCP636
bind lb vserver lb_hosso01_02_636 hosso02_TCP636
bind lb vserver lb_hosso01_02_1514 hosso01_TCP1514
bind lb vserver lb_hosso01_02_1514 hosso02_TCP1514
bind lb vserver lb_hosso01_02_2012 hosso01_TCP2012
bind lb vserver lb_hosso01_02_2012 hosso02_TCP2012
bind lb vserver lb_hosso01_02_2014 hosso01_TCP2014
bind lb vserver lb_hosso01_02_2014 hosso02_TCP2014
bind lb vserver lb_hosso01_02_2015 hosso01_TCP2015
bind lb vserver lb_hosso01_02_2015 hosso02_TCP2015
bind lb vserver lb_hosso01_02_2020 hosso01_TCP2020
bind lb vserver lb_hosso01_02_2020 hosso02_TCP2020
bind lb vserver lb_hosso01_02_5480 hosso01_TCP5480
bind lb vserver lb_hosso01_02_5480 hosso02_TCP5480
bind lb vserver lb_hosso01_02_7444 hosso01_TCP7444
bind lb vserver lb_hosso01_02_7444 hosso02_TCP7444
add lb group pg_hosso_01_02 -persistenceType SOURCEIP -timeout 1440
bind lb group pg_hosso_01_02 lb_hosso01_02_80
bind lb group pg_hosso_01_02 lb_hosso01_02_88
bind lb group pg_hosso_01_02 lb_hosso01_02_389
bind lb group pg_hosso_01_02 lb_hosso01_02_443
bind lb group pg_hosso_01_02 lb_hosso01_02_514
bind lb group pg_hosso_01_02 lb_hosso01_02_636
bind lb group pg_hosso_01_02 lb_hosso01_02_1514
bind lb group pg_hosso_01_02 lb_hosso01_02_2012
bind lb group pg_hosso_01_02 lb_hosso01_02_2014
bind lb group pg_hosso_01_02 lb_hosso01_02_2015
bind lb group pg_hosso_01_02 lb_hosso01_02_2020
bind lb group pg_hosso_01_02 lb_hosso01_02_5480
bind lb group pg_hosso_01_02 lb_hosso01_02_7444
set lb group pg_hosso_01_02 -persistenceType SOURCEIP -timeout 1440

XenApp 7.x open published apps session report PowerShell script

Whilst there’s many amazing things being introduced by Citrix recently (in the XenApp/XenDesktop space) I do sometimes feel that Citrix Studio can be somewhat limited in comparison to previous admin tools.

I would say that one of the common things administrators and consultants need to know on a daily basis is how many instances of each published app are being run within a Citrix environment. I was a little perplexed at first why this wasn't easily available through Citrix Director without making a connection directly to the database via OData, but I guess in the end they decided that it simply wasn't relevant.

So I’ve been working on a PowerShell script to give me a very simple view of how an environment’s application usage stacks up, and from there on in I can decide whether everything’s running fine or dig a little deeper.

The first drafts of the script required me to specify manually the delivery group(s) against which it would be run, but in this example I'm using a multi-select list box to allow me to choose more than one (just hold down the CTRL key). However, since each execution of the script only gives a point-in-time view, this example refreshes every 60 seconds until the maximum interval of one day has passed.

The sort order is defined by the total number of application instances running, from largest to least, so bear this in mind when selecting multiple delivery groups, as the resulting view may not be what you're looking for.

if ((Get-PSSnapin -Name "Citrix.Broker.Admin.V2" -ErrorAction SilentlyContinue) -eq $Null){Add-PSSnapin Citrix.Broker.Admin.V2}
$selectmachines = @()
$count = 1440 # Script will run until 1 day has passed, updating every 60 seconds
$selectdg = Get-BrokerDesktopGroup | Select-Object -Property Name, UID | Sort-Object -Property UID | Out-GridView -OutputMode Multiple -Title 'Select one or more delivery groups to display active sessions'
foreach ($i in $selectdg) {
    $selectmachines += Get-BrokerMachine -DesktopGroupUid $i.Uid | Select-Object MachineName -ExpandProperty MachineName
}
Do {
    clear #Reset the screen contents before redisplaying the connection count
    Get-BrokerApplicationInstance -Filter 'MachineName -in $selectmachines' | Group-Object -Property ApplicationName | Sort-Object -Property Count -Descending | Format-Table -AutoSize -Property Count,Name
    $count--
    Start-Sleep -Seconds 60
} while ($count -ne 0)