Deploying Citrix Ingress Controller with Kubernetes

Citrix Ingress Controller is a niche but seriously interesting innovation from Citrix, developed to bring enhanced application delivery capability to the Kubernetes container orchestration platform. This article covers some basics of Kubernetes and ingress using Citrix ADC, but more importantly highlights some specific gaps in the documentation which are no longer appropriate for Kubernetes 1.16 and above due to API changes.

Many Citrix application and networking users will already be familiar with the hardware-based or virtual Citrix NetScaler or ADC platforms, which bring L4 through L7 load balancing, URL responder and rewrite features (amongst others) to conventional or virtualised networking environments. What you now have with Citrix Ingress Controller and ADC MPX/VPX is the ability to integrate Kubernetes with your existing ADCs, or to introduce containerised Citrix ADC CPX NetScaler(s), so that you can deploy transient containerised NetScaler ADC instances within your Kubernetes platform, enabling per-application networking services.

What is great about this solution is the way that it creates an automated API interface between Kubernetes and the NetScaler's Nitro REST API. When a new containerised app is presented to the outside world via a specially annotated ingress, CIC will instantly create load balancing and content switching vservers along with rewrite rules for you, and even update or remove them when your container is modified or removed. This takes all of the manual work out of updating your ADC configuration on a per-app basis.

There are two basic ways in which to incorporate Citrix ADC into Kubernetes, namely ‘north-south’ and ‘east-west’ options. Familiar ingress solutions such as NGINX are often used within Kubernetes to attach the container networking stack to the outside world, since pod networking is normally completely abstracted from the user network in order to facilitate clean application separation. In a ‘north-south’ implementation you can think of the ingress controller (e.g. NGINX or Citrix ADC) as the front door to your application, with the remaining container based application networking presented through service endpoints within the backend network.

In an ‘east-west’ topology you can implement Citrix ADC CPX as a side-car to your container application in order to provide advanced ADC features within the Kubernetes network to enhance inter-container communication. This is a more advanced topology, but nonetheless directly intended for deployment within the Kubernetes infrastructure as a container. Citrix have a nice series of diagrams which highlight the tier 1 and tier 2 scenarios here.

Prerequisites

I’m going to be talking about bare-metal scenarios here rather than cloud-based environments such as Azure AKS; however, to use these examples you will need to have created a Kubernetes 1.16 cluster first and be able to interact with it using kubectl. I have been using Rancher to build my Kubernetes clusters on vSphere, which is itself a whole other subject which I hope to return to in a different post, but you could always use something like Minikube running within a desktop hypervisor (let me know how you get on!).

In order to use the implementation examples below you will need to have deployed a Citrix NetScaler MPX or VPX v12.1 / 13 in your network which is able to communicate with the Kubernetes API and cluster nodes. My lab uses a flat network range of 192.168.0.0/24, for instance, so the Kubernetes API is available on the same network as my NetScaler. However, the backend pod networks are in the range 10.42.x.0/24, where each node hosts a separate range. Citrix Ingress Controller will take care of adding the network routes to these backend networks, so they don’t have to be reachable from your desktop.

For the purposes of a lab-type exercise it doesn’t matter if your Citrix ADC is used for other features, e.g. LB or Citrix Gateway, because Citrix Ingress Controller will complement your infrastructure without replacing any of the existing configuration. It’s probably not a great idea to launch straight into this using your production ADC instance though; best stick to the lab environment!

Create a system user on Citrix ADC

Your Citrix Ingress Controller will talk to the NetScaler Nitro API directly using a user account which you define within Kubernetes. You could use an existing user, or create a new one. For instance, the following commands will create a new user called cic on the NetScaler along with a new command policy:

add system user cic mypassword
add cmdpolicy cic-policy ALLOW "^(?!shell)(?!sftp)(?!scp)(?!batch)(?!source)(?!.*superuser)(?!.*nsroot)(?!install)(?!show\s+system\s+(user|cmdPolicy|file))(?!(set|add|rm|create|export|kill)\s+system)(?!(unbind|bind)\s+system\s+(user|group))(?!diff\s+ns\s+config)(?!(set|unset|add|rm|bind|unbind|switch)\s+ns\s+partition).*|(^install\s*(wi|wf))|(^(add|show)\s+system\s+file)"

NB: I’ve seen a problem where the above command might error out complaining about an unexpected quote character (smart quotes introduced by copy-and-paste are a likely culprit); it doesn’t seem to interfere with the creation of the command policy though.

In case you have any difficulties whilst attempting to recreate the steps in this post you can always try first using the ‘superuser’ command policy and then refine it until it matches the command permissions that you’re comfortable with.
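As a sketch, that lab-only fallback looks like the following NetScaler CLI line, binding the built-in superuser command policy to the cic user (swap it for the restricted cic-policy once you have verified everything works):

```
bind system user cic superuser 100
```

The priority value (100 here) simply orders the policy relative to any others bound to the same user.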

In addition, you may need to add rewrite module permissions if you’re going to use the rewrite CRDs; you can tack these on to the end of the existing policy definition, before the final quote mark:

(^(?!rm)\S+\s+rewrite\s+\S+)|(^(?!rm)\S+\s+rewrite\s+\S+\s+.*)

Finally, bind the newly created command policy to your new cic user.

bind system user cic cic-policy 0

Deploy Citrix Ingress Controller using YAML

This section differs slightly from the official Citrix Ingress Controller instructions. Please take care to understand the differences; they are mainly due to a desire to create better separation between components and configuration settings.

Create a new namespace to hold the secret and other CIC components. The commands below specify the namespace explicitly; you can omit it and place the components in the default namespace instead. It’s up to you, but for tidiness I created a dedicated namespace.

kubectl create namespace ingress-citrix

Create a new Kubernetes secret to store your Nitro API username and password. Using kubectl connect to your cluster and create a new secret to store the data.

kubectl create secret generic nslogin --from-literal=username=cic --from-literal=password=mypassword -n ingress-citrix

In my testing I ran into what I think is a Citrix documentation error for the above command, where they show single quotes around the cic and mypassword values. Kubernetes converts these values into base64 encoding before they are stored, and will include the quotes in the final value if you’re not careful. In fact that messed up my configuration for a while until I converted the secret back into its original content, using:

kubectl get secret nslogin -n ingress-citrix -o=yaml

Take the values for password: and username: from the secret and pass them through a base64 decoder to check that this hasn’t happened (there are also various web sites which can do this for you), using the following Linux/macOS command for either the username or password taken from the YAML output above.

echo bXlwYXNzd29yZA== | base64 --decode
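You can reproduce the quoting problem locally: if the quotes make it into the literal, they get encoded along with it (using the example mypassword value from the secret above):

```shell
# Encodes only the ten password characters
printf %s "mypassword" | base64
# -> bXlwYXNzd29yZA==

# Single quotes copied into the value are encoded too
printf %s "'mypassword'" | base64
# -> J215cGFzc3dvcmQn
```

If your decoded value comes back wrapped in quotes, recreate the secret without them.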

Using this source file as a reference, modify/add the following entries within the file in order to add the name of your namespace:

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: cic-k8s-role
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cic-k8s-role
subjects:
- kind: ServiceAccount
  name: cic-k8s-role
  namespace: ingress-citrix
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cic-k8s-role
  namespace: ingress-citrix
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cic-k8s-ingress-controller
  namespace: ingress-citrix
(entry continues)

Be aware – the default CIC configuration creates a cluster role which will see events across the whole system, however this can be deliberately (or mistakenly) restricted to only watching API events in specific namespaces if your role contains:

kind: Role

instead of:

kind: ClusterRole

or if you add a NAMESPACE environment variable when defining the env: section of your CIC deployment manifest.
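For completeness, the namespace restriction via the environment variable looks like this in the deployment manifest (a sketch; the namespace value here is just an example):

```yaml
env:
  # Restrict CIC to watching API events in a single namespace
  - name: "NAMESPACE"
    value: "ingress-citrix"
```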

Finally, add/edit the following entries to define how to contact your Citrix ADC, i.e. the NetScaler management IP (NS_IP) and the virtual server IP (NS_VIP) to be used for load balancing/content switching your ingress (the front door):

env:
  # Set NetScaler NSIP/SNIP, SNIP in case of HA (mgmt has to be enabled)
  - name: "NS_IP"
    value: "192.168.0.99"
  - name: "NS_VIP"
    value: "192.168.0.110"
  - name: "LOGLEVEL"
    value: "INFO"
args:
  - --ingress-classes
    citrix
  - --feature-node-watch
    true

NB – the --feature-node-watch option allows NetScaler to create routes automatically in order to reach the backend pod network addresses

NB – the LOGLEVEL default value is DEBUG. You might want to leave it unspecified until you’re happy with the functionality, and then change it to INFO as above.

The version of Citrix Ingress Controller is specified within this YAML file, so if you wish to upgrade your CIC version it can simply be modified and redeployed (as long as no other changes to your deployment are required):

image: "quay.io/citrix/citrix-k8s-ingress-controller:1.6.1"

After updating the above entries, save the modified YAML file as citrix-k8s-ingress-controller.yaml and then deploy it using kubectl:

kubectl create -f citrix-k8s-ingress-controller.yaml

Check that your Citrix Ingress Controller container has deployed correctly:

kubectl get pods -n ingress-citrix

NB – in my lab the kubectl statements are proxied through Rancher in order to reach the correct cluster; if you are not using Rancher the commands work exactly as shown.

Validate the installation of Citrix Ingress Controller

Once CIC is online you can access the logs generated by the container by switching the name of your container into the following command:

kubectl logs cic-k8s-ingress-controller-9bdf7f885-hbbjb -n ingress-citrix

You’ll want to see the following section within the log output, which shows that CIC was able to connect to the Nitro interface and create a test vserver (which also validates that it was able to locate and use the secret created to store the credentials!):

2020-01-10 10:45:50,144  - INFO - [nitrointerface.py:_test_user_edit_permission:3729] (MainThread) Processing test user permission to edit configuration
 2020-01-10 10:45:50,144  - INFO - [nitrointerface.py:_test_user_edit_permission:3731] (MainThread) In this process, CIC will try to create a dummy LB VS with name k8s-dummy_csvs_to_test_edit.deleteme
 2020-01-10 10:45:50,174  - INFO - [nitrointerface.py:_test_user_edit_permission:3756] (MainThread) Successfully created test LB k8s-dummy_csvs_to_test_edit.deleteme  in NetScaler
 2020-01-10 10:45:50,188  - INFO - [nitrointerface.py:_test_user_edit_permission:3761] (MainThread) Finished processing test user permission to edit configuration
 2020-01-10 10:45:50,251  - INFO - [nitrointerface.py:_perform_post_configure_operation:575] (MainThread) NetScaler UPTime is recorded as 7225

At this point the Citrix Ingress Controller container will sit there listening for any Kubernetes API calls it can act on, e.g. creation of an ingress or load balancer object. By default Citrix should pick up any ingress creation event, but in many environments you’ll already have NGINX deployed for various reasons (e.g. it’s a functional part of accessing a dashboard, for instance).

The way to avoid getting things tangled up is to deliberately use ingress class annotations in your specifications. That way other ingress controllers will ignore your requests to build an ingress, but CIC will jump straight in to help. The annotation used for this is:

kubernetes.io/ingress.class: "citrix"

Deploying an application

Let’s start by deploying a simple application into the default namespace. The reason for this is two-fold: firstly it is simple and most likely to work, and secondly it verifies that CIC is able to see services and ingresses outside of its own namespace. I like to use the hello-world image from Tutum because it tells us a little about where it’s running when you access the page.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
  namespace: default
spec:
  selector:
    matchLabels:
      run: hello-world
  replicas: 1
  template:
    metadata:
      labels:
        run: hello-world
    spec:
      containers:
      - name: hello-world
        image: tutum/hello-world
        ports:
        - containerPort: 80

Create a new YAML file and save it as deploy-hello-world.yaml, then use kubectl to deploy it to Kubernetes. If, like me, you proxy kubectl through Rancher, simply prepend rancher to the command.

kubectl apply -f deploy-hello-world.yaml

Creating a service

Now that the application is running in a container you’ll need to create a service using the following YAML; save it as expose-hello-world.yaml. You could use a type spec of ClusterIP or NodePort – it doesn’t matter when CIC is configured with --feature-node-watch=true, although the default is actually ClusterIP.

apiVersion: v1
kind: Service
metadata:
  name: hello-world
  namespace: default
  labels:
    run: hello-world
spec:
  type: NodePort
  ports:
  - port: 80
    protocol: TCP
  selector:
    run: hello-world

kubectl apply -f expose-hello-world.yaml

Defining your ingress

An ingress is a rule which directs incoming traffic to a host address or a given path through to the backend application. It’s quite important to know that an ingress itself is just a rule, there may be load balancers or ingress controllers which receive incoming traffic in your environment but the ingress assists in directing that flow to the backend application.

Again, the use of the ingress class kubernetes.io/ingress.class: "citrix" is an essential component of the ingress example below. It ensures that CIC ‘notices’ the new ingress definition, and tells it to instruct the Citrix ADC to build load balancing or content switching vservers so that your traffic is received when the outside world attempts to talk to your application.

In this ingress example we are going to simulate a scenario where you have a path based entry point into your application, which itself then redirects to the container’s root page. Create a new YAML file with the following content and call it ingress-hello-world.yaml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-world-ingress
  namespace: default
  annotations:
   kubernetes.io/ingress.class: "citrix"
spec:
  rules:
  - host:  www.helloworld.com
    http:
      paths:
      - path: /hello-world
        backend:
          serviceName: hello-world
          servicePort: 80

NB: The author, his company and this post have nothing whatsoever to do with any websites or businesses operating on any real domains such as ‘helloworld.com’; it is chosen simply as a convenient example.

kubectl apply -f ingress-hello-world.yaml

At this point, if everything has worked correctly, you should be able to make a hosts file or DNS entry for www.helloworld.com (of course you could use anything else) which points to the same IP address you used for the NS_VIP address of your load balancer in the Citrix Ingress Controller configuration (citrix-k8s-ingress-controller.yaml). In the examples above the mapping would be:

www.helloworld.com <---> 192.168.0.110
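For a quick test without touching real DNS, curl can pin the hostname to the VIP for a one-off request (values from the example mapping above; this obviously needs the lab ADC to be reachable from your machine):

```shell
curl --resolve www.helloworld.com:80:192.168.0.110 http://www.helloworld.com/hello-world
```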

You’ll see the virtual IP now created for you within the Citrix ADC in two places, firstly a new content switch:

A new content switch with the IP address specified in NS_VIP entry, 192.168.0.110

This new content switch has one or more expressions which match traffic to actions (created through ingress definitions):

Therefore any incoming HTTP request matching the www.helloworld.com host where the request URL includes pages starting with the /hello-world location will be sent to the second newly created object – the vserver defined in the action below:

A new load balancing vserver has been created with address 0.0.0.0

This LB vserver includes a service group whose members are actually represented by the pods where the application is currently running. If you changed the deployment specification to include more replicas then you would see more nodes participating in the service group. Citrix ADC will monitor the health of the exposed node ports in order to ensure that traffic is only directed onto running pods.

And now when we visit the page via the hostname and URL path defined on the ingress, we should see the Tutum hello-world page.

Adding a rewrite policy

Let’s say that you have a single ingress controller which is exposing endpoints on a path basis, e.g. /myapproot but the application available on that service is expecting /myapproot/ instead. Some applications I’ve seen won’t respond properly unless you rewrite your request URL to have the trailing forward slash. Fortunately Citrix Ingress Controller and ADC are able to take care of this through a rewrite rule.

Before you can use this you’ll need to deploy the Custom Resource Definitions for rewrite using the following instructions.

Download the CRD for rewrite and responder YAML from this Citrix URL. Save it as rewrite-responder-policies-deployment.yaml and then deploy it using

kubectl create -f rewrite-responder-policies-deployment.yaml

NB: One very interesting ‘gotcha’ here is that if you associate a CRD with a namespace then it will only create rewrite policies and actions for services in that namespace, so I would recommend using the simplest form of the command shown above, without placing the CRD into the ingress-citrix namespace used in this blog’s example.

Now that it is deployed, you should adapt the following YAML to define how the app rewrite should function, and then save it as cic-rewrite-example.yaml:

apiVersion: citrix.com/v1
kind: rewritepolicy
metadata:
 name: httpapprootrequestmodify
 namespace: default
spec:
 rewrite-policies:
   - servicenames:
       - hello-world
     rewrite-policy:
       operation: replace
       target: http.req.url
       modify-expression: '"/hello-world/"'
       comment: 'HTTP app root request modify'
       direction: REQUEST
       rewrite-criteria: http.req.url.eq("/hello-world")

kubectl create -f cic-rewrite-example.yaml

Using a Load Balancer service instead of Ingress

In the example above I outlined how to create a hello-world deployment and service in order to present an application via an ADC using ingress. However, ingress will only work for HTTP/HTTPS-type traffic and cannot be used for other services. One additional method you can use for other traffic is to define a service of type LoadBalancer rather than ClusterIP or NodePort.

Citrix Ingress Controller has a specific annotation for this scenario which can be added to the service definition to add the IP address which ADC should use. This is the equivalent of a cloud-provider based load balancer in your on-prem Kubernetes environment where you might not use ingress at all.

apiVersion: v1
kind: Service
metadata:
  name: hello-world
  namespace: default
  annotations:  
    service.citrix.com/frontend-ip: '192.168.0.115'
  labels:
    run: hello-world
spec:
  type: LoadBalancer
  ports:
  - port: 80
    protocol: TCP
  selector:
    run: hello-world

Save the YAML example above into cic-loadbalancer-example.yaml and apply it.

kubectl create -f cic-loadbalancer-example.yaml

If you now examine the service which is created it should be apparent that the type has now changed from NodePort or ClusterIP to LoadBalancer. The external IP address is now shown, as defined within the service.citrix.com/frontend-ip: '192.168.0.115' annotation.

Citrix ADC will now direct traffic arriving at that IP address through to any pods which match the label selector. This method allows you to quite simply plug the outside world in to your Kubernetes application infrastructure at L4 without using ingress or path matching rules.

Summary

Citrix Ingress Controller is well worth investigating if you are beginning to implement on-prem Kubernetes based applications and already have an investment in Citrix ADC. If you need additional features such as DDoS protection, advanced rewrite, TCP optimisations etc. then CIC offers quite a lot of benefits over a simple NGINX proxy. The next article planned in this series will examine the sidecar Citrix ADC CPX deployment and how this can enhance visibility of inter-container communication.

Addendum – Rancher specific ingress issue with Citrix Ingress Controller

This section has been included to highlight a specific issue which is currently occurring with the CIC 1.6.1 and Rancher 2.3.4 releases. It seems to be a purely cosmetic issue; it has been the subject of a recent call I had with some of the Citrix people responsible for CIC, who confirmed the behaviour. When an ingress is created it is built successfully by CIC, but its status does not move from ‘Initializing’ to ‘Active’ in Rancher. This is because Rancher is awaiting the External-IP value to be updated in the status, which does not occur because CIC doesn’t mandate that this be actively reported. I’ll update/remove this section from the post if and when this is resolved.

Upgrading Citrix XenApp 7.x VDA version using PowerShell

With the advent of XenApp 7 and, more recently, the higher frequency of VDA cumulative updates, I would generally recommend implementing Citrix Machine Creation Services or another imaging mechanism (such as Provisioning Server) when rolling out new versions of the Virtual Desktop Agent to a large number of catalogs.

However, what happens when you only require one XA server per catalog, or when each one of those servers is handled manually when new application code is deployed? This is more common than you might imagine, especially in Citrix deployments which have per-customer or per-app specific catalogs. The work involved in maintaining a master image can be significant and the serviceability of such relies upon someone knowing how to treat image updates in a way that won’t introduce problems that could arise weeks or months later.

One customer of mine has at least 80 catalogs running one or more XenApp VMs and so it simply doesn’t make sense to maintain a single master image for each, especially when application code updates are delivered frequently. So I set about creating a simple PowerShell script which works in a VMware environment to attach the Citrix upgrade ISO and then run the setup installer within the context of a remote PowerShell session.

Using this method you can easily carry out a bulk upgrade of tens (possibly hundreds) of statically assigned VDAs individually by attaching the ISO and installing the update automatically. The advantage of this time saving approach is that it can even be run in a loop so that the upgrade is only attempted when a server is idle and not running any sessions.

NB – as always, please validate the behaviour of the script in a non-production environment and adjust where necessary to meet your own needs.

Here’s a walkthrough of the script, along with the complete example version included at the end.

  1. The script will load the required plugins from both Citrix and VMware PowerShell modules/plugins (I generally run things like this on the Citrix Delivery Controller and install PowerCLI alongside for convenience)
  2. Request credentials and connect to vCenter via a popup
  3. Request credentials for use with WinRM connections to remote Windows servers via a popup
  4. Create a collection of objects (XA servers) which are powered on, do not have any active sessions and don’t already have the target VDA version installed (see $targetvda variable)
  5. For each VM, sequentially:
    1. Attach the specified .iso image file to the resulting VMs
    2. Determine the drive letter where the XA ISO file has been mounted
    3. Create a command line for the setup installer, and save the command into c:\upgrade_vda.cmd on the XA server
    4. Connect via PowerShell remoting session to the remote XA server
    5. Adjust the EUEM registry node permissions (as per https://support.citrix.com/article/CTX215992)
    6. Execute the c:\upgrade_vda.cmd upgrade script on remote machine via PS session
    7. Disconnect the PowerShell remote session
    8. Reboot the VM via vCenter in order to restart the XA services

Review the script and edit the following variables to reflect your use-case:

$vcentersrv = "yourvcentersrv.domain.com"
$targetvda = '7.15.4000.653'
$isopath = "[DATASTORE] ParentFolderName\XenApp_and_XenDesktop_7_15_4000.iso"

Edit the selection criteria on the VMs which will be upgraded:

$targetvms = Get-BrokerMachine -DesktopKind Shared | Where-Object {($_.AgentVersion -ne $targetvda) -and ($_.PowerState -eq 'On') -and ($_.HostedMachineName -like 'SRV*')}

All servers in my example environment begin with virtual machine names SRV* so this line can be adapted according to the number of VMs which you would like to upgrade, or simply replace with the actual named servers if you want to be more selective:

($_.HostedMachineName -in 'SRV1','SRV2','SRV3')

Finally, consider modifying the following variable from $true to $false in order to actually begin the process of upgrading the selected VMs. I suggest running it in the default $true mode initially in order to validate the initial selection criteria.

$skiprun = $true

Additional work:

I would additionally like to incorporate disconnecting previous VDA .ISO files from the VM before attempting the upgrade. I have noticed that the attached volume label search, e.g. Get-Volume -FileSystemLabel 'XA and XD*', which determines the drive letter selection, is too wide and will erroneously match both XA_7_15_4000.iso and XA_7_15_2000.iso versions without differentiating between them.
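One possible fix (an untested sketch; the exact volume label string is a guess and should be checked against the label of the mounted ISO) is to match the label exactly rather than with a wildcard:

```powershell
# Hypothetical: match the exact label of the 7.15.4000 ISO rather than 'XA and XD*'
$drive = Get-Volume | Where-Object { $_.FileSystemLabel -eq 'XA and XD 7.15.4000' }
$workingdir = ($drive.DriveLetter + ":\x64\XenDesktop Setup\")
```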

I would also like to do further parsing of the installation success result codes in order to decide whether to stop, or simply carry on – however I have used the script on tens of servers without hitting too many roadblocks.

This script could also be adapted to upgrade XenDesktop VDA versions where statically assigned VMs are provided to users.

Final note:

This script does not allow the Citrix installer telemetry to run during the installation because it requires internet access and this generates errors in PowerShell for XenApp servers which can’t talk outbound. You can choose to remove this command line parameter according to your circumstances:

/disableexperiencemetrics

Citrix also optionally collects and uploads anonymised product usage statistics, but again this requires internet access. In order to disable Citrix Telemetry the following setting is used:

/EXCLUDE "Citrix Telemetry Service"

Additionally the Personal vDisk feature is now deprecated, so the script excludes this item in order for it to be removed if it is currently present (so be aware if you’re using PvD):

/EXCLUDE "Personal vDisk"

PowerShell code example:

# Upgrade VDA on remote Citrix servers

if ((Get-PSSnapin -Name "Citrix.Broker.Admin.V2" -ErrorAction SilentlyContinue) -eq $Null){Add-PSSnapin Citrix.Broker.Admin.V2}
if ((Get-PSSnapin -Name "VMware.VimAutomation.Core" -ErrorAction SilentlyContinue) -eq $Null){Add-PSSnapin VMware.VimAutomation.Core}

$vcentersrv = "yourvcentersrv.domain.com"

if ($vmwarecreds -eq $null) {$vmwarecreds = Connect-VIServer -Server $vcentersrv}            # Authenticate with vCenter, you should enter using format DOMAIN\username, then password
if ($creds -eq $null) {$creds = Get-Credential -Message 'Enter Windows network credentials'} # Get Windows network credentials

clear

$targetvda = '7.15.4000.653' #Add the target VDA version number - anything which isn't correct will be upgraded
$isopath = "[DATASTORE] ParentFolderName\XenApp_and_XenDesktop_7_15_4000.iso" #Path to ISO image in VMware
$skiprun = $true #Set this variable to false in order to begin processing all listed VMs

$targetvms = Get-BrokerMachine -DesktopKind Shared | Where-Object {($_.AgentVersion -ne $targetvda) -and ($_.PowerState -eq 'On') -and ($_.HostedMachineName -like 'SRV*')}
Write-Host The following XA VMs will be targeted
Write-Host $targetvms.HostedMachineName
if ($skiprun -eq $true) {write-host Skip run is still enabled; exit}

foreach ($i in $targetvms){

if ($i.AgentVersion -ne $targetvda) {
    Write-Host Processing $i.HostedMachineName found VDA version $i.AgentVersion
    
    if ($i.sessioncount -ne $null) {Write-Host Processing $i.HostedMachineName found $i.sessioncount users are logged on}

    if ($i.sessioncount -eq 0) {#Only continue if there are no logged-on users

        Write-Host Processing $i.HostedMachineName verifying attachment of ISO image
        $cdstate = Get-VM $i.HostedMachineName | Get-CDDrive
        if (($cdstate.IsoPath -ne $isopath) -and ($cdstate -notcontains 'Connected')) { $cdstate | Set-CDDrive -ISOPath $isopath -Confirm:$false -Connected:$true;Write-Host ISO has been attached}

        $s = New-PSSession -ComputerName ($i.MachineName.split('\')[1]) -Credential $creds
            #Create the upgrade command script using correct drive letters
            Write-Host Processing $i.HostedMachineName -NoNewline
            invoke-command -Session $s {
                $drive = Get-Volume -FileSystemLabel 'XA and XD*'
                $workingdir = ($drive.driveletter + ":\x64\XenDesktop Setup\")
                $switches = " /COMPONENTS VDA /EXCLUDE `"Citrix Telemetry Service`",`"Personal vDisk`" /disableexperiencemetrics /QUIET"
                $cmdscript = "`"$workingdir" + "XenDesktopVDASetup.exe`"" + $switches
                Out-File -FilePath c:\upgrade_vda.cmd -InputObject $cmdscript -Force -Encoding ASCII
                Write-Host " wrote script using path" $workingdir
            }
            
            #Adjust the registry permissions remotely
            Write-Host Processing $i.HostedMachineName updating registry permissions
            Invoke-Command -Session $s {
                $acl = Get-Acl "HKLM:\SOFTWARE\Wow6432Node\Citrix\EUEM\LoggedEvents"
                $person = [System.Security.Principal.NTAccount]"Creator Owner"
                $access = [System.Security.AccessControl.RegistryRights]"FullControl"
                $inheritance = [System.Security.AccessControl.InheritanceFlags]"ContainerInherit,ObjectInherit"
                $propagation = [System.Security.AccessControl.PropagationFlags]"None"
                $type = [System.Security.AccessControl.AccessControlType]"Allow"}
            Invoke-Command -Session $s {$rule = New-Object System.Security.AccessControl.RegistryAccessRule($person,$access,$inheritance,$propagation,$type)}
            Invoke-Command -Session $s {$acl.AddAccessRule($rule)}
            Invoke-Command -Session $s {$acl |Set-Acl}
                
            #Execute the command script
            Write-Host Processing $i.HostedMachineName, executing VDA install script
            Invoke-Command -Session $s {& c:\upgrade_vda.cmd} # Runs the upgrade script on remote server
            Remove-PSSession $s #Disconnect the remote PS session
            Restart-VMGuest -VM $i.HostedMachineName -Confirm:$false #Restart the server following either a successful or unsuccessful upgrade
            }
        }
    }

Locating Personal vDisk with PowerShell script

Dell vRanger is a backup solution for VMware which I've been using for a while to back up a customer's ESXi environment. It's generally OK; however, the vRanger backup configuration wizard does not allow you to specifically exclude Citrix MCS base image disks, which cannot themselves be backed up (.delta disk file types). Instead it forces you to define the disks to exclude based upon their names (Hard disk 1, Hard disk 2), which apply identically to every VM in the job.

In this example I DO want to backup the pvDisk but DO NOT want to backup the other two disks, which are deemed unnecessary. The issue I've got with this approach is that (and I don't quite understand why!) the virtual desktops added to the catalog sometimes use Hard disk 3 for the user's pvDisk and others use Hard disk 2.

Perhaps this is just a timing issue with vCenter, but nevertheless I needed to figure out a simple way of searching a group of VMs, selecting those which use Hard disk 2 and those which use Hard disk 3, and creating separate backup jobs which exclude the non-backup targets, i.e. the delta disk (non-persistent independent) and identity disk (persistent independent).

See below the script which I ended up with after a bit of tinkering. It makes an assumption that the identity disk is less than 1GB in size and that your pvDisk is greater than 1GB (otherwise you may not see anything returned):

#Connect-VIServer -Server vcentersrv1.domain.internal
$VMfilter = 'Win7-XD-C*'
$XenDesktopVMs = Get-VM -Name $VMfilter
Write-Host 'Listing pvDisk names for selected VMs:'
foreach ($vm in $XenDesktopVMs) {
    $hdd = Get-HardDisk -VM $vm | Where-Object {$_.Persistence -eq "Persistent"}
    foreach ($disk in ($hdd | Where-Object {$_.CapacityGB -ge 1})) {
        Write-Host $vm.Name $disk.Name '=' $disk.CapacityGB
    }
}

Repointing vCenter Server to external PSC on load balanced FQDN fails

I have been planning a migration project for a customer for a while which involves moving from an embedded SSO instance on vCenter 5.5 to an external Platform Services Controller instance on 6.5. Suffice it to say, plenty of 'how to' guides exist alongside the documentation from VMware; however, there is only a scant outline of what steps to take when repointing your vCenter to the new load balanced PSC virtual IP. The topic of this post is what happens when you follow the available load balancing documentation and your VMware Update Manager service fails to start afterwards.

I’ll include the reference articles up front, in case these are the ones which you might also have referred to:

Reference articles:

Configuring HA PSC load balancing on Citrix NetScaler – VMware KB article

Repoint vCenter Server to Another External Platform Services Controller in the Same Domain – VMware KB article

The repoint command:

At the step where you are reminded to repoint your vCenter instances at the new load balanced VIP address you’ll need to use the command:

cmsso-util repoint --repoint-psc psc-ha-vip.sbcpureconsult.internal

However, if you’ve followed the steps precisely, you’re likely to run into the following output when the repoint script attempts to restart the Update Manager service:

What happens:

Validating Provided Configuration …
Validation Completed Successfully.
Executing repointing steps. This will take few minutes to complete.
Please wait …
Stopping all the services …
All services stopped.
Starting all the services …

[… truncated …]

Stderr = Service-control failed. Error Failed to start vmon services.vmon-cli RC=2, stderr=Failed to start updatemgr services. Error: Service crashed while starting

Failed to start all the services. Error {
"resolution": null,
"detail": [
{
"args": [
"Stderr: Service-control failed. Error Failed to start vmon services.vmon-cli RC=2, stderr=Failed to start updatemgr services. Error: Service crashed while starting\n\n"
],
"id": "install.ciscommon.command.errinvoke",
"localized": "An error occurred while invoking external command : 'Stderr: Service-control failed. Error Failed to start vmon services.vmon-cli RC=2, stderr=Failed to start updatemgr services. Error: Service crashed while starting\n\n'",
"translatable": "An error occurred while invoking external command : '%(0)s'"
}
],
"componentKey": null,
"problemId": null
}

Following this issue you might reboot or attempt to start all services directly on the vCenter appliance afterwards and receive:

service-control --start --all

Service-control failed. Error Failed to start vmon services.vmon-cli RC=2, stderr=Failed to start updatemgr services. Error: Service crashed while starting

This again is fairly unhelpful output and doesn't provide any assistance as to the cause of the issue. After much investigation, it turns out that the list of TCP port numbers which the load balancing documentation details is not complete, causing the service startup to fail. Because we're not running any other applications on the PSC hosts, it's possible to simplify the configuration on the NetScaler by using wildcard port services for each server.

NetScaler configuration commands (specific to PSC load balancing):

The following alternative configuration ensures that any PSC service requested by your vCenter Server (or other solutions) will remain persistently connected on a 'per host' basis for up to 1440 minutes, which is the default lifetime of a vCenter Web Client session. This differs from VMware's documented approach, which load balances each service individually but evidently omits one or more crucial ports.

add server hosso01.sbcpureconsult.internal 192.168.0.117
add server hosso02.sbcpureconsult.internal 192.168.0.116

add service hosso01.sbcpureconsult.internal_TCP_ANY hosso01.sbcpureconsult.internal TCP * -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -sp OFF -cltTimeout 9000 -svrTimeout 9000 -CKA NO -TCPB NO -CMP NO

add service hosso02.sbcpureconsult.internal_TCP_ANY hosso02.sbcpureconsult.internal TCP * -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -sp OFF -cltTimeout 9000 -svrTimeout 9000 -CKA NO -TCPB NO -CMP NO

add lb vserver lb_hosso01_02_TCP_ANY TCP 192.168.0.122 * -persistenceType SOURCEIP -timeout 1440 -cltTimeout 9000

bind lb vserver lb_hosso01_02_TCP_ANY hosso01.sbcpureconsult.internal_TCP_ANY

bind lb vserver lb_hosso01_02_TCP_ANY hosso02.sbcpureconsult.internal_TCP_ANY
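
Once the vserver and services are bound you can sanity-check the result from the NetScaler CLI: show lb vserver confirms the vserver state and service bindings, and show lb persistentSessions lists the per-source-IP persistence entries as clients connect.

```
show lb vserver lb_hosso01_02_TCP_ANY
show lb persistentSessions
```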

Once this configuration is put in place you’ll find that the vCenter Update Manager service will start correctly and your repoint will be successful.

Edit: Following the above configuration steps to get past the installation issue, I've since improved the list of ports that are load balanced by NetScaler, extending the list that VMware published for vCenter in their docs page. By enhancing the original series of ports I think we can resolve the initial issue without resorting to IP-based wildcard load balancing.

I've included the full configuration below for reference. Thanks for reading! If you find this useful, drop me a message via my contact page.

add server hosso01.sbcpureconsult.internal 192.168.0.117
add server hosso02.sbcpureconsult.internal 192.168.0.116
add service hosso01_TCP80 hosso01.sbcpureconsult.internal TCP 80 -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -sp OFF -cltTimeout 9000 -svrTimeout 9000 -CKA NO -TCPB NO -CMP NO
add service hosso01_TCP88 hosso01.sbcpureconsult.internal TCP 88 -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -sp OFF -cltTimeout 9000 -svrTimeout 9000 -CKA NO -TCPB NO -CMP NO
add service hosso01_TCP389 hosso01.sbcpureconsult.internal TCP 389 -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -sp OFF -cltTimeout 9000 -svrTimeout 9000 -CKA NO -TCPB NO -CMP NO
add service hosso01_TCP443 hosso01.sbcpureconsult.internal TCP 443 -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -sp OFF -cltTimeout 9000 -svrTimeout 9000 -CKA NO -TCPB NO -CMP NO
add service hosso01_TCP514 hosso01.sbcpureconsult.internal TCP 514 -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -sp OFF -cltTimeout 9000 -svrTimeout 9000 -CKA NO -TCPB NO -CMP NO
add service hosso01_TCP636 hosso01.sbcpureconsult.internal TCP 636 -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -sp OFF -cltTimeout 9000 -svrTimeout 9000 -CKA NO -TCPB NO -CMP NO
add service hosso01_TCP1514 hosso01.sbcpureconsult.internal TCP 1514 -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -sp OFF -cltTimeout 9000 -svrTimeout 9000 -CKA NO -TCPB NO -CMP NO
add service hosso01_TCP2012 hosso01.sbcpureconsult.internal TCP 2012 -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -sp OFF -cltTimeout 9000 -svrTimeout 9000 -CKA NO -TCPB NO -CMP NO
add service hosso01_TCP2014 hosso01.sbcpureconsult.internal TCP 2014 -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -sp OFF -cltTimeout 9000 -svrTimeout 9000 -CKA NO -TCPB NO -CMP NO
add service hosso01_TCP2015 hosso01.sbcpureconsult.internal TCP 2015 -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -sp OFF -cltTimeout 9000 -svrTimeout 9000 -CKA NO -TCPB NO -CMP NO
add service hosso01_TCP2020 hosso01.sbcpureconsult.internal TCP 2020 -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -sp OFF -cltTimeout 9000 -svrTimeout 9000 -CKA NO -TCPB NO -CMP NO
add service hosso01_TCP5480 hosso01.sbcpureconsult.internal TCP 5480 -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -sp OFF -cltTimeout 9000 -svrTimeout 9000 -CKA NO -TCPB NO -CMP NO
add service hosso01_TCP7444 hosso01.sbcpureconsult.internal TCP 7444 -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -sp OFF -cltTimeout 9000 -svrTimeout 9000 -CKA NO -TCPB NO -CMP NO
add service hosso02_TCP80 hosso02.sbcpureconsult.internal TCP 80 -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -sp OFF -cltTimeout 9000 -svrTimeout 9000 -CKA NO -TCPB NO -CMP NO
add service hosso02_TCP88 hosso02.sbcpureconsult.internal TCP 88 -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -sp OFF -cltTimeout 9000 -svrTimeout 9000 -CKA NO -TCPB NO -CMP NO
add service hosso02_TCP389 hosso02.sbcpureconsult.internal TCP 389 -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -sp OFF -cltTimeout 9000 -svrTimeout 9000 -CKA NO -TCPB NO -CMP NO
add service hosso02_TCP443 hosso02.sbcpureconsult.internal TCP 443 -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -sp OFF -cltTimeout 9000 -svrTimeout 9000 -CKA NO -TCPB NO -CMP NO
add service hosso02_TCP514 hosso02.sbcpureconsult.internal TCP 514 -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -sp OFF -cltTimeout 9000 -svrTimeout 9000 -CKA NO -TCPB NO -CMP NO
add service hosso02_TCP636 hosso02.sbcpureconsult.internal TCP 636 -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -sp OFF -cltTimeout 9000 -svrTimeout 9000 -CKA NO -TCPB NO -CMP NO
add service hosso02_TCP1514 hosso02.sbcpureconsult.internal TCP 1514 -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -sp OFF -cltTimeout 9000 -svrTimeout 9000 -CKA NO -TCPB NO -CMP NO
add service hosso02_TCP2012 hosso02.sbcpureconsult.internal TCP 2012 -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -sp OFF -cltTimeout 9000 -svrTimeout 9000 -CKA NO -TCPB NO -CMP NO
add service hosso02_TCP2014 hosso02.sbcpureconsult.internal TCP 2014 -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -sp OFF -cltTimeout 9000 -svrTimeout 9000 -CKA NO -TCPB NO -CMP NO
add service hosso02_TCP2015 hosso02.sbcpureconsult.internal TCP 2015 -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -sp OFF -cltTimeout 9000 -svrTimeout 9000 -CKA NO -TCPB NO -CMP NO
add service hosso02_TCP2020 hosso02.sbcpureconsult.internal TCP 2020 -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -sp OFF -cltTimeout 9000 -svrTimeout 9000 -CKA NO -TCPB NO -CMP NO
add service hosso02_TCP5480 hosso02.sbcpureconsult.internal TCP 5480 -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -sp OFF -cltTimeout 9000 -svrTimeout 9000 -CKA NO -TCPB NO -CMP NO
add service hosso02_TCP7444 hosso02.sbcpureconsult.internal TCP 7444 -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -sp OFF -cltTimeout 9000 -svrTimeout 9000 -CKA NO -TCPB NO -CMP NO
add lb vserver lb_hosso01_02_80 TCP 192.168.0.122 80 -timeout 1440 -cltTimeout 9000
add lb vserver lb_hosso01_02_88 TCP 192.168.0.122 88 -timeout 1440 -cltTimeout 9000
add lb vserver lb_hosso01_02_389 TCP 192.168.0.122 389 -timeout 1440 -cltTimeout 9000
add lb vserver lb_hosso01_02_443 TCP 192.168.0.122 443 -timeout 1440 -cltTimeout 9000
add lb vserver lb_hosso01_02_514 TCP 192.168.0.122 514 -timeout 1440 -cltTimeout 9000
add lb vserver lb_hosso01_02_636 TCP 192.168.0.122 636 -timeout 1440 -cltTimeout 9000
add lb vserver lb_hosso01_02_1514 TCP 192.168.0.122 1514 -timeout 1440 -cltTimeout 9000
add lb vserver lb_hosso01_02_2012 TCP 192.168.0.122 2012 -timeout 1440 -cltTimeout 9000
add lb vserver lb_hosso01_02_2014 TCP 192.168.0.122 2014 -timeout 1440 -cltTimeout 9000
add lb vserver lb_hosso01_02_2015 TCP 192.168.0.122 2015 -timeout 1440 -cltTimeout 9000
add lb vserver lb_hosso01_02_2020 TCP 192.168.0.122 2020 -timeout 1440 -cltTimeout 9000
add lb vserver lb_hosso01_02_5480 TCP 192.168.0.122 5480 -timeout 1440 -cltTimeout 9000
add lb vserver lb_hosso01_02_7444 TCP 192.168.0.122 7444 -timeout 1440 -cltTimeout 9000
bind lb vserver lb_hosso01_02_80 hosso01_TCP80
bind lb vserver lb_hosso01_02_80 hosso02_TCP80
bind lb vserver lb_hosso01_02_88 hosso01_TCP88
bind lb vserver lb_hosso01_02_88 hosso02_TCP88
bind lb vserver lb_hosso01_02_389 hosso01_TCP389
bind lb vserver lb_hosso01_02_389 hosso02_TCP389
bind lb vserver lb_hosso01_02_443 hosso01_TCP443
bind lb vserver lb_hosso01_02_443 hosso02_TCP443
bind lb vserver lb_hosso01_02_514 hosso01_TCP514
bind lb vserver lb_hosso01_02_514 hosso02_TCP514
bind lb vserver lb_hosso01_02_636 hosso01_TCP636
bind lb vserver lb_hosso01_02_636 hosso02_TCP636
bind lb vserver lb_hosso01_02_1514 hosso01_TCP1514
bind lb vserver lb_hosso01_02_1514 hosso02_TCP1514
bind lb vserver lb_hosso01_02_2012 hosso01_TCP2012
bind lb vserver lb_hosso01_02_2012 hosso02_TCP2012
bind lb vserver lb_hosso01_02_2014 hosso01_TCP2014
bind lb vserver lb_hosso01_02_2014 hosso02_TCP2014
bind lb vserver lb_hosso01_02_2015 hosso01_TCP2015
bind lb vserver lb_hosso01_02_2015 hosso02_TCP2015
bind lb vserver lb_hosso01_02_2020 hosso01_TCP2020
bind lb vserver lb_hosso01_02_2020 hosso02_TCP2020
bind lb vserver lb_hosso01_02_5480 hosso01_TCP5480
bind lb vserver lb_hosso01_02_5480 hosso02_TCP5480
bind lb vserver lb_hosso01_02_7444 hosso01_TCP7444
bind lb vserver lb_hosso01_02_7444 hosso02_TCP7444
add lb group pg_hosso_01_02 -persistenceType SOURCEIP -timeout 1440
bind lb group pg_hosso_01_02 lb_hosso01_02_80
bind lb group pg_hosso_01_02 lb_hosso01_02_88
bind lb group pg_hosso_01_02 lb_hosso01_02_389
bind lb group pg_hosso_01_02 lb_hosso01_02_443
bind lb group pg_hosso_01_02 lb_hosso01_02_514
bind lb group pg_hosso_01_02 lb_hosso01_02_636
bind lb group pg_hosso_01_02 lb_hosso01_02_1514
bind lb group pg_hosso_01_02 lb_hosso01_02_2012
bind lb group pg_hosso_01_02 lb_hosso01_02_2014
bind lb group pg_hosso_01_02 lb_hosso01_02_2015
bind lb group pg_hosso_01_02 lb_hosso01_02_2020
bind lb group pg_hosso_01_02 lb_hosso01_02_5480
bind lb group pg_hosso_01_02 lb_hosso01_02_7444
set lb group pg_hosso_01_02 -persistenceType SOURCEIP -timeout 1440

XenApp 7.x open published apps session report PowerShell script

Whilst there are many amazing things being introduced by Citrix recently (in the XenApp/XenDesktop space), I do sometimes feel that Citrix Studio can be somewhat limited in comparison to previous admin tools.

I would say one of the common things that administrators and consultants need to know on a daily basis is how many instances of each published app are being run within a Citrix environment. I was a little perplexed at first as to why this wasn't easily available through Citrix Director without making connections directly to the database through an OData connection, but I guess in the end they decided that it simply wasn't relevant.

So I’ve been working on a PowerShell script to give me a very simple view of how an environment’s application usage stacks up, and from there on in I can decide whether everything’s running fine or dig a little deeper.

The first drafts of the script originally required me to manually specify the delivery group(s) against which it would be run, but in this example I'm using a multi-select list box to allow me to choose more than one (just hold down the CTRL key). However, since each execution of the script only gives me a point-in-time view, this example script will refresh every 60 seconds until the maximum interval of one day has passed.

The sort order is currently defined based upon the total number of application instances running, ordered by largest to least, so bear this in mind when selecting multiple delivery groups as the resulting view may not be what you’re looking for.

if ((Get-PSSnapin -Name "Citrix.Broker.Admin.V2" -ErrorAction SilentlyContinue) -eq $Null){Add-PSSnapin Citrix.Broker.Admin.V2}
$selectmachines = @()
$count = 1440 # Script will run until 1 day has passed, updating every 60 seconds
$selectdg = Get-BrokerDesktopGroup | Select-Object -Property Name, UID | Sort-Object -Property UID | Out-GridView -OutputMode Multiple -Title 'Select one or more delivery groups to display active sessions'
foreach ($i in $selectdg) {
    $selectmachines += Get-BrokerMachine -DesktopGroupUid $i.Uid | Select-Object -ExpandProperty MachineName
}
Do {
    Clear-Host #Reset the screen contents before redisplaying the connection count
    Get-BrokerApplicationInstance -Filter 'MachineName -in $selectmachines' | Group-Object -Property ApplicationName | Sort-Object -Property Count -Descending | Format-Table -AutoSize -Property Count,Name
    $count--
    Start-Sleep -Seconds 60
} while ($count -ne 0)


Why Citrix and Microsoft’s new servicing models now make sense

OK, so I wasted a little bit of time. I know.. it’s a shame when that happens, but it’s even worse to make the same mistake twice! So please read on in case you head down the same road without keeping your eyes peeled for the pitfalls. So what’s the take home message of this post? Microsoft and Citrix now need us (no actually, require us) to do as every professional should always do, and plan our release schedules properly!

This post discusses an issue I experienced installing the Citrix XenDesktop VDA 7.15 on Windows 10 Fall Creators Update – receiving error 1603 when the Citrix Diagnostic Facility component failed to install. If you're short on time, skip to the end for a series of helpful links – otherwise, bear with me and I'll take you on a short journey to a grudging mindset shift!

I'd wasted a morning patching a Citrix base image from Windows 10 build 1703 to the 1709 Fall Creators Update because we were looking to create a clean desktop for some developers to test their software releases on. But try as I might, the Citrix 7.15 VDA installer wouldn't complete and always terminated with error 1603 – the Citrix Diagnostic Facility (CDF) service had failed to install. Even after investigating the logs it wasn't clear why, other than a permissions failure on C:\Windows\assembly\tmp – and checking those permissions showed little evidence for the cause of the problem.

But here goes: after a little bit more digging I discovered that the latest Citrix VDA does NOT support the latest Semi-Annual Channel 'targeted' release of Windows 10 (1709). See issue #1 on the Citrix blog post.

Could I believe it? No, not at first really – how could a desktop OS release made generally available on 17th October 2017 not be compatible with the latest Citrix VDA, which had also recently been chosen as the most recent Long Term Service Release version? Surely this new XenDesktop LTSR release would have been coordinated with Microsoft's own release schedule, with release candidates being shared well in advance so that both vendors would have had a chance to test their interaction together?

Apparently not – and therein lies the message. You cannot expect that each vendor is attempting to align their minor and major servicing schedules with each other! ..Assuming.. that the latest Citrix VDA will work with the latest release of Windows is no longer going to float, and that’s why we all need to fully commit to the “test, test and test again” approach.

In fact, the logic was established a long time ago. The last LTSR release of XenDesktop (7.6) did not support Windows 10, claiming this as a 'notable exclusion', despite the fact that early Windows 10 versions had been around for some time.

Notable Exclusions: These are components or features that are just not well suited for the extended lifecycle typically because this is newer technology that we plan on making significant enhancements to over time.  This is where Windows 10 fell when we originally launched 7.6 LTSR.

Citrix then later added retrospective support for Windows 10 by encouraging the use of VDA 7.9 in conjunction with the XenDesktop 7.6 LTSR release, when it appeared that this combination worked well. However, hope for future compatibility was made clear even at this time, with the following statement being added to the end of that post.

Finally, we want to note that Citrix is targeting to announce a new LTSR version in 2017 adding full LTSR benefits for the Windows 10 platform. However, this current announcement makes it easier for you to jump on Windows 10 desktop virtualization today while still maintaining all the benefits of being LTSR compliant.

And whilst it is indeed true that the XenDesktop 7.15 LTSR release fully supports Windows 10 Current Branch/Semi-Annual Channel, it seems that only a simple statement on 'requiring VDA 7.9 or later' was made, as long as you are happy to stick to the 'Current Release' path:

Note about Windows 10: Regular support for Windows 10 is available through the Current Release path. Windows 10 does not get the full set of 7.15 LTSR benefits. For deployments that include Windows 10 machines, Citrix recommends that you use the Current Release Version 7.9 or later of the VDA for Desktop OS and of Provisioning Services.

A separate article entitled Windows 10 Compatibility with Citrix XenDesktop makes this clearer,

  • VDA: Although Semi-Annual Channel Targeted releases are intended for pilot trials, Citrix will provide limited support (configuration only) for VDA installations on Windows 10 Semi-Annual Channel Targeted releases, starting from version 1709 forward.

…and goes on further to say that 'targeted' releases such as the Windows 10 Fall Creators Update are not guaranteed to be compatible:

While the Desktop OS VDA is expected to install and work on Windows 10 Semi-Annual Channel Targeted versions, Citrix does not guarantee proper functionality with these builds.

So there – it’s now clear. The LTSR releases, even the most recent, were never intended to deliver the latest compatibility with Microsoft’s own servicing schedule. It just happens in this case that VDA 7.15 is the most recent VDA available currently and for some reason Citrix also chose to adopt this as the version included in the latest LTSR release.

If you’re intending to use LTSR versions and maintain full compatibility with Windows 10 it seems that the only sensible way forward is to fall back on the most recent Semi-Annual Channel release (build 1703) and wait for the next LTSR cumulative release that adds support for the previously circulated Win10 ‘targeted’ version after all of the wrinkles have been ironed out. This is very well explained at the end of the linked article above, which simply states that you can’t be sure of support for specific Windows 10 versions unless you match them with the approved VDA for that Semi-annual channel release. Anything newer just might not work.

  • Windows 10 Creators Update (Version 1703) – use VDA 7.9/7.15 for LTSR support
  • Windows 10 Fall Creators Update (Version 1709) – Not supported!

So what’s the moral of the story, after all? Citrix and Microsoft have taken the stance to deliver frequent releases for those who are happy to trail-blaze and hotfix, depending upon their current release and semi-annual targeted releases respectively. But if you want to rely upon well-tested and proven operating system and VDA platforms – which are likely to survive the test of time (without high levels of maintenance and unpredictable results) then stick to the aligned Citrix LTSR and Windows Semi-Annual channel versions and plan your releases several months in advance. Anything else, and you could be left scratching your head for a short while until the penny drops!

Update: Since writing this post I’ve become aware of a clear summary of the current situation documented within Carl Stalhood’s excellent VDA 7.15 installation notes under point #7. Citrix have stated that they plan to provide retrospective support for VDA 7.15 on Windows 10 Version 1709 under two scenarios:

  • A new patch (now released) on Nov 14th 2017 (KB4051314) will provide the ability to update an existing Windows installation and existing VDA to Windows 10 version 1709
  • A new patch to be released via the Microsoft Update Catalogue in November Week 4 will allow you to do a fresh new VDA install on a clean Windows 10 version 1709.

NB This is a first draft of this post with minor edits. If you believe that anything included here is erroneous or misleading please get in contact/drop me a line so that I can clean it up. Thanks for reading!

Useful references:
Windows 10 Compatibility with Citrix XenDesktop
Windows 10 Fall Creators Update (v1709) – Citrix Known Issues
Windows 10 Creators Update (v1703) – Citrix Known Issues
XenApp and XenDesktop 7.15 LTSR
Adding Windows 10 Compatibility to XenApp and XenDesktop 7.6 LTSR
FAQ: XenApp, XenDesktop, and XenServer Servicing Options (LTSR)
Windows 10 update history
https://blogs.technet.microsoft.com/windowsitpro/2017/07/27/waas-simplified-and-aligned/
https://blogs.windows.com/windowsexperience/2017/10/17/get-windows-10-fall-creators-update/

Listing Citrix session count by application using PowerShell

You may have found that Citrix Director offers a fairly limited set of information regarding the number of users connected to each XenApp host, and there was no simple way (until, I think, the XenApp 7.9 update) to view the published app session count for each application.

Here's a useful PowerShell snippet which should help you out if you haven't upgraded yet. It's a concatenation of several commands which list all of the sessions and then group and sort them into a convenient list.

You'll need to open PowerShell on a Citrix delivery controller and then type:

Add-PSsnapin Citrix*

Following which, you should enter the following command:

Get-BrokerApplicationInstance | group-Object -Property ApplicationName | sort-object -property Count -Descending | Format-Table -AutoSize -Property Count,Name

The output generated is a simple two-column table listing the instance Count against each application Name, sorted from most to least used.

You can of course tailor the Get-BrokerApplicationInstance query to select a smaller subset of sessions on which to group and sort, using:

Get-BrokerApplicationInstance -MachineName 'DOM\HOXENAPP01'

which will simply tell you the distribution of published application sessions for an individual XenApp host. Hope this helps!

Updating password field names with multiple NetScaler Gateway virtual servers

Imagine a situation where you want to change your NetScaler Gateway's logon page to include alternative prompts for the Username, Password 1 and Password 2 fields, and need to update the language-specific .XML files. This has been documented before, and isn't too hard to figure out once you've found a couple of 'How to' guides on the Internet. However, I have since come across a limitation when trying to apply the NetScaler's new 'Custom' design template to several different NetScaler Gateway virtual servers at the same time: whilst you can define your own custom design, it is automatically applied to all instances of the virtual server residing on the NetScaler – so if you define custom fields then you've defined them for all.

This may not be a problem for some people, but what if the secondary authentication mechanism is an RSA token for one site, and a VASCO token for another? How do you go about configuring alternative sets of custom logon fields? Most of the answers are already out there in one form or another, but I lacked one simple beginning to end description of the solution (I tried several alternate options including rewrite policies which didn’t quite work before I opted for this approach):

Background (NetScaler 10.5.x build)

The Citrix NetScaler VPN default logon page has already been modified in order to ask for 'AD password' and 'VASCO token' values instead of Password 1: and Password 2:, as detailed in http://support.citrix.com/article/CTX126206

This was achieved by editing index.html and login.js files in /var/netscaler/gui/vpn of the NS as per the Citrix article above.

In addition, the resources path which holds the language based .XML files in /var/netscaler/gui/vpn/resources has been backed up into /var/customisations so that the /nsconfig/rc.netscaler file can copy them back into the correct location if they get overwritten or lost following reboot.

Contents of rc.netscaler file

cp /var/customisations/login.js.mod /netscaler/ns_gui/vpn/login.js
cp /var/customisations/en.xml.mod /netscaler/ns_gui/vpn/resources/en.xml
cp /var/customisations/de.xml.mod /netscaler/ns_gui/vpn/resources/de.xml
cp /var/customisations/es.xml.mod /netscaler/ns_gui/vpn/resources/es.xml
cp /var/customisations/fr.xml.mod /netscaler/ns_gui/vpn/resources/fr.xml

However, because these values apply globally there is an issue if a second NetScaler virtual server does not use a VASCO token as a secondary authentication mechanism. This causes the normal ‘Password’ entry box to be displayed as ‘VASCO token’. The only suitable workaround for this is to create a parallel set of logon files for each additional NS gateway virtual server and use a responder policy on the NS to redirect incoming requests for the index.html page of the VPN to a different file.

In the following examples, I have created a second configuration for a ‘Training NetScaler’, abbreviated to TrainingNS throughout. In summary,

Create separate login.js and index.html files for the alternate parameters, create a new /resources folder specifically for those, and edit the references within them before defining a responder action & policy on the NetScaler:

  1. Copy existing login.js to loginTrainingNS.js
  2. Copy existing index.html to indexTrainingNS.html
  3. Create a new folder called /netscaler/ns_gui/vpn/resourcesTrainingNS and give it the same owner/group permissions as the /netscaler/ns_gui/vpn/resources folder (use WinSCP to define the permissions, right click Properties on the file)
  4. Copy all of the .XML files from /netscaler/ns_gui/vpn/resources into the new folder
  5. Edit the indexTrainingNS.html file and make the following change to reflect the new location of the resource files:

var Resources = new ResourceManager("resourcesTrainingNS/{lang}", "logon");

Edit the indexTrainingNS.html file and make the modifications described in CTX126206.

Edit the individual .XML files in the new folder as per the explanation in CTX126206

AD Password:
TwoFactorAuth Password:

(this second option will not be used if only a primary authentication mechanism is defined)

When all of the file changes are complete, using https://support.citrix.com/article/CTX123736 as a guide, define the responder action and policy on the NS:

  • Create a responder action using the URL: "https://trainingns.lstraining.ads/vpn/indexTrainingNS.html"
  • Create a responder policy using the expression: HTTP.REQ.HOSTNAME.EQ("trainingns.lstraining.ads") && HTTP.REQ.URL.CONTAINS("index.html")
  • Bind the policy to the global defaults
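
Put together as CLI commands, the responder configuration from the bullets above would look something like the following. The action and policy names here are my own illustrative choices (not from CTX123736), and the global binding is shown with an arbitrary priority of 100:

```
add responder action act_redirect_trainingns redirect "\"https://trainingns.lstraining.ads/vpn/indexTrainingNS.html\""
add responder policy pol_redirect_trainingns "HTTP.REQ.HOSTNAME.EQ(\"trainingns.lstraining.ads\") && HTTP.REQ.URL.CONTAINS(\"index.html\")" act_redirect_trainingns
bind responder global pol_redirect_trainingns 100 END
```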

Now when you launch the URL for the Training NetScaler it will redirect to the custom index.html file and load the separate login.js and .xml resource files, so that the logon boxes are named differently.

In addition, the following article hints at an alternative resolution if the Responder feature cannot be licensed: http://www.carlstalhood.com/netscaler-gateway-virtual-server/#customize