Citrix Gateway 13.0 Registry value EPA scan examples

If you’re having trouble getting Citrix Endpoint Analysis (EPA) scans of client device registry values to work properly on Citrix Gateway, you may have run into the following issue, which I experienced in the latest firmware versions.

It appears that the EPA scan functionality in the NS 13.0 GUI (this article relates to 13.0.82.45) has been merged so that the numeric and non-numeric registry scan types now coalesce into a single scan type, REG_PATH, whereas in previous versions string values were interpreted using REG_NON_NUM_PATH.

Here’s a screenshot of the new expression editor drop-down for Windows client EPA scans:

NS13.0.82.45 drop down for Windows EPA scans

In comparison to the previous version (NS13.0.71.44).

NS13.0.71.44 drop down for Windows EPA scans

Here’s a screenshot of the registry scan entry panel where you can enter the registry path and value, plus comparison or presence operators. Note the tooltip box, which says that numeric comparisons will be performed when using <, >, == etc.

NS13.0 registry scan value/comparison entry GUI

The convergence of these two scan types into one appears to hide a reduction in comparison functionality, which only emerges once you attempt a string-based registry value comparison using REG_PATH: you can no longer use == with string values such as REG_SZ.

This is a quick summary of the new behaviour following my own testing:

Numeric comparisons

Scans based upon REG_DWORD, REG_QWORD and REG_BINARY values will only work when carrying out Boolean comparisons on numeric values with operators such as ==, !=, >=.

e.g.

sys.client_expr("sys_0_REG_PATH_==_HKEY\\_LOCAL\\_MACHINE\\\\SOFTWARE\\\\Classes\\\\YourRegistryKeyLocation\\\\YourRegistryValueName_VALUE_==_12345[COMMENT: Registry]")

will result in a successful scan when YourRegistryValueName == 12345.

String comparisons

However, under the newly merged functionality, scans based upon REG_SZ values will only work when carrying out comparisons on string values using operators such as ‘contains’ and ‘notcontains’.

If you try to use == as the operator on a string comparison the EPA scan logs will result in:

2021-09-28 09:25:38.883 Boolean compare failed. Value false operator ==
2021-09-28 09:25:38.883 Scan 'REG_PATH_==_HKEY\_LOCAL\_MACHINE\\SOFTWARE\\Classes\\YourRegistryKeyLocation\\YourRegistryValueName_VALUE_==_12345' failed for method 'VALUE'

Therefore modify your EPA action expression to fit the following example using ‘contains’:

sys.client_expr("sys_0_REG_PATH_==_HKEY\\_LOCAL\\_MACHINE\\\\SOFTWARE\\\\Classes\\\\YourRegistryKeyLocation\\\\YourRegistryValueName_VALUE_contains_12345[COMMENT: Registry]")

There are several other comparisons which do not appear to work properly, e.g. a numeric comparison of a REG_QWORD value which is larger than the Citrix EPA plugin allows BUT fits within the 64 bits of a Windows Registry QWORD value.

So my advice would be to check whether the version of Citrix ADC you’re currently using actually offers the type of scan you intend to use (REG_NON_NUM_PATH, REG_PATH), and NOT to rely upon documented examples without confirming that the operator matches the value type.

Further reading

https://support.citrix.com/article/CTX209148 – How to enable client EPA logging/troubleshooting

https://docs.citrix.com/en-us/citrix-gateway/current-release/vpn-user-config/advanced-endpoint-analysis-policies/advanced-endpoint-analysis-policy-expression-reference.html

PowerShell walkthrough – Citrix FAS certificate renewal

Citrix Federated Authentication Service (FAS) allows SAML based authentication tokens to be used when accessing StoreFront resources via Citrix Gateway.

In many established installations the certificates issued to the FAS server(s) will eventually expire, typically after 2 years. A simple GUI tool can be used to ‘Reauthorize’ an expired domain registration authorization certificate in this event, but an alternative PowerShell route is available to Citrix administrators so that certificates can be renewed in advance.

Citrix’s documentation proposes the following sequence of commands, without referencing the required parameters or source of information:

  • Create a new authorization certificate: New-FasAuthorizationCertificate
  • Note the GUID of the new authorization certificate, as returned by: Get-FasAuthorizationCertificate
  • Place the FAS server into maintenance mode: Set-FasServer -Address <FAS server> -MaintenanceMode $true
  • Swap the new authorization certificate: Set-FasCertificateDefinition -AuthorizationCertificate <GUID>
  • Take the FAS server out of maintenance mode: Set-FasServer -Address <FAS server> -MaintenanceMode $false
  • Delete the old authorization certificate: Remove-FasAuthorizationCertificate

Whilst this might be sufficient if you have a fair degree of confidence with PowerShell, it might not be enough if you’re faced with an expired certificate and hundreds of users trying to log in.

I recently used the following sequence successfully and hope that it will be useful to others.

NB – this example is provided ‘as-is’ and you remain responsible for understanding the effect of each command and detecting when the output doesn’t match your own scenario.

The following placeholder convention applies throughout; ensure that you do not copy and paste these values without updating them:

Original FAS certificate ID reference
New FAS certificate ID reference
Certificate authority reference

  1. Open PowerShell on the FAS server for which you want to update the registration certificate.
  2. Add the Citrix cmdlets into the PowerShell session:

Add-PSSnapin Citrix.Authentication.FederatedAuthenticationService.V1

  3. Create a variable to hold the local FAS server’s address (if this is the second FAS server in a group of more than one, replace [0] with [1] below):

$CitrixFasAddress=(Get-FasServer)[0].Address

Address : yourfasnode01.yourdomain.com
Index : 0
Version : 1
MaintenanceMode : False
AdministrationACL : O:BAG:DUD:P(A;OICI;SW;;;BA)

  4. Get the existing FAS certificate ID:

Get-FasAuthorizationCertificate

Id : 1c67270b-d2f4-4543-919b-519cb5470612
Address : yourdomainca01.yourdomain.com\yourcompany-yourdomainca01-CA
TrustArea : bb6b4e47-c5b3-4a6a-9a50-eb6a02a05c3c
CertificateRequest :
Status : MaintenanceDue

  5. Generate a new FAS certificate request against the CA. Both the existing certificate and new certificate request IDs will be shown.

New-FasAuthorizationCertificate -CertificateAuthority yourdomainca01.yourdomain.com\yourcompany-yourdomainca01-CA -CertificateTemplate Citrix_RegistrationAuthority

Id : 1c67270b-d2f4-4543-919b-519cb5470612
Address : yourdomainca01.yourdomain.com\yourcompany-yourdomainca01-CA
TrustArea : bb6b4e47-c5b3-4a6a-9a50-eb6a02a05c3c
CertificateRequest :
Status : MaintenanceDue

Id : 2c113327-1c73-2ca4-44a3-3c12da3963b5
Address : yourdomainca01.yourdomain.com\yourcompany-yourdomainca01-CA
TrustArea : 66a8d3fe-7bdb-4003-8220-cd11f7685b92
CertificateRequest :
Status : WaitingForApproval

  6. Log in to the certificate authority and locate the pending certificate request. Select the item, right-click and choose ‘Issue’. Wait a minute or two, then continue.
  7. Repeat the process to retrieve the FAS authorisation certificates and notice that the status of the newly issued one should have changed from ‘WaitingForApproval’ to ‘Ok’.

Get-FasAuthorizationCertificate

Id : 1c67270b-d2f4-4543-919b-519cb5470612
Address : yourdomainca01.yourdomain.com\yourcompany-yourdomainca01-CA
TrustArea : bb6b4e47-c5b3-4a6a-9a50-eb6a02a05c3c
CertificateRequest :
Status : MaintenanceDue

Id : 2c113327-1c73-2ca4-44a3-3c12da3963b5
Address : yourdomainca01.yourdomain.com\yourcompany-yourdomainca01-CA
TrustArea : 66a8d3fe-7bdb-4003-8220-cd11f7685b92
CertificateRequest :
Status : Ok

  8. Set the local FAS server into maintenance mode:

Set-FasServer -Address $CitrixFasAddress -MaintenanceMode $true

  9. Get the FAS certificate definition rule; this points at the existing FAS authorisation certificate:

Get-FasCertificateDefinition

Name : default_Definition
CertificateAuthorities : {yourdomainca01.yourdomain.com\yourcompany-yourdomainca01-CA}
MsTemplate : Citrix_SmartcardLogon
AuthorizationCertificate : 1c67270b-d2f4-4543-919b-519cb5470612
PolicyOids : {}
InSession : False

  10. Create a variable to store the FAS certificate authority address:

$DefaultCA=(Get-FasMsCertificateAuthority -Default).Address

  11. Update the existing FAS certificate definition to use the new FAS certificate ID:

Set-FasCertificateDefinition -Name default_Definition -AuthorizationCertificate 2c113327-1c73-2ca4-44a3-3c12da3963b5

  12. Get the FAS certificate definition rule; this should now point at the new FAS authorisation certificate:

Get-FasCertificateDefinition

Name : default_Definition
CertificateAuthorities : {yourdomainca01.yourdomain.com\yourcompany-yourdomainca01-CA}
MsTemplate : Citrix_SmartcardLogon
AuthorizationCertificate : 2c113327-1c73-2ca4-44a3-3c12da3963b5
PolicyOids : {}
InSession : False

  13. Remove the maintenance mode flag on the local FAS server:

Set-FasServer -Address $CitrixFasAddress -MaintenanceMode $false

  14. Remove the original FAS authorisation certificate (no longer required):

Remove-FasAuthorizationCertificate -Id 1c67270b-d2f4-4543-919b-519cb5470612
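
As a final check, you can list the authorisation certificates once more; only the new certificate ID should remain:

Get-FasAuthorizationCertificate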

Citrix Advanced Session policy equivalents of default Classic expressions

A customer of mine recently asked for some help understanding why Citrix Gateway was no longer allowing external logons, possibly following a recent upgrade to Citrix ADC VPX 13.0 Build 82.42.

He pointed out that there was an entry within the ns.log file which complained about a problem with ‘Ica mode status’, shown below:

Aug 6 11:39:59 192.168.200.191 08/06/2021:09:39:59 GMT citrix-netscaler 0-PPE-0 : default SSLVPN Message 586 0 : "Ica mode status is not okay"

Investigating further we could identify both successful LDAP authentication (basic LDAP auth attached directly to the Citrix Gateway vserver) and STA lookup, but the ADC wasn’t actually requesting any pages from the Storefront server URL defined in the session profile.

Searching for the error itself yielded one result which referred in particular to ‘Ica mode status’:

https://support.citrix.com/article/CTX291268

Point #2 in the solution referred to switching the Classic expression in the session policy to an Advanced policy; however, you cannot modify an existing policy without it switching back to the original setting. To bypass this limitation, create new session policies which use the Advanced expression equivalents of those created by the Citrix XenApp and XenDesktop ADC wizard available in the appliance.

See below for the before (first two) and after (latter two) Classic/Advanced equivalents.

Before (classic)

add vpn sessionPolicy PL_OS_192.168.200.190 "REQ.HTTP.HEADER User-Agent CONTAINS CitrixReceiver" AC_OS_192.168.200.190
add vpn sessionPolicy PL_WB_192.168.200.190 "REQ.HTTP.HEADER User-Agent NOTCONTAINS CitrixReceiver && REQ.HTTP.HEADER Referer EXISTS" AC_WB_192.168.200.190

After (advanced)

add vpn sessionPolicy PL_OS_192.168.200.190_Advanced "HTTP.REQ.HEADER(\"User-Agent\").CONTAINS(\"CitrixReceiver\")" AC_OS_192.168.200.190
add vpn sessionPolicy PL_WB_192.168.200.190_Advanced "HTTP.REQ.HEADER(\"User-Agent\").CONTAINS(\"CitrixReceiver\").NOT" AC_WB_192.168.200.190

Once the Advanced expression policies are bound to the vserver and the original Classic expression policies have been removed, the initial problem is resolved and StoreFront loads successfully.
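
For reference, the new bindings would look something like the following sketch (assuming a gateway vserver named vsrv-gateway; substitute your own vserver name and priorities):

bind vpn vserver vsrv-gateway -policy PL_OS_192.168.200.190_Advanced -priority 100
bind vpn vserver vsrv-gateway -policy PL_WB_192.168.200.190_Advanced -priority 110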

Whilst Citrix advise that Classic policy expressions will be deprecated in ADC 13.1, it appears that some issues relating to session policies have crept in at or before 13.0 Build 82.42 which need to be carefully managed.

NB. It is possible to use a Citrix Advanced session policy with the Citrix ADC Gateway VPX license in this way. This isn’t the same as enabling nFactor Advanced Authentication policies as detailed by Carl Stalhood here: https://www.carlstalhood.com/nfactor-authentication-for-netscaler-gateway-12/

Citrix XenApp/Desktop LTSR 7.15 Azure catalog creation issues

I came across this problem whilst trying to build a lab scenario with an older version of LTSR 7.15 and wasn’t able to find any similar issues documented elsewhere. Essentially Citrix Studio would not allow me to browse for .vhd files when creating a new catalog from an unmanaged disk located in an Azure storage account.

Here’s the troubleshooting process and solution at the end (spoiler – it’s TLS 1.1, 1.2!)

Trying to create a catalog following successful creation of a hosting connection:


Machine creation wizard error

You might find, for instance, that when examining other storage accounts you are able to view the names of any containers (e.g. ‘logs’) located within the storage account object, but no obvious difference between working and failing accounts is apparent.

You might even try using PowerShell to examine the hypervisor connection; following along, you will eventually reach a dead end in the communication with Azure:

Add-PSsnapin Ci*
cd XDHyp:\
cd HostingUnits
(dir).PSChildName

Determine the name of your hosting connection, and change directory into it

cd .\YourHostingUnitName\
(dir).PSChildName

Determine the name of your resource, and change directory into it

cd .\image.folder\
(dir).PSChildName

Determine the name of your Azure resource group, and change directory into it

cd .\YourResourceGroupName.resourcegroup\
(dir).PSChildName

Determine the name of your storage account, and change directory into it

cd .\YourStorageAccountName.storageaccount\
(dir).PSChildName

At this point, if you attempt to use dir or Get-ChildItem, you will receive an error saying:

An exception occurred. The associated message was Error: Could not receive inventory contents from path

In summary, you don’t receive very much information from Citrix Studio which might assist in troubleshooting the issue further. The Citrix Host Service will generate an Event ID 1007 message including the text:

Citrix.MachineCreationAPI.MachineCreationException: Error: Could not retrieve inventory contents from path /UK South.region/image.folder/YourResourceGroup.resourcegroup/YourStorageAccount.storageaccount ---> Microsoft.WindowsAzure.Storage.StorageException: The remote server returned an error: (400) Bad Request. ---> System.Net.WebException: The remote server returned an error: (400) Bad Request.
   at System.Net.HttpWebRequest.GetResponse()

The solution took quite some comparison between different working environments until I happened upon the cause: the storage accounts affected were configured by default to require TLS 1.2 as a minimum rather than TLS 1.0. Clearly this isn’t ideal, but even the relatively recent LTSR 7.15 CU5 (and presumably earlier releases) does not seem to support TLS 1.2 for this type of API communication with Azure.

Simply locate the storage account and modify the following switch under the Configuration page:

Finally, after waiting 30 seconds or so for the storage account change to take effect, you’ll be able to open the storage account and view the unmanaged disk VHD blob.
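
If you’d rather script the change than use the portal, a quick sketch using the Az PowerShell module (assuming Az.Storage is installed, with placeholder resource group and account names) would be:

Set-AzStorageAccount -ResourceGroupName YourResourceGroupName -Name YourStorageAccountName -MinimumTlsVersion TLS1_0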

Correctly working master image wizard selection

Switching the storage account to a TLS 1.1 minimum does not improve the situation either; it will begin failing again – even though the browser in Windows Server 2016 (with recent updates) supports TLS 1.1 and 1.2. So it appears that the code somewhere in LTSR 7.15 is out of date (either Citrix Studio or PowerShell perhaps).

I’ll update this post if I manage to resolve it using another method, but in my experience this problem goes away with LTSR 1912.

Deploying Citrix Ingress Controller with Kubernetes

Citrix Ingress Controller is a niche but seriously interesting innovation from Citrix, developed in order to bring an enhanced application delivery capability to the Kubernetes container orchestration platform. This article is intended to communicate some basics of Kubernetes and ingress using Citrix ADC, but more so to highlight some specific gaps in the documentation which are no longer appropriate for Kubernetes 1.16 and above due to API changes.

Many Citrix application and networking users will already be familiar with the hardware-based or virtual Citrix NetScaler/ADC platforms, which bring L4 through L7 load balancing, URL responder and rewrite features (amongst others) to conventional or virtualised networking environments. What you now have with Citrix Ingress Controller (CIC) and ADC MPX/VPX is the ability to integrate Kubernetes with your existing ADCs, or to introduce containerised Citrix ADC CPX NetScalers, deploying transient ADC instances within your Kubernetes platform to enable per-application networking services.

What is great about this solution is the way that it creates an automated interface between Kubernetes and NetScaler’s Nitro REST API. When a new containerised app is presented to the outside world via a specially annotated ingress, CIC will instantly create load balancing and content switching vservers along with rewrite rules for you, and even update or remove them when your container is modified or removed. This takes all of the manual work out of updating your ADC configuration on a per-app basis.

There are two basic ways in which to incorporate Citrix ADC into Kubernetes, namely ‘north-south’ and ‘east-west’ options. Familiar ingress solutions such as NGINX are often used within Kubernetes to attach the container networking stack to the outside world, since pod networking is normally completely abstracted from the user network in order to facilitate clean application separation. In a ‘north-south’ implementation you can think of the ingress controller (e.g. NGINX or Citrix ADC) as the front door to your application, with the remaining container based application networking presented through service endpoints within the backend network.

In an ‘east-west’ topology you can implement Citrix ADC CPX as a side-car to your container application in order to provide advanced ADC features within the Kubernetes network to enhance inter-container communication. This is a more advanced topology, but nonetheless directly intended for deployment within the Kubernetes infrastructure as a container. Citrix have a nice series of diagrams which highlight the tier 1 and tier 2 scenarios here.

Prerequisites

I’m going to be talking about bare-metal scenarios here rather than cloud-based environments such as Azure AKS; however, to use these examples you will need to have created a Kubernetes 1.16 cluster first and be able to interact with it using kubectl. I have been using Rancher to build my Kubernetes clusters on vSphere, which is itself a whole other subject that I hope to return to in a different post, but you could always use something like Minikube running within a desktop hypervisor (let me know how you get on!).

In order to use the implementation examples below you will need to have deployed a Citrix NetScaler MPX or VPX v12.1/13.0 in your network which is able to communicate with the Kubernetes API and cluster nodes. My lab uses a flat network range of 192.168.0.0/24, so the Kubernetes API is available on the same network as my NetScaler. The backend pod networks, however, are in the range 10.42.x.0/24, where each node hosts a separate range. Citrix Ingress Controller will take care of adding the network routes to these backend networks, so they don’t have to be reachable from your desktop.

For the purposes of a lab-type exercise it doesn’t matter if your Citrix ADC is used for other features, e.g. LB or Citrix Gateway, because Citrix Ingress Controller will complement your infrastructure without replacing any of the existing configuration. It’s probably not a great idea to launch straight into this using your production ADC instance though; best stick to the lab environment!

Create a system user on Citrix ADC

Your Citrix Ingress Controller will talk to the NetScaler Nitro API directly using a user account which you define within Kubernetes. Perhaps you will use an existing user, or create a new one. For instance, the following commands will create a new user called cic on the NetScaler along with a new command policy:

add system user cic my-password
add cmdpolicy cic-policy ALLOW "^(?!shell)(?!sftp)(?!scp)(?!batch)(?!source)(?!.*superuser)(?!.*nsroot)(?!install)(?!show\s+system\s+(user|cmdPolicy|file))(?!(set|add|rm|create|export|kill)\s+system)(?!(unbind|bind)\s+system\s+(user|group))(?!diff\s+ns\s+config)(?!(set|unset|add|rm|bind|unbind|switch)\s+ns\s+partition).*|(^install\s*(wi|wf))|(^(add|show)\s+system\s+file)"

NB – I’ve seen the above command error out complaining about an unexpected quote character; it doesn’t seem to interfere with the creation of the command policy though.

In case you have any difficulties whilst attempting to recreate the steps in this post, you can always try the built-in ‘superuser’ command policy first and then refine it until it matches the command permissions you’re comfortable with.

In addition, you may need to add rewrite module permissions if you’re going to use the rewrite CRDs; you can tack these on to the end of the existing definition before the final quote mark:

(^(?!rm)\S+\s+rewrite\s+\S+)|(^(?!rm)\S+\s+rewrite\s+\S+\s+.*)

Finally, bind the newly created command policy to your new cic user.

bind system user cic cic-policy 0
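
You can then confirm the binding and the policy’s command spec with:

show system user cic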

Deploy Citrix Ingress Controller using YAML

This section is slightly different to that outlined in the official Citrix Ingress Controller instructions. Please take care to understand the differences; they are mainly due to a desire to create better separation between components and configuration settings.

Create a new namespace to hold the secret and other CIC components. The commands below include the namespace explicitly, in case you choose to omit it and just place the components in the default namespace. It’s up to you, but for tidiness I created a namespace.

kubectl create namespace ingress-citrix

Create a new Kubernetes secret to store your Nitro API username and password. Using kubectl connect to your cluster and create a new secret to store the data.

kubectl create secret generic nslogin --from-literal=username=cic --from-literal=password=mypassword -n ingress-citrix

In my testing I ran into what I think is a Citrix documentation error for the above command, where they show single quotes around the cic and mypassword values. Kubernetes converts these values into base64 encoding before they are stored, and may include the quotes in the final value if you’re not careful. In fact this messed up my configuration for a while, until I converted the secret back into its original content using:

kubectl get secret nslogin -n ingress-citrix -o=yaml

Take the values for password: and username: from the secret and pass them through a base64 decoder just to check that this hasn’t happened (there are various web sites which can do this for you), e.g. using the following Linux/macOS command on either value taken from the YAML output above.

echo bXlwYXNzd29yZA== | base64 --decode
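
If you’re checking from a Windows machine instead, an equivalent PowerShell one-liner would be:

[Text.Encoding]::UTF8.GetString([Convert]::FromBase64String("bXlwYXNzd29yZA=="))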

Using this source file as a reference, modify/add the following entries within the file in order to add the name of your namespace:

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: cic-k8s-role
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cic-k8s-role
subjects:
- kind: ServiceAccount
  name: cic-k8s-role
  namespace: ingress-citrix
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cic-k8s-role
  namespace: ingress-citrix
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cic-k8s-ingress-controller
  namespace: ingress-citrix
(entry continues)

Be aware – the default CIC configuration creates a cluster role which will see events across the whole system, however this can be deliberately (or mistakenly) restricted to only watching API events in specific namespaces if your role contains:

kind: Role

instead of:

kind: ClusterRole

or if you add a NAMESPACE environment variable when defining the env: section of your CIC deployment manifest.
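
For illustration, a namespace-restricting entry within the env: section would look something like this (only add it if you deliberately want to limit CIC’s scope to one namespace):

- name: "NAMESPACE"
  value: "ingress-citrix"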

Finally, add/edit the following entries to define how to contact your Citrix ADC, i.e. the NetScaler management IP (NS_IP) and the virtual server IP (NS_VIP) to be used for load balancing/content switching your ingress (the front door):

env:
  # Set NetScaler NSIP/SNIP, SNIP in case of HA (mgmt has to be enabled)
  - name: "NS_IP"
    value: "192.168.0.99"
  - name: "NS_VIP"
    value: "192.168.0.110"
  - name: "LOGLEVEL"
    value: "INFO"
args:
  - --ingress-classes
    citrix
  - --feature-node-watch
    true

NB – the --feature-node-watch option allows NetScaler to create routes automatically in order to reach the backend pod network addresses.

NB – the LOGLEVEL default value is DEBUG; you might want to leave this unspecified until you’re happy with the functionality, and then change it to INFO as above.

The version of Citrix Ingress Controller is specified within this YAML file, hence if you wish to upgrade your CIC version it can be modified and redeployed (as long as no other changes to your deployment are required):

image: "quay.io/citrix/citrix-k8s-ingress-controller:1.6.1"

After updating the above entries, save the modified YAML file as citrix-k8s-ingress-controller.yaml and then deploy it using kubectl:

kubectl create -f citrix-k8s-ingress-controller.yaml

Check that your Citrix Ingress Controller container has deployed correctly:

kubectl get pods -n ingress-citrix

NB – my kubectl statements are proxied through Rancher in order to reach the correct cluster; if a rancher prefix appears in any of the following examples, you can ignore it.

Validate the installation of Citrix Ingress Controller

Once CIC is online you can access the logs generated by the container by substituting the name of your own pod into the following command:

kubectl logs cic-k8s-ingress-controller-9bdf7f885-hbbjb -n ingress-citrix

You’ll want to see the following section within the log, which shows that CIC was able to connect to the Nitro interface and create a test vserver (and which conveniently validates that it was able to locate and use the secret created to store the credentials!):

2020-01-10 10:45:50,144  - INFO - [nitrointerface.py:_test_user_edit_permission:3729] (MainThread) Processing test user permission to edit configuration
2020-01-10 10:45:50,144  - INFO - [nitrointerface.py:_test_user_edit_permission:3731] (MainThread) In this process, CIC will try to create a dummy LB VS with name k8s-dummy_csvs_to_test_edit.deleteme
2020-01-10 10:45:50,174  - INFO - [nitrointerface.py:_test_user_edit_permission:3756] (MainThread) Successfully created test LB k8s-dummy_csvs_to_test_edit.deleteme  in NetScaler
2020-01-10 10:45:50,188  - INFO - [nitrointerface.py:_test_user_edit_permission:3761] (MainThread) Finished processing test user permission to edit configuration
2020-01-10 10:45:50,251  - INFO - [nitrointerface.py:_perform_post_configure_operation:575] (MainThread) NetScaler UPTime is recorded as 7225

At this point the Citrix Ingress Controller container will sit there listening for any Kubernetes API events it might be interested in, e.g. the creation of an ingress or load balancer object. By default Citrix should pick up any ingress creation event, but in many environments you’ll already have NGINX deployed for various reasons (e.g. it’s a functional part of accessing a dashboard).

The way that you can avoid getting things tangled up is by deliberately using ingress class annotations in your specifications. That way other ingress controllers will ignore your requests to build an ingress, but CIC will jump straight in to help. The annotation used for this is:

kubernetes.io/ingress.class: "citrix"

Deploying an application

Let’s start by deploying a simple application into the default namespace. The reason we’re going to do this is two-fold: firstly it is simple and most likely to work, and secondly it verifies that CIC is able to see services and ingresses outside of its own namespace. I like to use the hello-world image from Tutum because it tells us a little about where it’s running when you access the page.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
  namespace: default
spec:
  selector:
    matchLabels:
      run: hello-world
  replicas: 1
  template:
    metadata:
      labels:
        run: hello-world
    spec:
      containers:
      - name: hello-world
        image: tutum/hello-world
        ports:
        - containerPort: 80

Create a new YAML file and save it as deploy-hello-world.yaml, then use kubectl to deploy it to Kubernetes. Prepend rancher to these commands if, like me, you’re proxying kubectl through Rancher; otherwise use them as shown.

kubectl apply -f deploy-hello-world.yaml

Creating a service

Now that the application is running in a container you’ll need to create a service using the following YAML. Save it as expose-hello-world.yaml. You could use a type spec of ClusterIP or NodePort – it doesn’t matter when CIC is configured with --feature-node-watch=true although the default is actually ClusterIP.

apiVersion: v1
kind: Service
metadata:
  name: hello-world
  namespace: default
  labels:
    run: hello-world
spec:
  type: NodePort
  ports:
  - port: 80
    protocol: TCP
  selector:
    run: hello-world

kubectl apply -f expose-hello-world.yaml

Defining your ingress

An ingress is a rule which directs incoming traffic for a host address or a given path through to the backend application. It’s quite important to understand that an ingress itself is just a rule; there may be load balancers or ingress controllers which receive incoming traffic in your environment, but the ingress assists in directing that flow to the backend application.

Again, the ingress class annotation kubernetes.io/ingress.class: "citrix" is an essential component of the ingress example below. It ensures that CIC ‘notices’ the new ingress definition and instructs the Citrix ADC to build load balancing or content switching vservers, so that your traffic is received when the outside world attempts to talk to your application.

In this ingress example we are going to simulate a scenario where you have a path-based entry point into your application, which itself then redirects to the container’s root page. Create a new YAML file with the following content and call it ingress-hello-world.yaml:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-world-ingress
  namespace: default
  annotations:
   kubernetes.io/ingress.class: "citrix"
spec:
  rules:
  - host:  www.helloworld.com
    http:
      paths:
      - path: /hello-world
        backend:
          serviceName: hello-world
          servicePort: 80

NB – the author, his company and this post have nothing whatsoever to do with any websites or businesses operating on any real domains such as ‘helloworld.com’. It is chosen simply as a convenient example.

kubectl apply -f ingress-hello-world.yaml

At this point, if everything has worked correctly, you should be able to make a hosts file or DNS entry for www.helloworld.com (of course you could use anything else) which points to the IP address you used for the NS_VIP entry in the Citrix Ingress Controller configuration (citrix-k8s-ingress-controller.yaml). In the examples above the mapping would be:

www.helloworld.com <---> 192.168.0.110
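
On a Windows client, for example, the hosts file entry (C:\Windows\System32\drivers\etc\hosts) would simply be:

192.168.0.110 www.helloworld.com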

You’ll see the virtual IP now created for you within the Citrix ADC in two places, firstly a new content switch:

A new content switch with the IP address specified in NS_VIP entry, 192.168.0.110

This new content switch has one or more expressions which match traffic to actions (created through ingress definitions):

Therefore any incoming HTTP request matching the www.helloworld.com host, where the request URL starts with the /hello-world path, will be sent to the second newly created object – the LB vserver defined in the action below:

A new load balancing vserver has been created with address 0.0.0.0

This LB vserver includes a service group whose members are the pods where the application is currently running. If you changed the deployment specification to include more replicas, you would see more endpoints participating in the service group. Citrix ADC monitors the health of the exposed node ports in order to ensure that traffic is only directed to running pods.
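
You can try this out by scaling the deployment and watching the service group membership change on the ADC:

kubectl scale deployment hello-world --replicas=3 -n default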

And now, when we visit the page via the hostname and URL path defined on the ingress, we should see:
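
If you’d rather test from the command line than a browser, a quick check (assuming the hosts or DNS entry above is in place) would be:

curl http://www.helloworld.com/hello-world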

Adding a rewrite policy

Let’s say that you have a single ingress controller which is exposing endpoints on a path basis, e.g. /myapproot, but the application available on that service is expecting /myapproot/ instead. Some applications I’ve seen won’t respond properly unless you rewrite your request URL to include the trailing forward slash. Fortunately Citrix Ingress Controller and ADC are able to take care of this through a rewrite rule.

Before you can use this you’ll need to deploy the Custom Resource Definitions for rewrite using the following instructions.

Download the CRD for rewrite and responder YAML from this Citrix URL. Save it as rewrite-responder-policies-deployment.yaml and then deploy it using

kubectl create -f rewrite-responder-policies-deployment.yaml

NB – one very interesting ‘gotcha’ here is that if you associate the CRD with a namespace then it will only create rewrite policies and actions for services in that namespace, so I would recommend using the simplest form of the command shown above, without placing the CRD into the ingress-citrix namespace used in this blog’s example.

Now that it’s deployed, adapt the following YAML to define how the app rewrite should function, and save it as cic-rewrite-example.yaml:

apiVersion: citrix.com/v1
kind: rewritepolicy
metadata:
 name: httpapprootrequestmodify
 namespace: default
spec:
 rewrite-policies:
   - servicenames:
       - hello-world
     rewrite-policy:
       operation: replace
       target: http.req.url
       modify-expression: '"/hello-world/"'
       comment: 'HTTP app root request modify'
       direction: REQUEST
       rewrite-criteria: http.req.url.eq("/hello-world")

kubectl create -f cic-rewrite-example.yaml

Using a Load Balancer service instead of Ingress

In the example above I outlined how to create a hello-world deployment and service in order to present an application via the ADC using ingress. However, ingress will only work for HTTP/HTTPS traffic and cannot be used for other services. An additional method you can use for other traffic is to define a service of type LoadBalancer rather than ClusterIP or NodePort.

Citrix Ingress Controller has a specific annotation for this scenario which can be added to the service definition to specify the IP address the ADC should use. This is the equivalent of a cloud-provider load balancer for your on-prem Kubernetes environment, where you might not use ingress at all.

apiVersion: v1
kind: Service
metadata:
  name: hello-world
  namespace: default
  annotations:  
    service.citrix.com/frontend-ip: '192.168.0.115'
  labels:
    run: hello-world
spec:
  type: LoadBalancer
  ports:
  - port: 80
    protocol: TCP
  selector:
    run: hello-world

Save the YAML example above as cic-loadbalancer-example.yaml and apply it.

kubectl create -f cic-loadbalancer-example.yaml

If you now examine the service which was created, it should be apparent that the type has changed from NodePort or ClusterIP to LoadBalancer. The external IP address is also shown, as defined within the service.citrix.com/frontend-ip: '192.168.0.115' annotation.
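
For example:

kubectl get service hello-world -n default

The EXTERNAL-IP column should show the 192.168.0.115 address from the annotation.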

Citrix ADC will now direct traffic arriving at that IP address through to any pods which match the label selector. This method allows you to quite simply plug the outside world into your Kubernetes application infrastructure at L4, without using ingress or path matching rules.

Summary

Citrix Ingress Controller is well worth investigating if you are beginning to implement on-prem Kubernetes based applications and already have an investment in Citrix ADC. If you need additional features such as DDoS protection, advanced rewrite, TCP optimisations etc. then CIC offers quite a lot of benefits over a simple NGINX proxy. The next article planned in this series will examine the sidecar Citrix ADC CPX deployment and how this can enhance visibility of inter-container communication.

Addendum – Rancher specific ingress issue with Citrix Ingress Controller

This section has been included to highlight a specific issue which is currently occurring with the CIC 1.6.1 and Rancher 2.3.4 releases. It seems to be a purely cosmetic issue; however, it’s been the subject of a recent call I had with some of the Citrix people responsible for CIC, who confirmed the behaviour. Basically, when an ingress is created it is successfully processed by CIC, but its status does not move from ‘Initializing’ to ‘Active’ in Rancher. This is because Rancher is awaiting the External-IP value to be updated in the status, which does not occur because CIC doesn’t mandate that this be actively reported. I’ll update/remove this section from the post if and when this is resolved.

UPDATE – the above issue is now resolved in releases 1.7.6 and above by appending the --update-ingress-status entry into the CIC deployment YAML under the following section:

args:
  - --ingress-classes citrix
  - --feature-node-watch true
  - --update-ingress-status yes

Upgrading Citrix XenApp 7.x VDA version using PowerShell

With the advent of XenApp 7 and, more recently, the higher frequency of VDA cumulative updates, I would generally recommend implementing Citrix Machine Creation Services or another imaging mechanism (such as Provisioning Server) when rolling out new versions of the Virtual Delivery Agent (VDA) to a large number of catalogs.

However, what happens when you only require one XA server per catalog, or when each of those servers is handled manually when new application code is deployed? This is more common than you might imagine, especially in Citrix deployments which have per-customer or per-app catalogs. The work involved in maintaining a master image can be significant, and its serviceability relies upon someone knowing how to treat image updates in a way that won’t introduce problems weeks or months later.

One customer of mine has at least 80 catalogs, each running one or more XenApp VMs, so it simply doesn’t make sense to maintain a master image for each, especially when application code updates are delivered frequently. So I set about creating a simple PowerShell script which works in a VMware environment to attach the Citrix upgrade ISO and then run the setup installer within the context of a remote PowerShell session.

Using this method you can easily carry out a bulk upgrade of tens (possibly hundreds) of statically assigned VDAs by attaching the ISO and installing the update automatically on each. The advantage of this time-saving approach is that it can even be run in a loop, so that the upgrade is only attempted when a server is idle and not running any sessions.

NB – as always, please validate the behaviour of the script in a non-production environment and adjust where necessary to meet your own needs.

Here’s a walkthrough of the script, with the complete example included at the end.

  1. The script will load the required plugins from both Citrix and VMware PowerShell modules/plugins (I generally run things like this on the Citrix Delivery Controller and install PowerCLI alongside for convenience)
  2. Request credentials and connect to vCenter via a popup
  3. Request credentials for use with WinRM connections to remote Windows servers via a popup
  4. Create a collection of objects (XA servers) which are powered on, do not have any active sessions and don’t already have the target VDA version installed (see $targetvda variable)
  5. For each VM, sequentially:
    1. Attach the specified .iso image file to the resulting VMs
    2. Determine the drive letter where the XA ISO file has been mounted
    3. Create a command line for the setup installer, and save the command into c:\upgrade_vda.cmd on the XA server
    4. Connect via PowerShell remoting session to the remote XA server
    5. Adjust the EUEM registry node permissions (as per https://support.citrix.com/article/CTX215992)
    6. Execute the c:\upgrade_vda.cmd upgrade script on remote machine via PS session
    7. Disconnect the PowerShell remote session
    8. Reboot the VM via vCenter in order to restart the XA services

Review the script and edit the following variables to reflect your use-case:

$vcentersrv = "yourvcentersrv.domain.com"
$targetvda = '7.15.4000.653'
$isopath = "[DATASTORE] ParentFolderName\XenApp_and_XenDesktop_7_15_4000.iso"

Edit the selection criteria on the VMs which will be upgraded:

$targetvms = Get-BrokerMachine -DesktopKind Shared | Where-Object {($_.AgentVersion -ne $targetvda) -and ($_.PowerState -eq 'On') -and ($_.HostedMachineName -like 'SRV*')}

All servers in my example environment have virtual machine names beginning with SRV, so this line can be adapted according to the number of VMs which you would like to upgrade, or simply replaced with the actual named servers if you want to be more selective:

($_.HostedMachineName -in 'SRV1','SRV2','SRV3')

Finally, consider modifying the following variable from $true to $false in order to actually begin the process of upgrading the selected VMs. I suggest running it in the default $true mode initially in order to validate the initial selection criteria.

$skiprun = $true

Additional work:

I would additionally like to incorporate the disconnection of previous VDA .ISO files from the VM before attempting the upgrade. I have noticed that the attached volume label search, e.g. Get-Volume -FileSystemLabel 'XA and XD*', which determines the drive letter selection, is too wide and will erroneously match both XA_7_15_4000.iso and XA_7_15_2000.iso versions without differentiating between them.

I would also like to do further parsing of the installation result codes in order to decide whether to stop or simply carry on; however, I have used the script on tens of servers without hitting too many roadblocks.

This script could also be adapted to upgrade XenDesktop VDA versions where statically assigned VMs are provided to users.

Final note:

This script does not allow the Citrix installer telemetry to run during the installation because it requires internet access and this generates errors in PowerShell for XenApp servers which can’t talk outbound. You can choose to remove this command line parameter according to your circumstances:

/disableexperiencemetrics

Citrix also optionally collects and uploads anonymised product usage statistics, but again this requires internet access. In order to disable Citrix Telemetry the following setting is used:

/EXCLUDE "Citrix Telemetry Service"

Additionally the Personal vDisk feature is now deprecated, so the script excludes this item in order for it to be removed if it is currently present (so be aware if you’re using PvD):

/EXCLUDE "Personal vDisk"

PowerShell code example:

# Upgrade VDA on remote Citrix servers

if ((Get-PSSnapin -Name "Citrix.Broker.Admin.V2" -ErrorAction SilentlyContinue) -eq $Null){Add-PSSnapin Citrix.Broker.Admin.V2}
if ((Get-PSSnapin -Name "VMware.VimAutomation.Core" -ErrorAction SilentlyContinue) -eq $Null){Add-PSSnapin VMware.VimAutomation.Core}

$vcentersrv = "yourvcentersrv.domain.com"

if ($vmwarecreds -eq $null) {$vmwarecreds = Connect-VIServer -Server $vcentersrv}            # Authenticate with vCenter, you should enter using format DOMAIN\username, then password
if ($creds -eq $null) {$creds = Get-Credential -Message 'Enter Windows network credentials'} # Get Windows network credentials

clear

$targetvda = '7.15.4000.653' #Add the target VDA version number - anything which isn't correct will be upgraded
$isopath = "[DATASTORE] ParentFolderName\XenApp_and_XenDesktop_7_15_4000.iso" #Path to ISO image in VMware
$skiprun = $true #Set this variable to false in order to begin processing all listed VMs

$targetvms = Get-BrokerMachine -DesktopKind Shared | Where-Object {($_.AgentVersion -ne $targetvda) -and ($_.PowerState -eq 'On') -and ($_.HostedMachineName -like 'SRV*')}
Write-Host The following XA VMs will be targeted
Write-Host $targetvms.HostedMachineName
if ($skiprun -eq $true) {write-host Skip run is still enabled; exit}

foreach ($i in $targetvms){

if ($i.AgentVersion -ne $targetvda) {
    Write-Host Processing $i.HostedMachineName found VDA version $i.AgentVersion
    
    if ($i.sessioncount -ne $null) {Write-Host Processing $i.HostedMachineName found $i.sessioncount users are logged on}

    if ($i.sessioncount -eq 0) {#Only continue if there are no logged-on users

        Write-Host Processing $i.HostedMachineName verifying attachment of ISO image
        $cdstate = Get-VM $i.HostedMachineName | Get-CDDrive
        if (($cdstate.IsoPath -ne $isopath) -and (-not $cdstate.ConnectionState.Connected)) { $cdstate | Set-CDDrive -ISOPath $isopath -Confirm:$false -Connected:$true; Write-Host ISO has been attached } #Attach the upgrade ISO if it isn't already connected

        $s = New-PSSession -ComputerName ($i.MachineName.split('\')[1]) -Credential $creds
            #Create the upgrade command script using correct drive letters
            Write-Host Processing $i.HostedMachineName -NoNewline
            invoke-command -Session $s {
                $drive = Get-Volume -FileSystemLabel 'XA and XD*'
                $workingdir = ($drive.driveletter + ":\x64\XenDesktop Setup\")
                $switches = " /COMPONENTS VDA /EXCLUDE `"Citrix Telemetry Service`",`"Personal vDisk`" /disableexperiencemetrics /QUIET"
                $cmdscript = "`"$workingdir" + "XenDesktopVDASetup.exe`"" + $switches
                Out-File -FilePath c:\upgrade_vda.cmd -InputObject $cmdscript -Force -Encoding ASCII
                Write-Host " wrote script using path" $workingdir
            }
            
            #Adjust the registry permissions remotely
            Write-Host Processing $i.HostedMachineName updating registry permissions
            Invoke-Command -Session $s {
                $acl = Get-Acl "HKLM:\SOFTWARE\Wow6432Node\Citrix\EUEM\LoggedEvents"
                $person = [System.Security.Principal.NTAccount]"Creator Owner"
                $access = [System.Security.AccessControl.RegistryRights]"FullControl"
                $inheritance = [System.Security.AccessControl.InheritanceFlags]"ContainerInherit,ObjectInherit"
                $propagation = [System.Security.AccessControl.PropagationFlags]"None"
                $type = [System.Security.AccessControl.AccessControlType]"Allow"}
            Invoke-Command -Session $s {$rule = New-Object System.Security.AccessControl.RegistryAccessRule($person,$access,$inheritance,$propagation,$type)}
            Invoke-Command -Session $s {$acl.AddAccessRule($rule)}
            Invoke-Command -Session $s {$acl |Set-Acl}
                
            #Execute the command script
            Write-Host Processing $i.HostedMachineName, executing VDA install script
            Invoke-Command -Session $s {& c:\upgrade_vda.cmd} # Runs the upgrade script on remote server
            Remove-PSSession $s #Disconnect the remote PS session
            Restart-VMGuest -VM $i.HostedMachineName -Confirm:$false #Restart the server following either a successful or unsuccessful upgrade
            }
        }
    }

Locating Personal vDisk with PowerShell script

Dell vRanger is a backup solution for VMware which I’ve been using for a while to back up a customer’s ESXi environment. It’s generally OK; however, the vRanger backup configuration wizard does not allow you to specifically exclude Citrix MCS base image disks, which cannot themselves be backed up (.delta disk file types) – instead forcing you to define the disks to exclude based upon ‘Hard disk 1’, ‘Hard disk 2’ names which apply identically to every VM in the job.

In this example I DO want to back up the pvDisk but DO NOT want to back up the other two disks, which are deemed unnecessary. The issue I’ve got with this approach is that (and I don’t quite understand why!) the virtual desktops added to the catalog sometimes use Hard disk 3 for the user’s pvDisk and sometimes Hard disk 2.

Perhaps this is just a timing issue with vCenter, but nevertheless I needed to figure out a simple way of searching a group of VMs, selecting those which use Hard disk 2 or 3, and creating separate backup jobs which exclude the non-backup targets, i.e. the delta disk (non-persistent, independent) and the identity disk (persistent, independent).

See below for the script which I ended up with after a bit of tinkering. It assumes that the identity disk is less than 1GB in size and that your pvDisk is greater than 1GB (otherwise you may not see anything returned):

#Connect-VIServer -Server vcentersrv1.domain.internal
$VMfilter = 'Win7-XD-C*'
$XenDesktopVMs = Get-VM -Name $VMfilter
Write-Host 'Listing pvDisk names for selected VMs:'
foreach ($vm in $XenDesktopVMs) {
    $hdd = Get-HardDisk -VM $vm | Where-Object {$_.Persistence -eq "Persistent"}
    foreach ($disk in ($hdd | Where-Object {$_.CapacityGB -ge 1})) {
        Write-Host $vm.Name $disk.Name '=' $disk.CapacityGB
    }
}

Repointing vCenter Server to external PSC on load balanced FQDN fails

I have been planning a migration project for a customer for a while, which involves moving from an embedded SSO instance on vCenter 5.5 to an external Platform Services Controller instance on 6.5. Suffice it to say, plenty of ‘how to’ guides exist alongside the documentation from VMware; however, there is generally only a scant outline of what steps to take when ‘repointing’ your vCenter to the new load balanced PSC virtual IP. The topic of this post is what happens when you follow the available load balancing documentation and your VMware Update Manager service fails to start afterwards.

I’ll include the reference articles up front, in case these are the ones which you might also have referred to:

Reference articles:

Configuring HA PSC load balancing on Citrix NetScaler – VMware KB article

Repoint vCenter Server to Another External Platform Services Controller in the Same Domain – VMware KB article

The repoint command:

At the step where you are reminded to repoint your vCenter instances at the new load balanced VIP address, you’ll need to use the command:

cmsso-util repoint --repoint-psc psc-ha-vip.sbcpureconsult.internal

However, if you’ve followed the steps precisely, you’re likely to run into the following output when the repoint script attempts to restart the Update Manager service:

What happens:

Validating Provided Configuration …
Validation Completed Successfully.
Executing repointing steps. This will take few minutes to complete.
Please wait …
Stopping all the services …
All services stopped.
Starting all the services …

[… truncated …]

Stderr = Service-control failed. Error Failed to start vmon services.vmon-cli RC=2, stderr=Failed to start updatemgr services. Error: Service crashed while starting

Failed to start all the services. Error {
"resolution": null,
"detail": [
{
"args": [
"Stderr: Service-control failed. Error Failed to start vmon services.vmon-cli RC=2, stderr=Failed to start updatemgr services. Error: Service crashed while starting\n\n"
],
"id": "install.ciscommon.command.errinvoke",
"localized": "An error occurred while invoking external command : 'Stderr: Service-control failed. Error Failed to start vmon services.vmon-cli RC=2, stderr=Failed to start updatemgr services. Error: Service crashed while starting\n\n'",
"translatable": "An error occurred while invoking external command : '%(0)s'"
}
],
"componentKey": null,
"problemId": null
}

Following this issue you might reboot, or attempt to start all services directly on the vCenter appliance, and receive:

service-control --start --all

Service-control failed. Error Failed to start vmon services.vmon-cli RC=2, stderr=Failed to start updatemgr services. Error: Service crashed while starting

This again is fairly unhelpful output and doesn’t provide any assistance as to the cause of the issue. After much investigation, it turns out that the list of TCP ports which the load balancing documentation details is not complete, causing the service startup to fail. Because we’re not running any other applications on the PSC hosts, it’s possible to simplify the configuration on NetScaler by using wildcard port services for each server.

NetScaler configuration commands (specific to PSC load balancing):

The following alternative configuration ensures that any PSC service requested by your vCenter Server (or other solutions) will remain persistently connected on a ‘per host’ basis for up to 1440 minutes, which is the default lifetime of a vCenter Web Client session. This differs from VMware’s documented approach, which load balances each service individually but obviously misses out some crucial ports.

add server hosso01.sbcpureconsult.internal 192.168.0.117
add server hosso02.sbcpureconsult.internal 192.168.0.116

add service hosso01.sbcpureconsult.internal_TCP_ANY hosso01.sbcpureconsult.internal TCP * -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -sp OFF -cltTimeout 9000 -svrTimeout 9000 -CKA NO -TCPB NO -CMP NO

add service hosso02.sbcpureconsult.internal_TCP_ANY hosso02.sbcpureconsult.internal TCP * -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -sp OFF -cltTimeout 9000 -svrTimeout 9000 -CKA NO -TCPB NO -CMP NO

add lb vserver lb_hosso01_02_TCP_ANY TCP 192.168.0.122 * -persistenceType SOURCEIP -timeout 1440 -cltTimeout 9000

bind lb vserver lb_hosso01_02_TCP_ANY hosso01.sbcpureconsult.internal_TCP_ANY

bind lb vserver lb_hosso01_02_TCP_ANY hosso02.sbcpureconsult.internal_TCP_ANY

Once this configuration is put in place you’ll find that the vCenter Update Manager service will start correctly and your repoint will be successful.

Edit: following the above configuration steps to get past the installation issue, I’ve since improved the list of ports load balanced by NetScaler, extending the list that VMware published for vCenter in their docs. By enhancing the original series of ports I think we can resolve the initial issue without resorting to IP-based wildcard load balancing.

I’ve included the full configuration below for reference:

Thanks for reading!

If you find this useful drop me a message via my contact page.

add server hosso01.sbcpureconsult.internal 192.168.0.117
add server hosso02.sbcpureconsult.internal 192.168.0.116
add service hosso01_TCP80 hosso01.sbcpureconsult.internal TCP 80 -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -sp OFF -cltTimeout 9000 -svrTimeout 9000 -CKA NO -TCPB NO -CMP NO
add service hosso01_TCP88 hosso01.sbcpureconsult.internal TCP 88 -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -sp OFF -cltTimeout 9000 -svrTimeout 9000 -CKA NO -TCPB NO -CMP NO
add service hosso01_TCP389 hosso01.sbcpureconsult.internal TCP 389 -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -sp OFF -cltTimeout 9000 -svrTimeout 9000 -CKA NO -TCPB NO -CMP NO
add service hosso01_TCP443 hosso01.sbcpureconsult.internal TCP 443 -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -sp OFF -cltTimeout 9000 -svrTimeout 9000 -CKA NO -TCPB NO -CMP NO
add service hosso01_TCP514 hosso01.sbcpureconsult.internal TCP 514 -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -sp OFF -cltTimeout 9000 -svrTimeout 9000 -CKA NO -TCPB NO -CMP NO
add service hosso01_TCP636 hosso01.sbcpureconsult.internal TCP 636 -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -sp OFF -cltTimeout 9000 -svrTimeout 9000 -CKA NO -TCPB NO -CMP NO
add service hosso01_TCP1514 hosso01.sbcpureconsult.internal TCP 1514 -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -sp OFF -cltTimeout 9000 -svrTimeout 9000 -CKA NO -TCPB NO -CMP NO
add service hosso01_TCP2012 hosso01.sbcpureconsult.internal TCP 2012 -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -sp OFF -cltTimeout 9000 -svrTimeout 9000 -CKA NO -TCPB NO -CMP NO
add service hosso01_TCP2014 hosso01.sbcpureconsult.internal TCP 2014 -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -sp OFF -cltTimeout 9000 -svrTimeout 9000 -CKA NO -TCPB NO -CMP NO
add service hosso01_TCP2015 hosso01.sbcpureconsult.internal TCP 2015 -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -sp OFF -cltTimeout 9000 -svrTimeout 9000 -CKA NO -TCPB NO -CMP NO
add service hosso01_TCP2020 hosso01.sbcpureconsult.internal TCP 2020 -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -sp OFF -cltTimeout 9000 -svrTimeout 9000 -CKA NO -TCPB NO -CMP NO
add service hosso01_TCP5480 hosso01.sbcpureconsult.internal TCP 5480 -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -sp OFF -cltTimeout 9000 -svrTimeout 9000 -CKA NO -TCPB NO -CMP NO
add service hosso01_TCP7444 hosso01.sbcpureconsult.internal TCP 7444 -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -sp OFF -cltTimeout 9000 -svrTimeout 9000 -CKA NO -TCPB NO -CMP NO
add service hosso02_TCP80 hosso02.sbcpureconsult.internal TCP 80 -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -sp OFF -cltTimeout 9000 -svrTimeout 9000 -CKA NO -TCPB NO -CMP NO
add service hosso02_TCP88 hosso02.sbcpureconsult.internal TCP 88 -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -sp OFF -cltTimeout 9000 -svrTimeout 9000 -CKA NO -TCPB NO -CMP NO
add service hosso02_TCP389 hosso02.sbcpureconsult.internal TCP 389 -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -sp OFF -cltTimeout 9000 -svrTimeout 9000 -CKA NO -TCPB NO -CMP NO
add service hosso02_TCP443 hosso02.sbcpureconsult.internal TCP 443 -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -sp OFF -cltTimeout 9000 -svrTimeout 9000 -CKA NO -TCPB NO -CMP NO
add service hosso02_TCP514 hosso02.sbcpureconsult.internal TCP 514 -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -sp OFF -cltTimeout 9000 -svrTimeout 9000 -CKA NO -TCPB NO -CMP NO
add service hosso02_TCP636 hosso02.sbcpureconsult.internal TCP 636 -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -sp OFF -cltTimeout 9000 -svrTimeout 9000 -CKA NO -TCPB NO -CMP NO
add service hosso02_TCP1514 hosso02.sbcpureconsult.internal TCP 1514 -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -sp OFF -cltTimeout 9000 -svrTimeout 9000 -CKA NO -TCPB NO -CMP NO
add service hosso02_TCP2012 hosso02.sbcpureconsult.internal TCP 2012 -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -sp OFF -cltTimeout 9000 -svrTimeout 9000 -CKA NO -TCPB NO -CMP NO
add service hosso02_TCP2014 hosso02.sbcpureconsult.internal TCP 2014 -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -sp OFF -cltTimeout 9000 -svrTimeout 9000 -CKA NO -TCPB NO -CMP NO
add service hosso02_TCP2015 hosso02.sbcpureconsult.internal TCP 2015 -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -sp OFF -cltTimeout 9000 -svrTimeout 9000 -CKA NO -TCPB NO -CMP NO
add service hosso02_TCP2020 hosso02.sbcpureconsult.internal TCP 2020 -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -sp OFF -cltTimeout 9000 -svrTimeout 9000 -CKA NO -TCPB NO -CMP NO
add service hosso02_TCP5480 hosso02.sbcpureconsult.internal TCP 5480 -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -sp OFF -cltTimeout 9000 -svrTimeout 9000 -CKA NO -TCPB NO -CMP NO
add service hosso02_TCP7444 hosso02.sbcpureconsult.internal TCP 7444 -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -sp OFF -cltTimeout 9000 -svrTimeout 9000 -CKA NO -TCPB NO -CMP NO
add lb vserver lb_hosso01_02_80 TCP 192.168.0.122 80 -timeout 1440 -cltTimeout 9000
add lb vserver lb_hosso01_02_88 TCP 192.168.0.122 88 -timeout 1440 -cltTimeout 9000
add lb vserver lb_hosso01_02_389 TCP 192.168.0.122 389 -timeout 1440 -cltTimeout 9000
add lb vserver lb_hosso01_02_443 TCP 192.168.0.122 443 -timeout 1440 -cltTimeout 9000
add lb vserver lb_hosso01_02_514 TCP 192.168.0.122 514 -timeout 1440 -cltTimeout 9000
add lb vserver lb_hosso01_02_636 TCP 192.168.0.122 636 -timeout 1440 -cltTimeout 9000
add lb vserver lb_hosso01_02_1514 TCP 192.168.0.122 1514 -timeout 1440 -cltTimeout 9000
add lb vserver lb_hosso01_02_2012 TCP 192.168.0.122 2012 -timeout 1440 -cltTimeout 9000
add lb vserver lb_hosso01_02_2014 TCP 192.168.0.122 2014 -timeout 1440 -cltTimeout 9000
add lb vserver lb_hosso01_02_2015 TCP 192.168.0.122 2015 -timeout 1440 -cltTimeout 9000
add lb vserver lb_hosso01_02_2020 TCP 192.168.0.122 2020 -timeout 1440 -cltTimeout 9000
add lb vserver lb_hosso01_02_5480 TCP 192.168.0.122 5480 -timeout 1440 -cltTimeout 9000
add lb vserver lb_hosso01_02_7444 TCP 192.168.0.122 7444 -timeout 1440 -cltTimeout 9000
bind lb vserver lb_hosso01_02_80 hosso01_TCP80
bind lb vserver lb_hosso01_02_80 hosso02_TCP80
bind lb vserver lb_hosso01_02_88 hosso01_TCP88
bind lb vserver lb_hosso01_02_88 hosso02_TCP88
bind lb vserver lb_hosso01_02_389 hosso01_TCP389
bind lb vserver lb_hosso01_02_389 hosso02_TCP389
bind lb vserver lb_hosso01_02_443 hosso01_TCP443
bind lb vserver lb_hosso01_02_443 hosso02_TCP443
bind lb vserver lb_hosso01_02_514 hosso01_TCP514
bind lb vserver lb_hosso01_02_514 hosso02_TCP514
bind lb vserver lb_hosso01_02_636 hosso01_TCP636
bind lb vserver lb_hosso01_02_636 hosso02_TCP636
bind lb vserver lb_hosso01_02_1514 hosso01_TCP1514
bind lb vserver lb_hosso01_02_1514 hosso02_TCP1514
bind lb vserver lb_hosso01_02_2012 hosso01_TCP2012
bind lb vserver lb_hosso01_02_2012 hosso02_TCP2012
bind lb vserver lb_hosso01_02_2014 hosso01_TCP2014
bind lb vserver lb_hosso01_02_2014 hosso02_TCP2014
bind lb vserver lb_hosso01_02_2015 hosso01_TCP2015
bind lb vserver lb_hosso01_02_2015 hosso02_TCP2015
bind lb vserver lb_hosso01_02_2020 hosso01_TCP2020
bind lb vserver lb_hosso01_02_2020 hosso02_TCP2020
bind lb vserver lb_hosso01_02_5480 hosso01_TCP5480
bind lb vserver lb_hosso01_02_5480 hosso02_TCP5480
bind lb vserver lb_hosso01_02_7444 hosso01_TCP7444
bind lb vserver lb_hosso01_02_7444 hosso02_TCP7444
add lb group pg_hosso_01_02 -persistenceType SOURCEIP -timeout 1440
bind lb group pg_hosso_01_02 lb_hosso01_02_80
bind lb group pg_hosso_01_02 lb_hosso01_02_88
bind lb group pg_hosso_01_02 lb_hosso01_02_389
bind lb group pg_hosso_01_02 lb_hosso01_02_443
bind lb group pg_hosso_01_02 lb_hosso01_02_514
bind lb group pg_hosso_01_02 lb_hosso01_02_636
bind lb group pg_hosso_01_02 lb_hosso01_02_1514
bind lb group pg_hosso_01_02 lb_hosso01_02_2012
bind lb group pg_hosso_01_02 lb_hosso01_02_2014
bind lb group pg_hosso_01_02 lb_hosso01_02_2015
bind lb group pg_hosso_01_02 lb_hosso01_02_2020
bind lb group pg_hosso_01_02 lb_hosso01_02_5480
bind lb group pg_hosso_01_02 lb_hosso01_02_7444
set lb group pg_hosso_01_02 -persistenceType SOURCEIP -timeout 1440
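Once the configuration is in place it’s worth confirming that each of the port-specific virtual servers is actually up before attempting the repoint. One way to do this is to query the NetScaler NITRO REST API from PowerShell – the sketch below is only an illustration, where the NSIP address and credentials are placeholders and the NITRO interface is assumed to be reachable over HTTP:

# List the state of the PSC load balancing vservers via the NITRO REST API
# (the NSIP and credentials below are examples - substitute your own)
$ns = 'http://192.168.0.10'
$headers = @{ 'X-NITRO-USER' = 'nsroot'; 'X-NITRO-PASS' = 'nsroot' }
$response = Invoke-RestMethod -Uri "$ns/nitro/v1/config/lbvserver" -Headers $headers
$response.lbvserver | Where-Object { $_.name -like 'lb_hosso01_02_*' } | Select-Object name, curstate, effectivestate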

XenApp 7.x open published apps session report PowerShell script

Whilst there are many amazing things being introduced by Citrix recently (in the XenApp/XenDesktop space), I do sometimes feel that Citrix Studio can be somewhat limited in comparison to previous admin tools.

I would say one of the common things that administrators and consultants need to know on a daily basis is how many instances of each published app are being run within a Citrix environment. I was a little perplexed at first as to why this wasn’t easily available through Citrix Director without making connections directly to the database through an OData connection, but I guess in the end they decided that it simply wasn’t relevant.

So I’ve been working on a PowerShell script to give me a very simple view of how an environment’s application usage stacks up, and from there on in I can decide whether everything’s running fine or dig a little deeper.

The first drafts of the script required me to manually specify the delivery group(s) against which it would be run, but in this example I’m using a multi-select list box to allow me to choose more than one (just hold down the CTRL key). However, since each execution of the script only gives a point-in-time view, this example will refresh every 60 seconds until the maximum interval of one day has passed.

The sort order is based upon the total number of application instances running, from largest to smallest, so bear this in mind when selecting multiple delivery groups as the combined view may not be what you’re looking for.

# Load the Citrix Broker snap-in if it isn't already present
if ($null -eq (Get-PSSnapin -Name "Citrix.Broker.Admin.V2" -ErrorAction SilentlyContinue)) { Add-PSSnapin Citrix.Broker.Admin.V2 }
$selectmachines = @()
$count = 1440 # Script will run until 1 day has passed, updating every 60 seconds
# Prompt for one or more delivery groups (hold down CTRL to multi-select)
$selectdg = Get-BrokerDesktopGroup | Select-Object -Property Name, UID | Sort-Object -Property UID | Out-GridView -OutputMode Multiple -Title 'Select one or more delivery groups to display active sessions'
# Collect the machine names belonging to each selected delivery group
foreach ($i in $selectdg) {
    $selectmachines += Get-BrokerMachine -DesktopGroupUid $i.Uid | Select-Object -ExpandProperty MachineName
}
Do {
    Clear-Host # Reset the screen contents before redisplaying the instance counts
    # Group running instances by published application name, largest count first
    Get-BrokerApplicationInstance -Filter 'MachineName -in $selectmachines' | Group-Object -Property ApplicationName | Sort-Object -Property Count -Descending | Format-Table -AutoSize -Property Count, Name
    $count--
    Start-Sleep -Seconds 60
} while ($count -ne 0)
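If you’d rather capture a one-off snapshot than watch a live view, the same pipeline can be redirected to a CSV file instead of the console – a minimal variation on the script above, where the output path is just an example:

# One-off snapshot of the same per-application counts, exported for later review
# (the output path is only an example)
Get-BrokerApplicationInstance -Filter 'MachineName -in $selectmachines' | Group-Object -Property ApplicationName | Sort-Object -Property Count -Descending | Select-Object -Property Count, Name | Export-Csv -Path 'C:\Temp\AppInstances.csv' -NoTypeInformation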


Why Citrix and Microsoft’s new servicing models now make sense

OK, so I wasted a little bit of time. I know... it’s a shame when that happens, but it’s even worse to make the same mistake twice! So please read on in case you head down the same road without keeping your eyes peeled for the pitfalls. What’s the take-home message of this post? Microsoft and Citrix now need us (no, actually require us) to do as every professional should always do, and plan our release schedules properly!

This post discusses an issue I experienced installing the Citrix XenDesktop VDA 7.15 on Windows 10 Fall Creators Update – receiving error 1603 when the Citrix Diagnostic Facility component failed to install. If you’re short on time, skip to the end for a series of helpful links – otherwise, bear with me and I’ll take you on a short journey to a grudging mindset shift!

I’d wasted a morning patching a Citrix base image from Windows 10 build 1703 to the 1709 Fall Creators Update because we were looking to create a clean desktop for some developers to test their software releases on. But try as I might, the Citrix 7.15 VDA installer wouldn’t complete, and always terminated with error 1603 – the Citrix Diagnostic Facility (CDF) service had failed to install. Even after investigating the logs it wasn’t clear why, other than a permissions failure on C:\Windows\assembly\tmp – and checking those permissions showed little evidence of the cause of the problem.

But here goes: after a little more digging I discovered that the latest Citrix VDA does NOT support the latest Semi-Annual Channel ‘targeted’ release of Windows 10 (1709). See issue #1 in the Citrix known issues blog post (linked under Useful references below).

Could I believe it? No, not at first really – how could a desktop OS release made generally available on 17th October 2017 not be compatible with the latest Citrix VDA, which had also recently been chosen as the basis of the most recent Long Term Service Release? Surely this new XenDesktop LTSR release would have been coordinated with Microsoft’s own release schedule, with release candidates shared well in advance so that both vendors would have had a chance to test their interaction together?

Apparently not – and therein lies the message. You cannot expect each vendor to align their minor and major servicing schedules with the other! Assuming that the latest Citrix VDA will work with the latest release of Windows is no longer going to fly, and that’s why we all need to fully commit to the “test, test and test again” approach.

In fact, the logic was established a long time ago. The last LTSR release of XenDesktop (7.6) did not support Windows 10, claiming this as a ‘notable exclusion’, despite the fact that early Windows 10 versions had been around for some time.

Notable Exclusions: These are components or features that are just not well suited for the extended lifecycle typically because this is newer technology that we plan on making significant enhancements to over time.  This is where Windows 10 fell when we originally launched 7.6 LTSR.

Citrix then later added retrospective support for Windows 10 by encouraging the use of VDA 7.9 in conjunction with the XenDesktop 7.6 LTSR release, once it appeared that this combination worked well. However, hope for future compatibility was also expressed at the time, with the following statement added to the end of that post.

Finally, we want to note that Citrix is targeting to announce a new LTSR version in 2017 adding full LTSR benefits for the Windows 10 platform. However, this current announcement makes it easier for you to jump on Windows 10 desktop virtualization today while still maintaining all the benefits of being LTSR compliant.

And whilst it is indeed true that the XenDesktop 7.15 LTSR release fully supports Windows 10 Current Branch/Semi-Annual Channel, it seems that only a simple statement about ‘requiring VDA 7.9 or later’ was made, as long as you are happy to stick to the ‘Current Release’ path:

Note about Windows 10: Regular support for Windows 10 is available through the Current Release path. Windows 10 does not get the full set of 7.15 LTSR benefits. For deployments that include Windows 10 machines, Citrix recommends that you use the Current Release Version 7.9 or later of the VDA for Desktop OS and of Provisioning Services.

A separate article entitled Windows 10 Compatibility with Citrix XenDesktop makes this clearer,

  • VDA: Although Semi-Annual Channel Targeted releases are intended for pilot trials, Citrix will provide limited support (configuration only) for VDA installations on Windows 10 Semi-Annual Channel Targeted releases, starting from version 1709 forward.

…and goes on to say that ‘targeted’ releases such as the Windows 10 Fall Creators Update are not guaranteed to be compatible:

While the Desktop OS VDA is expected to install and work on Windows 10 Semi-Annual Channel Targeted versions, Citrix does not guarantee proper functionality with these builds.

So there – it’s now clear. The LTSR releases, even the most recent, were never intended to track Microsoft’s own servicing schedule. It just happens that VDA 7.15 is currently the most recent VDA available, and Citrix also chose to adopt it as the version included in the latest LTSR release.

If you’re intending to use LTSR versions and maintain full compatibility with Windows 10, it seems that the only sensible way forward is to fall back on the most recent supported Semi-Annual Channel release (build 1703) and wait for the next LTSR cumulative update that adds support for the previously circulated Windows 10 ‘targeted’ version, once all of the wrinkles have been ironed out. This is very well explained at the end of the linked article above, which simply states that you can’t be sure of support for specific Windows 10 versions unless you match them with the approved VDA for that Semi-Annual Channel release. Anything newer just might not work.

  • Windows 10 Creators Update (Version 1703) – use VDA 7.9/7.15 for LTSR support
  • Windows 10 Fall Creators Update (Version 1709) – Not supported!
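Given the above, it’s worth double-checking exactly which Windows 10 release you’re about to target before launching the VDA installer. A simple sketch from PowerShell, reading the standard version values from the registry (adjust the warning logic to whichever releases your chosen VDA supports):

# Report the Windows 10 release and build from the registry
$winver = Get-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion'
"Release $($winver.ReleaseId), build $($winver.CurrentBuild).$($winver.UBR)"
# Example check only - 1709 was unsupported by VDA 7.15 at the time of writing
if ($winver.ReleaseId -eq '1709') { Write-Warning 'Semi-Annual Channel (Targeted) release - verify VDA compatibility before installing.' }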

So what’s the moral of the story, after all? Citrix and Microsoft have taken the stance of delivering frequent releases for those who are happy to trail-blaze and hotfix, via their Current Release and Semi-Annual Channel (Targeted) releases respectively. But if you want to rely upon well-tested and proven operating system and VDA platforms – ones likely to survive the test of time without high levels of maintenance and unpredictable results – then stick to the aligned Citrix LTSR and Windows Semi-Annual Channel versions, and plan your releases several months in advance. Anything else, and you could be left scratching your head for a short while until the penny drops!

Update: Since writing this post I’ve become aware of a clear summary of the current situation documented within Carl Stalhood’s excellent VDA 7.15 installation notes under point #7. Citrix have stated that they plan to provide retrospective support for VDA 7.15 on Windows 10 Version 1709 under two scenarios:

  • A new patch (now released) on Nov 14th 2017 (KB4051314) will provide the ability to update an existing Windows installation and existing VDA to Windows 10 version 1709
  • A new patch to be released via the Microsoft Update Catalogue in November Week 4 will allow you to do a fresh VDA install on a clean Windows 10 version 1709.

NB This is a first draft of this post with minor edits. If you believe that anything included here is erroneous or misleading please get in contact/drop me a line so that I can clean it up. Thanks for reading!

Useful references:
Windows 10 Compatibility with Citrix XenDesktop
Windows 10 Fall Creators Update (v1709) – Citrix Known Issues
Windows 10 Creators Update (v1703) – Citrix Known Issues
XenApp and XenDesktop 7.15 LTSR
Adding Windows 10 Compatibility to XenApp and XenDesktop 7.6 LTSR
FAQ: XenApp, XenDesktop, and XenServer Servicing Options (LTSR)
Windows 10 update history
Windows as a service: simplified and aligned – https://blogs.technet.microsoft.com/windowsitpro/2017/07/27/waas-simplified-and-aligned/
How to get the Windows 10 Fall Creators Update – https://blogs.windows.com/windowsexperience/2017/10/17/get-windows-10-fall-creators-update/