Deploying Citrix Ingress Controller with Kubernetes

Citrix Ingress Controller is a niche but seriously interesting innovation from Citrix – developed in order to bring enhanced application delivery capabilities to the Kubernetes container orchestration platform. This article is intended to communicate some basics of Kubernetes and ingress using Citrix ADC, but more so to highlight some specific gaps in the documentation which are no longer appropriate for Kubernetes 1.16 and above due to API changes.

Many Citrix application and networking users will already be familiar with the hardware-based or virtual Citrix NetScaler or ADC platforms, bringing L4 through to L7 load balancing, URL responder and rewrite features (amongst others) to conventional or virtualised networking environments. What you now have with Citrix Ingress Controller and ADC MPX/VPX is the ability to integrate Kubernetes with your existing ADCs, or to introduce Citrix ADC CPX (a containerised NetScaler) so that you can deploy transient containerised NetScaler ADC instances within your Kubernetes platform, enabling per-application networking services.

What is great about this solution is the way that it creates an automated API interface between Kubernetes and Citrix’s Nitro REST API on the NetScaler. When a new containerised app is presented to the outside world via a specially annotated ingress, CIC will instantly create load balancing and content switching vservers (along with rewrite rules) for you, and even update or remove them when your container is modified or removed. This takes all of the manual work out of updating your ADC configuration on a per-app basis.

There are two basic ways in which to incorporate Citrix ADC into Kubernetes, namely ‘north-south’ and ‘east-west’ options. Familiar ingress solutions such as NGINX are often used within Kubernetes to attach the container networking stack to the outside world, since pod networking is normally completely abstracted from the user network in order to facilitate clean application separation. In a ‘north-south’ implementation you can think of the ingress controller (e.g. NGINX or Citrix ADC) as the front door to your application, with the remaining container based application networking presented through service endpoints within the backend network.

In an ‘east-west’ topology you can implement Citrix ADC CPX as a side-car to your container application in order to provide advanced ADC features within the Kubernetes network to enhance inter-container communication. This is a more advanced topology, but nonetheless directly intended for deployment within the Kubernetes infrastructure as a container. Citrix have a nice series of diagrams in their documentation which highlight the tier 1 and tier 2 scenarios.

Prerequisites

I’m going to be talking about bare-metal scenarios here rather than cloud-based environments such as Azure AKS; however, to use these examples you will need to have created a Kubernetes 1.16 cluster first and be able to interact with it using kubectl. I have been using Rancher in order to build my Kubernetes clusters on vSphere, which in itself is a whole other subject that I hope to return to in a different post, but you could always use something like Minikube running within a desktop hypervisor (let me know how you get on!).

In order to use the implementation examples below you will need to have deployed a Citrix NetScaler MPX or VPX v12.1 / 13 in your network which is able to communicate with the Kubernetes API and cluster nodes. My lab uses a flat network range of 192.168.0.0/24 for instance, in which case the Kubernetes API is available on the same network as my NetScaler. However the backend pod networks are in the range 10.42.x.0/24 where each node hosts a separate range. Citrix Ingress Controller will take care of adding the network routes to these backend networks so they don’t have to be reachable from your desktop.

For the purposes of a lab-type exercise it doesn’t matter if your Citrix ADC is used for other features, e.g. LB or Citrix Gateway, because Citrix Ingress Controller will complement your infrastructure without replacing any of the existing configuration. It’s probably not a great idea to launch straight into this using your production ADC instance though; best stick to the lab environment!

Create a system user on Citrix ADC

Your Citrix Ingress Controller will talk to the NetScaler Nitro API directly using a user account which you define within Kubernetes. Perhaps you will use an existing user, or create a new one. For instance, the following commands will create a new user called cic on the NetScaler and a new command policy:

add system user cic my-password
add cmdpolicy cic-policy ALLOW "^(?!shell)(?!sftp)(?!scp)(?!batch)(?!source)(?!.*superuser)(?!.*nsroot)(?!install)(?!show\s+system\s+(user|cmdPolicy|file))(?!(set|add|rm|create|export|kill)\s+system)(?!(unbind|bind)\s+system\s+(user|group))(?!diff\s+ns\s+config)(?!(set|unset|add|rm|bind|unbind|switch)\s+ns\s+partition).*|(^install\s*(wi|wf))|(^(add|show)\s+system\s+file)"

NB I’ve seen a problem with the above where the command can error out complaining about an unexpected quote character; it doesn’t seem to interfere with the creation of the command policy though.

In case you have any difficulties whilst attempting to recreate the steps in this post you can always try first using the ‘superuser’ command policy and then refine it until it matches the command permissions that you’re comfortable with.
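For example, temporarily binding the built-in superuser command policy to the cic account is a quick lab-only shortcut (swap back to the restricted cic-policy once you’re happy with the permissions):

bind system user cic superuser 100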

In addition to this you may need to add rewrite module permissions if you’re going to use the rewrite CRDs; you can just tack these on to the end of the existing definition before the final quote mark:

(^(?!rm)\S+\s+rewrite\s+\S+)|(^(?!rm)\S+\s+rewrite\s+\S+\s+.*)

Finally, bind the newly created command policy to your new cic user.

bind system user cic cic-policy 0

Deploy Citrix Ingress Controller using YAML

This section differs slightly from the official Citrix Ingress Controller instructions. Please take care to understand the differences; they are mainly due to a desire to create better separation between components and configuration settings.

Create a new namespace to hold the secret and other CIC components. The examples below include the namespace explicitly; you could omit it and just place the components in the default namespace. It’s up to you, but for tidiness I created one.

kubectl create namespace ingress-citrix

Create a new Kubernetes secret to store your Nitro API username and password. Using kubectl connect to your cluster and create a new secret to store the data.

kubectl create secret generic nslogin --from-literal=username=cic --from-literal=password=mypassword -n ingress-citrix

In my testing I ran into what I think is a Citrix documentation error for the above command: they show single quotes around the cic and mypassword values. Kubernetes converts these values into base64 encoding before they are stored, and might also include the quotes in the final value if you’re not careful. In fact that messed up my configuration for a while, until I converted the secret back into its original content using:

kubectl get secret nslogin -n ingress-citrix -o=yaml

Take the values for password: and username: from the secret and pass them through a base64 decoder just to check that this hasn’t happened (there are also various web sites which can do this for you). On Linux/macOS the following command will decode either the username or password value taken from the YAML output above:

echo bXlwYXNzd29yZA== | base64 --decode

Using this source file as a reference, modify/add the following entries within the file in order to add the name of your namespace (the namespace: lines are the additions):

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: cic-k8s-role
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cic-k8s-role
subjects:
- kind: ServiceAccount
  name: cic-k8s-role
  namespace: ingress-citrix
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cic-k8s-role
  namespace: ingress-citrix
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cic-k8s-ingress-controller
  namespace: ingress-citrix
(entry continues)

Be aware – the default CIC configuration creates a cluster role which will see events across the whole system; however, this can be deliberately (or mistakenly) restricted to only watching API events in specific namespaces if your role contains:

kind: Role

instead of:

kind: ClusterRole

or if you add a NAMESPACE environment variable when defining the env: section of your CIC deployment manifest.
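For instance, a namespace-scoped configuration might look like the snippet below (using this post’s ingress-citrix namespace purely as an example); only add this if you genuinely want to limit CIC’s scope:

env:
  # optional: restrict CIC to watching objects in a single namespace
  - name: "NAMESPACE"
    value: "ingress-citrix"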

Finally, add/edit the following entries to define how to contact your Citrix ADC, i.e. the NetScaler management IP (NS_IP) and the virtual server IP (NS_VIP) to be used for load balancing/content switching your ingress (the front door):

env:
  # Set NetScaler NSIP/SNIP, SNIP in case of HA (mgmt has to be enabled)
  - name: "NS_IP"
    value: "192.168.0.99"
  - name: "NS_VIP"
    value: "192.168.0.110"
  - name: "LOGLEVEL"
    value: "INFO"
args:
  - --ingress-classes
    citrix
  - --feature-node-watch
    true

NB – the --feature-node-watch option allows NetScaler to create routes automatically in order to reach the backend pod network addresses

NB – the LOGLEVEL default value is DEBUG; you might want to leave this unspecified until you’re happy with the functionality, and then change it to INFO as above.

The version of Citrix Ingress Controller is specified within this YAML file, so if you wish to upgrade your CIC version you can modify the image tag and redeploy (as long as no other changes to your deployment are required):

image: "quay.io/citrix/citrix-k8s-ingress-controller:1.6.1"

After updating the above entries, save the modified YAML file as citrix-k8s-ingress-controller.yaml and then deploy it using kubectl:

kubectl create -f citrix-k8s-ingress-controller.yaml

Check that your Citrix Ingress Controller container has deployed correctly:

kubectl get pods -n ingress-citrix

NB – in the following examples you can ignore the rancher part of any commands shown; those kubectl statements are being proxied through Rancher in order to reach the correct cluster.

Validate the installation of Citrix Ingress Controller

Once CIC is online you can access the logs generated by the container by substituting the name of your pod into the following command:

kubectl logs cic-k8s-ingress-controller-9bdf7f885-hbbjb -n ingress-citrix

You’ll want to see entries like the following section of the log, which show that CIC was able to connect to the Nitro interface and create a test vserver (which coincidentally validates that it was able to locate and use the secret which was created to store the credentials!):

2020-01-10 10:45:50,144  - INFO - [nitrointerface.py:_test_user_edit_permission:3729] (MainThread) Processing test user permission to edit configuration
2020-01-10 10:45:50,144  - INFO - [nitrointerface.py:_test_user_edit_permission:3731] (MainThread) In this process, CIC will try to create a dummy LB VS with name k8s-dummy_csvs_to_test_edit.deleteme
2020-01-10 10:45:50,174  - INFO - [nitrointerface.py:_test_user_edit_permission:3756] (MainThread) Successfully created test LB k8s-dummy_csvs_to_test_edit.deleteme in NetScaler
2020-01-10 10:45:50,188  - INFO - [nitrointerface.py:_test_user_edit_permission:3761] (MainThread) Finished processing test user permission to edit configuration
2020-01-10 10:45:50,251  - INFO - [nitrointerface.py:_perform_post_configure_operation:575] (MainThread) NetScaler UPTime is recorded as 7225

At this point the Citrix Ingress Controller container will sit there listening out for any Kubernetes API calls which it might be interested to assist with, e.g. creation of an ingress or load balancer object. By default Citrix should pick up any ingress creation event, but in many environments you’ll already have NGINX deployed for various reasons (e.g. it’s a functional part of accessing a dashboard for instance).

The way that you can avoid getting things tangled up is by deliberately using ingress class annotations in your specifications. In this way other ingress controllers will ignore your requests to build an ingress but CIC will jump straight in to help. The annotation which is used for this is called:

kubernetes.io/ingress.class:"Citrix"

Deploying an application

Let’s start by deploying a simple application into the default namespace. The reason we’re going to do this is two-fold: firstly, it is simple and most likely to work; secondly, it verifies that CIC is able to see services and ingresses outside of its own namespace. I like to use a hello-world image from Tutum because it tells us a little bit about where it’s running when you access the page.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
  namespace: default
spec:
  selector:
    matchLabels:
      run: hello-world
  replicas: 1
  template:
    metadata:
      labels:
        run: hello-world
    spec:
      containers:
      - name: hello-world
        image: tutum/hello-world
        ports:
        - containerPort: 80

Create a new YAML file and save it as deploy-hello-world.yaml, then use kubectl to deploy it to Kubernetes. You may see rancher prepended to kubectl in some of my examples; you can omit that if you’re not using Rancher.

kubectl apply -f deploy-hello-world.yaml
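To confirm that the deployment has created its pod (using the run=hello-world label from the manifest above), you can check with:

kubectl get pods -l run=hello-world -n default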

Creating a service

Now that the application is running in a container you’ll need to create a service using the following YAML. Save it as expose-hello-world.yaml. You could use a type spec of ClusterIP or NodePort – it doesn’t matter when CIC is configured with --feature-node-watch=true although the default is actually ClusterIP.

apiVersion: v1
kind: Service
metadata:
  name: hello-world
  namespace: default
  labels:
    run: hello-world
spec:
  type: NodePort
  ports:
  - port: 80
    protocol: TCP
  selector:
    run: hello-world
Apply the service definition:

kubectl apply -f expose-hello-world.yaml

Defining your ingress

An ingress is a rule which directs incoming traffic to a host address or a given path through to the backend application. It’s quite important to know that an ingress itself is just a rule; there may be load balancers or ingress controllers which receive incoming traffic in your environment, but the ingress assists in directing that flow to the backend application.

Again, the ingress class annotation kubernetes.io/ingress.class:"Citrix" is an essential component of the ingress example below. It ensures that CIC ‘notices’ the new ingress definition and instructs the Citrix ADC to build load balancing or content switching vservers so that your traffic is received when the outside world attempts to talk to your application.

In this ingress example we are going to simulate a scenario where you have a path based entry point into your application, which itself then redirects to the container’s root page. Create a new YAML file with the following content and call it ingress-hello-world.yaml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-world-ingress
  namespace: default
  annotations:
   kubernetes.io/ingress.class: "citrix"
spec:
  rules:
  - host:  www.helloworld.com
    http:
      paths:
      - path: /hello-world
        backend:
          serviceName: hello-world
          servicePort: 80

NB The author, his company and this post have nothing whatsoever to do with any websites or businesses operating on any real domains such as ‘helloworld.com’. It is chosen simply as a convenient example.

kubectl apply -f ingress-hello-world.yaml
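Once applied, it’s worth confirming that the ingress object exists and carries the Citrix ingress class annotation, e.g.:

kubectl describe ingress hello-world-ingress -n default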

At this point, if everything has worked correctly you should be able to make a hosts file or DNS entry for www.helloworld.com (of course you could use anything else) which points to the same IP address you used for the NS_VIP address of your load balancer in the Citrix Ingress Controller configuration (citrix-k8s-ingress-controller.yaml). In the examples above the mapping would be:

www.helloworld.com <---> 192.168.0.110
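For a quick lab test a hosts file entry along these lines will do (shown for Linux/macOS; on Windows the file lives at C:\Windows\System32\drivers\etc\hosts):

# /etc/hosts
192.168.0.110   www.helloworld.com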

You’ll see the virtual IP now created for you within the Citrix ADC in two places, firstly a new content switch:

A new content switch with the IP address specified in NS_VIP entry, 192.168.0.110

This new content switch has one or more expressions which match traffic to actions (created through ingress definitions):

Therefore any incoming HTTP request matching the www.helloworld.com host where the request URL includes pages starting with the /hello-world location will be sent to the second newly created object – the vserver defined in the action below:

A new load balancing vserver has been created with address 0.0.0.0

This LB vserver includes a service group whose members are actually represented by the pods where the application is currently running. If you changed the deployment specification to include more replicas then you would see more nodes participating in the service group. Citrix ADC will monitor the health of the exposed node ports in order to ensure that traffic is only directed onto running pods.
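For example, scaling the hello-world deployment from earlier should result in additional members appearing in the ADC service group shortly afterwards:

kubectl scale deployment hello-world --replicas=3 -n default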

And now when we visit the page via the hostname and URL path defined on the ingress, we should see the Tutum hello-world page served from one of our pods.

Adding a rewrite policy

Let’s say that you have a single ingress controller which is exposing endpoints on a path basis, e.g. /myapproot but the application available on that service is expecting /myapproot/ instead. Some applications I’ve seen won’t respond properly unless you rewrite your request URL to have the trailing forward slash. Fortunately Citrix Ingress Controller and ADC are able to take care of this through a rewrite rule.

Before you can use this you’ll need to deploy the Custom Resource Definitions for rewrite using the following instructions.

Download the CRD for rewrite and responder YAML from this Citrix URL. Save it as rewrite-responder-policies-deployment.yaml and then deploy it using

kubectl create -f rewrite-responder-policies-deployment.yaml

NB One very interesting ‘gotcha’ here is that if you associate a CRD with a namespace then it will only create rewrite policies and actions for services in that namespace, so I would recommend using the simplest form of the command shown above, without placing the CRD into the ingress-citrix namespace used in this blog’s example.

Now that the CRD is deployed, adapt the following YAML to define how the app rewrite should function and then save it as cic-rewrite-example.yaml:

apiVersion: citrix.com/v1
kind: rewritepolicy
metadata:
 name: httpapprootrequestmodify
 namespace: default
spec:
 rewrite-policies:
   - servicenames:
       - hello-world
     rewrite-policy:
       operation: replace
       target: http.req.url
       modify-expression: '"/hello-world/"'
       comment: 'HTTP app root request modify'
       direction: REQUEST
       rewrite-criteria: http.req.url.eq("/hello-world")
Apply the rewrite policy:

kubectl create -f cic-rewrite-example.yaml
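Assuming the CRD registered the rewritepolicy resource type as above, you can confirm that the policy object was accepted with:

kubectl get rewritepolicy -n default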

Using a Load Balancer service instead of Ingress

In the example above I outlined how to create a hello-world deployment and service in order to correctly present an application via an ADC using ingress. However ingress will only work for HTTP/HTTPS type traffic and cannot be used for other services. One additional method you can use for other traffic is to define a service of type LoadBalancer rather than any other option, e.g. ClusterIP, NodePort.

Citrix Ingress Controller has a specific annotation for this scenario which can be added to the service definition to add the IP address which ADC should use. This is the equivalent of a cloud-provider based load balancer in your on-prem Kubernetes environment where you might not use ingress at all.

apiVersion: v1
kind: Service
metadata:
  name: hello-world
  namespace: default
  annotations:  
    service.citrix.com/frontend-ip: '192.168.0.115'
  labels:
    run: hello-world
spec:
  type: LoadBalancer
  ports:
  - port: 80
    protocol: TCP
  selector:
    run: hello-world

Save the YAML example above into cic-loadbalancer-example.yaml and apply it.

kubectl create -f cic-loadbalancer-example.yaml

If you now examine the service which is created it should be apparent that the type has now changed from NodePort or ClusterIP to LoadBalancer. The external IP address is now shown, as defined within the service.citrix.com/frontend-ip: '192.168.0.115' annotation.

Citrix ADC will now direct traffic arriving at that IP address through to any pods which match the label selector. This method allows you to quite simply plug the outside world in to your Kubernetes application infrastructure at L4 without using ingress or path matching rules.
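A quick way to confirm this, using the service name defined above:

kubectl get service hello-world -n default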

Summary

Citrix Ingress Controller is well worth investigating if you are beginning to implement on-prem Kubernetes based applications and already have an investment in Citrix ADC. If you need additional features such as DDoS protection, advanced rewrite, TCP optimisations etc. then CIC offers quite a lot of benefits over a simple NGINX proxy. The next article planned in this series will examine the sidecar Citrix ADC CPX deployment and how this can enhance visibility of inter-container communication.

Addendum – Rancher specific ingress issue with Citrix Ingress Controller

This section has been included here in order to highlight a specific issue which is currently occurring with the CIC 1.6.1 and Rancher 2.3.4 releases. It seems to be a purely cosmetic issue, however it’s been the subject of a recent call I had with some of the Citrix people responsible for CIC, who confirmed the behaviour with me. Basically, when an ingress is created it is configured successfully by CIC, but its status does not move from ‘Initializing’ to ‘Active’ in Rancher. This is because Rancher is awaiting the External-IP value to be updated in the ingress status, but this does not occur because CIC doesn’t mandate that this be actively reported. I’ll update/remove this section from the post if and when this is resolved.
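If you want to see what Kubernetes itself reports, independently of the Rancher UI, the ADDRESS column of the ingress listing shows whether a status IP has been populated, e.g. for the earlier example:

kubectl get ingress hello-world-ingress -n default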

Should the /psc URL work on both HA Platform Services nodes?

I recently ran into a strange issue following the enablement of two PSC 6.5 nodes in an HA configuration, as part of a larger rolling upgrade from vCenter 5.5.

NB – all URLs shown are internal, in use within my lab environment only.

During the migration of the existing customer’s vCenter environment we had to rehearse the externalisation of the PSC from an initial embedded SSO instance. As part of this process the first PSC node in a new site was migrated from an original Windows vCenter 5.5 SSO to PSC 6.5, and subsequently a second new node was joined to the first site in order for replication to be established.

I used a Citrix NetScaler to load balance the configuration and noticed at some point after the successful HA repointing was done that I was unable to access the https://hosso01.sbcpureconsult.internal/psc URL.

The second node, https://hosso2.sbcpureconsult.internal/psc, worked correctly and redirected to the load balanced address psc-ha-vip.sbcpureconsult.internal for authentication before displaying the PSC client UI.

Irrespective of which node was selected, I was able to log in to vCenter, then choose Administration, System Configuration, select a node and then Manage, Settings or CA without receiving any errors.

If I deliberately dropped the first node out of the load balancing config on the NetScaler I didn’t have any issues when accessing the /psc URL by either host name or load balancer name, but if I tried to connect to the first node by its own DNS name or IP I received an HTTP 400 error and the following entry in:

/storage/log/vmware/psc-client/psc-client.log

[2018-10-08 12:05:20.347] [ERROR] tomcat-http--3 com.vmware.vsphere.client.security.websso.MetadataGeneratorImpl - Error when creating idp metadata.
java.lang.RuntimeException: java.io.IOException: HTTPS hostname wrong:  should be <psc-ha-vip.sbcpureconsult.internal>

It appeared that the HTTP 400 error occurred because the psc-client Tomcat application no longer started up correctly on the first node, accompanied by an error in:

/storage/log/vmware/rhttpproxy/rhttpproxy.log

2018-10-08T13:27:10.691Z warning rhttpproxy[7FEA4B941700] [Originator@6876 sub=Default] SSL Handshake failed for stream <SSL(<io_obj p:0x00007fea2c098010, h:27, <TCP '192.168.0.117:443'>, <TCP '192.168.0.121:26417'>>)>: N7Vmacore3Ssl12SSLExceptionE(SSL Exception: error:140000DB:SSL routines:SSL routines:short read)

I repeated in my lab environment the same series of steps I had carried out on the customer site, and was able to confirm the same behaviour. Let me explain at this point that all other vCenter functionality was correct and our issue only affected the /psc URL.

Could this be deemed ‘correct’ behaviour?

If I chose https://psc-ha-vip.sbcpureconsult.internal/psc (which is the load balancer address) I was initially only able to connect if the second node was online and happened to be selected.

Before signing off on the work I wanted to confirm whether it should be possible to access the /psc URL on each node deliberately.

After what seemed like a lot of internal dialogue between myself and my inner tech support dept. (sleepless nights!) I was left wondering what could be going wrong, especially as this was the documented procedure from VMware.

Good news, I was able to roll back my lab and re-run the updateSSOConfig.py and UpdateLsEndpoint.py scripts – only to find that the /psc URL did indeed load successfully on both nodes with the NetScaler load balancing in place!

So at least I knew that the correct behaviour is that you should be able to open /psc on both appliances.

By examining my snapshots at different stages I was able to identify a difference between the original migration node and the clean appliance:

When you run the updateSSOConfig.py Python script to repoint the SSO URL to the load balanced address, it explains that hostname.txt and server.xml were modified:

# python updateSSOConfig.py --lb-fqdn=psc-ha-vip.sbcpureconsult.internal
script version:1.1.0
executing vmafd-cli command
Modifying hostname.txt
modifying server.xml
Executing StopService --all
Executing StartService --all

I was able to locate hostname.txt files (containing the load balancer address) in:

  • /etc/vmware/service-state/vmidentity/hostname.txt
  • /etc/vmware-sso/keys/hostname.txt (missing on node 2, but contained the local name on node 1)
  • /etc/vmware-sso/hostname.txt

As noted, the second of these files (/etc/vmware-sso/keys/hostname.txt) was missing entirely on the second node. Why is this? I guess that it is used transiently during the script execution in order to inject the correct value into the server.xml file.

The server XML file is located in the folder:

/usr/lib/vmware-sso/vmware-sts/conf/server.xml

My faulty node contained the following certificate entries under the connector definition:

..store="STS_INTERNAL_SSL_CERT"
certificateKeystoreFile="STS_INTERNAL_SSL_CERT"..

My working node contained:

..store="MACHINE_SSL_CERT"
certificateKeystoreFile="MACHINE_SSL_CERT"..

So I was able to simply copy the server.xml file from the working node (overwriting the original on the faulty node) and also remove the /etc/vmware-sso/keys/hostname.txt file to match the configuration.
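For reference, the fix amounted to something like the following on the faulty node (the source host name is an assumption based on my lab naming; adjust hosts and paths to suit, and take a snapshot first):

# copy the known-good server.xml from the working node, then remove the stale hostname file
scp root@hosso2.sbcpureconsult.internal:/usr/lib/vmware-sso/vmware-sts/conf/server.xml /usr/lib/vmware-sso/vmware-sts/conf/server.xml
rm /etc/vmware-sso/keys/hostname.txt
reboot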

Following a reboot my first SSO node then responded correctly by redirecting https://hosso01.sbcpureconsult.internal/psc to https://psc-ha-vip.sbcpureconsult.internal/websso to obtain its SAML token before ultimately displaying the PSC client UI.

As a follow up, by examining the STS_INTERNAL_SSL_CERT store I could see that the machine certificate being used was issued by the original Windows vCenter Server 5.5 SSO CA to the subject name:

ssoserver,dc=vsphere,dc=local

This store was not present on the other node, and so the correct load balancing certificate replacement must somehow be omitted by one of the upgrade scripts when this scenario occurs (5.5 SSO to 6.5 PSC).
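If you want to inspect the certificate stores yourself, the VECS command line tool on the appliance can list them (standard path on PSC 6.x; treat the exact invocation as a sketch):

/usr/lib/vmware-vmafd/bin/vecs-cli store list
/usr/lib/vmware-vmafd/bin/vecs-cli entry list --store STS_INTERNAL_SSL_CERT --text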

I hope that this bug gets removed by VMware in due course, particularly as more customers are moving to the appliance based model of vCenter 6.x, but this workaround and method should be considered at least if you run into a similar problem.

NB This post is adapted from a longer discussion on VMware Communities page available under https://communities.vmware.com/thread/598140.

Updating password field names with multiple NetScaler Gateway virtual servers

Imagine a situation where you want to change your NetScaler Gateway’s logon page to include alternative prompts for the Username, Password 1 and Password 2 fields and need to update the language specific .XML files. This has been documented before, and isn’t too hard to figure out once you’ve found a couple of ‘How to’ guides on the Internet. However, I have since come across a limitation when trying to apply the NetScaler’s new ‘Custom’ design template to several different NetScaler Gateway virtual servers at the same time: whilst you can define your own custom design, it is automatically applied to all instances of the virtual server residing on the NetScaler, so if you define custom fields then you’ve defined them for all.

This may not be a problem for some people, but what if the secondary authentication mechanism is an RSA token for one site, and a VASCO token for another? How do you go about configuring alternative sets of custom logon fields? Most of the answers are already out there in one form or another, but I lacked one simple beginning to end description of the solution (I tried several alternate options including rewrite policies which didn’t quite work before I opted for this approach):

Background (NetScaler 10.5.x build)

The Citrix NetScaler VPN default logon page has already been modified in order to ask for ‘AD password’ and ‘VASCO token’ values instead of Password 1: and Password 2:, as detailed in http://support.citrix.com/article/CTX126206

This was achieved by editing index.html and login.js files in /var/netscaler/gui/vpn of the NS as per the Citrix article above.

In addition, the resources path which holds the language based .XML files in /var/netscaler/gui/vpn/resources has been backed up into /var/customisations so that the /nsconfig/rc.netscaler file can copy them back into the correct location if they get overwritten or lost following reboot.

Contents of rc.netscaler file

cp /var/customisations/login.js.mod /netscaler/ns_gui/vpn/login.js
cp /var/customisations/en.xml.mod /netscaler/ns_gui/vpn/resources/en.xml
cp /var/customisations/de.xml.mod /netscaler/ns_gui/vpn/resources/de.xml
cp /var/customisations/es.xml.mod /netscaler/ns_gui/vpn/resources/es.xml
cp /var/customisations/fr.xml.mod /netscaler/ns_gui/vpn/resources/fr.xml

However, because these values apply globally there is an issue if a second NetScaler virtual server does not use a VASCO token as a secondary authentication mechanism. This causes the normal ‘Password’ entry box to be displayed as ‘VASCO token’. The only suitable workaround for this is to create a parallel set of logon files for each additional NS gateway virtual server and use a responder policy on the NS to redirect incoming requests for the index.html page of the VPN to a different file.

In the following examples, I have created a second configuration for a ‘Training NetScaler’, abbreviated to TrainingNS throughout. In summary,

Create separate login.js and index.html files for the alternate parameters, create a new /resources folder specifically for those, and edit the references within them before defining a responder action & policy on the NS (a shell sketch of steps 1 to 4 follows the list):

  1. Copy existing login.js to loginTrainingNS.js
  2. Copy existing index.html to indexTrainingNS.html
  3. Create a new folder called /netscaler/ns_gui/vpn/resourcesTrainingNS and give it the same owner/group permissions as the /netscaler/ns_gui/vpn/resources folder (use WinSCP to define the permissions, right click Properties on the file)
  4. Copy all of the .XML files from /netscaler/ns_gui/vpn/resources into the new folder
  5. Edit the indexTrainingNS.html file and make the following change to reflect the new location of the resource files

var Resources = new ResourceManager("resourcesTrainingNS/{lang}", "logon");
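A shell sketch of steps 1 to 4 might look like this (run from the NetScaler shell; set the ownership/permissions on the new folder as described in step 3):

cd /netscaler/ns_gui/vpn
cp login.js loginTrainingNS.js
cp index.html indexTrainingNS.html
mkdir resourcesTrainingNS
cp resources/*.xml resourcesTrainingNS/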

Edit the indexTrainingNS.html file and make the modifications described in CTX126206.

Edit the individual .XML files in the new folder as per the explanation in CTX126206

AD Password:
TwoFactorAuth Password:

(this second option will not be used if only a primary authentication mechanism is defined)

When all of the file changes are complete, using https://support.citrix.com/article/CTX123736 as a guide, define the responder action and policy on the NS:

  • Create a responder action using the URL: "https://trainingns.lstraining.ads/vpn/indexTrainingNS.html"
  • Create a responder policy using the expression: HTTP.REQ.HOSTNAME.EQ("trainingns.lstraining.ads") && HTTP.REQ.URL.CONTAINS("index.html")
  • Bind the policy to the global defaults
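On the NetScaler CLI the equivalent configuration might look like the following sketch (the action and policy names here are my own placeholders):

add responder action trainingNS_redirect_act redirect '"https://trainingns.lstraining.ads/vpn/indexTrainingNS.html"'
add responder policy trainingNS_redirect_pol 'HTTP.REQ.HOSTNAME.EQ("trainingns.lstraining.ads") && HTTP.REQ.URL.CONTAINS("index.html")' trainingNS_redirect_act
bind responder global trainingNS_redirect_pol 100 END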

Now when you launch the URL for the Training NetScaler it will redirect to the custom index.html file and load the separate login.js and .xml resource files so that the logon boxes will be named differently.

In addition, the following article hints at an alternative resolution if the Responder feature cannot be licensed: http://www.carlstalhood.com/netscaler-gateway-virtual-server/#customize