Citrix Ingress Controller is a niche but seriously interesting innovation from Citrix – developed in order to bring an enhanced application delivery capability to the Kubernetes container orchestration platform. This article is intended to communicate some basics of Kubernetes and ingress using Citrix ADC, but more so to highlight some specific gaps in the documentation which are no longer appropriate for Kubernetes 1.16 and above due to API changes.
Many Citrix application and networking users will already be familiar with the hardware-based or virtual Citrix NetScaler or ADC platforms, which bring L4 through L7 load balancing, URL responder and rewrite features (amongst others) to conventional or virtualised networking environments. What you now have with Citrix Ingress Controller and ADC MPX/VPX is the ability to integrate Kubernetes with your existing ADCs, or to introduce containerised NetScalers (Citrix ADC CPX) so that you can deploy transient containerised NetScaler ADC instances within your Kubernetes platform, enabling per-application networking services.
What is great about this solution is the way that it creates an automated API interface between Kubernetes and NetScaler's Nitro REST API. When a new containerised app is presented to the outside via a specially annotated ingress, CIC will instantly create load balancing and content switching vservers along with rewrite rules for you, and will even update or remove them when your container is modified or removed. This takes all of the manual work out of updating your ADC configuration on a per-app basis.
There are two basic ways in which to incorporate Citrix ADC into Kubernetes, namely ‘north-south’ and ‘east-west’ options. Familiar ingress solutions such as NGINX are often used within Kubernetes to attach the container networking stack to the outside world, since pod networking is normally completely abstracted from the user network in order to facilitate clean application separation. In a ‘north-south’ implementation you can think of the ingress controller (e.g. NGINX or Citrix ADC) as the front door to your application, with the remaining container based application networking presented through service endpoints within the backend network.
In an ‘east-west’ topology you can implement Citrix ADC CPX as a side-car to your container application in order to provide advanced ADC features within the Kubernetes network to enhance inter-container communication. This is a more advanced topology, but nonetheless directly intended for deployment within the Kubernetes infrastructure as a container. Citrix have a nice series of diagrams which highlight the tier 1 and tier 2 scenarios here.
I’m going to be talking about bare-metal scenarios here rather than cloud based environments such as Azure AKS. However, to use these examples you will need to have created a Kubernetes 1.16 cluster first and be able to interact with it using kubectl. I have been using Rancher in order to build my Kubernetes clusters on vSphere, which in itself is a whole other subject which I hope to return to in a different post... but you could always use something like Minikube running within a desktop hypervisor (let me know how you get on!).
In order to use the implementation examples below you will need to have deployed a Citrix NetScaler MPX or VPX v12.1/13 in your network which is able to communicate with the Kubernetes API and cluster nodes. My lab uses a flat network range of 192.168.0.0/24, for instance, in which case the Kubernetes API is available on the same network as my NetScaler. The backend pod networks, however, are in the range 10.42.x.0/24, where each node hosts a separate range. Citrix Ingress Controller will take care of adding the network routes to these backend networks, so they don’t have to be reachable from your desktop.
For the purposes of a lab type exercise it doesn’t matter if your Citrix ADC is used for other features, e.g. LB or Citrix Gateway, because Citrix Ingress Controller will complement your infrastructure without replacing any of the existing configuration. It’s probably not a great idea to launch straight into this using your production ADC instance though – best stick to the lab environment!
Create a system user on Citrix ADC
Your Citrix Ingress Controller will talk to the NetScaler Nitro API directly using a user account which you define within Kubernetes. Perhaps you will use an existing user, or create a new one. For instance, the following commands will create a new user called cic on the NetScaler and create a new command policy:
add system user cic my-password
add cmdpolicy cic-policy ALLOW "^(?!shell)(?!sftp)(?!scp)(?!batch)(?!source)(?!.*superuser)(?!.*nsroot)(?!install)(?!show\s+system\s+(user|cmdPolicy|file))(?!(set|add|rm|create|export|kill)\s+system)(?!(unbind|bind)\s+system\s+(user|group))(?!diff\s+ns\s+config)(?!(set|unset|add|rm|bind|unbind|switch)\s+ns\s+partition).*|(^install\s*(wi|wf))|(^(add|show)\s+system\s+file)"
NB I’ve seen a problem with the above where the command might error out with a complaint about an unexpected quotes character; it doesn’t seem to interfere with the creation of the command policy though.
In case you have any difficulties whilst attempting to recreate the steps in this post, you can always try the built-in ‘superuser’ command policy first and then refine it until it matches the command permissions that you’re comfortable with.
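As a lab-only troubleshooting measure (remember to unbind it again afterwards), binding the built-in superuser policy to the cic account would look like this on the NetScaler CLI:

```
bind system user cic superuser 0
```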
In addition to this you may need to add extra rewrite module permissions if you’re going to use the rewrite CRDs; you can just tack these on to the end of the existing definition before the final quote mark.
Finally, bind the newly created command policy to your new user:

bind system user cic cic-policy 0
Deploy Citrix Ingress Controller using YAML
This section is slightly different to that which is outlined in the actual Citrix Ingress Controller instructions. Please take care to understand the differences; they are mainly due to a desire to create better separation between components and configuration settings.
Create a new namespace to hold the secret and other CIC components. The commands below include the namespace explicitly in case you choose to omit this and just place the components in the default namespace. It’s up to you, but for tidiness I created a namespace.
kubectl create namespace ingress-citrix
Create a new Kubernetes secret to store your Nitro API username and password. Using kubectl, connect to your cluster and create a new secret to store the data.
kubectl create secret generic nslogin --from-literal=username=cic --from-literal=password=mypassword -n ingress-citrix
In my testing I ran into what I think is a Citrix documentation error for the above command, where they show single quotes around the username and password values. Kubernetes converts these values into base64 encoding before they are stored, and might also include the quotes in the final value if you’re not careful. In fact that messed up my configuration for a while until I converted the secret back into its original content, using:
kubectl get secret nslogin -n ingress-citrix -o=yaml
Take the values for username: and password: from the secret and pass them through a base64 decoder just to check that this hasn’t happened (there are also various web sites which can do this for you), using the following Linux/macOS command for either the username or password taken from the YAML form above.
echo bXlwYXNzd29yZA== | base64 --decode
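If you want to see how the quoting problem arises without touching the cluster, you can reproduce the encoding step locally – the mypassword value here is just the example from above:

```shell
# Encode the raw value - this is what the secret should contain
printf %s "mypassword" | base64
# -> bXlwYXNzd29yZA==

# Encode the value with literal single quotes included - note the different
# output; if your stored secret decodes to 'mypassword' (with quotes),
# the quotes were captured too
printf %s "'mypassword'" | base64
# -> J215cGFzc3dvcmQn

# Decode to check a stored value
printf %s "bXlwYXNzd29yZA==" | base64 --decode
# -> mypassword
```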
Using this source file as a reference, modify/add the following entries within the file in order to add the name of your namespace:
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: cic-k8s-role
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cic-k8s-role
subjects:
- kind: ServiceAccount
  name: cic-k8s-role
  namespace: ingress-citrix
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cic-k8s-role
  namespace: ingress-citrix
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cic-k8s-ingress-controller
  namespace: ingress-citrix
(entry continues)
Be aware – the default CIC configuration creates a cluster role which will see events across the whole system; however, this can be deliberately (or mistakenly) restricted to only watching API events in specific namespaces if your role contains a namespace restriction, or if you add a NAMESPACE environment variable when defining the env: section of your CIC deployment manifest.
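As a sketch, the environment variable form would look something like the fragment below (the value shown is just an example namespace – use whichever namespace you want CIC to watch):

```yaml
env:
  # Restricts CIC to watching API events in a single namespace
  - name: "NAMESPACE"
    value: "ingress-citrix"
```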
Finally, add/edit the following entries to define how to contact your Citrix ADC, i.e. the NetScaler management IP (NS_IP) and the virtual server IP (NS_VIP) to be used for LB/content switching your ingress (the front door):
env:
  # Set NetScaler NSIP/SNIP, SNIP in case of HA (mgmt has to be enabled)
  - name: "NS_IP"
    value: "192.168.0.99"
  - name: "NS_VIP"
    value: "192.168.0.110"
  - name: "LOGLEVEL"
    value: "INFO"
args:
  - --ingress-classes
    citrix
  - --feature-node-watch
    true
NB – the --feature-node-watch option allows NetScaler to create routes automatically in order to reach the backend pod network addresses.
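For comparison, without this option you would need to add the equivalent static routes on the NetScaler yourself – something along these lines, where the pod network and node IP shown are purely illustrative values from my lab ranges:

```
add route 10.42.1.0 255.255.255.0 192.168.0.21
```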
NB – the LOGLEVEL default value is DEBUG; you might want to leave this as an unspecified value until you’re happy with the functionality, and then change it to INFO as above.
The version of Citrix Ingress Controller is specified within this YAML file, hence if you wish to upgrade your CIC version it can be modified and redeployed (as long as no other changes to your deployment are required).
After updating the above entries, save the modified YAML file as citrix-k8s-ingress-controller.yaml and then deploy it using:

kubectl create -f citrix-k8s-ingress-controller.yaml
Check that your Citrix Ingress Controller container has deployed correctly:
kubectl get pods -n ingress-citrix
NB – in the following examples you can ignore the rancher part of the commands; the kubectl statements are being proxied through Rancher in order to reach the correct cluster.
Validate the installation of Citrix Ingress Controller
Once CIC is online you can access the logs generated by the container by substituting the name of your own container into the following command:
kubectl logs cic-k8s-ingress-controller-9bdf7f885-hbbjb -n ingress-citrix
You’ll want to see the following highlighted section within the log file which shows that CIC was able to connect to the Nitro interface and create a test vserver (which coincidentally validates that it was able to locate and use the secret which was created to store the credentials!):
2020-01-10 10:45:50,144 - INFO - [nitrointerface.py:_test_user_edit_permission:3729] (MainThread) Processing test user permission to edit configuration
2020-01-10 10:45:50,144 - INFO - [nitrointerface.py:_test_user_edit_permission:3731] (MainThread) In this process, CIC will try to create a dummy LB VS with name k8s-dummy_csvs_to_test_edit.deleteme
2020-01-10 10:45:50,174 - INFO - [nitrointerface.py:_test_user_edit_permission:3756] (MainThread) Successfully created test LB k8s-dummy_csvs_to_test_edit.deleteme in NetScaler
2020-01-10 10:45:50,188 - INFO - [nitrointerface.py:_test_user_edit_permission:3761] (MainThread) Finished processing test user permission to edit configuration
2020-01-10 10:45:50,251 - INFO - [nitrointerface.py:_perform_post_configure_operation:575] (MainThread) NetScaler UPTime is recorded as 7225
At this point the Citrix Ingress Controller container will sit there listening for any Kubernetes API events which it might be interested in, e.g. the creation of an ingress or load balancer object. By default Citrix should pick up any ingress creation event, but in many environments you’ll already have NGINX deployed for various reasons (e.g. it’s a functional part of accessing a dashboard, for instance).
The way that you can avoid getting things tangled up is by deliberately using ingress class annotations in your specifications. In this way other ingress controllers will ignore your requests to build an ingress, but CIC will jump straight in to help. The annotation which is used for this is kubernetes.io/ingress.class.
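For example, the metadata stanza of an ingress would carry the annotation like this (the same form is used in the ingress manifest later in this post):

```yaml
metadata:
  annotations:
    kubernetes.io/ingress.class: "citrix"
```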
Deploying an application
Let’s start by deploying a simple application into the default namespace. The reason we’re going to do this is two-fold: firstly it is simple and most likely to work, and secondly it verifies that CIC is able to see services and ingresses outside of its own namespace. I like to use the hello-world image from Tutum because it tells us a little bit about where it’s running when you access the page.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
  namespace: default
spec:
  selector:
    matchLabels:
      run: hello-world
  replicas: 1
  template:
    metadata:
      labels:
        run: hello-world
    spec:
      containers:
      - name: hello-world
        image: tutum/hello-world
        ports:
        - containerPort: 80
Create a new YAML file and save it as deploy-hello-world.yaml, then use kubectl to deploy it to Kubernetes. You’ll see that I’ve prepended rancher in all of my examples, but you can omit that if you’re not using Rancher.
kubectl apply -f deploy-hello-world.yaml
Creating a service
Now that the application is running in a container you’ll need to create a service using the following YAML. Save it as expose-hello-world.yaml. You could use a type spec of NodePort – it doesn’t matter when CIC is configured with --feature-node-watch=true, although the default is actually ClusterIP.
apiVersion: v1
kind: Service
metadata:
  name: hello-world
  namespace: default
  labels:
    run: hello-world
spec:
  type: NodePort
  ports:
  - port: 80
    protocol: TCP
  selector:
    run: hello-world
kubectl apply -f expose-hello-world.yaml
Defining your ingress
An ingress is a rule which directs incoming traffic for a host address or a given path through to the backend application. It’s quite important to know that an ingress itself is just a rule; there may be load balancers or ingress controllers which receive incoming traffic in your environment, but the ingress assists in directing that flow to the backend application.
Again, the use of the ingress class annotation kubernetes.io/ingress.class: "citrix" is an essential component of the ingress example below. It ensures that CIC ‘notices’ the new ingress definition and tells it that it should instruct the Citrix ADC to build load balancing or content switching vservers to make sure your traffic is received when the outside world attempts to talk to your application.
In this ingress example we are going to simulate a scenario where you have a path based entry point into your application, which itself then redirects to the container’s root page. Create a new YAML file with the following content and call it ingress-hello-world.yaml.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-world-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "citrix"
spec:
  rules:
  - host: www.helloworld.com
    http:
      paths:
      - path: /hello-world
        backend:
          serviceName: hello-world
          servicePort: 80
NB The author, his company and this post have nothing whatsoever to do with any websites or businesses operating on any real domains such as ‘helloworld.com’. It is chosen simply as a convenient example.
kubectl apply -f ingress-hello-world.yaml
At this point, if everything has worked correctly, you should be able to make a hosts file or DNS entry for www.helloworld.com (of course you could use anything else) which points to the same IP address you used to define the NS_VIP address of your load balancer in the Citrix Ingress Controller configuration (citrix-k8s-ingress-controller.yaml). In the examples above the mapping would be:
www.helloworld.com <---> 192.168.0.110
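As a concrete sketch, the equivalent hosts file entry on a Linux/macOS client (/etc/hosts) would be:

```
192.168.0.110   www.helloworld.com
```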
You’ll see the virtual IP now created for you within the Citrix ADC in two places, firstly a new content switch:
This new content switch has one or more expressions which match traffic to actions (created through ingress definitions):
Therefore any incoming HTTP request matching the www.helloworld.com host, where the request URL includes pages starting with the /hello-world location, will be sent to the second newly created object – the vserver defined in the action below:
This LB vserver includes a service group whose members are actually represented by the pods where the application is currently running. If you changed the deployment specification to include more replicas then you would see more nodes participating in the service group. Citrix ADC will monitor the health of the exposed node ports in order to ensure that traffic is only directed onto running pods.
And now, when we visit the page via the hostname and URL path defined on the ingress, we should see:
Adding a rewrite policy
Let’s say that you have a single ingress controller which is exposing endpoints on a path basis, e.g. /myapproot, but the application available on that service is expecting /myapproot/ instead. Some applications I’ve seen won’t respond properly unless you rewrite your request URL to have the trailing forward slash. Fortunately Citrix Ingress Controller and ADC are able to take care of this through a rewrite rule.
Before you can use this you’ll need to deploy the Custom Resource Definitions for rewrite using the following instructions.
Download the CRD for rewrite and responder YAML from this Citrix URL. Save it as rewrite-responder-policies-deployment.yaml and then deploy it using:

kubectl create -f rewrite-responder-policies-deployment.yaml
NB One very interesting ‘gotcha’ here is that if you associate a CRD with a namespace then it will only create rewrite policies and actions for services in that namespace, so I would recommend simply using the simplest form of the command shown above, without placing the CRD into the ingress-citrix namespace used in this blog’s example.
Now that this is deployed, adapt the following YAML to define how the app rewrite should function and then save it as cic-rewrite-example.yaml.
apiVersion: citrix.com/v1
kind: rewritepolicy
metadata:
  name: httpapprootrequestmodify
  namespace: default
spec:
  rewrite-policies:
    - servicenames:
        - hello-world
      rewrite-policy:
        operation: replace
        target: http.req.url
        modify-expression: '"/hello-world/"'
        comment: 'HTTP app root request modify'
        direction: REQUEST
        rewrite-criteria: http.req.url.eq("/hello-world")
kubectl create -f cic-rewrite-example.yaml
Using a Load Balancer service instead of Ingress
In the example above I outlined how to create a hello-world deployment and service in order to correctly present an application via an ADC using ingress. However, ingress will only work for HTTP/HTTPS type traffic and cannot be used for other services. One additional method you can use for other traffic is to define a service of type LoadBalancer. Citrix Ingress Controller has a specific annotation for this scenario which can be added to the service definition to set the IP address which ADC should use. This is the equivalent of a cloud-provider based load balancer in your on-prem Kubernetes environment, where you might not use ingress at all.
apiVersion: v1
kind: Service
metadata:
  name: hello-world
  namespace: default
  annotations:
    service.citrix.com/frontend-ip: '192.168.0.115'
  labels:
    run: hello-world
spec:
  type: LoadBalancer
  ports:
  - port: 80
    protocol: TCP
  selector:
    run: hello-world
Save the YAML example above into cic-loadbalancer-example.yaml and apply it.
kubectl create -f cic-loadbalancer-example.yaml
If you now examine the service which is created, it should be apparent that the type has changed to LoadBalancer. The external IP address is now shown, as defined within the service.citrix.com/frontend-ip: '192.168.0.115' annotation.
Citrix ADC will now direct traffic arriving at that IP address through to any pods which match the label selector. This method allows you to quite simply plug the outside world into your Kubernetes application infrastructure at L4 without using ingress or path matching rules.
Citrix Ingress Controller is well worth investigating if you are beginning to implement on-prem Kubernetes based applications and already have an investment in Citrix ADC. If you need additional features such as DDoS protection, advanced rewrite, TCP optimisations etc. then CIC offers quite a lot of benefits over a simple NGINX proxy. The next article planned in this series will examine the sidecar Citrix ADC CPX deployment and how this can enhance visibility of inter-container communication.
Addendum – Rancher specific ingress issue with Citrix Ingress Controller
This section has been included here in order to highlight a specific issue which is currently occurring in the CIC 1.6.1 and Rancher 2.3.4 releases. It seems to be a purely cosmetic issue; however, it’s been the subject of a recent call I had with some of the Citrix people responsible for CIC, who confirmed the behaviour with me. Basically, when an ingress is created it is successfully created by CIC, but its status does not move from ‘Initializing’ to ‘Active’ in Rancher. This is because Rancher is awaiting the External-IP value to be updated in the Status, but this does not occur because CIC doesn’t mandate that this be actively reported. I’ll update/remove this section from the post if and when this is resolved.
UPDATE – the above issue is now resolved in releases 1.7.6 and above by appending the --update-ingress-status entry into the CIC deployment YAML under the following section:
args:
  - --ingress-classes
    citrix
  - --feature-node-watch
    true
  - --update-ingress-status
    yes