In my previous video, I talked at a high level about functions and serverless. In this blog post, I will use Knative, an on-prem function framework, to show how functions can be implemented locally. I will do so by walking through installing Knative and deploying a Knative function.
Before I dive into Knative, let me answer the obvious question: Why choose Knative? Great question!
AWS Lambda was first out of the gate with a Function-as-a-Service offering in 2014, but it was soon joined by all the major cloud providers. Today every major cloud provider has a Function-as-a-Service offering, yet not every organization wants to consume functions in the cloud. Lots of customers want to implement functions within their own datacenters, which brings me to the on-prem function frameworks.
The most popular frameworks are:
- Knative
- OpenWhisk
- OpenFaaS
There are others, but those are the most talked about, and here is why I chose Knative:
- OpenWhisk is really powerful and flexible, but that power and flexibility come at a cost in complexity, making it much harder to get started with.
- OpenFaaS is very similar to Knative, but its latest release at the time of writing was 0.25.5, meaning it hasn't reached version 1.0 yet. For customers who don't care about this, OpenFaaS is a great option.
- Knative offers all the features of OpenFaaS, with a little less flexibility than OpenWhisk, and, at the time of writing, its latest release is 1.8.3. Knative has also been adopted by a number of vendors, such as Google, IBM, VMware, and Red Hat, as the foundation of their serverless platforms. All of this makes Knative a mature option for on-prem function deployment.
Now, with that out of the way, let's dive into Knative proper.
Knative is a Kubernetes extension implementing a serverless framework. The benefit of being a Kubernetes extension is that Knative functions can be deployed using native Kubernetes commands such as kubectl. This dramatically reduces the learning curve required to deploy Knative functions. Knative is built around two modules: Serving and Eventing.
The Serving module runs the containers implementing the functions and handles features such as autoscaling, revision tracking, and networking. The Eventing module implements the universal subscription, delivery, and management of events. When events are received by the Eventing module, it talks to the Serving module to trigger the appropriate function. The purpose of this blog post is not to dive into the Knative architecture; further information can be found on the Knative website: https://knative.dev/docs.
Installing Knative
But first things first: let's install Knative. To install Knative, you need a Kubernetes cluster, and it can run whichever Kubernetes distribution you choose. For folks looking to test this out, Kind or Minikube are great ways to get Knative installed on a laptop. For the remainder of this post, I will assume that you have access to a Kubernetes cluster, that kubectl commands can be executed against it, and that the cluster is fronted by a load balancer. If the cluster doesn't have a load balancer, MetalLB is a great software-based option, and it is what I am using in my cluster.
Installing Knative can be done in 2 ways:
- Using the kn quickstart CLI command: this can only be used with Kind or Minikube, as it creates the cluster and deploys Knative on it with a single command (see the example after this list). This is a great way to try Knative out on a laptop.
- Using kubectl apply, which is what I will be using.
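For reference, with the kn CLI and its quickstart plugin installed, the quickstart route boils down to a single command, for example with Kind:

```bash
kn quickstart kind
```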
Before installing Knative, let's check out our Kubernetes cluster:
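Something like the following does the trick:

```bash
kubectl get nodes -o wide
```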
As you can see, I have 3 nodes in my Kubernetes cluster running Kubernetes v1.24.8. Installing Knative Serving requires 2 steps:
- Loading the CRDs
- Loading the Knative pods
Both of those steps are done using kubectl apply. Let’s install the CRDs:
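For the version I am using, the CRD manifest is published with the Knative Serving release:

```bash
kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.8.3/serving-crds.yaml
```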
Here you can choose which version you want to install. To do so, just replace the v1.8.3 with the version you want. I only chose v1.8.3 because it is the latest right now.
Now, let’s install Knative Serving:
If you install a different version, you will need to make sure the version between Serving Core and the Serving CRDs match. If I look at the deployed pods, I can see all the Knative pods under the knative-serving namespace:
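The pods can be listed with:

```bash
kubectl get pods -n knative-serving
```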
and the services associated with Knative:
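```bash
kubectl get svc -n knative-serving
```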
For the purpose of this post, I need to install the Kong ingress controller. My Kubernetes cluster already has an ingress controller installed, nginx, but later in this post I will need features offered by Kong, which is why I need to install it.
The easiest way to install Kong is by using helm, so let’s add the Kong helm repository:
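Kong publishes its charts at charts.konghq.com:

```bash
helm repo add kong https://charts.konghq.com
helm repo update
```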
Let’s create a new namespace to host the Kong ingress controller:
I call this ingress controller kong-external because it will accept requests coming from outside of the cluster, i.e., the outside world. I can now install the Kong ingress controller:
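A minimal install looks something like this; the ingress class and service type shown here are the chart defaults, but I am spelling them out for clarity:

```bash
helm install kong-external kong/kong \
  --namespace kong-external \
  --set ingressController.ingressClass=kong \
  --set proxy.type=LoadBalancer
```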
Double-checking the install, I can see that the pod and service associated with the Kong installation are up and running:
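```bash
kubectl get pods,svc -n kong-external
```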
I can also see that the service got an IP address from the load balancer, MetalLB in my case. This IP address is the one that will be used to connect to the functions.
Now that I have an ingress controller deployed, I need to make sure the default ingress class in Knative is the one served by the Kong ingress controller. This is done by patching the config map config-network:
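The patch looks something like this; note that the config key is ingress.class in the releases I have used, but check your config-network ConfigMap, as newer releases also understand ingress-class:

```bash
kubectl patch configmap/config-network \
  --namespace knative-serving \
  --type merge \
  --patch '{"data":{"ingress.class":"kong"}}'
```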
Then I need to set up the default domain for Knative Serving:
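The ConfigMap I apply looks like this:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-domain
  namespace: knative-serving
data:
  192.1.0.60.nip.io: ""
```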
The content of the yaml file is self-explanatory, but one line needs a little bit more explanation: 192.1.0.60.nip.io: "". In this line, you have a couple of options:
- If the IP address assigned by your load balancer to the external ingress controller is in DNS and has a hostname assigned to it, then replace 192.1.0.60.nip.io with that fully qualified hostname.
- If the IP address assigned by your load balancer isn't in DNS, then replace 192.1.0.60 with that IP address. The nip.io domain is special: it is a wildcard domain for any IP address, meaning that any request made to 192.1.0.60.nip.io will always resolve to 192.1.0.60. Other similar domains are sslip.io and xip.io.
In my case, 192.1.0.60 isn’t in DNS, so I am using 192.1.0.60.nip.io as a fully qualified domain name.
Deploying a Hello World function
At this point, I am ready to deploy a new service, i.e., a function. I am going to use the helloworld-go service provided by knative-sample:
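A minimal manifest for this looks like the following sketch. I am assuming the service name helloworld-external, which is how I refer to it later in this post, and the sample image from the knative-samples registry:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: helloworld-external
  namespace: default
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
          env:
            - name: TARGET
              value: "World"
```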
Once I run the kubectl apply command, I can check that the pod and the services are running as shown above. I can further test that the service is working by using curl:
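With the default domain configured earlier, Knative exposes the service as {name}.{namespace}.{domain}, so the test looks something like:

```bash
curl -i http://helloworld-external.default.192.1.0.60.nip.io
```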
Yay!!!!! I have just installed Knative, then deployed and invoked a Knative function on my Kubernetes cluster. It is as easy as that.
Deploying a custom function
Let's see if I can create and deploy my own function. The best way to do that is to use the Knative func CLI command, which can be downloaded directly from the Knative GitHub repo:
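Asset names and release tags vary, so check the releases page at https://github.com/knative/func/releases; on Linux the download looks something like this:

```bash
curl -L -o func \
  https://github.com/knative/func/releases/download/knative-v1.8.1/func_linux_amd64
chmod +x func
sudo mv func /usr/local/bin/
```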
Here, I create a new function called myfirstfunction, which will be coded in Go and accessed through HTTP. The func create command creates a new directory with the function name and creates some files containing the boilerplate code required to make it work. The handle.go file contains the skeleton for the function:
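The invocation I use is along the lines of func create -l go -t http myfirstfunction, and the generated handle.go looks roughly like this (the exact skeleton differs slightly between func versions):

```go
package function

import (
	"context"
	"fmt"
	"net/http"
)

// Handle an HTTP request.
func Handle(ctx context.Context, res http.ResponseWriter, req *http.Request) {
	// Echo the incoming request back to the caller, which is enough
	// to verify that the function is reachable.
	fmt.Println("Received request")
	req.Write(res)
}
```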
By default, the function doesn’t do much. It just prints the requests it receives, which is enough to try it out.
The other interesting file is the func.yaml file, which describes how the function is to be deployed:
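A freshly generated func.yaml looks something like this trimmed sketch (fields vary between func versions):

```yaml
specVersion: 0.35.0
name: myfirstfunction
runtime: go
registry: ""
image: ""
```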
In this post, I am not going to make any changes to the handle.go file as it is for testing purposes only, but I will need to make some changes to the func.yaml file to account for the container registry that the new Docker container image will be uploaded to.
This is what the updated func.yaml file looks like:
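Here, docker.io/myrepo is a placeholder for your own repository:

```yaml
specVersion: 0.35.0
name: myfirstfunction
runtime: go
registry: docker.io/myrepo
image: docker.io/myrepo/myfirstfunction:latest
```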
Building the function creates a Docker container image, so let's build it:
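From inside the function's directory:

```bash
func build
```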
After a few moments, the build completes. Once the function is built, you can do 2 things:
- You can run the function locally for testing purposes. This is done by running the func run command. It will create a local container bound to port 8080 that you can interact with using either the func invoke command or curl http://localhost:8080. This is great while coding and testing the function, as it takes Kubernetes out of the equation.
- You can deploy the function to your Kubernetes cluster using the func deploy command. This will push the Docker image created with the func build command and then deploy it on the Kubernetes cluster.
Let’s deploy that function using func deploy:
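Again from the function's directory:

```bash
func deploy
```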
Once the deployment is successful, I can see the pod and services associated with my new function and I can interact with it using curl. And as you can see, the function returns the HTTP request headers.
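The interaction looks something like this, using the same URL pattern as before:

```bash
curl -i http://myfirstfunction.default.192.1.0.60.nip.io
```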
You might have noticed something in the screenshot above: each of my functions has its own URL. It works, as I can interact with both, but this means that if I have 100s of functions, I will have 100s of separate URLs, which can become cumbersome and difficult to manage.
What is simpler is to have each function be a separate path under a common IP address. In my case, if I want to call myfirstfunction, I would do curl -i http://192.1.0.60/myfirstfunction, and if I want to access my helloworld-external function, I would do curl -i http://192.1.0.60/hello.
Implementing Path-based Routing
Doing this is possible and it is called path-based routing. To enable it, I need to leverage a couple of features from the Kong ingress controller and from Kubernetes, but first I need to deploy the function I want to access using path-based routing.
The trick here is to deploy the function internally, but have it use the default ingress class, i.e., the kong ingress class defined by the Kong external ingress controller.
Here is how the function is deployed:
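A sketch of such a deployment, reusing the helloworld-go sample; the networking.knative.dev/visibility label is what makes a Knative Service cluster-local:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: helloworld
  namespace: default
  labels:
    networking.knative.dev/visibility: cluster-local
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
          env:
            - name: TARGET
              value: "World"
```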
Because I set the visibility to be cluster-local, that function can't be accessed from the outside world, only from inside the cluster.
The first feature I will be leveraging is Kong's plugin framework. Kong has a number of plugins that extend its core functionality. To enable path-based routing, I will be using the request-transformer Kong plugin. This plugin takes requests coming to the Kong external ingress controller and routes them to the internal service based on the Host field in the request header. In my case, the Host field in the request header will be replaced by the URL of the internal Knative service.
Let’s deploy the new plugin:
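A KongPlugin resource along these lines does the job; the hostname assumes the cluster-local service above is named helloworld in the default namespace:

```yaml
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: hello-world
  namespace: kong-external
plugin: request-transformer
config:
  replace:
    headers:
      - "Host:helloworld.default.svc.cluster.local"
```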
The second feature I need to leverage is the ability to define ingress points within Kubernetes. In my case, I will create an ingress point for my ingress class kong that will trigger the request-transformer plugin when requests are made to the ingress point path. In the case of the helloworld function, the ingress point is /hello.
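A sketch of that Ingress; the backend service name assumes a Helm release called kong-external, since the chart names the proxy service <release>-kong-proxy:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello
  namespace: kong-external
  annotations:
    konghq.com/plugins: hello-world
spec:
  ingressClassName: kong
  rules:
    - http:
        paths:
          - path: /hello
            pathType: ImplementationSpecific
            backend:
              service:
                name: kong-external-kong-proxy
                port:
                  number: 80
```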
At this point, with the plugin and the endpoint configured, I can access my function using the endpoint:
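```bash
curl -i http://192.1.0.60/hello
```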
The ingress endpoint ties everything together. It creates the /hello endpoint in the kong-external namespace and specifies that requests made to port 80 and going to path /hello are to be sent to the external Kong ingress controller. The external Kong ingress controller then receives the requests and calls the hello-world plugin, which replaces the Host header with the URL of the internal Knative function; that URL is then used to call the function.
Deploying a new function is as easy as deploying the function internally, creating a new plugin and associating the function to a new endpoint:
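For a hypothetical second function named hello2, the pair of resources would look something like this:

```yaml
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: hello2
  namespace: kong-external
plugin: request-transformer
config:
  replace:
    headers:
      - "Host:hello2.default.svc.cluster.local"
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello2
  namespace: kong-external
  annotations:
    konghq.com/plugins: hello2
spec:
  ingressClassName: kong
  rules:
    - http:
        paths:
          - path: /hello2
            pathType: ImplementationSpecific
            backend:
              service:
                name: kong-external-kong-proxy
                port:
                  number: 80
```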
Here I have created a new function that can be triggered by doing curl -i http://192.1.0.60/hello2, even though it is an internal function within my cluster:
Conclusion
In this post, I have shown how easy it is to implement Knative and deploy functions on a local Kubernetes cluster. Functions are a key component of event-driven architecture and, since AWS Lambda came out, have been gaining popularity. But not every organization wants to run its functions in the cloud, so it is important to understand how on-prem function frameworks work.
Opinions expressed in this article are entirely my own and may not be representative of the views of Dell Technologies.