In the last post, we got a gentle introduction to service mesh. We highlighted some of the benefits of adopting a service mesh and why it is almost inescapable as you develop cloud-native applications. Today, we will explore Kuma—one of several service mesh projects adopted by the CNCF.
Kuma's website describes the project as an open-source, universal Envoy service mesh for distributed service connectivity, delivering high performance and reliability.
Getting started
Kuma can be deployed on bare-metal VMs or within a Kubernetes cluster. In this post, we focus on deploying Kuma in a Kubernetes cluster. Obtaining a cluster is outside the scope of this blog post; options include running a local cluster with minikube or Docker Desktop, or using one of the popular cloud providers. We assume you have a Kubernetes cluster and the kubectl and Helm CLIs installed.
First, create the namespace within the cluster to deploy Kuma to:
kubectl create namespace kuma-system
Now use the Helm chart provided by the Kuma project to deploy it:
helm repo add kuma https://kumahq.github.io/charts
helm repo update
helm install --namespace kuma-system kuma kuma/kuma
Launch the Kuma GUI by forwarding the control plane port. This is a read-only UI, but very helpful for understanding the workloads deployed onto Kuma:
kubectl port-forward svc/kuma-control-plane -n kuma-system 5681:5681
Now point your browser to http://localhost:5681/gui. You will be greeted by the welcome screen.
Enabling mutual TLS
At this point, our service mesh is ready to start accepting workloads. Before we onboard applications, it is prudent to ensure the service mesh has mutual TLS enabled to encrypt service-to-service communications. By default, Kuma comes with a default mesh with all features—mTLS, logging, metrics, and tracing—turned off.
To turn on mTLS on the default mesh, run:
echo "apiVersion: kuma.io/v1alpha1
kind: Mesh
metadata:
  name: default
spec:
  mtls:
    enabledBackend: ca-1
    backends:
    - name: ca-1
      type: builtin" | kubectl apply -f -
Deploying a sample application
Now the mesh is secure. Let's deploy some sample workloads. The sample application we will use is one bundled with Istio (another service mesh project we will cover in an upcoming post). The bookinfo sample application consists of four services, each written in a different language, working together to provide application functionality.
Create a namespace that will automatically allow Kuma to inject a sidecar into pods deployed to it:
echo "apiVersion: v1
kind: Namespace
metadata:
  name: blog
  annotations:
    kuma.io/sidecar-injection: enabled" | kubectl apply -f -
Deploy the sample application:
kubectl apply -f https://bit.ly/3dzghY5 -n blog
By default, services are not exposed outside the cluster. To reach our application, we forward a port on our localhost to the service running in the cluster:
kubectl port-forward productpage-v1-6987489c74-xxxxx 9080:9080 -n blog
(Replace with your actual productpage pod name.) Then visit http://localhost:9080/productpage?u=normal. Upon opening the app, you will notice error messages on the landing page. This is because the mesh has mTLS enabled, and service-to-service communication must be explicitly permitted before calls can succeed.
Traffic permissions
Kuma uses a Kubernetes custom resource definition (CRD) called TrafficPermission. In plain terms, a traffic permission instructs the mesh to allow traffic from service A to service B. If communication from B to A is also desired, it must be defined explicitly as well. This allows fine-grained control over which services can communicate with one another.
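For example, a minimal TrafficPermission allowing the productpage service to call the reviews service might look like the sketch below. The service names follow Kuma's Kubernetes naming convention ({name}_{namespace}_svc_{port}) and are illustrative; adjust them to match your deployment:

```yaml
apiVersion: kuma.io/v1alpha1
kind: TrafficPermission
mesh: default
metadata:
  name: productpage-to-reviews
spec:
  sources:
  - match:
      # the calling service (as registered in the mesh)
      kuma.io/service: productpage_blog_svc_9080
  destinations:
  - match:
      # the service being called
      kuma.io/service: reviews_blog_svc_9080
```

Rather than writing one permission per service pair by hand, the command below applies a set of permissions covering all the bookinfo services.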
kubectl apply -f https://bit.ly/2HcanjH
This applies all the permissions needed between the respective services for the application to function properly. Refresh your browser and things should look much better: you should see book details and associated reviews. Three versions of the reviews service are deployed and all are routable, so refreshing the browser will invoke different versions, denoted by no stars, black stars, or red stars.
Telemetry
Out of the box, Kuma comes with integrations for Prometheus and Grafana. These help collect a wealth of data about the services deployed in our mesh. Such metrics are useful for troubleshooting service performance, latency, errors, and communications with other services. Perhaps the best part: you don't need to add any code to your services to benefit from these insights. It's yet another benefit of using a service mesh.
To add telemetry to the mesh:
kubectl apply -f https://bit.ly/31eNJhP
# also add traffic permissions so the metrics pods can communicate
kubectl apply -f https://bit.ly/3k5PRQf
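For reference, the metrics side of what the manifest above enables boils down to adding a metrics backend to the Mesh resource. The following is a sketch of the Kuma v1alpha1 metrics fields, not the exact contents of the linked manifest:

```yaml
apiVersion: kuma.io/v1alpha1
kind: Mesh
metadata:
  name: default
spec:
  mtls:
    enabledBackend: ca-1
    backends:
    - name: ca-1
      type: builtin
  metrics:
    # tells Kuma to expose Prometheus metrics for every data plane proxy
    enabledBackend: prometheus-1
    backends:
    - name: prometheus-1
      type: prometheus
```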
With the metrics components successfully deployed, reach the Grafana dashboard by making it available on your local workstation:
kubectl port-forward grafana-xxxxxxxx 3000:3000 -n kuma-metrics
(Replace with your actual Grafana pod name.)
Point your browser to http://localhost:3000 and log in with the default credentials admin/admin. Kuma comes bundled with three separate dashboards: Kuma Mesh, Kuma Dataplane, and Kuma Service to Service.
What we haven't covered
We have barely scratched the surface of all the features Kuma offers. We haven't explored Traffic Routes, Traffic Logs, Fault Injection, Health Checks, Circuit Breakers, and many more. All those features help provide finer-grained controls over services, helping deliver more secure, resilient, performant, and dependable applications and services to your end users.