As noted in the previous article on this blog, in the next episode of the “Self-Hosted Kubernetes Adventures” sitcom, I’ll explain how I configured HAProxy as an Ingress Controller, and how I issue TLS certificates within the cluster using cert-manager. I’ll also mention some plans for the future, because the future is always more exciting than the past.
When you deploy a Pod into the cluster, it gets its own networking identity. In my case, the IP address is assigned by Calico from the Pod network I specified while configuring the cluster, so each Pod gets one of the IPs under 192.168.0.0/16.
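If you’re curious which address a given Pod got, a plain kubectl query shows the Calico-assigned IP in the IP column:

kubectl get pods --all-namespaces -o wide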
Now, that’s nice, but I have my own network sitting in front of that whole NUC machine and I’d like to route my traffic to the applications within the cluster. How would one connect to a service running on some port within the Pod?
I could in theory route requests to the Pod by passing them through the HAProxy installed on the host node, but I don’t want to do that because, at the end of this journey, I want to get rid of all applications hosted on the host system and simply run them within K8s. There’s also the question of how one would auto-discover IP changes and trigger the corresponding change on the host. I could quickly end up in a world of pain. Instead, I want to use an Ingress Controller that can take requests coming to some port on the host system and route them to the appropriate service/pod/deployment within the cluster. So an Ingress Controller basically acts as an entry point to your K8s cluster.
HAProxy Ingress Controller
Installation
Official install instructions are pretty simple and straight to the point. I recommend carefully reading them and making your own decisions about the particulars.
I have decided to go with the Helm-based installation as I imagine it will be easier to manage updates down the road.
First, I added the official Helm chart repository:
helm repo add haproxytech https://haproxytech.github.io/helm-charts
Then I updated the repository:
helm repo update
After which I was ready to install HAProxy:
helm install haproxy-kubernetes-ingress haproxytech/kubernetes-ingress --create-namespace --namespace haproxy-controller
With that, the HAProxy Ingress Controller got provisioned into the haproxy-controller namespace. By default, the ingress controller creates a K8s Service that randomly assigns three NodePort ports for the following purposes:
- HTTP traffic
- HTTPS traffic
- stats page
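You can check which ports got picked by listing the Services in that namespace; the NodePorts show up in the PORT(S) column as mappings like 80:3XXXX/TCP (the exact numbers will of course differ on your cluster):

kubectl -n haproxy-controller get svc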
If I had done this setup a few weeks ago, when I started moving all of my services to the local network, I would have probably configured HAProxy to bind to the correct ports right away instead of some random ones (talking obviously about 80, 443, and whatever you want for the stats page). You can do that by simply specifying the controller.service.nodePort.http, controller.service.nodePort.https, and controller.service.nodePort.stat Helm chart fields.
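If I were doing it again, something like the following would pin them at install time. The port numbers here are made up, and keep in mind that binding 80 and 443 as NodePorts directly would first require extending the kube-apiserver’s --service-node-port-range, since the default range is 30000-32767:

helm upgrade --install haproxy-kubernetes-ingress haproxytech/kubernetes-ingress \
  --namespace haproxy-controller \
  --set controller.service.nodePort.http=30080 \
  --set controller.service.nodePort.https=30443 \
  --set controller.service.nodePort.stat=30024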
The plan for the future here is to migrate the apps hosted on that Intel NUC under docker-compose to the K8s cluster. After configuring them and making sure they’re running properly, I will just do a great switcheroo where I shut down HAProxy on the host and reconfigure the HAProxy Ingress Controller to bind to 80 and 443 on the host directly. Or I might take a staged approach by pointing backends on the host HAProxy to the ports of the HAProxy running in K8s. I’ll cross that bridge once I get there.
Also, since this is a home setup, I don’t really need HAProxy on each of the nodes in the cluster (although there’s currently only one node anyway), so I proceeded with the default Deployment of two HAProxy replicas, which I have since scaled down to one because I’m running a single-node cluster at the moment.
kubectl -n haproxy-controller scale deployment haproxy-kubernetes-ingress --replicas 1
I might need to reconsider this “type of deployment” decision once I get to a multi-node setup, but that’s a problem for future me. On one hand, it might be useful to run it as a DaemonSet because then I could just point all of my DNS entries at each of the K8s nodes and everything would be routed automatically. On the other hand, even a single HAProxy can handle more than a few tens of thousands of connections per second on this machine, and I really don’t need that.
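For what it’s worth, I believe switching the deployment type later is just another chart value (controller.kind), so the change should boil down to something like the line below; treat that value name as my assumption about the chart rather than a verified fact:

helm upgrade haproxy-kubernetes-ingress haproxytech/kubernetes-ingress \
  --namespace haproxy-controller \
  --set controller.kind=DaemonSet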
Cert-Manager
Alright, I’ll tell you my dirty little secret. My home lab is using my “real” domain name, just locally. I don’t configure all those non-routable addresses in my public DNS zone; instead, I have a record within my local Mikrotik static DNS, which works great at the moment.
This, on the other hand, means I need to issue a certificate for the real domain name, which can pose some challenges. I mean, I could generate my own self-signed certificate and add it to the trusted roots on my machines, but let’s be honest, that’s a hassle to manage, and managing those certs across devices is currently not a rabbit hole I want to go down.
All of that leads to the fact that I also want to use a certificate issued by a trusted authority. There are a few free and “automatable” ones I am aware of, both of which provide ACME challenges for domain validation. If you’d like to recommend some other CAs, let me know.
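Whichever CA you pick, it’s worth checking that your domain’s CAA records actually allow it to issue certificates; a quick dig (with example.com standing in for the real domain) shows what’s currently permitted:

dig example.com CAA +short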
Installation
Now, to install cert-manager I decided to go with the straight kubectl apply method, since that seems to be the recommended method in the official install docs. There is a Helm chart available in the jetstack repository, but at this point I was a bit confused about who publishes to that repository and maintains the chart, and consequently I had some trust issues, so I just did the manual install. I might reconsider this decision in the future and use Helm for everything.
So, as with Calico in the previous article, I downloaded the configuration file first, examined its contents to the extent of my understanding, and applied it:
curl -LO https://github.com/cert-manager/cert-manager/releases/download/v1.13.0/cert-manager.yaml
kubectl apply -f cert-manager.yaml
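Before moving on, it’s worth confirming that the three cert-manager components (the controller, the webhook, and the cainjector) actually came up:

kubectl -n cert-manager get pods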
Configuration
There are basically two options when configuring what will issue certificates in the cluster: the Issuer and the ClusterIssuer resource. The former is scoped to the namespace in which it is deployed, while the latter is cluster-wide.
I opted to use a ClusterIssuer as I might operate services in various namespaces, and I don’t necessarily want to configure a separate account for each namespace. I also decided to go with Let’s Encrypt since it was already in the CAA DNS records of my domain and I have been using it for years now. So in the end, the configuration looked like this:
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    email: ivan@tomica.net
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-prod-issuer-account-key
    solvers:
    - selector:
        dnsZones:
        - "mydomainname"
      dns01:
        route53:
          region: eu-central-1
          hostedZoneID: MYHOSTEDZONEID
          accessKeyIDSecretRef:
            name: prod-route53-credentials-secret
            key: access-key-id
          secretAccessKeySecretRef:
            name: prod-route53-credentials-secret
            key: secret-access-key
Deployed it to the cluster with:
kubectl apply -f cert-manager-issuer.yaml
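Whether the ACME account actually got registered can be checked by looking at the resource itself; the READY column should flip to True once everything is in order:

kubectl get clusterissuer letsencrypt-prod
kubectl describe clusterissuer letsencrypt-prod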
The access key and secret key are stored in a Kubernetes Secret and read from there, configured as:
apiVersion: v1
kind: Secret
metadata:
  name: prod-route53-credentials-secret
  namespace: cert-manager
data:
  access-key-id: Base64EncodedAccessKeyID
  secret-access-key: Base64EncodedSecretAccessKey
And I also deployed it to the cluster with:
kubectl apply -f cert-manager-secret.yaml
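As a side note, the manual base64 dance can be skipped entirely by using a stringData block in the manifest or by creating the Secret imperatively with kubectl, both of which take raw values and handle the encoding for you (the credential values below are obviously placeholders):

kubectl -n cert-manager create secret generic prod-route53-credentials-secret \
  --from-literal=access-key-id=AKIAEXAMPLEKEYID \
  --from-literal=secret-access-key=ExampleSecretAccessKey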
Please note that the secret has to be within the cert-manager namespace so cert-manager itself can read it. Domain validation during certificate issuance is done via the dns01 challenge, because this setup is obviously not exposed on the public internet, and in addition to that, I’m issuing a wildcard certificate for the setup.
Why wildcard, you might ask? Well, there are Certificate Transparency logs where Let’s Encrypt (and other CAs) publish information about newly issued certificates, and I don’t necessarily want you to know which DNS records I have on my local network. Security by obscurity is still a security layer, but when used in isolation this layer makes you cry, just like a real onion.
Issuing certificates
You can now issue certificates by simply deploying a Certificate resource to the cluster. For my wildcard I have something like this configured:
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: mycert-mydomain
spec:
  secretName: mycert-mydomain
  renewBefore: 720h
  dnsNames:
  - "MYDOMAIN"
  - "*.MYDOMAIN"
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
    group: cert-manager.io
I deployed this to the default namespace and it issued the certificate, no problemos (well, a few problemos, but all of them were my own mistakes, outlined later):
[ivan@tomica-main ~]$ kubectl get certificate
NAME              READY   SECRET            AGE
mycert-mydomain   True    mycert-mydomain   45h
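If READY stays at False, describing the Certificate, and the CertificateRequest/Order/Challenge resources cert-manager spawns from it, usually shows exactly where the issuance is stuck:

kubectl describe certificate mycert-mydomain
kubectl get certificaterequests,orders,challenges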
An alternative is to define it as part of the Ingress resource. Just make sure to have the proper annotations and TLS section in place:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
  name: ingress-example-nginx
  namespace: default
spec:
  ingressClassName: haproxy
  rules:
  - host: ingress-example-nginx.mydomain
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx
            port:
              number: 80
  tls:
  - secretName: ingress-example-nginx.mydomain
    hosts:
    - ingress-example-nginx.mydomain
This will issue a certificate for the domains listed under tls.hosts and save the private key and certificate to the Kubernetes secret named under tls.secretName.
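Once cert-manager reconciles the Ingress, a kubernetes.io/tls Secret containing tls.crt and tls.key appears under that name, which you can verify with:

kubectl get secret ingress-example-nginx.mydomain -o yaml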
Now, when I initially installed HAProxy into the cluster, it generated a self-signed certificate and used that as the default certificate, but this can easily be replaced with the new certificate by simply editing the existing ConfigMap:
kubectl -n haproxy-controller edit cm haproxy-kubernetes-ingress
There, under the data section, I simply set ssl-certificate to the value of NAMESPACE/SECRETNAME and voilà, everything worked like magic:
data:
  ssl-certificate: default/mycert-mydomain
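A quick way to confirm HAProxy is now serving the new default certificate is to hit the HTTPS NodePort with openssl; NODE_IP and 30443 below are placeholders for your node address and the actual HTTPS NodePort:

openssl s_client -connect NODE_IP:30443 -servername whatever.mydomain </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer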
Conclusion
During the process of cert-manager configuration I made a few crucial mistakes:
- I configured the secret in the wrong namespace, so cert-manager couldn’t read it
- When creating the secret with AWS credentials, I didn’t base64-encode the values and instead passed them verbatim, and apparently everything under the data field expects them to be base64 encoded
Perhaps there were a few more mistakes, but I’ve already forgotten about them. :-) Making those mistakes sent me on a troubleshooting and reading journey in which I definitely learned a few new things.
Perhaps it is not second nature to many, but what I like to do in such situations is observe the logs. Since cert-manager is just another service running in just another pod in the whole system, you can simply tail its logs and see what exactly is happening and why it is failing.
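In practice that just means tailing the controller Deployment (the webhook and cainjector have their own Deployments if the problem seems to be on their side):

kubectl -n cert-manager logs -f deployment/cert-manager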
Now it is probably time to start deploying some apps into that cluster. Here are some prerequisites and things I plan on doing first, though:
- Provision some kind of storage driver into the system. It doesn’t need to be anything fancy, and in this single-node setup it could even be a local directory, but perhaps it is healthier to learn the proper way upfront.
- Provision a database into the cluster. I’m mostly using Postgres for all of my applications and am currently looking at the Percona Operator for PostgreSQL as a potential solution. Percona folks are great and I have no doubt they put out products of the utmost quality.
If you have any thoughts, suggestions or questions, don’t hesitate to reach out.