Helm, etcd and CoreDNS on Telefónica Open Cloud

Fernando de la Iglesia
Sep 25, 2018 · 10 min read

by Fernando de la Iglesia, Technology expert at Telefónica I+D

A few weeks ago we saw how to use kubectl with the CCE container service in Open Cloud. In this post we will see how to use one of the most popular package managers for Kubernetes, Helm, and with it deploy a popular distributed key-value store, etcd, and a popular DNS and service discovery server, CoreDNS, which can use etcd as a backend.

Helm

As defined on their home page, “Helm helps you manage Kubernetes applications — Helm Charts help you define, install, and upgrade even the most complex Kubernetes application”. As said before, Helm is one of the most popular package managers for Kubernetes. Enabling Helm to use CCE clusters is quite easy. Helm consists of two parts, the client (helm) and the server (Tiller). Following the instructions on the Helm GitHub page, first you need to install the client on the same system where you already installed and configured kubectl (following the guidelines in the referenced post). In my case I just downloaded the Linux binary and placed it in my path. By default, Helm uses the kubectl config file (in ~/.kube/) to connect to the current kubectl context.
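For reference, a minimal sketch of the client installation on a Linux box (the Helm version and download URL are just an example, adjust them to your setup):

# download and unpack the Helm v2 client binary (adjust the version as needed)
$ curl -LO https://storage.googleapis.com/kubernetes-helm/helm-v2.9.1-linux-amd64.tar.gz
$ tar -zxvf helm-v2.9.1-linux-amd64.tar.gz
# place the binary somewhere in your PATH
$ sudo mv linux-amd64/helm /usr/local/bin/helm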

Once you have the client installed, the next step is to install Tiller, which is as simple as

$ helm init
$HELM_HOME has been configured at ~/.helm.
Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!

This is the way to install the Helm server (Tiller) in CCE, because RBAC is not configured in CCE clusters in the current version.
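For reference only, on a cluster where RBAC is enabled you would typically create a service account for Tiller and bind it to a role before initializing, along these lines:

$ kubectl create serviceaccount tiller --namespace kube-system
$ kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
$ helm init --service-account tiller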

You can verify that Tiller is installed and running in the kube-system namespace of your CCE cluster

$ kubectl get pods --namespace kube-system
NAME READY STATUS RESTARTS AGE
fluentd-elasticsearch-10.13.3.34 2/2 Running 0 70d
heapster-v1.4.0-2395533666-fgr92 2/2 Running 0 70d
kube-dns-v17-37r7l 3/3 Running 0 70d
kube-proxy-10.13.3.34 1/1 Running 0 70d
tiller-deploy-2838256982-ppsrx 1/1 Running 0 2m
$ helm version
Client: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}

etcd-operator

etcd-operator is an easy way to manage etcd clusters deployed in Kubernetes and to automate management tasks such as creating and destroying clusters, resizing them, backup and restore, etc. And it is quite easy to deploy etcd-operator using Helm. First, let us search for a stable version of the corresponding Helm chart

$ helm search stable/etcd-operator
NAME CHART VERSION APP VERSION DESCRIPTION
stable/etcd-operator 0.7.7 0.7.0 CoreOS etcd-operator Helm chart for Kubernetes

If we inspect the chart we can see that by default it uses RBAC, which we know is not configured in CCE clusters, but fortunately there is the possibility of not using RBAC

$ helm inspect stable/etcd-operator
apiVersion: v1
appVersion: 0.7.0
description: CoreOS etcd-operator Helm chart for Kubernetes
home: https://github.com/coreos/etcd-operator
...
## RBAC
By default the chart will install the recommended RBAC roles and rolebindings.
...
To disable RBAC do the following:```console
$ helm install --name my-release stable/etcd-operator --set rbac.create=false
```
...
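If you only want to see the configurable defaults, without the README text, you can also inspect just the values:

$ helm inspect values stable/etcd-operator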

Therefore, we can use Helm to deploy etcd-operator and afterwards use it to create an etcd cluster. In order to show some other details and keep our resources in the Kubernetes cluster organized, let us install etcd-operator, the etcd cluster and CoreDNS in a new namespace called my-dns-coredns.

$ kubectl create ns my-dns-coredns
namespace "my-dns-coredns" created
$ kubectl get ns
NAME STATUS AGE
default Active 84d
kube-public Active 84d
kube-system Active 84d
my-dns-coredns Active 4s

Now deploy etcd-operator in this new namespace without using RBAC

$ helm install --namespace my-dns-coredns --name etcd-operator stable/etcd-operator --set rbac.create=false
NAME: etcd-operator
LAST DEPLOYED: Fri Aug 17 09:11:38 2018
NAMESPACE: my-dns-coredns
STATUS: DEPLOYED
RESOURCES:
==> v1/ServiceAccount
NAME SECRETS AGE
etcd-operator-etcd-operator-etcd-backup-operator 1 1s
etcd-operator-etcd-operator-etcd-operator 1 1s
etcd-operator-etcd-operator-etcd-restore-operator 1 1s
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
etcd-restore-operator ClusterIP 10.247.132.190 <none> 19999/TCP 1s
==> v1beta1/Deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
etcd-operator-etcd-operator-etcd-backup-operator 1 1 1 0 1s
etcd-operator-etcd-operator-etcd-operator 1 1 1 0 1s
etcd-operator-etcd-operator-etcd-restore-operator 1 1 1 0 1s
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
etcd-operator-etcd-operator-etcd-backup-operator-4085030617rsfm 0/1 ContainerCreating 0 1s
etcd-operator-etcd-operator-etcd-operator-2106553830-f5p5t 0/1 ContainerCreating 0 1s
etcd-operator-etcd-operator-etcd-restore-operator-19698260g94rs 0/1 ContainerCreating 0 1s
NOTES:
1. etcd-operator deployed.
If you would like to deploy an etcd-cluster set cluster.enabled to true in values.yaml
Check the etcd-operator logs
export POD=$(kubectl get pods -l app=etcd-operator-etcd-operator-etcd-operator --namespace my-dns-coredns --output name)
kubectl logs $POD --namespace=my-dns-coredns

After a few seconds, we can check that all the required resources have been deployed correctly

$ helm status etcd-operator
LAST DEPLOYED: Fri Aug 17 09:11:38 2018
NAMESPACE: my-dns-coredns
STATUS: DEPLOYED
RESOURCES:
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
etcd-restore-operator ClusterIP 10.247.132.190 <none> 19999/TCP 1m
==> v1beta1/Deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
etcd-operator-etcd-operator-etcd-backup-operator 1 1 1 1 1m
etcd-operator-etcd-operator-etcd-operator 1 1 1 1 1m
etcd-operator-etcd-operator-etcd-restore-operator 1 1 1 1 1m
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
etcd-operator-etcd-operator-etcd-backup-operator-4085030617rsfm 1/1 Running 0 1m
etcd-operator-etcd-operator-etcd-operator-2106553830-f5p5t 1/1 Running 0 1m
etcd-operator-etcd-operator-etcd-restore-operator-19698260g94rs 1/1 Running 0 1m
==> v1/ServiceAccount
NAME SECRETS AGE
etcd-operator-etcd-operator-etcd-backup-operator 1 1m
etcd-operator-etcd-operator-etcd-operator 1 1m
etcd-operator-etcd-operator-etcd-restore-operator 1 1m
NOTES:
1. etcd-operator deployed.
If you would like to deploy an etcd-cluster set cluster.enabled to true in values.yaml
Check the etcd-operator logs
export POD=$(kubectl get pods -l app=etcd-operator-etcd-operator-etcd-operator --namespace my-dns-coredns --output name)
kubectl logs $POD --namespace=my-dns-coredns

To proceed to create an etcd cluster, we need to enable the creation of the custom resources in the etcd-operator deployment, because it is set to false by default

$ helm inspect stable/etcd-operator
...
| `customResources.createEtcdClusterCRD` | Create a custom resource: EtcdCluster | `false` |
| `customResources.createBackupCRD` | Create an a custom resource: EtcdBackup | `false` |
| `customResources.createRestoreCRD` | Create an a custom resource: EtcdRestore | `false`
...
$ helm upgrade --namespace my-dns-coredns --set rbac.create=false,customResources.createEtcdClusterCRD=true,customResources.createBackupCRD=true,customResources.createRestoreCRD=true etcd-operator stable/etcd-operator
... (output) ...
$ kubectl get pods --namespace my-dns-coredns
NAME READY STATUS RESTARTS AGE
etcd-cluster-0000 1/1 Running 0 4m
etcd-cluster-0001 1/1 Running 0 4m
etcd-cluster-0002 1/1 Running 0 4m
etcd-operator-etcd-operator-etcd-backup-operator-408503061wzzj2 1/1 Running 0 18m
etcd-operator-etcd-operator-etcd-operator-2106553830-fsrsz 1/1 Running 0 18m
etcd-operator-etcd-operator-etcd-restore-operator-196982606mv0n 1/1 Running 0 18m

As you can see, the cluster is created. We can check the status of the cluster members by running a temporary pod that has the etcd tools already installed

$ kubectl run --rm -i --tty --env="ETCDCTL_API=3" --env="ETCDCTL_ENDPOINTS=http://etcd-cluster-client:2379" --namespace my-dns-coredns etcd-test --image quay.io/coreos/etcd --restart=Never -- /bin/sh -c 'etcdctl member list'
66ab26b4b2a3b0c1, started, etcd-cluster-0001, http://etcd-cluster-0001.etcd-cluster.my-dns-coredns.svc:2380, http://etcd-cluster-0001.etcd-cluster.my-dns-coredns.svc:2379
fd15540bd064405e, started, etcd-cluster-0002, http://etcd-cluster-0002.etcd-cluster.my-dns-coredns.svc:2380, http://etcd-cluster-0002.etcd-cluster.my-dns-coredns.svc:2379
fdb37621e9c1e39f, started, etcd-cluster-0000, http://etcd-cluster-0000.etcd-cluster.my-dns-coredns.svc:2380, http://etcd-cluster-0000.etcd-cluster.my-dns-coredns.svc:2379
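The same approach can be used to check the health of the cluster endpoints; for example (etcd-health is just a throwaway pod name):

$ kubectl run --rm -i --tty --env="ETCDCTL_API=3" --env="ETCDCTL_ENDPOINTS=http://etcd-cluster-client:2379" --namespace my-dns-coredns etcd-health --image quay.io/coreos/etcd --restart=Never -- /bin/sh -c 'etcdctl endpoint health'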

CoreDNS with etcd as a backend

As a very interesting example of using the etcd cluster we just created, let us deploy CoreDNS with etcd as a backend. CoreDNS can be used as a DNS server and for service discovery.

Among the different options we have to deploy our DNS, we will show how to expose an external DNS with a LoadBalancer service type. So first of all, let us deploy an Elastic Load Balancer with an Elastic IP from the Open Cloud web console.

Nothing special to configure.

However, in order to allow the resources created in the CCE cluster in the namespace my-dns-coredns to add configuration (backends, etc.) to the Load Balancer we just created, we need to add a secret resource to this namespace. Remember that when the CCE cluster was created, the platform asked you for your AK/SK for that purpose. That action created the appropriate secret in the default namespace. Because we created the etcd-operator and the etcd cluster, and we are going to create CoreDNS, in a different namespace, we need to create the same secret in that namespace

$ kubectl get secret
NAME TYPE DATA AGE
default-token-3h837 kubernetes.io/service-account-token 3 80d
paas.elb Opaque 2 80d

Create a text file on your client side including your AK/SK and create the secret in the namespace

$ cat secret-pass.elb.yaml
apiVersion: v1
kind: Secret
metadata:
  name: paas.elb
data:
  access.key: NTlNV0dHR0dMM0AjarenarePVkU=
  secret.key: WWFNZ2dvdHVBZlRuRDVSelAjarenareVzZXd2cDB0TTVXON3d3hmRg==
$ kubectl create -f secret-pass.elb.yaml --namespace my-dns-coredns
secret "paas.elb" created

Now it is time to deploy the CoreDNS Helm chart. Before that, we need to create a values.yaml file so we do not use the chart's default values: for this example we don't want CoreDNS to act as the CCE (Kubernetes) cluster DNS service, we want to use a LoadBalancer service type as stated before, we want to use the etcd cluster as a backend and, as in other popular cloud services, we can expose the DNS service on only one protocol, TCP or UDP, but not both. In addition, we will define the DNS zone, enable logging and use a CoreDNS image version newer than the default one, which is too old. All these options are configured in the values.yaml file

$ cat values.yaml
isClusterService: false
serviceType: "LoadBalancer"
serviceProtocol: "TCP"
plugins:
  kubernetes:
    enabled: false
  etcd:
    enabled: true
    zones:
    - "opencloudcoredns.com."
    endpoint: "http://etcd-cluster.my-dns-coredns:2379"
  log:
    enabled: true
image:
  tag: "1.2.0"

Before deploying the chart with these values, we need to know that the Elastic Load Balancer (ELB) service is based on Nginx and, as with all Nginx-based LB services, the listener will not be enabled until the backends are online. That means that after deploying the chart we need to edit the deployed service and add the EIP in the loadBalancerIP field of the spec section (see below). Let us proceed.

First let us deploy CoreDNS

$ helm install --namespace my-dns-coredns --name coredns -f values.yaml stable/coredns
...
$ helm status coredns
LAST DEPLOYED: Tue Aug 21 14:54:56 2018
NAMESPACE: my-dns-coredns
STATUS: DEPLOYED
RESOURCES:
==> v1/ConfigMap
NAME DATA AGE
coredns-coredns 1 36s
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
coredns-coredns LoadBalancer 10.247.141.166 <pending> 53:32353/TCP,9153:30486/TCP 36s
==> v1beta1/Deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
coredns-coredns 1 1 1 1 36s
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
coredns-coredns-516803778-rqt88 1/1 Running 0 36s
NOTES:
CoreDNS is now running in the cluster.
It can be accessed using the below endpoint
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status by running 'kubectl get svc -w coredns-coredns'
export SERVICE_IP=$(kubectl get svc --namespace my-dns-coredns coredns-coredns -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo $SERVICE_IP
It can be tested with the following:
1. Launch a Pod with DNS tools:
kubectl run -it --rm --restart=Never --image=infoblox/dnstools:latest dnstools
2. Query the DNS server:
/ # host kubernetes
$ kubectl get services --namespace my-dns-coredns
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
coredns-coredns LoadBalancer 10.247.141.166 <pending> 53:32353/TCP,9153:30486/TCP 1m
etcd-cluster ClusterIP None <none> 2379/TCP,2380/TCP 1d
etcd-cluster-client ClusterIP 10.247.53.33 <none> 2379/TCP 1d
etcd-restore-operator ClusterIP 10.247.128.150 <none> 19999/TCP 1d

As you can see, the corresponding service's external IP remains in the <pending> state until we edit the service and add the EIP in the loadBalancerIP field of the spec section, as said before

$ kubectl edit service coredns-coredns --namespace my-dns-coredns
...
uid: 6787359c-a541-11e8-89fe-fa16d03430ee
spec:
  loadBalancerIP: 200.XXX.XXX.XXX
  clusterIP: 10.247.141.166
  externalTrafficPolicy: Cluster
  ports:
  - name: dns-tcp
    nodePort: 32353
    port: 53
    protocol: TCP
    targetPort: 53
  - name: metrics
    nodePort: 30486
    port: 9153
    protocol: TCP
    targetPort: 9153
  selector:
    app: coredns-coredns
  sessionAffinity: None
  type: LoadBalancer
status:
...
$ kubectl get services --namespace my-dns-coredns
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
coredns-coredns LoadBalancer 10.247.141.166 200.XXX.XXX.XXX 53:32353/TCP,9153:30486/TCP 16m
etcd-cluster ClusterIP None <none> 2379/TCP,2380/TCP 1d
etcd-cluster-client ClusterIP 10.247.53.33 <none> 2379/TCP 1d
etcd-restore-operator ClusterIP 10.247.128.150 <none> 19999/TCP 1d
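Alternatively, instead of an interactive edit, the same change can be applied non-interactively with kubectl patch (substitute your actual EIP):

$ kubectl patch service coredns-coredns --namespace my-dns-coredns -p '{"spec":{"loadBalancerIP":"200.XXX.XXX.XXX"}}'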

Cool, our DNS is already working and we can query the zone we created with dig

$ dig opencloudcoredns.com. +tcp @200.XXX.XXX.XXX

; <<>> DiG 9.9.4-RedHat-9.9.4-51.el7 <<>> opencloudcoredns.com. +tcp @200.XXX.XXX.XXX
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 5972
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;opencloudcoredns.com. IN A
;; AUTHORITY SECTION:
opencloudcoredns.com. 30 IN SOA ns.dns.opencloudcoredns.com. hostmaster.opencloudcoredns.com. 1536916495 7200 1800 86400 30
;; Query time: 195 msec
;; SERVER: 200.XXX.XXX.XXX#53(200.XXX.XXX.XXX)
;; WHEN: Fri Sep 14 11:14:55 CEST 2018
;; MSG SIZE rcvd: 103

And adding DNS records using the etcd backend is as simple as

$ etcdctl put /skydns/com/opencloudcoredns/x1 '{"ttl":60,"text":"Simpletext"}'

To execute this, you can create a temporary container with the etcd tools installed, as before, and connect to it:

$ kubectl run --rm -i --tty --env="ETCDCTL_API=3" --env="ETCDCTL_ENDPOINTS=http://etcd-cluster.my-dns-coredns:2379" --namespace my-dns-coredns etcd-test --image quay.io/coreos/etcd --restart=Never -- /bin/sh
If you don't see a command prompt, try pressing enter.
/ # etcdctl put /skydns/com/opencloudcoredns/x1 '{"ttl":60,"text":"Simpletext"}'
OK
/ # etcdctl put /skydns/com/opencloudcoredns/www '{"ttl":60,"host":"1.1.1.12"}'
OK
/ # exit
$

Now we can query the DNS records from the internet

$ dig +short +tcp @200.XXX.XXX.XXX TXT opencloudcoredns.com
"Simpletext"
$ dig +short +tcp @200.XXX.XXX.XXX www.opencloudcoredns.com
1.1.1.12
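For illustration, any additional record is just another key under the zone path, following the same skydns message format; for example, a hypothetical db subdomain:

/ # etcdctl put /skydns/com/opencloudcoredns/db '{"ttl":60,"host":"1.1.1.13"}'
$ dig +short +tcp @200.XXX.XXX.XXX db.opencloudcoredns.com   # should return 1.1.1.13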

These are tools that you can leverage in Open Cloud CCE to create the infrastructure to support microservices-based architectures.
