Using the standard kubectl tool to manage applications in the new version of CCE clusters from Telefónica Open Cloud

Fernando de la Iglesia
Jun 20, 2018

by Fernando de la Iglesia, Technology expert at Telefónica I+D

Some time ago, I wrote an entry on How to use kubectl to manage applications in CCE clusters from Telefónica Open Cloud. At that time, the Kubernetes management plane of the cluster had to be accessed through the CCE API, and therefore the kubectl tool had to be modified in order to be able to use that API.

In the new version of CCE this has changed: the cluster management can now be accessed directly, so we can use the standard kubectl tool. Let us see how to get the corresponding information and how to configure kubectl.

First of all, you need to create the CCE cluster. Nothing new here, the process is equivalent: you create the cluster using the console or the CCE API and, once it is created, you can start adding nodes. One difference is that now you can choose whether or not to attach an EIP to the nodes you create, depending on whether you need your containers to communicate with the Internet (by the way, this includes downloading container images from Docker Hub). In any case, you can attach an EIP to the node VM later on. For this example, and because I will deploy a container from Docker Hub, I will choose to attach an EIP to the node at creation time.

I will show how to use kubectl from my desktop, so I need to set up an externally accessible IP address for the cluster. If you choose instead to use kubectl from a virtual machine in the same VPC where the CCE cluster is created, you do not need to configure external access; the Internal Address (see image below) will work for you.

Once the cluster is created, let’s go to its “Basic Info” tab.

To access the cluster from my desktop, I will configure the “Externally accessible address” (see the image). This will ask for an EIP; you can use an EIP you already have assigned or click the link to assign one at this point.

Once the external (public) address is configured, we can download the certificates. These self-signed certificates are created taking into account the assigned EIP.

When you click the link (just once), you will download three certificate files: cacrt.txt (the CA certificate), clientcrt.txt (the client certificate) and clientkey.txt (the client key).

Before configuring kubectl with the server, credentials and context, we need to add the public management IP address to our local hosts file, using the special name the certificates mentioned above were issued for. For now, you cannot use a different name. The example below shows a Linux /etc/hosts entry:

# cat /etc/hosts
...
200.XXX.XXX.XXX kubernetes.default.svc.cluster.local
...
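
If you want to double check which names the API server certificate has actually been issued for (and therefore which name you must use in the hosts file), you can inspect the certificate presented on the management port, for example:

$ openssl s_client -connect kubernetes.default.svc.cluster.local:5443 </dev/null 2>/dev/null | openssl x509 -noout -text | grep -A1 "Subject Alternative Name"

The Subject Alternative Name entries should include kubernetes.default.svc.cluster.local.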

Now it is time to configure kubectl. I assume that you have already installed the kubectl tool on your client system. You can choose among several options for this task; in this link you can find a reference on how to install the tool for different systems. In my case, I installed it using the CentOS package as described in the page I am referring to.
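
For reference, on CentOS this basically consists of adding the Kubernetes yum repository described in that page and installing the package; once installed, you can check the client version:

# yum install -y kubectl
$ kubectl version --client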

You need to set the new cluster, credentials and context so that kubectl can access the cluster:

$ kubectl config set-cluster kubetest --server https://kubernetes.default.svc.cluster.local:5443 --certificate-authority=./cacrt.txt
Cluster "kubetest" set.
$ kubectl config set-credentials figlesia --certificate-authority=./cacrt.txt --client-key=./clientkey.txt --client-certificate=./clientcrt.txt
User "figlesia" set.
$ kubectl config set-context context_kubetest --cluster=kubetest --user=figlesia
Context "context_kubetest" created.
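
If you are curious about how these commands are stored, they simply populate your kubeconfig file (by default ~/.kube/config). Its contents should look roughly like this (relative certificate paths may be expanded to absolute ones):

$ kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority: cacrt.txt
    server: https://kubernetes.default.svc.cluster.local:5443
  name: kubetest
contexts:
- context:
    cluster: kubetest
    user: figlesia
  name: context_kubetest
...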

Now configure kubectl to use this new context:

$ kubectl config set current-context context_kubetest
Property "current-context" set.
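
Before deploying anything, it is a good idea to verify that kubectl can actually reach the cluster. The exact output will depend on your cluster, but it should point to the address we configured and list the node (or nodes) you added in Ready status:

$ kubectl cluster-info
Kubernetes master is running at https://kubernetes.default.svc.cluster.local:5443
$ kubectl get nodes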

Great! We are now ready to start using kubectl with our new cluster. Let’s deploy a simple application (exactly the same one we deployed in the previous version, described in the section “Creating your first application”) with a replication controller and the corresponding service.
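
I am not reproducing here the exact manifests from that entry, but they look roughly like the following sketch; note that simplenode-image is just a placeholder for whatever simple web application image (listening on port 8080) you want to pull from Docker Hub:

# node-rc.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: simplenode
spec:
  replicas: 1
  selector:
    app: simplenode
  template:
    metadata:
      labels:
        app: simplenode
    spec:
      containers:
      - name: simplenode
        image: simplenode-image   # placeholder: replace with your own image
        ports:
        - containerPort: 8080

# node-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: simplenode
spec:
  type: NodePort
  selector:
    app: simplenode
  ports:
  - port: 8080
    targetPort: 8080

With those two files in place, we create the objects: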

$ kubectl create -f node-rc.yaml
replicationcontroller "simplenode" created
$ kubectl create -f node-service.yaml
service "simplenode" created
$ kubectl get rc
NAME         DESIRED   CURRENT   READY     AGE
simplenode   1         1         1         2m
$ kubectl get service
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
kubernetes   ClusterIP   10.247.0.1      <none>        443/TCP          2d
simplenode   NodePort    10.247.170.55   <none>        8080:32288/TCP   46s
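
You can also check that the pod managed by the replication controller is actually running (the pod name suffix is generated, so yours will differ):

$ kubectl get pods
NAME               READY     STATUS    RESTARTS   AGE
simplenode-xxxxx   1/1       Running   0          2m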

Let us check the service we have just deployed by requesting it on the NodePort at the public address of the Kubernetes node that we configured at the beginning:

$ curl -i http://200.XXX.XXX.X44:32288
HTTP/1.1 200 OK
X-Powered-By: Express
Content-Type: text/html; charset=utf-8
Content-Length: 45
ETag: W/"2d-xUePkkhYSerb0p3OaM1vlVwkBwg"
Date: Mon, 04 Jun 2018 10:31:22 GMT
Connection: keep-alive

Hello world<br>This is a very simple example

Just note that when the cluster node is created, it is assigned a default security group that already has the appropriate ports open for NodePort services.

Pretty easy, as you can see. With this new version you can start managing your Open Cloud CCE clusters with the same tool you use for your current clusters, and grow easily and seamlessly in Open Cloud.

