Getting started with Kubernetes

K Prayogo
Jan 28, 2022 · 5 min read

For a better reading experience (code formatting), read it on the original blog linked at the bottom of this post.

Since containerization became more popular, Kubernetes gained more traction than deploying on a single VM. In a previous post I explained why or when you don't need Kubernetes, and when you will need it. From a deployment perspective, we can categorize the options into 5 types (based on ownership, initial cost, granularity of the recurring cost, and the need for capacity planning):

1. On-Premise Dedicated Server: our own server in our own rack, or placed in a colocation facility. We own the hardware, we have to replace it when it breaks, and we also have to maintain the network part ourselves. This is usually the best choice for internal services (software used only by internal staff), especially from the security and bandwidth perspective.

2. VM: we rent "cloud" infrastructure, which can be considered IaaS (Infrastructure as a Service). We rent a virtual machine/server, sometimes called a Virtual Private/Dedicated Server, and pay monthly while the server is turned on (or based on contract). Notable products in this category: Google Compute Engine, Amazon EC2, Azure VM, Contabo VPS/VDS, etc. This is usually best for databases (unless you are using a managed database service) or other stateful applications, or when the number of users is limited according to the capacity planning (not the whole world will be accessing this).

3. Kubernetes: we rent managed Kubernetes, or install Kubernetes on top of our own on-premise dedicated servers. Usually the company rents 3 huge servers (64 cores, 256 GB RAM, very large disks) and lets developers deploy containers/pods inside Kubernetes themselves, split by team or by service namespace. This has a constant cost (those 3 huge VMs, plus the managed service's cost); some providers also offer automatic node scale-out (the Kubernetes nodes/VMs, where the pods are scheduled, can be increased based on load). Notable products in this category: GKE, Amazon EKS, AKS, DOKS, Jelastic Kubernetes Cluster, etc.

4. Container Engine: we use the infrastructure provider's platform, so we only need to supply a container without having to rent a VM manually. Some providers deploy the container inside a single VM, others deploy it on a shared dedicated server/VM. Notable products in this category: Google AppEngine, Amazon ECS/Beanstalk/Fargate, Azure App Service, Jelastic Cloud, Heroku, etc. This is usually the best choice for most cases, both budget-wise and on the scalability side.

5. Serverless/FaaS: we only need to supply a function (mostly based on a specific template) that runs on a specific event (e.g. at a specific time like CRON, or when the load balancer receives a request, like in old CGI). Usually the function is put inside a container and kept as a standby instance, so scale-out only happens when it receives high load. If the function requires a database as a dependency, it's recommended to use a managed database that supports a high number of connections/connect-disconnects, or to offload writes to an MQ/PubSub service. Notable products in this category: Google CloudRun, AWS Lambda, Azure Functions, OpenFaaS, Netlify, Vercel, Cloudflare Workers, etc. We usually pay for this service based on CPU duration, number of calls, total RAM usage, bandwidth, and other metrics, so it is very cheap when the number of function calls is small, but can be really costly if you write inefficient functions or have a large number of calls. Usually lambdas are only used for handling spikes or as atomic CRON jobs (a minimal sketch of this programming model follows below).
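
To illustrate the FaaS programming model from category 5: a minimal Go HTTP function that a container-based FaaS like Google CloudRun could scale from zero. This is a sketch, not from the original post; PORT is the environment variable such platforms conventionally inject.

package main

import (
	"fmt"
	"net/http"
	"os"
)

// one request = one event; keep state outside (managed DB or MQ/PubSub)
func handler(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintf(w, "hello %s\n", r.URL.Path)
}

func main() {
	port := os.Getenv("PORT") // injected by the platform, eg. CloudRun
	if port == "" {
		port = "8080"
	}
	http.HandleFunc("/", handler)
	http.ListenAndServe(":"+port, nil)
}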

Because of the hype, or because it fits their use case (a bunch of teams that want to do independent service deployments), or for the possibility of avoiding vendor lock-in, sometimes a company might decide to use Kubernetes. Most companies could survive without following the hype, using only a managed database (or deploying the database on a VM, or even using docker-compose with volume binding) plus a container engine (for the scale-out strategy), without having to train everyone to learn Kubernetes.

But today we're gonna try one of the fastest local Kubernetes options for the development use case (not for production): minikube.

curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube

minikube start

# use --driver=kvm2 or virtualbox if docker cannot connect to the internet
#sudo apt install virtualbox
#sudo apt install qemu-kvm libvirt-daemon-system libvirt-clients bridge-utils virt-manager
#sudo adduser `id -un` libvirt
#sudo adduser `id -un` kvm
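# then recreate the cluster with the chosen driver, eg.
#minikube delete
#minikube start --driver=kvm2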

alias kubectl='minikube kubectl -- '
alias k=kubectl

# will download kubectl if it’s the first time
k

# get pods from all namespaces
k get po -A

# open dashboard and authenticate
minikube dashboard

# destroy minikube cluster
minikube ssh
sudo poweroff
minikube delete

Create a Dockerfile for whatever you want to deploy to the Kubernetes cluster; or, if it's just a simple single-binary Golang project, building locally, putting the binary into an alpine Docker image, then pushing to an image registry will work just fine:

# build binary
CGO_ENABLED=0 GOOS=linux go build -o ./bla.exe
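
The build above assumes a simple Go program already exists; a minimal hypothetical main.go that fits the pod spec further below:

package main

import (
	"fmt"
	"os"
)

func main() {
	// BLA_ENV is injected via the pod spec further below;
	// print it so `k logs bla-pod` has something to show
	fmt.Println("bla started, BLA_ENV =", os.Getenv("BLA_ENV"))
}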

# create Dockerfile
echo '
FROM alpine:latest
WORKDIR /
COPY bla.exe .
CMD ./bla.exe
' > Dockerfile
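
Alternatively (a sketch not in the original post; the golang:1.17-alpine tag is just an example), a multi-stage Dockerfile compiles inside Docker, so no local Go toolchain is needed:

echo '
FROM golang:1.17-alpine AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /bla.exe

FROM alpine:latest
COPY --from=build /bla.exe /bla.exe
CMD ./bla.exe
' > Dockerfile  # replaces the simpler Dockerfile above, if preferred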

# build docker image
VERSION=$(ruby -e 't = Time.now; print "v1.#{t.month+(t.year-2021)*12}%02d.#{t.hour}%02d" % [t.day, t.min]')
COMMIT=$(git rev-parse --verify HEAD)
APPNAME=local-bla
docker image build -f ./Dockerfile . \
  --build-arg "app_name=$APPNAME" \
  -t "$APPNAME:latest" \
  -t "$APPNAME:$COMMIT" \
  -t "$APPNAME:$VERSION"

# push image to minikube
minikube image load "$APPNAME:latest"
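
An alternative to minikube image load is pointing the local docker CLI at minikube's own Docker daemon before building, so the image is built directly inside the cluster (a standard minikube feature):

# alternative: build straight into minikube's docker daemon
eval $(minikube docker-env)
docker image build -f ./Dockerfile . -t "$APPNAME:latest"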

# create pod spec
echo '
apiVersion: v1
kind: Pod
metadata:
  name: bla-pod
spec:
  containers:
  - name: bla
    image: local-bla:latest
    imagePullPolicy: Never
    env:
    - name: BLA_ENV
      value: "ENV_VALUE_TO_INJECT"
    # if you need access to a docker-compose outside the kube cluster,
    # use minikube ssh, route -n, check the ip of the gateway,
    # and use that ip in the connection string;
    # it should work, as long as the port is forwarded
  restartPolicy: Never
' > bla-pod.yaml
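
With the docker driver, recent minikube versions also resolve host.minikube.internal to the host machine, which can often replace the gateway-IP trick in the comment above (the DB_HOST name below is hypothetical):

    env:
    - name: DB_HOST
      value: "host.minikube.internal" # resolves to the host, where docker-compose runs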

# deploy
kubectl apply -f bla-pod.yaml

# check
k get pods
k logs bla-pod

# delete deployment
kubectl delete pod bla-pod

If you need NewRelic log forwarding, it's as easy as adding a helm chart (it will automatically attach to new pods' logs and send them to NewRelic):

curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
helm repo add newrelic https://helm-charts.newrelic.com
helm search repo newrelic/
helm install newrelic-logging newrelic/newrelic-logging --set licenseKey=eu01xx2xxxxxxxxxxxxxxxRAL
kubectl get daemonset -o wide -w --namespace default newrelic-logging

The next step would be adding a load balancer or an Ingress so that the pod can receive HTTP requests.
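
A minimal sketch of that next step, exposing the pod with a NodePort Service (the bla-svc name, the app: bla label, and port 3000 are illustrative; the pod metadata above would also need the matching label):

echo '
apiVersion: v1
kind: Service
metadata:
  name: bla-svc
spec:
  type: NodePort
  selector:
    app: bla
  ports:
  - port: 3000
    targetPort: 3000
' > bla-svc.yaml

kubectl apply -f bla-svc.yaml

# open the service URL in the browser
minikube service bla-svc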

Originally published at http://kokizzu.blogspot.com.
