Kubernetes
Kubernetes deployment port and service port
- The containerPort declared in a Deployment is informational only; it does not expose the port outside the Pod.
- The Service's port and targetPort bridge the gap between the outside world and your containerized application: port is where the Service listens, and targetPort is the container port the traffic is forwarded to. This makes the application running in the Pods reachable from other Pods in the cluster or, depending on the Service type, from outside the cluster.
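A minimal sketch of how the two fit together (the names web and web-svc are placeholders): the Service's targetPort must match the port the container actually listens on, while containerPort in the Deployment is purely documentation.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx
          ports:
            - containerPort: 80   # informational only
---
apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  selector:
    app: web
  ports:
    - port: 80        # port the Service listens on
      targetPort: 80  # port the container listens on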
Kubernetes list containers
Since Kubernetes 1.20 the Docker runtime integration (dockershim) has been deprecated, and it was removed in Kubernetes 1.24. Kubernetes now relies on OCI (Open Container Initiative) compatible runtimes such as containerd and CRI-O, which are used instead of Docker to run containers in a Kubernetes cluster.
crictl: If you are using CRI-O or containerd as your container runtime, you can use the crictl command-line tool to manage the containers.
VERSION="v1.26.0"
wget https://github.com/kubernetes-sigs/cri-tools/releases/download/$VERSION/crictl-$VERSION-linux-amd64.tar.gz
sudo tar zxvf crictl-$VERSION-linux-amd64.tar.gz -C /usr/local/bin
rm -f crictl-$VERSION-linux-amd64.tar.gz
sudo mkdir -p /etc/crictl
sudo nano /etc/crictl/crictl.yaml   # put the following four lines into the file
runtime-endpoint: unix:///var/run/containerd/containerd.sock
image-endpoint: unix:///var/run/containerd/containerd.sock
timeout: 10
debug: false
crictl --config /etc/crictl/crictl.yaml ps
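A few other crictl subcommands that are useful with the same config file (the container id is a placeholder):
crictl --config /etc/crictl/crictl.yaml pods     # list pod sandboxes
crictl --config /etc/crictl/crictl.yaml images   # list pulled images
crictl --config /etc/crictl/crictl.yaml logs <container-id>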
Microk8s list containers
microk8s.ctr c ls
Kubernetes run container from image and enter shell
kubectl run -it busybox --image=busybox -- sh
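If you only need a throwaway shell, add --rm and --restart=Never so the pod is deleted when you exit:
kubectl run -it --rm busybox --image=busybox --restart=Never -- sh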
Kubernetes get event of the specific pod
kubectl get events --field-selector involvedObject.name=deployment-6f585b5848-7swk5
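Field selectors can be combined to narrow the result further, e.g. by object kind and namespace (the pod name here is just an example):
kubectl get events -n default --field-selector involvedObject.kind=Pod,involvedObject.name=deployment-6f585b5848-7swk5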
Kubernetes index page with ConfigMap
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-blue
  labels:
    app: nginx-blue
spec:
  replicas: 5
  selector:
    matchLabels:
      app: nginx-blue
  template:
    metadata:
      labels:
        app: nginx-blue
    spec:
      containers:
        - name: nginx
          image: nginx
          volumeMounts:
            - name: index-page-volume
              mountPath: /usr/share/nginx/html
      volumes:
        - name: index-page-volume
          configMap:
            name: index-page-blue-configmap
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: index-page-blue-configmap
data:
  index.html: |
    <html>
      <head>
        <title>Version Blue</title>
        <style>
          body {
            background-color: rgb(153, 204, 255);
          }
        </style>
      </head>
      <body>
        <h1>Version Blue page</h1>
      </body>
    </html>
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-blue-svc
spec:
  selector:
    app: nginx-blue
  ports:
    - port: 80
      targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: blue-ingress
  annotations:
spec:
  defaultBackend:
    service:
      name: nginx-blue-svc
      port:
        number: 80
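To try it out (assuming an ingress controller is already installed in the cluster; the file name blue.yaml is arbitrary):
kubectl apply -f blue.yaml
kubectl get pods -l app=nginx-blue
kubectl get ingress blue-ingress
# Then open the ingress address in a browser; nginx serves the blue index.html from the ConfigMap.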
Load balancer service
apiVersion: v1
kind: Service
metadata:
  name: lb-service
spec:
  ports:
    # Port of the network load balancer that serves incoming user requests.
    - port: 80
      name: plaintext
      # Container port on which the application is available.
      targetPort: 80
  # Selector labels used in the pod template of the Deployment.
  selector:
    app: pma
  type: LoadBalancer
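After applying the manifest, the cloud provider provisions an external address, which appears in the EXTERNAL-IP column (the file name lb-service.yaml is just an example):
kubectl apply -f lb-service.yaml
kubectl get service lb-service --watch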
Deploy a simple Docker image and a Kubernetes Service (LoadBalancer) using only kubectl commands, without YAML files
kubectl run phpmyadmin --image=phpmyadmin/phpmyadmin --port=80 --env="PMA_HOST=dbhost.example.com"
kubectl expose pod phpmyadmin --type=LoadBalancer --port=80 --target-port=80
These commands create a pod named phpmyadmin from the phpmyadmin/phpmyadmin Docker image, expose port 80 on the container, and set the PMA_HOST environment variable to dbhost.example.com. The second command creates a LoadBalancer Service that maps to the pod's port 80.
After executing these commands, run kubectl get services to retrieve the external IP of the LoadBalancer Service. Open that external IP in a browser and log in to phpMyAdmin with your MySQL user credentials.
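Once the load balancer has been provisioned, the external address can also be read directly with jsonpath (the field is .ip or .hostname depending on the cloud provider):
kubectl get service phpmyadmin -o jsonpath='{.status.loadBalancer.ingress[0].ip}'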
emptyDir volume
The emptyDir volume is created when a pod is created, and all the containers within the same pod can read from and write to that volume. However, the content of the emptyDir is ephemeral by nature and is erased when the pod is deleted.
For example, we can run a log shipper container alongside our main application container. The main application container will write the logs to the emptyDir volume and the log shipper container will tail the logs and ship them to a remote target.
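A minimal sketch of this pattern (image names and paths are illustrative; busybox stands in for both the application and the log shipper):
apiVersion: v1
kind: Pod
metadata:
  name: app-with-log-shipper
spec:
  containers:
    - name: app
      image: busybox
      # The "application" appends a timestamp to the shared log file every few seconds.
      command: ["sh", "-c", "while true; do date >> /var/log/app/app.log; sleep 5; done"]
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
    - name: log-shipper
      image: busybox
      # Stand-in for a real shipper (fluent-bit, filebeat, ...): here it just tails the file.
      command: ["sh", "-c", "tail -F /var/log/app/app.log"]
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
  volumes:
    - name: logs
      emptyDir: {}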
Check Kubernetes secret
kubectl get secret monitoring-grafana -o jsonpath="{.data.admin-password}" -n monitoring | base64 --decode
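To list only the keys a secret holds, without decoding the values:
kubectl describe secret monitoring-grafana -n monitoring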