docker
Docker show containers full output
docker ps --all --no-trunc --format='{{json .}}' | jq
Copy a file out of a container, edit it on your host, and copy it back
docker cp <container>:/path/to/file.ext .
docker cp file.ext <container>:/path/to/file.ext
Copy all files from the anchors directory on the host to the ca-certificates directory in the weblate-localization-server Docker container
docker cp /etc/pki/ca-trust/source/anchors/. weblate-localization-server:/usr/local/share/ca-certificates/
Attach to running container
docker exec -it --user root <container-id> /bin/bash
Install an app in a container
docker exec -it --user root <container-id> /bin/bash
apt-get install nano
Edit a file inside a container when no editor is installed
cat > file_to_edit
- Write or paste your text
- Don't forget to leave a blank line at the end of the file
- Press Ctrl+D to write the file and exit
Copy content of local file to container file
docker exec -it <container-id> sh -c 'cat > /app/container-file.java' < local-file.java
Copy file from local to container
docker cp main.py my-container:/data/scripts/
docker exec -it my-container python /data/scripts/main.py
Run image with published ports
docker run -d -p 80:80 --name nginx nginx-image
Run command inside container
docker exec -it CONTAINER_NAME /bin/sh -c "nginx -t && nginx -s reload"
Run container in privileged mode (allows running systemd services)
docker run --name container -itd --privileged=true centos:7 /usr/sbin/init
Run container with memory restriction (1G limit)
docker run -m 1gb amazoncorretto:11.0.13-alpine java -XX:MaxRAMPercentage=60.0 -XshowSettings:vm -version
Path to container logs on the host
/var/lib/docker/containers/*/*.log
Docker socket
/var/run/docker.sock
Container is temporary
A container persists after it exits, unless you started it using the --rm argument to docker run
$ docker run -it ubuntu:14.04 /bin/bash
# date > example_file
# exit
Since we exited our shell, the container is no longer running:
$ docker ps
But with the -a option (docker ps -a) we can see it. We can then restart it and re-attach to it:
$ docker start 79aee3e2774e
$ docker attach 79aee3e2774e
And the file we created earlier is still there:
/ # cat example_file
When you use docker run to start a container, it actually creates a new container based on the image you specified. Note that you can restart an existing container after it exits, and your changes are still there.
docker ps -ql
docker start <container-id> # restart it in the background
docker exec -it <container-id> /bin/bash # exec is used in place of run, and on a container ID rather than an image
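As a counter-example, a container started with the --rm flag mentioned above is removed automatically when the shell exits:
docker run --rm -it ubuntu:14.04 /bin/bash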
Docker run
The following command runs an ubuntu container, attaches interactively to your local command-line session, and runs /bin/bash.
docker run -i -t ubuntu /bin/bash
When you run this command, the following happens (assuming you are using the default registry configuration):
- If you do not have the ubuntu image locally, Docker pulls it from your configured registry, as though you had run docker pull ubuntu manually.
- Docker creates a new container, as though you had run a docker container create command manually.
- Docker allocates a read-write filesystem to the container, as its final layer. This allows a running container to create or modify files and directories in its local filesystem.
- Docker creates a network interface to connect the container to the default network, since you did not specify any networking options. This includes assigning an IP address to the container. By default, containers can connect to external networks using the host machine's network connection.
- Docker starts the container and executes /bin/bash. Because the container is running interactively and attached to your terminal (due to the -i and -t flags), you can provide input using your keyboard while the output is logged to your terminal.
- When you type exit to terminate the /bin/bash command, the container stops but is not removed. You can start it again or remove it.
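For example, once the container has stopped you can start it again or remove it (the container ID comes from docker ps):
docker ps --all --latest           # find the stopped container
docker start -ai <container-id>    # start it again and attach
docker rm <container-id>           # or remove it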
Docker volume maps
docker run -d --name=netdata \
-p 19999:19999 \
-v netdataconfig:/etc/netdata \
-v netdatalib:/var/lib/netdata \
-v netdatacache:/var/cache/netdata \
-v /etc/passwd:/host/etc/passwd:ro \
-v /etc/group:/host/etc/group:ro \
-v /proc:/host/proc:ro \
-v /sys:/host/sys:ro \
-v /etc/os-release:/host/etc/os-release:ro \
--restart unless-stopped \
--cap-add SYS_PTRACE \
--security-opt apparmor=unconfined \
netdata/netdata
Remove all containers
docker images
docker container inspect jovial_payne
docker ps -a -q
docker rm -f $(docker ps -a -q)
docker ps -a
Script to force-remove running containers created more than 90 days ago
docker ps --format='{{.ID}}' | xargs -n 1 -r docker inspect -f '{{.ID}} {{.State.Running}} {{.Created}}' | awk '$2 == "true" && $3 <= "'$(date -d '90 days ago' -Ins --utc | sed 's/+00:00/Z/' | sed 's/,/./')'" { print $1 }' | xargs -r docker rm --force
Save and restore images to local file
docker image save -o images.tar image1 [image2 ...]
docker image load -i images.tar
Docker metrics
docker system df
docker stats
Docker logs
docker logs
docker events
Docker configure custom bridge network
docker network create --driver bridge --subnet=10.11.2.0/24 --gateway=10.11.2.1 custom-docker-bridge
docker network ls
docker network inspect custom-docker-bridge
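To attach a container to this network (nginx here is only an example image; the static IP must come from the subnet above):
docker run -d --name web-on-custom --network custom-docker-bridge --ip 10.11.2.10 nginx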
Docker network host mode
If the container runs with --network host, it uses the host's default network namespace, which also allows it to manipulate the host's iptables rules if it runs with the NET_ADMIN capability. This also means that the network adapters the container sees are the host's network adapters, not the virtual ones created by Docker.
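A minimal sketch (nginx is just an example image): the container binds ports directly on the host, so -p is not needed; --cap-add NET_ADMIN is only required if it must change the host's iptables rules:
docker run -d --network host --cap-add NET_ADMIN --name nginx-host nginx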
Docker run with environment variables
docker run -dit -e MSSQL_PID='Developer' -e MSSQL_SA_PASSWORD='GukjCgwN9hek' -e MSSQL_MEMORY_LIMIT_MB=2048 -e ACCEPT_EULA=Y -p 1439:1433 --name sql09 --network=custom-docker-bridge sql-image
Note: it is not possible to overwrite an environment variable in a running container. To change an environment variable in an existing container, you have to delete and recreate the container.
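For example, to change MSSQL_SA_PASSWORD for the sql09 container above, recreate it (the new value is illustrative; data kept only in the container's writable layer is lost unless it lives in a volume):
docker rm -f sql09
docker run -dit -e MSSQL_PID='Developer' -e MSSQL_SA_PASSWORD='NewSecretPass1' -e MSSQL_MEMORY_LIMIT_MB=2048 -e ACCEPT_EULA=Y -p 1439:1433 --name sql09 --network=custom-docker-bridge sql-image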
Docker execute command in container
docker exec sql09 cat /var/opt/mssql/log/errorlog
List docker volumes by container
docker ps -a --format '{{ .ID }}' | xargs -I {} docker inspect -f '{{ .Name }}{{ printf "\n" }}{{ range .Mounts }}{{ printf "\n\t" }}{{ .Type }} {{ if eq .Type "bind" }}{{ .Source }}{{ end }}{{ .Name }} => {{ .Destination }}{{ end }}{{ printf "\n" }}' {}
Set docker container to always restart
docker inspect --format '{{json .HostConfig.RestartPolicy}}' <container name>
{"Name":"","MaximumRetryCount":0}
docker update --restart always <container name>
docker inspect --format '{{json .HostConfig.RestartPolicy}}' <container name>
{"Name":"always","MaximumRetryCount":0}
Check if process is running inside container or locally
- First find the process ID
ps aux | grep node
root 41142 0.0 0.3 249612 32268 ? Sl 15:23 0:00 node index.js
- Then check cgroup of the process
cat /proc/41142/cgroup
0::/system.slice/docker-00212452eb8c4cb9e9aaa28d545a24ca17495d254743d3e643966febb890cbfc.scope/kubepods/besteffort/pod4090946f-8dea-430c-85cf-3b8a32ca30b4/e769093d27768be73ace66d1edc611d3bab380718463c5e0aadc88db1a50e6bc
If the cgroup path contains docker, the process is running inside a container
- You can also run the command inside the container
docker exec -it k3d-k3s-default-agent-1 sh -c 'ps aux | grep node'
15574 0 node index.js
docker exec -it k3d-k3s-default-agent-1 sh -c 'cat /proc/15574/cgroup'
0::/kubepods/besteffort/pod4090946f-8dea-430c-85cf-3b8a32ca30b4/e769093d27768be73ace66d1edc611d3bab380718463c5e0aadc88db1a50e6bc
Dockerfile multi-stage build
FROM golang:latest as builder
ENV BUILD_DIR /go/src/example
RUN go get github.com/gin-gonic/gin
WORKDIR "${BUILD_DIR}"
COPY . "${BUILD_DIR}"
RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o example.app example.go
FROM alpine:latest
WORKDIR /
COPY --from=builder /go/src/example/example.app /
RUN chmod 755 /example.app
CMD ["/example.app"]
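To build and run the resulting image (the example-app tag is arbitrary):
docker build -t example-app .
docker run --rm example-app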
Docker Wormhole pattern
Use the host's Docker daemon from within a container, so the Docker CLI can be used inside the container. To use Docker commands in your CI/CD jobs, you can bind-mount /var/run/docker.sock into the container; Docker is then available in the context of the image. When you use Docker socket binding, you avoid running Docker in privileged mode. Any containers created by Docker commands are siblings of the runner, rather than children of the runner.
docker run -it --rm -v $PWD:$PWD -w $PWD -v /var/run/docker.sock:/var/run/docker.sock maven:3 mvn test
- -v $PWD:$PWD will add your current directory as a volume inside the container
- -w $PWD will set the current directory to this volume
- -v /var/run/docker.sock:/var/run/docker.sock will map the Docker socket
Docker get the number of restarts for container my-container
docker inspect -f "{{ .RestartCount }}" my-container
Docker extract file from image layer
COPY id_rsa /root/.ssh/
RUN . . . && rm -rf /root/.ssh/id_rsa
CMD ["/run.sh"]
$ docker history $IMAGE_NAME
IMAGE CREATED CREATED BY
c274f07a418f 20 minutes ago CMD ["/run.sh"]
<missing> 20 minutes ago RUN ... && rm -rf…
<missing> 20 minutes ago COPY id_rsa /root/.ssh/ # buildkit
. . .
$ docker save $IMAGE_NAME | tar -x -C .
$ find blobs/sha256/ -type f -exec file {} \; | grep "tar"
blobs/sha256/9b47fc. . .fde2bd: POSIX tar archive
. . .
blobs/sha256/6e9c5e. . .98398c: POSIX tar archive
$ tar xvf 9b47fc. . .fde2bd
root/
root/.ssh/
root/.ssh/id_rsa
$ cat root/.ssh/id_rsa
-----BEGIN OPENSSH PRIVATE KEY-----
. . .
-----END OPENSSH PRIVATE KEY-----
Delete Docker image layers
$ docker history $IMAGE_NAME
IMAGE CREATED CREATED BY
c274f07a418f 20 minutes ago CMD ["/run.sh"]
<missing> 20 minutes ago RUN ... && rm -rf…
<missing> 20 minutes ago COPY id_rsa /root/.ssh/ # buildkit
. . .
$ docker-squash -f 2 -t $IMAGE_NAME:squashed $IMAGE_NAME
$ docker save $IMAGE_NAME:squashed | tar -x -C .
$ tar xvf f4152d. . .750c19 | grep "id_rsa"
Docker security misconfiguration
- Use COPY instead of ADD in Dockerfile
- Do not store credentials in environment variables or files
- Avoid latest tag
- Avoid sudo command
- Create a user for the container
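A minimal Dockerfile sketch that follows these points (tag, user and file names are only examples):
FROM alpine:3.19
# create a dedicated non-root user instead of running as root / using sudo
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
WORKDIR /app
# COPY instead of ADD; no credentials baked into the image
COPY app.bin /app/
USER appuser
CMD ["/app/app.bin"]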
Docker multi-stage builds
The basic idea is that you have one stage to build your application artifacts and a final stage that copies them into your distroless runtime image.
Dockerfile
FROM python:3-slim AS build-env
COPY . /app
WORKDIR /app
FROM gcr.io/distroless/python3-debian12:debug
COPY --from=build-env /app /app
WORKDIR /app
CMD ["hello.py", "/etc"]
To run the example, go to the directory for the language and run
cd examples/python3/
docker build -t myapp .
docker run --entrypoint=sh -ti myapp
Docker get environment variables from image
docker image inspect --format '{{.Config.Env}}' nginx
Docker Compose
Docker compose default network
When there are multiple containers in a Docker Compose file, they are attached to the same default network, e.g. a web application and a database server. The database container can therefore skip publishing its port, because the web application can connect simply with containername:databaseport. Docker resolves containername to the correct IP address of the database container, and databaseport is the database's default port, e.g. 5432 for PostgreSQL.
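A minimal sketch of such a compose file (service, image and variable names are examples); web reaches the database at db:5432 while db publishes no ports:
version: '3'
services:
  web:
    image: web-app-image
    ports:
      - "8080:8080"
    environment:
      DATABASE_HOST: db
      DATABASE_PORT: "5432"
  db:
    image: postgres:10
    environment:
      POSTGRES_PASSWORD: 'secretpass'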
Docker + Docker Compose + SystemD service setup
- SystemD unit file
[Unit]
Description=Docker Compose Servicename
Requires=docker.service
After=docker.service
[Service]
Type=simple
ExecStart=/usr/local/bin/docker-compose -f /opt/servicename/docker-compose.yml up
ExecStop=/usr/local/bin/docker-compose -f /opt/servicename/docker-compose.yml stop
Environment='COMPOSE_HTTP_TIMEOUT=600'
[Install]
WantedBy=multi-user.target
- Dockerfile
FROM node:12-alpine
# Set workdir
WORKDIR /app
# Packages info
COPY package.json yarn.lock ./
# Install (without postinstalls)
RUN yarn install --ignore-scripts
# Copy all
COPY . .
# build app
RUN yarn build
EXPOSE 3000
CMD npm run start
- docker-compose.yml
---
version: '3'
services:
  web:
    build: .
    restart: always
    container_name: servicename
    ports:
      - "3000:3000"
Docker Compose run database and frontend
docker-compose.yml
Docker Compose file that starts two containers
version: '3'
services:
  db_container:
    container_name: db_container
    hostname: db_container
    image: postgres:10
    command: postgres -c 'max_connections=200'
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 5s
      retries: 5
    restart: always
    environment:
      POSTGRES_PASSWORD: 'secretpass'
    ports:
      - "127.0.0.1:5432:5432"
    volumes:
      - "/etc/localtime:/etc/localtime:ro"
      - "/etc/timezone:/etc/timezone:ro"
      - "./db/backups:/backups"
      - "./db/db:/var/lib/postgresql/data"
  frontend_container:
    container_name: frontend_container
    hostname: frontend_container
    image: frontend-image
    restart: always
    ports:
      - 127.0.0.1:8080:8080
    volumes:
      - "/etc/localtime:/etc/localtime:ro"
      - "/etc/timezone:/etc/timezone:ro"
      - "/var/www/html:/var/www/html"
    depends_on:
      db_container:
        condition: service_healthy
    links:
      - db_container
/etc/systemd/system/example.service
SystemD service that starts Docker Compose
[Unit]
Description=Docker Compose Example Service
Requires=docker.service
After=docker.service
[Service]
Type=simple
ExecStart=/usr/local/bin/docker-compose -f /opt/example/docker-compose.yml up
ExecStop=/usr/local/bin/docker-compose -f /opt/example/docker-compose.yml stop
[Install]
WantedBy=multi-user.target
docker-compose.yml for KMS server
version: "3"
networks:
vlmcsd:
external: false
services:
vlmcsd:
image: mikolatero/vlmcsd
container_name: vlmcsd
networks:
- vlmcsd
restart: always
ports:
- "1688:1688"
Docker container for KMS server
docker run -d -p 1688:1688 --restart=always --name vlmcsd mikolatero/vlmcsd
- Windows KMS activation
slmgr.vbs -upk
slmgr.vbs -ipk XXXXX-XXXXX-XXXXX-XXXXX-XXXXX
slmgr.vbs -skms DOCKER_IP:PORT
slmgr.vbs -ato
slmgr.vbs -dlv
- Office x86_64 KMS activation
cd \Program Files\Microsoft Office\Office16
cscript ospp.vbs /sethst:DOCKER_IP
cscript ospp.vbs /setprt:PORT
cscript ospp.vbs /inpkey:xxxxx-xxxxx-xxxxx-xxxxx-xxxxx
cscript ospp.vbs /act
cscript ospp.vbs /dstatusall
- GVLK keys
- Windows: https://docs.microsoft.com/en-us/windows-server/get-started/kmsclientkeys
- Office 2016 & 2019 & 2021: https://technet.microsoft.com/en-us/library/dn385360(v=office.16).aspx
- Public KMS server
kms.srv.crsoo.com
- Download Windows from Microsoft
Get latest version from Windows Update and create ISO file
https://uup.ee/
- Activate Windows with KMS (install GVLK key)
slmgr.vbs -ipk W269N-WFGWX-YVC9B-4J6C9-T83GX
slmgr /skms kms.srv.crsoo.com
slmgr /ato
- Download Microsoft Office from Microsoft
https://officecdn.microsoft.com/db/492350f6-3a01-4f97-b9c0-c7c6ddf67d60/media/ru-ru/ProPlus2021Retail.img
- Activate Microsoft Office with KMS
cd /d %ProgramFiles(x86)%\Microsoft Office\Office16
cd /d %ProgramFiles%\Microsoft Office\Office16
for /f %x in ('dir /b ..\root\Licenses16\ProPlus2021VL_KMS*.xrm-ms') do cscript ospp.vbs /inslic:"..\root\Licenses16\%x"
cscript ospp.vbs /inslic:"..\root\Licenses16\ProPlus2021VL_KMS_Client_AE-ppd.xrm-ms"
cscript ospp.vbs /inslic:"..\root\Licenses16\ProPlus2021VL_KMS_Client_AE-ul-oob.xrm-ms"
cscript ospp.vbs /inslic:"..\root\Licenses16\ProPlus2021VL_KMS_Client_AE-ul.xrm-ms"
cscript ospp.vbs /setprt:1688
cscript ospp.vbs /unpkey:6F7TH >nul
cscript ospp.vbs /inpkey:FXYTK-NJJ8C-GB6DW-3DYQT-6F7TH
cscript ospp.vbs /sethst:kms.srv.crsoo.com
cscript ospp.vbs /act
https://ygoo.ru/server-addresses-for-windows-kms-activation https://bafista.ru/aktivacziya-microsoft-windows-i-office/ https://massgrave.dev/
Docker Compose Wormhole pattern
tests:
  image: maven:3
  stop_signal: SIGKILL
  stdin_open: true
  tty: true
  working_dir: $PWD
  volumes:
    - $PWD:$PWD
    - /var/run/docker.sock:/var/run/docker.sock
    # Maven cache (optional)
    - ~/.m2:/root/.m2
  command: mvn test
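To run the tests service defined above as a one-off container that is removed afterwards:
docker-compose run --rm tests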
containerd
List images
ctr image ls
List containers
ctr container list
List containers in namespace
ctr -n k8s.io containers list
List available namespaces
ctr ns ls