Docker

+ Logs (Aug. 19, 2022, 11:37 a.m.)

docker logs <container ID>
docker logs --follow <container ID>
docker logs --tail 100 <container ID>
docker logs --follow --until=30m <container ID>
docker logs --since 2019-03-02 <container ID>

+ Find IP address (Aug. 17, 2022, 4:37 p.m.)

docker inspect 3f52acaa7ba9 | grep IPAddress

+ RabbitMQ (March 5, 2022, 4:41 p.m.)

version: "3" services: my-rabbit: image: rabbitmq:management container_name: my-rabbit environment: RABBITMQ_DEFAULT_USER: rabbit RABBITMQ_DEFAULT_PASS: 123456 ports: - '5672:5672' - '15672:15672' expose: - 5672 - 15672 networks: - 'my-net' ------------------------------------------------------------------------------------------------------------ The port 5672 is for the transport_url. The port 15672 is the web admin console. ------------------------------------------------------------------------------------------------------------

+ Force-update Let’s Encrypt Certificates (Feb. 1, 2022, 8:24 p.m.)

We only need to restart the traefik service so that it regenerates the acme.json file:
1- Rename the acme.json file, which contains the old certificates:
cd /var/lib/docker/volumes/traefik_traefik-public-certificates/_data
mv acme.json acme.json.bak
2- Get the service ID from the following command:
docker stack services traefik
3- Force-update the docker service:
docker service update --force <the_id_from_previous_command>
Done! A new acme.json is created containing the new certificates.

+ Backup postgres database from docker (Jan. 14, 2022, 9:10 p.m.)

docker exec -t $(docker container ls -a -q --filter=name=notes2-postgresql) pg_dumpall -c -U postgres | gzip > notes2_`date +%Y-%m-%d`.gz

+ Get ID of a container (Jan. 14, 2022, 9:05 p.m.)

docker container ls -a -q --filter=name=notes2-postgresql

+ Service logs (Nov. 10, 2021, 10:43 p.m.)

docker service logs [OPTIONS] SERVICE|TASK

--details       Show extra details provided to logs
--follow, -f    Follow log output

+ Swarm - Filtering container name (Nov. 10, 2021, 10:40 p.m.)

Filtering container name:
docker ps -q -f name=notes2_notes2-django
---------------------------------------------------------------------------
Execute a command within the docker swarm service:
docker exec $(docker ps -q -f name=notes2_notes2-django) ls
---------------------------------------------------------------------------

+ Remove Docker Images, Containers, and Volumes (Oct. 19, 2021, 12:19 a.m.)

Remove dangling images:
List:   docker images -f dangling=true
Remove: docker image prune
---------------------------------------------------------------------
Remove all images:
List:   docker images -a
Remove: docker rmi $(docker images -a -q)
---------------------------------------------------------------------
Remove all exited containers:
List:   docker ps -a -f status=exited
Remove: docker rm $(docker ps -a -f status=exited -q)
---------------------------------------------------------------------
Stop and remove all containers:
List:   docker ps -a
Remove:
docker stop $(docker ps -a -q)
docker rm $(docker ps -a -q)
---------------------------------------------------------------------
Remove one or more specific volumes:
List:   docker volume ls
Remove: docker volume rm volume_name volume_name
---------------------------------------------------------------------
Remove dangling volumes:
List:   docker volume ls -f dangling=true
Remove: docker volume prune
---------------------------------------------------------------------
Remove a container and its volume:
docker rm -v container_name
---------------------------------------------------------------------

+ Remove <none> TAG images (Dec. 23, 2020, 3:59 p.m.)

docker rmi $(docker images -f "dangling=true" -q)
OR
docker image prune
(docker image prune removes dangling images by default; "dangling" is not a valid --filter for prune, which only supports "until" and "label".)

+ Portainer (Dec. 13, 2020, 4:52 p.m.)

1- Create environment variables for the Portainer instance:
export DOMAIN=portainer.mohsenhassani.com
export NODE_ID=$(docker info -f '{{.Swarm.NodeID}}')
docker node update --label-add portainer.portainer-data=true $NODE_ID
2- Download the Docker Compose file:
curl -L dockerswarm.rocks/portainer.yml -o portainer.yml
3- Deploy it:
docker stack deploy -c portainer.yml portainer

+ Disable auto-restart on containers (Dec. 13, 2020, 3:54 p.m.)

docker update --restart=no $(docker ps -a -q)

+ Login using root user (Dec. 12, 2020, 2:49 p.m.)

docker exec -it --user root tiptong-django bash

+ Check the remaining Pull Rate Limits (Dec. 6, 2020, 4:20 p.m.)

Anonymous requests
1- Request an authentication token from auth.docker.io:
TOKEN=$(curl "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
2- Check the remaining pull limits:
curl --head -H "Authorization: Bearer $TOKEN" https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest 2>&1
------------------------------------------------------------------------------------------------------
Authenticated requests
1- Replace username:password with your Docker ID and password in the command below:
TOKEN=$(curl --user 'username:password' "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
2- Check the remaining pull limits:
curl --head -H "Authorization: Bearer $TOKEN" https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest 2>&1
------------------------------------------------------------------------------------------------------
Then you need to log in:
docker login --username <your_username> --password <your_password>
Note: passing --password via the CLI is insecure, and the password will be stored unencrypted in /root/.docker/config.json.
------------------------------------------------------------------------------------------------------

+ Build (Dec. 6, 2020, 3:46 p.m.)

Do not forget the last DOT:
docker build -t tiptong:latest .
-----------------------------------------------------------------------------------------
In case you need to find the name "tiptong", use the command:
docker image ls
-----------------------------------------------------------------------------------------
docker build -t tiptong:latest . --no-cache --force-rm
-----------------------------------------------------------------------------------------

+ Docker Stack and Service (Nov. 14, 2020, 9:33 p.m.)

docker stack ls
docker stack services <stack_name>
docker stack rm <stack_name>
-------------------------------------------------------------------------------
docker service ls
-------------------------------------------------------------------------------
Restart a service in the docker swarm stack:
1- docker stack services <stack_name>
2- docker service update --force <service_ID>
-------------------------------------------------------------------------------
Open a shell in a service:
docker exec -it $(docker ps -q -f name=minio_minio-client) /bin/bash
Run a command in a service:
docker exec $(docker ps -q -f name=servicename) ls
-------------------------------------------------------------------------------

+ Swarmpit (Nov. 14, 2020, 2:23 p.m.)

Set up an optional directory:
sudo su -
mkdir -p ~/services/swarmpit; cd ~/services/swarmpit
1- Create environment variables for the Swarmpit instance:
export DOMAIN=swarmpit.mohsenhassani.com
export NODE_ID=$(docker info -f '{{.Swarm.NodeID}}')
docker node update --label-add swarmpit.db-data=true $NODE_ID
docker node update --label-add swarmpit.influx-data=true $NODE_ID
2- Download the file swarmpit.yml:
curl -L dockerswarm.rocks/swarmpit.yml -o swarmpit.yml
3- Deploy the stack:
docker stack deploy -c swarmpit.yml swarmpit
4- Check it:
docker stack ps swarmpit
5- Check the Swarmpit logs:
docker service logs swarmpit_app

+ Traefik (Nov. 14, 2020, 1:41 p.m.)

Set up an optional directory:
sudo su -
mkdir -p ~/services/traefik; cd ~/services/traefik
1- Set up swarm mode:
docker swarm init
2- Create a network that will be shared with Traefik and the containers that should be accessible from the outside:
docker network create --driver=overlay --attachable traefik-public
3- Get the Swarm node ID:
export NODE_ID=$(docker info -f '{{.Swarm.NodeID}}')
4- Create a tag on this node, so that Traefik is always deployed to the same node and uses the same volume:
docker node update --label-add traefik-public.traefik-public-certificates=true $NODE_ID
5- Create an environment variable with your email, to be used for the generation of Let's Encrypt certificates:
export EMAIL=mohsen@mohsenhassani.com
6- Create environment variables for the Traefik UI:
export DOMAIN=traefik.mohsenhassani.com
export USERNAME=admin
export PASSWORD=changethis
export HASHED_PASSWORD=$(openssl passwd -apr1 $PASSWORD)
7- Download the file traefik-host.yml:
curl -L dockerswarm.rocks/traefik-host.yml -o traefik-host.yml
8- Deploy the stack:
docker stack deploy -c traefik-host.yml traefik
9- Check it:
docker stack ps traefik
10- Check the Traefik logs:
docker service logs traefik_traefik -f

+ docker service vs. docker stack (Nov. 8, 2020, 10:09 a.m.)

Think of docker service vs. docker stack as docker run vs. docker-compose, but in a Docker Swarm cluster. The docker service command manages an individual service on a swarm cluster; it is the command-line client for the swarm manager. The docker stack command manages a multi-service application. It moves many of the options you would pass to docker service into a .yml file (such as docker-cloud.yml or docker-compose.yml) for easier reuse. It works as a front-end "script" on top of the swarm manager, so everything docker stack does can also be done with docker service.
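For illustration, the same two-replica Redis service could be started either way (the service and stack names here are hypothetical):
docker service create --name redis --replicas 2 redis:alpine
or, with the equivalent options moved into a compose file:
docker stack deploy -c docker-compose.yml mystack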

+ Docker Registry - Description (Nov. 1, 2020, 9:22 p.m.)

Docker Registry is a server-side application and part of Docker's platform-as-a-service product. It allows you to store all your Docker images locally in one centralized location. When you set up a private registry, you assign a server to communicate with Docker Hub over the internet. The role of the server is to pull and push images, store them locally, and share them among other Docker hosts. By running an externally accessible registry, you can save valuable resources and speed up processes. The software lets you pull images without having to connect to Docker Hub, saving bandwidth and securing the system from potential online threats. Docker hosts can access the local repository over a secure connection and copy images from the local registry to build their own containers.
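As a minimal sketch (the image names are examples, not from the note), a private registry can be run from the official registry image, and images are pushed to it by re-tagging them with the registry's address:
docker run -d -p 5000:5000 --name registry registry:2
docker tag alpine localhost:5000/alpine
docker push localhost:5000/alpine
docker pull localhost:5000/alpine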

+ Removing All Unused Docker Objects (Nov. 1, 2020, 7:15 p.m.)

docker system prune

+ Docker ps (Nov. 1, 2020, 7:10 p.m.)

docker ps
docker ps -q
docker ps -q -f status=exited
docker rm $(docker ps -q -f status=exited)
-------------------------------------------------

+ Swarm Mode - Commands (Oct. 30, 2020, 10:49 p.m.)

To list the nodes in the swarm (run from a manager node):
docker node ls
----------------------------------------------------------

+ Swarm Mode (Oct. 30, 2020, 10:44 p.m.)

1- Create a new swarm:
docker swarm init --advertise-addr <ip_address>
2- Add nodes to the swarm:
docker swarm join --token <WORKER_JOIN_TOKEN> <MANAGER_IP_ADDRESS>
You can obtain the full join command by running:
docker swarm join-token worker

+ Copy to container (Oct. 26, 2020, 11:16 a.m.)

docker cp source_file_or_directory my_container:/srv/tiptong/

+ Change data directory (Oct. 25, 2020, 9 a.m.)

1- Stop the docker daemon:
sudo service docker stop
2- Add a daemon.json configuration file with the following content:
vim /etc/docker/daemon.json
{
  "graph": "/media/mohsen/256G/docker"
}
(On recent Docker versions the key is named "data-root"; "graph" is the deprecated name.)
3- Copy the current data directory to the new one:
rsync -aP /var/lib/docker /media/mohsen/256G/docker
4- Rename the old docker directory:
mv /var/lib/docker /var/lib/docker_old
5- Start the docker daemon:
sudo service docker start
6- If everything is ok, delete the old data directory:
sudo rm -rf /var/lib/docker_old
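To verify the new location afterwards (assuming a Docker version that reports it), check the root directory the daemon is using:
docker info -f '{{ .DockerRootDir }}'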

+ Docker Swarm(Classic), Swarm Mode and SwarmKit (Oct. 22, 2020, 12:48 a.m.)

Docker Swarm is an older (2014) Docker-native orchestration tool. It is stand-alone from the Docker engine and serves to connect Docker engines together to form a cluster. It's then possible to connect to the Swarm and run containers on the cluster. Swarm has a few features:
- Allows us to specify a discovery service
- Some control over where containers are placed (using filters/constraints/distribution strategies, etc.)
- Exposes the same API as the Docker engine itself, allowing 3rd-party tools to interact seamlessly
------------------------------------------------------------------------------
SwarmKit is a newer (2016) tool developed by the Docker team which provides functionality for running a cluster and distributing tasks to the machines in the cluster. Here are the main features:
- Distributed: SwarmKit uses the Raft Consensus Algorithm in order to coordinate and does not rely on a single point of failure to make decisions.
- Secure: Node communication and membership within a Swarm are secure out of the box. SwarmKit uses mutual TLS for node authentication, role authorization, and transport encryption, automating both certificate issuance and rotation.
- Simple: SwarmKit is operationally simple and minimizes infrastructure dependencies. It does not need an external database to operate.
------------------------------------------------------------------------------
Docker Swarm Mode (version 1.12+) uses the SwarmKit libraries & functionality to make container orchestration over multiple hosts (a cluster) very simple & secure to operate. There is a new set of features (the main one being docker swarm) now built into Docker itself that allow us to initiate a new Swarm and deploy tasks to that cluster. Docker Swarm is not being deprecated and is still a viable method for Docker multi-host orchestration, but Swarm Mode (which uses the SwarmKit libraries under the hood) is the recommended way to begin a new Docker project where orchestration over multiple hosts is required. One of the big features of the Docker 1.12 release is Swarm Mode. Docker had Swarm available for container orchestration from the 1.6 release. Docker released SwarmKit as an open-source project for orchestrating distributed systems a few weeks before the Docker 1.12 (RC) release.
------------------------------------------------------------------------------
"Swarm" refers to the traditional Swarm functionality, "Swarm Mode" refers to the new mode added in 1.12, and "SwarmKit" refers to the underlying open-source orchestration project.
------------------------------------------------------------------------------
- Swarm is separate from the Docker Engine and can run as a container; Swarm Mode is integrated into the Docker engine.
- Swarm needs an external KV store like Consul; Swarm Mode does not.
- In Swarm, the service model is not available; in Swarm Mode it is, which provides features like scaling, rolling updates, service discovery, load balancing, and the routing mesh.
- In Swarm, communication is not secure; in Swarm Mode, both the control and data planes are secure.
- Swarm is integrated with Machine and Compose; Swarm Mode was not yet integrated with them as of release 1.12 (planned for upcoming releases).
------------------------------------------------------------------------------

+ .env file, ARG,ENV, env_file (Sept. 29, 2020, 1:26 p.m.)

The .env file is only used during a pre-processing step when working with docker-compose.yml files. Dollar-notation variables like $HI are substituted with values contained in a file named ".env" in the same directory.
ARG is only available during the build of a Docker image (RUN etc.), not after the image is created and containers are started from it (ENTRYPOINT, CMD). You can use ARG values to set ENV values to work around that.
ENV values are available to containers, and also to RUN-style commands during the Docker build, starting with the line where they are introduced.
-----------------------------------------------------------------------------------------
If you set an environment variable in an intermediate container using bash (RUN export VARI=5 && …), it will not persist in the next command. There's a way to work around that.
An env_file is a convenient way to pass many environment variables to a single command in one batch. This should not be confused with a .env file.
Setting ARG and ENV values leaves traces in the Docker image. Don't use them for secrets that are not meant to stick around (well, you kinda can with multi-stage builds).
-----------------------------------------------------------------------------------------
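A minimal sketch of the ARG-to-ENV workaround (the variable name and values are hypothetical):
FROM python:3.9-slim
# Build-time only; set with: docker build --build-arg APP_VERSION=1.2.3 .
ARG APP_VERSION=dev
# Copy the build-time value into an ENV so it survives into running containers
ENV APP_VERSION=${APP_VERSION}
CMD ["python", "-c", "import os; print(os.environ['APP_VERSION'])"]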

+ Docker Swarm - Nodes (Sept. 23, 2020, 7:58 p.m.)

A docker swarm is comprised of a group of physical or virtual machines operating in a cluster. When a machine joins the cluster, it becomes a node in that swarm. The docker swarm function recognizes three different types of nodes, each with a different role within the docker swarm ecosystem:
- Manager Node
The primary function of manager nodes is to assign tasks to worker nodes in the swarm. Manager nodes also help to carry out some of the managerial tasks needed to operate the swarm. Docker recommends a maximum of seven manager nodes for a swarm.
- Leader Node
When a cluster is established, the Raft consensus algorithm is used to assign one of the managers as the "leader node". The leader node makes all of the swarm management and task orchestration decisions for the swarm. If the leader node becomes unavailable due to an outage or failure, a new leader node can be elected using the Raft consensus algorithm.
- Worker Node
In a docker swarm with numerous hosts, each worker node functions by receiving and executing the tasks that are allocated to it by manager nodes. By default, all manager nodes are also worker nodes and are capable of executing tasks when they have the resources available to do so.
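Run from a manager, these commands inspect and adjust node roles (the node names are hypothetical):
docker node ls
docker node promote worker-1
docker node demote manager-2
docker node update --availability drain worker-1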

+ Docker Swarm - Mode Services (Sept. 23, 2020, 7:56 p.m.)

Docker Swarm has two types of services: replicated and global.
Replicated services: you specify the number of replica tasks, and the swarm manager assigns them to available nodes.
Global services: the swarm manager schedules one task on each available node that meets the service's constraints and resource requirements.
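A minimal sketch of each mode (the service names and images are just examples):
docker service create --name web --replicas 3 nginx
docker service create --name node-exporter --mode global prom/node-exporter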

+ Handy Commands (Sept. 19, 2020, 2:08 p.m.)

docker container ls -a
docker container rm <container_id>
docker container rm $(docker ps -a -q)
docker container stop <container_id>
docker container stop $(docker ps -a -q)
docker image ls
docker image rm <image_id>
docker image rm $(docker image ls -q)
--------------------------------------------------------------------
docker-compose -f docker-compose.yml up --build
--------------------------------------------------------------------
List only names:
docker ps --format 'table {{.Names}}'
--------------------------------------------------------------------

+ Docker Compose compatibility matrix (Sept. 12, 2020, 10:08 a.m.)

https://docs.docker.com/compose/compose-file/

Compose file format    Docker Engine release
3.8                    19.03.0+
3.7                    18.06.0+
3.6                    18.02.0+
3.5                    17.12.0+
3.4                    17.09.0+

+ Docker Service (Aug. 9, 2020, 4:03 p.m.)

To deploy an application image when Docker Engine is in swarm mode, you create a service. Frequently a service is the image for a microservice within the context of some larger application. Examples of services might include an HTTP server, a database, or any other type of executable program that you wish to run in a distributed environment.
When you create a service, you specify which container image to use and which commands to execute inside running containers. You also define options for the service, including:
- The port where the swarm makes the service available outside the swarm
- An overlay network for the service to connect to other services in the swarm
- CPU and memory limits and reservations
- A rolling update policy
- The number of replicas of the image to run in the swarm
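A sketch combining these options in one command (the image name, network, and values are hypothetical; the overlay network must already exist):
docker service create \
  --name myapp \
  --replicas 3 \
  --publish 8080:80 \
  --network traefik-public \
  --limit-cpu 0.5 --limit-memory 512M \
  --reserve-memory 256M \
  --update-delay 10s --update-parallelism 1 \
  myimage:latest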

+ WORKDIR (July 17, 2020, 12:42 a.m.)

The WORKDIR command is used to define the working directory of a Docker container at any given time. The command is specified in the Dockerfile. Any RUN, CMD, ADD, COPY, or ENTRYPOINT command will be executed in the specified working directory.
If the directory set by WORKDIR does not exist, it is created automatically. Hence, it can be said that the command performs mkdir and cd implicitly.
Example:
FROM ubuntu:16.04
WORKDIR /project
RUN npm install
If the project directory does not exist, it will be created. The RUN command will be executed inside /project.
---------------------------------------------------------------
Reusing WORKDIR
WORKDIR can be reused to set a new working directory at any stage of the Dockerfile. The path of the new working directory can be given relative to the current working directory.
Example:
FROM ubuntu:16.04
WORKDIR /project
RUN npm install
WORKDIR ../project2
RUN touch file1.cpp
While directories can be manually made and changed, it is strongly recommended that you use WORKDIR to specify the current directory in which you would like to work, as it makes troubleshooting easier.
---------------------------------------------------------------

+ Docker Compose commands (July 16, 2020, 11:54 p.m.)

docker-compose rm
---------------------------------------------------------------
docker-compose up --build web
docker-compose up -d --build
docker-compose up db
docker-compose -f docker-compose.prod.yml up --build -d
---------------------------------------------------------------
docker-compose run web
---------------------------------------------------------------
Stop containers and remove the volumes created by up:
docker-compose down --volumes
docker-compose down --volumes --remove-orphans
Stop containers and remove containers, networks, volumes, and images created by up:
docker-compose down --rmi all --volumes
---------------------------------------------------------------
docker-compose -f docker-compose.prod.yml run web python manage.py migrate
docker-compose -f docker-compose.prod.yml run web python manage.py collectstatic --noinput
---------------------------------------------------------------

+ COPY vs ADD (July 16, 2020, 11:36 p.m.)

Although ADD and COPY are functionally similar, generally speaking, COPY is preferred. That’s because it’s more transparent than ADD. COPY only supports the basic copying of local files into the container, while ADD has some features (like local-only tar extraction and remote URL support) that are not immediately obvious. Consequently, the best use for ADD is local tar file auto-extraction into the image, as in ADD rootfs.tar.xz /.
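A short illustration in a minimal Dockerfile (the file names are hypothetical):
FROM alpine
# COPY: plain copy of local files into the image
COPY requirements.txt /app/
# ADD: auto-extracts a local tar archive into the image
ADD rootfs.tar.xz /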

+ RUN vs CMD (July 16, 2020, 11:29 p.m.)

RUN is an image build step; the state of the container after a RUN command will be committed to the container image. A Dockerfile can have many RUN steps that layer on top of one another to build the image.
CMD is the command the container executes by default when you launch the built image. A Dockerfile will only use the final CMD defined. The CMD can be overridden when starting a container with docker run $image $other_command.
ENTRYPOINT is also closely related to CMD and can modify the way a container starts an image.
------------------------------------------------------------------
RUN - command triggers while we build the docker image.
CMD - command triggers while we launch the created docker image.
------------------------------------------------------------------
RUN - Install Python; your container now has python burnt into its image.
CMD - python hello.py; run your favorite script.
------------------------------------------------------------------
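Putting that into a minimal, hypothetical Dockerfile:
FROM python:3.9-slim
# RUN executes at build time; its result is baked into the image
RUN pip install flask
COPY hello.py /app/hello.py
# CMD is the default command at container start; override with: docker run <image> bash
CMD ["python", "/app/hello.py"]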

+ WORKDIR vs CD (July 16, 2020, 11:23 p.m.)

RUN cd / does absolutely nothing. WORKDIR / changes the working directory for future commands.
Each RUN command runs in a new shell and a new environment (and technically a new container, though you won't usually notice this). The ENV and WORKDIR directives before it affect how it starts up. If you have a RUN step that just changes directories, that will get lost when the shell exits, and the next step will start in the most recent WORKDIR of the image.

FROM busybox
WORKDIR /tmp
RUN pwd             # /tmp
RUN cd /            # no effect, resets after end of RUN line
RUN pwd             # still /tmp
WORKDIR /
RUN pwd             # /
RUN cd /tmp && pwd  # /tmp
RUN pwd             # /

+ Docker compose file (July 16, 2020, 9:04 p.m.)

https://docs.docker.com/compose/compose-file/
------------------------------------------------------------------------------
version: '3.8'
services:
  web:  # service name assumed; it was missing from the original note
    restart: always
    build: ./web/
    expose:
      - "8000"
    links:
      - postgres:postgres
      - redis:redis
    env_file: env
    volumes:
      - ./web:/data/web
    command: /usr/bin/gunicorn mydjango.wsgi:application -w 2 -b :8000

- restart: This container should always be up; it will restart if it crashes.
- build: We have to build this image using a Dockerfile before running it; this specifies the directory where the Dockerfile is located.
- expose: We expose port 8000 to linked machines (it will be used by the NGINX container).
- links: We need access to the Postgres instance using the "postgres" name (this creates a "postgres" entry in the /etc/hosts file that points to the Postgres instance IP); idem for Redis.
- env_file: This container will load all the environment variables from the env file.
- volumes: We specify the different mount points we want on this instance.
- command: What command to run when starting the container? Here we start the WSGI process.
------------------------------------------------------------------------------

+ Docker Compose vs Dockerfile (July 16, 2020, 7:18 p.m.)

Think of a Dockerfile as a set of instructions you would give your system administrator about what to install on a brand new server. For example:
- We need a Debian linux
- Add an apache web server
- We need postgresql as well
- Install midnight commander
- When all done, copy all *.php, *.jpg, etc. files of our project into the webroot of the webserver (/var/www)
By contrast, think of docker-compose.yml as a set of instructions you would give your system administrator about how the server can interact with the rest of the world. For example:
- it has access to a shared folder from another computer,
- its port 80 is the same as port 8000 of the host computer,
A Dockerfile is a simple text file that contains the commands a user could call to assemble an image. The Compose file describes "the container in its running state", leaving the details of how to build the container to Dockerfiles.
Docker Compose:
- is a tool for defining and running multi-container Docker applications.
- defines the services that make up your app in docker-compose.yml so they can be run together in an isolated environment.
- gets an app running in one command by just running docker-compose up
When you define your app with Compose in development, you can use this definition to run your application in different environments such as CI, staging, and production.
docker-compose makes it easy to start up multiple containers at the same time and automatically connect them together with some form of networking.
--------------------------------------------------------------------
Example Dockerfile:
FROM ubuntu:latest
MAINTAINER john doe
RUN apt-get update
RUN apt-get install -y python python-pip wget
RUN pip install Flask
ADD hello.py /home/hello.py
WORKDIR /home
--------------------------------------------------------------------
Example docker-compose.yml:
version: "3"
services:
  web:
    build: .
    ports:
      - '5000:5000'
    volumes:
      - .:/code
      - logvolume01:/var/log
    links:
      - redis
  redis:
    image: redis
volumes:
  logvolume01: {}
--------------------------------------------------------------------

+ Cheat Sheet (July 13, 2020, 11:41 a.m.)

## List Docker CLI commands
docker
docker container --help

## Display Docker version and info
docker --version
docker version
docker info

## Execute a Docker image. Pulls the hello-world container from Docker Hub.
docker run hello-world

## List Docker images
docker image ls

## List Docker containers (running, all, all in quiet mode)
docker container ls
docker container ls --all
docker container ls -aq

## This will open a terminal running a light linux distro named busybox
docker container run -i -t busybox /bin/sh

## Builds the docker container from the Dockerfile. Must be run in the root of the project. Uses caching. Don't miss the dot at the end of the command.
docker build .

## Builds the docker container from scratch (no caching). Don't miss the dot at the end of the command.
docker build --pull --no-cache .

## WARNING! This will remove:
## - all stopped containers
## - all networks not used by at least one container
## - all dangling images
## - all dangling build cache
docker system prune

## Build images before starting containers.
docker-compose up --build

## Creates a network for the reverse-proxy app.
docker network create reverse-proxy

## Starts the docker container in the background, i.e. when the command prompt is closed the container continues to run.
docker-compose up -d

## Stops the containers defined by the compose file in the current directory.
docker-compose down

## Allows you to interactively work with containers.
docker exec -it <Container Name> /bin/sh

## Will get the ip address of a running container.
docker inspect -f "{{ .NetworkSettings.Networks.nat.IPAddress }}" my-running-site

+ Change container hostname (July 12, 2020, 11:40 a.m.)

To change the hostname of a running container, you can use the "nsenter" command.
1- Using the command "docker container ls", find the container's command name from the "COMMAND" column.
2- List the namespaces on the host with the "lsns" command:
lsns
3- Find the PID related to the COMMAND you found in step 1.
4- Enter the container's UTS namespace and set the hostname (the PID and hostname here are examples):
nsenter --target 14145 --uts hostname gitlab.mohsenhassani.ir

+ Dockerfile (July 11, 2020, 4:10 p.m.)

A Dockerfile is a file that contains a list of instructions that Docker should follow when creating our image.
Docker has two stages:
- Build stage
- Run stage
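The two stages in practice (the image name and tag are hypothetical):
docker build -t myapp:latest .     # build stage: Dockerfile instructions produce an image
docker run --rm myapp:latest       # run stage: a container is started from that image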

+ Pipenv (July 11, 2020, 12:48 p.m.)

pipenv install --system --deploy --ignore-pipfile

Use the --system flag so it installs all packages into the system python rather than into a virtualenv, since docker containers do not need virtualenvs.
Use the --deploy flag to make your build fail if your Pipfile.lock is out of date.
Use --ignore-pipfile so it won't mess with our setup.
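In a Dockerfile this might look like the following sketch (the base image and paths are assumptions):
FROM python:3.9
WORKDIR /app
COPY Pipfile Pipfile.lock /app/
# --system: install into the system python; --deploy: fail on an out-of-date lock file
RUN pip install pipenv && pipenv install --system --deploy --ignore-pipfile
COPY . /app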

+ Docker behind socks proxy (Oct. 24, 2018, 2:29 p.m.)

1- mkdir -p /etc/systemd/system/docker.service.d
2- vim /etc/systemd/system/docker.service.d/http-proxy.conf
3- Add the following content:
[Service]
Environment="HTTP_PROXY=socks5://127.0.0.1:1080/"
Environment="HTTPS_PROXY=socks5://127.0.0.1:1080/"
4- systemctl daemon-reload
5- systemctl restart docker

+ Commands (Oct. 24, 2018, 2:52 p.m.)

Downloads the image if it is not already present, and runs it as a container:
docker run <image>
----------------------------------------------------------------
docker start <name | id>
----------------------------------------------------------------
Get the process ID of the container:
docker inspect <container> | grep Pid
----------------------------------------------------------------
Stop a running container:
docker stop ContainerID
----------------------------------------------------------------
Show ports of a container:
docker port InstanceID
----------------------------------------------------------------
See the top processes within a container:
docker top ContainerID
----------------------------------------------------------------
docker images
docker images -q
-q: Tells the Docker command to return the Image IDs only.
----------------------------------------------------------------
docker inspect <image>
The output will show detailed information on the image.
----------------------------------------------------------------
docker ps    (-a: include stopped containers)
OR
docker container ls
----------------------------------------------------------------
Statistics of a running container:
docker stats ContainerID
The output will show the CPU and memory utilization of the container.
----------------------------------------------------------------
Delete a container:
docker rm ContainerID
----------------------------------------------------------------
Pause the processes in a running container:
docker pause ContainerID
docker unpause ContainerID
----------------------------------------------------------------
Kill the processes in a running container:
docker kill ContainerID
----------------------------------------------------------------
Attach to a running container:
docker attach ContainerID
This may appear to hang/freeze with no output. Use the following command instead:
docker exec -it <container-id> bash
----------------------------------------------------------------
docker pull gitlab/gitlab-ce
----------------------------------------------------------------
List all docker networks:
docker network ls
----------------------------------------------------------------
Inspect a Docker network:
docker network inspect <networkname>
Example:
docker network inspect bridge
----------------------------------------------------------------
docker logs -f <name>
----------------------------------------------------------------
--detach
--name
----------------------------------------------------------------
See all the commands that were run with an image via a container:
docker history ImageID
----------------------------------------------------------------
Removing Docker images:
docker rmi ImageID
----------------------------------------------------------------
Set the hostname inside the container:
--hostname gitlab.mohsenhassani.com
----------------------------------------------------------------
docker run -it centos /bin/bash
The -it argument is used to run in interactive tty mode. /bin/bash runs the bash shell once CentOS is up and running.
----------------------------------------------------------------
docker run -p 8080:8080 -p 50000:50000 jenkins
The -p flag maps a port of the Docker container to a port on the host so that we can access the container accordingly.
----------------------------------------------------------------
Tell Docker to expose the HTTP and SSH ports from GitLab on ports 30080 and 30022, respectively:
--publish 30080:80 --publish 30022:22
----------------------------------------------------------------
See information on the Docker running on the system:
docker info
The output provides various details of the Docker installation, such as:
- Number of containers
- Number of images
- The storage driver used by Docker
- The root directory used by Docker
- The execution driver used by Docker
----------------------------------------------------------------
Stop all running containers:
docker stop $(docker ps -a -q)
Delete all stopped containers:
docker rm $(docker ps -a -q)
----------------------------------------------------------------
docker volume ls
docker volume rm <volume>
----------------------------------------------------------------

+ Docker Compose - Installation (Oct. 24, 2018, 8:01 p.m.)

1- Check the latest version of "docker-compose-Linux-x86_64" at the following link:
https://github.com/docker/compose/releases
2- Download the binary file:
wget -O /usr/bin/docker-compose https://github.com/docker/compose/releases/download/v2.0.1/docker-compose-linux-x86_64
3- chmod +x /usr/bin/docker-compose
------------------------------------------------------------------------
This will NOT install the latest version; follow the instructions above to get the latest tool:
apt install docker-compose
------------------------------------------------------------------------
docker-compose is an optional tool that you can use with Docker, just to make it easier to interact with Docker. It's very useful for the development environment: it lets you describe multiple different docker containers and how they should run, and also lets you forward ports and set dependencies between your containers.
------------------------------------------------------------------------

+ Difference between image and container (Dec. 13, 2018, 11:32 p.m.)

An instance of an image is called a container. When the image is started, you have a running container of this image. You can have many running containers of the same image. You can see all your images with "docker images" whereas you can see your running containers with "docker ps" (and you can see all containers with docker ps -a).
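For example, one image can back several running containers at once (the container names are arbitrary):
docker run -d --name web1 nginx
docker run -d --name web2 nginx
docker images   # shows the single nginx image
docker ps       # shows both running containers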

+ Command Examples - docker run (Dec. 14, 2018, 12:06 a.m.)

docker run -v /full/path/to/html/directory:/usr/share/nginx/html:ro -p 8080:80 -d nginx
-v /full/path/to/html/directory:/usr/share/nginx/html:ro maps the directory holding our web page to the required location in the image. The ro field instructs Docker to mount it in read-only mode. It's best to pass Docker full paths when specifying host directories.
-p 8080:80 maps network service port 80 in the container to 8080 on our host system.
-d detaches the container from our command line session. We don't want to interact with this container.
----------------------------------------------------------------------
docker run --name foo -d -p 8080:80 mynginx
--name foo gives the container a name, rather than one of the randomly assigned names.
----------------------------------------------------------------------
docker run busybox echo "hello from busybox"
----------------------------------------------------------------------
-P will publish all exposed ports to random ports.
We can see the ports by running:
docker port InstanceID
----------------------------------------------------------------------
docker run -d -p 80:80 my_image service nginx start
----------------------------------------------------------------------
docker run -d -p 80:80 my_image nginx -g 'daemon off;'
----------------------------------------------------------------------
Restart policies:
--restart=on-failure[:max-retries]
Restart only if the container exits with a non-zero exit status. Optionally, limit the number of restart retries the Docker daemon attempts.
--restart=always
Always restart the container regardless of the exit status. When you specify always, the Docker daemon will try to restart the container indefinitely. The container will also always start on daemon startup, regardless of the current state of the container.
--restart=unless-stopped
Always restart the container regardless of the exit status, including on daemon startup, except if the container was put into a stopped state before the Docker daemon was stopped.
----------------------------------------------------------------------
VOLUME (shared filesystems):
-v, --volume=[host-src:]container-dest[:<options>]: Bind mount a volume. The comma-delimited `options` are [rw|ro], [z|Z], [[r]shared|[r]slave|[r]private], and [nocopy].
The 'host-src' is an absolute path or a name value. If neither 'rw' nor 'ro' is specified, the volume is mounted in read-write mode.
The `nocopy` mode is used to disable automatically copying the requested volume path in the container to the volume storage location. For named volumes, `copy` is the default mode. Copy modes are not supported for bind-mounted volumes.
--volumes-from="": Mount all volumes from the given container(s)
----------------------------------------------------------------------
USER
-u="", --user="": Sets the username or UID used, and optionally the groupname or GID, for the specified command.
----------------------------------------------------------------------
WORKDIR
The default working directory for running binaries within a container is the root directory (/), but the developer can set a different default with the Dockerfile WORKDIR command. The operator can override this with:
-w="": Working directory inside the container
----------------------------------------------------------------------
docker run \
  --rm \
  --detach \
  --env KEY=VALUE \
  --ip 10.10.9.75 \
  --publish 3000:3000 \
  --volume my_volume \
  --name my_container \
  --tty --interactive \
  --volume /my_volume \
  --workdir /app \
  IMAGE bash
----------------------------------------------------------------------
--rm
Automatically remove the container when it exits. The alternative would be to manually stop it and then remove it.
----------------------------------------------------------------------

+ Managing Ports (Dec. 13, 2018, 11:54 p.m.)

In Docker, the containers themselves can have applications running on ports. When you run a container, if you want to access the application in the container via a port number, you need to map the port number of the container to a port number on the Docker host.
To understand which ports are exposed by the container, use the docker inspect command to inspect the image:
docker inspect jenkins
The output of the inspect command is JSON. In the output there is an "ExposedPorts" section with two ports mentioned: one is the data port 8080 and the other is the control port 50000.
To run Jenkins and map the ports, change the docker run command and add the -p option, which specifies the port mapping:
docker run -p 8080:8080 -p 50000:50000 jenkins
The left-hand side of the port mapping is the Docker host port, and the right-hand side is the Docker container port.

+ Docker Network (Dec. 14, 2018, 12:33 a.m.)

When docker is installed, it creates three networks automatically:
docker network ls

NETWORK ID     NAME     DRIVER   SCOPE
c2c695315b3a   bridge   bridge   local
a875bec5d6fd   host     host     local
ead0e804a67b   none     null     local
--------------------------------------------------------------------
The bridge network is the network in which containers run by default. So when we run a container, it runs in this bridge network. To validate this, let's inspect the network:
docker network inspect bridge
--------------------------------------------------------------------
You can see that our container is listed under the Containers section in the output. What we also see is the IP address this container has been allotted: 172.17.0.2.
--------------------------------------------------------------------
Defining our own networks:
docker network create my-network-net
docker run -d --name es --net my-network-net -p 9200:9200 -p 9300:9300 <image>
--------------------------------------------------------------------

+ When to use --hostname in docker? (Dec. 15, 2018, 1:25 a.m.)

The --hostname flag only changes the hostname inside your container. This may be needed if your application expects a specific value for the hostname. It does not change DNS outside of docker, nor does it change the networking isolation, so it will not allow others to connect to the container with that name. You can use the container name or the container's (short, 12 character) id to connect from container to container with docker's embedded dns as long as you have both containers on the same network and that network is not the default bridge.
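A quick sketch of the embedded DNS on a user-defined network (the names are hypothetical):
docker network create my-net
docker run -d --net my-net --name web nginx
docker run --rm --net my-net busybox ping -c 1 web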

+ Docker Engine - Installation (Feb. 28, 2017, 9:01 a.m.)

Debian:
1- Install packages to allow apt to use a repository over HTTPS:
apt install apt-transport-https ca-certificates curl gnupg-agent software-properties-common
2- Add Docker's official GPG key:
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add -
3- Use the following command to set up the stable repository:
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable"
4- Install Docker Engine and containerd:
apt update
apt install docker-ce docker-ce-cli containerd.io
5- To make working with Docker easier, add your username to the docker users group:
sudo usermod -aG docker mohsen
------------------------------------------------------------------
Fedora: Install Community Edition (CE)
1- Install the dnf-plugins-core package, which provides the commands to manage your DNF repositories from the command line:
dnf -y install dnf-plugins-core
2- Use the following command to set up the stable repository (you might need a proxy):
proxychains4 dnf config-manager --add-repo https://download.docker.com/linux/fedora/docker-ce.repo
3- Install the latest version of Docker CE (you might need a proxy):
dnf install docker-ce
------------------------------------------------------------------

+ Introduction (Feb. 27, 2017, 11 a.m.)

Docker is a tool designed to make it easier to create, deploy, and run applications by using containers. Containers allow a developer to package up an application with all of the parts it needs, such as libraries and other dependencies, and ship it all out as one package. By doing so, thanks to the container, the developer can rest assured that the application will run on any other Linux machine regardless of any customized settings that machine might have that could differ from the machine used for writing and testing the code. ------------------------------------------------------------ In a way, Docker is a bit like a virtual machine. But unlike a virtual machine, rather than creating a whole virtual operating system, Docker allows applications to use the same Linux kernel as the system that they're running on and only requires applications be shipped with things not already running on the host computer. This gives a significant performance boost and reduces the size of the application. ------------------------------------------------------------ Docker provides an additional layer of abstraction and automation of operating-system-level virtualization on Windows and Linux. Docker uses the resource isolation features of the Linux kernel such as cgroups and kernel namespaces, and a union-capable file system such as OverlayFS and others to allow independent "containers" to run within a single Linux instance, avoiding the overhead of starting and maintaining virtual machines. ------------------------------------------------------------ Docker can be integrated into various infrastructure tools, including Amazon Web Services, Ansible, CFEngine, Chef, Google Cloud Platform, IBM Bluemix, HPE Helion Stackato, Jelastic, Jenkins, Kubernetes, Microsoft Azure, OpenStack Nova, OpenSVC, Oracle Container Cloud Service, Puppet, Salt, Vagrant, and VMware vSphere Integrated Containers.