====== Docker Reference ======

==== Install Docker ====

//Using docker.io (Ubuntu-maintained repo)//

<code bash>
mkdir ~/docker && cd ~/docker
sudo apt update && sudo apt install -y docker.io curl
sudo gpasswd -a $USER docker && docker run hello-world && docker info
</code>

//Using Docker official (to get the latest version, or when installing on something other than Ubuntu)//

<code bash>
mkdir ~/docker && cd ~/docker
curl -SL -o install.sh https://get.docker.com && sudo bash ./install.sh
sudo gpasswd -a $USER docker && docker run hello-world && docker info
</code>

**//Problems running docker commands without sudo? https://stackoverflow.com/questions/48568172/docker-sock-permission-denied //**

----

==== Docker Basic Commands ====

Create and run a container from an image, with a custom name:
  docker run --name <name> <image>
Run a container and publish its port(s) to the host:
  docker run -p <host_port>:<container_port> <image>
Run a container in the background:
  docker run -d <image>
Start or stop an existing container:
  docker start|stop <container_name>   (or <container_id>)
Remove a stopped container:
  docker rm <container_name>
Open a shell inside a running container:
  docker exec -it <container_name> sh
Fetch and follow the logs of a container:
  docker logs -f <container_name>
Inspect a running container:
  docker inspect <container_name>   (or <container_id>)
List currently running containers:
  docker ps
List all containers (running and stopped):
  docker ps --all
View resource usage stats:
  docker container stats

----

==== Docker Run Reference ====

[[https://docs.docker.com/engine/reference/run/|Docker run reference]]

----

==== Securing Docker ====

[[https://youtu.be/KINjI1tlo2w?si=Tk-x6dOEQQLjfPGG|Securing Docker]]

----

==== Working With Docker Secrets ====

=== Method 1: ENV variables ===

=== Method 2: .env file ===

----

==== Shared Database Container ====

The age-old debate: one centralized database versus one database per application in containerized environments. Let's weigh the pros and cons of each approach.
=== Option 1: Dedicated MariaDB Server ===

Run a single MariaDB server instance, either in its own Docker container or directly on the host, shared by all other containers.

**Pros:**\\
  - Resource efficiency:
    * Memory and CPU: a single MariaDB instance generally uses less memory and CPU than multiple instances — important with limited resources (2 CPUs and 4GB RAM).
    * Storage: avoids data duplication and uses storage more effectively.
  - Simplified management:
    * Backups: only one database instance to back up.
    * Monitoring and maintenance: easier for a single database server.
  - Consistency:
    * Configuration: consistent configuration and versioning across all applications.
    * Data integrity: easier to enforce integrity and constraints when all data is in one database.

**Cons:**\\
  - Single point of failure: if the database server goes down, all dependent applications are affected.
  - Potential bottlenecks: even if performance isn't a primary concern, a single database server could become a bottleneck if load increases unexpectedly.
  - Network latency: depending on the network setup there may be slight latency from network overhead, though this is usually minimal in a local environment.

=== Option 2: Separate Databases per Docker Compose File ===

Each application runs its own MariaDB instance within its Docker Compose setup.

**Pros:**\\
  - Isolation: each application is fully isolated, reducing the risk of one application affecting another's database.
  - Scalability: each application can scale independently, though this matters less under tight resource constraints.
  - Simplified dependency management: each application's dependencies are self-contained, making versions and configurations easier to manage independently.

**Cons:**\\
  - Resource utilization:
    * Memory and CPU: multiple MariaDB instances consume more memory and CPU — a significant consideration with limited resources.
    * Storage: increased usage due to data duplication and multiple database instances.
  - Complexity:
    * Backups: each database must be backed up individually.
    * Monitoring and maintenance: more effort to monitor and maintain multiple instances.

=== Recommendation ===

Given the VPS resource constraints (2 CPUs, 4GB RAM, limited storage), a dedicated MariaDB server is the more efficient and manageable option:
  - Resource efficiency: a single MariaDB instance makes better use of limited CPU and memory.
  - Simplified management: easier backups, monitoring, and maintenance.
  - Consistency: a consistent environment for all applications.

=== Implementation Tips ===

**1. Docker Compose for a dedicated MariaDB:**

Run the MariaDB container separately and have other containers connect to it over a shared Docker network. Because the application stacks live in separate Compose projects, create the shared network once, up front, and declare it ''external'' in each Compose file (otherwise each project creates its own project-prefixed network and the containers can't see each other):

  docker network create my_network

Docker Compose for MariaDB:

<code yaml>
version: '3.1'

services:
  mariadb:
    image: mariadb:latest
    container_name: mariadb
    environment:
      MYSQL_ROOT_PASSWORD: your_root_password
      MYSQL_DATABASE: your_database
      MYSQL_USER: your_user
      MYSQL_PASSWORD: your_password
    volumes:
      - mariadb_data:/var/lib/mysql
    networks:
      - my_network

networks:
  my_network:
    external: true

volumes:
  mariadb_data:
</code>
**2. Docker Compose for an application:**

<code yaml>
version: '3.1'

services:
  myapp:
    image: your_app_image
    environment:
      DB_HOST: mariadb
      DB_USER: your_user
      DB_PASSWORD: your_password
      DB_NAME: your_database
    networks:
      - my_network

networks:
  my_network:
    external: true
</code>

By using an external network, both the MariaDB container and the application containers can communicate easily.

**3. Resource limits:**

Consider setting resource limits for containers so that no single container consumes too many resources, using the ''resources'' block in the Compose file. Example:

<code yaml>
services:
  mariadb:
    image: mariadb:latest
    deploy:
      resources:
        limits:
          cpus: '1.0'
          memory: 1G
</code>

By centralizing the database, you'll likely achieve better resource utilization and simpler management, aligning well with a small VPS.

----

==== Docker Cleanup ====

Remove stopped containers and all unused images, volumes, and networks:
  docker system prune -a
Remove unused images:
  docker image prune
Remove unused volumes:
  docker volume prune
Remove unused networks:
  docker network prune

----

==== Docker Nuke-n-Pave ====

(Delete all containers, images, volumes, and networks, and start over.)

Stop and remove all containers:
  docker rm -f $(docker ps -aq)

**If that doesn't work (i.e. permission denied):**

<code bash>
sudo systemctl stop docker
sudo rm -rf /var/lib/docker/containers/*   # could also specify a single container by container ID
sudo systemctl start docker
sudo docker ps
</code>

Clean up images, volumes, and networks left over from deleted containers:
  docker image prune && docker volume prune && docker network prune

----

==== Moving a Container to a New Host ====

To move a Docker container, with all of its data, to a new host machine:

On the old host, export the container's filesystem to a tar archive with ''docker export'':
  docker export <container> > <container>.tar
Transfer the tar archive to the new host. There, import it to create a new Docker image with ''docker import'':
  cat <container>.tar | docker import - <image_name>
Start a new container from the imported image:
  docker run -it --name <container_name> <image_name>

Note that ''docker export'' captures only the container's filesystem — not its image metadata, network settings, or volume contents. To move the full image (all layers and metadata), use ''docker save'' and ''docker load'' instead. On the old host:
  docker save <image_name> > <image_name>.tar
Transfer the tar archive to the new host, then load it:
  docker load < <image_name>.tar
Finally, start a new container from the loaded image, specifying the desired network and volume configurations at run time:
  docker run -it --name <container_name> --network <network> -v <volume>:<mount_path> <image_name>

Note: this method preserves the complete image, but it may also carry over unwanted configurations or dependencies; review the contents of the tar archive before loading it on the new host. Volume data is not part of the image and must be copied separately.

----

==== Editing Files in a Docker Volume ====
To edit HTML files in a volume on a Docker container, you have several options.

**Option 1: Use a text editor inside the container.**

Start a shell session in the container:
  docker exec -it <container_name> bash
Navigate to the directory where the volume is mounted:
  cd <mount_path>
Use a text editor, such as nano or vi, to edit the files:
  nano <filename>.html

**Option 2: Map the volume to a host directory and edit the files directly on the host.**

Use the -v option to bind-mount a host directory when starting the container:
  docker run -it --name <container_name> -v <host_dir>:<container_dir> <image>
Then edit the HTML files directly in the host directory with any text editor.

Note: changes made to files in a volume persist even after the container is stopped or deleted, as long as the volume itself is not deleted. It is good practice to regularly back up important data stored in volumes.

----

==== Backing Up Volume Data ====

To back up data stored in volumes:

Use ''docker cp'' to copy data from the container to the host machine (this works for paths inside volume mounts too):
  docker cp <container>:<path_in_container> <host_path>
Alternatively, snapshot the container as a new image and save it to a tar archive:
  docker commit <container> <image_name>
  docker save <image_name> > <image_name>.tar
(Caveat: ''docker commit'' captures the container's writable filesystem layer but **not** the contents of mounted volumes — for volume data, use ''docker cp'' or tar the volume through a helper container.)
Transfer the tar archive to a backup location using scp, rsync, or removable media.

Note: you can also use tools designed for backing up Docker containers and their data, such as Velero or Portainer's backup features. Regularly back up important volume data to ensure its availability and prevent data loss.

----

==== Scheduled Backups with cron ====

Yes, you can back up Docker containers and volumes on a regular schedule using cron. Cron is a tool on Unix-like operating systems that runs repetitive tasks, such as backups, at specified times.
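As a sketch of what such a backup script might contain — following the ''docker cp''/''docker commit''/''docker save'' steps above — the snippet below defines two helper functions. The container name, volume name, and backup path are all placeholder examples, not values from this setup:

<code bash>
#!/usr/bin/env bash
# backup.sh -- sketch of a Docker backup script (hypothetical names/paths).
set -euo pipefail

BACKUP_DIR="${BACKUP_DIR:-/backup}"   # adjust to your backup location
STAMP="$(date +%F)"                   # date suffix for archives, e.g. 2024-05-01

# Snapshot a container as an image and save it to a tarball.
# Reminder: docker commit does NOT capture mounted volume contents.
backup_container() {
    docker commit "$1" "backup/$1:$STAMP"
    docker save "backup/$1:$STAMP" > "$BACKUP_DIR/$1-$STAMP.tar"
}

# Archive a named volume's contents via a throwaway helper container.
backup_volume() {
    docker run --rm -v "$1":/data -v "$BACKUP_DIR":/backup alpine \
        tar czf "/backup/$1-$STAMP.tar.gz" -C /data .
}

# Example usage (names are placeholders):
#   backup_container mariadb
#   backup_volume mariadb_data
</code>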
To use cron for backups, create a script that performs the backup steps, then schedule it to run at regular intervals:

Write a shell script that performs the backup steps using the ''docker cp'' and ''docker save'' commands, as described above. Save it in a file such as backup.sh, then make it executable:
  chmod +x backup.sh
Edit the crontab file:
  crontab -e
Schedule the backup script by adding a line with the desired interval (in cron syntax) and the script's path:
  <schedule> /bin/bash <path>/backup.sh
For example, to run the backup script every day at 2:00 AM:
  0 2 * * * /bin/bash /backup.sh
Save the changes and exit the crontab file.

Note: cron will run the backup script at the specified interval, but it is still important to regularly verify that the backups succeed and can be restored in the event of data loss.

----

==== Moving Docker Containers Between Hosts ====

https://www.tutorialspoint.com/how-to-move-docker-containers-between-different-hosts
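The linked tutorial covers this in depth. In outline, the ''docker save | ssh | docker load'' approach can be sketched as below; the remote host, image, and volume names are placeholders, and this assumes SSH access to the new host with Docker installed there:

<code bash>
#!/usr/bin/env bash
# Sketch: stream an image and a volume's data to a new host over ssh.
set -euo pipefail

# Send an image (all layers + metadata) straight to the remote Docker daemon.
move_image() {    # usage: move_image <image> <user@host>
    docker save "$1" | ssh "$2" docker load
}

# Copy a named volume's contents into a same-named volume on the remote host,
# using throwaway alpine containers on both ends of the pipe.
move_volume() {   # usage: move_volume <volume> <user@host>
    docker run --rm -v "$1":/data alpine tar cz -C /data . |
        ssh "$2" "docker run --rm -i -v $1:/data alpine tar xz -C /data"
}

# Example usage (placeholders):
#   move_image  myapp:latest user@newhost
#   move_volume myapp_data   user@newhost
</code>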