Using docker.io (Ubuntu-maintained repo)
mkdir ~/docker && cd ~/docker
sudo apt update && sudo apt install -y docker.io curl
sudo gpasswd -a $USER docker
docker run hello-world && docker info
Using Docker's official install script (to get the latest version, or when installing on a distro other than Ubuntu)
mkdir ~/docker && cd ~/docker
curl -SL -o install.sh https://get.docker.com && sudo bash ./install.sh
sudo gpasswd -a $USER docker
docker run hello-world && docker info
Problems running docker commands without sudo? https://stackoverflow.com/questions/48568172/docker-sock-permission-denied
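Usually this just means the docker group change has not taken effect yet. A quick check, assuming your user was already added with gpasswd above:
newgrp docker
docker info
If docker info still fails, confirm the group membership and the socket permissions:
groups $USER
ls -l /var/run/docker.sock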
Create and run a container from an image, with a custom name:
docker run --name <container_name> <image_name>
Run a container and publish its port(s) to the host:
docker run -p <host_port>:<container_port> <image_name>
Run a container in the background (detached):
docker run -d <image_name>
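These flags combine; for example, a hypothetical nginx container named web, detached, with host port 8080 published to container port 80:
docker run -d --name web -p 8080:80 nginx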
Start or stop an existing container:
docker start|stop <container_name> (or <container_id>)
Remove a stopped container:
docker rm <container_name>
Open a shell inside a running container:
docker exec -it <container_name> sh
Fetch and follow the logs of a container:
docker logs -f <container_name>
To inspect a running container:
docker inspect <container_name> (or <container_id>)
To list currently running containers:
docker ps
List all docker containers (running and stopped):
docker ps --all
View resource usage stats:
docker container stats
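For a one-shot snapshot instead of a live stream:
docker stats --no-stream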
The age-old debate: centralized versus decentralized databases in containerized environments. Let's weigh the pros and cons of each approach to help you make an informed decision.
Option 1: Dedicated (shared) MariaDB server
This approach involves running a single MariaDB server instance, either in its own Docker container or directly on the host, which is then shared by all other containers.
Pros:
1. Resource Efficiency: one database process serves every app, so CPU and memory overhead stays low.
2. Simplified Management: a single instance to back up, monitor, upgrade, and tune.
3. Consistency: every application runs against the same server version and configuration.
Cons:
1. Single Point of Failure: if the shared database goes down, every application goes down with it.
2. Potential Bottlenecks: heavy load from one application can degrade performance for all the others.
3. Network Latency: queries cross the Docker network between stacks rather than staying inside one Compose project (usually negligible on a single host).
Option 2: Per-application MariaDB instances
This approach involves each application running its own MariaDB instance within its Docker Compose setup.
Pros:
1. Isolation: each application has its own database, so failures and noisy neighbors are contained.
2. Scalability: each stack can be scaled, moved, or rebuilt independently.
3. Simplified Dependency Management: each Compose file is self-contained, and each app can pin its own MariaDB version.
Cons:
1. Resource Utilization: running several MariaDB instances multiplies the CPU and memory overhead, which is costly on a small VPS.
2. Complexity: more instances to back up, monitor, and upgrade.
Given your VPS resource constraints (2 CPUs, 4GB RAM, and limited storage), a dedicated MariaDB server seems to be the more efficient and manageable option. Here’s why:
1. Resource Efficiency: a single MariaDB instance will make better use of your limited CPU and memory resources.
2. Simplified Management: easier to manage backups, monitoring, and maintenance.
3. Consistency: provides a consistent environment for all applications.
1. Docker Compose Example for Dedicated MariaDB: Let's assume you want to run the MariaDB container separately and have other containers connect to it. You can define a Docker network that all your services will use.
Docker Compose for MariaDB:
version: '3.1'
services:
  mariadb:
    image: mariadb:latest
    container_name: mariadb
    environment:
      MYSQL_ROOT_PASSWORD: your_root_password
      MYSQL_DATABASE: your_database
      MYSQL_USER: your_user
      MYSQL_PASSWORD: your_password
    volumes:
      - mariadb_data:/var/lib/mysql
    networks:
      - my_network

networks:
  my_network:

volumes:
  mariadb_data:
2. Docker Compose for an Application:
version: '3.1'
services:
  myapp:
    image: your_app_image
    environment:
      DB_HOST: mariadb
      DB_USER: your_user
      DB_PASSWORD: your_password
      DB_NAME: your_database
    networks:
      - my_network

networks:
  my_network:
    external: true
By using an external network, both the MariaDB container and your application containers can communicate easily.
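One caveat: Compose prefixes the networks it creates with the project name (for example mariadb_my_network), so the external: true reference above may not resolve to the network the MariaDB stack created. A simple workaround, assuming the my_network name used in these examples, is to create the network yourself before bringing either stack up and mark it external in both files:
docker network create my_network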
3. Resource Limits: you can cap the MariaDB container's CPU and memory so the database cannot starve your other services.
Example:
services:
  mariadb:
    image: mariadb:latest
    deploy:
      resources:
        limits:
          cpus: '1.0'
          memory: 1G
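After docker compose up (with a recent Docker Compose, which honors deploy.resources.limits outside swarm mode), one way to verify the cap took effect is to check the MEM USAGE / LIMIT column:
docker stats --no-stream mariadb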
By centralizing your database, you’ll likely achieve better resource utilization and simpler management, aligning well with your current VPS setup.
Remove stopped containers, unused networks, dangling build cache, and all unused images (note: volumes are only removed if you also pass --volumes):
docker system prune -a
Remove unused images
docker image prune
Remove unused volumes
docker volume prune
Remove unused networks
docker network prune
Delete all containers, images, volumes, and networks (and start over):
Stop and remove all containers:
docker rm -f $(docker ps -aq)
*If that doesn’t work (i.e. permission denied)*
sudo systemctl stop docker
sudo rm -rf /var/lib/docker/containers/*   # could also specify a single container by container ID
sudo systemctl start docker
sudo docker ps
Cleanup images, volumes, and networks leftover from deleted containers:
docker image prune && docker volume prune && docker network prune
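Each prune command prompts for confirmation, so the chain above will pause three times; add -f to each command to run it unattended:
docker image prune -f && docker volume prune -f && docker network prune -f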
To move a Docker container, with all of its data, to a new host machine:
On the old host machine, save the container's filesystem to a tar archive using the docker export command (note: docker export captures the container's filesystem but not the contents of named volumes):
docker export <container-name> > <filename>.tar
Transfer the tar archive to the new host machine. On the new host machine, import the tar archive to create a new Docker image using the docker import command:
cat <filename>.tar | docker import - <image-name>
Start a new container from the imported image using the docker run command:
docker run -it --name <new-container-name> <image-name>
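Note that docker import produces an image without the original CMD/ENTRYPOINT metadata, so you may need to state the command to run explicitly (the /bin/sh here is an assumption; use whatever the original image ran):
docker run -it --name <new-container-name> <image-name> /bin/sh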
To migrate the image itself, with all of its layers, tags, and metadata intact, use the docker save and docker load commands instead (note: neither docker export nor docker save carries named-volume data or network settings; copy volume data separately and recreate networks on the new host). On the old host machine, save the image to a tar archive using the docker save command:
docker save <image-name> > <filename>.tar
Transfer the tar archive to the new host machine. On the new host machine, load the image from the tar archive using the docker load command:
docker load < <filename>.tar
Finally, start a new container from the loaded image, specifying any desired network and volume configurations, using the docker run command:
docker run -it --name <new-container-name> --network <network-name> -v <host-path>:<container-path> <image-name>
Note: it is recommended to review the contents of the tar archive before loading it on the new host machine, as it may carry unwanted configurations or dependencies.
How do I edit HTML files in a volume on a Docker container?
To edit HTML files in a volume on a Docker container, you have several options:
1. Use a text editor inside the container. Start a shell session in the container:
docker exec -it <container-name> bash
Navigate to the directory containing the HTML files:
cd <container-path>
Use a text editor, such as nano or vi, to edit them:
nano <filename>.html
2. Map the volume to a host directory and edit the files directly on the host machine. Use the -v option to map the volume when starting the container:
docker run -it --name <container-name> -v <host-path>:<container-path> <image-name>
Then edit the HTML files in the host directory with any text editor.
Note: Changes made to the HTML files in the volume persist even after the container is stopped or deleted, as long as the volume itself is not deleted. It is good practice to regularly back up important data stored in volumes.
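For example, a hypothetical nginx container serving a bind-mounted site, so edits made on the host show up immediately:
docker run -d --name web -p 8080:80 -v ~/site:/usr/share/nginx/html nginx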
How do I back up data stored in volumes?
To back up data stored in volumes, you can use the following methods:
1. Use the docker cp command to copy the data from the container to the host machine:
docker cp <container-name>:<container-path> <host-path>
2. Create a new image from the container (note: docker commit captures the container's filesystem, not the contents of named volumes):
docker commit <container-name> <new-image-name>
Save the image to a tar archive using the docker save command:
docker save <new-image-name> > <filename>.tar
3. Transfer the tar archive to a backup location using a method such as scp or a USB drive.
Note: You can also use backup and restore tools designed specifically for Docker containers and their data, such as Velero or Portainer Backup and Restore. It is recommended to back up important data stored in volumes regularly to ensure its availability and prevent data loss.
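For named volumes specifically, a common pattern is to archive the volume's contents from a throwaway container (my_volume and the output path here are placeholders):
docker run --rm -v my_volume:/data -v "$(pwd)":/backup alpine tar czf /backup/my_volume.tar.gz -C /data .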
Can I back up my Docker containers and volumes on a regular schedule using cron?
Yes, you can back up your Docker containers and volumes on a regular schedule using cron.
Cron is a tool in Unix-like operating systems that allows you to schedule repetitive tasks, such as backups, at specified times. To use cron for backups, you would create a script that performs the backup steps, and then use cron to run the script at regular intervals.
Here's an example of how to use cron for Docker container and volume backups:
1. Create a backup script: write a shell script that performs the backup steps using the docker cp and docker save commands described above, and save it in a file such as backup.sh.
2. Make the script executable:
chmod +x backup.sh
3. Edit the crontab file:
crontab -e
4. Schedule the backup script: add the following line to the crontab, replacing <backup-path> with the backup location and <interval> with the desired backup interval in cron syntax:
<interval> /bin/bash <backup-path>/backup.sh
For example, to run the backup script every day at 2:00 AM:
0 2 * * * /bin/bash <backup-path>/backup.sh
5. Save the changes and exit the crontab file.
Note: Cron will run the backup script at the specified interval, but it is still important to regularly verify that the backups succeed and can be restored in the event of data loss.
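A minimal backup.sh sketch along those lines (the container name and paths are assumptions; replace them with your own):
#!/bin/bash
# Minimal Docker backup sketch -- adjust the names and paths for your setup
set -e
BACKUP_DIR=/srv/backups                      # assumed backup location
DATE=$(date +%F)
mkdir -p "$BACKUP_DIR"
# Copy files out of a container path
docker cp mycontainer:/var/www/html "$BACKUP_DIR/html-$DATE"
# Snapshot the container as an image and archive it
docker commit mycontainer mycontainer-backup
docker save mycontainer-backup > "$BACKUP_DIR/mycontainer-$DATE.tar"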