A while ago I noticed one problem with Docker's documentation. While the very basic scenarios are covered in its Get Started section, you have to dig through the Internet for hours to find information about actual use cases. A use case is an answer to the question "What are we trying to do here, anyway?".
So I started collecting this information for myself and pairing it with command sets. Below you can find a use case for building a Docker image and performing some basic operations on it. I shall extend this collection later, if I'm lucky.
-+-------------------------+-
create an image from a Dockerfile
-+-------------------------+-
$ docker image build -t name:tag path_to_build_context
NOTE: mind the build context: the path you pass to the command is the context root, everything under it is sent to the Docker daemon, and by default the Dockerfile is expected to sit in that directory. This is why it is usually better to keep your Dockerfile in your project root.
NOTE: mind the difference between naming the image at build time (-t name:tag) and the separate docker tag command (see below)
NOTE: mind potential issues connected with the 'latest' tag (it is just the default tag name, not necessarily the newest image)
NOTE: mind the old and the new command names. Docker reworked its CLI some time ago, so currently both old and new commands exist. Tip: the new commands are usually longer, often three words instead of two (e.g. docker images vs docker image ls).
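For example, a minimal sketch (the project name, layout, and tag below are just assumptions): keep the Dockerfile in the project root and pass that directory as the context.
$ ls my_project
Dockerfile  src
$ cd my_project
$ docker image build -t my_project:1.0 .
Here the trailing '.' is the build context: everything under my_project is sent to the Docker daemon, and the Dockerfile is picked up from there by default.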
-+-------------------------+-
list images
-+-------------------------+-
$ docker image ls
only image IDs (handy for piping):
$ docker image ls -q
old short form:
$ docker images
-+-------------------------+-
remove images
-+-------------------------+-
NOTE: these pipelines remove all local images, since docker image ls -q lists every image ID
normal:
$ docker image ls -q | xargs docker image rm
forced:
$ docker image ls -q | xargs docker image rm -f
-+-------------------------+-
tag your image (turn the image ID into a name or a URL)
-+-------------------------+-
tagging as a remote image:
$ docker tag image_id custom_repo_host:custom_repo_port/project_name/image_name:image_tag
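For instance (the registry host, port, and names below are made up for illustration), tagging a local image for a private registry could look like this:
$ docker tag 3f1b2c0a9e55 repo.somehost.org:9999/my_project/my_image:1.0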
-+-------------------------+-
login to custom repo
-+-------------------------+-
$ cat ../file_w_password | docker login repo_host:repo_port -u user --password-stdin
NOTE: yes, this means your password has to be stored in a file to use this approach. Alternatively, you can give the password as plain text, but not all shell emulators will allow that. Security is not discussed here =)
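A minimal sketch with made-up values (the registry host, port, user, and file path are assumptions, and keeping a plain-text password file is obviously not a security recommendation):
$ echo "s3cr3t" > ~/.registry_password
$ cat ~/.registry_password | docker login repo.somehost.org:9999 -u deploy_user --password-stdin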
-+-------------------------+-
push to a custom registry
-+-------------------------+-
$ docker push target_location_with_image_name_and_tag
NOTE: remote target location may look like this:
repo.somehost.org:9999/project_name/repo_name/image_name:image_tag
NOTE: you should tag it first (see above)
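Putting the tag and push steps together (the registry host, project, and image names below are illustrative), a full sequence could look like this:
$ docker tag 3f1b2c0a9e55 repo.somehost.org:9999/my_project/my_image:1.0
$ docker push repo.somehost.org:9999/my_project/my_image:1.0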
-+-------------------------+-
run it without self-destruction (so you can kill it later)
-+-------------------------+-
$ docker run -dit target_location_with_image_name_and_tag
$ docker container ls -a
$ docker logs container_id
$ docker container kill container_id
NOTE: all the data inside a container instance dies together with the container
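As a sketch of the whole cycle (the image reference and the container name my_app are assumptions), using --name makes the later commands easier to read:
$ docker run -dit --name my_app repo.somehost.org:9999/my_project/my_image:1.0
$ docker container ls -a
$ docker logs my_app
$ docker container kill my_app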
-+-------------------------+-
connecting to a container and executing a command within
-+-------------------------+-
nix:
$ docker exec -it container_id linux_shell_command
win:
$ winpty docker exec -it container_id linux_shell_command
NOTE: winpty was added because of an error, reproducible with GitBash under Windows: "the input device is not a TTY. If you are using mintty, try prefixing the command with 'winpty'"
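For example, opening an interactive shell inside the container started above (my_app is the assumed container name; note that some minimal images ship only sh, not bash):
$ docker exec -it my_app bash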