Background
Running Ansible has always come with an environment challenge: the environment is hard to maintain consistently across developers' machines, pipelines, AWX, standalone execution and so on.
How do we make sure Ansible runs with the correct dependencies like Python libraries, Ansible roles, collections and system packages? This has mostly been solved manually and separately in each environment.
Another way is to add tasks to a role that install the dependencies before each run, but this of course adds unwanted time to every execution.
Python dependencies are usually solved by adding a requirements.txt file or similar to the project, making sure that all the dependencies for modules, plugins, roles, collections etc. are included for the project to work properly.
$ pip install ansible requests # installs ansible and requests
$ pip freeze > requirements.txt # saves the installed libs to file
$ pip install -r requirements.txt # installs all the dependencies
The reason it is so important to have all the dependencies in the control-node environment lies in how Ansible remotely executes Python code on the managed node with the AnsiballZ framework.
Prior to version 2.10, every module and plugin was part of Ansible itself. This created scalability issues with the number of new modules constantly being added, and a saner model was urgently needed. With the release of version 2.10, modules and plugins were migrated to collections, which are no longer part of the Ansible base package installation. With this model you only have to install the modules and plugins that you actually use. From Ansible 4.0.0 onwards, ansible-base was also renamed to ansible-core, just to make it a little bit harder.
We can also have Ansible role/collection dependencies; these can be defined in a requirements.yml file and installed manually with the ansible-galaxy command:
$ ansible-galaxy collection install -r requirements.yml
Then we need to make sure that Ansible uses the correct Python interpreter from our environment, so that the requirements we have installed are actually picked up. Read the documentation page about Interpreter Discovery if needed.
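One way to take the guesswork out of interpreter discovery is to pin the interpreter explicitly; here is a minimal ansible.cfg sketch (the interpreter path is just an example, adjust it to your environment):
$ cat ansible.cfg
[defaults]
interpreter_python = /usr/bin/python3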
There are also system-level specifics to take into consideration, such as packages that are needed to install Python dependencies or for some local execution.
To solve all these use-cases Ansible has introduced "Execution Environments"; let's have a look at how this helps us.
Ansible-builder is a tool that helps us create the execution environment as a container image. It takes our requirement files as input, and it also parses the installed collections for their own requirements.
From the documentation we can see how ansible-builder parses collections for requirements (a sketch of the first option follows after this list):
A file meta/execution-environment.yml references the Python and/or bindep requirements files
A file named requirements.txt is in the root level of the collection
A file named bindep.txt is in the root level of the collection
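As a sketch of that first option, a collection can ship a meta/execution-environment.yml pointing at its own requirement files. The file below assumes the same version-1 schema as the build-level file shown later; the referenced file names are whatever the collection actually ships:
---
version: 1
dependencies:
  python: requirements.txt
  system: bindep.txt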
Introspect
First we need to install ansible-builder.
$ pip install ansible-builder
To manually verify the requirements in a collection, we can run:
$ ansible-builder introspect --sanitize ~/.ansible/collections/
------------------
python:
- 'jsonschema # from collection ansible.utils'
- 'textfsm # from collection ansible.utils'
- 'ttp # from collection ansible.utils'
- 'xmltodict # from collection ansible.utils'
- 'pytz # from collection awx.awx'
- 'python-dateutil>=2.7.0 # from collection awx.awx'
- 'awxkit # from collection awx.awx'
- 'dnacentersdk>=2.0.2 # from collection cisco.dnac'
system:
- 'gcc-c++ [doc test platform:rpm] # from collection ansible.utils'
- 'python3-devel [test platform:rpm] # from collection ansible.utils'
- 'python3 [test platform:rpm] # from collection ansible.utils'
- 'python38-pytz [platform:centos-8 platform:rhel-8] # from collection awx.awx'
- 'python38-requests [platform:centos-8 platform:rhel-8] # from collection awx.awx'
- 'python38-pyyaml [platform:centos-8 platform:rhel-8] # from collection awx.awx'
The command expects a folder called "ansible_collections" at the given path, which it will introspect. In this case we have the awx.awx, juniper.device and cisco.dnac collections installed. The dependencies for the juniper.device collection are missing because it doesn't have a requirements.txt file in the collection root folder, but rather in its repository root folder.
Galaxy (collections/roles)
Collection dependencies are defined in a requirements.yml file to be used in the execution environment, in our case three collections: juniper.device, awx.awx and cisco.dnac.
$ cat requirements.yml
------------
collections:
  - name: juniper.device
    version: 1.0.0
  - name: awx.awx
    version: 19.1.0
  - name: cisco.dnac
    version: 2.0.7
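The same file can also carry role dependencies, which the generated Dockerfile later installs with ansible-galaxy role install; the role below is purely a hypothetical example:
roles:
  - name: geerlingguy.ntp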
System level
Depending on the base image we are building on top of, we might need some system-level dependencies; one example is python-ldap.
Bindep, originally used in the OpenStack project, is a tool that declares binary dependencies and makes sure they are installed. In our case we need to define a couple of system packages for CentOS 8 in the bindep.txt file that are missing in order to use the python-ldap package.
$ cat bindep.txt
gcc
python38-devel
openldap-devel
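Bindep can also be run locally to check which of these packages are missing on the current host; a quick sketch (the -b/--brief and -f/--file flags are from recent bindep releases, verify with bindep --help):
$ pip install bindep
$ bindep -b -f bindep.txt # prints packages from bindep.txt that are missing on this host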
Build
Now we need to create an execution-environment.yml file as input to ansible-builder:
---
version: 1
dependencies:
  galaxy: requirements.yml
  python: requirements.txt
  system: bindep.txt

additional_build_steps:
  prepend: |
    RUN pip install pip --upgrade
    RUN whoami
    RUN cat /etc/os-release
  append:
    - RUN echo This is a post-install command!
    - RUN ls -la /etc
This YAML file declares the dependencies, base images, build steps etc. that we want for our execution environment. Since we are building on top of an OCI-compliant container format, we can add instructions before and after the ones the builder itself adds. These can be written as a multi-line string, like prepend, or as a list of commands, like append.
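The requirements.txt referenced above is our own, user-defined Python requirements file. Judging from the "# from collection user" entries in the combined output further down, it contains roughly the following (a reconstruction, not the verbatim file):
$ cat requirements.txt
xmltodict
junos-eznc>=2.5.4
jsnapy>=1.3.4
jxmlease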
Supported runtime engines for ansible-builder are Docker and Podman. Let's run the build and verify the output.
$ ansible-builder build
Running command:
docker build -f context/Dockerfile -t ansible-execution-env:latest context
Running command:
docker run --rm -v /Users/tobias_sdnit/Library/Caches/pypoetry/virtualenvs/ansible-ee-OudvHk9-py3.9/lib/python3.9/site-packages/ansible_builder:/ansible_builder_mount:Z ansible-execution-env:latest python3 /ansible_builder_mount/introspect.py
Running command:
docker build -f context/Dockerfile -t ansible-execution-env:latest context
Complete! The build context can be found at: /Users/tobias_sdnit/git/ansible-ee/context
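The image tag (ansible-execution-env:latest) and the Docker runtime used above are the defaults in my setup; both can be overridden on the command line, for example (verify the exact flags with ansible-builder build --help):
$ ansible-builder build --tag my-ee:1.0 --container-runtime podman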
$ tree context
context
├── Dockerfile
└── _build
├── bindep_combined.txt
├── requirements.yml
└── requirements_combined.txt
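The build context is self-contained, so we can also tweak the generated Dockerfile and rebuild the image directly with the same command ansible-builder ran for us:
$ docker build -f context/Dockerfile -t ansible-execution-env:latest context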
In the requirements_combined.txt file we find the Python requirements from the ansible.utils, awx.awx and cisco.dnac collections together with our user-defined ones. The user-defined entries are needed because the juniper.device collection doesn't have a requirements.txt file in its collection root folder.
$ cat context/_build/requirements_combined.txt
jsonschema # from collection ansible.utils
textfsm # from collection ansible.utils
ttp # from collection ansible.utils
xmltodict # from collection ansible.utils,user
pytz # from collection awx.awx
python-dateutil>=2.7.0 # from collection awx.awx
awxkit # from collection awx.awx
dnacentersdk>=2.0.2 # from collection cisco.dnac
junos-eznc>=2.5.4 # from collection user
jsnapy>=1.3.4 # from collection user
jxmlease # from collection user
In requirements.yml there is nothing new; dependencies for collections are handled by ansible-galaxy at installation. However, bindep_combined.txt has the system-level dependencies from the awx.awx and ansible.utils collections; ansible.utils is a dependency of the cisco.dnac collection.
$ cat context/_build/bindep_combined.txt
gcc-c++ [doc test platform:rpm] # from collection ansible.utils
python3-devel [test platform:rpm] # from collection ansible.utils
python3 [test platform:rpm] # from collection ansible.utils
python38-pytz [platform:centos-8 platform:rhel-8] # from collection awx.awx
python38-requests [platform:centos-8 platform:rhel-8] # from collection awx.awx
python38-pyyaml [platform:centos-8 platform:rhel-8] # from collection awx.awx
gcc # from collection user
python38-devel # from collection user
openldap-devel # from collection user
If we look into the Dockerfile we see how the dependencies are installed and the image is actually built:
ARG ANSIBLE_RUNNER_IMAGE=quay.io/ansible/ansible-runner:devel
ARG PYTHON_BUILDER_IMAGE=quay.io/ansible/python-builder:latest
FROM $ANSIBLE_RUNNER_IMAGE as galaxy
ARG ANSIBLE_GALAXY_CLI_COLLECTION_OPTS=
ADD _build /build
WORKDIR /build
RUN ansible-galaxy role install -r requirements.yml --roles-path /usr/share/ansible/roles
RUN ansible-galaxy collection install $ANSIBLE_GALAXY_CLI_COLLECTION_OPTS -r requirements.yml --collections-path /usr/share/ansible/collections
FROM $PYTHON_BUILDER_IMAGE as builder
ADD _build/requirements_combined.txt /tmp/src/requirements.txt
ADD _build/bindep_combined.txt /tmp/src/bindep.txt
RUN assemble
FROM $ANSIBLE_RUNNER_IMAGE
RUN pip install pip --upgrade
RUN whoami
RUN cat /etc/os-release
COPY --from=galaxy /usr/share/ansible /usr/share/ansible
COPY --from=builder /output/ /output/
RUN /output/install-from-bindep && rm -rf /output/wheels
RUN echo This is a post-install command!
RUN ls -la /etc
And to verify that the image is built with the correct collections, we can run the following command.
$ docker run --rm ansible-execution-env ansible-galaxy collection list
# /usr/share/ansible/collections/ansible_collections
Collection Version
-------------- -------
ansible.utils 2.1.0
awx.awx 19.1.0
cisco.dnac 2.0.7
juniper.device 1.0.0
A quote from the ansible-runner project's README:
"A tool and python library that helps when interfacing with Ansible directly or as part of another system whether that be through a container image interface, as a standalone tool, or as a Python module that can be imported. ..."
This is exactly what we need: a CLI tool to execute our Ansible playbooks inside the execution environment, one that works the same way whether we run the playbooks from the CLI, from AWX or from another Python module. It strives for freedom for the users while at the same time guaranteeing that the end result will be the same.
Let's look at how this can be done with ansible-runner and how to structure this in a good way, first we need to install the ansible-runner:
$ pip install ansible-runner==2.0.0a2
Ansible-runner can take an input directory hierarchy, just like Ansible; this helps us organize the input and output of the ansible-runner tool.
Example directory layout (the ansible-runner project also provides a demo directory):
$ tree ansible-ee
├── env
├── inventory
├── project
│ ├── roles
│ │ └── testrole
│ └── test.yml
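The env directory can hold runner configuration so that flags don't have to be repeated on every invocation. A minimal sketch of env/settings, assuming the container-related keys supported by ansible-runner 2.0 (verify against the settings documentation for your version):
$ cat env/settings
process_isolation: true
process_isolation_executable: docker
container_image: ansible-execution-env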
I created a small playbook, test.yml, to verify the runner and the execution in the container.
---
- hosts: localhost
  roles:
    - testrole
  tasks:
    - name: Check the hostname
      debug:
        msg: "Hostname: {{ ansible_facts.hostname }}"
    - name: Check the installed collections
      shell: ansible-galaxy collection list
      register: collections
    - name: Print the installed collections
      debug:
        msg: "{{ collections.stdout_lines | list }}"
The playbook includes the testrole, a debug task that prints the hostname of the localhost to stdout, and tasks that list the installed collections. Let's run the playbook and verify that the execution inside the container was successful.
$ ansible-runner playbook --container-runtime docker --container-image ansible-execution-env project/test.yml
From the output we see the hostname is "2b5d5522d957"; this is the hostname inside the container that was created when we ran the command, so the playbook was executed correctly in the execution environment.
There is also the possibility to get all the artifacts from the environment, like facts and events, stored in a separate folder for inspection. This is enabled with an extra flag, --private-data-dir <folder>, and we can investigate the actual command that was run to execute the playbook in the command file. One useful trick is to bind mount our ssh directory so that all our ssh keys are available to Ansible in the execution environment.
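Putting that together, an invocation could look something like this; --private-data-dir comes straight from the text above, while --container-volume-mount is my assumption for expressing the ssh bind mount with ansible-runner 2.0, so verify it with ansible-runner playbook --help:
$ ansible-runner playbook --container-runtime docker \
    --container-image ansible-execution-env \
    --container-volume-mount "$HOME/.ssh:/home/runner/.ssh" \
    --private-data-dir . project/test.yml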
$ tree artifacts/d362689b-1265-4680-972f-cf055c520e32 -L 1
artifacts/d362689b-1265-4680-972f-cf055c520e32
├── ansible_version.txt
├── collections.json
├── command
├── env.list
├── job_events
├── rc
├── status
├── stderr
└── stdout
It is of course also possible to run your own docker command, which shows the flexibility of the execution environment.
$ docker run -it --rm --workdir /runner/project/project -v /Users/tobias_sdnit/.ssh/:/home/runner/.ssh/ -v /Users/tobias_sdnit/git/ansible-ee/project:/runner/project/project ansible-execution-env ansible-playbook test.yml
Another useful way to run the playbook is with Python directly, with the ansible-runner as a Python module.
#!/usr/bin/env python
import sys

import ansible_runner

# Stream the playbook run through stdin/stdout/stderr and execute it
# inside the execution environment container.
out, err = ansible_runner.run_command(
    executable_cmd="ansible-playbook",
    cmdline_args=["/Users/tobias_sdnit/git/ansible-ee/project/test.yml", "-v"],
    input_fd=sys.stdin,
    output_fd=sys.stdout,
    error_fd=sys.stderr,
    cwd="/Users/tobias_sdnit/git/ansible-ee/",
    process_isolation=True,
    process_isolation_executable="docker",
    container_image="quay.io/tobiasjohanssonsdnit/ansible-ee",
)
Here the ansible_runner module is used with the run_command function, giving it the same kind of input we had on the command line. The output of the run will be the same, but now we run it from Python code, which can have a lot of advantages for our own development.
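If you prefer the higher-level interface, the same run can be expressed with ansible_runner.run() against the private data directory. The container-related keyword arguments below are my reading of the 2.0 API, so double-check them against the version you have installed:
#!/usr/bin/env python
import ansible_runner

# Run test.yml from the private data directory's project folder inside
# the execution environment container and inspect the result object.
r = ansible_runner.run(
    private_data_dir="/Users/tobias_sdnit/git/ansible-ee",
    playbook="test.yml",
    process_isolation=True,
    process_isolation_executable="docker",
    container_image="quay.io/tobiasjohanssonsdnit/ansible-ee",
)
print(r.status, r.rc)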
Share the execution environment
So now that we have built our image and are able to run playbooks in the container, we should share the image through a registry so that others can run their playbooks with it as well.
One way is to set up your own container registry, for example Project Quay, the Pulp project or the GitLab container registry. If you don't have any secrets in the execution environment you can push it to a public registry, as shown below.
Tag the image with the registry URL, name and tag, then push the tag to the registry (if not already done in the build step).
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
ansible-execution-env latest 8f2a6d56ae5b 10 minutes ago 770MB
$ docker tag 8f2a6d56ae5b quay.io/tobiasjohanssonsdnit/ansible-ee:latest
$ docker login quay.io
Authenticating with existing credentials...
Login Succeeded
$ docker push quay.io/tobiasjohanssonsdnit/ansible-ee:latest
Using default tag: latest
The push refers to repository [quay.io/tobiasjohanssonsdnit/ansible-ee]
5f70bf18a086: Layer already exists
a53cb7d68be3: Pushed
af41dc6dae26: Pushed
26555777f769: Pushed
c384fbed4161: Pushed
….
latest: digest: sha256:c76d8789cbb332483ba74da6e8b812c7cf96279c43bc1efbfeb53cda2b0dbe97 size: 4710
Now we can use this image in AWX >= 18.0 for all our playbooks, by configuring the execution environment with the container registry path and adding it to the job templates. The conclusion is that Ansible execution environments are going to be extremely useful for managing environments for Ansible automation.
All the examples are made available at https://github.com/tobiasjohanssonsdnit/ansible-ee and I appreciate any kind of feedback.