Extra variables

Set up extra variables

The playbook uses several roles to set up the environment. These roles use extra variables that let you tune the installation. Most extra variables have reasonable defaults, but some have to be configured manually before the first run.

Using Vault

Sensitive information is stored in an Ansible Vault file. For example, to specify the automation controller administrator password, it is recommended to store the password in an Ansible Vault file in a variable called vault_controller_admin_password, rather than keeping the clear-text password in your extra variables file.
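As an illustrative sketch of this pattern (the file names and the example password are assumptions, not part of the playbook):

```yaml
# vault.yml - encrypt this file with: ansible-vault encrypt vault.yml
vault_controller_admin_password: "example-S3cret"

# extra_vars.yml - safe to keep in clear text, it only references the vault variable
controller_admin_password: "{{ vault_controller_admin_password }}"
```

At runtime, pass both files, for example `ansible-playbook site.yml -e @extra_vars.yml -e @vault.yml --ask-vault-pass` (the playbook file name here is an assumption).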

General Section

These are variables which always have to be set and are independent of the target platform where AAP is deployed.

The documentation below only lists mandatory or important variables. Read the Contribute chapter of this documentation to find details about all variables.

# all variables prefixed with "vault_" should not be declared here, but in an Ansible Vault encrypted file
# more details on Ansible Vault: https://docs.ansible.com/ansible/latest/cli/ansible-vault.html
# Automation controller instance name
controller_instance_name: controller
# enable private automation hub
# default is false in which case only controller is deployed and no private automation hub
controller_ah_enable: true
# name of the private automation hub instance
# MANDATORY if controller_ah_enable is true
controller_ah_instance_name: hub
# this is the password of the admin user for logging into automation controller
# MANDATORY - store the actual password in a vault file!
controller_admin_password: "{{ vault_controller_admin_password }}"
# all jobs will use the account "ansible" to log into the target machine.
# therefore automation controller needs to store the private key for this user
# note: the key has to be provided in one line!
# MANDATORY - store the actual key in a vault file!
controller_ansible_private_key: "{{ vault_controller_ansible_private_key }}"
# set an individual Sync URL instead of the following default, which will sync everything
# replace the provided default with your individual sync list from https://console.redhat.com/ansible/automation-hub/token
automation_hub_server_url: "https://console.redhat.com/api/automation-hub/content/published/"
# the base64 encoded manifest
# download the manifest from access.redhat.com and convert it to a base64 encoded string
# base64 < /path/to/manifest.zip
rhaap_manifest: |
  [output truncated]  
# Offline access token for Automation Hub on console.redhat.com
# MANDATORY if controller_ah_enable is true
rhsm_ah_offline_token: "{{ vault_rhsm_ah_offline_token }}"
# RHSM user name to register the system with subscription manager
# MANDATORY - store the actual user name in a vault file!
rhsm_username: "{{ vault_rhsm_username }}"
# RHSM password to register the system with subscription manager
# MANDATORY - store the actual password in a vault file!
rhsm_password: "{{ vault_rhsm_password }}"
# RHSM Pool ID to subscribe the system to, must have an active Red Hat Ansible Automation Platform entitlement
# MANDATORY - store the actual pool ID in a vault file!
rhsm_poolid: "{{ vault_rhsm_poolid }}"
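The manifest conversion mentioned above (`base64 < /path/to/manifest.zip`) can also be done in Python. This is a sketch only; the helper name is ours, and stand-in bytes are used instead of a real manifest download:

```python
import base64

def manifest_to_base64(path: str) -> str:
    """Read a manifest zip and return it as a base64 string, as expected by rhaap_manifest."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("ascii")

# demonstrate the round trip on stand-in bytes instead of a real manifest.zip
payload = b"PK\x03\x04 stand-in for manifest.zip"
encoded = base64.b64encode(payload).decode("ascii")
assert base64.b64decode(encoded) == payload
print(encoded)
```

The resulting string is what goes into the rhaap_manifest block literal.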

Amazon Web Services (AWS)

The following variables become mandatory if type is set to “ec2”.

# Amazon EC2
# make sure "type" is set to "ec2"
type: ec2
# keypair which will be injected into the VM
ec2_key_pair: '{{ lookup("file","~/path/to/public_key/id_rsa.pub") }}'

# which EC2 region to use
ec2_region: "eu-central-1"
# set the instance size correctly for AWS
instance_flavor: t3.large
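The public key consumed by the file lookup above can be generated like this (the key path is an example; point ec2_key_pair's lookup at wherever you keep the .pub file):

```shell
# generate an RSA keypair; the .pub half is what ec2_key_pair injects into the VM
rm -f /tmp/aap_demo_key /tmp/aap_demo_key.pub
ssh-keygen -t rsa -b 4096 -f /tmp/aap_demo_key -N "" -q
cat /tmp/aap_demo_key.pub
```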

Microsoft Azure
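
The following variables become mandatory if type is set to “azure”.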

# Microsoft Azure
# set "type" to "azure" to use these settings
type: azure
# public key to inject into the Linux instance for SSH
azure_ssh_public_key: "{{ vault_azure_ssh_public_key }}"

# which Azure location to deploy the instances to
azure_location: westeurope

# set the instance size correctly for Azure
instance_flavor: Standard_DS3

Google Cloud

The following variables become mandatory if type is set to “gcp”.

# Google Cloud GCP
# make sure "type" is set to "gcp"
type: gcp
# user provided credentials to log into Google
# the password is in fact the SSH key
# it is highly recommended to create a service account in Google and not user/password
gcp_password: "{{ vault_gcp_password }}"
gcp_username: "{{ vault_gcp_username }}"
# the project in which all objects will be created
gcp_project: "{{ vault_gcp_project }}"
# the region used to deploy the instance, network, etc.
gcp_region: "{{ vault_gcp_region }}"
gcp_zone: "{{ vault_gcp_zone }}"
# set the instance size correctly for GCP
# (t3.large is an AWS flavor; use a GCP machine type such as e2-standard-2)
instance_flavor: e2-standard-2

Let’s Encrypt

If you want to enable Let’s Encrypt support, set the boolean letsencrypt_skip to false and the playbook will configure it automatically for you. For this to work, you need DNS names for your automation controller and hub that are resolvable and reachable on the internet. Since there are many different ways to set up DNS, the playbook will not do this for you.

If you enable Let’s Encrypt but your automation controller cannot be reached at the name specified in letsencrypt_public_fqdn, the playbook run will fail.

Let’s Encrypt enforces rate limits that reject too many certificate requests for the same domain. If you plan to recreate your automation controller on a regular basis, consider setting letsencrypt_staging to true and read the Let’s Encrypt documentation about the staging environment.
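Before running the playbook with Let’s Encrypt enabled, you can at least verify that the FQDN resolves. A small pre-check sketch (the helper is ours, and "localhost" stands in for your real letsencrypt_public_fqdn value):

```python
import socket

def resolves(fqdn: str) -> bool:
    """Return True if the name resolves to an address; Let's Encrypt validation needs this."""
    try:
        socket.getaddrinfo(fqdn, 443)
        return True
    except socket.gaierror:
        return False

# replace "localhost" with your letsencrypt_public_fqdn
print(resolves("localhost"))
```

Note that resolving is necessary but not sufficient: the host must also be reachable from the internet on the validation ports.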

Dynamic DNS

We provide a Dynamic DNS service for Red Hatters using this lab. You can request a dedicated subdomain for your own purposes by filling out the Google Form.

Once your request is approved, you will receive a DNSSEC key pair. Change the following variables accordingly:

dns_update: true
# enter the subdomain you were assigned, without the .ansible-labs.de part
dns_suffix: myname
# you should have received a key pair; keep it in a secure place and put the content of both files here
# the private key has to be collapsed to a single line, using \n instead of the line breaks
dns_key: "{{ vault_dns_key }}"
dns_private: "{{ vault_dns_key_private }}"
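The comment above asks for the private key collapsed to a single line with literal \n sequences. A small helper sketch for that conversion (the function name is ours):

```python
def to_single_line(key_text: str) -> str:
    """Replace real line breaks with literal \\n sequences so the key fits on one YAML line."""
    return "\\n".join(key_text.strip().splitlines())

multi = "-----BEGIN KEY-----\nabc\ndef\n-----END KEY-----\n"
single = to_single_line(multi)
assert "\n" not in single
print(single)
```

The output can be pasted into the vault file as the value of vault_dns_key_private.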