Let’s start testing new features and preparing to migrate to vRealize Automation 8. Our vRA 7.x installation makes extensive use of software components to configure servers, so the main question is: how do we configure servers now that software components are gone? vRA 8 offers several options: cloud-init, Ansible Tower, and plain Ansible over SSH to a remote server.
- Cloud-init is suitable for light “file processing” on a server at first boot and for installing extra software; it is actively used by many cloud providers. Cons: it is not suitable for complex deployments of clustered applications and, as it turned out, does not yet play well with VMware VM customization;
- Ansible is one of the most common and convenient ways to manage a server fleet; a large number of modules and example playbooks exist for all kinds of systems. Ansible Tower is far from free, but there is also a free implementation, Ansible AWX. Disadvantage: while installing these systems does not take much time, the same cannot be said about learning them from scratch;
- Ansible Open Source combines the advantages of Ansible with the ability to start quickly, since it does not require much time for configuration or for studying documentation. Let’s get to know this option now.
Configuring Ansible
Configuring vRA Integration – Ansible
vRA Blueprint
Debugging. What’s going on under the hood?
Configuring Ansible
Install Ansible on the system (all settings in this section are performed on a dedicated Linux server, to which vRA will connect over SSH):
$ apt install ansible # for Ubuntu / Debian
$ yum install ansible # for RHEL / CentOS
Recommendation: create a separate user without administrator rights for the vRA integration with Ansible. If you create a .ansible.cfg configuration file in its home directory, the settings will be taken from there.
$ useradd ansible
$ cp /etc/ansible/ansible.cfg /home/ansible/.ansible.cfg
$ cp /etc/ansible/hosts /home/ansible/hosts
$ chown -R ansible:ansible /home/ansible/
# set a password for the user, or create a key pair for SSH access (see the sketch below)
$ passwd ansible
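If you choose key-based access instead, here is a minimal sketch (default paths are assumed; the target server name is a placeholder):

$ su - ansible
$ ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa -N ''
# copy the public key to each server that vRA will configure via Ansible
$ ssh-copy-id -i ~/.ssh/id_rsa.pub ansible@<target-server>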
Setting up Ansible, like integrating it with vRA, is very simple. Everything is described in detail in the Configure Ansible Open Source integration in vRealize Automation Cloud Assembly documentation.
Make the additional settings in the config file ~/.ansible.cfg:
[defaults]
vault_password_file = ~/.creds/ansible_vault_password
...
[paramiko_connection]
record_host_keys = False
...
[ssh_connection]
#ssh_args = -C -o ControlMaster=auto -o ControlPersist=60s
ssh_args = -o UserKnownHostsFile=/dev/null
vault_password_file is used by the ansible-vault utility to store the encryption key. If you specify a username/password for remote access to the servers being configured, the password will be encrypted with this key. You can create the file with the following commands:
$ echo 'ksdjhfgasf32te41u2egwd32' > ~/.creds/ansible_vault_password
$ chmod 600 ~/.creds/ansible_vault_password
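With this key in place, ansible-vault can encrypt values non-interactively; for example (the variable name here is only an illustration):

$ ansible-vault encrypt_string 'MySecretPassword' --name 'ansible_ssh_pass'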
Configuring vRA integration – Ansible
Create the vRA integration object for Ansible:
- Go to Infrastructure > Connections > Integrations and click the Add Integration button;
- Select Ansible;
- Fill out the form: a name for the new integration object, the hostname of the server with Ansible configured, the path to the inventory file (in our case ~/hosts; a minimal inventory sketch follows this list), and the connection parameters: the ansible username and password;
- Click Validate to check the entered data and the Ansible settings on the host;
- Click Add.
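For reference, the inventory file can start out nearly empty; a minimal sketch (vRA registers hosts and creates groups on deployment):

# ~/hosts
[test_echo]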
vRA Blueprint
You are now ready to deploy your first Ansible-configured server. The simplest blueprint:
formatVersion: 1
name: Ansible test blueprint
version: 1
inputs: {}
resources:
  Ansible-Test:
    type: Cloud.Ansible
    properties:
      account: Ansible-host
      inventoryFile: ~/hosts
      groups:
        - test_echo
      host: '${resource.VM.*}'
      username: ansible
      privateKeyFile: ~/.ssh/id_rsa
      osType: linux
      maxConnectionRetries: 10
      playbooks:
        provision:
          - ~/playbooks/test/echo.yml
      hostVariables: |
        vmName: ${resource.VM.resourceName}
  VM:
    type: Cloud.Machine
    properties:
      image: Ubuntu_18
      flavor: small
      customizationSpec: tmp-linux-vra
      networks:
        - network: '${resource.vSphere_Network.id}'
          assignment: static
  vSphere_Network:
    type: Cloud.vSphere.Network
    properties:
      networkType: existing
- account – the name of your vRA-to-Ansible integration object;
- inventoryFile – the path to the inventory file;
- groups – the group(s) in the inventory file in which the host will be registered (if a group does not exist yet, it is created automatically);
- host – the virtual server on which the Ansible playbook will run (when you link the elements on the canvas, this value is filled in automatically);
- username – the user name for accessing the server being configured;
- privateKeyFile – the path to the private part of the SSH key (if key-based access is not configured yet, use password access and specify it in the password field);
- osType – the type of OS being configured;
- maxConnectionRetries – the number of connection attempts to the server;
- playbooks.provision – playbook(s) to execute when the server is provisioned (you can specify several; they run sequentially);
- playbooks.de-provision – playbook(s) to execute when the deployment is destroyed (see the sketch after this list);
- hostVariables – a list of variables to pass to Ansible.
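As an example of the last two playbook options, here is a sketch of a playbooks section covering both phases; ~/playbooks/test/cleanup.yml is a hypothetical path for a cleanup playbook:

playbooks:
  provision:
    - ~/playbooks/test/echo.yml
  de-provision:
    - ~/playbooks/test/cleanup.yml

And here is the demo provision playbook referenced in the blueprint: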
# ~/playbooks/test/echo.yml
- name: Echo hello
  hosts: all
  connection: ssh
  become: no
  tasks:
    - name: "Echo hello string"
      shell: echo "{{ vmName }}" >> ~/echo.txt
This demo playbook writes to the ~/echo.txt file the virtual machine name that is passed to it from the blueprint via the hostVariables parameter.
- hosts – the list of hosts, or a host group name, on which to run the playbook. In our case we could have specified the test_echo group from the blueprint instead of all, but using the all group will not make the playbook run on every server from the inventory file. Why? The answer is in the next section.
- become – allows or denies switching users for privilege escalation. No extra rights are required to run this playbook, but in most cases configuration needs root privileges; for that, change become to yes, grant the user administrator privileges, and allow escalation without a password (username ALL=(ALL) NOPASSWD: ALL), as shown below.
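For example, a minimal sudoers entry for this (assuming the remote user is also named ansible; add it with visudo):

# /etc/sudoers.d/ansible
ansible ALL=(ALL) NOPASSWD: ALL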
Debugging. What’s going on under the hood?
Running a playbook from vRA differs from running ansible-playbook by hand on a local server.
For each deployment and each Ansible element in your blueprint, vRA uses separate storage for logs and temporary files. It all lives under ~/var/tmp/vmware/provider/user_defined_script/*. This is the first place to look for errors when a playbook fails.
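A quick way to watch this while a deployment is in progress (plain shell, run on the Ansible host as the ansible user):

$ ls -la ~/var/tmp/vmware/provider/user_defined_script/
$ ps -f -u ansible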
To check the values of vRA variables, look at the files in the host_vars directory. It is always created next to the inventory file; in our case, ~/host_vars. A separate subdirectory named after the VM’s IP is created for each VM; it contains files with global variables, user variables (hostVariables from vRA), and encrypted passwords. Keep in mind that if an element in your blueprint has a count property, its properties passed as variable values arrive as arrays, even when count = 1, as illustrated below.
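For illustration, a small sketch under that assumption: if the VM resource had a count property and the blueprint passed vmNames: ${resource.VM.resourceName} in hostVariables (vmNames is a name made up for this example), the playbook would receive a list and would have to index it:

- name: "Echo the first VM name"
  shell: echo "{{ vmNames[0] }}" >> ~/echo.txt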
Tip: try passing one of the resources as a variable value, for example the virtual machine:
vm: '${resource.VM.*}'
You will get a JSON object in the variable and can see all of its available properties in ~/host_vars/<VM IP>/vra_user_host_vars.yml.
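To dump that object from a playbook, the standard debug module will do (vm here is the variable from the tip above):

- name: "Show the VM resource object passed by vRA"
  debug:
    var: vm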
To take a closer look at how the playbook is executed from vRA, add one more task to it:
- name: "Wait 15 minutes"
shell: sleep 15m
While the playbook is running, look at the contents of ~/var/tmp/vmware/provider/user_defined_script/* and the list of processes. Here are the processes running on my system under the ansible user:
- Execution starts with the exec file, which initializes several variables and launches the following process:

bash var/tmp/vmware/provider/user_defined_script/4c2d7946-9539-448a-89e3-a6bd2d21b100/exec

- This is the main bash script that does all the dirty work: it checks the inventory and registers the host in it, creates the files with variables, handles the encrypted data... It also launches ansible-playbook:

bash var/tmp/vmware/provider/user_defined_script/4c2d7946-9539-448a-89e3-a6bd2d21b100/4c2d7946-9539-448a-89e3-a6bd2d21b100 max_connection_retries=10 ansible_inventory_path=/home/ansible/hosts use_sudo=false ansible_groups=test01 node_host=192.168.100.3 node_user=ansible node_uuid=4c2d7946-9539-448a-89e3-a6bd2d21b100 operation=create provisioning_playbook_paths=L29wdC92bXdhcmUvYW5zaWJsZS9rOHMvZWNoby55bWw= ansible_ssh_private_key_file=/home/ansible/.ssh/id_rsa

- The playbook launch itself. The parameters set the inventory file from the blueprint and limit the run to the specific host. This is why using the all host group in a playbook does not make it run on all hosts from the inventory, or even on all hosts in the group:

/usr/bin/python3 /usr/local/bin/ansible-playbook /home/ansible/playbooks/test/echo.yml -l 192.168.100.3 -i /home/ansible/hosts

- After that, Ansible itself takes over:

ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o IdentityFile="/home/ansible/.ssh/id_rsa" -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User="ansible" -o ConnectTimeout=10 -tt 192.168.100.3 /bin/sh -c '/usr/bin/python3 /home/ansible/.ansible/tmp/ansible-tmp-1590723734.9405904-113746505590972/AnsiballZ_command.py && sleep 0'
Continue reading: vRA 8: Ansible + Kubernetes