The previous article, vRA 8: Ansible Integration, tested using Ansible to configure servers in vRealize Automation 8. Let's see how this works on a real-world task: deploying a Kubernetes cluster. There is a large selection of ready-made Ansible playbooks, and one of the most advanced is Kubespray. Let's develop a vRA blueprint that creates a K8S cluster using Kubespray.
Kubespray. Setting up and first launch
Kubespray is a ready-to-use suite of Ansible playbooks and roles for installing and configuring a Kubernetes cluster.
Project description: https://kubespray.io/. Code download: https://github.com/kubernetes-sigs/kubespray. The main advantages of Kubespray (according to the development team):
- Creation of highly available clusters;
- Choice of components and flexible customization;
- Support for the most popular Linux distributions;
- Deployment both on bare-metal hardware and on various cloud platforms.
Before launching Kubespray via vRA, be sure to do a test deployment of the K8S cluster: download Kubespray, order several VMs via vRA, and start the installation. Most likely, manual intervention will be needed to resolve unexpected errors.
- We use the Ansible server configured earlier (see vRA 8: integration with Ansible);
- the local user is ansible;
- Kubespray is downloaded to ~/playbooks/kubespray;
- access to the managed servers is via SSH key; the remote user on the nodes is also ansible;
- the OS on the virtual machines is Ubuntu 18.04 (despite the declared “support for the most popular distributions”, I never got it working on CentOS 7/8).
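If Kubespray is not downloaded yet, here is a minimal sketch of fetching it into the path assumed above:
# Download Kubespray into the directory used in the rest of the article
git clone https://github.com/kubernetes-sigs/kubespray.git ~/playbooks/kubespray
cd ~/playbooks/kubespray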
# Install dependencies from ``requirements.txt``
sudo pip3 install -r requirements.txt
# The ``inventory/sample`` directory contains the configuration files
# Copy ``inventory/sample`` to ``inventory/vra``
cp -rfp inventory/sample inventory/vra
# Create the inventory file with the inventory builder
declare -a IPS=(10.10.1.3 10.10.1.4 10.10.1.5)
CONFIG_FILE=inventory/vra/hosts.yaml python3 contrib/inventory_builder/inventory.py ${IPS[@]}
# Review the resulting inventory/vra/hosts.yaml file for different numbers of VMs:
# two master nodes, 1 or 3 etcd nodes, all nodes act as workers
# Review and adjust the parameters in the files under ``inventory/vra/group_vars/``
cat inventory/vra/group_vars/all/all.yml
cat inventory/vra/group_vars/k8s-cluster/k8s-cluster.yml
# Deploy Kubespray with the Ansible playbook
ansible-playbook -i inventory/vra/hosts.yaml --become --become-user=root cluster.yml
# If the username on the nodes differs from the local username,
# add --user=<username>
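After cluster.yml finishes, it is worth verifying the cluster. A minimal check, assuming the standard kubeadm location of the admin kubeconfig on the first master node:
# On the first master node: list the nodes and their status
sudo kubectl --kubeconfig /etc/kubernetes/admin.conf get nodes -o wide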
Kubespray Launch Playbook
To deploy a Kubernetes cluster using vRA and Kubespray, you need to automate:
- Cloning the directory with the Kubespray settings; we will copy from inventory/vra/;
- Generating the inventory file;
- Editing configuration files according to user-selected values;
- Launching cluster deployment.
All of the above operations are performed locally, so the playbook sets connection: local. The text of the cluster deployment playbook follows (the Ansible playbooks described below and the blueprint code are available at https://github.com/isas2/vra):
# ~/playbooks/kubespray/vra_deploy.yaml
- name: Deploy K8S cluster
  hosts: kubespray
  connection: local
  roles:
    - kwoodson.yedit
  tasks:
    - name: "01. Create new inventory"
      copy: src=inventory/vra/ dest=inventory/{{ inventoryName }}/
    - name: "02. Create hostlist file"
      file:
        path: inventory/{{ inventoryName }}/vra_hosts
        state: touch
    - name: "03. Write to hostlist first master-node IP"
      lineinfile:
        path: inventory/{{ inventoryName }}/vra_hosts
        line: "{{ ipAddressMn }}"
    - name: "04. Write to hostlist all node IPs"
      lineinfile:
        path: inventory/{{ inventoryName }}/vra_hosts
        line: "{{ item }}"
      loop: "{{ ipAddressWn }}"
    - name: "05. Exclude pods subnets"
      shell: sed -i '/^10\.233\./d' inventory/{{ inventoryName }}/vra_hosts
    - name: "06. Create Inventory File"
      shell: CONFIG_FILE=inventory/{{ inventoryName }}/hosts.yaml python3 contrib/inventory_builder/inventory.py $(cat inventory/{{ inventoryName }}/vra_hosts)
    - name: "07. Enable external cloud provider"
      yedit:
        src: inventory/{{ inventoryName }}/group_vars/all/all.yml
        key: cloud_provider
        value: "external"
      when: vcpIstall == true
    - name: "08. Set external cloud provider name"
      yedit:
        src: inventory/{{ inventoryName }}/group_vars/all/all.yml
        key: external_cloud_provider
        value: "vsphere"
      when: vcpIstall == true
    - name: "09. Enable vSphere CSI"
      yedit:
        src: inventory/{{ inventoryName }}/group_vars/all/vsphere.yml
        key: vsphere_csi_enabled
        value: true
      when: vcpIstall == true
    - name: "10. Set network plugin"
      yedit:
        src: inventory/{{ inventoryName }}/group_vars/k8s-cluster/k8s-cluster.yml
        key: kube_network_plugin
        value: "{{ netPlugin }}"
    - name: "11. Wait a nodes"
      shell: sleep 1m
    - name: "12. Start Kubespray deploy"
      shell: /usr/bin/ansible-playbook -i inventory/{{ inventoryName }}/hosts.yaml --become --become-user=root cluster.yml
- To work with YAML files, the yedit module is used (from the kwoodson.yedit role; see the install command after this list);
- 01: Since there may be more than one cluster and their settings may differ, we create a separate inventory for each one. A unique value must be generated and passed in the inventoryName variable;
- 02 – 05: Create a file with the list of virtual machine IP addresses, with the master nodes at the top of the list;
- 06: Create the inventory file; the vra_hosts file with the IP list of all nodes is passed as the script input;
- 07 – 09: Enable the vSphere Cloud Provider if vcpIstall == true. In the file inventory/vra/group_vars/all/vsphere.yml, the vCenter connection fields must be filled in, except for the vsphere_csi_enabled field;
- 10: Set the name of the network plugin;
- 11: A short start-up delay; there were cases when the playbook started before the hosts were fully ready (it is worth providing for this in all vRA to Ansible playbook launches).
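The yedit module comes from the kwoodson.yedit role on Ansible Galaxy; it has to be installed on the Ansible server before the playbook is run:
# Install the role that provides the yedit module
ansible-galaxy install kwoodson.yedit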
When deleting a cluster, its inventory directory must also be deleted:
# ~/playbooks/kubespray/vra_destroy.yaml
- name: Destroy cluster
  hosts: kubespray
  connection: local
  tasks:
    - name: "Delete inventory directory"
      file:
        path: inventory/{{ inventoryName }}/
        state: absent
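To test the destroy playbook without vRA, it can also be launched manually with the variable that vRA would normally pass; a sketch, assuming the ~/hosts inventory from the blueprint contains a kubespray group and using a made-up inventoryName value:
# Manual test run of the destroy playbook (the inventoryName value is an example)
ansible-playbook -i ~/hosts vra_destroy.yaml -e "inventoryName=demo-admin-1234"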
vRA Blueprint
Kubespray creates two master nodes by default, but there is only one in the diagram. This is due to how the playbooks are launched: as many VM resources as there are, that many playbook runs, which does not suit us. The first of the worker-node VMs will therefore be used as the second master node.
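The node roles follow from the order of addresses in the generated host list: tasks 03 and 04 of vra_deploy.yaml write the master VM IP first and the worker IPs after it, and the inventory builder assigns the first two hosts in the list as master nodes. An illustrative inventory/<inventoryName>/vra_hosts for one master VM and two worker VMs (addresses are made up):
# inventory/<inventoryName>/vra_hosts: master VM IP first, then worker VM IPs
10.10.1.3
10.10.1.4
10.10.1.5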
formatVersion: 1
name: K8S cluster with Ansible and Kybespray
version: 1
inputs:
  workerNodes:
    type: integer
    default: 2
    minimum: 1
    maximum: 5
    title: Number worker nodes
  nodeSize:
    type: string
    enum:
      - small
      - medium
      - large
    default: small
    title: Node size
  vcpIstall:
    type: boolean
    default: false
    title: Install vSphere cloud provider
  netPlugin:
    type: string
    enum:
      - calico
      - flannel
      - weave
    default: calico
    title: Select network plugin
resources:
  K8S-Install:
    type: Cloud.Ansible
    dependsOn:
      - WorkerVM
    properties:
      inventoryFile: ~/hosts
      username: ansible
      privateKeyFile: ~/.ssh/id_rsa
      playbooks:
        provision:
          - ~/playbooks/kubespray/vra_deploy.yaml
        de-provision:
          - /home/ansible/playbooks/kubespray/vra_destroy.yaml
      hostVariables: |
        inventoryName: ${to_lower(join([env.projectName, env.requestedBy, env.deploymentId], '-'))}
        ipAddressWn: ${resource.WorkerVM.address}
        ipAddressMn: ${resource.MasterVM.address}
        vcpIstall: ${input.vcpIstall}
        netPlugin: ${input.netPlugin}
      osType: linux
      groups:
        - kubespray
      maxConnectionRetries: 10
      host: '${resource.MasterVM.*}'
      account: Ansible (vra-ssh)
  MasterVM:
    type: Cloud.Machine
    properties:
      image: Ubuntu_18
      flavor: '${input.nodeSize}'
      customizationSpec: tmp-linux-vra
      networks:
        - network: '${resource.NetworkVM.id}'
          assignment: static
  WorkerVM:
    type: Cloud.Machine
    dependsOn:
      - MasterVM
    properties:
      count: '${input.workerNodes}'
      image: Ubuntu_18
      flavor: '${input.nodeSize}'
      customizationSpec: tmp-linux-vra
      networks:
        - network: '${resource.NetworkVM.id}'
          assignment: static
  NetworkVM:
    type: Cloud.vSphere.Network
    properties:
      networkType: existing
- In playbooks.provision and playbooks.de-provision, specify the paths to the playbooks. Note that the de-provision path is given as an absolute path; with a relative one it did not start for me (possibly just a glitch, check it yourself);
- The inventoryName variable is the name of the inventory directory: a string composed of the project name, the requesting user name and the deployment ID is passed in. If your project names are long, you can omit them, as shown below.
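For example, a variant of the hostVariables entry without the project name could look like this (the same vRA expression syntax, just with fewer components joined):
inventoryName: ${to_lower(join([env.requestedBy, env.deploymentId], '-'))}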
Cluster management
What’s next? Try modifying these examples to take advantage of the cluster scaling and upgrade capabilities (scale.yml, upgrade-cluster.yml).
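A manual run of these playbooks mirrors the cluster.yml invocation shown earlier; a sketch, assuming the same per-cluster inventory directory (the <inventoryName> placeholder stands for your actual directory name):
# Add new worker nodes to an existing cluster
ansible-playbook -i inventory/<inventoryName>/hosts.yaml --become --become-user=root scale.yml
# Upgrade cluster components
ansible-playbook -i inventory/<inventoryName>/hosts.yaml --become --become-user=root upgrade-cluster.yml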
The first obstacle along the way is vRA itself. After the Kubernetes cluster is installed, vRA starts seeing additional IP addresses from the K8S pod subnet on the virtual machines (for Kubespray the default is 10.233.64.0/18). vRA then shows these IP addresses in the deployment description and uses them when calling Ansible playbooks. If one IP was used during cluster installation and a different one is passed for an update, that host will not be found in the inventory. There are two ways to deal with this:
- Add all hosts by mask to the inventory file: 10.233.[64:127].0 (see the example after this list);
- The vra_deploy.yaml playbook has a dedicated task, "05. Exclude pods subnets", which removes these addresses from the Kubespray inventory file.
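A sketch of the first workaround, assuming it refers to the ~/hosts file used as inventoryFile by the Cloud.Ansible resource and that the file is in INI format; the Ansible range pattern expands to the .0 addresses vRA picks up from the pod subnet:
# ~/hosts (illustrative): cover the pod-subnet addresses vRA may report
[kubespray]
10.233.[64:127].0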
All ansible playbooks and vRA blueprint code used in this article are available at https://github.com/isas2/vra.