Andrew Stanley – SMS | https://www.sms.com | Solving | Managing | Securing

Automating Operating System Hardening
https://www.sms.com/blog/automating-operating-system-hardening/
Wed, 12 Jul 2023 17:01:42 +0000
By Andrew Stanley, Director of Engineering, SMS

In the ever-evolving landscape of cybersecurity, the importance of operating system hardening cannot be overstated. As the foundational layer of any IT infrastructure, the operating system presents a broad surface area for potential attacks. Hardening these systems, therefore, is a critical step in any comprehensive cybersecurity strategy. However, the challenge lies in automating this process, particularly in legacy on-premises infrastructures not designed with automation in mind. 

Open-source software has emerged as a powerful ally in this endeavor, offering flexibility, transparency, and a collaborative approach to tackling cybersecurity challenges. Tools such as OpenSCAP and Ansible have been instrumental in automating and streamlining the process of operating system hardening. The Center for Internet Security (CIS), a non-profit entity, plays a pivotal role in this context by providing well-defined, community-driven security benchmarks that these tools can leverage. 

While cloud-native architectures have been at the forefront of automation with tools like HashiCorp’s Packer and Terraform, these tools are not confined to the cloud. They can be ingeniously adapted to work with on-premises systems like VMware, enabling the creation of hardened virtual machine images and templates. This convergence of cloud-native tools with traditional on-premises systems is paving the way for a new era in cybersecurity, where robust, automated defenses are within reach for all types of IT infrastructures. This blog post will delve into how these tools can automate operating system hardening, making cybersecurity more accessible and manageable. 

Why Use OpenSCAP and Ansible for Operating System Hardening

The Center for Internet Security (CIS) Benchmarks Level II Server Hardening standard is a stringent set of rules designed for high-security environments. It includes advanced security controls like disabling unnecessary services, enforcing password complexity rules, setting strict access controls, and implementing advanced auditing policies. OpenSCAP, an open-source tool, can automate the application of these benchmarks by generating Ansible templates. This automation ensures consistency, accuracy, and efficiency in securing your servers according to these high-level standards.

Prerequisites

  • VMware vSphere environment for building and testing images
  • One Linux host or VM to run the required tools
  • One Linux host or VM for auditing

Note

The examples in this post use Ubuntu 20.04 but should work for other versions and distros.

Steps

  • Execute the following on the host you intend to use for running OpenSCAP, Ansible, Packer, and Terraform.
# Reference - https://medium.com/rahasak/automate-stig-compliance-server-hardening-with-openscap-and-ansible-85f2f091b00
# install openscap libraries on local and remote hosts
sudo apt install libopenscap8

# Create a working directory
mkdir ~/openscap
export WORKDIR=~/openscap
cd $WORKDIR

# Download ssg packages and unzip
# Check for updates here - https://github.com/ComplianceAsCode/content/releases
wget https://github.com/ComplianceAsCode/content/releases/download/v0.1.67/scap-security-guide-0.1.67.zip
unzip -q scap-security-guide-0.1.67.zip

# Clone openscap
git clone https://github.com/OpenSCAP/openscap.git
  • Create a new Ubuntu 20.04 base image and virtual machine template in VMware

Note

There are several ways to create base images in vSphere. Our recommendation is to use HashiCorp Packer and the packer-examples-for-vsphere project. The setup and configuration of these are outside the scope of this post, but we may cover them in more detail in the future. The advantage of using this project is that it already provides a convenient way to add Ansible playbooks to your image provisioning process. Additionally, SMS develops reusable Terraform modules that are designed to work with images created from this project.

  • Run a remote scan against the new virtual machine you created
# Return to the root of the working directory
cd $WORKDIR

# Scan the newly created Ubuntu 20.04 instance using the CIS Level2 Server profile
./openscap/utils/oscap-ssh --sudo <user@host> 22 xccdf eval \
  --profile xccdf_org.ssgproject.content_profile_cis_level2_server \
  --results-arf ubuntu2004-cis_level2_server.xml \
  --report ubuntu2004-cis_level2_server.html \
  scap-security-guide-0.1.67/ssg-ubuntu2004-ds.xml
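Once the scan completes, a quick pass/fail tally can be pulled from the results file without opening the HTML report. The snippet below runs against a tiny stand-in XML fragment for illustration — point the grep at your real ubuntu2004-cis_level2_server.xml, whose elements are namespaced and far more numerous but carry the same `<result>` values.

```shell
# Stand-in results file (the real ARF file is much larger and namespaced)
cat > /tmp/sample-arf.xml <<'EOF'
<rule-result idref="a"><result>pass</result></rule-result>
<rule-result idref="b"><result>fail</result></rule-result>
<rule-result idref="c"><result>fail</result></rule-result>
EOF

# Count failing rules; on the stand-in data this prints 2
grep -c '<result>fail</result>' /tmp/sample-arf.xml
```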
  • Generate an Ansible Remediation Playbook
# Generate an Ansible Playbook using OpenSCAP
oscap xccdf generate fix \
  --fetch-remote-resources \
  --fix-type ansible \
  --result-id "" \
  ubuntu2004-cis_level2_server.xml > ubuntu2004-playbook-cis_level2_server.yml
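It can be useful to gauge how many remediation tasks OpenSCAP emitted before running them. A sketch, using a tiny stand-in playbook — swap in your generated ubuntu2004-playbook-cis_level2_server.yml:

```shell
# Stand-in for the generated playbook (real output has hundreds of tasks)
cat > /tmp/sample-playbook.yml <<'EOF'
- hosts: all
  tasks:
    - name: Ensure permissions on /etc/passwd are configured
      file:
        path: /etc/passwd
        mode: "0644"
    - name: Ensure SSH root login is disabled
      lineinfile:
        path: /etc/ssh/sshd_config
        regexp: '^PermitRootLogin'
        line: 'PermitRootLogin no'
EOF

# Count the tasks; prints 2 for the stand-in file
grep -cE '^[[:space:]]*- name:' /tmp/sample-playbook.yml
```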
  • Test the generated Ansible Playbook
# Validate the playbook against the target machine
ansible-playbook -i "<host>," -u <user> -b -K ubuntu2004-playbook-cis_level2_server.yml
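Before applying remediations for real, Ansible's --check flag previews the changes without making them (a handful of generated tasks may not fully support check mode). The command below only echoes the invocation so it can be reviewed; drop the echo to execute it against your host:

```shell
# Print the dry-run invocation for review (remove `echo` to actually run it)
echo ansible-playbook -i '"<host>,"' -u '<user>' -b -K --check \
  ubuntu2004-playbook-cis_level2_server.yml
```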

Note

It may be necessary to perform the previous scanning and playbook creation steps multiple times. As new packages are added, additional hardening configurations will be needed.

Using Ansible Templates with Packer Examples for VMware vSphere

In this section, we delve into the practical application of Packer in a vSphere environment. We will explore the Packer Examples for VMware vSphere repository on GitHub, which provides a comprehensive set of examples for using Packer with vSphere. These examples demonstrate how to automate the creation of vSphere VM templates using Packer, Ansible and Terraform which can be used to create consistent and repeatable infrastructure. By the end of this section, you will have a solid understanding of how to leverage these examples in a vSphere environment to streamline your infrastructure management tasks. 

# Return to the root of the working directory
cd $WORKDIR

# Clone packer-examples-for-vsphere
git clone https://github.com/vmware-samples/packer-examples-for-vsphere.git
cd ./packer-examples-for-vsphere

# Create a new branch to save customizations. New templates will include the branch name by default.
git checkout -b dev
  • Update the repo to include the Ansible Playbook created with OpenSCAP
# Add a new role to the Ansible section of the repo
mkdir -p ./ansible/roles/harden/tasks
mkdir -p ./ansible/roles/harden/vars

# Create a variables file for the new role and copy all of the variables from the Ansible Playbook
vi ./ansible/roles/harden/vars/main.yml

# Create a task file and copy the remaining contents of the Ansible Playbook
vi ./ansible/roles/harden/tasks/main.yml

# Update the existing Ansible Playbook to include the newly created role
vi ./ansible/main.yml

---
- become: "yes"
  become_method: sudo
  debugger: never
  gather_facts: "yes"
  hosts: all
  roles:
    - base
    - users
    - configure
    - harden
    - clean
  • Create a new hardened image and virtual machine template in VMware
# Follow the setup instructions in the README.md then create your base images
./build.sh

    ____             __                ____        _ __    __     
   / __ \____ ______/ /_____  _____   / __ )__  __(_) /___/ /____ 
  / /_/ / __  / ___/ //_/ _ \/ ___/  / __  / / / / / / __  / ___/ 
 / ____/ /_/ / /__/ ,< /  __/ /     / /_/ / /_/ / / / /_/ (__  )  
/_/    \__,_/\___/_/|_|\___/_/     /_____/\__,_/_/_/\__,_/____/   

  Select a HashiCorp Packer build for VMware vSphere:

      Linux Distribution:

         1  -  VMware Photon OS 4
         2  -  Debian 11
         3  -  Ubuntu Server 22.04 LTS (cloud-init)
         4  -  Ubuntu Server 20.04 LTS (cloud-init)

Choose Option 4

Creating Virtual Machines on VMware vSphere Using the Hardened Virtual Machine Templates

In this section, we will explore using the ‘terraform-vsphere-instance’ project, hosted on GitLab by SMS, for creating virtual machines. This project provides a set of Terraform configurations designed to create instances on VMware vSphere. These configurations leverage the power of Terraform, a popular Infrastructure as Code (IaC) tool, to automate the provisioning and management of vSphere instances. By using these Terraform modules, you can streamline the process of creating and managing your virtual machines on vSphere, ensuring consistency and repeatability in your infrastructure.

  • Create a virtual machine instance from the new template
# Return to the root of the working directory
cd $WORKDIR

# Clone terraform-vsphere-instance
git clone https://gitlab.com/sms-pub/terraform-vsphere-instance.git
cd ./terraform-vsphere-instance/examples/vsphere-virtual-machine/template-linux-cloud-init

# Copy and update the example tfvars file with settings for your environment
cp terraform.tfvars.example test.auto.tfvars

# Deploy a new virtual machine using Terraform
terraform plan

...
Plan: 1 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + module_output = [
      + {
          + vm_id           = (known after apply)
          + vm_ip_address   = (known after apply)
          + vm_ip_addresses = (known after apply)
          + vm_moid         = (known after apply)
          + vm_tools_status = (known after apply)
          + vm_vmx_path     = (known after apply)
        },
    ]

─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
Note: You didn't use the -out option to save this plan, so Terraform can't guarantee to take exactly these actions if you run "terraform apply" now.

terraform apply

...
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

Outputs:

module_output = [
  {
    "vm_id" = "423d5014-829b-e000-9489-ac12dfaf4627"
    "vm_ip_address" = "10.4.3.142"
    "vm_ip_addresses" = tolist([
      "10.4.3.142",
      "fe80::250:56ff:febd:394f",
    ])
    "vm_moid" = "vm-4174"
    "vm_tools_status" = "guestToolsRunning"
    "vm_vmx_path" = "f784ad64-86a2-588d-a073-0025b500002e/lin-test-2004-default-00.vmx"
  },
]

Conclusion

In this blog post, we’ve explored the importance of operating system hardening and the challenges of automating this process, particularly in legacy on-premises infrastructures. We’ve seen how open-source tools like OpenSCAP and Ansible, along with the CIS Benchmarks, provide a robust framework for maintaining the security of enterprise systems. 

We’ve also delved into the practical application of Packer in a vSphere environment, demonstrating how to automate the creation of vSphere VM templates. Furthermore, we’ve seen how these templates can be used to create consistent and repeatable infrastructure, ensuring a high level of security across all systems. 

Finally, we’ve explored the use of Terraform modules from GitLab for creating virtual machines on VMware vSphere. This approach leverages the power of Infrastructure as Code (IaC) to automate the provisioning and management of vSphere instances, streamlining the process and ensuring consistency and repeatability in your infrastructure. 

In conclusion, the convergence of cloud-native tools with traditional on-premises systems is paving the way for a new era in cybersecurity. By leveraging these tools, organizations can ensure that their systems are configured according to best security practices and are resilient against potential threats. This approach makes cybersecurity more accessible and manageable, even in complex, legacy infrastructures. 

As we move forward, it’s clear that the automation of operating system hardening will continue to play a crucial role in cybersecurity. By staying informed and leveraging the right tools, we can ensure that our systems remain secure in the face of ever-evolving threats.

Bare Metal to Kubernetes as a Service – Part 3 of 3
https://www.sms.com/blog/bare-metal-kubernetes-as-a-service-part-3/
Tue, 23 Jun 2020 05:22:00 +0000
By Andrew Stanley, Director of Engineering, SMS

Introduction

I started this series of posts with the assertion that, although a lot of effort has gone into making Kubernetes easier to deploy and operate, Kubernetes is still hard. Simply showing how to deploy a Kubernetes cluster is not the aim. There are thousands of how-tos on the web covering the basics. The ultimate point of these posts is to demonstrate at least one viable option for deploying a capability similar to a public cloud Kubernetes as a service solution. This means having the ability to provision multiple, production-ready clusters complete with features such as multitenancy, role-based access control (RBAC), service mesh, CI/CD, etc. When you hear “Kubernetes is hard,” it’s not usually in reference to simply setting up a functioning cluster. It is usually a commentary on the challenges associated with these additional features, which are necessary for running Kubernetes in production.

Objective

I mentioned in Part 1 of this series some of the available solutions for deploying Kubernetes in private clouds. After some extensive research and experimentation with a few of the more popular solutions I decided to go with Rancher’s Kubernetes management platform. This is not an endorsement of one product over another and I don’t intend to do a side-by-side comparison of all of the platforms I tested. For me, Rancher simply shortened the steep learning curve associated with Kubernetes and met or exceeded all of the requirements I had for running production clusters with limited experience.

In this final post I’m going to demonstrate how to integrate Rancher with Openstack. Integrating these two technologies simplifies the deployment and management of production-ready Kubernetes clusters. Rancher abstracts and automates the process of creating the underlying infrastructure and provides a unified management platform for running Kubernetes at scale on any cloud, public or private. After completing this demonstration we’ll have a self-service platform from which to create and manage multiple Kubernetes clusters with the features necessary for production environments.

Prerequisites

In Part 1 of this series we deployed MAAS to automate the provisioning and management of our physical compute resources. In Part 2 we deployed Openstack as our Infrastructure-as-a-Service (IaaS) platform for managing virtual machines, networks and storage. These prerequisites provide the private cloud infrastructure on which Rancher will automate self-service Kubernetes deployments. If you’ve been following along up till now you should already have everything you need to complete this final demonstration.

Install Rancher

Rancher was designed to be cloud-native and is intended to be run as a distributed, highly available (HA) application on top of Kubernetes. That said, getting started with Rancher is as simple as launching a single container. For the purposes of this demonstration I’ll be using a single node deployment. For additional information on running Rancher in HA please reference the Rancher Documentation.

Create Cloud-init file

To automate the creation and installation of the Rancher server we’ll use a cloud-init file with the Openstack API.

Cloud-init is the industry standard multi-distribution method for cross-platform cloud instance initialization. It is supported across all major public cloud providers, provisioning systems for private cloud infrastructure, and bare-metal installations.

First, we’ll need to retrieve the SSH public key from the keypair that was created in Part 2.

openstack keypair show --public-key os_rsa

Next, using your preferred text editor, create a file named cloud-init with the following contents. Be sure to replace <your public key> with the public key retrieved in the previous step.

#cloud-config
groups:
  - ubuntu
  - docker
users:
  - name: ubuntu
    shell: /bin/bash
    sudo: ['ALL=(ALL) NOPASSWD:ALL']
    primary-group: ubuntu
    groups: sudo, docker
    lock_passwd: true
    ssh-authorized-keys:
      - ssh-rsa <your public key>
apt:
  sources:
    docker:
      keyid: "0EBFCD88"
      source: "deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable" 
package_upgrade: true
packages: 
  - apt-transport-https 
  - docker-ce
runcmd:
  - "docker run -d --restart=unless-stopped -p 80:80 -p 443:443 rancher/rancher"

This cloud-init file will instruct the Ubuntu virtual machine, that we’ll launch in the next step, to install Docker, then pull and run the Rancher server container.
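One gotcha worth checking before launch: cloud-init won't treat user-data as cloud-config unless its first line is exactly `#cloud-config`, so a quick header check can save a debugging session. The file below is a stand-in; run the check against your real cloud-init file.

```shell
# Create a minimal stand-in user-data file (use your real cloud-init file)
printf '#cloud-config\ngroups:\n  - docker\n' > /tmp/cloud-init-demo

# The first line must be exactly "#cloud-config"; prints it if so
head -n 1 /tmp/cloud-init-demo
```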

Create Rancher Server

You should have already created the required Openstack flavors, images and networks needed to launch the server. To launch the server with the custom cloud-init file, run:

openstack server create --image bionic --flavor m1.medium \
    --nic net-id=$(openstack network list | grep int_net | awk '{ print $2 }') \
    --user-data cloud-init rancher

Add Floating IP

To enable SSH access from external hosts, we’ll need to assign the instance a floating IP in the external subnet.

floating_ip=$(openstack floating ip create -f value -c floating_ip_address ext_net)
openstack server add floating ip rancher ${floating_ip}

Security Groups

Kubernetes has many port and protocol requirements for communicating with cluster nodes. There are several different ways to implement these requirements depending upon the type of public or private cloud you are deploying to. Make sure you review these port and protocol requirements before running Rancher in production. For demonstration purposes I’m going to update the existing default security group in Openstack to allow all traffic. It should go without saying that this is not recommended outside of a lab environment.

To update the default security group, run:

project_id=$(openstack project show --domain admin_domain admin -f value -c id)
secgroup_id=$(openstack security group list --project ${project_id} | awk '/default/ {print $2}')
openstack security group rule create $secgroup_id --protocol any --ethertype IPv6 --ingress
openstack security group rule create $secgroup_id --protocol any --ethertype IPv4 --ingress
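For anything beyond a lab, a tighter alternative is to open only the ports Kubernetes and Rancher actually need. The loop below just prints the rule-creation commands for review rather than executing them; the port list is an illustrative TCP subset (overlay networking typically needs additional UDP ports), so consult the Rancher port requirements for your version.

```shell
# Print (not execute) per-port ingress rules for review; illustrative ports:
# 22 SSH, 80/443 Rancher UI, 6443 kube-apiserver, 2379-2380 etcd, 10250 kubelet
for port in 22 80 443 6443 2379 2380 10250; do
  echo "openstack security group rule create \$secgroup_id --protocol tcp --dst-port $port --ingress"
done
```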

Connect to Rancher UI

Our Rancher server should now be remotely reachable via the floating IP address we added earlier. To get the IP, run:

openstack server show rancher -f value -c addresses
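The addresses field returned by this command mixes the internal and floating IPs (e.g. `int_net=10.0.0.238, 192.168.10.234`). A small awk filter pulls out just the floating address — shown here against a sample string:

```shell
# Sample value; in practice:
#   addresses=$(openstack server show rancher -f value -c addresses)
addresses='int_net=10.0.0.238, 192.168.10.234'

# The floating IP is the second comma-separated field; prints 192.168.10.234
echo "$addresses" | awk -F', ' '{print $2}'
```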

Now open a web browser and launch the Rancher User Interface (UI) at https://<floating_ip>.

[Screenshot: rancher ui 1]

Configure Rancher

Upon connecting to the Rancher UI you’ll be prompted to set a new password for the default admin user.

After selecting Continue you’ll be prompted to set the Rancher server URL. It’s OK to leave the default for now, but typically this would be the fully qualified domain name (FQDN) of the server.

Note: all cluster nodes will need to be able to reach this URL and it can be changed later from within the UI.

[Screenshot: rancher url 1]

After selecting Save URL you’ll be logged in to the default Clusters view of the UI. Since no clusters have been added, only an Add Cluster button is visible.

Enable Openstack Node Driver

Before adding our first cluster we need to configure settings that allow Rancher to automate the provisioning of Openstack resources. The first step is to enable the Openstack Node Driver.

Node drivers are used to provision hosts, which Rancher uses to launch and manage Kubernetes clusters.

To enable the Openstack Node Driver navigate to Tools then Drivers in the UI navigation.

[Screenshot: node driver 1]

On the Drivers page, click the tab for Node Drivers, select the check box next to OpenStack and then click the Activate button.

[Screenshot: node driver 2]

Add Openstack Node Template

Once we’ve enabled the Openstack Node Driver we can create a Node Template that will specify the settings used to create nodes/hosts within Openstack.

To create the node template click the drop down in the top right of the UI next to the admin user’s avatar and select Node Templates.

[Screenshot: node template 1]

Next click the Add Template button.

[Screenshot: node template 4]

On the Add Node Template page first select the option for Openstack. Before filling out the template data we’ll need to retrieve our settings from the Openstack API.

[Screenshot: node template 2]

Obtain Template Settings

The script for obtaining the default API credentials was included in the Openstack bundle we downloaded in Part 2. If they are not already loaded in your environment, change to the openstack-base directory and run the following.

~$ source openrc
~$ env | grep OS_

OS_USER_DOMAIN_NAME=admin_domain
OS_PASSWORD=Uafe7chee6eigedi
OS_AUTH_URL=http://10.1.20.32:5000/v3
OS_USERNAME=admin

Only the settings needed for the node template are shown above.

After loading the API credentials in your environment, you’ll need to obtain the default domain and tenant id from the Openstack API.

openstack project show admin -f value -c id

Once you have these settings you should have all of the information needed to fill out the node template.

Several of the settings shown below were created in Part 2 in order to validate the Openstack installation. Some of these settings are shown with the values specific to my environment.

Option           Value
authUrl          http://10.1.20.32:5000/v3
domainName       admin_domain
flavorName       m1.large
floatingipPool   ext_net
imageName        bionic
insecure         enable
keypairName      os_rsa
netName          int_net
password         Uafe7chee6eigedi
privateKeyFile   <contents of ~/.ssh/os_rsa>
secGroups        default
sshUser          ubuntu
tenantId         ca52ba242b104464b15811d4589f836e
username         admin

Be sure to use tenantId instead of tenantName. Using tenantName will cause docker-machine to use the wrong Openstack API version.

Once you’ve added all of your relevant settings to the template click Create to save the node template.

[Screenshot: node template 3]

Add Cluster

Now that we’ve configured the settings Rancher will use to create nodes within Openstack, we can add our first Kubernetes cluster.

Click on Clusters in the UI navigation and then click the Add Cluster button.

[Screenshot: add cluster 1]

On the Add Cluster page select Openstack from the list of infrastructure providers.

[Screenshot: add cluster 2]

Continue configuring the cluster with the following settings:

Option         Value
Cluster Name   os1
Name Prefix    os1-all-
Count          3
Template       Openstack

For demonstration purposes, select the check boxes next to etcd, control plane and worker.

[Screenshot: add cluster 3]

These settings will create a three node cluster named os1 using the previously created Openstack node template. Each node will be configured to run the Kubernetes etcd, control plane and worker services.

Note: it is recommended to separate the worker nodes from other services in production clusters.

Configure the Cloud Provider

Before launching the cluster you’ll need to configure the Kubernetes Cloud Provider for Openstack.

Cloud Providers enable a set of functionality that integrate with the underlying infrastructure provider, a.k.a the cloud provider. Enabling this integration unlocks a wide set of features for your clusters such as: node address & zone discovery, cloud load balancers for Services with Type=LoadBalancer, IP address management, and cluster networking via VPC routing tables.

Under the Cloud Provider section select the Custom option. A warning message will appear that states that the prerequisites of the underlying cloud provider must be met before enabling this option.

[Screenshot: cloud provider 1]

All of the prerequisite configurations for Openstack were applied in Part 2 of this series.

Also notice the message instructing you to edit the cluster’s YAML configuration in order to enable the custom cloud provider. To begin editing the cluster configuration click the Edit as YAML link at the top of the Cluster Options section.

[Screenshot: cloud provider 2]

Obtain Cloud Provider Settings

To configure the Openstack cloud provider we need to add a section of YAML to the top of the configuration. The YAML consists of settings similar to those used in the node template.

First we’ll need the Openstack API credentials. As shown above, they can be obtained by running:

~$ source openrc
~$ env | grep OS_

OS_PASSWORD=Uafe7chee6eigedi
OS_AUTH_URL=http://10.1.20.32:5000/v3
OS_USERNAME=admin

The remaining configuration settings can be obtained from the Openstack API. Run the following commands and record their output; we’ll then assemble the credentials and settings into a YAML-formatted block of text.

# tenant-id
openstack project show admin -f value -c id

# domain-id
openstack project show admin -f value -c domain_id

# subnet-id
openstack subnet show int_subnet -f value -c id

# floating-network-id
openstack network show ext_net -f value -c id

# router-id
openstack router show rtr-01 -f value -c id

cloud_provider:
  name: openstack
  openstackCloudProvider:
    global: 
      username: admin
      password: Uafe7chee6eigedi
      auth-url: "http://10.1.20.32:5000/v3"
      tenant-id: ca52ba242b104464b15811d4589f836e
      domain-id: 3f7614e901ef4adc8e14014ae9a5d8e6
    load_balancer:
      use-octavia: true
      subnet-id: a114a47d-e7d5-4e98-8a5a-26b2a6692017
      floating-network-id: e499e0b6-fa1e-40b0-abcd-3f9299fd1094
    route:
      router-id: 498ce328-6ba2-4a09-b421-71e28028b4fa

Paste this configuration (substituting your values) into the top of the Cloud Provider YAML configuration as shown below.

[Screenshot: cloud provider 3]

Now click the Create button to launch the Kubernetes cluster.

For more information on configuring the cloud provider please see the Kubernetes Cloud Provider or the Rancher documentation.

Validate Cluster

As the cluster is deploying we can monitor its progress and status in the Rancher UI by navigating to the Clusters page and selecting the link to os1 under the Cluster Name list. From there click on Nodes in the UI navigation.

[Screenshot: validate cluster 1]

From here status and error messages will be displayed as the cluster is being deployed.

Rancher Logs

You can also monitor the cluster build process by connecting to the Rancher instance with SSH. This is useful for debugging any error messages produced by Rancher.

First SSH to the Rancher server.

ssh -i ~/.ssh/os_rsa ubuntu@<rancher_ip>

Next get the rancher Container ID.

ubuntu@rancher:~$ docker ps
CONTAINER ID        IMAGE                    COMMAND             CREATED             STATUS              PORTS                                      NAMES
21792badbb1c        rancher/rancher:v2.2.8   "entrypoint.sh"     19 hours ago        Up 19 hours         0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp   upbeat_chaplygin

Run the following to watch the logs of the rancher container.

docker logs -f 21792badbb1c
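Rancher's logs are verbose; piping them through grep narrows the stream to errors. Sample logrus-style lines stand in for real output below — in practice pipe `docker logs <container-id>` instead:

```shell
# Stand-in log lines (real output comes from: docker logs <container-id>)
cat > /tmp/rancher.log <<'EOF'
time="2020-06-23T05:22:00Z" level=info msg="Provisioning node os1-all-1"
time="2020-06-23T05:22:41Z" level=error msg="waiting for ssh to be available"
EOF

# Show only the errors; prints the single level=error line above
grep 'level=error' /tmp/rancher.log
```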

Openstack API

You can validate the creation of each cluster node and its associated settings from the Openstack API.

$ openstack server list
+--------------------------------------+-----------+--------+------------------------------------+--------+-----------+
| ID                                   | Name      | Status | Networks                           | Image  | Flavor    |
+--------------------------------------+-----------+--------+------------------------------------+--------+-----------+
| 6a54ade8-1c93-486e-99d9-183b401131a7 | os1-all-2 | ACTIVE | int_net=10.0.0.127, 192.168.10.229 | bionic | m1.large  |
| be07abea-fab0-4ffd-8098-0e6c4b69a86b | os1-all-1 | ACTIVE | int_net=10.0.0.41, 192.168.10.180  | bionic | m1.large  |
| 8a281b10-2ab9-4573-b1bc-62910622fb46 | os1-all-3 | ACTIVE | int_net=10.0.0.81, 192.168.10.204  | bionic | m1.large  |
| 982c2885-55fb-4a5b-a9ee-8bd4b9d2a771 | rancher   | ACTIVE | int_net=10.0.0.238, 192.168.10.234 | bionic | m1.medium |
+--------------------------------------+-----------+--------+------------------------------------+--------+-----------+

Make sure that the status of all hosts created in Openstack is ACTIVE and that each is associated with a floating IP address.
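That check can be scripted. The awk one-liner below counts any os1 node whose status column is not ACTIVE, using an abridged copy of the table output above as stand-in input — in practice pipe `openstack server list` straight into it:

```shell
# Stand-in for `openstack server list` output (abridged columns)
cat > /tmp/servers.txt <<'EOF'
| 6a54ade8-1c93-486e-99d9-183b401131a7 | os1-all-2 | ACTIVE |
| be07abea-fab0-4ffd-8098-0e6c4b69a86b | os1-all-1 | ACTIVE |
| 8a281b10-2ab9-4573-b1bc-62910622fb46 | os1-all-3 | ACTIVE |
EOF

# Count cluster nodes that are NOT ACTIVE; prints 0 when all is well
awk -F'|' '/os1-all/ && $4 !~ /ACTIVE/ {bad++} END {print bad+0}' /tmp/servers.txt
```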

Rancher UI

Once the cluster nodes have been deployed and Rancher has completed the Kubernetes installation process the State of each node in the Rancher UI should change to Active.

[Screenshot: validate cluster 2]

Deploy Application

Now that the cluster has been deployed we can validate its functionality by deploying a simple test application.

First navigate to the cluster’s default namespace by selecting the cluster name os1 in the top left of the navigation and then selecting Default.

[Screenshot: deploy app 1]

From the Workloads view click the Deploy button to configure our test application.

[Screenshot: deploy app 2]

Configure Workload

First set the following name, type and image for the workload.

Option          Value
Name            webapp
Workload Type   3 pods
Docker Image    leodotcloud/swiss-army-knife

[Screenshot: deploy app 3]

Next click the Add Port button and configure the port, protocol and service type.

Option           Value
Container Port   80
Protocol         TCP
Service Type     Layer-4 Load Balancer
Listening Port   80

Next add an environment variable to be used by the application by expanding the Environment Variables section and clicking the Add Variable button.

Add the following environment variable then click the Launch button to deploy the application.

Variable   Value
ALPHA      BETA

[Screenshot: deploy app 4]

Adding the environment variable is optional but is useful for testing additional Kubernetes functionality.
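To confirm the variable actually reaches the containers, you can open a shell in one of the pods from the Rancher UI (or with kubectl, if you've downloaded the cluster's kubeconfig) and run `printenv ALPHA`. The one-liner below just mimics locally what the container will see:

```shell
# Mimic the injected container environment locally; prints BETA
ALPHA=BETA sh -c 'printenv ALPHA'
```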

Validate Application

The workload we created will launch three pods, each with a single instance of the leodotcloud/swiss-army-knife container. This container includes various tools and a simple web server that can be used to test your Kubernetes environment.

View Application Status

While the workload is being deployed we can monitor its status and progress from the Workloads view in Rancher.

[Screenshot: validate app 1]

You can change the default view to Group By Node to see the placement of each Pod on the nodes in our Kubernetes cluster.

[Screenshot: validate app 2]

View Load Balancer Status

Since we chose a service type of Layer-4 Load Balancer, Rancher will use the previously configured cloud provider to create an instance of the Openstack Octavia load balancer. It will further configure all of the necessary settings for the load balancer to direct traffic to our new application.

You can view the deployment status and configuration of the load balancer from the Load Balancing view of the Rancher UI.

[Screenshot: validate lb 1]

It can take some time for Openstack to create the Octavia virtual machine instance so be patient.

You can validate from the Openstack API that the load balancer instance was created successfully by running the following command.

$ openstack loadbalancer list                             
+--------------------------------------+----------------------------------+----------------------------------+-------------+---------------------+----------+
| id                                   | name                             | project_id                       | vip_address | provisioning_status | provider |
+--------------------------------------+----------------------------------+----------------------------------+-------------+---------------------+----------+
| 1301cf7b-9319-4222-a0a0-bcf42fe6d2f0 | ad023b4dbda2e11e98886fa163ea07cc | ca52ba242b104464b15811d4589f836e | 10.0.0.50   | ACTIVE              | amphora  |
+--------------------------------------+----------------------------------+----------------------------------+-------------+---------------------+----------+

The provisioning_status in the Openstack API should eventually change to ACTIVE. You can further validate that the State shown in the Rancher UI has changed from Pending to Active.

validate lb 2

Test Application

Now that the workload and its load balancer have been deployed, we can connect to the web application included in the swiss-army-knife container.

From the Workloads view of the Rancher UI click the 80/tcp link below the application’s name. This should launch a web browser and connect you to the floating IP address that was assigned to the load balancer.

test app 1

The browser window should display the floating IP of the load balancer in the address bar. The page should show the hostname and IP address of the container instance that the load balancer forwarded you to.

test app 2

If you refresh the browser periodically you should see that the load balancer forwards you to each of the three containers that were launched as part of the workload.

test app 3
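The same round-robin behavior can be checked from a shell instead of the browser. Below is a minimal sketch; the LB_IP variable and the tally helper are ours, and it assumes the demo page includes the container’s hostname in its response.

```shell
# tally — count how many times each distinct line appears on stdin.
tally() { sort | uniq -c; }

# Hit the load balancer repeatedly and tally which backend answered.
# Export LB_IP first, e.g. LB_IP=<floating-ip-from-rancher>.
if [ -n "${LB_IP:-}" ]; then
  for _ in 1 2 3 4 5 6 7 8 9; do
    curl -s --max-time 5 "http://${LB_IP}/" | grep -io 'hostname[^<"]*'
  done | tally
fi
```

With three replicas you should see three distinct hostnames, each serving roughly a third of the requests.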

Conclusion

Once you’ve successfully completed your validation, you should have the basic functionality needed to deliver Kubernetes as a service. Though this demonstration only walked through the creation of a single cluster, it is trivial to create additional clusters by reproducing these steps. By leveraging additional features in Rancher, creating additional host and cloud templates, and utilizing the Rancher API, you can streamline the process of creating these clusters. Further efficiency can be gained by using DevOps tools such as HashiCorp’s Terraform to fully automate Rancher configurations and deployments.
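As a taste of what scripting against the Rancher API looks like, the sketch below assembles a minimal cluster definition and posts it to the v3 endpoint. The URL, token, and payload are illustrative placeholders, not a complete cluster spec; consult the Rancher API documentation for the full schema.

```shell
# Placeholder values; generate a bearer token under "API & Keys" in the Rancher UI.
RANCHER_URL="https://rancher.example.com"
RANCHER_TOKEN="token-abc123:secret"

# Build a minimal JSON body for an RKE cluster with the given name.
cluster_payload() {
  printf '{"type":"cluster","name":"%s","rancherKubernetesEngineConfig":{}}' "$1"
}

# Example request (commented out so the sketch has no side effects):
# curl -sk -H "Authorization: Bearer ${RANCHER_TOKEN}" \
#      -H "Content-Type: application/json" \
#      -d "$(cluster_payload demo-cluster)" \
#      "${RANCHER_URL}/v3/clusters"
```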

]]>
https://www.sms.com/blog/bare-metal-kubernetes-as-a-service-part-3/feed/ 0
Bare Metal to Kubernetes as a Service – Part 2 of 3 https://www.sms.com/blog/bare-metal-kubernetes-as-a-service-part-2/ https://www.sms.com/blog/bare-metal-kubernetes-as-a-service-part-2/#respond Tue, 09 Jun 2020 16:45:22 +0000 http://sms-old.local/?p=2524 By Andrew Stanley, Director of Engineering, SMS

In Part 1 of this series I demonstrated the installation and configuration of MAAS. In this post I’m going to show how to use Canonical’s Juju to deploy an Openstack cloud on top of hosts managed by MAAS.

Juju is an open source application modelling tool that allows you to deploy, configure, scale and operate cloud infrastructures quickly and efficiently on public clouds such as AWS, GCE, and Azure along with private ones such as MAAS, OpenStack, and VSphere.

Prerequisites

If you’ve been following along from the previous post you should have a working MAAS server with at least one physical machine in its default resource pool. Refer to the prerequisites defined in Part 1 for a complete list of requirements.

Before continuing you’ll need to add at least two more machines to MAAS. I’ve added an additional physical machine as well as a virtual machine composed from a KVM Pod on which I’ll install my Juju controller.

2stacks part2 1

To deploy Openstack you must have a bare minimum of two physical machines available in MAAS. If you don’t have an extra machine to add to MAAS for the Juju controller you can deploy a multi-cloud controller on your workstation using LXD.

Install Juju

There are two methods of installing Juju on Debian-based Linux: snap and PPA. Snap packages are also available for other Linux distributions. For macOS and Windows, reference the Juju installation documentation. I’ll be using the snap package with a Debian-based distribution of Linux.

To install the snap run the following

sudo snap install juju --classic

Configure Juju

Before we can use Juju to model and deploy workloads we must first configure a few prerequisites.

  • Clouds – Define the underlying provider of bare metal and/or virtual machines.
  • Credentials – Keys, user names, passwords etc. used to authenticate with each cloud.
  • Controllers – Machines deployed within a cloud to manage configuration and resources.
  • Models – An environment associated with a controller in which workloads are deployed.

Add Clouds

Juju works on top of many different types of clouds and has built-in support for MAAS. To see the available clouds run the following from your workstation console;

~$ juju clouds --local

Cloud           Regions  Default          Type        Description
aws                  18  us-east-1        ec2         Amazon Web Services
aws-china             2  cn-north-1       ec2         Amazon China
aws-gov               2  us-gov-west-1    ec2         Amazon (USA Government)
azure                27  centralus        azure       Microsoft Azure
azure-china           2  chinaeast        azure       Microsoft Azure China
cloudsigma           12  dub              cloudsigma  CloudSigma Cloud
google               18  us-east1         gce         Google Cloud Platform
joyent                6  us-east-1        joyent      Joyent Cloud
oracle                4  us-phoenix-1     oci         Oracle Cloud Infrastructure
oracle-classic        5  uscom-central-1  oracle      Oracle Cloud Infrastructure Classic
rackspace             6  dfw              rackspace   Rackspace Cloud
localhost             1  localhost        lxd         LXD Container Hypervisor

To add a new MAAS cloud run the add-cloud command and enter the details for your environment.

~$ juju add-cloud

Cloud Types
  lxd
  maas
  manual
  openstack
  vsphere

Select cloud type: maas

Enter a name for your maas cloud: maas-01

Enter the API endpoint url: http://10.1.20.5:5240/MAAS/

Cloud "maas-01" successfully added

You will need to add credentials for this cloud (`juju add-credential maas-01`)
before creating a controller (`juju bootstrap maas-01`).

Add Credentials

As the output shows we’ll need to add our MAAS server credentials before Juju can interact with the MAAS API. To obtain your MAAS API key, log in to MAAS and click on your username in the top right of the MAAS GUI.

mass 3 machine avail

Copy the API key shown or generate a new key.

mass pref

Now run the following and enter the copied API key when prompted.

~$ juju add-credential maas-01

Enter credential name: maas-01_creds

Using auth-type "oauth1".

Enter maas-oauth: 

Credential "maas-01_creds" added locally for cloud "maas-01".

To validate/list the added credentials run;

~$ juju credentials

Cloud    Credentials
maas-01  maas-01_creds

Add Controllers

Now that we have added a cloud and its associated credentials we can add a Juju controller.

A controller is the management node of a Juju cloud environment. It runs the API server and the underlying database. Its overall purpose is to keep state of all the models, applications, and machines in that environment.

The controller requires a minimum of only 3.5 GiB of memory and 1 vCPU, so to ensure that Juju deploys the controller on the virtual machine I’ve added to MAAS, I’m going to add a Tag to the machine that Juju can use as a deployment constraint.

To add a Tag to a machine go to the Machines tab of the MAAS GUI and click on the name of the machine you would like to tag. Next click the Edit link under the Tags section.

mass control 1

Click the Edit button to the right of the Machine configuration

mass control 2

Add juju to the Tags section and click Save changes

mass control 3

To bootstrap the controller on MAAS execute the following from your workstation;

~$ juju bootstrap --constraints tags=juju maas-01 juju-01

Creating Juju controller "juju-01" on maas-01
Looking for packaged Juju agent version 2.6.8 for amd64
Launching controller instance(s) on maas-01...
 - eppk7s (arch=amd64 mem=4G cores=1)  
Installing Juju agent on bootstrap instance
Fetching Juju GUI 2.15.0

This will bootstrap a controller named juju-01 to a machine with the tag juju on the maas-01 cloud. You can verify in MAAS that your machine is being deployed.

mass control 4

After a few minutes your terminal will return the deployment status and you can verify the deployment with the juju controllers command.

Waiting for address
Attempting to connect to 10.1.20.13:22
Connected to 10.1.20.13
Running machine configuration script...
Bootstrap agent now started
Contacting Juju controller at 10.1.20.13 to verify accessibility...

Bootstrap complete, controller "juju-01" now is available
Controller machines are in the "controller" model
Initial model "default" added

~$ juju controllers

Use --refresh option with this command to see the latest information.

Controller  Model    User   Access     Cloud/Region  Models  Nodes    HA  Version
juju-01*    default  admin  superuser  maas-01            2      1  none  2.6.8 


Juju GUI

In addition to the API, the Juju controller has a web interface that makes it easier to experiment with Juju’s modeling and automation capabilities. To access the GUI run the following;

~$ juju gui

GUI 2.15.0 for model "admin/default" is enabled at:
  https://10.1.20.13:17070/gui/u/admin/default
Your login credential is:
  username: admin
  password: 85faa8d324ce26300d8aa6b43b8fd794

Open a browser and enter the url and credentials provided in the output.

addjui2

You can play around with the default model from here: click the green circle in the center of the model to add a workload, or use the search in the top right to explore the available Charms in the Charm Store.

addjui3

The rest of this demonstration will use the controller API; however, it is useful to monitor the status of your models in the GUI as you continue the deployment.

Add Models

When a controller is bootstrapped two models are automatically added, controller and default. The controller model is used for internal Juju management and is not intended for general workloads. The default model can be used to deploy any supported software but is typically used for experimentation purposes.

Before deploying Openstack we’ll create a new model by running the following;

~$ juju add-model openstack

Added 'openstack' model with credential 'maas-01_creds' for user 'admin'

To verify that the model was created and selected as the active model run;

~$ juju models

Controller: juju-01

Model       Cloud/Region  Type  Status     Machines  Cores  Access  Last connection
controller  maas-01       maas  available         1      1  admin   just now
default     maas-01       maas  available         0      -  admin   5 minutes ago
openstack*  maas-01       maas  available         0      -  admin   never connected

Deploy Openstack

Both the Openstack and Juju documentation contain detailed, step-by-step instructions for installing the core components of Openstack with Juju. Instead of just providing those as reference, I wanted to show how you can customize the installation to reduce the number of servers needed, as well as how to include Octavia (Openstack’s LBaaS implementation). In my next post I’ll demonstrate how Kubernetes interacts directly with Octavia to provide L4 load balancing services for container workloads.

The Juju and Openstack documentation both utilize a minimum of four servers in their examples. My demonstration will show how to deploy on a minimum of two servers. Deploying on a single server is only possible with LXD, as Neutron Gateway and Nova Compute cannot run on the same bare metal server.

Configure Openstack Bundle

Before we deploy the Openstack Base bundle we need to make modifications to match our lab environment. There’s a charm development tool aptly named charm that can be installed as a snap. To install the snap run;

sudo snap install charm --classic

For other distributions please reference the Charm Tools Project documentation. Now we’ll pull down the current version of the Openstack bundle with;

charm pull cs:openstack-base

Change to the newly created directory and create a backup of the bundle.yaml file;

cd openstack-base
cp bundle.yaml bundle.yaml.bak

If you’ve been following my examples you can download a copy of the bundle file I’m using or edit the bundle.yaml file to match your own environment;

wget https://gist.githubusercontent.com/2stacks/d0b4b4b81df4a835934bbd9b8543ad2e/raw/aa98813d2dc260b188931c5df58066db3b54c4df/bundle.yaml

You can diff my version with the original file to see what I’ve changed.

diff bundle.yaml bundle.yaml.bak

In short, I’m placing Neutron Gateway and all of the other Openstack dependencies on one host and reserving the remaining host for Nova Compute. Remember that Neutron Gateway and Nova Compute cannot run on the same bare metal machine. In order to have as much capacity as possible for running virtual machines, I’m not running any additional services on the host dedicated to Nova Compute.

Configure Octavia Overlay

Next we’ll download and modify the Octavia overlay used to deploy the resources required by Octavia.

curl https://raw.githubusercontent.com/openstack-charmers/openstack-bundles/master/stable/overlays/loadbalancer-octavia.yaml -o loadbalancer-octavia.yaml

By default this overlay expects a minimum of four machines, so it will need to be modified to run with fewer. As before, I’ve modified the file to deploy all of the resources on the host running Neutron Gateway. You can download my version of the file by running;

mv loadbalancer-octavia.yaml loadbalancer-octavia.yaml.bak
wget https://gist.githubusercontent.com/2stacks/21ba79de48e38b3588868cd033675d1a/raw/2eac80acffc6173b996cc30862da7a02317ce9a3/loadbalancer-octavia.yaml

Deploy Openstack Bundle

Once we’ve made the necessary changes or downloaded the modified files we can start the deployment of Openstack.

juju deploy ./bundle.yaml --overlay loadbalancer-octavia.yaml

The deployment time depends on the speed of your hardware but in my environment it takes approximately 15 minutes. You can monitor the deployment status by running the following command.

watch -c juju status --color

You can also monitor the status of the deployment and view the model of the Openstack installation that has been constructed in the Juju GUI. While Juju is deploying the Openstack workloads and resources we can continue to apply additional configurations required by Octavia.

Configure Octavia

Octavia consists of an Openstack controller and load balancer instances that deploy as virtual machines on Nova Compute. The communication between the controller and each instance is secured with bi-directional certificate authentication over an out-of-band management network. The certificate, network and image resources must be configured before you can deploy Octavia load balancer instances.

For additional information on configuring the Octavia Charm please reference the documentation.

Generate Certificates

Below is an example of how to create self-signed certificates for use by Octavia. Note: This example is not meant to be used in production.

mkdir -p demoCA/newcerts
touch demoCA/index.txt
touch demoCA/index.txt.attr
openssl genrsa -passout pass:foobar -des3 -out issuing_ca_key.pem 2048
openssl req -x509 -passin pass:foobar -new -nodes -key issuing_ca_key.pem \
    -config /etc/ssl/openssl.cnf \
    -subj "/C=US/ST=Somestate/O=Org/CN=www.example.com" \
    -days 365 \
    -out issuing_ca.pem
openssl genrsa -passout pass:foobar -des3 -out controller_ca_key.pem 2048
openssl req -x509 -passin pass:foobar -new -nodes \
    -key controller_ca_key.pem \
    -config /etc/ssl/openssl.cnf \
    -subj "/C=US/ST=Somestate/O=Org/CN=www.example.com" \
    -days 365 \
    -out controller_ca.pem
openssl req \
    -newkey rsa:2048 -nodes -keyout controller_key.pem \
    -subj "/C=US/ST=Somestate/O=Org/CN=www.example.com" \
    -out controller.csr
openssl ca -passin pass:foobar -config /etc/ssl/openssl.cnf \
    -cert controller_ca.pem -keyfile controller_ca_key.pem \
    -create_serial -batch \
    -in controller.csr -days 365 -out controller_cert.pem
cat controller_cert.pem controller_key.pem > controller_cert_bundle.pem

Apply Certificates

To apply the certificates to the Octavia controller run the following command.

juju config octavia \
    lb-mgmt-issuing-cacert="$(base64 controller_ca.pem)" \
    lb-mgmt-issuing-ca-private-key="$(base64 controller_ca_key.pem)" \
    lb-mgmt-issuing-ca-key-passphrase=foobar \
    lb-mgmt-controller-cacert="$(base64 controller_ca.pem)" \
    lb-mgmt-controller-cert="$(base64 controller_cert_bundle.pem)"
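Before handing the bundle to Juju, it’s worth a quick local sanity check that the certificate and private key in it actually belong together. A small sketch, assuming the unencrypted key generated above; the check_bundle helper name is ours.

```shell
# check_bundle FILE — verify that the certificate and private key in a
# combined PEM file share the same public key; returns non-zero on mismatch.
check_bundle() {
  cert_pub=$(openssl x509 -in "$1" -noout -pubkey 2>/dev/null) || return 1
  key_pub=$(openssl pkey -in "$1" -pubout 2>/dev/null) || return 1
  [ "$cert_pub" = "$key_pub" ]
}

# check_bundle controller_cert_bundle.pem && echo "cert and key match"
```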

Create Openstack Resources

To initiate the creation of the Openstack network, router and security group used by Octavia instances run;

juju run-action --wait octavia/0 configure-resources

Deploy Amphora Image

Finally to create the disk image that will be used by load balancer instances run the following.

juju run-action --wait octavia-diskimage-retrofit/leader retrofit-image

By default this will create an Ubuntu bionic image preconfigured with HAProxy. This command can take a long time to finish, so be patient. It should eventually create an image with a prefix of amphora-haproxy. We’ll validate that this image was successfully created in the next section.

Validate Deployment

Once everything has been configured and Juju has finished its deployment, we can begin validating the installation by creating resources within Openstack.

The output of juju status should show everything as active, with the exception of Vault. It’s OK to leave this as is for now, but if you’d like additional information on setting up Vault please reference Appendix C of the Openstack deployment documentation.


API Access

The Openstack installation should now be accessible via its API and built-in web interface. To begin utilizing the API we need to install the OpenStackClient package.

OpenStackClient (aka OSC) is a command-line client for OpenStack that brings the command set for Compute, Identity, Image, Object Storage and Block Storage APIs together in a single shell with a uniform command structure.

There are several ways to install the client package, but for most Linux distributions the snap package is recommended.

sudo snap install openstackclients

The credentials required to access the API can be obtained and set as environment variables by running the rc script included in the openstack-base directory.

source openrc

To view the credentials run;

~$ env | grep OS_

OS_REGION_NAME=RegionOne
OS_USER_DOMAIN_NAME=admin_domain
OS_PROJECT_NAME=admin
OS_AUTH_VERSION=3
OS_IDENTITY_API_VERSION=3
OS_PASSWORD=MaemumeFoolah1ae
OS_AUTH_TYPE=password
OS_AUTH_URL=http://10.1.20.36:5000/v3
OS_USERNAME=admin
OS_PROJECT_DOMAIN_NAME=admin_domain

To verify access to the API run the following commands. They should output a list of all of the Openstack API endpoints and services created by Juju.

openstack catalog list
openstack endpoint list


Web Dashboard Access

To access the Openstack web interface, retrieve the IP address of the LXD container that the Openstack Dashboard was deployed on.

juju status openstack-dashboard/0* | grep Container | awk '{print $3}'

Open a browser and use the returned IP address in the following URL;

http://<dashboard_ip>/horizon

The credentials needed to log in to the browser are the same as previously obtained with the openrc script.

OS_USER_DOMAIN_NAME=admin_domain
OS_PROJECT_NAME=admin
OS_PASSWORD=MaemumeFoolah1ae
webdash3

Add Images

During deployment the Juju glance-simplestreams charm should have added a few default disk images in addition to the Amphora image used by Octavia. To verify that these images were created run the following;

~$ openstack image list

+--------------------------------------+-----------------------------------------------------------------+--------+
| ID                                   | Name                                                            | Status |
+--------------------------------------+-----------------------------------------------------------------+--------+
| 86c318f3-76f7-4311-b146-cc00e79e9406 | amphora-haproxy-x86_64-ubuntu-18.04-20190813.1                  | active |
| 40aca86a-9177-4b94-afe3-fee448abee4c | auto-sync/ubuntu-bionic-18.04-amd64-server-20190813.1-disk1.img | active |
| 4b238000-ba59-4470-bcea-93dd8761c40e | auto-sync/ubuntu-trusty-14.04-amd64-server-20190514-disk1.img   | active |
| d87ed10a-5d86-409c-9fa5-7cf59e80a258 | auto-sync/ubuntu-xenial-16.04-amd64-server-20190814-disk1.img   | active |
+--------------------------------------+-----------------------------------------------------------------+--------+

You can use the auto-sync images in the following examples or create your own via the API. The following command will create an image named bionic using the current Ubuntu 18.04 cloud-image.

curl http://cloud-images.ubuntu.com/bionic/current/bionic-server-cloudimg-amd64.img | \
openstack image create --public --min-disk 3 --container-format bare --disk-format qcow2 \
    --property architecture=x86_64 --property hw_disk_bus=virtio --property hw_vif_model=virtio bionic

If you run openstack image list again you should see the newly created image in the list.

~$ openstack image list

+--------------------------------------+-----------------------------------------------------------------+--------+
| ID                                   | Name                                                            | Status |
+--------------------------------------+-----------------------------------------------------------------+--------+
| 86c318f3-76f7-4311-b146-cc00e79e9406 | amphora-haproxy-x86_64-ubuntu-18.04-20190813.1                  | active |
| 40aca86a-9177-4b94-afe3-fee448abee4c | auto-sync/ubuntu-bionic-18.04-amd64-server-20190813.1-disk1.img | active |
| 4b238000-ba59-4470-bcea-93dd8761c40e | auto-sync/ubuntu-trusty-14.04-amd64-server-20190514-disk1.img   | active |
| d87ed10a-5d86-409c-9fa5-7cf59e80a258 | auto-sync/ubuntu-xenial-16.04-amd64-server-20190814-disk1.img   | active |
| b2797730-70ca-4e91-8b30-26cf2e6c7968 | bionic                                                          | active |
+--------------------------------------+-----------------------------------------------------------------+--------+


Add Flavors

Next we’ll create flavors that determine the amount of resources given to virtual machine instances.

openstack flavor create --ram 1024 --disk 10 m1.small
openstack flavor create --vcpus 2 --ram 2048 --disk 20 m1.medium
openstack flavor create --vcpus 4 --ram 4096 --disk 40 m1.large

To list the newly created instance flavors run;

~$ openstack flavor list 

+--------------------------------------+-----------+------+------+-----------+-------+-----------+
| ID                                   | Name      |  RAM | Disk | Ephemeral | VCPUs | Is Public |
+--------------------------------------+-----------+------+------+-----------+-------+-----------+
| 8dc59145-1ba3-4709-b037-6806a38d598c | m1.large  | 4096 |   20 |         0 |     4 | True      |
| c14adc69-d72a-4ed2-b49a-d010a5e58fa3 | m1.small  | 1024 |   10 |         0 |     1 | True      |
| e426aa37-2d29-4ece-8e3c-c1fb833f9308 | m1.medium | 2048 |   20 |         0 |     2 | True      |
+--------------------------------------+-----------+------+------+-----------+-------+-----------+

Add Networks and Subnets

We need to create at least two networks (internal and external) to verify the functionality of Openstack virtual routers and load balancers.

The first network type we’ll create is a provider network. This network provides ingress and egress communication between compute instances and the outside world. Subnets created within this network can be used to assign floating IPs, which allow you to run public services and reach your Openstack instances remotely.

To create the network run the following;

openstack network create ext_net --share --external --provider-network-type flat --provider-physical-network physnet1

To create a subnet inside the network for allocation of floating IPs run;

openstack subnet create ext_subnet --no-dhcp --allocation-pool start=192.168.10.101,end=192.168.10.254 \
    --subnet-range 192.168.10.0/24 --gateway 192.168.10.1 --dns-nameserver 192.168.10.1 --network ext_net

Refer to Part 1 of this series for a description of the networks and interfaces required to support the Openstack installation. The subnet you create here must be routable in your environment.

The second network type we’ll create is a self-service network. These networks allow non privileged users of the Openstack installation to create project specific networks. Typically they employ GRE or VXLAN as overlays and require an Openstack virtual router to connect to external networks.

To create the network run the following;

openstack network create int_net

To create a subnet inside the network run;

openstack subnet create int_subnet --allocation-pool start=10.0.0.11,end=10.0.0.254 \
    --subnet-range 10.0.0.0/24 --gateway 10.0.0.1 --dns-nameserver 192.168.10.1 --network int_net

This will deploy a virtual network within the default admin project using the default overlay type of GRE. Since this subnet is not exposed outside of Openstack by default, you can configure any subnet and IP range you like. Note that I have configured a dns-nameserver external to Openstack for external name resolution.

To verify the networks and subnets we just created run;

~$ openstack network list

+--------------------------------------+-------------+--------------------------------------+
| ID                                   | Name        | Subnets                              |
+--------------------------------------+-------------+--------------------------------------+
| 1730fd08-c48c-4591-993c-7d5f90ea5578 | ext_net     | 3d283801-faee-44c1-b51b-5c69f76ef61f |
| 695d5dab-a12a-4471-841b-566c08ea732d | lb-mgmt-net | 525306b4-5653-4b80-bbe9-facf3d900bca |
| a606266e-3c10-4962-9588-71a7989c78ac | int_net     | c4a1bfd3-463d-44a0-b4ca-9748e76f90e6 |
+--------------------------------------+-------------+--------------------------------------+

~$ openstack subnet list

+--------------------------------------+------------------+--------------------------------------+-------------------------+
| ID                                   | Name             | Network                              | Subnet                  |
+--------------------------------------+------------------+--------------------------------------+-------------------------+
| 3d283801-faee-44c1-b51b-5c69f76ef61f | ext_subnet       | 1730fd08-c48c-4591-993c-7d5f90ea5578 | 192.168.10.0/24         |
| 525306b4-5653-4b80-bbe9-facf3d900bca | lb-mgmt-subnetv6 | 695d5dab-a12a-4471-841b-566c08ea732d | fc00:566c:8ea:732d::/64 |
| c4a1bfd3-463d-44a0-b4ca-9748e76f90e6 | int_subnet       | a606266e-3c10-4962-9588-71a7989c78ac | 10.0.0.0/24             |
+--------------------------------------+------------------+--------------------------------------+-------------------------+

For additional network options and settings consult the Openstack Networking documentation or run;

openstack network create --help

Add a Router

Openstack virtual routers provide layer-3 services such as routing and NAT. They are required for routing between self-service networks within the same project as well as for communication between self-service and provider networks.

To create a router within the current domain/project run;

openstack router create rtr-01

To set the router’s external gateway, attaching it to the previously created subnet within our provider network, run;

openstack router set rtr-01 --external-gateway ext_net

And finally to connect our internal self-service subnet to the external provider subnet run;

openstack router add subnet rtr-01 int_subnet

To verify that the router was created and configured run;

openstack router list
openstack router show rtr-01

You can also verify via the Openstack web interface by navigating to;

http://<dashboard_ip>/horizon/project/network_topology/

Here Openstack will display a topology of the existing routers, networks and instances created within the current project.

router6

If all has gone well up to now, you should be able to ping the external interface of the router from your workstation. To get the external IP of the router run;

router_ip=$(openstack port list --router rtr-01 --network ext_net -f value | awk -F "'" '{print $2}')

Now try pinging the returned IP from your workstation.

~$ ping $router_ip

PING 192.168.10.214 (192.168.10.214) 56(84) bytes of data.
64 bytes from 192.168.10.214: icmp_seq=1 ttl=64 time=1.17 ms
64 bytes from 192.168.10.214: icmp_seq=2 ttl=64 time=0.292 ms


Add Security Groups

To allow access to our compute instances we’ll need to create security groups or add rules to the existing security groups that were created during the installation of Openstack. To view a list of the existing security groups run;

openstack security group list

Next we’ll add rules to the default security group to allow ICMP and SSH access from external hosts.

project_id=$(openstack project show --domain admin_domain admin -f value -c id)
secgroup_id=$(openstack security group list --project ${project_id} | awk '/default/ {print $2}')
openstack security group rule create ${secgroup_id} --protocol icmp --remote-ip 0.0.0.0/0
openstack security group rule create ${secgroup_id} --protocol tcp --remote-ip 0.0.0.0/0 --dst-port 22

You can validate that the rules were added to the security group by running;

openstack security group show ${secgroup_id}


Add SSH Keys

To be able to access our compute instances remotely via SSH, we’ll need to either create a new SSH keypair or upload the public key of an existing keypair.

To create a new keypair run;

touch ~/.ssh/os_rsa
chmod 600 ~/.ssh/os_rsa
openstack keypair create os_rsa > ~/.ssh/os_rsa

Or to upload an existing public key run;

openstack keypair create --public-key ~/.ssh/id_rsa.pub id_rsa
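To confirm that the key saved locally matches the keypair registered in Openstack, you can compare fingerprints. Nova reports MD5 colon-hex fingerprints, so ask ssh-keygen (a reasonably recent OpenSSH) for the same format; the key_fingerprint helper is ours.

```shell
# key_fingerprint FILE — print a key's MD5 fingerprint in the colon-hex
# form used by "openstack keypair show", without the "MD5:" prefix.
key_fingerprint() {
  ssh-keygen -E md5 -lf "$1" | awk '{print $2}' | sed 's/^MD5://'
}

# key_fingerprint ~/.ssh/os_rsa   # compare with: openstack keypair show os_rsa
```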

Add an Instance

The following command will add a new compute instance named bionic-test using the Ubuntu 18.04 cloud image. The configured flavor will give it 1 vCPU and 1 GiB of RAM. It will be given an IP in the internal subnet we created, assigned to the default security group, and will allow SSH access using the os_rsa keypair.

openstack server create \
    --image bionic --flavor m1.small --key-name os_rsa \
    --nic net-id=$(openstack network list | grep int_net | awk '{ print $2 }') \
    bionic-test


Add Floating IP

To enable SSH access from external hosts we’ll need to assign the instance a floating IP in the external subnet.

floating_ip=$(openstack floating ip create -f value -c floating_ip_address ext_net)
openstack server add floating ip bionic-test ${floating_ip}

Test Reachability

Once the output of openstack server list shows that the instance is ACTIVE, we can attempt to access it via SSH.

~$ openstack server list

+--------------------------------------+-------------+--------+-------------------+--------+----------+
| ID                                   | Name        | Status | Networks          | Image  | Flavor   |
+--------------------------------------+-------------+--------+-------------------+--------+----------+
| 946b52a2-bf60-4063-966c-f0d1f4828777 | bionic-test | ACTIVE | int_net=10.0.0.35 | bionic | m1.small |
+--------------------------------------+-------------+--------+-------------------+--------+----------+
~$ ssh -i ~/.ssh/os_rsa ubuntu@$floating_ip

Welcome to Ubuntu 18.04.3 LTS (GNU/Linux 4.15.0-60-generic x86_64)

ubuntu@bionic-test:~$

Add a Load Balancer

The last thing we’ll test is the functionality of the Octavia load balancing service. I’m not going to set up multiple web servers and verify full functionality for now. The goal of this test is to verify that an Amphora instance is created and that it can properly route traffic to the Ubuntu instance we just created.

To create the load balancer with an interface in our internal subnet run the following;

lb_vip_port_id=$(openstack loadbalancer create -f value -c vip_port_id --name lb-01 --vip-subnet-id int_subnet)

Continue to run the following until lb-01 shows a status of ACTIVE and ONLINE.

~$ openstack loadbalancer show lb-01

The instance can take some time to create, so be patient. If the creation fails, you can check the worker logs on the Octavia controller.

juju ssh octavia/0
more /var/log/octavia/octavia-worker.log 
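Rather than re-running the show command by hand, the wait can be scripted. The following is a rough sketch of a polling loop; the helper takes the status command as arguments, and the demo call at the end substitutes a stub (echo) so the sketch runs without a live cloud. In real use you would pass it openstack loadbalancer show lb-01 -f value -c provisioning_status.

```shell
# Hedged sketch: poll a status command until it reports ACTIVE or we give up.
wait_for_active() {
  tries=0
  while [ "$tries" -lt 60 ]; do
    status=$("$@")
    if [ "$status" = "ACTIVE" ]; then
      echo "lb is ACTIVE"
      return 0
    fi
    tries=$((tries + 1))
    sleep 5
  done
  echo "timed out waiting for ACTIVE" >&2
  return 1
}

# Demo with a stub command so the sketch runs without a live cloud:
wait_for_active echo ACTIVE    # prints "lb is ACTIVE"
```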

Configure the Load Balancer

Once the load balancer has been created, we can configure VIPs, pools, health monitors, etc. For testing purposes, I’m only going to show how to create a simple service that forwards ssh connections to the bionic-test instance we created.

openstack loadbalancer listener create --name listener1 --protocol tcp --protocol-port 22 lb-01
openstack loadbalancer pool create --name pool1 --lb-algorithm ROUND_ROBIN --listener listener1 --protocol tcp
openstack loadbalancer healthmonitor create --delay 5 --timeout 5 --max-retries 3 --type TCP pool1
instance_ip=$(openstack server show bionic-test -f value -c addresses | sed -e 's/int_net=//' | awk -F "," '{print $1}')
openstack loadbalancer member create --subnet-id int_subnet --address ${instance_ip} --protocol-port 22 pool1
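The member-address extraction above is easy to verify in isolation. The following demo runs the same sed/awk pipeline against a canned addresses value (the IPs are made up) rather than a live openstack server show call:

```shell
# Canned value of the addresses field from "openstack server show"; the second
# IP simulates an extra comma-separated address (values here are made up).
addresses='int_net=10.0.0.35, 10.44.0.12'

# Same pipeline as above: strip the network prefix, keep the first address.
instance_ip=$(echo "$addresses" | sed -e 's/int_net=//' | awk -F "," '{print $1}')
echo "$instance_ip"    # prints 10.0.0.35
```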

Now verify the status of the health monitor and pool member.

~$ openstack loadbalancer member list pool1

+--------------------------------------+------+----------------------------------+---------------------+-----------+---------------+------------------+--------+
| id                                   | name | project_id                       | provisioning_status | address   | protocol_port | operating_status | weight |
+--------------------------------------+------+----------------------------------+---------------------+-----------+---------------+------------------+--------+
| b332c304-3695-4e83-a57f-7bcd2bf76f3d |      | 55d6e3168bc24e0397c1185a09500b78 | ACTIVE              | 10.0.0.35 |            22 | ONLINE           |      1 |
+--------------------------------------+------+----------------------------------+---------------------+-----------+---------------+------------------+--------+

Add Floating IP

To reach our load balancer externally, we’ll need to create and assign a floating IP address.

floating_ip=$(openstack floating ip create -f value -c floating_ip_address ext_net)
openstack floating ip set --port $lb_vip_port_id $floating_ip

Test Reachability

Now ssh to the newly created floating IP. If the load balancer was created correctly it should forward your connection to your internal instance.

~$ ssh -i ~/.ssh/os_rsa ubuntu@$floating_ip

Welcome to Ubuntu 18.04.3 LTS (GNU/Linux 4.15.0-60-generic x86_64)

ubuntu@bionic-test:~$

Conclusion

At this point we have deployed and validated the essential components of Openstack necessary to complete our Kubernetes-as-a-Service solution. Openstack is a very robust and capable IaaS solution for building public and private clouds. This post is only intended to demonstrate a subset of its capabilities and how to use open source orchestration to simplify what is ordinarily a very complex and labor-intensive endeavor. In my next post I’ll continue this trend of simplifying complex deployments through the use of open source solutions and demonstrate how to automate Kubernetes deployments on top of Openstack.

]]>
https://www.sms.com/blog/bare-metal-kubernetes-as-a-service-part-2/feed/ 0
Bare Metal to Kubernetes as a Service – Part 1 of 3 https://www.sms.com/blog/bare-metal-to-kubernetes-as-a-service-part-1/ https://www.sms.com/blog/bare-metal-to-kubernetes-as-a-service-part-1/#respond Fri, 10 Apr 2020 01:58:00 +0000 https://smsprod01.wpengine.com/?p=6441 By Andrew Stanley, Director of Engineering, SMS

“You can think of Kubernetes as a platform for application patterns. The patterns make your application easy to deploy, easy to run, and easy to keep running.” Janet Kuo

The “ease” with which Google Engineer Janet Kuo speaks has led to the rapid adoption of Kubernetes by organizations of all sizes and across all industries. However, “easy” is relative. For all of the gains made by Kubernetes, as it relates to managing applications, there is a general consensus that deploying and managing Kubernetes itself is hard. To address this problem, all of the major public cloud companies (Google GKE, Amazon EKS, Microsoft AKS, etc.) are now offering Kubernetes-as-a-Service. Similar to Infrastructure-as-a-Service (IaaS) these offerings promise to abstract the functionality of Kubernetes from the underlying physical resources, tooling and expertise needed to support it. If your organization is leveraging public cloud services, then problem solved. But what about organizations in search of private cloud Kubernetes offerings? Although there are a growing number of options available in this space (OpenShift, Rancher, PKS, Platform9 etc.) the degree to which they make Kubernetes “easy” is still relative.

Objective

Organizations looking to adopt an on-premises, as-a-service Kubernetes solution face implementation challenges and choices. This lab deployment demonstrates how anyone can build their own Kubernetes-as-a-Service offering using readily available open source solutions. I’m going to cover this process in a series of posts, each one building upon the previous. The final solution will consist of a private cloud, built with MAAS (Metal-as-a-Service) and Openstack, on which self-service Kubernetes clusters can be deployed.

This first post will cover the deployment of MAAS as the base building block of this on-premises solution. MAAS is Canonical’s open source solution for building a self-service cloud of bare metal servers. It allows you to deploy and manage physical servers as though they were virtual instances in a public cloud.

Prerequisites

In order to replicate the configurations that I’m going to demonstrate, the following minimum requirements must be met;

  • 1 x Physical host or VM to run MAAS. (see https://maas.io/docs/maas-requirements)
  • 1 x Physical host or VM to run the Juju controller. (We’ll use this to deploy Openstack on MAAS)
  • 2 x Physical hosts with an IPMI capable BMC. (Bare minimum for this demonstration – Quad Core, 8G RAM, two network interfaces and two storage disks)
  • 1 x Switch with VLAN support.
  • 1 x Router/Firewall to provide inter-vlan routing and Internet access.

Lab Setup

My lab environment is representative of a typical brownfield deployment so disregard the randomness of names, interfaces, vlans etc. You can substitute VLAN IDs, Subnets, Names etc. to match your own environment.

Physical Configuration

On each of the physical servers that will be managed by MAAS, you’ll need a BMC interface, two network interfaces and two storage disks. The network and storage requirements are set by the Openstack installation I’ll demonstrate in a later blog post. Below is the configuration I’m using in my environment.

Network

  • eno1 – PXE Boot and Host Management
  • eno2 – Dedicated to Openstack Networking
  • mgmt0 – iDrac interface used for IPMI power management

Storage

  • /dev/sda – Used for OS Installation
  • /dev/sdb – Dedicated to Openstack Storage

Logical Configuration

I’m using the following vlans in my lab for this demonstration. You may substitute these as needed; however, it is recommended that you use at least three separate vlans.

  • v11 – IPMI for out of band access to the physical hosts
  • v20 – MAAS for PXE Boot and host management
  • v193 – IaaS for Openstack virtual machine instances

Note: I will not be covering the creation of interfaces and vlans on the physical switch or router. Please consult your device documentation for guidance on implementing these configurations.

MAAS Deployment

Install MAAS

The installation of MAAS is very simple. For lab purposes we’ll be installing both the Region Controller and Rack Controller services on the same host. I recommend you start with a clean install of Ubuntu Server 18.04. For any other OS/version please check the MAAS documentation for compatibility. To get started you can use any of the installation options covered in the MAAS documentation.

Or, for the impatient, ssh to the machine you’ve dedicated to run MAAS and execute the following:

maas-01:~$ sudo apt-add-repository -yu ppa:maas/stable
maas-01:~$ sudo apt update
maas-01:~$ sudo apt install maas
maas-01:~$ sudo maas init
Create first admin account
Username: admin
Password: 
Again: 
Email: admin@localhost
Import SSH keys [] (lp:user-id or gh:user-id): <your_Launchpad_or_Github_name>

We won’t be using the MAAS CLI for now, but if you find you have issues running MAAS commands as a non-root user you may also need to do the following:

maas-01:~$ maas --help
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/maascli/config.py", line 109, in open
    cls.create_database(dbpath)
  File "/usr/lib/python3/dist-packages/maascli/config.py", line 93, in create_database
    os.close(os.open(dbpath, os.O_CREAT | os.O_APPEND, 0o600))
PermissionError: [Errno 13] Permission denied: '/home/<your_user>/.maascli.db'

maas-01:~$ sudo chown <your_user>:<your_user> .maascli.db

You should now be able to login to the MAAS UI at http://(your_maas_ip):5240/MAAS/

Configure MAAS

The first time you log in to MAAS you’ll be prompted to verify/change some initial settings. Of these the following two need to be configured for your environment.

DNS forwarder

By default, this is set to the DNS servers used by the host where MAAS is running.


SSH Keys

If you did not specify a Github or Launchpad User-ID during initialization you should upload an RSA public key now.


You can make other changes as desired but for now we will leave the defaults.

Now click the Go to dashboard button to continue configuring MAAS.


Next you will be taken to the Machines section of the dashboard where you will be presented with a warning that DHCP is not enabled.


The next step in deploying MAAS is to set up the DHCP configuration that will be used to PXE boot hosts. Before I cover these steps it’s important to review some of MAAS’ Concepts and Terms and the different types of DHCP scopes that can be configured. I found the documentation to be a little confusing as it pertains to the setup and configuration of DHCP so hopefully this summary will be of use to those deploying MAAS for the first time.

MAAS Concepts and Terms

For complex, highly available deployments, MAAS supports concepts similar to those found in public clouds such as Regions and Availability Zones. For the purposes of this post we will not be focused on those for now. If you would like more information on configuring these features please read the MAAS Documentation.

Fabric

A Fabric is synonymous with a Layer 2 fabric. It can be represented by a single switch or a cluster of connected switches. It is assumed that vlan IDs will not be duplicated within a single Fabric.

Spaces

Spaces are groupings of logical networks that serve a similar purpose and that may or may not be a part of the same Fabric, Zone or Region. An example use of Spaces would be to restrict the deployment of hosts to specific security zones. We won’t be configuring any spaces for now, but I wanted to mention their purpose as it’s a concept unique to MAAS.


Machines

A machine is a host that can be deployed by MAAS. This can be a physical host or a virtual machine belonging to a KVM host or POD.

DHCP and Reserved Ranges

This to me was one of the harder configuration concepts to work with in MAAS. After deploying a few times and trying different options I was finally able to make sense of the documentation. The following is a summary of the concepts as I understand them.

  • DHCP – At least one Reserved Dynamic Range is needed by MAAS for the Enlistment and Commissioning process used to discover and manage machines. The easiest and recommended setup is to enable DHCP on the untagged vlan in which the MAAS server resides.
  • Managed Subnet – By default all subnets added to MAAS are considered Managed regardless of whether you choose to enable MAAS managed DHCP for them.
    • Reserved Ranges – A reserved range in a Managed subnet will never be utilized by MAAS. However, any IP in the subnet that is outside of the reserved ranges can be statically assigned to hosts when they are deployed. Reserved ranges in this case are to be used to exclude the assignment of IPs that may be in use by routers, switches and other infrastructure devices within the subnet.
    • Reserved Dynamic Ranges – A dynamic range in a Managed subnet can be used for Enlistment and Commissioning as previously mentioned but can also be used to assign DHCP addresses to hosts during deployment. This will actually set the deployed hosts’ /etc/network/interfaces or Netplan configuration to use DHCP.
  • Unmanaged Subnet – You must explicitly configure a subnet added to MAAS as Unmanaged. This assumes that there may be an external DHCP server allocating addresses for this subnet.
    • Reserved Ranges – Reserved ranges in an Unmanaged Subnet constitute the only addresses that may be statically assigned by MAAS during host deployment. You must also add this same range to a DHCP exclusion list in any external DHCP server used for this subnet.
    • Reserved Dynamic Ranges – Dynamic ranges cannot be configured in an Unmanaged subnet.

Until you feel you’ve mastered the difference between these concepts it is recommended that you start with a single DHCP range managed by MAAS for Enlistment and Commissioning purposes.

Configure DHCP

To configure the DHCP requirements we need to enable MAAS managed DHCP on the default untagged vlan utilized by the MAAS server.

This vlan needs to be configured on your physical switch(es) as the access or native vlan (untagged) for all hosts that will be managed by MAAS. If it is not configured as untagged to your physical hosts, the PXE boot process may not work.

Click on the Subnets tab in the MAAS GUI navigation, then click the link for the vlan named untagged


Click the Enable DHCP button 


To configure the Reserved Dynamic Range enter a start and end IP address for the range that MAAS will use for DHCP. I’ve added a comment as a reminder that this range is primarily used for enlistment and commissioning. Click the Configure DHCP button to apply the settings. 


Now under the Reserved ranges section click the drop down menu to the right and select Reserve range.


Since this subnet is a Managed subnet this range of IPs will never be assigned (statically or dynamically) by MAAS.
We are going to use this range to block off the first ten IPs of the subnet for use by network infrastructure devices. Enter the range of IPs and then click the Reserve button. 


At this point we have configured MAAS to manage the allocation of the default vlan/subnet as follows;

  • 10.1.20.1-10.1.20.10 – MAAS will never allocate IPs in this range.
  • 10.1.20.11-10.1.20.199 – MAAS may use this range to allocate static IPs to deployed hosts.
  • 10.1.20.200-10.1.20.254 – MAAS will use this range for Enlistment and Commissioning and may use it to assign DHCP addresses to deployed hosts.
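As a quick sanity check of the plan above, the three ranges should tile the subnet without gaps or overlap. The following sketch (plain shell and awk, no MAAS required) converts the boundary addresses to integers and confirms the ranges are contiguous:

```shell
# Convert a dotted-quad IPv4 address to an integer for comparison.
ip_to_int() {
  echo "$1" | awk -F. '{ print $1 * 16777216 + $2 * 65536 + $3 * 256 + $4 }'
}

reserved_end=$(ip_to_int 10.1.20.10)    # end of the infrastructure range
static_start=$(ip_to_int 10.1.20.11)    # start of the static range
static_end=$(ip_to_int 10.1.20.199)     # end of the static range
dynamic_start=$(ip_to_int 10.1.20.200)  # start of the dynamic range

if [ "$static_start" -eq $((reserved_end + 1)) ] &&
   [ "$dynamic_start" -eq $((static_end + 1)) ]; then
  echo "ranges are contiguous and non-overlapping"
else
  echo "ranges overlap or leave gaps" >&2
fi
```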

Machine Deployment

Now that we have completed the initial configuration of MAAS we can prepare the physical hosts/machines that will be managed by MAAS.

Configure Hosts

Physical Networking

You should verify your physical host cabling and switch configurations to ensure compatibility with the vlans and subnets that we have added to MAAS.

Physical Storage

If you have a hardware RAID controller in any of your hosts it will need to be configured prior to adding the host to MAAS. If this is not done MAAS may not be able to detect storage disks during Enlistment or Commissioning.

PXE Boot

At least one interface on your physical hosts should be enabled for PXE boot. As mentioned before this interface should be assigned to an untagged switch port in the vlan in which the MAAS server resides.

Enable PXE Boot in BIOS

I’m using older Dell R610s with iDrac6 in my lab and will use one of the on board NICs to PXE boot the servers. 


Configure BIOS Boot Order

It is important that the PXE enabled interface be placed higher in the boot order than any physical disks. MAAS will take care of ensuring that the server boots from network or disk as needed. 


IPMI

If your servers have an onboard BMC similar to Dell’s DRAC or HP’s iLO you can take full advantage of MAAS’ power control and zero-touch-provisioning capabilities.

IPMI Settings

For my Dell servers all I had to do was enable IPMI in the DRAC management interface. 


IPMI Testing

In order for MAAS to control the power of your servers with IPMI, MAAS will need to be able to reach the server’s BMC interface via UDP port 623. You can test that this is working before adding your servers to MAAS by installing ipmitool on your MAAS server.

maas-01:~$ sudo apt install ipmitool
maas-01:~$ ipmitool -I lanplus -H <host_ip> -U <user> -P <password> sdr elist all
Temp             | 01h | ns  |  3.1 | No Reading
Temp             | 02h | ns  |  3.2 | No Reading
Temp             | 05h | ns  | 10.1 | No Reading
Ambient Temp     | 07h | ns  | 10.1 | Disabled
Temp             | 06h | ns  | 10.2 | No Reading
Ambient Temp     | 08h | ns  | 10.2 | Disabled
Ambient Temp     | 0Eh | ns  |  7.1 | No Reading
Planar Temp      | 0Fh | ns  |  7.1 | No Reading
CMOS Battery     | 10h | ns  |  7.1 | No Reading
....
...
..

To use ipmitool, you must supply a username and password that have already been configured in your BMC and that have permission to query IPMI information over the LAN. By default, MAAS will create its own user account during the Enlistment phase.

Add Hosts to MAAS

MAAS manages physical and virtual machines using what it calls a node lifecycle. I’m going to focus mainly on the following phases of this lifecycle for now.

  • Enlistment
  • Commissioning
  • Deployment
  • Release

In both the Enlistment and Commissioning phases a machine will undergo the following process;

  • DHCP server is contacted
  • kernel and initrd are received over TFTP
  • machine boots
  • initrd mounts a Squashfs image ephemerally over HTTP
  • cloud-init runs enlistment and built-in commissioning scripts
  • machine shuts down

The key difference between the two phases is Enlistment is primarily used for initial discovery and Commissioning allows for additional hardware customization and testing.

Enlistment

Adding a host to MAAS is typically done via a combination of DHCP, TFTP and PXE. This unattended manner of adding a host is called Enlistment.

Note: MAAS runs built-in commissioning scripts during the enlistment phase so that when you commission a host, any customised commissioning scripts you add will have access to data collected during enlistment.

If everything has been configured correctly the only required step to start the Enlistment process is to power on the machine you are adding to MAAS. You can watch the Enlistment process discovering a new host in the following video:


IPMI Setup Verification in MAAS

Once the Enlistment process is complete you can verify that MAAS was able to discover/configure the IPMI information needed to manage power settings on your host.

Navigate to Machines in the MAAS GUI menu. Click the automatically assigned host name of the newly discovered host, (we’ll change this later) then navigate to the Configuration tab.


Here you can see that the IP address and power management type have been discovered and that MAAS has configured a username and password to authenticate with the host’s BMC.

IPMI Setup Verification on Host

You can also confirm that a maas user was automatically created in your host’s BMC management interface.


Commissioning

Once a host has been added to MAAS via the Enlistment process, the next step is to Commission it. You have the option of selecting some extra parameters (like whether to leave the host running and accessible via ssh) and can perform additional hardware tests during this phase.

Prior to Commissioning a host I prefer to change the automatically assigned hostname. This is purely optional but if you would like to change the hostname, click the current hostname in the top left corner and set the name to something more identifiable for your environment.


To start the commissioning process, click the Take Action button in the top right corner and select Commission from the drop down list.


Now you can specify a few configuration options as well as select specific hardware tests to run. You can leave the default settings for now and click the Commission machine button. 


You can watch the Commissioning process run its scripts and perform hardware tests in the following video:

Once a host is commissioned its status will change to Ready.

Deploy Hosts

During the deployment phase MAAS utilizes a process similar to Commissioning to deploy an operating system with Curtin. Network and storage configurations are also applied as part of this process.

  • DHCP server is contacted
  • kernel and initrd are received over TFTP
  • machine boots
  • initrd mounts a Squashfs image ephemerally over HTTP
  • cloud-init triggers deployment process
    • curtin installation script is run
    • Squashfs image (same as above) is placed on disk

MAAS Network Model

Once a machine is in the Ready state (post-commissioning), its intended network and storage configuration can be defined in MAAS. Interfaces can be added/removed, attached to a fabric, linked to a subnet, and provided an IP assignment mode.
This is done using a model that represents the desired network configuration for a deployed host. This allows MAAS to deploy consistent networking across multiple operating systems regardless of whether they employ Network Manager, systemd-networkd or Netplan.

MAAS will automatically discover the host interfaces and the vlan/subnet configuration used by the PXE enabled interface. The default configuration is all that is needed for this demonstration but MAAS is capable of provisioning advanced network settings such as LACP Bonds, VLAN interfaces and bridges.

To view the current network configuration, navigate to the Interfaces tab of the host. Notice that the IP Mode is set to Auto assign. This means that when the host is deployed, MAAS will automatically configure a static IP address on the host. This is not the same as enabling DHCP in the post-deployment configuration.


If you prefer you can also change this setting to Static assign and manually set the preferred IP address yourself.


No configuration is needed in MAAS for eno2. However, you need to make sure that this interface is assigned to the vlan you want to use with Openstack on your physical switch.

Configure MAAS Storage Model

The last requirement before deploying a host is to configure the storage disks and filesystems.

Navigate to the Storage tab of the hosts configuration menu and review any existing configuration that may have been discovered when the host was added to MAAS. If no existing partitions and filesystems exist this tab will only show the disks that were discovered.

For simplicity I’m going to keep the default storage layout setting of Flat.

Next, click the button under Actions to the right of the disk you want to configure and select Add partition.


Configure the Size, Filesystem type and Mount point you want to provision. For lab purposes I’m going to use the simplest possible configuration of a single ext4 partition mounted at the root of the filesystem.


Click the Add partition button after filling in these settings.

Do not configure any partitions or filesystems on the remaining disk (sdb). The Openstack deployment will fail if it cannot detect an unused disk.

Deployment

To deploy the host click the Take action button in the top right corner and select Deploy from the drop down list. You can watch as the Deployment process provisions the storage, OS and network configurations in the following video;

Verify Deployment

Once the Deployment process has completed your host should be left powered on with a newly provisioned operating system (Ubuntu 18.04 unless otherwise specified). If the host was deployed successfully, its status on the Machines page of the MAAS GUI should show the version of the operating system that was deployed.


You should also be able to ssh to the host using the default username ubuntu and the SSH key you specified during the initial installation of the MAAS server.

host:~$ ssh -i ~/.ssh/id_rsa ubuntu@10.1.20.11

Once you log in you should verify that your partitions were created as specified and that the MAAS network model properly deployed your intended network configuration.

ubuntu@metal-03:~$ df -h

Filesystem      Size  Used Avail Use% Mounted on
udev             24G     0   24G   0% /dev
tmpfs           4.8G  1.2M  4.8G   1% /run
/dev/sda1        67G  9.8G   54G  16% /
tmpfs            24G     0   24G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs            24G     0   24G   0% /sys/fs/cgroup
tmpfs           4.8G     0  4.8G   0% /run/user/1000

ubuntu@metal-03:~$ cat /etc/netplan/50-cloud-init.yaml
---
network:
  ethernets:
    eno1:
      addresses:
        - 10.1.20.13/24
      gateway4: 10.1.20.1
      match:
        macaddress: f0:4d:a2:07:c9:2d
      mtu: 1500
      nameservers:
        addresses:
          - 10.1.20.5
        search:
          - maas
      set-name: eno1
    eno2:
      match:
        macaddress: f0:4d:a2:07:c9:2f
      mtu: 1500
      set-name: eno2
  version: 2

Release

Now that I have demonstrated the entire deployment process, I’m going to release the host back into MAAS’ pool of available machines. In the next post I’m going to demonstrate how to use Canonical’s Juju to deploy a two-node Openstack cloud with MAAS.

To release the host we just provisioned, go to the Machines page and click the name of the host. Next click the Take action button in the top right corner and select Release from the drop down list. 


Releasing a host back into the pool of available machines changes a host’s status from ‘Deployed’ to ‘Ready’. This process includes the ‘Power off’ action. The user has the opportunity to erase the host’s storage (disks) before confirming the action.

Conclusion

The steps and examples demonstrated in this post can be used to construct an entire data center of bare metal resources. According to the documentation, the single MAAS rack controller we deployed can service as many as 1000 hosts. A distributed, highly-available deployment of MAAS is designed to support multiple data centers across multiple regions. This level of scalability makes it possible to rapidly build a self managed cloud of physical resources much like those used to power the services of existing public cloud vendors.

]]>
https://www.sms.com/blog/bare-metal-to-kubernetes-as-a-service-part-1/feed/ 0