Deploying a Home Lab using OpenStack-Ansible
I've had a Lenovo P910 for a few months and I use it for development purposes. Now that I had a few cycles to spare, I took some time to configure it properly. Originally, I had it serving up Linux containers on my home network, using my router as a DHCP server. This worked great because whatever I spun up was accessible as long as I was connected to my home network. However, depending on the purpose of the container, virtualization support varied (e.g., when running certain parts of nova or cinder).
Instead, I wanted to deploy OpenStack and figured it would be a good opportunity to learn a bit more about OpenStack-Ansible (OSA). My main requirement was still that instances be reachable from my home network. The purpose of this post is to walk through how I used OSA to lay down a basic deployment and what I had to do to achieve that use case. Note that I'm not the original author of some of this content; I just aggregated useful bits from several sources into a single place. Thanks to cloudnull and bgmccollum for all their help and configs!
Before I got started with the deployment, I re-installed a fresh copy of Ubuntu 16.04 on the host. The configuration was minimal; the only extra package I opted to install was OpenSSH so that I could manage the host remotely. I also set up a DHCP reservation on my router to always give the same internal IP address to the host (192.168.1.10) based on the MAC address of its ethernet device. Out of the box, Ubuntu gave me eno1 and rename3 for my ethernet devices. I adjusted them in favor of more traditional naming.
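One way to get traditional ethX names back (an assumption on my part; the exact mechanism isn't shown here, and udev .link files are an alternative) is to disable predictable interface naming on the kernel command line:

```
# /etc/default/grub -- one approach to restoring eth0/eth1 style names
GRUB_CMDLINE_LINUX="net.ifnames=0 biosdevname=0"
```

Then run update-grub and reboot so the devices come up under the traditional names.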
Afterwards, I adjusted the network interfaces to be persistent, making roughly the following changes:
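As a sketch of what /etc/network/interfaces held at this point — the device name and the idea of configuring the reserved address statically are assumptions on my part:

```
# /etc/network/interfaces (sketch)
auto lo
iface lo inet loopback

# Primary interface, matching the DHCP reservation on the router
auto eth1
iface eth1 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1
    dns-nameservers 192.168.1.1
```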
I tested this through a series of reboots until I could manage the host remotely and confirm the DHCP reservation worked. This file gets adjusted again after OSA is run, to put the static IP on a bridge.
Deploying with OpenStack-Ansible
Since my use case is pretty basic, I hardly made any changes to my Ansible configuration. For the most part, I followed the documented steps provided by OSA, specifically for an All-in-One (AIO) deployment. I stuck with the default scenario, aio_lxc, and the most recent release of Queens, using OSA tagged at 17.0.7.
Afterwards, I was able to access horizon via the statically assigned IP of the host. From the host, I was able to use the python-openstackclient CLI to interact with the services. Note that the containers hosting the services were not on my home network; instead they were on 172.29.236.0/24. OSA supports ways to configure this, but for my immediate needs I left that setting alone (more on this later). The installation also ends up with public and private networks as a result of running tempest during the verification process. These networks weren't going to help me expose instances via my home network, so removing them was fine.
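Cleaning those up amounts to a couple of CLI calls; the network names below are the conventional tempest defaults, which is an assumption, so it's worth confirming with a listing first:

```
# See what the tempest verification left behind
openstack network list

# Remove the networks that aren't useful here (names assumed)
openstack network delete public
openstack network delete private
```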
After that, I went back to the host and adjusted the networking configuration again. Here I determined the primary interface, removed the address configuration from eth1 and put it on br-vlan instead, and removed the iptables rules and checksums on br-vxlan. First was determining the primary interface:
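The quickest check is to see which device carries the default route; for example:

```
# The device named after "dev" in the default route is the primary interface
ip route show default
```

On this host that pointed at eth1.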
Then I updated /etc/network/interfaces to look like the following:
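Roughly, the physical devices get demoted to manual stanzas so their addressing can live on the OSA bridges instead — a sketch, not the verbatim file:

```
# /etc/network/interfaces (sketch)
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet manual

auto eth1
iface eth1 inet manual

# Pull in the OSA-managed bridge definitions
source /etc/network/interfaces.d/*.cfg
```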
Next, I moved that static IP assignment to OSA's network configuration in /etc/network/interfaces.d/osa_interfaces.cfg. I also bridged the primary interface from the host onto a bridge and removed the checksums. After making those changes, my osa_interfaces.cfg looked like this:
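The interesting stanza is br-vlan, which now owns the host's static address and bridges in the primary interface. A sketch, with the other OSA bridges omitted and the bridge options being assumptions on my part:

```
# /etc/network/interfaces.d/osa_interfaces.cfg (br-vlan stanza only, sketch)
auto br-vlan
iface br-vlan inet static
    bridge_stp off
    bridge_waitport 0
    bridge_fd 0
    bridge_ports eth1
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1
    dns-nameservers 192.168.1.1
```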
I removed the following iptables rules and rebooted to apply the new network configuration:
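I don't have the exact rules reproduced here, so treat this as a sketch: list what OSA's AIO bootstrap added, then delete the rules (such as the NAT/masquerade and checksum rules) that conflict with bridging straight onto the home network:

```
# Inspect the rules first and match deletions against the actual output
iptables -t nat -S
iptables -t mangle -S

# Example deletion (assumed rule -- copy the real rule spec from the -S
# output, swapping -A for -D)
# iptables -t nat -D POSTROUTING -s 172.29.236.0/22 ! -d 172.29.236.0/22 -j MASQUERADE
```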
Finally, I created a new flat network that modeled my physical home network. I bound the allocation pool to a specific set of IPs that I hopefully won't have conflicts with:
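Sketching this with the CLI — the physical network label ("flat"), the names, and the exact pool bounds are assumptions on my part; the label has to match whatever the host's neutron provider_networks configuration calls the flat network:

```
# A flat provider network mapped onto the physical home LAN
openstack network create --share --external \
    --provider-network-type flat \
    --provider-physical-network flat \
    home-net

# Subnet mirroring the home network, with a pool carved out for instances
openstack subnet create --network home-net \
    --subnet-range 192.168.1.0/24 \
    --gateway 192.168.1.1 \
    --dns-nameserver 192.168.1.1 \
    --allocation-pool start=192.168.1.200,end=192.168.1.220 \
    home-subnet
```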
I uploaded a couple images I use for development, imported an ssh key, and booted an instance. I just used standard cloud images from Ubuntu:
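Roughly, with the image, key, flavor, and network names all being assumptions:

```
# Register an Ubuntu cloud image with glance
wget https://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-disk1.img
openstack image create --disk-format qcow2 --container-format bare \
    --file xenial-server-cloudimg-amd64-disk1.img xenial

# Import an existing public key
openstack keypair create --public-key ~/.ssh/id_rsa.pub homelab

# Boot an instance onto the flat home network
openstack server create --image xenial --flavor m1.small \
    --key-name homelab --network home-net dev1
```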
The instance I booted came back with an IP address of 192.168.1.202, handed out by my router, which I was able to confirm from the router's list of attached devices.
Once the instance was active, I was able to directly SSH to it from my laptop.
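With the address the router handed out, that's just (ubuntu being the default user baked into Ubuntu cloud images):

```
ssh ubuntu@192.168.1.202
```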
This ended up meeting my main use case, but it would be nice to abstract the steps above into a set of user variables I can provide to OSA. This is pretty much what Kevin does in his KISS deployment. It would also be nice to have the service containers exposed via my home network, too. Then using the CLI from any of my devices wouldn't require hopping to the host. Both of these improvements should be doable using OSA, which would make for a fun follow-up post.