I have written another article on installing OpenStack using Packstack; here I will show you a step-by-step guide to install TripleO Undercloud (OpenStack) using Red Hat OpenStack Platform 10 on virtual machines with virt-manager (RHEL).

The Red Hat OpenStack Platform director is a toolset for installing and managing a complete OpenStack environment. It is based primarily on the OpenStack project TripleO, which is an abbreviation for “OpenStack-On-OpenStack”.
So the Red Hat OpenStack Platform director uses two main concepts:
- Undercloud
- Overcloud
Before we start with the steps to install the TripleO undercloud, let us understand some basic terminology.
Undercloud
The undercloud is the main director node. It is a single-system OpenStack installation that includes components for provisioning and managing the OpenStack nodes that form your OpenStack environment (the overcloud).
The primary objectives of the undercloud are:
- Discover the bare-metal servers on which OpenStack Platform will be deployed
- Serve as the deployment manager for the software to be deployed on these nodes
- Define complex network topology and configuration for the deployment
- Rollout of software updates and configurations to the deployed nodes
- Reconfigure an existing undercloud deployed environment
- Enable high availability support for the OpenStack nodes
Overcloud
- The overcloud is the resulting Red Hat OpenStack Platform environment created using the undercloud.
- This includes the different node roles which you define based on the OpenStack Platform environment you aim to create.
That was a brief overview of OpenStack-On-OpenStack; let us start with the steps to install the TripleO undercloud and deploy the overcloud in OpenStack.
My Environment:
I plan to bring up a single controller and compute node as part of my overcloud deployment.
- Physical host machine for hosting undercloud and overcloud nodes
- Red Hat OpenStack Platform Director (VM)
- one Red Hat OpenStack Platform Compute node (VM)
- One Red Hat OpenStack Platform Controller node (VM)
Physical Host Machine Requirements (Minimal)
Below are the minimal requirements recommended by Red Hat for this prototype setup:
- Dual Core 64-bit x86 processor with support for the Intel 64 or AMD64 CPU extensions
- A minimum of 16 GB of RAM.
- At least 40 GB of available disk space on the root disk.
- A minimum of 2 x 1 Gbps Network Interface Cards
- Red Hat Enterprise Linux 7.X / CentOS 7.X installed as the host operating system.
- SELinux is enabled on the host.
My Setup Details
Below is my physical host configuration:
| OS | CentOS 7.4 |
| Hostname | openstack.example |
| Bridge IP (nm-bridge1) | 10.43.138.12 |
| External Network (virbr0) | 192.168.122.0/24 GW: 192.168.122.1 |
| Provisioning Network (virbr1) | 192.168.126.0/24 GW: 192.168.126.254 |
| RAM | 128 GB |
| Disk | 900 GB |
| CPU | Dual Core |
While installing your physical host, make sure you install GNOME Desktop with all the virtualization-related rpms, or else you can manually install them later using
$ yum install libvirt-client libvirt-daemon qemu-kvm libvirt-daemon-driver-qemu libvirt-daemon-kvm virt-install bridge-utils rsync
Note: the VirtualBMC package is provided in rhel-7-server-openstack-11-rpms, while we are using openstack-10. For CentOS you can download VirtualBMC from the RDO project.
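Before proceeding, it is worth confirming that the physical host actually exposes hardware virtualization. A minimal pre-flight sketch (it only inspects the CPU flags and the /dev/kvm device node, and assumes a Linux host):

```shell
# Pre-flight check: does the CPU advertise Intel VT-x (vmx) or AMD-V (svm),
# and is the kvm device node present?
cpu_msg="CPU virtualization extensions: NOT found"
if grep -qE 'vmx|svm' /proc/cpuinfo 2>/dev/null; then
  cpu_msg="CPU virtualization extensions: present"
fi
kvm_msg="/dev/kvm: missing (kvm modules not loaded?)"
if [ -e /dev/kvm ]; then
  kvm_msg="/dev/kvm: available"
fi
printf '%s\n%s\n' "$cpu_msg" "$kvm_msg"
```

If either check fails, enable virtualization in the host BIOS or load the kvm/kvm_intel modules before continuing.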
Networking Requirements
The undercloud host requires at least two networks:
- Provisioning network - Provides DHCP and PXE boot functions to help discover bare metal systems for use in the overcloud. Typically, this network must use a native VLAN on a trunked interface so that the director serves PXE boot and DHCP requests.
- External Network - A separate network for remote connectivity to all nodes. The interface connecting to this network requires a routable IP address, either defined statically, or dynamically through an external DHCP service.
Design Flow (Steps for this article)
In a nutshell, below is the flow to “Install TripleO Undercloud and deploy Overcloud in Openstack”:
- First of all bring up a physical host
- Install a new virtual machine for undercloud-director
- Set hostname for the director
- Configure repo or subscribe to RHN
- Install python-tripleoclient
- Configure undercloud.conf
- Install Undercloud
- Obtain and upload images for overcloud introspection and deployment
- Create virtual machines for overcloud nodes (compute and controller)
- Configure Virtual Bare Metal Controller
- Import and register the overcloud nodes
- Introspect the overcloud nodes
- Tag overcloud nodes to profiles
- Lastly start deploying Overcloud Nodes
Install TripleO Undercloud Openstack
On my physical host (openstack) we already have a default network
[root@openstack ~]# virsh net-list
Name State Autostart Persistent
----------------------------------------------------------
default active yes yes
We will destroy this network and create external and provisioning networks.
[root@openstack ~]# virsh net-destroy default
[root@openstack ~]# virsh net-undefine default
[root@openstack ~]# virsh net-list
Name State Autostart Persistent
----------------------------------------------------------
Next use the below template to create the external network. Here I am using 192.168.122.1 as the gateway, which is assigned to the physical host.
[root@openstack ~]# cat /tmp/external.xml
<network>
  <name>external</name>
  <forward mode='nat'>
    <nat>
      <port start='1024' end='65535'/>
    </nat>
  </forward>
  <ip address='192.168.122.1' netmask='255.255.255.0'>
  </ip>
</network>
Now define this network and make it start automatically on boot
[root@openstack ~]# virsh net-define /tmp/external.xml
[root@openstack ~]# virsh net-autostart external
[root@openstack ~]# virsh net-start external
So let us validate the new network
[root@openstack ~]# virsh net-list
Name State Autostart Persistent
----------------------------------------------------------
external active yes yes
Similarly, create a provisioning network with 192.168.126.254 as the gateway.
[root@openstack ~]# cat /tmp/provisioning.xml
<network>
  <name>provisioning</name>
  <ip address='192.168.126.254' netmask='255.255.255.0'>
  </ip>
</network>
Now define this network and make it start automatically on boot
[root@openstack ~]# virsh net-define /tmp/provisioning.xml
[root@openstack ~]# virsh net-autostart provisioning
[root@openstack ~]# virsh net-start provisioning
Finally validate your new list of networks for virtual machines.
[root@openstack ~]# virsh net-list
Name State Autostart Persistent
----------------------------------------------------------
external active yes yes
provisioning active yes yes
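The two network definitions above can also be generated in one pass. A sketch that writes both XML templates to a scratch directory ($netdir is a temporary path for this sketch; the virsh calls are left as comments since they need a libvirt host and root privileges):

```shell
# Write both libvirt network definitions, then (on the physical host)
# define, autostart, and start each one.
netdir=$(mktemp -d)
cat > "$netdir/external.xml" <<'EOF'
<network>
  <name>external</name>
  <forward mode='nat'>
    <nat>
      <port start='1024' end='65535'/>
    </nat>
  </forward>
  <ip address='192.168.122.1' netmask='255.255.255.0'>
  </ip>
</network>
EOF
cat > "$netdir/provisioning.xml" <<'EOF'
<network>
  <name>provisioning</name>
  <ip address='192.168.126.254' netmask='255.255.255.0'>
  </ip>
</network>
EOF
# On the physical host (requires libvirt):
#   for net in external provisioning; do
#     virsh net-define "$netdir/$net.xml"
#     virsh net-autostart "$net"
#     virsh net-start "$net"
#   done
echo "templates written to $netdir"
```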
Check your network configuration. As you can see, we now have two bridges, virbr0 and virbr1, with the networks we created above.
[root@openstack ~]# ifconfig
eno51: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
ether 9c:dc:71:77:ef:51 txqueuelen 1000 (Ethernet)
RX packets 100888 bytes 5670187 (5.4 MiB)
RX errors 0 dropped 208 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
eno52: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
ether 9c:dc:71:77:ef:59 txqueuelen 1000 (Ethernet)
RX packets 54461086 bytes 81543828070 (75.9 GiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 2985822 bytes 438043585 (417.7 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10
loop txqueuelen 1 (Local Loopback)
RX packets 152875 bytes 9356602 (8.9 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 152875 bytes 9356602 (8.9 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
nm-bridge1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.43.138.12 netmask 255.255.255.224 broadcast 10.43.138.31
inet6 fe80::9edc:71ff:fe77:ef59 prefixlen 64 scopeid 0x20
ether 9c:dc:71:77:ef:59 txqueuelen 1000 (Ethernet)
RX packets 8015838 bytes 77945540204 (72.5 GiB)
RX errors 0 dropped 240 overruns 0 frame 0
TX packets 2725594 bytes 416996466 (397.6 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
virbr0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.122.1 netmask 255.255.255.0 broadcast 192.168.122.255
ether 52:54:00:4e:e8:2c txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 1 bytes 160 (160.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
virbr1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.126.254 netmask 255.255.255.0 broadcast 192.168.126.255
ether 52:54:00:c9:37:63 txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
vnet0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet6 fe80::fc54:ff:fea1:8128 prefixlen 64 scopeid 0x20
ether fe:54:00:a1:81:28 txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 74 bytes 4788 (4.6 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
vnet1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet6 fe80::fc54:ff:fe33:e8b4 prefixlen 64 scopeid 0x20
ether fe:54:00:33:e8:b4 txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 33 bytes 1948 (1.9 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
Similarly check the network connectivity for your gateway
[root@openstack ~]# ping 192.168.122.1
PING 192.168.122.1 (192.168.122.1) 56(84) bytes of data.
64 bytes from 192.168.122.1: icmp_seq=1 ttl=64 time=0.040 ms
^C
--- 192.168.122.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms
[root@openstack ~]# ping 192.168.126.254
PING 192.168.126.254 (192.168.126.254) 56(84) bytes of data.
64 bytes from 192.168.126.254: icmp_seq=1 ttl=64 time=0.058 ms
64 bytes from 192.168.126.254: icmp_seq=2 ttl=64 time=0.069 ms
^C
--- 192.168.126.254 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.058/0.063/0.069/0.009 ms
Configure OpenStack with KVM-based Nested Virtualization
When using virtualization technologies like KVM, one can take advantage of nested VMX (i.e. the ability to run KVM on KVM) so that the VMs in the cloud (Nova guests) run relatively faster than with plain QEMU emulation.
Check if the nested KVM Kernel parameter is enabled
[root@openstack ~]# cat /sys/module/kvm_intel/parameters/nested
N
Add the below content in kvm.conf
[root@openstack ~]# vim /etc/modprobe.d/kvm.conf
options kvm_intel nested=Y
Reboot the node and check the nested KVM kernel parameter again.
[root@openstack ~]# cat /sys/module/kvm_intel/parameters/nested
Y
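The check-and-persist steps above can be wrapped in a small sketch. It is read-only apart from the commented modprobe.d line, which needs root:

```shell
# Report the current nested KVM setting on Intel hosts (AMD hosts expose
# the same parameter under /sys/module/kvm_amd/parameters/nested).
nested_param=/sys/module/kvm_intel/parameters/nested
if [ -r "$nested_param" ]; then
  echo "nested KVM currently: $(cat "$nested_param")"
else
  echo "kvm_intel module not loaded on this host"
fi
# To persist across reboots (as root), then reboot or reload the module:
#   echo 'options kvm_intel nested=Y' > /etc/modprobe.d/kvm.conf
```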
Update the /etc/hosts content on your physical host (openstack). I plan to use 192.168.122.90 for my director node, so I have added it here.
[root@openstack ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.122.90 director.example director
Disable firewalld on the host openstack machine
[root@openstack ~]# systemctl stop firewalld
[root@openstack ~]# systemctl disable firewalld
Create Director Virtual Machine
Here you can manually create a virtual machine for the director node. Below are my specs and node details
| OS | RHEL 7.4 |
| Hostname | director.example |
| vCPUs | 4 |
| Memory | 20480 MB |
| Disk (qcow2) | 60 GB |
| Public Network (ens3) MAC: 52:54:00:a1:81:28 | 10.43.138.27 |
| Provisioning Network (ens4) MAC: 52:54:00:33:e8:b4 | 192.168.126.1 |
| External Network (ens9) MAC: 52:54:00:86:83:c0 | 192.168.122.90 |
Setting your hostname for the undercloud
The director requires a fully qualified domain name for its installation
and configuration process.
This means you may need to set the hostname of your director’s host.
# hostnamectl set-hostname director.example
# hostnamectl set-hostname --transient director.example
The director also requires an entry for the system’s hostname and base
name in /etc/hosts.
[stack@director ~]$ cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.122.90 director.example director
Below is the network configuration for my director node
[root@director network-scripts]# ifconfig
ens3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.43.138.27 netmask 255.255.255.0 broadcast 10.43.138.255
inet6 fe80::5054:ff:fea1:8128 prefixlen 64 scopeid 0x20
ether 52:54:00:a1:81:28 txqueuelen 1000 (Ethernet)
RX packets 1393 bytes 75417 (73.6 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 78 bytes 7833 (7.6 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
ens4: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.126.1 netmask 255.255.255.0 broadcast 192.168.126.255
inet6 fe80::5054:ff:fe33:e8b4 prefixlen 64 scopeid 0x20
ether 52:54:00:33:e8:b4 txqueuelen 1000 (Ethernet)
RX packets 2 bytes 130 (130.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 77 bytes 4226 (4.1 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
ens9: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.122.90 netmask 255.255.255.0 broadcast 192.168.122.255
inet6 fe80::5054:ff:fe86:83c0 prefixlen 64 scopeid 0x20
ether 52:54:00:86:83:c0 txqueuelen 1000 (Ethernet)
RX packets 1238 bytes 87817 (85.7 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 805 bytes 220059 (214.9 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10
loop txqueuelen 1 (Local Loopback)
RX packets 251 bytes 20716 (20.2 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 251 bytes 20716 (20.2 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
Similarly, below is my network config file for the public network, which I use for direct connectivity from my laptop.
[root@director network-scripts]# cat ifcfg-ens3
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
IPADDR=10.43.138.27
PREFIX=24
GATEWAY=10.43.138.30
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=no
NAME=ens3
UUID=e7dab5ae-06c6-4855-bf1e-487919fe13a2
DEVICE=ens3
ONBOOT=yes
Similarly below is my network config file for the provisioning network
[root@director network-scripts]# cat ifcfg-ens4
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
IPADDR=192.168.126.1
PREFIX=24
DEFROUTE=no
IPV4_FAILURE_FATAL=no
IPV6INIT=no
NAME=ens4
UUID=8f6b534e-2ee1-4bc8-9159-27be0214d507
DEVICE=ens4
ONBOOT=yes
Lastly, below is my network config file for the external network
[root@director network-scripts]# cat ifcfg-ens9
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
DEFROUTE=no
IPV4_FAILURE_FATAL=no
IPV6INIT=no
NAME=ens9
UUID=7ab31c05-3da6-4609-a55f-c63c078e8f19
DEVICE=ens9
ONBOOT=yes
IPADDR=192.168.122.90
PREFIX=24
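Since the three ifcfg files above differ only in device, address, and gateway, they can be generated from one helper. A sketch (UUID lines are omitted here; add your own with uuidgen if your tooling needs them; $cfgdir is scratch space, not /etc/sysconfig/network-scripts):

```shell
# Generate the three ifcfg files from a common template.
cfgdir=$(mktemp -d)
write_ifcfg() {   # args: device ipaddr defroute [gateway]
  dev=$1 ip=$2 defroute=$3 gw=${4:-}
  {
    echo "TYPE=Ethernet"
    echo "BOOTPROTO=none"
    echo "IPADDR=$ip"
    echo "PREFIX=24"
    echo "DEFROUTE=$defroute"
    if [ -n "$gw" ]; then echo "GATEWAY=$gw"; fi
    echo "IPV4_FAILURE_FATAL=no"
    echo "IPV6INIT=no"
    echo "NAME=$dev"
    echo "DEVICE=$dev"
    echo "ONBOOT=yes"
  } > "$cfgdir/ifcfg-$dev"
}
write_ifcfg ens3 10.43.138.27   yes 10.43.138.30   # public
write_ifcfg ens4 192.168.126.1  no                 # provisioning
write_ifcfg ens9 192.168.122.90 no                 # external
echo "wrote ifcfg files to $cfgdir"
```

Copy the generated files to /etc/sysconfig/network-scripts/ as root and restart networking to apply them.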
Below are my route files
[root@director network-scripts]# cat route-ens4
ADDRESS0=192.168.126.0
NETMASK0=255.255.255.0
GATEWAY0=192.168.126.254
METRIC0=0
[root@director network-scripts]# cat route-ens9
ADDRESS0=192.168.122.0
NETMASK0=255.255.255.0
GATEWAY0=192.168.122.1
METRIC0=0
Likewise, below are my route details
[root@director network-scripts]# ip route show
default via 10.43.138.30 dev ens3 proto static metric 100
10.43.138.0/24 dev ens3 proto kernel scope link src 10.43.138.27 metric 100
192.168.122.0/24 via 192.168.122.1 dev ens9 proto static
192.168.122.0/24 dev ens9 proto kernel scope link src 192.168.122.90 metric 102
192.168.126.0/24 via 192.168.126.254 dev ens4 proto static
192.168.126.0/24 dev ens4 proto kernel scope link src 192.168.126.1 metric 101
Lastly, make sure you are able to ping all your gateways
[root@director network-scripts]# ping 192.168.122.1
PING 192.168.122.1 (192.168.122.1) 56(84) bytes of data.
64 bytes from 192.168.122.1: icmp_seq=1 ttl=64 time=0.269 ms
64 bytes from 192.168.122.1: icmp_seq=2 ttl=64 time=0.315 ms
64 bytes from 192.168.122.1: icmp_seq=3 ttl=64 time=0.335 ms
^C
--- 192.168.122.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.269/0.306/0.335/0.031 ms
[root@director network-scripts]# ping 192.168.126.254
PING 192.168.126.254 (192.168.126.254) 56(84) bytes of data.
64 bytes from 192.168.126.254: icmp_seq=1 ttl=64 time=0.410 ms
64 bytes from 192.168.126.254: icmp_seq=2 ttl=64 time=0.337 ms
64 bytes from 192.168.126.254: icmp_seq=3 ttl=64 time=0.365 ms
^C
--- 192.168.126.254 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.337/0.370/0.410/0.037 ms
Configure Repository
Since I do not have direct Internet access from this director node, I have synced the required online repositories to my openstack host machine and I am using them over HTTP as an offline repo.
[root@director network-scripts]# cat /etc/yum.repos.d/rhel.repo
[rhel-7-server-extras-rpms]
name=rhel-7-server-extras-rpms
baseurl=http://192.168.122.1/repo/rhel-7-server-extras-rpms/
gpgcheck=0
enabled=1
[rhel-7-server-rh-common-rpms]
name=rhel-7-server-rh-common-rpms
baseurl=http://192.168.122.1/repo/rhel-7-server-rh-common-rpms/
gpgcheck=0
enabled=1
[rhel-7-server-rpms]
name=rhel-7-server-rpms
baseurl=http://192.168.122.1/repo/rhel-7-server-rpms/
gpgcheck=0
enabled=1
[rhel-7-server-openstack-10-devtools-rpms]
name=rhel-7-server-openstack-10-devtools-rpms
baseurl=http://192.168.122.1/repo/rhel-7-server-openstack-10-devtools-rpms/
gpgcheck=0
enabled=1
[rhel-7-server-openstack-10-rpms]
name=rhel-7-server-openstack-10-rpms
baseurl=http://192.168.122.1/repo/rhel-7-server-openstack-10-rpms/
gpgcheck=0
enabled=1
[rhel-7-server-satellite-tools-6.2-rpms]
name=rhel-7-server-satellite-tools-6.2-rpms
baseurl=http://192.168.122.1/repo/rhel-7-server-satellite-tools-6.2-rpms/
gpgcheck=0
enabled=1
[rhel-ha-for-rhel-7-server-rpms]
name=rhel-ha-for-rhel-7-server-rpms
baseurl=http://192.168.122.1/repo/rhel-ha-for-rhel-7-server-rpms/
gpgcheck=0
enabled=1
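All seven repo stanzas follow the same template, so the file can be generated from a list of channel names. A sketch assuming the same mirror URL as above (it writes to a scratch directory; copy the result to /etc/yum.repos.d/ as root):

```shell
# Generate the offline repo file from a list of channel names.
repodir=$(mktemp -d)
mirror="http://192.168.122.1/repo"
repos="rhel-7-server-extras-rpms rhel-7-server-rh-common-rpms \
rhel-7-server-rpms rhel-7-server-openstack-10-devtools-rpms \
rhel-7-server-openstack-10-rpms rhel-7-server-satellite-tools-6.2-rpms \
rhel-ha-for-rhel-7-server-rpms"
: > "$repodir/rhel.repo"
for r in $repos; do
  cat >> "$repodir/rhel.repo" <<EOF
[$r]
name=$r
baseurl=$mirror/$r/
gpgcheck=0
enabled=1

EOF
done
echo "wrote $repodir/rhel.repo"
```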
Disable firewalld on director node
[root@director ~]# systemctl stop firewalld
[root@director ~]# systemctl disable firewalld
Installing the Director Packages
Use the following command to install the required command-line tools for director installation and configuration:
[root@director ~]# yum install -y python-tripleoclient
Creating user for undercloud deployment
The undercloud and overcloud deployment must be done as a normal user and not as root, so we will create a stack user for this purpose.
[root@director ~]# useradd stack
[root@director network-scripts]# echo redhat | passwd --stdin stack
Changing password for user stack.
passwd: all authentication tokens updated successfully.
[root@director ~]# echo "stack ALL=(root) NOPASSWD:ALL" | tee -a /etc/sudoers.d/stack
stack ALL=(root) NOPASSWD:ALL
[root@director ~]# chmod 0440 /etc/sudoers.d/stack
[root@director ~]# su - stack
Last login: Mon Oct 8 08:54:44 IST 2018 on pts/0
Configure undercloud deployment parameters
Copy the sample undercloud.conf file to the home directory of stack
user as shown below
[stack@director ~]$ cp /usr/share/instack-undercloud/undercloud.conf.sample ~/undercloud.conf
Now update or add the below variables in your undercloud.conf. These variables will be used to set up your undercloud node.
[stack@director ~]$ vim undercloud.conf
[DEFAULT]
local_ip = 192.168.126.1/24
undercloud_public_vip = 192.168.126.2
undercloud_admin_vip = 192.168.126.3
local_interface = ens4
masquerade_network = 192.168.126.0/24
dhcp_start = 192.168.126.100
dhcp_end = 192.168.126.150
network_cidr = 192.168.126.0/24
network_gateway = 192.168.126.1
inspection_iprange = 192.168.126.160,192.168.126.199
generate_service_certificate = true
certificate_generation_ca = local
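A quick sanity check on this address plan: the dhcp_start/dhcp_end range and the inspection_iprange must both fall inside network_cidr and must not overlap with each other. A sketch hard-coding the values from the file above (adjust them for your own plan):

```shell
# Convert a dotted-quad IP to an integer so the ranges can be compared.
ip2int() {
  IFS=. read -r o1 o2 o3 o4 <<EOF
$1
EOF
  echo $(( (o1 << 24) + (o2 << 16) + (o3 << 8) + o4 ))
}
dhcp_start=$(ip2int 192.168.126.100); dhcp_end=$(ip2int 192.168.126.150)
insp_start=$(ip2int 192.168.126.160); insp_end=$(ip2int 192.168.126.199)
net_lo=$(ip2int 192.168.126.0);       net_hi=$(ip2int 192.168.126.255)
plan_ok=yes
# Both ranges inside the /24?
[ "$dhcp_start" -ge "$net_lo" ] && [ "$insp_end" -le "$net_hi" ] || plan_ok=no
# Ranges disjoint?
[ "$dhcp_end" -lt "$insp_start" ] || [ "$insp_end" -lt "$dhcp_start" ] || plan_ok=no
echo "address plan ok: $plan_ok"
```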
You can follow the official Red Hat documentation to understand each individual parameter we have used here.
Install TripleO Undercloud
The undercloud deployment is completely automated and uses puppet manifests provided by TripleO. The following command launches the director’s configuration script; the director installs additional packages and configures its services to suit the settings in undercloud.conf.
[stack@director ~]$ openstack undercloud install
** output trimmed **
#############################################################################
Undercloud install complete.
The file containing this installation's passwords is at
/home/stack/undercloud-passwords.conf.
There is also a stackrc file at /home/stack/stackrc.
These files are needed to interact with the OpenStack services, and should be
secured.
#############################################################################
Check the /home/stack/.instack/install-undercloud.log file for the installation-related logs.
The configuration is performed using the Python script /usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py.
The configuration script generates two files when complete:
- undercloud-passwords.conf - A list of all passwords for the director’s services.
- stackrc - A set of initialisation variables to help you access the director’s command line tools.
View the undercloud’s configured network interfaces. The br-ctlplane bridge (192.168.126.1) is the provisioning network, the ens9 interface (192.168.122.90) is the external network, and ens3 (10.43.138.27) is the public network.
[root@director ~]# ip a | grep -E 'br-ctlplane|ens9|ens3'
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
inet 10.43.138.27/24 brd 10.43.138.255 scope global noprefixroute ens3
4: ens9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
inet 192.168.122.90/24 brd 192.168.122.255 scope global noprefixroute ens9
6: br-ctlplane: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
inet 192.168.126.1/24 brd 192.168.126.255 scope global br-ctlplane
inet 192.168.126.3/32 scope global br-ctlplane
inet 192.168.126.2/32 scope global br-ctlplane
In my next article I will continue with “Install TripleO Undercloud and deploy Overcloud in Openstack”, where I will share the steps to deploy the overcloud with a single controller and a single compute node.


