Installation Instructions
Enea NFV Core leverages the work in the OPNFV
Project, delivering selected Installer DVD images together with instructions
on how to set up the Installers and deploy OPNFV releases on a Pharos
compliant test lab.
Enea NFV Core uses the Fuel@OPNFV Installer as a deployment facility,
hereafter referred to as Fuel. Fuel is an
automated deployment tool capable of automatically provisioning and
deploying OpenStack on a cluster of servers.
Enea NFV Core is based on the OPNFV Danube release. The OPNFV download page
provides general instructions for building and installing the Fuel Installer
ISO, as well as for deploying OPNFV Danube using Fuel on a Pharos compliant
test lab.
Covering chapters 1-6 of the Fuel Installation Guide is useful for
better understanding the hardware requirements and how the deployment
process works. Since an ISO image is provided, however, it is not
necessary to build one from scratch.
Before starting the installation of this release of Enea NFV Core,
certain preparations must be done to ensure optimal performance.
The ISO image
ENEA provides the ISO image that is to be used, removing the need
for any downloads or building from the ground up.
Other Preparations
Reading the following documents aids in familiarizing yourself with
Fuel:
Fuel
Installation Guide
Fuel
User Guide
Fuel
Developer Guide (optional)
Fuel
Plugin Developers Guide (optional)
OPNFV
Fuel Installation Guide for aarch64
OPNFV
Fuel Installation Guide for x86
Prior to installation, a number of deployment-specific parameters
must be collected. Change the following parameters as appropriate:
Provider sub-net and gateway information
Provider VLAN information
Provider DNS addresses
Provider NTP addresses
Network overlay planned for deployment (VLAN, VXLAN, FLAT). Only
VLAN is supported in this release.
How many nodes and what roles you want to deploy (Controllers,
Storage, Computes)
Monitoring options you want to deploy (Ceilometer, Syslog,
etc.).
Other options not covered in the document are available in the
links above.
This information will be needed for the configuration procedures
that follow.
Hardware Requirements
Enea NFV Core has been validated with the
requirements shown below, which represent the recommended configuration
for a successful installation of Enea NFV Core using Fuel.
Enea NFV Core 1.1 can also be installed on a cluster consisting of
servers with mixed CPU architectures. The user can deploy x86
Controllers and a combination of x86 and aarch64 Compute nodes.
Hardware Requirements for
Aarch64:
Nr. of nodes
6 physical nodes:
1 x Fuel deployment master (virtualized), x86-based
3x Cavium ThunderX 1U 48 cores R120-T30
as Controller nodes (for an HA configuration: 1 collocated
mongo/ceilometer role, 1 Ceph-OSD role, 1 Vitrage Controller
role)
2x Cavium ThunderX 2U 96 cores R270-T60
as Compute nodes (with collocated Ceph-OSD roles)
RAM
128 GB on the Controller nodes, 256 GB on the Compute
nodes
Disk
1 x 120GB SSD and 1 x 2TB SATA 5400 rpm
Networks
Apart from the integrated NICs, one Intel® 82574L
PCIe card was also installed, to be used by Fuel Admin on each
server.
Hardware Requirements for
x86:
Nr. of nodes
Controllers (3x):
Chassis: SuperChassis SC813MTQ-350CB
Motherboard: X10SDV-7TP4F
CPU
Intel® Xeon® D-1537 8-core/HT 1.7 GHz 35W
RAM
128 GB
Disk
1 x 120GB SSD and 1 x 2TB SATA 5400 rpm
Networks
NICs:
Intel® i350-AM2 Dual port GbE LAN
Intel X552-AT2 dual port
Nr. of nodes
Compute (2x):
Chassis: SuperChassis SC825TQ-R740WB
Motherboard: X10DRW-E
CPU
2 x Intel® Xeon® E5-2620 v4 8-core/HT 2.0 GHz
85W
RAM
128 GB
Networks
NICs:
Intel® i350-AM2 Dual port GbE LAN
Intel 82599ES 10Gb SFI/SFP+
Installing Fuel Master
This section describes the installation of the Enea NFV Core
installation server (Fuel Master) as well as the deployment of the full
Enea NFV Core reference platform stack across a server cluster.
It is recommended to install the Fuel Master on a VM using
virt-manager, with a minimum of 8GB of RAM, 4 CPUs and at least 100GB
disk.
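For command-line users, the VM described above can also be created with
virt-install, the CLI counterpart of virt-manager. The sketch below matches
the recommended sizing; the ISO path and bridge names (pxebr0 for the
Admin/PXE network, br-public for the lab network) are assumptions made for
illustration, not values from this document.

```shell
# Sketch only: create a Fuel Master VM with 8 GB RAM, 4 CPUs and a
# 100 GB disk, booting from the Enea NFV Core ISO. The ISO path and
# the two bridge names are assumptions; substitute your own.
create_fuel_master_vm() {
  virt-install \
    --name fuel-master \
    --memory 8192 \
    --vcpus 4 \
    --disk size=100 \
    --cdrom /var/lib/libvirt/images/enea-nfv-core.iso \
    --network bridge=pxebr0 \
    --network bridge=br-public \
    --graphics vnc
}
# Usage (on the virtualization host): create_fuel_master_vm
```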
Mount the Enea NFV Core ISO file/media as a boot
device to the Fuel Master VM.
Reboot the VM and make sure it boots from the ISO:
The system now boots from the ISO image
Select Fuel Install (Static IP) as shown
in the figure below.
Press [Enter]
Wait until the Fuel setup screen is shown, this can take up to
30 minutes.
In the Fuel User section, confirm/change the
default password
Enter "admin" in the Fuel password input
Enter "admin" in the Confirm password input
Select "Check" and press [Enter]
In the Network Setup section, configure the
network interfaces of the Fuel Master node:
eth0 is the interface used by nodes to boot via PXE (Admin
network) and eth1 is the interface on the corporate/lab network
(Public network).
eth0 should be enabled and assigned static IP address
10.20.0.2/24, with no gateway.
eth1 should be enabled and configured with a static IP
address, netmask and gateway on your Public network.
You will access Fuel from your Public network using the IP
configured for eth1.
In the PXE setup menu, the default values can
be left unchanged.
In the DNS & Hostname section the
recommended values are as follows:
The Bootstrap Image section should be
skipped, the ISO will be configured in advance to use the proper
repositories.
During the Fuel installation process, bootstrap images for x86_64
and aarch64 architectures will be created, so that the user can
directly install single or mixed arch clusters.
In the Time Sync section, change the fields
shown below to appropriate values if needed,
pool.ntp.org is set by default.
In the Feature groups section, enable the
checkbox for Experimental features. Move to the
<Apply> button and press <Enter>
Start the installation:
Select Quit Setup and press [Save and
Quit].
The installation will now start. Wait until the login screen
is shown.
Boot the Servers
Wait until the Fuel Master installation is complete, indicated by
the VM restarting and prompting for user login. After the Fuel Master node
(setup in the previous section) has rebooted and is at the login prompt,
you should boot the Node Servers (the Compute/Control/Storage blades,
nested or real) with a PXE booting scheme so that the FUEL Master can pick
them up for control.
Enable PXE booting:
For every controller and compute server, enable PXE Booting as
the first boot device in the BIOS boot menu (for x86) or UEFI boot
order menu (for aarch64).
Reboot all the control and compute blades.
Connect to the FUEL UI via the URL provided in the Console
(default: http://10.0.6.10)
Wait for the availability of nodes to appear in the Fuel
GUI.
Wait until all nodes are displayed in top right corner of the
Fuel GUI: Total nodes and Unallocated nodes (see figure below).
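The same check can be performed from the Fuel Master console. The helper
below is a sketch that counts nodes in the "discover" state as reported by
the fuel CLI; it assumes "fuel nodes" prints one table row per node with
its status, which may vary slightly between Fuel versions.

```shell
# Sketch: count bootstrapped ("discover") nodes from the Fuel Master.
count_discovered_nodes() {
  fuel nodes 2>/dev/null | grep -c "discover"
}
# Usage: count_discovered_nodes
# For the reference cluster in this guide, 5 nodes should eventually appear.
```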
Installing additional Plugins/Features on FUEL
In order to obtain the extra features used by this release of Enea
NFV Core, a few additional Fuel plugins have to be installed at this
stage. Supplementary
configuration will also need to be performed after the installation is
complete.
The following plugins will need to be installed:
Fuel Vitrage Plugin
Zabbix for Fuel
Tacker VNF Manager
KVM For NFV Plugin
Currently the KVM For NFV Plugin is only available for x86
architectures. Refer to the OPNFV KVMforNFV Project for more
information.
Login to the Fuel Master via ssh using the
default credentials (e.g. root@10.20.0.2 pwd: r00tme) and install the
additional plugins in /opt/opnfv:
$ fuel plugins --install /opt/opnfv/vitrage-1.0-1.0.4-1.noarch.rpm
$ fuel plugins --install /opt/opnfv/zabbix_monitoring-2.5-2.5.3-1.noarch.rpm
$ fuel plugins --install /opt/opnfv/tacker-1.0-1.0.0-1.noarch.rpm
$ fuel plugins --install /opt/opnfv/fuel-plugin-kvm-1.0-1.0.0-1.noarch.rpm
Expected output: Plugin ....... was successfully installed.
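As an extra sanity check, the installed plugins can be listed back on the
Fuel Master. The helper below is a sketch; it assumes "fuel plugins"
prints a table containing each installed plugin's name.

```shell
# Sketch: verify that a plugin appears in the "fuel plugins" listing.
check_plugin_installed() {
  if fuel plugins 2>/dev/null | grep -q "$1"; then
    echo "installed: $1"
  else
    echo "missing: $1"
  fi
}
# Usage:
# for p in vitrage zabbix_monitoring tacker fuel-plugin-kvm; do
#   check_plugin_installed "$p"
# done
```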
Create an OpenStack Environment
Follow the procedure below to create an OpenStack
environment:
Connect to Fuel WEB UI with a browser (http://10.0.6.10) (login:
admin/admin)
Create and name a new OpenStack environment that you want to
install.
Only Debian 9 is supported in this release. Select
Newton on Debian 9 (x86_64) or Newton
on Debian 9 (aarch64) if you are deploying on a cluster with
servers of the same CPU architecture, or Newton on Debian 9
(amd64,arm64) if you are using Compute nodes of mixed
architectures:
Select compute virtualization method, then
choose QEMU-KVM as hypervisor and press
[Next].
Select Neutron with VLAN segmentation.
Neutron with tunneling segmentation is available
but not supported in this release. DPDK scenarios only work with VLAN
segmentation.
Select Storage Back-ends, then Ceph
for block storage.
Ceph for Image Storage,
Object storage and Ephemeral
storage have not been validated for this release. It is
advisable to only use the option mentioned above.
In the Additional Services section, select "Install
Vitrage":
Create the new environment by clicking the [Create]
Button.
Configure the Network Environment
To configure the network environment, please follow these
steps:
Open the environment you previously created.
Open the Networks tab and select
default in the Node Networks
group, on the left side menu:
Update the Public network configuration and change the following
fields to appropriate values:
CIDR to <CIDR for Public IP Addresses>
IP Range Start to <Public IP Address start>
(recommended to start with x.x.x.41)
IP Range End to <Public IP Address end> (recommended
to end with x.x.x.100)
Gateway to <Gateway for Public IP Addresses>
Check <Use VLAN tagging> if needed. For simplicity it
is recommended to use the public network in untagged mode.
Set an appropriate VLAN ID, if needed.
Update the Storage Network Configuration:
It is recommended to keep the default CIDR
Set IP Range Start to an appropriate value (default
192.168.1.1)
Set IP Range End to an appropriate value (default
192.168.1.254)
Set VLAN tagging as needed
Set an appropriate VLAN ID (default 102)
Update the Management Network configuration:
It is recommended to keep the default CIDR
Set IP Range Start to an appropriate value (default
192.168.0.1)
Set IP Range End to an appropriate value (default
192.168.0.254)
Set VLAN tagging as needed
Set an appropriate VLAN ID (default 101)
Select the Neutron L3 Node Networks group on
the left pane:
Update the Floating Network Parameters
configuration:
Set the Floating IP range start (recommended to start with
x.x.x.101)
Set the Floating IP range end (recommended to end with
x.x.x.200)
Update the Internal Network configuration:
It is recommended to keep the default CIDR and mask
Set Internal network gateway to an appropriate value
Update the Guest OS DNS servers with
appropriate values.
Save your settings
Select the Other Node Networks group on the
left pane:
Make sure the Public Gateway is Available and
Assign public networks to all nodes are checked.
Public Gateway is Available may be unchecked if the lab is
not connected to an upstream network.
Update the Host OS DNS Servers
settings
Update the Host OS NTP Servers settings,
changing the NTP servers if needed, and save all your changes.
Adding/Removing Repositories
Enea NFV Core has been validated for complete offline deployment. To
this end, two repositories are defined and used. The first,
debian-testing-local (deb
http://10.20.0.2:8080/mirrors/debian testing main), contains a
snapshot of the Debian base OS, while the second, mos,
(deb http://10.20.0.2:8080/newton-10.0/ubuntu/x86_64 mos10.0 main
restricted), stores the Enea NFV Core specific OpenStack and
OpenStack-related packages.
These repositories provide only the minimum necessary packages, but
it is possible to add extra repositories as needed. It is recommended
however, that the first deployment be performed without extra
repositories.
In the Settings tab of the FUEL UI, select
General and scroll down to the Repositories list
(see figure below).
Remove any extra repositories that point to external
repositories, by clicking the delete/minus button on the far right of
the repository entry.
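Since the deployment is designed to run offline against the two local
repositories described above, it is worth confirming that they answer HTTP
requests before deploying. The check below is a sketch using curl against
the repository base URLs from this section.

```shell
# Sketch: confirm a local mirror on the Fuel Master answers HTTP requests.
check_repo_url() {
  if curl -fsI "$1" >/dev/null 2>&1; then
    echo "reachable: $1"
  else
    echo "unreachable: $1"
  fi
}
# Usage:
# check_repo_url http://10.20.0.2:8080/mirrors/debian
# check_repo_url http://10.20.0.2:8080/newton-10.0/ubuntu/x86_64
```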
Select Hypervisor type
Setting the Hypervisor type is done in the
Settings tab by selecting Compute on
the left side pane, and checking the KVM box:
Storage, Plugins and Additional Components
In the FUEL UI of your Environment, click the
Settings tab and select Storage.
Make sure the components shown below in Common and
Storage Backends are enabled:
Save your settings and select OpenStack Services
on the left side pane. Install Ceilometer and Aodh
should be enabled, while Tacker VNF manager should not.
Tacker functionality will be enabled after deployment is performed.
Select Other on the left pane and do the
following:
Enable and configure Zabbix for Fuel
Enable and configure Fuel Vitrage Plugin
Check the box for Use Zabbix Datasource in
Vitrage:
Enable and configure the KVM For NFV Plugin.
Currently the KVM For NFV Plugin is only available for x86
architectures. Refer to the OPNFV
KVMforNFV Project for more information.
Allocate Nodes and assign Functional Roles
This is accomplished in the following way:
Click on the Nodes Tab in the FUEL WEB
UI:
Assign roles:
Click on the +Add Nodes button
Check the Controller, Ceph OSD
roles
Check one node which you want to act as a Controller from
the bottom half of the screen.
Click Apply Changes
Click on the +Add Nodes button
Check Controller, Telemetry -
MongoDB
Check one node to assign these roles
Click Apply Changes
Click on +Add Nodes button
Check Controller, Vitrage
Check one node to assign as a Controller
Click Apply Changes
Check the Compute, Ceph OSD roles
Check the nodes you want to act as Computes from the bottom
half of the screen.
Click Apply Changes
Internally, for testing, the Controller nodes have a different
network configuration compared to the Compute nodes, but this is not
mandatory. The 5 nodes in the cluster can have the exact same
configuration.
Configure interfaces for the Controller nodes:
Select all Controller nodes
Click [Configure Interfaces]
Assign interfaces (in this case Public, Storage and
Management were set on the first 10GbE Port and Private on the
second 10GbE port, with Admin on a 1Gb port), and click
[Apply].
Configure Compute nodes interfaces:
Select the Compute nodes
Click <Configure Interfaces>
Assign interfaces (in this case Public, Storage and
Management were set on the first 10GbE Port and Private on the
second 10GbE port; Admin is on a 1Gb port)
For the Private network enable DPDK
Click Apply
Configure Workload Acceleration
Enea NFV Core supports NFV workload
acceleration features such as Huge Pages, Single-Root I/O Virtualization
(SR-IOV), Data Plane Developer Kit (DPDK), Non-Uniform Memory Access
(NUMA) and CPU pinning. Please refer to the Fuel
User Guide for more information.
To enable SR-IOV on the Compute nodes, open the Interface
Configuration menu and make sure to select a network interface which
has no network assigned to it.
This step is needed for the DPDK based scenarios.
Click on the gear/settings icon on the right of a Compute node
and do the following:
In the menu that pops up, expand the Node
Attributes menu.
Set Huge Pages for Nova and DPDK to
appropriate values and save your settings. It is recommended to
use at least 2048 pages of 2MB for Nova and 2048MB for
DPDK.
Perform the same configuration for the other Compute
nodes.
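After deployment, the huge page reservation can be confirmed on each
Compute node (for example over ssh). A minimal check, assuming a standard
Linux /proc/meminfo:

```shell
# Sketch: print the kernel's huge page counters on a Compute node.
# With the values recommended above, HugePages_Total should cover
# both the Nova pool and the DPDK allocation.
check_hugepages() {
  grep -E '^HugePages_(Total|Free)|^Hugepagesize' /proc/meminfo
}
# Usage (on the Compute node): check_hugepages
```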
Target Specific Configuration
Follow the steps below for setting custom target configuration, as
needed. Skip this step if no specific configurations are required.
Set up targets for provisioning with non-default "Offloading
Modes".
Some target nodes may require additional configuration after
they are PXE booted (bootstrapped). The most frequent changes occur in
the defaults of ethernet device "Offloading Modes" settings (e.g.
certain target ethernet drivers may strip VLAN traffic by
default).
If your target ethernet drivers have incorrect "Offloading
Modes" defaults, in the "Configure interfaces" page (described above),
expand the affected interface's "Offloading Modes" and (un)check the
settings you need.
Set up targets for "Verify Networks" with non-default
"Offloading Modes".
Please see the Known Issues and Limitations chapter for the
1.1 release for an updated and comprehensive
list of known issues and limitations, including the "Offloading Modes"
not being applied during the "Verify Networks" step.
Setting custom "Offloading Modes" in the Fuel GUI will only
apply during provisioning, not during "Verify Networks". If your
targets need this change, you have to apply the "Offloading Modes"
settings manually to bootstrapped nodes.
E.g.: Our driver has the "rx-vlan-filter" default "on" (expected
"off") on the OpenStack interface "ETH1", preventing VLAN traffic from
passing during "Verify Networks".
From the Fuel Master console, identify target nodes' admin
IPs
$ fuel nodes
ssh into each target node and disable the
"rx-vlan-filter" on the affected physical interface(s) allocated
for OpenStack traffic (ETH1):
$ ssh root@10.20.0.6 ethtool -K eth1 rx-vlan-filter off
Repeat the step above for all affected nodes/interfaces in
the POD.
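The per-node step above can be wrapped in a small loop. A sketch, assuming
the same interface name (eth1) is affected on every node; the example IPs
are the admin addresses reported by "fuel nodes":

```shell
# Sketch: disable rx-vlan-filter on eth1 for each listed node.
disable_rx_vlan_filter() {
  for ip in "$@"; do
    ssh "root@${ip}" ethtool -K eth1 rx-vlan-filter off
  done
}
# Usage: disable_rx_vlan_filter 10.20.0.6 10.20.0.7
```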
Verify Networks
It is important that the Verify Networks action is performed, as it
will verify that connectivity works for the networks you have set
up.
Also, check that packages needed for a successful deployment can be
fetched:
From the FUEL UI, select the Networks tab,
then select "Connectivity check" on the left pane.
Select Verify Networks
Continue to fix your topology (physical switch, etc.) until the
Verification succeeded. Your network is configured
correctly. message is shown.
Deploy your Environment
After the configuration is complete and the network connectivity
checked, the environment needs to be deployed.
From the Dashboard tab click on Deploy
Changes. The process should take around 2 hours the first time
after a fresh Fuel Master installation. Part of the deploy process is to
build the target image, which can take between 30 and 60 minutes.
The entire deploy process goes through two phases:
Provisioning – at this stage the nodes have been booted
from PXE and are running a small bootstrap image in ramdisk. The
provisioning process will write the target image onto the disk and
make other preparations for running it after reboot.
OpenStack installation – at this stage the nodes have been
rebooted on the newly written target image and the OpenStack
components are installed and configured.
Installation Health-Check
Once the deploy process is complete, it is recommended to run a
health check from the Fuel menu, done in the following way:
Click the Health Check tab inside the FUEL
Web UI
Check the [Select All] option, then [Run Tests]
Allow tests to run and investigate results where
appropriate.
On Mixed Arch deployments, certain tests might fail due to
limitations detailed in the Known Issues and Limitations
chapter.
Smoke Test
Once deployment has completed successfully, a smoke test can and
should be done.
Click on the Horizon link in the Fuel
Dashboard
Login with credentials (admin/admin is the default):
If DPDK is used, the default flavor
m1.micro should be modified.
In order to do this, select Admin|System|Flavors and all flavors
will be displayed. Note that m1.micro has no
metadata. Specific metadata for DPDK will be added.
Click on No under Metadata. The "Update Flavor Metadata" window
will appear. Write "hw:mem_page_size" in Custom field and press the
plus sign.
Now set the value "any" for the "hw:mem_page_size" metadata you
created previously and press Save.
Select Project|Compute|Instances and all the instances (none in
this case) will be displayed. Click Launch Instance
to create a new instance, causing a wizard to appear.
Give a name to the first instance and press Next.
Select Image in the Select Boot
Source in the dropdown list. Add the TestVM image by
pressing on the add/plus sign, then press Next.
Add the m1.micro flavor by clicking on the
plus sign, then press Next.
Add the admin_internal_net network by
pressing on the plus sign and leave the defaults as they are.
The Launch Instance button is now active and
able to be selected.
After a short while, the first virtual machine is created and
running.
Repeat the steps above to create the second virtual machine.
Make a note of the IP addresses allocated:
Click on the first Instance Name link, the
Overview tab will appear. Select the
Console tab.
As indicated, if the console is not responding to keyboard
input, click the grey status bar. Enter, as shown, user
cirros and password
cubswin:).
It is now possible to verify the IP address received by the
machine and to ping the other instance. The ping should succeed.
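The verification above amounts to two commands typed at the cirros
console; the peer address below is a placeholder for the second
instance's allocated IP noted earlier.

```shell
# Sketch, run inside the first instance's console: show the address
# DHCP assigned to eth0, then ping the second instance.
smoke_ping() {
  ip addr show eth0
  ping -c 3 "$1"
}
# Usage: smoke_ping <ip-of-second-instance>
```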