Tacker
Tacker is an OpenStack project that builds a generic VNF Manager and an
NFV Orchestrator to deploy and operate Network Services and Virtual
Network Functions on an NFV infrastructure platform. It is based on the
ETSI MANO architectural framework and provides a functional stack to
orchestrate Network Services end-to-end using VNFs.
Installing Tacker
Execute the following in the Fuel master console:

Go to the Tacker plugin directory:

cd /var/www/nailgun/plugins/tacker-1.0/repositories/tacker

Install Ansible:

yum install ansible

Run the initialization script:

./files/init.sh

This will create an Ansible hosts file, /etc/ansible/hosts, of the form:
[controllers]
node-5 mac_addr=68:05:ca:46:8b:64 ipmi_ip=0.0.0.0 ipmi_user=****** ipmi_pass=******
node-2 mac_addr=68:05:ca:46:8c:d4 ipmi_ip=0.0.0.0 ipmi_user=****** ipmi_pass=******
node-1 mac_addr=68:05:ca:46:8c:45 ipmi_ip=0.0.0.0 ipmi_user=****** ipmi_pass=******
[main_cont]
node-5
[computes]
node-3 mac_addr=68:05:ca:46:8c:c9 ipmi_ip=0.0.0.0 ipmi_user=****** ipmi_pass=******
node-4 mac_addr=68:05:ca:46:8c:c2 ipmi_ip=0.0.0.0 ipmi_user=****** ipmi_pass=******
[nodes:children]
controllers
computes
Use your favorite text editor to edit /etc/ansible/hosts and set, for
every node, ipmi_ip to the IP address of the node's IPMI interface,
ipmi_user to the user for accessing the IPMI interface, and ipmi_pass
to that user's password.
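After editing, it is easy to miss a node. As a cross-check, a minimal Python sketch (stdlib only) that scans an inventory of this form and reports node lines whose ipmi_* fields still hold the generated placeholder values; the inventory text, IP address, and credentials below are illustrative placeholders, not values from a real file:

```python
# Example inventory in the format produced by init.sh; the IPMI address
# and credentials here are made-up placeholders for illustration.
SAMPLE = """\
[controllers]
node-5 mac_addr=68:05:ca:46:8b:64 ipmi_ip=10.20.0.5 ipmi_user=admin ipmi_pass=secret
[main_cont]
node-5
[computes]
node-3 mac_addr=68:05:ca:46:8c:c9 ipmi_ip=10.20.0.3 ipmi_user=admin ipmi_pass=secret
[nodes:children]
controllers
computes
"""

def unedited_nodes(text):
    """Return node names whose ipmi_* fields were not filled in."""
    bad = []
    for line in text.splitlines():
        line = line.strip()
        # Skip blank lines, group headers, and bare group/node references.
        if not line or line.startswith("[") or " " not in line:
            continue
        name, _, rest = line.partition(" ")
        fields = dict(f.split("=", 1) for f in rest.split() if "=" in f)
        for key in ("ipmi_ip", "ipmi_user", "ipmi_pass"):
            if fields.get(key) in (None, "0.0.0.0", "******"):
                bad.append(name)
                break
    return bad

print(unedited_nodes(SAMPLE))  # -> [] once every node has real values
```

Running it against the real /etc/ansible/hosts (read the file instead of SAMPLE) should print an empty list before you proceed to the playbook.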
Run the Ansible playbook responsible for installing and configuring the
Tacker plugin:

ansible-playbook tacker_deploy.yaml
Using Tacker
After Tacker has been installed and configured, perform the following
steps to start a VNF:
Log in over SSH to the controller node where Tacker has been installed
(node-5 in this example) and authenticate with OpenStack. The controller
where Tacker has been installed is the one listed under the
[main_cont] group in /etc/ansible/hosts:
[root@fuel ~]# ssh node-5
root@node-5:~# . openrc
Activate the virtual environment for Tacker:
root@node-5:~# source /usr/share/tacker_venv/bin/activate
All subsequent commands will be executed in the venv.
Create a testvnf.yaml file with the following contents:
tosca_definitions_version: tosca_simple_profile_for_nfv_1_0_0

description: Demo example

metadata:
  template_name: sample-tosca-vnfd

topology_template:
  node_templates:
    VDU1:
      type: tosca.nodes.nfv.VDU.Tacker
      capabilities:
        nfv_compute:
          properties:
            num_cpus: 1
            mem_size: 512 MB
            disk_size: 1 GB
      properties:
        image: TestVM
        flavor: m1.micro
        availability_zone: nova
        mgmt_driver: noop
        metadata: {metering.vnf: VDU1}
        monitoring_policy:
          name: noop # was ping
          # parameters:
          #   monitoring_delay: 45
          #   count: 3
          #   interval: 1
          #   timeout: 2
          # actions:
          #   failure: respawn
    CP1:
      type: tosca.nodes.nfv.CP.Tacker
      properties:
        order: 0
        anti_spoofing_protection: false
      requirements:
        - virtualLink:
            node: VL1
        - virtualBinding:
            node: VDU1
    VL1:
      type: tosca.nodes.nfv.VL
      properties:
        network_name: admin_internal_net
        vendor: Tacker
    FIP1:
      type: tosca.nodes.network.FloatingIP
      properties:
        floating_network: admin_floating_net
      requirements:
        - link:
            node: CP1

  outputs:
    mgmt_ip-VDU1:
      description: 'management ip address'
      value: { get_attr: [FIP1, floating_ip_address] }

  policies:
    - vdu1_respawn_on_error:
        type: tosca.policies.tacker.AlarmingEvent
        targets: [VDU1]
        triggers:
          vdu1_down:
            event_type:
              type: tosca.events.resource.event
            implementation: ceilometer
            condition:
              resource: compute.instance.*
              field: traits.state
              value: error
              op: eq
            metadata: VDU1
            actions: [respawn]
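The descriptor wires four nodes together: the port CP1 binds to the VM VDU1 and links to the network VL1, and the floating IP FIP1 attaches to CP1. This requirement graph is easy to break when editing the template, so here is a minimal Python sketch (node and target names copied from the template above) that checks every requirement points at a node the template actually defines:

```python
# The node graph described by testvnf.yaml: name -> list of
# (requirement, target) pairs, taken from the template above.
node_templates = {
    "VDU1": [],                           # the VM itself, no requirements
    "CP1": [("virtualLink", "VL1"),       # port attaches to the network...
            ("virtualBinding", "VDU1")],  # ...and plugs into the VM
    "VL1": [],                            # the admin_internal_net network
    "FIP1": [("link", "CP1")],            # floating IP attaches to the port
}

# Collect requirements whose target node is not defined in the template.
dangling = [
    (name, target)
    for name, reqs in node_templates.items()
    for _, target in reqs
    if target not in node_templates
]
print(dangling)  # -> [] : every requirement resolves
```

If, for instance, VL1 were renamed without updating CP1's virtualLink, the check would flag the stale reference before tacker vnfd-create rejects the template.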
Define a VNF Descriptor (VNFD):
(tacker_venv) root@node-5:~# tacker vnfd-create --vnfd-file testvnf.yaml TestVNF
Start a VNF:
(tacker_venv) root@node-5:~# tacker vnf-create --vnfd-name TestVNF someVnf
Verify:
(tacker_venv) root@node-5:~# tacker vnf-list
+------------+---------+-----------------------+--------+----------+-----------+
| id | name | mgmt_url | status | vim_id | vnfd_id |
+------------+---------+-----------------------+--------+----------+-----------+
| <VNF-UUID> | someVnf | {"VDU1": "<MGMT-IP>"} | ACTIVE | <VIM-ID> | <VNFD-ID> |
+------------+---------+-----------------------+--------+----------+-----------+
root@node-5:~# nova list
+-----------+-----------+--------+------------+-------------+-------------------------------------------------+
| ID        | Name      | Status | Task State | Power State | Networks                                        |
+-----------+-----------+--------+------------+-------------+-------------------------------------------------+
| <VM-UUID> | ta-<NAME> | ACTIVE | -          | Running     | admin_internal_net=<INTERNAL-IP>, <FLOATING-IP> |
+-----------+-----------+--------+------------+-------------+-------------------------------------------------+
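The mgmt_url column reported by tacker vnf-list is a small JSON map from VDU name to management address, so it can be consumed programmatically. A minimal Python sketch (the address below is a placeholder, not output from a real deployment):

```python
import json

# mgmt_url value as it appears in the `tacker vnf-list` output above;
# the IP address here is an illustrative placeholder.
mgmt_url = '{"VDU1": "172.16.0.130"}'

# Parse the JSON map and pick out the management address of VDU1.
addresses = json.loads(mgmt_url)
print(addresses["VDU1"])  # -> 172.16.0.130
```

A VNFD with several VDUs would yield one entry per VDU in the same map.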