VNF Chaining Example Use-case

The following is an example of how to set up and configure a branch-to-branch service comprised of two commercial VNFs (SD-WAN + Firewall). This service runs in a service chain on top of the Enea NFV Access virtualization platform and is deployed through the Enea uCPE Manager. The example setup uses the following commercial VNFs: Juniper vSRX as the SD-WAN VNF and FortiGate as the Router/Firewall.
Prerequisites

System requirements for the uCPE device:
- 3 x Network Interfaces
- 4 GB of RAM

The following files are needed for this example use case.

VNF images (for the VNF images and their license files, please contact the VNF provider):
- FortiGate VNF
- Juniper vSRX VNF

VNF configuration files, provided with your Enea NFV Access release:
- vSRX-domain-update-script
- vSRX-Site<x>.conf
- FortiFW-Site<x>.conf
VNF Chaining with FortiGate

The setup requires two physical appliances (uCPEs), each having three DPDK-compatible NICs and one interface available for uCPE management (i.e. connected to the Enea uCPE Manager). On each uCPE, one of the DPDK-compatible interfaces is connected back-to-back with one interface of the other uCPE device. This link simulates a WAN/uplink connection. Optionally, one additional device (PC/laptop) can be connected to the LAN port of each branch to run LAN-to-LAN connectivity tests.
[Figure: VNF Chaining with FortiGate]
Use-case Setup

Network Configuration:

Both branches in the example have similar setups, therefore the necessary steps are detailed for only one branch. The second branch shall be configured in the same way, adapting the corresponding VNF configuration files as needed.

1. Assign three physical interfaces to the DPDK (one for management, one for WAN and one for LAN). In the example, one of these interfaces gets an IP address through DHCP and is used exclusively for the management plane.
2. Create the following OVS-DPDK bridges:
   - vnf_mgmt_br: used by the VNF management ports.
   - wan_br: used by the service uplink connection. In our case, the Juniper vSRX will have its WAN virtual interface in this bridge.
   - sfc_br: used for creating the service chain. Each VNF will have a virtual interface in this bridge.
   - lan_br: used for the LAN interface of the FortiGate FW.
3. Add the corresponding DPDK ports (see Step 1) to the management, WAN and LAN bridges (sfc_br does not have a physical port attached to it).

The networking setup (Steps 1-3) can be modeled using the Offline Configuration entry, so that it is automatically provisioned on the uCPE once it is enrolled into the management system (uCPE Manager).

Onboarding the VNFs:

4. Onboard the Juniper vSRX using the VNF Onboarding Wizard, filling in the required fields with the following values:
   - The Flavor selected must have at least 2 CPUs and 4 GB RAM, since vSRX is quite resource consuming. Tested in-house with 4 vCPUs / 6 GB RAM.
   - Add three virtual interfaces: management, WAN and LAN.
   - Select ISO on the Cloud-Init tab.
5. Onboard the FortiGate FW using the VNF Onboarding Wizard:
   - The Flavor selected can be quite light in resource consumption, e.g. 1 CPU and 2 GB RAM.
   - Add three virtual interfaces: management, WAN and LAN.
   - Select ConfigDrive on the Cloud-Init tab.
   - Add the license as the Cloud-Init content in the Cloud-Init tab files.

Steps 4-5 are done only once, i.e. they will not be repeated for Site 2.
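When the bridges are created manually rather than through the uCPE Manager's Offline Configuration, Steps 2-3 roughly correspond to the Open vSwitch commands below. This is a sketch only: the port names dpdk0/dpdk1/dpdk2 and the PCI addresses are placeholders for your actual DPDK-bound devices, and the exact provisioning on Enea NFV Access is normally handled by the uCPE Manager.

```shell
# Create the four OVS-DPDK bridges (datapath_type=netdev selects the
# userspace/DPDK datapath instead of the kernel datapath)
ovs-vsctl add-br vnf_mgmt_br -- set bridge vnf_mgmt_br datapath_type=netdev
ovs-vsctl add-br wan_br      -- set bridge wan_br      datapath_type=netdev
ovs-vsctl add-br sfc_br      -- set bridge sfc_br      datapath_type=netdev
ovs-vsctl add-br lan_br      -- set bridge lan_br      datapath_type=netdev

# Attach the physical DPDK ports to the management, WAN and LAN bridges.
# sfc_br intentionally gets no physical port -- it only chains the VNFs.
# The dpdk-devargs PCI addresses below are placeholders.
ovs-vsctl add-port vnf_mgmt_br dpdk0 -- set Interface dpdk0 type=dpdk options:dpdk-devargs=0000:01:00.0
ovs-vsctl add-port wan_br dpdk1 -- set Interface dpdk1 type=dpdk options:dpdk-devargs=0000:01:00.1
ovs-vsctl add-port lan_br dpdk2 -- set Interface dpdk2 type=dpdk options:dpdk-devargs=0000:01:00.2
```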
Instantiating the VNFs:

6. Create the vSRX instance:
   - Use vSRX-Site1.iso as the Cloud-Init file. Please follow Juniper's documentation to create the vSRX-Site1.iso file.
   - The Domain Update Script field can be left empty for the Atom C3000 architecture, while for XeonD the vSRX-domain-update-script file will be used.
   - Add virtual interfaces:
     - Management interface added to vnf_mgmt_br.
     - WAN interface added to wan_br.
     - LAN interface added to sfc_br.
   - The login/password values for the vSRX VNF are root/vsrx1234, respectively.
7. Create the FortiGate FW instance:
   - Use FortiFW-Site1.conf as the Cloud-Init file.
   - Add the .lic file (not part of the folder) as the license file.
   - Add virtual interfaces:
     - Management interface added to vnf_mgmt_br.
     - WAN interface added to sfc_br.
     - LAN interface added to lan_br.
   - The login/password values for the FortiGate VNF are admin/<empty password>, respectively.

At this point the service will be up and running on Site1. Repeat the necessary steps for Site2, changing the configuration files accordingly. After the service is deployed on both branches, the VPN tunnel is established and LAN-to-LAN visibility can be verified by connecting one device to each uCPE LAN port.
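Juniper's vSRX documentation is the authoritative reference for building the Cloud-Init ISO. As a rough illustration only, a bootstrap ISO is commonly assembled from the day-0 configuration with genisoimage; the staging layout and file name juniper.conf used here are assumptions, so verify them against the Juniper documentation before use.

```shell
# Hedged sketch: package the provided day-0 configuration as an ISO.
# Assumption: the ISO is expected to contain the configuration as
# juniper.conf at its root -- confirm the exact layout with Juniper's docs.
mkdir -p iso-staging
cp vSRX-Site1.conf iso-staging/juniper.conf

# Build vSRX-Site1.iso from the staging directory
genisoimage -l -o vSRX-Site1.iso iso-staging
```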
Testing the Use-case

Before testing LAN-to-LAN connectivity, preliminary tests of the service can be run to ensure everything was set up properly. For instance, by connecting to the vSRX CLI (on either site), one can check the IKE security associations:

    root@Atom-C3000:~ # cli
    root@Atom-C3000> show security ike security-associations
    Index    State  Initiator cookie  Responder cookie  Mode        Remote Address
    1588673  UP     2f2047b144ebfce4  0000000000000000  Aggressive  10.1.1.2
    ...
    root@Atom-C3000> show security ike security-associations index 1588673 detail
    ...

Also from the vSRX CLI, a user can check that the VPN tunnel was established and get statistics on the packets passing through the tunnel:

    root@Atom-C3000> show security ipsec security-associations
    ...
    root@Atom-C3000> show security ipsec statistics index <xxxxx>
    ...

From the FortiGate Firewall CLI on Site 1, one can check connectivity to the remote FortiGate FW (on Site 2):

    FGVM080000136187 # execute ping 192.168.168.2
    PING 192.168.168.2 (192.168.168.2): 56 data bytes
    64 bytes from 192.168.168.2: icmp_seq=0 ttl=255 time=0.0 ms
    64 bytes from 192.168.168.2: icmp_seq=1 ttl=255 time=0.0 ms
    64 bytes from 192.168.168.2: icmp_seq=2 ttl=255 time=0.0 ms
    ...

Since the VNF management ports were configured to get IPs through DHCP, the user can use a Web-based management UI to check and modify the configuration settings of both vSRX and FortiGate. For example, in the case of vSRX, from the VNF CLI you can list the virtual interfaces as below:

    root@Atom-C3000> show interfaces terse
    ...
    fxp0.0   up   up   inet   172.24.15.92/22
    gre      up   up
    ipip     up   up
    ...

When using the provided configurations, the VNF management port for Juniper vSRX is always fxp0.0.

In the case of FortiGate, from the VNF CLI you can list the virtual interfaces as such:

    FGVM080000136187 # get system interface
    == [ port1 ]
    name: port1   mode: dhcp   ip: 172.24.15.94 255.255.252.0   status: up
    netbios-forward: disable   type: physical   netflow-sampler: disable
    sflow-sampler: disable...
    ...
When using the provided configurations, the VNF management port for FortiGate is always port1.

If functionality is as intended, LAN-to-LAN connectivity can be checked (through the VPN tunnel) by using two devices (PC/laptop) connected to the LAN ports of each uCPE. Optionally, these devices can be simulated by using Enea's sample VNF running on both uCPEs and connected to the lan_br on each side. Please note that instructions for onboarding and instantiating this VNF are not in the scope of this document.

Since the FortiGate VNF, which acts as router and firewall, is configured to be the DHCP server for the LAN network, the device interface connected to the uCPE LAN port has to be configured to get dynamically assigned IPs. These IPs are in the 172.0.0.0/24 network for Site1 and the 172.10.10.0/24 network for Site2. Therefore, site-to-site connectivity can be checked (from Site1) as such:

    root@atom-c3000:~# ping 172.10.10.2
    PING 172.10.10.2 (172.10.10.2): 56 data bytes
    ...
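On a Linux test device plugged into a uCPE LAN port, obtaining a lease and running the connectivity check might look like the following. This is a sketch under stated assumptions: eth0 is assumed to be the interface facing the uCPE, and a standard dhclient/iproute2 toolset is assumed to be available.

```shell
# Request a DHCP lease from the FortiGate VNF acting as the LAN DHCP server
dhclient eth0

# Confirm the dynamically assigned address
# (expected in 172.0.0.0/24 on Site1, 172.10.10.0/24 on Site2)
ip -4 addr show dev eth0

# From the Site1 device, check site-to-site connectivity through the
# VPN tunnel towards a Site2 LAN device
ping -c 3 172.10.10.2
```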