**TODO: change picture.**
![SA Demo](./images/5gcn_vpp_upf.jpg)
**Reading time: ~ 50mins**
**Tutorial replication time: ~ 1h30mins**
Note: In case readers are interested in deploying debuggers/developers core network...
**TABLE OF CONTENTS**
1. [Understanding the (e)BPF-XDP](#1-understanding-ebpf-xdp)
    1. [(extended) Berkeley Packet Filter ((e)BPF)](#1-1-ebpf)
    2. [eXpress Data Path (XDP)](#1-2-xdp)
2. [UPF Architecture](#2-upf-architecture)
    1. [Management layer](#1-management-layer)
    2. [Datapath layer](#2-datapath-layer)
3. [OAI 5G Testbed](#3-oai-5g-testbed)
4. [Pre-requisites](#4-pre-requisites)
    1. [5G CN pre-requisites](#1-cn-pre-requisites)
    2. [UPF pre-requisites](#2-upf-pre-requisites)
5. [Deployment](#5-deployment)
2. [Building Container Images](./BUILD_IMAGES.md) or [Retrieving Container Images](./RETRIEVE_OFFICIAL_IMAGES.md)
3. Configuring Host Machines
4. Configuring OAI 5G Core Network Functions
8. [Undeploy the Core Network](#8-undeploy-the-core-network)
9. [Notes](#9-notes)
-----------------------------------------------------------------------------------------
## 1. Understanding the (e)BPF-XDP
### i. (extended) Berkeley Packet Filter ((e)BPF)
eBPF is a virtual machine built into the Linux kernel. It runs sandboxed programs in a privileged context (e.g., the OS kernel) to *__safely__* and *__efficiently__* extend the capabilities of the kernel with custom code that can be injected at run time, without requiring changes to the kernel source code or loading kernel modules. An eBPF program is *event-driven*: it is triggered when the kernel or an application passes a certain *hook point*. Predefined hook points include system calls, kernel function entry and exit, kernel trace points, and network events, to name a few. If a predefined hook does not exist for a particular need, it is possible to create one with a kernel probe (kprobe) or a user probe (uprobe), almost anywhere.
An eBPF program goes through several steps before being executed. A series of components is used at these steps to compile, verify, and execute the eBPF program.
- It starts with a __compilation step__, where the eBPF program, written in a (restricted) C language, is compiled by the *Clang/LLVM* toolchain into bytecode that *libelf* stores in an Executable and Linkable Format (ELF) object file.
- This is followed by a __loading step__, where *libbpf* loads the ELF file (i.e., the bytecode) to the identified hook point via system calls.
- Upon injection, the bytecode goes through a __verification step__, whose aim is to guarantee that the code cannot harm the kernel, for example by checking that only allowed memory accesses are performed and that the program will eventually terminate (i.e., no infinite loops). As a consequence, eBPF programs are subject to some limitations, such as a maximum number of instructions and no support for unbounded loops. Moreover, the main way to share data between eBPF programs and user space is through *maps*, which are key-value stores with different access semantics (e.g., array, hash, and queue).
- After the program is verified, it goes through the final step before execution: __Just-In-Time (JIT) compilation__. Here, the generic bytecode is translated into a machine-specific instruction set to optimize the execution speed of the program. This lets eBPF programs run as efficiently as natively compiled kernel code.
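As a concrete illustration of the loading and verification steps, below is a minimal user-space sketch using *libbpf*; the object file name `xdp_prog.o` is hypothetical and not taken from the OAI sources.

```c
#include <stdio.h>
#include <bpf/libbpf.h>

int main(void)
{
    /* Open the ELF object produced by the Clang/LLVM toolchain (hypothetical file name). */
    struct bpf_object *obj = bpf_object__open_file("xdp_prog.o", NULL);
    if (!obj) {
        fprintf(stderr, "failed to open BPF object\n");
        return 1;
    }

    /* Loading injects the bytecode into the kernel: this is where the
     * verifier checks memory accesses and program termination. */
    if (bpf_object__load(obj)) {
        fprintf(stderr, "verification/loading failed\n");
        return 1;
    }

    printf("BPF object loaded and verified\n");
    bpf_object__close(obj);
    return 0;
}
```

Such a loader would typically be built with `gcc loader.c -lbpf`, while the BPF object itself is produced by Clang, e.g. `clang -O2 -g -target bpf -c xdp_prog.c -o xdp_prog.o`.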
### ii. eXpress Data Path (XDP)
XDP is a *high-performance packet processing* framework that enables datapath (DP) packet processing in the Linux kernel at the earliest stage of the networking stack. It is located in the reception chain of the *network device driver*, before the socket buffer (SKB) allocation, at a point referred to as the hook point. XDP allows the execution of custom eBPF programs written in C and compiled into *eBPF bytecode*. These eBPF programs are run as early as possible, usually *immediately* upon packet reception at the network interface. This early interception makes XDP highly efficient and suitable for use cases that require low-latency, high-performance packet handling.
XDP provides three models to *link and attach* eBPF programs to a network interface:
- *Generic XDP* - loaded into the kernel as part of the ordinary network path. It is an easy way, mostly used to test XDP programs on any (generic) hardware, but it does not provide the full performance benefits.
- *Native XDP* - loaded by the network card driver as part of its initial receive path. While it requires support from the network card driver, this mode offers better performance.
- *Offloaded XDP* - loaded directly onto the network interface card and executed without using the CPU. It requires support from the network interface device.
Note that *not* all network device drivers implement XDP hooks; in such cases, the generic XDP hook is used. In Linux 4.18 and later, XDP hooks are supported by the following network device drivers: veth, virtio_net, tap, tun, qede, thunder, bnxt, ixgbe, nfp, i40e, mlx5, and mlx4.
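The attach model is selected when the program is attached to the interface. Below is a minimal sketch using *libbpf*; the helper name and interface handling are illustrative and not taken from the OAI code.

```c
#include <bpf/libbpf.h>
#include <linux/if_link.h>
#include <linux/types.h>
#include <net/if.h>

/* Attach an already-loaded XDP program (identified by its fd) to an interface.
 * mode is one of XDP_FLAGS_SKB_MODE (generic XDP), XDP_FLAGS_DRV_MODE (native XDP)
 * or XDP_FLAGS_HW_MODE (offloaded XDP). */
static int attach_xdp(int prog_fd, const char *ifname, __u32 mode)
{
    int ifindex = if_nametoindex(ifname);
    if (ifindex == 0)
        return -1;
    return bpf_xdp_attach(ifindex, prog_fd, mode, NULL);
}
```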
In what follows, we describe the main steps in the XDP call flow.
- (1) __Packet Arrival__: When a packet is received by the NIC, the network driver is notified and passes the packet to the XDP program attached to the interface.
- (2) __XDP Program Execution__: The XDP program is executed by the eBPF virtual machine. The program can perform various operations on the packet, such as packet filtering, forwarding, or modification. Based on the logic defined in the XDP program, the program returns a verdict to the XDP hook.
- (3) __Verdict Decision__: The XDP program returns one of several verdict options, in the form of a program return code, to indicate the desired action for the packet (i.e., drop it, pass it, transmit it back, or redirect it). This return code is an integer between 0 and 4, mapping the predefined actions shown in the table below.
- (4) __Post-XDP Processing__: The driver applies the verdict returned by the XDP program to the packet. If the packet is to be dropped (code 0 or 1), it is immediately discarded. If the packet is to be forwarded or passed to the network stack (code 2, 3 or 4), the driver sets the appropriate fields in the packet's header and passes it along.
- (5) __Further Processing__: If the packet continues to the Linux networking stack, it undergoes additional processing, such as protocol parsing, routing, and higher-level networking operations.
XDP is widely used in high-performance networking applications, such as NFV, SDN, and DDoS mitigation. It has become a popular tool for accelerating packet processing by offloading it from user-space applications to the kernel.
| Value | Action | Description |
| ----- | -------------- | ----------------------------------------------------- |
| 0 | `XDP_ABORTED` | eBPF program error, drop the packet |
| 1 | `XDP_DROP` | Drop the packet |
| 2 | `XDP_PASS` | Allow further processing by the network stack |
| 3 | `XDP_TX` | Transmit the packet back out the receiving interface |
| 4 | `XDP_REDIRECT` | Forward the packet to a different interface |
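To make these verdicts concrete, here is a minimal XDP program in restricted C, not part of the OAI UPF sources, that passes IPv4 packets to the network stack and drops everything else. It would be compiled with Clang (e.g. `clang -O2 -g -target bpf -c xdp_filter.c -o xdp_filter.o`) and loaded/attached as described above.

```c
#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

SEC("xdp")
int xdp_filter(struct xdp_md *ctx)
{
    void *data     = (void *)(long)ctx->data;
    void *data_end = (void *)(long)ctx->data_end;

    /* Bounds check required by the verifier before reading the Ethernet header. */
    struct ethhdr *eth = data;
    if ((void *)(eth + 1) > data_end)
        return XDP_ABORTED;

    /* IPv4 goes on to the normal network stack, everything else is dropped. */
    if (eth->h_proto == bpf_htons(ETH_P_IP))
        return XDP_PASS;

    return XDP_DROP;
}

char _license[] SEC("license") = "GPL";
```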
---------------------------------------------------------------------------------------------------------------------
## 2. UPF Architecture
OAI-UPF-eBPF, as part of the OAI 5G mobile core network, implements a data network gateway function. It communicates with the SMF via the Packet Forwarding Control Protocol (PFCP) over the N4 interface and forwards packets between the access and data networks using the N3 and N6 interfaces, respectively. These two main UPF roles are implemented in two separate components: the Management layer and the Datapath layer.
<figure>
<img
src="./images/5gcn_eBPF_upf.png"
alt="This is the UPF architecture using the eBPF technology. The architecture is designed in two layers: user and kernel space layers"
width="900"
height="600" />
<figcaption><b><font size = "5">Figure 1: UPF Architecture: eBPF XDP based</font></b></figcaption>
</figure>
### i. Management layer
The Management layer is a user-space library responsible for PFCP session management. It receives packet processing rules from the SMF via the N4 reference point and configures the Datapath for proper forwarding. It implements functions such as `handle_pfcp_session_establishment_request()`, `handle_pfcp_session_modification_request()`, and `handle_pfcp_session_deletion_request()` to create, update, and delete PFCP sessions, respectively. In addition, this layer manages the eBPF program lifecycle via CRUD functions; that is to say, it creates eBPF sessions (distinguishing the uplink and downlink directions), updates them, or deletes them. It also compares PDRs by precedence, extracts FARs, and creates and manages eBPF maps, to name a few of its roles.
When a PFCP session request is received via the N4 interface, the request is parsed by the `PFCP Session Manager`, which calls the `eBPF Program Manager` to dynamically load, update, or delete the eBPF bytecode representing the PFCP session context, in case of an establishment, modification, or deletion request, respectively.
There is one eBPF program running in kernel space for each PFCP session. The program contains the eBPF maps used to store the PDRs and FARs. All communication between user space and kernel space goes through the libbpf library, which is maintained in the Linux kernel source tree. The PFCP Session Manager translates the received structures into eBPF map entries and updates the maps accordingly. The PFCP session context is created in the Datapath layer, where the user traffic will be handled.
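As an illustration of how the Management layer pushes a rule into an eBPF map, here is a minimal sketch using *libbpf*; the key/value layout (a TEID mapped to a FAR ID) is hypothetical and much simpler than the actual OAI structures.

```c
#include <bpf/bpf.h>
#include <linux/bpf.h>
#include <stdint.h>

/* Hypothetical map layout: key = TEID of an uplink PDR, value = FAR ID.
 * The real OAI UPF stores richer PDR/FAR structures; this only shows the mechanism. */
int store_uplink_pdr(int map_fd, uint32_t teid, uint32_t far_id)
{
    /* BPF_ANY: create the entry if it does not exist, update it otherwise. */
    return bpf_map_update_elem(map_fd, &teid, &far_id, BPF_ANY);
}
```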
### ii. Datapath layer
The Datapath layer is a kernel-space layer based on eBPF-XDP packet processing. Its job is to process the user traffic as fast as possible, which implies handling it as close as possible to the NIC by using XDP hooks. When the UPF is started, a service function chain is created from three main components (a Parser, a Detector, and a Forwarder): the `PFCP Session Lookup` as the traffic parser, the `PFCP Session's PDR Lookup` as the traffic detector, and the `FAR Program` to forward the traffic. Each of these components is an eBPF XDP program, and together they form a pipeline with several stages. At each stage a decision is made on the packet: whether it will be passed to the next stage (XDP_PASS action), dropped for some reason (XDP_DROP), or redirected (XDP_REDIRECT).
The Parser (i.e., PFCP Session Lookup) parses the ingress traffic to check whether it is an uplink (GTP-U) or a downlink (UDP) flow. For uplink (respectively, downlink) traffic, the key (TEID, UE SRC IP) (respectively, (DST PORT, TOS, UE DST IP)) is used to get the PFCP session context with a matching PDR. A tail call to the Detector (PFCP Session's PDR Lookup) is then executed. Here, the Traffic Detector searches the eBPF hash maps for the highest-precedence PDR associated with the packet.
If such a PDR is found, the packet passes to the Forwarder (i.e., the FAR Program). The Forwarder uses the FAR ID obtained from the PDR (with the highest precedence) to find the FAR object, which is stored in an eBPF hash map. This FAR object contains the action to be applied (e.g., forward), the outer header creation, and the destination interface. Besides that, the FAR Program accesses other eBPF maps to look up the MAC address of the next hop and the index of the destination interface to which the packet will be redirected, as illustrated in the sketch below.
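The following sketch illustrates the last stage of this pipeline in restricted C: looking up a FAR in an eBPF hash map and redirecting the packet to the interface it selects. The map name, FAR structure, and hard-coded FAR ID are illustrative only and do not reflect the actual OAI data structures.

```c
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

/* Illustrative FAR entry; the real OAI structure carries more fields
 * (apply action, outer header creation, destination interface, ...). */
struct far_entry {
    __u8  apply_action;   /* e.g. forward / drop / buffer */
    __u32 out_ifindex;    /* index of the destination interface */
};

/* Hypothetical hash map keyed by FAR ID, populated by the Management layer. */
struct {
    __uint(type, BPF_MAP_TYPE_HASH);
    __uint(max_entries, 1024);
    __type(key, __u32);
    __type(value, struct far_entry);
} far_map SEC(".maps");

SEC("xdp")
int far_program(struct xdp_md *ctx)
{
    __u32 far_id = 1; /* in the real pipeline this comes from the matched PDR */

    struct far_entry *far = bpf_map_lookup_elem(&far_map, &far_id);
    if (!far)
        return XDP_DROP;  /* no forwarding rule found: drop the packet */

    /* Redirect the packet to the interface selected by the FAR. */
    return bpf_redirect(far->out_ifindex, 0);
}

char _license[] SEC("license") = "GPL";
```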
---------------------------------------------------------------------------------------------------------------------
## 3. OAI 5G Testbed
<figure>
<img
src="./images/5gcn_eBPF_testbed.png"
alt="This is the OAI 5GC architecture testbed. The architecture is designed in two layers: user and kernel space layers"
width="900"
height="400" />
<figcaption><b><font size = "5">Figure 2: OAI 5G Testbed</font></b></figcaption>
</figure>
* In this demo, the image tags and commits that were used are listed below; follow [Building images](./BUILD_IMAGES.md) to build images with these tags.
You can also retrieve the images from `docker-hub`. See [Retrieving images](./RETRIEVE_OFFICIAL_IMAGES.md).
**TODO: update this table before release.**
| CNF Name | Branch Name | Tag used at time of writing | Ubuntu 20.04 | Ubuntu 22.04 | RHEL8 |
| ----------- |:-------------- | ----------------------------- | ------------ | --------------|------------- |
| AMF | `master` | `v1.6.0` | x | X | x |
| AUSF | `master` | `v1.6.0` | x | X | x |
| NRF | `master` | `v1.6.0` | x | X | x |
| SMF | `master` | `v1.6.0` | x | X | x |
| UDR | `master` | `v1.6.0` | x | X | x |
| UDM | `master` | `v1.6.0` | x | X | x |
| UPF | `master` | `v1.6.0` | X | X | |
<br/>
In previous tutorials, we were using the `oai-spgwu-tiny` UPF implementation. That implementation has limited throughput capacity and is a pure software solution.
In this tutorial, we integrate the OAI 5G core with a UPF implementation that uses the eBPF kernel technology.
**About UPF-eBPF**
**TODO: add description here.**
The testbed is composed of four main machines, defined as follows:
- `OAI-5G-CN`: This machine hosts the OAI 5G core control plane, composed of the functions `SMF/AMF/NRF/PCF/UDM/AUSF` and a `MySQL` database.
- `OAI-UPF-eBPF`: This machine hosts the OAI UPF. It has three interfaces: one used for management and the N4 interface, and two others for the N3 and N6 interfaces.
- `Amarisoft-gNB`: This is the Amarisoft gNodeB.
- `OAI-EXT-DN`: This machine is used as an external gateway; it performs Source Network Address Translation (SNAT).
In addition, we use a Quectel UE to generate the user traffic.
Let's begin!
* Steps 1 to 4 are similar to previous tutorials such as the [minimalist](./DEPLOY_SA5G_MINI_WITH_GNBSIM.md) or [basic](./DEPLOY_SA5G_BASIC_DEPLOYMENT.md) deployments. Please follow those steps to deploy the OAI 5G core network components.
---------------------------------------------------------------------------------------------------------------------
## 4. Pre-requisites
### i. 5G CN pre-requisites
Create a folder where you can store all the result files of the tutorial and later compare them with our provided result files. We recommend creating exactly the same folder so as not to break the flow of commands afterwards.
``` console
docker-compose-host $: mkdir -p /tmp/oai/upf-ebpf-gnbsim
docker-compose-host $: chmod 777 /tmp/oai/upf-ebpf-gnbsim
```
### ii. UPF pre-requisites
* Git
* gcc
* Clang
* make
* cmake
* LLVM
* binutils-dev
* libbpf-dev
* libelf-dev
* libpcap-dev
* zlib1g-dev
* libcap-dev
* python3-docutils
* tar
If you want to build and run OAI-UPF-eBPF from sources, you can first install these dependencies on Ubuntu 20.04 or 22.04 using the following command:
```console
oai-cn5g-upf$ sudo apt install -y git gcc-multilib clang make cmake binutils-dev \
libbpf-dev libelf-dev libpcap-dev zlib1g-dev \
llvm libcap-dev python3-docutils tar
```
---------------------------------------------------------------------------------------------------------------------
## 5. Deploying OAI 5G Core Network
* We will use the same docker-compose wrapper script that was used in previous tutorials to set up the 5G CN with `UPF-eBPF`. Use the `--help` option to check how to use this wrapper script.
``` console
$ docker logs oai-nrf
[2023-07-21 13:22:16.750] [nrf_app] [debug] DNN: default
...
```
2. SMF PFCP association with UPF
``` console
$ docker logs oai-smf
[2023-07-21 11:22:16.732] [config ] [info] ==== OPENAIRINTERFACE smf vBranch: HEAD Abrev. Hash: 0602c5d7 Date: Tue Jul 18 16:34:07 2023 +0000 ====
``` console
docker-compose-host $: docker logs oai-upf > /tmp/oai/upf-ebpf-gnbsim/upf.log 2>&1
docker-compose-host $: docker logs oai-udr > /tmp/oai/upf-ebpf-gnbsim/udr.log 2>&1
docker-compose-host $: docker logs oai-udm > /tmp/oai/upf-ebpf-gnbsim/udm.log 2>&1
docker-compose-host $: docker logs oai-ausf > /tmp/oai/upf-ebpf-gnbsim/ausf.log 2>&1
docker-compose-host $: docker logs oai-ext-dn > /tmp/oai/upf-ebpf-gnbsim/ext-dn.log 2>&1
docker-compose-host $: docker logs gnbsim-ebpf > /tmp/oai/upf-ebpf-gnbsim/gnbsim-ebpf.log 2>&1
```