(originally created by David Price Dec 2017)
nFAPI
OAI can now be run in both monolithic eNB mode and nFAPI mode (in which the PNF runs the PHY and the VNF runs the MAC and above).
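As a rough sketch of that split (the types and calls below are simplified stand-ins, not the actual open-nFAPI API): each subframe, the VNF's MAC/scheduler emits downlink configuration messages that travel over the nFAPI P7 link to the PNF's PHY, which drives the radio.

    #include <stdint.h>

    /* Simplified stand-in for a per-subframe P7 message. */
    typedef struct {
        uint16_t sfn; /* system frame number, 0..1023 */
        uint8_t  sf;  /* subframe, 0..9 */
        /* ... DCI / DLSCH PDU lists omitted ... */
    } dl_config_req_t;

    /* VNF side: MAC builds the subframe's downlink config and ships
     * it to the PNF (the transport call is hypothetical). */
    void vnf_schedule(uint16_t sfn, uint8_t sf)
    {
        dl_config_req_t req = { .sfn = sfn, .sf = sf };
        /* p7_send(&req); */
        (void)req;
    }

    /* PNF side: PHY receives the config and programs TX for that
     * subframe on the radio. */
    void pnf_handle_dl_config(const dl_config_req_t *req)
    {
        (void)req; /* program PHY TX for (req->sfn, req->sf) */
    }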
Open-nFAPI
This builds on the open-source nFAPI library available at https://github.com/cisco/open-nFAPI
The best place to start is normal (monolithic) eNB mode. This will prove out your EPC, radio and eNB configuration. All testing has been done using an Ettus B210.
Config files
- eNB (monolithic) configuration file: enb.band7.tm1.50PRB.usrpb210.conf
- PNF configuration file: oaiL1.nfapi.usrpb210.conf
- VNF configuration file: rcc.band7.tm1.50PRB.nfapi.conf
Hardware
You will need an extra PC for the PNF or VNF function. Currently I have the PNF and VNF running on identical hardware with identical configuration. From a Linux point of view, the PNF and VNF are configured just like an eNB (low-latency kernel, CPU speed set to maximum, etc.). It is possible that the VNF can be run with less strict timing than the PNF; however, I have not tested that mode of operation.
- EPC (HSS / MME / SPGW)
- VNF
- PNF (with the Ettus B210)
Startup
It will take a lot longer to start in nFAPI mode than in monolithic mode.
- The PNF connects to the VNF.
- They exchange messages, resulting in the PNF being passed the VNF's configuration (e.g. various MIB values).
- The PNF then performs the NODE_SYNC procedure, which attempts to align the VNF and the PNF (the idea is sketched below). This results in the VNF's wake-up time being adjusted according to the offset and jitter between the two nodes.
- Once the timing has locked, the cell can be brought up.
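The mechanism is an NTP-style timestamp exchange. A minimal sketch of the offset estimation, assuming the t1/t2/t3 timestamps carried by the DL_NODE_SYNC and UL_NODE_SYNC messages (the structs and arithmetic here are simplified illustrations, not the open-nFAPI structs):

    #include <stdint.h>

    typedef struct { uint32_t t1; } dl_node_sync_t;         /* VNF -> PNF */
    typedef struct { uint32_t t1, t2, t3; } ul_node_sync_t; /* PNF -> VNF */

    /* VNF side: called when the UL_NODE_SYNC reply arrives at local
     * time t4. t1 = VNF send time (echoed back), t2 = PNF receive
     * time, t3 = PNF send time. Returns the estimated offset of the
     * PNF clock relative to the VNF clock. */
    static int32_t estimate_offset(const ul_node_sync_t *m, uint32_t t4)
    {
        /* round trip minus the PNF's internal turnaround */
        int32_t rtt     = (int32_t)(t4 - m->t1) - (int32_t)(m->t3 - m->t2);
        int32_t latency = rtt / 2; /* assume a symmetric link */
        return (int32_t)(m->t2 - m->t1) - latency;
    }

Repeating the exchange and filtering the result lets the VNF adjust its wake-up point so that its messages for a given subframe still reach the PNF in time.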
Timing
nFAPI is designed to work on a link with up to 6ms latency between the PNF and VNF. Normally, however, due to the timing of things like HARQ, it is easier to run with less than 4ms. OAI adds another level of complication in that TX processing is done ahead of the current subframe; in normal monolithic mode TX is 4ms ahead, but that would consume your whole delay budget. Therefore I set the offset using the sf_ahead variable: in nFAPI mode I set it to 2ms, and in monolithic mode I left it at 4ms. This is likely to cause problems at higher data rates; however, other features are coming along that may help with that (parallelism of TX processing).
Ideally the PNF and VNF will be time aligned. Although this is not strictly necessary, any jitter between them will eat into your delay budget.
nFAPI attempts to get messages destined for the PNF from the VNF in time for the relevant subframe. To help with that, the VNF runs ahead of the PNF: for example, the VNF may be processing TX(SFN:1 SF:5) while the PNF is processing TX(SFN:1 SF:3). Bear in mind that TX is ahead of RX by 2 subframes in nFAPI mode, so the PNF is processing RX(SFN:1 SF:1) at the same time.
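Since subframes wrap at 10 and frames at 1024, advancing an (SFN, SF) pair by sf_ahead needs modular arithmetic. A minimal illustrative helper (not OAI's actual implementation):

    #include <stdint.h>

    /* Advance an (SFN, SF) pair by 'ahead' subframes, wrapping SF at
     * 10 and SFN at 1024. Illustrative only. */
    static void advance_sfn_sf(uint16_t *sfn, uint8_t *sf, uint16_t ahead)
    {
        uint32_t abs_sf = ((uint32_t)*sfn * 10 + *sf + ahead) % (1024 * 10);
        *sfn = (uint16_t)(abs_sf / 10);
        *sf  = (uint8_t)(abs_sf % 10);
    }

With ahead = 2, RX(SFN:1 SF:1) maps to TX(SFN:1 SF:3), matching the example above.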
PTP / IEEE1588
To align the VNF and PNF you can run PTP on both machines. Because the Ettus supplies the timing, the PNF should be the master.
PNF command
sudo ptpd --interface eno1 -M
VNF command
sudo ptpd --interface eno1 -s
Wireshark
There is a Wireshark dissector within the open-nFAPI source. Using it allows you to decode the messages being sent between the PNF and VNF (and vice versa). To activate it you need to tell Wireshark that packets on a certain port are nFAPI: perform a capture and find the UDP packets, then right-click on one, select "Decode As..." and choose nFAPI. For the rest of the session those packets will be decoded as nFAPI. If you restart Wireshark you will have to perform the "Decode As..." step again.
Uplink packets are slightly different. The VNF is no longer processing a packet for the current subframe; it may be a number of subframes later. Therefore the frame/subframe of the message is passed around within the VNF while the message is processed.
This means the packets can appear to be out of order. For example, you could see:
- SFN/SF:100/4 TX_REQ
- SFN/SF:100/4 DL_CONFIG
- SFN/SF:100/3 UL_CONFIG
Because uplink does not have to be programmed ahead of schedule, you can receive the UL_CONFIG for 100/3 at the same time as you are about to program the TX_REQ for 100/5.
In HI_DCI0 there is another field, SFNSF (note the lack of an underscore), which helps indicate which subframe the originating message was for.
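The combined value packs the SFN and SF into a single 16-bit field, SFN in the upper bits and SF in the low four. A sketch of the packing (open-nFAPI defines its own macros for this):

    #include <stdint.h>

    /* Pack/unpack the combined 16-bit SFN/SF value: SFN (0..1023) in
     * the upper bits, SF (0..9) in the low four bits. Illustrative;
     * open-nFAPI ships equivalent macros. */
    #define SFNSF_PACK(sfn, sf) ((uint16_t)(((sfn) << 4) | ((sf) & 0x0F)))
    #define SFNSF_SFN(sfnsf)    (((sfnsf) >> 4) & 0x3FF)
    #define SFNSF_SF(sfnsf)     ((sfnsf) & 0x0F)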
The RA procedure is also affected by the round-trip time of nFAPI. Therefore rach_raResponseWindowSize has been increased to cope with this.
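A back-of-envelope illustration of why the window grows (all numbers below are assumptions for illustration, not measured values): the preamble indication must cross the link to the VNF, the MAC must build the RAR, and the answer must cross back and still be programmed sf_ahead early.

    #include <stdio.h>

    /* Rough RA response window budget. All values are illustrative
     * assumptions (in subframes, i.e. milliseconds). */
    int main(void)
    {
        int link_latency = 4; /* one-way PNF<->VNF latency            */
        int mac_proc     = 2; /* VNF time to schedule the RAR         */
        int sf_ahead     = 2; /* TX must be programmed this far early */

        int earliest_rar = link_latency + mac_proc + link_latency + sf_ahead;
        printf("earliest RAR: %d subframes after the preamble\n", earliest_rar);
        /* rach_raResponseWindowSize must comfortably exceed this. */
        return 0;
    }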