nr_dlsim -n100 -q1 -e27 -s30
is in the CI. The result has been "ok" for a long while, but even with a perfect signal (no channel noise) there are retransmissions.
Another change made it worse: using NR_DMRS_DownlinkConfig__dmrs_AdditionalPosition_pos1 instead of NR_DMRS_DownlinkConfig__dmrs_AdditionalPosition_pos0.
With this DMRS configuration, the average number of transmissions per packet grows from 1.2 (i.e. roughly 1 in 5 packets is resent) to 1.7, so the test is now "nok".
But there is a bug, likely in nr_dlsim or in the OAI UE, because the problem does not seem to occur with a commercial UE.
Adding @velumani and @sli since they probably know more than me about DMRS and channel estimation. In short, changing the DL DMRS configuration from pos0 to pos1 causes a degradation (instead of an improvement) of performance in dlsim, noticed in the 256QAM test, which now requires a higher SNR to pass w.r.t. develop (the change is in the current integration branch).
This is because of a round-off error from the FFT in the DC subcarrier. You can observe it if you plot the constellation after channel compensation. I think the code rate is too high to recover from even a few symbols in error. If you increase the PRBs past DC, e.g. -b60, the decoder recovers the erroneous bits and the result is OK. The round-off error has been there for a very long time, and @raymond.knopp once said it would be difficult to get rid of.
A long time ago I made some changes in channel estimation to improve the estimates around DC so we minimize this effect, in commit ca9b4d7e. But after so many revisions of the channel estimation function, the changes I made no longer exist; they were removed in 39919152.
You will see the distorted symbols in the constellation for any MCS, but for high MCS the decoder fails to recover the bits affected by those symbols.
This is typical decoder behavior: for a fixed number of error bits, it decodes successfully at one code rate and fails at a slightly higher one.
There are two ways I can think of to fix it:
1. Find and fix the error that is somewhere in the FFT.
2. Flatten out the channel estimates around DC, either by taking a local average or by omitting those DMRS REs during estimation (which I had implemented and which has now been removed).
I just checked with nr_ulsim, and indeed the (i)FFT seems to add significant noise to the lowest frequencies. And not only the DC RE is affected, but approximately 30 REs starting from the DC RE. Even when not using these REs for channel estimation, the data REs are affected and corrupted enough for unsuccessful decoding at the highest MCS.
So improving channel estimation will not be enough; I fear someone really has to look into fixing the underlying (i)FFT issue...
@sli I hope to push some modifications that will considerably reduce the distortion at low frequencies. The main issue is the scaling schedule of the DFT/IDFT. I thought there might be a bug in the fixed-point implementation, but at this stage I don't think that is the case. I haven't analyzed the consequences on the receiver yet, but choosing the right scaling across the stages of the DFT/IDFT improves the output distortion. On TX we get around 8-10 dB SQNR improvement, which is a lot more than I expected; I hope it will be similar on RX. But it also means that we need to adjust the scaling schedule as a function of the received signal strength (receiver gain or measured noise level).
I have changed the DFT routines for the OFDM DFTs (i.e. not the DFT precoding for PUSCH) to allow a configurable scaling schedule, and they work in the unitary simulator of the DFT/IDFT. I still need to test the new schedule initializations with the nr_dlsim/nr_ulsim unitary simulators and nr-softmodem. A couple of days of work for this.
We can assess after these changes whether to go further. I think we will need help to do this properly with split-8 radios (e.g. USRPs).
I pushed a commit to that branch fixing the 5G NR compile errors and warnings (there are still warnings in the LTE code, which I didn't touch). Unfortunately, my tests show worse performance for the 256QAM tests...
It doesn't compile because I was in the process of finishing the changes to start testing with nr_dlsim, but we needed to reinstall the machine I used for development (dramix), so I pushed what I had. I need a couple of days to finish this. Sorry.
@sli I will continue where you left off. What I saw with the FFT unit test is that we need to adjust the signal input level for TX (i.e. change the default tx_backoff_dB) and choose a scaling schedule that shifts in the later stages instead of evenly, which is the current case. I added a schedule parameter vector which needs to be initialized before running the transmitter/receiver. I only did this for NR; 4G will follow. We should change TX first and see the effect, then RX. With nr_dlsim, or any OAI gNB / OAI UE setup, the distortion is "squared" since we get it in both directions.
Another related thing, for multiple-antenna cases, which I saw just yesterday: in the gNodeB precoder implementation we use the power normalization from the standard. This is not a great idea for the implementation. The TX is scaled by 1/sqrt(num_dmrs_ports), so 3 dB for 2x2 and 6 dB for 4x4. This actually worsens the transmit path of the gNB (the IDFT operation), since the signal on all ports is reduced in amplitude by this amount. I don't think we should do that at all. Moreover, unless we compensate for it in the TX gain of the RF system, we reduce our power per antenna, which is not what most of us want.
@sli I pushed a new commit. MCS 27 now works in dlsim with 29.6 dB SNR. The only thing left was to increase the signal level of the QAM modulation (gNB->TX_AMP); I saw this in the unitary test of the IDFTs. The default dBFS backoff was 36 dB (for nr-softmodem this is the parameter tx_amp_backoff_dB, which we use for 7.2 radios). I made it 30 dB (for nr-softmodem) and added the -Q option to nr_dlsim to control it. To get the previous behavior, use -Q 36. It works from about -Q 33, I think, but not as well. We can tune this further using the 256QAM MCS 27 70% throughput criterion, but 30 seems OK for now.
So that's a signal input to the IDFT which is 10 dB stronger than before. We really need to check the impact with a USRP. We will probably have to remove the <<4 in the trx_usrp_write() function in usrp_lib.cpp; 10 dB corresponds to almost 2 shifts. Maybe we can go even further with the backoff and remove the shifts completely, but this needs to be checked carefully.