OAI - UE: Testing the real-time improvement branch from Laurent Thomas
I am creating this issue to keep track of the work done on this topic.
Email exchange:
I have done some more testing to try to find where the problem comes from. As you suggested, I have tested with the usrp_lib.cpp from develop, but the result shows the same problem.
I have tried to isolate the problem in the lte-ue.c file, and I have seen that when using the functions UE_thread_rxn_txnp4 and UE_thread from develop, I don't see the problem (attached is a version of lte-ue.c that is working).
To get numbers, I have added a printf that shows the issue in phy_procedures_lte_ue.c around line 1418; you need to enable the log with LOG_I: LOG_I(PHY,"[UE %d][PDSCH %x] Frame %d subframe %d Generating ACK (%d,%d) for %d bits on PUSCH\n",
Then you should see the following prints when doing a data transfer:
[PHY][I][ue_ulsch_uespec_procedures] [UE 0][PDSCH 545b] Frame 214 subframe 3 Generating ACK (1,0) for 1 bits on PUSCH
[PHY][I][ue_ulsch_uespec_procedures] [UE 0][PDSCH 545b] Frame 214 subframe 4 Generating ACK (0,0) for 1 bits on PUSCH
When you see (1,0), it means the UE sends an ACK on PUSCH because the PDSCH has been properly received; when you see (0,0), it means a NACK.
When testing at 14 Mbps DL and 8 Mbps UL without S1, I see this NACK (0,0) print 200 times on develop, compared to 1500 times on your branch, during a 30-second test.
Thank you, Gabriel
2017-01-11 23:33 GMT+01:00 laurent thomas email@example.com:
Gabriel, I merged with the latest develop, then pushed. I didn't test beyond compiling because I'm not in the office, so I don't have access to an RF platform. The only idea I have for now on the 'regressions': please try to use the develop version of usrp_lib.cpp instead of "my" version. It should work the same as the one I modified; nevertheless, I may have done something wrong. Thanks, Laurent On 11/01/2017 11:48, Gabriel Couturier wrote:
Hi Laurent, I have done some tests on your branch; here are some results (noS1 test with iperf throughput 12 Mbps DL and 8 Mbps UL).
In a first step, I tested without activating your change (cset shield -r) to check that it does not impact the UE behaviour; I think it will not be activated by default in the first merge. I then tested with your change activated to see the impact. The results are:
- In 5 MHz BW: per the T-Tracer log, I can see that the UE behaviour has changed; the number of NACKed packets has increased compared to the develop branch. See the two attached screenshots: just behind the DL/UL HARQ (x8) UE0 line, you have one line for the DL and one for the UL, where green means ACK and red means NACK. realtime_branch.png on the left is from the realtime improvement branch, and the other is from develop. This may not be a real problem; it will need a retest once the rx-offset issue we have been investigating for several days is fixed (in short, every 2 seconds we see some NACKs on the eNB side due to a problem in the rx-offset when acquiring samples on the UE side).
- In 10 MHz BW: the UE goes out of sync as soon as it is attached (just after the UE has sent the RRCConnectionReconfiguration message). The develop branch (tag 2016.w50) works fine, so no throughput test can be done on the realtime improvement branch.
I have tested in 5 MHz, activating your change with cset shield --force --kthread on -c 1-3 and running the UE with cset.
The result shows a good improvement, per the prints you have added:
Delay to wake up UE_Thread_Rx (case 2) avg=1 iterations=30000 max=16:17:17:19:20:21:21:22:23:168
Delay to process sub-frame (case 3) avg=413 iterations=30000 max=906:914:928:928:959:985:1002:1184:1189:1234
Delay to wake up UE_Thread_Rx (case 2) avg=1 iterations=30000 max=18:18:20:27:28:32:33:80:191:273
Delay to process sub-frame (case 3) avg=412 iterations=30000 max=978:989:994:997:1035:1060:1107:1145:1234:1315
Delay between two IQ acquisitions (case 1) avg=999 iterations=60000 max=1079:1085:1089:1092:1098:1113:1126:1128:1182:1258
Compared to develop:
Delay between two IQ acquisitions (case 1) avg=999 iterations=80000 max=4495:4766:4782:4843:5045:5132:5203:5580:6288:13880
Delay to wake up UE_Thread_Rx (case 2) avg=2 iterations=40000 max=273:275:285:287:294:455:475:555:686:944
Delay to process sub-frame (case 3) avg=414 iterations=40000 max=1005:1009:1041:1091:1091:1161:1188:1230:1259:1380
Delay to wake up UE_Thread_Rx (case 2) avg=2 iterations=40000 max=268:271:274:295:345:362:502:612:843:961
Delay to process sub-frame (case 3) avg=417 iterations=40000 max=942:943:949:953:960:972:977:1003:1036:1124
So the IQ acquisition is much better, and no "U"/"L" (underflow/late) indications are printed. Also, when testing on your branch compared to develop, a lot of these prints come out:
[PHY][I][process_timing_advance] [UE 0] Got timing advance -X from MAC, new value XX
In my opinion, to go further and get this branch pushed to develop, we need to:
- Understand what the difference is on the UE when your changes are not activated, and see whether there is a real regression (for this, the rx-offset problem seen on develop should be fixed).
- Understand the regression in 10 MHz, as there is a real regression there.
- Understand why the timing advance print is much more frequent on your branch than on develop.
As long as this is not done, your branch needs to be rebased onto the latest tag.
Can you merge the latest tag 2017.w01 from Monday into your branch? I have tried to do it myself, but there are conflicts. Thank you, Gabriel PS: I will create an issue to follow this. [Inline image 1] [Inline image 2]