We have gitlab-runner running on the machine 'mozart'. When a test campaign is
needed, this program runs the pipeline described in .gitlab-ci.yml, found in
the top directory of the source code. A campaign is started manually; it is not
possible to run one for each commit because a campaign takes too much time.

The file .gitlab-ci.yml in turn runs the python program
cmake_targets/autotests/v2/main.py with a list of tests to run. Several files
in cmake_targets/autotests/v2/ are used when tests are run. Logs are stored in
cmake_targets/autotests/log/. The tests are defined in the files
cmake_targets/autotests/README.txt and cmake_targets/autotests/test_case_list.xml.
This version (v2) is a rewrite of the previous one, done to speed it up and to
remove many bugs. It does not fully use cmake_targets/autotests/test_case_list.xml.

There are several kinds of tests:
- compilation
- oaisim / phy simulators (ulsim, dlsim)
- eNB with COTS UE

For each test, we have a set of machines to use. The python program connects to
them with ssh and runs shell scripts (found in
cmake_targets/autotests/v2/actions/), collecting terminal outputs and log files
(if any) for later analysis (see the first sketch at the end of this note).

As of today the eNB tests use a third-party EPC provided by Nokia. It is
possible to adapt the scripts to use openair-cn instead.

The eNB tests that are done are:
- monolithic eNB, FDD mode, with 1 COTS UE and a B210:
  - 5, 10 and 20 MHz
  - run iperf uplink and downlink, tcp and udp, in turn, for 10s (the default
    running time of iperf); see the second sketch at the end of this note
- monolithic eNB, TDD mode, with 1 COTS UE and a B210:
  - same tests as for FDD
- split eNB (if4p5 split), FDD mode, with 1 COTS UE and a B210:
  - same tests as for FDD

The COTS UE is in a Faraday cage. A full campaign takes less than one hour.
(But the number of tests done is limited. Also, the previous version ran each
test several times.) The results are stored in cmake_targets/autotests/log/ in
the gitlab-runner directory.

What is missing:
- handle more cases, especially multiple UEs and other RF equipment (X310,
  bladeRF, limeSDR, ...); run longer; maybe run in parallel if more equipment
  is available (several UEs, several RF boards, ...)
- automatic analysis of the results and production of a summary (a webpage?)
  that is easy to check. The previous version had one; I did not have time to
  finalize this part. Today, I manually inspect the output files and decide
  whether a test passed or not.

All this is open to improvements, modifications and suggestions.

Cedric Roux - 2017-11-15.
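
First sketch: to picture the execution model described above (connect with ssh
to a test machine, run one of the action scripts, keep the terminal output for
later analysis), here is a minimal Python sketch. It is not the actual code of
cmake_targets/autotests/v2/main.py: the run_action() helper, the log file
naming, the script name used in the example and the use of the plain ssh
command-line client are assumptions for illustration only.

    # Illustrative sketch only -- not the actual cmake_targets/autotests/v2/main.py.
    # Assumes passwordless ssh to the test machines and a local checkout containing
    # the action scripts in cmake_targets/autotests/v2/actions/.
    import os
    import subprocess

    LOG_DIR = "cmake_targets/autotests/log"            # where results are stored
    ACTIONS_DIR = "cmake_targets/autotests/v2/actions"  # shell scripts run remotely

    def run_action(machine, script, args=""):
        """Run one action script on a remote machine over ssh and log its output."""
        os.makedirs(LOG_DIR, exist_ok=True)
        log_file = os.path.join(LOG_DIR, "%s.%s.log" % (machine, script))
        with open(os.path.join(ACTIONS_DIR, script), "rb") as f, \
             open(log_file, "wb") as log:
            # Feed the local script to a remote shell; terminal output goes to the log.
            ret = subprocess.run(["ssh", machine, "bash -s -- " + args],
                                 stdin=f, stdout=log, stderr=subprocess.STDOUT)
        return ret.returncode

    if __name__ == "__main__":
        # Hypothetical example: run a compilation action on the machine 'mozart'.
        rc = run_action("mozart", "compile_enb.sh")
        print("PASSED" if rc == 0 else "FAILED")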
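
Second sketch: the iperf part of the eNB tests (uplink and downlink, tcp and
udp, each run in turn for the default 10 s of iperf) can be pictured as below.
The script names, the machine arguments and which side runs which script are
assumptions; the real logic is in the shell scripts under
cmake_targets/autotests/v2/actions/. The sketch reuses the run_action() helper
from the first sketch.

    # Illustrative sketch of the iperf sequence only; names and roles are made up.
    def run_iperf_tests(enb_side_machine, ue_side_machine):
        """Run iperf uplink and downlink, tcp and udp, one combination at a time."""
        results = {}
        for direction in ("uplink", "downlink"):
            for protocol in ("tcp", "udp"):
                # Hypothetical action script name; the real scripts are the shell
                # scripts in cmake_targets/autotests/v2/actions/.
                script = "iperf_%s_%s.sh" % (direction, protocol)
                # Assumption: the sender side drives the test; iperf runs for its
                # default 10 s, so no explicit duration is passed.
                machine = ue_side_machine if direction == "uplink" else enb_side_machine
                rc = run_action(machine, script)
                results[(direction, protocol)] = (rc == 0)
        return results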