# Contributing to Arm RAN Acceleration Library (Arm RAL)

Describes the requirements for contributing code to Arm RAN Acceleration Library (Arm RAL):

- The license;
- How to write and submit patches;
- Naming of functions;
- Documentation style;
- Code style, and how to automatically check it;
- Structure of tests and benchmarks.

## Licensing information

Use of Arm RAN Acceleration Library is subject to a BSD-3-Clause license, the text of which can be found in the `license_terms` folder of your product installation. We will receive inbound contributions under the same license.

## Writing and submitting patches

Contributions are managed via the RAL project [https://gitlab.arm.com/networking/ral](https://gitlab.arm.com/networking/ral) on [Arm's GitLab](https://gitlab.arm.com). You will need to ask for access to Arm's GitLab in order to be able to fork RAL and raise merge requests. Details on how to do this are given [here](https://gitlab.arm.com/documentation/contributions). Once you have access, you can submit your patch for review as a [merge request](https://gitlab.arm.com/networking/ral/-/merge_requests).

Patches must be based against the current head of `main` of RAL. Every patch must compile successfully and pass all tests. It is good practice to split the development of new functions into multiple patches to aid reviewing: present the initial unoptimized implementation and accompanying tests in one patch, and the optimized implementation in a second patch.

All patches must be accompanied by a commit message. An acceptable commit message comprises a one-line summary of fewer than 50 characters, followed by a single blank line, followed by the body of the commit message. The body should detail the changes in the patch, with any relevant reasoning.
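For example, a commit message following these rules might look like this (the function named here is purely illustrative):

```
Add i16 variant of complex vector dot-product

Adds an initial unoptimized implementation of the i16 vector
dot-product, together with tests that compare the output against a
separate reference implementation for a range of input lengths.
```

The summary line stays under 50 characters, and the body explains what changed and why.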
## Function naming

Arm RAL functions are named according to:

    armral_<algorithm>_<precision>{_<variant>}

where:

- *algorithm* is a word or words that summarises the main purpose of the function;
- *precision* indicates the working precision of the internals of the implementation, which may not always be the same as the precision of the input and output arguments.

  For Fast Fourier Transform (FFT) functions use:

  - `cf32`: complex 32-bit floating point;
  - `cs16`: complex signed 16-bit integer.

  For all other functions use:

  - `f32`: 32-bit floating point;
  - `i16`: signed 16-bit integer.

- *variant* is an optional suffix to distinguish different implementations of the same *algorithm* at the same *precision*.

Examples from the library:

Function                           | Algorithm                                | Precision                     | Variant
-----------------------------------|------------------------------------------|-------------------------------|------------------------
`armral_fft_create_plan_cf32`      | Create an FFT plan                       | complex 32-bit floating point | None
`armral_fft_execute_cs16`          | Execute an FFT                           | complex signed 16-bit integer | None
`armral_cmplx_mat_mult_2x2_f32_iq` | Complex-valued 2x2 matrix multiplication | 32-bit floating point         | Separate I and Q arrays
`armral_cmplx_vecdot_i16_32bit`    | Complex-valued vector dot-product        | signed 16-bit integer         | 32-bit accumulator

## Directory structure

The directory structure of Arm RAL is:

```
+-- CMakeLists.txt
+-- README.md
+-- RELEASE_NOTES.md
+-- bench
|   +-- CRC
|   |   +-- bench.py
|   |   +-- main.cpp
|   +-- ...
+-- docs
|   +-- ...
+-- examples
|   +-- ...
+-- include
|   +-- armral.h
+-- license_terms
|   +-- BSD-3-Clause.txt
|   +-- ...
+-- simulation
|   +-- ...
+-- src
|   +-- BasicMathFun
|   |   +-- MatrixInv
|   |   |   +-- arm_cmplx_hermitian_mat_inversion_f32.cpp
|   |   |   +-- ...
|   |   +-- ...
|   +-- ...
+-- test
|   +-- CRC
|   |   +-- main.cpp
|   +-- ...
+-- utils
|   +-- ...
```

The `src` subdirectory contains the source files for user-facing functions, grouped by functionality into separate subdirectories (for example,
`src/BasicMathFun/MatrixInv` in the diagram above). If your patch adds new source files, you must add their paths to the `ARMRAL_LIB_SOURCES` variable in the top-level `CMakeLists.txt` file. `test` contains tests for user-facing functions and `bench` contains benchmarks. There is specific information about writing tests and benchmarks below.

## Documentation

Documentation for each user-facing function is written as a Doxygen comment immediately preceding the function's prototype in `include/armral.h`. Arm RAL uses the Javadoc style, which is a C-style multi-line comment that starts with `/**`:

```c
/**
 * This algorithm performs the multiplication `A x` for matrix `A` and vector
 * `x`, and assumes that:
 * + Matrix and vector elements are complex float values.
 * + Matrices are stored in memory in row-major order.
 *
 * @param[in]  m       The number of rows in matrix `A` and the length of
 *                     the output vector `y`.
 * @param[in]  n       The number of columns in matrix `A` and the length
 *                     of the input vector `x`.
 * @param[in]  p_src_a Points to the first input matrix.
 * @param[in]  p_src_x Points to the input vector.
 * @param[out] p_dst   Points to the output vector.
 * @return     An `armral_status` value that indicates success or failure.
 */
armral_status armral_cmplx_mat_vec_mult_f32(uint16_t m, uint16_t n,
                                            const armral_cmplx_f32_t *p_src_a,
                                            const armral_cmplx_f32_t *p_src_x,
                                            armral_cmplx_f32_t *p_dst);
```

The comment begins with a description of the purpose of the function; implementation details are subsequently given in one or more paragraphs. If the function implements an algorithm described in an external publication, for example a technical standard, provide a reference to that publication in the comment.

In-line mathematical quantities and parameter names must be enclosed in backticks, for example \`A x\` in the preceding code sample. Enclose larger equations on their own lines in `<pre>` tags:

```
 * Computes the regularized pseudo-inverse of a single matrix. The `N-by-M`
 * regularized pseudo-inverse `C` of an `M-by-N` matrix `A` with `M <= N` is
 * defined as:
 *
 * <pre>
 *   C = A^H * (A * A^H + λ * I)^-1
 * </pre>
```

The documentation must describe any restrictions on the inputs, for example when the length of an array needs to be a multiple of a certain number. Consider marking particularly important restrictions as *warnings*. For example, the documentation for `armral_cmplx_pseudo_inverse_direct_f32` says:

```
 * \warning This method is numerically unstable for matrices that are not very
 * well conditioned.
```

Warnings are made prominent in the HTML version of the documentation that is generated from the Doxygen comments.

The comment must finish with the ordered list of the function's parameters, followed by the return value. Indicate which parameters are inputs and which are outputs.

## C/C++ code style

C/C++ code style is maintained through the use of `clang-format` and `clang-tidy`. You must run these tools on the code before submitting a patch; instructions on how to run them are given below. `clang-format` and `clang-tidy` are part of the [LLVM Project](https://llvm.org/). Arm RAL is tested with version 17.0.0 of the tools.

Matching your coding style as closely as possible to the `clang-tidy` style will minimise the number of changes that `clang-tidy` will enforce:

- Use snake case for names of variables and functions, i.e. `this_is_good` instead of `thisIsNotGood`.
- Symbol names start with a lower-case letter. This means that `_m` for a member variable, for example, will not be accepted.
- Always use curly braces for single-line `if` statements, `for` loops and `while` loops.
- Opening curly braces for `if` statements, `for` loops and `while` loops are on the same line as the `if`, `for` or `while`.
- Closing curly braces are the first non-white-space character on a new line. Their alignment must match the first character of the matching `if`/`for`/`while` statement. `else` statements are on the same line as the closing curly brace of the corresponding `if` or `else if` statement.
### Running clang-format

Run `clang-format` on the current commit with:

    git clang-format HEAD~

This will correctly format any files modified in the current commit. You must then update your commit with the reformatted files.

### Running clang-tidy

Before running `clang-tidy` you must compile the library with an LLVM compiler, i.e. `clang` and `clang++`, and tell CMake to write out the compilation commands by setting `-DCMAKE_EXPORT_COMPILE_COMMANDS=On`:

    mkdir <build>
    cd <build>
    cmake -DCMAKE_EXPORT_COMPILE_COMMANDS=On -DCMAKE_C_COMPILER=clang -DCMAKE_CXX_COMPILER=clang++ -DBUILD_TESTING=On <path>
    make

Substituting:

- `<build>` with a build directory name. The library builds in the specified directory.
- `<path>` with the path to the root directory of the library source.

Then run `clang-tidy` with a list of files to check:

    cd <path>
    clang-tidy -p <build> <file> ... -header-filter=.*

where `<file>` is the path to a modified file in the library source. Fix any errors and update your commit with the modified files.

## Python code style

Python code style is maintained through the use of the `flake8` linter. Install `flake8` using `pip`:

    pip install flake8

and run it on an individual Python file:

    python -m flake8 --config=<path>/flake8.txt <file>

where:

- `<path>` is the path to the root directory of the library source.
- `<file>` is the name of the Python file to check.

This will produce a list of errors, which you must fix manually. Once you have rerun `flake8` and it does not report any errors, add your updated Python file to the current commit.

## Writing tests

Each function with a prototype in `armral.h` must be accompanied by a set of tests that run the function on a set of representative inputs and compare the result to known-good output. This reference output is preferably produced by a separate reimplementation of the function. In some situations it may be necessary to compare against arrays of constant values instead, but this should be avoided wherever possible. Arm RAL tests must exercise every path through the function that leads to a successful exit.
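The reference-comparison pattern can be sketched as follows. This is an illustrative Python sketch only (real Arm RAL tests are C++ `main.cpp` files), and `naive_vecdot`, `vecdot_under_test` and the tolerance are invented for illustration; they are not part of Arm RAL.

```python
def naive_vecdot(a, b):
    """Reference implementation: a plain complex dot-product, kept as
    simple and obviously correct as possible."""
    return sum(x * y for x, y in zip(a, b))


def vecdot_under_test(a, b):
    """Stand-in for the function under test; a real Arm RAL test would
    call the library function here instead."""
    acc = 0j
    for x, y in zip(a, b):
        acc += x * y
    return acc


def run_test(n):
    # Representative inputs for the requested problem size.
    a = [complex(i, -i) for i in range(1, n + 1)]
    b = [complex(2 * i, i) for i in range(1, n + 1)]
    ref = naive_vecdot(a, b)
    out = vecdot_under_test(a, b)
    # Compare against the known-good reference within a tolerance.
    return abs(out - ref) <= 1e-6 * max(1.0, abs(ref))


# A passing run corresponds to returning EXIT_SUCCESS in the C++ tests.
results = [run_test(n) for n in (1, 4, 16, 100)]
```

Covering several problem sizes in one run mirrors the requirement that tests exercise every path to a successful exit, such as both vectorized loops and scalar tail loops.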
Setting the CMake variable `ARMRAL_ENABLE_COVERAGE=On` enables the compiler flags needed to visualize code coverage with [gcovr](https://gcovr.com/en/stable/). The test inputs should cover the full range of values that a user can provide; if it is feasible to run a test using the largest value that can be stored in a variable, for example, then such a test should be included.

In the top-level `CMakeLists.txt`, add an `add_armral_test()` entry pointing to the source file for the tests. The source code for the tests must be placed in a subdirectory of `<path>/test`, where `<path>` is the root directory of the library source. Usually the source for all the tests of a single Arm RAL function is contained in a single `main.cpp` file. Successful tests must return `EXIT_SUCCESS` from the `main()` function; failing tests must return `EXIT_FAILURE`.

### Testing with AddressSanitizer

It is recommended to use [AddressSanitizer](https://clang.llvm.org/docs/AddressSanitizer.html) to test your patches for memory errors, as patches will not be accepted unless this passes. Setting the CMake variable `ARMRAL_ENABLE_ASAN=On` enables the flags needed to compile and link Arm RAL and its tests with AddressSanitizer. The `make check` target will then run the tests using AddressSanitizer and will fail if an error is detected.

## Writing benchmarks

Each function with a prototype in `armral.h` must be accompanied by a set of benchmarks that run the function on a set of representative inputs.

### Files to add

To create a set of benchmark cases for a newly added function, a directory should be added to `<path>/bench`, where `<path>` is the root directory of the library source. This directory may contain layers of subdirectories if necessary. The lowest directories in the tree must contain a `bench.py` file that specifies the cases to run, and a `main.cpp` file that executes the function being benchmarked. See [Directory structure](#directory-structure) for an example.
#### bench.py

The `bench.py` files write parsable JSON to standard output with fields for the case name (`"name"`), the arguments that will need to be used to run the function (`"args"`), and the number of repetitions that should be done (`"reps"`):

```json
{
  "exe_name": "benchmark/name",
  "cases": [
    {
      "name": "benchmark_name",
      "args": "1 2 3",
      "reps": 1000
    },
    {...}
  ]
}
```

The following code block provides a template for the `bench.py` script.

```py
#!/usr/bin/env python3
# Arm RAN Acceleration Library
# Copyright 2020-2023 Arm Limited and/or its affiliates
import json
from pathlib import Path
import os


def get_path(x):
    return x if Path(x).is_file() else os.path.join("armral", x)


exe_name = get_path(<exe_name>)

j = {
    "exe_name": exe_name,
    "cases": []
}

reps = <reps>
argArr = <argument list>
for <args> in argArr:
    case = {
        "name": "<function_name>_{}_{}...".format(<arg1>, <arg2>, ...),
        "args": "{} {} ...".format(<arg1>, <arg2>, ...),
        "reps": reps
    }
    j["cases"].append(case)

print(json.dumps(j))
```

Items in angle brackets `< >` are changed as appropriate according to the following descriptions.

- `<exe_name>`: The name of the executable, e.g. `bench_mu_law_compression_8bit` (see [Naming scheme](#naming-scheme)).
- `<reps>`: The number of times the case should be run for profiling (see [Number of repetitions](#number-of-repetitions)).
- `<argument list>`: The arguments that will be required in order to run the function that is to be benchmarked. This can be a list of individual elements, or can, for example, be a list of tuples if multiple arguments are required for each case. The length of the list determines how many cases are generated. See [Number of cases](#number-of-cases) for guidance on how many cases there should be.
- `<function_name>`: A snake case string to identify the function being benchmarked for a particular case, e.g. `mu_law_compression_8bit`.
- `<args>`, `<arg1>`, `<arg2>`: The arguments in the argument list.

#### main.cpp

The `main.cpp` file gets compiled into the benchmark executable. The `main.cpp` file must:

- contain a `main` function which handles a list of command line arguments;
- contain a separate function that calls the function being benchmarked;
- return `EXIT_SUCCESS` on completion.

The following code block provides a basic template.

```cpp
/*
  Arm RAN Acceleration Library
  Copyright 2020-2023 Arm Limited and/or its affiliates
*/
#include "armral.h"

#include <cstdio>
#include <cstdlib>

namespace {

void <function_name>(<type1> <name1>, <type2> <name2>, ..., uint32_t num_reps) {
  printf("[<FUNCTION DESC>] <arg1 desc> = %<fmt1>, <arg2 desc> = %<fmt2>, ..., "
         "number of repetitions = %u\n",
         <name1>, <name2>, ..., num_reps);

  // Define necessary variables here
  <var type1> <var1> = ...
  <var type2> <var2> = ...
  ...

  for (uint32_t i = 0; i < num_reps; ++i) {
    <armral_function>(<name1>, <name2>, ..., <var1>, <var2>, ...);
  }
}

} // anonymous namespace

int main(int argc, char **argv) {
  if (argc != <num_args>) {
    // <name1> - <arg1 description>
    // <name2> - <arg2 description>
    // ...
    // num_reps - The number of times to repeat the function
    fprintf(stderr, "Usage: %s <name1> <name2> ... num_reps\n", argv[0]);
    exit(EXIT_FAILURE);
  }

  auto <name1> = (<type1>)atoi(argv[1]);
  auto <name2> = (<type2>)atoi(argv[2]);
  ...
  auto num_reps = (uint32_t)atoi(argv[<num_args> - 1]);

  <function_name>(<name1>, <name2>, ..., num_reps);

  return EXIT_SUCCESS;
}
```

The items in angle brackets `< >` are changed as appropriate according to the following descriptions.

- `<function_name>`: The name of the function that repeatedly calls the function being benchmarked, e.g. `run_mu_law_compression_8bit_perf` (see [Naming scheme](#naming-scheme)).
- `<type1>`, `<type2>`: The types of the arguments which are passed in on the command line.
- `<FUNCTION DESC>`: An uppercase string to identify the function, e.g. `"MU LAW COMPRESSION 8BIT"`.
- `<arg1 desc>`, `<arg2 desc>`: Descriptions to identify the arguments when printing.
- `<fmt1>`, `<fmt2>`: The format specifiers for printing the arguments.
- `<var type1>`, `<var type2>`: The types of the variables defined locally in `<function_name>`.
- `<var1>`, `<var2>`: The names of variables defined locally in `<function_name>`.
- `<armral_function>`: The name of the library function being benchmarked (e.g. `armral_mu_law_compr_8bit`).
- `<num_args>`: The number of arguments which are passed to the executable on the command line. This is equal to the number of arguments in the `args` field of the JSON object + 1 (since the filename is the first argument).
- `<name1>`, `<name2>`: The names of the arguments which are passed to the executable on the command line. These are the names of the arguments provided in the `args` field of the JSON object generated by `bench.py`.
- `<arg1 description>`, `<arg2 description>`: A description of each command line argument.

##### Outputs

Print statements may be added to the `main.cpp` file to describe the benchmark being run, an example of which is given in the `main.cpp` template above. It is useful to include a description of the function being benchmarked, as well as the argument values and the number of repetitions.

##### CMakeLists.txt entry

Once the new `main.cpp` file has been created, an entry must be added to `CMakeLists.txt` with the form:

    add_armral_bench(<name> <filename>)

where `<name>` is the `exe_name` without `bench_` at the front (e.g. `mu_law_compression_8bit`). The entry goes with the other benchmark entries as part of the `if(BUILD_TESTING)` logic.

#### Directory structure

Benchmarks for different functions should be separated into different files. For example, for Mu Law compression and decompression there are different functions for 8-bit, 9-bit and 14-bit (de)compression. These should be in separate benchmarking executables. The Mu Law directory structure in `bench` therefore looks like:

```
+-- MuLaw
|   +-- Compression
|   |   +-- 8bit
|   |   |   +-- bench.py
|   |   |   +-- main.cpp
|   |   +-- 9bit
|   |   |   +-- bench.py
|   |   |   +-- main.cpp
|   |   +-- 14bit
|   |   |   +-- bench.py
|   |   |   +-- main.cpp
|   +-- Decompression
|   |   +-- 8bit
|   |   |   +-- bench.py
|   |   |   +-- main.cpp
|   |   +-- 9bit
|   |   |   +-- bench.py
|   |   |   +-- main.cpp
|   |   +-- 14bit
|   |   |   +-- bench.py
|   |   |   +-- main.cpp
```

### Naming scheme

This section describes how the added directories, executables and functions should be named.

#### Directory name

The name of the directory (and any subdirectories) added to `<path>/bench` should be descriptive of the function being benchmarked, and should be in upper camel case, e.g. `MuLaw`.

#### Executable name

The name of the executable should be `bench_` followed by a descriptive name of the benchmark in snake case, e.g. `bench_mu_law_compression_8bit`.
#### Function name

The name of the function used in the `main.cpp` file should start with `run_`, be followed by a descriptive name of the function being benchmarked in snake case, and end with `_perf`, e.g. `run_mu_law_compression_8bit_perf`.

### Number of cases

The number of cases output by the `bench.py` file for a particular function is determined by how many different sets of parameters are used as inputs to the function. To keep benchmarking runtimes manageable, the number of cases for a function should be kept to between 4 and 8. The cases should try to cover a broad set of allowed parameters. As an example, for 8-bit Mu Law compression the different cases correspond to different numbers of physical resource blocks. There are 4 different values of the resource block count chosen (1, 2, 10 and 100), hence there are 4 benchmarking cases for this function.

### Number of repetitions

In order to obtain stable performance results, the function being benchmarked is run a number of times, and the average is taken. The number of repetitions is given in the `reps` field of the JSON object for a particular case. The number of repetitions should be sufficient to provide reproducible performance numbers. However, keep in mind how long the benchmark will take to run. In general, shorter benchmarks should have a higher number of repetitions, and longer benchmarks should have fewer repetitions.
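Putting the guidance above together, a complete `bench.py` for a hypothetical benchmark might look like the following. The executable name `bench_vec_scale_f32`, the argument values and the repetition counts are invented for illustration; they do not correspond to a real Arm RAL benchmark.

```python
#!/usr/bin/env python3
# Hypothetical bench.py following the template in this document;
# vec_scale_f32 is an invented function name used purely for
# illustration.
import json
import os
from pathlib import Path


def get_path(x):
    return x if Path(x).is_file() else os.path.join("armral", x)


exe_name = get_path("bench_vec_scale_f32")
j = {"exe_name": exe_name, "cases": []}

# Four cases spanning small to large inputs; shorter runs are given
# more repetitions so that the averaged timings stay stable.
for num_samples, reps in [(16, 100000), (256, 20000),
                          (4096, 2000), (65536, 200)]:
    case = {
        "name": "vec_scale_f32_{}".format(num_samples),
        "args": "{}".format(num_samples),
        "reps": reps,
    }
    j["cases"].append(case)

print(json.dumps(j))
```

Unlike the template, which uses a single `reps` value for all cases, this sketch varies `reps` per case to follow the repetition guidance above: the smallest problem size runs many times, the largest only a few hundred.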