[[cha:writing-tests]]
= Writing automated LinuxCNC tests

== The test framework

The code base has unit and integration tests that can be executed
automatically to ensure the program works as intended. Such tests are
often written to trigger a bug and to ensure the bug is detected if it
resurfaces in the future, but also to validate the behavior of
components and interfaces.

The tests are collected in the `tests/` directory, with each individual
test in its own subdirectory. Related tests are grouped together in
common parent directories.

== Running tests

The tests are executed by the `scripts/runtests` script, generated from
`scripts/runtests.in` during the build. By default, runtests locates the
tests to run under `tests/`, but it can be restricted to a subset of
tests by specifying the directory of the test or tests as argument(s).

Here is an example running only the tests in `tests/lathe/`:

----
$ scripts/runtests tests/lathe/
Running test: tests/lathe
Runtest: 1 tests run, 1 successful, 0 failed + 0 expected, 0 skipped
----

The runtests script looks for all files named _test_, _test.sh_ and
_test.hal_ below the directories specified on the command line, or under
`tests/` if no command line argument is specified. These files
represent three different ways to run the tests.

The _runtests_ script accepts the following arguments; see the output
from `scripts/runtests -h` for the authoritative list:

----
-n do not remove temporary files for successful tests.
-s stop after any failed test.
-p print stderr and result files.
-c remove temporary files from an earlier test run.
-u only run tests that require normal user access.
-v show stdout and stderr (normally hidden).
----

== Writing tests

Make sure the test can run successfully without a working X11 display,
i.e. with the DISPLAY environment variable unset.

1. Create a directory in `tests/`.
2. Provide one test script.
3. Evaluate the output with one of the options below.

These are the files considered in the directory of an individual test:

.Test script (only one of these three)
test::
  A program that is executed; its exit code and output are checked
  using either `checkresult` or `expected`.

test.sh::
  A bash script that is executed; its exit code and output are checked
  using either `checkresult` or `expected`.

test.hal::
  A HAL script executed using `halrun -f test.hal`; its exit code and
  output are checked using either `checkresult` or `expected`.
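
For illustration, a minimal hypothetical `test.hal` might look like the
following. The component and pin names here are made up for this sketch
and are not taken from an existing test:

----
# Run by the framework as: halrun -f test.hal
loadrt or2 count=1
setp or2.0.in0 TRUE
# "show pin" prints the pin state; this stdout output is what ends up
# being compared against the expected file.
show pin or2.0.in0
----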

.Test evaluation
expected::
  A file whose content is compared to the output from running the test
  script. If the test output is identical to the content of the
  `expected` file, the test succeeds.

checkresult::
  An executable file to perform more complex validation than just
  comparing the output of a test script. It gets the filename of the
  test program as its command line argument. The exit code of this
  program controls the result of the test. If both `expected` and
  `checkresult` exist, only `checkresult` is consulted to validate the
  test output.

xfail::
  If this file exists, a test failure is expected and does not cause
  runtests to return an exit code signaling an error.

skip::
  If this file exists, the test is skipped and not executed at all.

control::
  This file can be used to flag specific needs of the test. At the
  moment, the use of _sudo_ can be flagged, and tests requiring sudo
  can be skipped when using `runtests -u`. To flag such a requirement,
  add a line with `Restrictions: sudo` to this file.
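
To see how the pieces fit together, here is a sketch (with hypothetical
file content, not an actual test from the tree) that builds a minimal
test directory containing a `test.sh` and an `expected` file, then
performs the same comparison the framework does:

```shell
#!/bin/sh
# Sketch: build a minimal test directory in a temp dir and evaluate it
# the way runtests does, by comparing script output against "expected".
set -e
dir=$(mktemp -d)

# test.sh: the script the framework would execute.
cat > "$dir/test.sh" <<'EOF'
#!/bin/bash
# A real test might print sampled HAL pin values here.
echo "pin-a 0"
echo "pin-a 1"
EOF
chmod +x "$dir/test.sh"

# expected: the exact output the test must produce to succeed.
printf 'pin-a 0\npin-a 1\n' > "$dir/expected"

# The essence of the evaluation: run the script, compare the output.
"$dir/test.sh" > "$dir/result"
if diff -u "$dir/expected" "$dir/result" > /dev/null; then
    echo "test passed"
fi
```

If the script's output drifts from `expected`, the `diff` fails and the
test is reported as failed.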

== Some testing approaches

There are various ways to structure a test, depending on what one wants
to test. Here are a few ideas on how to do it.

=== Non-interactive "GUI"

If you want to test some operations in the user interface, a useful
approach is to write a custom "GUI" simulating the operations.
This can be done by creating a normal LinuxCNC setup and pointing the
DISPLAY value in the [DISPLAY] INI section to a script that performs
the operations needed to test the behaviour.

Examples of this approach can be found in `tests/halui/jogging/` and
`tests/interp/subroutine-return/`.

=== Recording HAL pin transitions

Using the _sampler_ and _halsampler_ HAL components, one can set up a
HAL configuration, collect pin value settings and changes, and dump
the result to stdout (or a file). The end result can then be compared
with the expected output to verify that HAL behaves as expected.

Examples of this approach can be found in `tests/multiclick/` and
`tests/stepgen.2/`.
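
As a sketch of the mechanism (again hypothetical, not copied from an
existing test), a HAL file along these lines feeds a generated signal
into _sampler_, and _halsampler_ prints the captured values to stdout
for comparison with the expected output:

----
# Hypothetical capture.hal: record a float pin with sampler.
loadrt threads name1=t period1=1000000
loadrt siggen
loadrt sampler depth=100 cfg=f
addf siggen.0.update t
addf sampler.0 t
# Capture the square wave output once per thread period.
net sig siggen.0.square => sampler.0.pin.0
start
# halsampler writes the samples to stdout; -n stops after 10 samples.
loadusr -w halsampler -n 10
----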