.. include:: replace.txt
.. highlight:: bash

Testing framework
-----------------

|ns3| consists of a simulation core engine, a set of models, example programs,
and tests. Over time, new contributors contribute models, tests, and
examples. A Python test program ``test.py`` serves as the test
execution manager; ``test.py`` can run test code and examples to
look for regressions, can output the results in a number of forms, and
can manage code coverage analysis tools. On top of this, we layer
*buildslaves*, automated build robots that perform
robustness testing by running the test framework on different systems
and with different configuration options.

Buildslaves
***********

At the highest level of |ns3| testing are the buildslaves (build robots).
If you are unfamiliar with
this system, look at `<https://ns-buildmaster.ee.washington.edu:8010/>`_.
This is an open-source automated system that allows |ns3| to be rebuilt
and tested daily. By running the buildbots on a number
of different systems we can ensure that |ns3| builds and executes
properly on all of its supported systems.

Users (and developers) typically will not interact with the buildslave system other
than to read its messages regarding test results. If a failure is detected in
one of the automated build and test jobs, the buildbot will send an email to the
*ns-commits* mailing list. This email will look something like:

.. sourcecode:: text

  [Ns-commits] Build failed in Jenkins: daily-ubuntu-without-valgrind » Ubuntu-64-15.04 #926
  ...
  281 of 285 tests passed (281 passed, 3 skipped, 1 failed, 0 crashed, 0 valgrind errors)
  List of SKIPped tests:
  ns3-tcp-cwnd
  ns3-tcp-interoperability
  nsc-tcp-loss
  List of FAILed tests:
  random-variable-stream-generators
  + exit 1
  Build step 'Execute shell' marked build as failure

In the full details URL shown in the email, one can find links to the detailed test output.

The buildslave system will do its job quietly if there are no errors, and the
system will undergo build and test cycles every day to verify that all is well.

Test.py
*******

The buildbots use a Python program, ``test.py``, that is responsible for
running all of the tests and collecting the resulting reports into a
human-readable form. This program is also available for use by users
and developers.

``test.py`` is very flexible in allowing the user to specify the number
and kind of tests to run, and also the amount and kind of output to generate.

Before running ``test.py``, make sure that ns3's examples and tests
have been built by doing the following::

  $ ./ns3 configure --enable-examples --enable-tests
  $ ./ns3 build

By default, ``test.py`` will run all available tests and report status
back in a very concise form. Running the command::

  $ ./test.py

will result in a number of ``PASS``, ``FAIL``, ``CRASH`` or ``SKIP``
indications followed by the kind of test that was run and its display name.

.. sourcecode:: text

  Waf: Entering directory `/home/craigdo/repos/ns-3-allinone-test/ns-3-dev/build'
  Waf: Leaving directory `/home/craigdo/repos/ns-3-allinone-test/ns-3-dev/build'
  'build' finished successfully (0.939s)
  FAIL: TestSuite propagation-loss-model
  PASS: TestSuite object-name-service
  PASS: TestSuite pcap-file-object
  PASS: TestSuite ns3-tcp-cwnd
  ...
  PASS: TestSuite ns3-tcp-interoperability
  PASS: Example csma-broadcast
  PASS: Example csma-multicast

This mode is intended to be used by users who are interested in determining if
their distribution is working correctly, and by developers who are interested
in determining if changes they have made have caused any regressions.

There are a number of options available to control the behavior of ``test.py``.
If you run ``test.py --help`` you should see a command summary like:

.. sourcecode:: text

  usage: test.py [-h] [-b BUILDPATH] [-c CONSTRAIN] [-d] [-e EXAMPLE] [-u]
                 [-f {QUICK,EXTENSIVE,TAKES_FOREVER} | -of {QUICK,EXTENSIVE,TAKES_FOREVER}]
                 [-g] [-k] [-l] [-m] [-n] [-p PYEXAMPLE] [-r] [-s SUITE] [-t TEXT-FILE]
                 [-v] [--verbose-failed] [-w HTML-FILE] [-x XML-FILE] [--nocolor]
                 [--jobs PROCESS_LIMIT] [--rerun-failed]

  options:
    -h, --help            show this help message and exit
    -b, --buildpath BUILDPATH
                          specify the path where ns-3 was built (defaults to the build
                          directory for the current variant)
    -c, --constrain CONSTRAIN
                          constrain the test-runner by kind of test
    -d, --duration        print the duration of each test suite and example
    -e, --example EXAMPLE
                          specify a single example to run (no relative path is needed)
    -u, --update-data     If examples use reference data files, get them to re-generate
                          them
    -f, --fullness {QUICK,EXTENSIVE,TAKES_FOREVER}
                          choose the duration of tests to run: QUICK, EXTENSIVE, or
                          TAKES_FOREVER, where EXTENSIVE includes QUICK and TAKES_FOREVER
                          includes QUICK and EXTENSIVE (only QUICK tests are run by
                          default)
    -of, --only-fullness {QUICK,EXTENSIVE,TAKES_FOREVER}
                          choose the duration of tests to run: QUICK, EXTENSIVE, or
                          TAKES_FOREVER (only tests marked with fullness will be executed)
    -g, --grind           run the test suites and examples using valgrind
    -k, --kinds           print the kinds of tests available
    -l, --list            print the list of known tests
    -m, --multiple        report multiple failures from test suites and test cases
    -n, --no-build        do not build before starting testing
    -p, --pyexample PYEXAMPLE
                          specify a single python example to run (with relative path)
    -r, --retain          retain all temporary files (which are normally deleted)
    -s, --suite SUITE     specify a single test suite to run
    -t, --text TEXT-FILE  write detailed test results into TEXT-FILE.txt
    -v, --verbose         print progress and informational messages
    --verbose-failed      print progress and informational messages for failed jobs
    -w, --web, --html HTML-FILE
                          write detailed test results into HTML-FILE.html
    -x, --xml XML-FILE    write detailed test results into XML-FILE.xml
    --nocolor             do not use colors in the standard output
    --jobs PROCESS_LIMIT  limit number of worker threads
    --rerun-failed        rerun failed tests

If one specifies an optional output style, one can generate detailed descriptions
of the tests and status. Available styles are ``text`` and ``HTML``.
The buildbots will select the HTML option to generate HTML test reports for the
nightly builds using::

  $ ./test.py --html=nightly.html

In this case, an HTML file named ``nightly.html`` would be created with a pretty
summary of the testing done. A "human-readable" format is available for users
interested in the details::

  $ ./test.py --text=results.txt

In the example above, the test suite checking the |ns3| wireless
device propagation loss models failed. By default, no further information is
provided.

To further explore the failure, ``test.py`` allows a single test suite
to be specified. Running the command::

  $ ./test.py --suite=propagation-loss-model

or equivalently::

  $ ./test.py -s propagation-loss-model

results in that single test suite being run.

.. sourcecode:: text

  FAIL: TestSuite propagation-loss-model

To find detailed information regarding the failure, one must specify the kind
of output desired. For example, most people will probably be interested in
a text file::

  $ ./test.py --suite=propagation-loss-model --text=results.txt

This will result in that single test suite being run with the test status written to
the file ``results.txt``.

You should find something similar to the following in that file:

.. sourcecode:: text

  FAIL: Test Suite ''propagation-loss-model'' (real 0.02 user 0.01 system 0.00)
  PASS: Test Case "Check ... Friis ... model ..." (real 0.01 user 0.00 system 0.00)
  FAIL: Test Case "Check ... Log Distance ... model" (real 0.01 user 0.01 system 0.00)
    Details:
      Message:   Got unexpected SNR value
      Condition: [long description of what actually failed]
      Actual:    176.395
      Limit:     176.407 +- 0.0005
      File:      ../src/test/ns3wifi/propagation-loss-models-test-suite.cc
      Line:      360

Notice that the Test Suite is composed of two Test Cases. The first test case
checked the Friis propagation loss model and passed. The second test case
failed checking the Log Distance propagation model. In this case, an SNR of
176.395 was found, and the test expected a value of 176.407 correct to three
decimal places. The file which implemented the failing test is listed as well
as the line of code which triggered the failure.

If you desire, you could just as easily have written an HTML file using the
``--html`` option as described above.

Typically a user will run all tests at least once after downloading
|ns3| to ensure that his or her environment has been built correctly
and is generating correct results according to the test suites. Developers
will typically run the test suites before and after making a change to ensure
that they have not introduced a regression with their changes. In this case,
developers may not want to run all tests, but only a subset. For example,
the developer might only want to run the unit tests periodically while making
changes to a repository. In this case, ``test.py`` can be told to constrain
the types of tests being run to a particular class of tests. The following
command will result in only the unit tests being run::

  $ ./test.py --constrain=unit

To see a quick list of the legal kinds of constraints, you can ask for them
to be listed. The following command::

  $ ./test.py --kinds

will result in the following list being displayed:

.. sourcecode:: text

  Waf: Entering directory `/home/craigdo/repos/ns-3-allinone-test/ns-3-dev/build'
  Waf: Leaving directory `/home/craigdo/repos/ns-3-allinone-test/ns-3-dev/build'
  'build' finished successfully (0.939s)
  core:        Run all TestSuite-based tests (exclude examples)
  example:     Examples (to see if example programs run successfully)
  performance: Performance Tests (check to see if the system is as fast as expected)
  system:      System Tests (spans modules to check integration of modules)
  unit:        Unit Tests (within modules to check basic functionality)

Any of these kinds of test can be provided as a constraint using the ``--constrain``
option.

To see a quick list of all of the test suites available, you can ask for them
to be listed. The following command::

  $ ./test.py --list

will result in a list of the test suites being displayed, similar to:

.. sourcecode:: text

  Test Type    Test Name
  ---------    ---------
  performance  many-uniform-random-variables-one-get-value-call
  performance  one-uniform-random-variable-many-get-value-calls
  performance  type-id-perf
  system       buildings-pathloss-test
  system       buildings-shadowing-test
  system       devices-mesh-dot11s-regression
  system       devices-mesh-flame-regression
  system       epc-gtpu
  ...
  unit         wifi-primary-channels
  unit         wifi-ru-allocation
  unit         wifi-spectrum-wifi-phy
  unit         wifi-transmit-mask
  unit         wifi-txop
  example      adhoc-aloha-ideal-phy
  example      adhoc-aloha-ideal-phy-matrix-propagation-loss-model
  example      adhoc-aloha-ideal-phy-with-microwave-oven
  example      aodv
  ...

Any of these listed suites can be selected to be run by itself using the
``--suite`` option as shown above.

To run multiple test suites at once it is possible to use Unix filename
pattern matching, e.g.::

  $ ./test.py -s 'ipv6*'

Note the use of quotes. The result is similar to:

.. sourcecode:: text

  PASS: TestSuite ipv6-protocol
  PASS: TestSuite ipv6-packet-info-tag
  PASS: TestSuite ipv6-list-routing
  PASS: TestSuite ipv6-extension-header
  PASS: TestSuite ipv6-address-generator
  PASS: TestSuite ipv6-raw
  PASS: TestSuite ipv6-dual-stack
  PASS: TestSuite ipv6-fragmentation
  PASS: TestSuite ipv6-address-helper
  PASS: TestSuite ipv6-address
  PASS: TestSuite ipv6-forwarding
  PASS: TestSuite ipv6-ripng

Similarly to test suites, one can run a single C++ example program
using the ``--example`` option. Note that the relative path for the
example does not need to be included and that the executables built
for C++ examples do not have extensions. Furthermore, the example
must be registered as an example to the test framework; it is not
sufficient to create an example and run it through ``test.py``; it must
be added to the relevant ``examples-to-run.py`` file, explained below.
Entering::

  $ ./test.py --example=udp-echo

results in that single example being run.

.. sourcecode:: text

  PASS: Example examples/udp/udp-echo

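A natural question is what registering an example looks like. The sketch
below shows the general shape of a per-module ``examples-to-run.py`` file;
the file location and tuple fields shown here (the example invocation,
whether to run it, and whether to run it under valgrind) are assumptions
for illustration, so consult an existing module's file before copying::

  # Hypothetical file: src/mymodule/test/examples-to-run.py
  # Each tuple: ("program [args]", "do_run", "do_valgrind_run")
  cpp_examples = [
      ("udp-echo", "True", "True"),
  ]

  # Each tuple: ("program.py [args]", "do_run")
  python_examples = [
      ("my-example.py", "True"),
  ]
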
You can also execute all instances of declared examples using different parameters,
as defined in ``examples-to-run.py``, by including a star at the end of the example name::

  $ ./test.py --example=wifi-phy-configuration*

results in that example being run with the following parameters:

.. sourcecode:: text

  [1/21] PASS: Example src/wifi/examples/wifi-phy-configuration --testCase=0
  [2/21] PASS: Example src/wifi/examples/wifi-phy-configuration --testCase=8
  [3/21] PASS: Example src/wifi/examples/wifi-phy-configuration --testCase=10
  [4/21] PASS: Example src/wifi/examples/wifi-phy-configuration --testCase=12
  [5/21] PASS: Example src/wifi/examples/wifi-phy-configuration --testCase=1
  [6/21] PASS: Example src/wifi/examples/wifi-phy-configuration --testCase=13
  [7/21] PASS: Example src/wifi/examples/wifi-phy-configuration --testCase=4
  [8/21] PASS: Example src/wifi/examples/wifi-phy-configuration --testCase=3
  [9/21] PASS: Example src/wifi/examples/wifi-phy-configuration --testCase=14
  [10/21] PASS: Example src/wifi/examples/wifi-phy-configuration --testCase=5
  [11/21] PASS: Example src/wifi/examples/wifi-phy-configuration --testCase=15
  [12/21] PASS: Example src/wifi/examples/wifi-phy-configuration --testCase=6
  [13/21] PASS: Example src/wifi/examples/wifi-phy-configuration --testCase=7
  [14/21] PASS: Example src/wifi/examples/wifi-phy-configuration --testCase=11
  [15/21] PASS: Example src/wifi/examples/wifi-phy-configuration --testCase=2
  [16/21] PASS: Example src/wifi/examples/wifi-phy-configuration --testCase=9
  [17/21] PASS: Example src/wifi/examples/wifi-phy-configuration --testCase=16
  [18/21] PASS: Example src/wifi/examples/wifi-phy-configuration --testCase=19
  [19/21] PASS: Example src/wifi/examples/wifi-phy-configuration --testCase=18
  [20/21] PASS: Example src/wifi/examples/wifi-phy-configuration --testCase=17
  [21/21] PASS: Example src/wifi/examples/wifi-phy-configuration --testCase=20
  21 of 21 tests passed (21 passed, 0 skipped, 0 failed, 0 crashed, 0 valgrind errors)

You can specify the directory where |ns3| was built using the
``--buildpath`` option as follows::

  $ ./test.py --buildpath=/home/craigdo/repos/ns-3-allinone-test/ns-3-dev/build/debug --example=wifi-simple-adhoc

One can run a single Python example program using the ``--pyexample``
option. Note that the relative path for the example must be included
and that Python examples do need their extensions. Entering::

  $ ./test.py --pyexample=examples/tutorial/first.py

results in that single example being run.

.. sourcecode:: text

  PASS: Example examples/tutorial/first.py

Because Python examples are not built, you do not need to specify the
directory where |ns3| was built to run them.

Normally when example programs are executed, they write a large amount of trace
file data. This is normally saved to the base directory of the distribution
(e.g., /home/user/ns-3-dev). When ``test.py`` runs an example, it really
is completely unconcerned with the trace files. It just wants to determine
if the example can be built and run without error. Since this is the case, the
trace files are written into a ``/tmp/unchecked-traces`` directory. If you
run the above example, you should be able to find the associated
``udp-echo.tr`` and ``udp-echo-n-1.pcap`` files there.

The list of available examples is defined by the contents of the ``examples``
directory in the distribution. If you select an example for execution using
the ``--example`` option, ``test.py`` will not make any attempt to decide
if the example has been configured or not; it will just try to run it and
report the result of the attempt.

When ``test.py`` runs, by default it will first ensure that the system has
been completely built. This can be defeated by selecting the ``--no-build``
option::

  $ ./test.py --list --no-build

will result in a list of the currently built test suites being displayed, similar to:

.. sourcecode:: text

  propagation-loss-model
  ns3-tcp-cwnd
  ns3-tcp-interoperability
  pcap-file
  object-name-service
  random-variable-stream-generators

Note the absence of the ``ns3`` build messages.

``test.py`` also supports running the test suites and examples under valgrind.
Valgrind is a flexible program for debugging and profiling Linux executables. By
default, valgrind runs a tool called memcheck, which performs a range of
memory-checking functions, including detecting accesses to uninitialised memory,
misuse of allocated memory (double frees, access after free, etc.) and detecting
memory leaks. This can be selected by using the ``--grind`` option::

  $ ./test.py --grind

As it runs, ``test.py`` and the programs that it runs indirectly generate large
numbers of temporary files. Usually, the content of these files is not interesting;
however, in some cases it can be useful (for debugging purposes) to view these files.
``test.py`` provides a ``--retain`` option which will cause these temporary
files to be kept after the run is completed. The files are saved in a directory
named ``testpy-output`` under a subdirectory named according to the current Coordinated
Universal Time (also known as Greenwich Mean Time)::

  $ ./test.py --retain

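``--retain`` combines with the selection options described earlier. For
example, to keep the temporary files from a run of just the suite used
earlier in this chapter::

  $ ./test.py --suite=propagation-loss-model --retain

After the run completes, look under ``testpy-output`` for a subdirectory
named with the UTC start time of the run.
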
Finally, ``test.py`` provides a ``--verbose`` option which will print
large amounts of information about its progress. It is not expected that this
will be terribly useful unless there is an error. In this case, you can get
access to the standard output and standard error reported by running test suites
and examples. Select verbose in the following way::

  $ ./test.py --verbose

All of these options can be mixed and matched. For example, to run all of the
|ns3| core test suites under valgrind, in verbose mode, while generating an HTML
output file, one would do::

  $ ./test.py --verbose --grind --constrain=core --html=results.html

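During debugging sessions, two of the options from the command summary
above also combine well: ``--rerun-failed`` repeats only the tests that
failed on the previous run, and ``--verbose-failed`` prints progress and
informational messages for just the failing jobs::

  $ ./test.py --rerun-failed --verbose-failed
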
TestTaxonomy
************

As mentioned above, tests are grouped into a number of broadly defined
classifications to allow users to selectively run tests to address the different
kinds of testing that need to be done.

* Build Verification Tests
* Unit Tests
* System Tests
* Examples
* Performance Tests

Moreover, each test is further classified according to the expected time needed to
run it. Tests are classified as:

* QUICK
* EXTENSIVE
* TAKES_FOREVER

Note that specifying EXTENSIVE fullness will also run tests in the QUICK category,
and specifying TAKES_FOREVER will run tests in the EXTENSIVE and QUICK categories.
By default, only QUICK tests are run.

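For example, to include the EXTENSIVE tests (and, implicitly, the QUICK
ones) in a run, use the ``-f`` option from the command summary above::

  $ ./test.py -f EXTENSIVE
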
Alternatively, one can specify the ``--only-fullness`` option, which runs tests
from just that particular category. This allows for tiered execution, both in
time-constrained jobs (starting from TAKES_FOREVER to give them more time,
then EXTENSIVE, and then QUICK), or to progressively run more expensive tests
(QUICK, then EXTENSIVE, then TAKES_FOREVER).

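For example, to run only the tests marked EXTENSIVE, without the QUICK
ones::

  $ ./test.py -of EXTENSIVE
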
As a rule of thumb, tests that must be run to ensure |ns3| coherence should be
QUICK (i.e., take a few seconds). Tests that could be skipped, but are nice to do,
can be EXTENSIVE; these are tests that typically need minutes. TAKES_FOREVER is
left for tests that take a really long time, on the order of several minutes or more.
The main classification goal is to be able to run the buildbots in a reasonable
time, and still be able to perform more extensive tests when needed.

Unit Tests
++++++++++

Unit tests are more involved tests that go into detail to make sure that a
piece of code works as advertised in isolation. There is really no reason
for this kind of test to be built into an |ns3| module. It turns out, for
example, that the unit tests for the object name service are about the same
size as the object name service code itself. Unit tests are tests that
check a single bit of functionality; they are not built into the |ns3| code,
but live in the same directory as the code they test. It is possible that
these tests check integration of multiple implementation files in a module
as well. The file ``src/core/test/names-test-suite.cc`` is an example of this kind
of test. The file ``src/network/test/pcap-file-test-suite.cc`` is another example
that uses a known good pcap file as a test vector file. This file is stored
locally in the ``src/network`` directory.

System Tests
++++++++++++

System tests are those that involve more than one module in the system. We
have some of this kind of test running in our current regression framework,
but they are typically overloaded examples. We provide a new place
for this kind of test in the directory ``src/test``. The file
``src/test/ns3tcp/ns3tcp-loss-test-suite.cc`` is an example of this kind of
test. It uses NSC TCP to test the |ns3| TCP implementation. Often there
will be test vectors required for this kind of test, and they are stored in
the directory where the test lives. For example,
``ns3tcp-loss-NewReno0-response-vectors.pcap`` is a file consisting of a number of TCP
headers that are used as the expected responses of the |ns3| TCP under test.

Note that Unit Tests are often preferable to System Tests, as they are more
|
2021-12-16 00:23:35 +01:00
|
|
|
independent from small changes in the modules that are not the goal of the test.

Examples
++++++++

The examples are tested by the framework to make sure they build and
will run. Limited checking is done on examples; currently the pcap
files are just written off into /tmp to be discarded. If the example
runs (doesn't crash) and the exit status is zero, the example will pass
the smoke test.

Performance Tests
+++++++++++++++++

Performance tests are those which exercise a particular part of the system
and determine whether the tests have executed to completion in a reasonable
time.

Running Tests
*************

Tests are typically run using the high-level ``test.py`` program. To get a list
of the available command-line options, run ``test.py --help``.

The test program ``test.py`` will run both tests and those examples that
have been added to the list to check. The difference between tests
and examples is as follows. Tests generally check that specific simulation
output or events conform to expected behavior. In contrast, the output
of examples is not checked, and the test program merely checks the exit
status of the example program to make sure that it runs without error.

Briefly, to run all tests, one must first enable tests during the configuration
stage, and also (optionally) enable examples if examples are to be checked:

::

  $ ./ns3 configure --enable-examples --enable-tests

Then, build |ns3|, and after it is built, just run ``test.py``. ``test.py -h``
will show a number of configuration options that modify the behavior
of ``test.py``.

The program ``test.py`` invokes, for C++ tests and examples, a lower-level
C++ program called ``test-runner`` to actually run the tests. As discussed
below, this ``test-runner`` can be a helpful way to debug tests.

There are two options to control the scope of tests that are run:

* ``--fullness`` (abbreviated ``-f``): Runs all tests up to that classification

  * ``./test.py`` (runs only QUICK)
  * ``./test.py -f QUICK`` (runs only QUICK)
  * ``./test.py -f EXTENSIVE`` (runs QUICK and EXTENSIVE)
  * ``./test.py -f TAKES_FOREVER`` (runs QUICK, EXTENSIVE, and TAKES_FOREVER)

* ``--only-fullness`` (abbreviated ``-of``): Runs only tests of that classification

  * ``./test.py -of QUICK`` (runs only QUICK)
  * ``./test.py -of EXTENSIVE`` (runs only EXTENSIVE)
  * ``./test.py -of TAKES_FOREVER`` (runs only TAKES_FOREVER)

The change in test scope can be further controlled by selecting individual
tests by using the ``-s`` option. For example, the following command
will cause all test cases of only one test suite to be executed:

::

  $ ./test.py -s wifi-primary-channels -f TAKES_FOREVER

Debugging Tests
***************

Debugging of the test programs is best performed by running the low-level
test-runner program. The test-runner is the bridge from generic Python
code to |ns3| code. It is written in C++ and uses the automatic test
discovery process in the |ns3| code to find and allow execution of all
of the various tests.

The main reason why ``test.py`` is not suitable for debugging is that
logging cannot be turned on using the ``NS_LOG`` environment variable
when ``test.py`` runs. This limitation does not apply to the test-runner
executable. Hence, if you want to see logging output from your tests, you
have to run them using the test-runner directly.

In order to execute the test-runner, you run it like any other |ns3| executable
-- using ``ns3``. To get a list of available options, you can type::

  $ ./ns3 run "test-runner --help"

You should see something like the following:

.. sourcecode:: text

  Usage: /mnt/c/tools/sources/ns-3-dev/build/utils/ns3-dev-test-runner-debug [OPTIONS]

  Options:
  --help                 : print these options
  --print-test-name-list : print the list of names of tests available
  --list                 : an alias for --print-test-name-list
  --print-test-types     : print the type of tests along with their names
  --print-test-type-list : print the list of types of tests available
  --print-temp-dir       : print name of temporary directory before running
                           the tests
  --test-type=TYPE       : process only tests of type TYPE
  --test-name=NAME       : process only test whose name matches NAME
  --suite=NAME           : an alias (here for compatibility reasons only)
                           for --test-name=NAME
  --assert-on-failure    : when a test fails, crash immediately (useful
                           when running under a debugger)
  --stop-on-failure      : when a test fails, stop immediately
  --fullness=FULLNESS    : choose the duration of tests to run: QUICK,
                           EXTENSIVE, or TAKES_FOREVER, where EXTENSIVE
                           includes QUICK and TAKES_FOREVER includes
                           QUICK and EXTENSIVE (only QUICK tests are
                           run by default)
  --only-fullness=FULLNESS : choose the duration of tests to run: QUICK,
                             EXTENSIVE, TAKES_FOREVER (only tests marked
                             with fullness will be executed)
  --verbose              : print details of test execution
  --xml                  : format test run output as xml
  --tempdir=DIR          : set temp dir for tests to store output files
  --datadir=DIR          : set data dir for tests to read reference files
  --out=FILE             : send test result to FILE instead of standard output
  --append=FILE          : append test result to FILE instead of standard output

There are a number of things available to you which will be familiar to you if
you have looked at ``test.py``. This should be expected since the
test-runner is just an interface between ``test.py`` and |ns3|. You
may notice that example-related commands are missing here. That is because
the examples are really not |ns3| tests. ``test.py`` runs them as if they
were tests in order to present a unified testing environment, but they are
really completely different and not to be found here.

The first new option that appears here, but not in ``test.py``, is the
``--assert-on-failure`` option. This option is useful when debugging a test
case when running under a debugger like ``gdb``. When selected, this option
tells the underlying test case to cause a segmentation violation if an error
is detected. This has the nice side-effect of causing program execution to
stop (break into the debugger) when an error is detected. If you are using
``gdb``, you could use this option something like,

::

  $ ./ns3 shell
  $ cd build/utils
  $ gdb ns3-dev-test-runner-debug
  $ run --suite=global-value --assert-on-failure

If an error is then found in the global-value test suite, a segfault would be
generated and the (source level) debugger would stop at the ``NS_TEST_ASSERT_MSG``
that detected the error.

To run one of the tests directly from the test-runner
using ``ns3``, you will need to specify the test suite to run.
So you could use the shell and do::

  $ ./ns3 run "test-runner --suite=pcap-file"

|ns3| logging is available when you run it this way, such as::

  $ NS_LOG="Packet" ./ns3 run "test-runner --suite=pcap-file"

Test output
+++++++++++

Many test suites need to write temporary files (such as pcap files)
in the process of running the tests. The tests then need a temporary directory
to write to. The Python test utility (``test.py``) will provide a temporary
directory automatically, but if run stand-alone this temporary directory must
be provided. It can be annoying to continually have to provide
a ``--tempdir``, so the test runner will figure one out for you if you don't
provide one. It first looks for environment variables named ``TMP`` and
``TEMP`` and uses those. If neither ``TMP`` nor ``TEMP`` is defined
it picks ``/tmp``. The code then tacks on an identifier indicating what
created the directory (ns-3) then the time (hh.mm.ss) followed by a large
random number. The test runner creates a directory of that name to be used as
the temporary directory. Temporary files then go into a directory that will be
named something like

::

  /tmp/ns-3.10.25.37.61537845

The time is provided as a hint so that you can relatively easily reconstruct
what directory was used if you need to go back and look at the files that were
placed in that directory.

Another class of output is test output like pcap traces that are generated
to compare to reference output. The test program will typically delete
these after the test suites all run. To disable the deletion of test
output, run ``test.py`` with the "retain" option:

::

  $ ./test.py -r

and test output can be found in the ``testpy-output/`` directory.

Reporting of test failures
++++++++++++++++++++++++++

When you run a test suite using the test-runner it will run the test
and report PASS or FAIL. To run more quietly, you need to specify an
output file to which the tests will write their status using the
``--out`` option. Try,

::

  $ ./ns3 run "test-runner --suite=pcap-file --out=myfile.txt"

Debugging test suite failures
+++++++++++++++++++++++++++++

To debug test crashes, such as

.. sourcecode:: text

  CRASH: TestSuite wifi-interference

You can access the underlying test-runner program via gdb as follows, and
then pass the "--basedir=`pwd`" argument to run (you can also pass other
arguments as needed, but basedir is the minimum needed)::

  $ ./ns3 run "test-runner" --command-template="gdb %s"
  Waf: Entering directory `/home/tomh/hg/sep09/ns-3-allinone/ns-3-dev-678/build'
  Waf: Leaving directory `/home/tomh/hg/sep09/ns-3-allinone/ns-3-dev-678/build'
  'build' finished successfully (0.380s)
  GNU gdb 6.8-debian
  Copyright (C) 2008 Free Software Foundation, Inc.
  License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
  This is free software: you are free to change and redistribute it.
  There is NO WARRANTY, to the extent permitted by law. Type "show copying"
  and "show warranty" for details.
  This GDB was configured as "x86_64-linux-gnu"...
  (gdb) r --suite=wifi-interference
  Starting program: <..>/build/utils/ns3-dev-test-runner-debug --suite=wifi-interference
  [Thread debugging using libthread_db enabled]
  assert failed. file=../src/core/model/type-id.cc, line=138, cond="uid <= m_information.size() && uid != 0"
  ...

Here is another example of how to use valgrind to debug a memory problem
such as::

  VALGR: TestSuite devices-mesh-dot11s-regression

  $ ./ns3 run test-runner --command-template="valgrind %s --suite=devices-mesh-dot11s-regression"

Class TestRunner
****************

The executables that run dedicated test programs use a TestRunner class. This
class provides for automatic test registration and listing, as well as a way
to execute the individual tests. Individual test suites use C++ global
constructors to add themselves to a collection of test suites managed by the
test runner. The test runner is used to list all of the available tests and to
select a test to be run. This is a quite simple class that provides three
static methods to add and get test suites to and from a collection of tests.
See the doxygen for class ``ns3::TestRunner`` for details.

Test Suite
**********

All |ns3| tests are classified into Test Suites and Test Cases. A
test suite is a collection of test cases that completely exercise a given kind
of functionality. As described above, test suites can be classified as,

* Build Verification Tests
* Unit Tests
* System Tests
* Examples
* Examples-as-Tests
* Performance Tests

This classification is exported from the TestSuite class. This class is quite
simple, existing only as a place to export this type and to accumulate test
cases. From a user perspective, in order to create a new TestSuite in the
system one only has to define a new class that inherits from class ``TestSuite``
and perform two duties: give the suite a name and add its test cases.

The following code will define a new class that can be run by ``test.py``
as a "unit" test with the display name ``my-test-suite-name``.

.. sourcecode:: cpp

  class MyTestSuite : public TestSuite
  {
  public:
    MyTestSuite();
  };

  MyTestSuite::MyTestSuite()
    : TestSuite("my-test-suite-name", Type::UNIT)
  {
    AddTestCase(new MyTestCase, TestCase::Duration::QUICK);
  }

  static MyTestSuite myTestSuite;

The base class takes care of all of the registration and reporting required to
be a good citizen in the test framework.

Avoid putting initialization logic into the test suite or test case
constructors. This is because an instance of the test suite is created at run
time (due to the static variable above) regardless of whether the test is
being run or not. Instead, the TestCase provides a virtual ``DoSetup`` method
that can be specialized to perform setup before ``DoRun`` is called.

Test Case
*********

Individual tests are created using a TestCase class. Common models for the use
of a test case include "one test case per feature", and "one test case per
method." Mixtures of these models may be used.

In order to create a new test case in the system, all one has to do is to
inherit from the ``TestCase`` base class, provide a constructor that gives the
test case a name, and override the ``DoRun`` method to run the test.
Optionally, also override the ``DoSetup`` method.

.. sourcecode:: cpp

  class MyTestCase : public TestCase
  {
  public:
    MyTestCase();

  private:
    virtual void DoSetup();
    virtual void DoRun();
  };

  MyTestCase::MyTestCase()
    : TestCase("Check some bit of functionality")
  {
  }

  void
  MyTestCase::DoRun()
  {
    NS_TEST_ASSERT_MSG_EQ(true, true, "Some failure message");
  }

Utilities
*********

There are a number of utilities of various kinds that are also part of the
testing framework. Examples include a generalized pcap file useful for
storing test vectors; a generic container useful for transient storage of
test vectors during test execution; and tools for generating presentations
based on validation and verification testing results.

These utilities are not documented here, but for example, please see
how the TCP tests found in ``src/test/ns3tcp/`` use pcap files and reference
output.