Testing document edits
@@ -25,20 +25,23 @@ CHAPTERS = \
 %.png : %.dia; $(DIA) -t png $< -e $@
 %.pdf : %.eps; $(EPSTOPDF) $< -o=$@
 
-all: $(IMAGES) testing.pdf testing.html testing/testing.html
+all: $(IMAGES) version testing.pdf testing.html testing/testing.html
 
-testing.pdf: $(IMAGES) $(CHAPTERS)
+testing.pdf: version $(IMAGES) $(CHAPTERS)
 	$(TEXI2PDF) testing.texi
 
-testing.html: $(IMAGES) $(CHAPTERS)
+testing.html: version $(IMAGES) $(CHAPTERS)
 	$(TEXI2HTML) ${CSS} testing.texi
 
-testing/testing.html: $(IMAGES) $(CHAPTERS)
+testing/testing.html: version $(IMAGES) $(CHAPTERS)
 	$(TEXI2HTML) ${CSS} ${SPLIT} testing.texi
 
 figures-clean:
 	rm -rf $(IMAGES)
 
+version:
+	echo -n "ns-" > VERSION-PREFIX; cat VERSION-PREFIX ../../VERSION > VERSION; rm -rf VERSION-PREFIX
+
 clean: figures-clean
 	rm -rf testing.aux testing.cp testing.cps testing.fn testing.ky testing.pg
-	rm -rf testing.tp testing.vr testing.toc testing.log testing.pdf testing.html testing/
+	rm -rf testing.tp testing.vr testing.toc testing.log testing.pdf testing.html testing/ VERSION missfont.log
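The new @code{version} rule above splices an @samp{ns-} prefix onto the tree-level VERSION file via a temporary file. A minimal shell sketch of what it does, run in a throwaway directory (the @samp{3.4} contents and the @samp{demo/} layout are made up for illustration; @code{printf} stands in for the non-portable @code{echo -n}):

```shell
# Recreate the Makefile's "version" rule by hand in a scratch tree.
mkdir -p demo/doc/testing
echo "3.4" > demo/VERSION          # stand-in for the real top-level VERSION file
cd demo/doc/testing
printf 'ns-' > VERSION-PREFIX      # portable equivalent of `echo -n "ns-"`
cat VERSION-PREFIX ../../VERSION > VERSION
rm -f VERSION-PREFIX               # it is a plain file, so -r is unnecessary
cat VERSION                        # prints: ns-3.4
```

The generated VERSION file is what @code{@@include VERSION} pulls into the Texinfo title page.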
@@ -5,6 +5,11 @@
 @node Background
 @chapter Background
 
+@cartouche
+@emph{This chapter may be skipped by readers familiar with the basics of
+software testing.}
+@end cartouche
+
 Writing defect-free software is a difficult proposition. There are many
 dimensions to the problem and there is much confusion regarding what is
 meant by different terms in different contexts. We have found it worthwhile
@@ -44,7 +49,7 @@ must be present for the product to succeed. From a testing perspective, some
 of these qualities that must be addressed are that @command{ns-3} must be
 ``correct,'' ``robust,'' ``performant'' and ``maintainable.'' Ideally there
 should be metrics for each of these dimensions that are checked by the tests
-to identify when the product fails to meed its expectations / requirements.
+to identify when the product fails to meet its expectations / requirements.
 
 @node Correctness
 @section Correctness
@@ -55,15 +60,18 @@ something, the simulation should faithfully represent some physical entity or
 process to a specified accuracy and precision.
 
 It turns out that there are two perspectives from which one can view
-correctness. Verifying that a particular process is implemented according
-to its specification is generically called verification. The process of
-deciding that the specification is correct is generically called validation.
+correctness. Verifying that a particular model is implemented according
+to its specification is generically called @emph{verification}. The process of
+deciding that the model is correct for its intended use is generically called
+@emph{validation}.
 
 @node ValidationAndVerification
 @section Validation and Verification
 
 A computer model is a mathematical or logical representation of something. It
-can represent a vehicle, a frog or a networking card. Models can also represent
+can represent a vehicle, an elephant (see
+@uref{http://www.simutools.org,,David Harel's talk about modeling an
+elephant at SIMUTools 2009}, or a networking card. Models can also represent
 processes such as global warming, freeway traffic flow or a specification of a
 networking protocol. Models can be completely faithful representations of a
 logical process specification, but they necessarily can never completely
@@ -141,14 +149,14 @@ zero-terminated string representation of an integer, a dirty test might provide
 an unterminated string of random characters to verify that the system does not
 crash as a result of this unexpected input. Unfortunately, detecting such
 ``dirty'' input and taking preventive measures to ensure the system does not
-fail catasrophically can require a huge amount of development overhead. In
+fail catastrophically can require a huge amount of development overhead. In
 order to reduce development time, a decision was taken early on in the project
 to minimize the amount of parameter validation and error handling in the
 @command{ns-3} codebase. For this reason, we do not spend much time on dirty
 testing -- it would just uncover the results of the design decision we know
 we took.
 
-We do want to deonstrate that @command{ns-3} software does work across some set
+We do want to demonstrate that @command{ns-3} software does work across some set
 of conditions. We borrow a couple of definitions to narrow this down a bit.
 The @emph{domain of applicability} is a set of prescribed conditions for which
 the model has been tested, compared against reality to the extent possible, and
@@ -158,7 +166,7 @@ the computerized model and reality within a domain of applicability.
 The @command{ns-3} testing environment provides tools to allow for setting up
 and running test environments over multiple systems (buildbot) and provides
 classes to encourage clean tests to verify the operation of the system over the
-expected ``domain of applicability'' and ``range of accuraccy.''
+expected ``domain of applicability'' and ``range of accuracy.''
 
 @node Performant
 @section Performant
@@ -171,7 +179,7 @@ This is really about the broad subject of software performance testing. One of
 the key things that is done is to compare two systems to find which performs
 better (cf benchmarks). This is used to demonstrate that, for example,
 @code{ns-3} can perform a basic kind of simulation at least as fast as a
-competing product, or can be used to identify parts of the system that perform
+competing tool, or can be used to identify parts of the system that perform
 badly.
 
 In the @code{ns-3} test framework, we provide support for timing various kinds
@@ -187,7 +195,8 @@ of tests for the entire system to ensure that it remains valid and verified
 over its lifetime.
 
 When a feature stops functioning as intended after some kind of change to the
-system is integrated, it is called generically a regression. Originally the
+system is integrated, it is called generically a @emph{regression}.
+Originally the
 term regression referred to a change that caused a previously fixed bug to
 reappear, but the term has evolved to describe any kind of change that breaks
 existing functionality. There are many kinds of regressions that may occur
@@ -5,4 +5,34 @@
 @node How to write tests
 @chapter How to write tests
 
-To be completed.
+A primary goal of the ns-3 project is to help users to improve the
+validity and credibility of their results. There are many elements
+to obtaining valid models and simulations, and testing is a major
+component. If you contribute models or examples to ns-3, you may
+be asked to contribute test code. Models that you contribute will be
+used for many years by other people, who probably have no idea upon
+first glance whether the model is correct. The test code that you
+write for your model will help to avoid future regressions in
+the output and will aid future users in understanding the validity
+and bounds of applicability of your models.
+
+There are many ways to test that a model is valid. In this chapter,
+we hope to cover some common cases that can be used as a guide to
+writing new tests.
+
+@cartouche
+@emph{The rest of this chapter remains to be written}
+@end cartouche
+
+@section Testing for boolean outcomes
+
+@section Testing outcomes when randomness is involved
+
+@section Testing output data against a known distribution
+
+@section Providing non-trivial input vectors of data
+
+@section Storing and referencing non-trivial output data
+
+@section Presenting your output test data
+
@@ -10,7 +10,14 @@ software.
 
 This document provides
 @itemize @bullet
-@item a description of the ns-3 testing framework;
-@item a guide to model developers or new model contributors for how to write tests;
-@item validation and verification results reported to date.
+@item background about terminology and software testing (Chapter 2);
+@item a description of the ns-3 testing framework (Chapter 3);
+@item a guide to model developers or new model contributors for how to write tests (Chapter 4);
+@item validation and verification results reported to date (Chapters 5-onward).
 @end itemize
+
+In brief, the first three chapters should be read by ns developers and
+contributors who need to understand how to contribute test code and
+validated programs, and
+the remainder of the document provides space for people to report on what
+aspects of selected models have been validated.
@@ -11,6 +11,7 @@ This chapter describes validation of ns-3 propagation loss models.
 From source: @uref{http://www.scribd.com/doc/6650712/Wireless-CommunicationsPrinciples-and-Practice-Theodore-S,, Wireless Communications-Principles and Practice ,Theodore S Rappaport pg. 71 }
 
 Given equation:
+@smallformat
 @verbatim
 Pr = Pt*Gt*Gr*lmb^2/((4*pi)^2*d^2*L)
@@ -33,12 +34,14 @@ Distance :: Pr :: SNR
 5000 1.99306e-13W 0.451571
 6000 1.38407e-13W 0.313591
 @end verbatim
+@end smallformat
 
 @subsection Validation test
 
 Test program available online at: @uref{http://xxx.xxx.com,,}
 
 Taken at default settings (packetSize = 1000, numPackets = 1, lambda = 0.125, 802.11b at 2.4GHz):
+@smallformat
 @verbatim
 Distance :: Pr :: SNR
 100 4.98265e-10W 1128.93
@@ -52,6 +55,7 @@ Distance :: Pr :: SNR
 7000 1.01687e-13W 0.230393
 8000 7.78539e-14W 0.176395
 @end verbatim
+@end smallformat
 
 @subsection Discussion
 
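The Friis figures recorded in the hunks above can be sanity-checked without ns-3: free-space received power falls off as 1/d^2, so the tabulated values should satisfy Pr(100 m)/Pr(5000 m) = (5000/100)^2 = 2500, which matches 4.98265e-10 / 1.99306e-13 to about five digits. A short sketch of the equation as given (the unit transmit power and antenna gains are arbitrary stand-ins; they cancel in the ratio):

```python
import math

def friis_rx_power(pt, gt, gr, wavelength, distance, system_loss=1.0):
    # Pr = Pt*Gt*Gr*lmb^2 / ((4*pi)^2 * d^2 * L), all quantities in linear units
    return (pt * gt * gr * wavelength ** 2
            / ((4 * math.pi) ** 2 * distance ** 2 * system_loss))

# lambda = 0.125 m corresponds to the 2.4 GHz settings quoted above.
ratio = (friis_rx_power(1.0, 1.0, 1.0, 0.125, 100.0)
         / friis_rx_power(1.0, 1.0, 1.0, 0.125, 5000.0))
print(ratio)  # ~2500, the expected (5000/100)^2 falloff
```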
@@ -64,6 +68,7 @@ As can be seen, the SNR outputted from the simulator, and the SNR computed from
 From source: @uref{http://www.plextek.co.uk/papers/aps2005mcw.pdf,, Urban Propagation Measurements and Statistical Path Loss Model at 3.5 GHz, Marcus C. Walden, Frank J. Rowsell}
 
 Given equation:
+@smallformat
 @verbatim
 PL{dBm} = PL(d0) + 10*n*log(d/d0) + Xs
@@ -89,12 +94,14 @@ Distance :: Pr :: SNR
 500 3.98612e-14 .090314
 1000 4.98265e-15 .011289
 @end verbatim
+@end smallformat
 
 @subsection Validation test
 
 Test program available online at: @uref{http://xxx.xxx.com,,}
 
 Taken at default settings (packetSize = 1000, numPackets = 1, exponent = 3, reference loss = 46.6777, 802.11b at 2.4GHz)
+@smallformat
 @verbatim
 Distance :: Pr :: snr
 10 4.98471e-9 11293.9
@@ -107,7 +114,7 @@ Distance :: Pr :: snr
 500 3.98777e-14 0.0903516
 1000 4.98471e-15 0.0112939
 @end verbatim
 
 @end smallformat
 
 @subsection Discussion
 There is a ~.04% error between these results. I do not believe this is
 
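The log-distance rows above admit the same kind of cross-check: with exponent n = 3, doubling the distance costs 10*3*log10(2), roughly 9.03 dB, i.e. a factor of 2^3 = 8 in linear power, and indeed 3.98612e-14 / 4.98265e-15 is about 8. A sketch of the path-loss equation as given (the 0 dBm transmit power and d0 = 1 m reference distance are illustrative assumptions; log here is log10, as is usual for dB path-loss models):

```python
import math

def log_distance_rx_power(pt_dbm, ref_loss_db, exponent, distance, d0=1.0):
    # PL(d) = PL(d0) + 10*n*log10(d/d0); received power in dBm is Pt - PL(d)
    path_loss_db = ref_loss_db + 10.0 * exponent * math.log10(distance / d0)
    return pt_dbm - path_loss_db

# The reference loss 46.6777 dB matches the quoted default settings.
drop_db = (log_distance_rx_power(0.0, 46.6777, 3, 500.0)
           - log_distance_rx_power(0.0, 46.6777, 3, 1000.0))
print(10 ** (drop_db / 10.0))  # ~8, the expected 2^3 falloff per doubling
```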
doc/testing/random-variables.texi (new file)
@@ -0,0 +1,6 @@
+@node Random Variables
+@chapter Random Variables
+
+@cartouche
+@emph{Write me}
+@end cartouche
doc/testing/references.texi (new file)
@@ -0,0 +1,9 @@
+@node References
+@chapter References
+
+The following work about the validation of ns-3 models is published
+elsewhere.
+
+@cartouche
+@emph{Write me}
+@end cartouche
@@ -2,14 +2,30 @@
 @c Testing framework
 @c ========================================================================
 
+@unnumbered Part I: Testing
+
 @node TestingFramework
 @chapter Testing Framework
 
+ns-3 consists of a simulation core engine, a set of models, example programs,
+and tests. Over time, new contributors contribute models, tests, and
+examples. A Python test program @samp{test.py} serves as the test
+execution manager; @code{test.py} can run test code and examples to
+look for regressions, can output the results into a number of forms, and
+can manage code coverage analysis tools. On top of this, we layer
+@samp{Buildbots} that are automated build robots that perform
+robustness testing by running the test framework on different systems
+and with different configuration options.
+
+@cartouche
+Insert figure showing the components here
+@end cartouche
+
 @node BuildBots
 @section Buildbots
 
-The @command{ns-3} testing framework is composed of several major pieces. At
-the highest level are the buildbots (build robots). If you are unfamiliar with
+At the highest level of ns-3 testing are the buildbots (build robots).
+If you are unfamiliar with
 this system look at @uref{http://djmitche.github.com/buildbot/docs/0.7.11/}.
 This is an open-source automated system that allows @command{ns-3} to be rebuilt
 and tested each time something has changed. By running the buildbots on a number
@@ -21,6 +37,7 @@ than to read its messages regarding test results. If a failure is detected in
 one of the automated build and test jobs, the buildbot will send an email to the
 @emph{ns-developers} mailing list. This email will look something like:
 
+@smallformat
 @verbatim
 The Buildbot has detected a new failure of osx-ppc-g++-4.2 on NsNam.
 Full details are available at:
@@ -35,11 +52,12 @@ one of the automated build and test jobs, the buildbot will send an email to the
 Build Source Stamp: HEAD
 Blamelist:
 
-BUILD FAILED: failed shell_5 shell_6 shell_7 shell_8 shell_9 shell_10 shell_11 shell_12 shell_13 shell_14 shell_15
+BUILD FAILED: failed shell_5 shell_6 shell_7 shell_8 shell_9 shell_10 shell_11 shell_12
 
 sincerely,
  -The Buildbot
 @end verbatim
+@end smallformat
 
 In the full details URL shown in the email, one can search for the keyword
 @code{failed} and select the @code{stdio} link for the corresponding step to see
@@ -68,6 +86,7 @@ back in a very concise form. Running the command,
 will result in a number of @code{PASS}, @code{FAIL}, @code{CRASH} or @code{SKIP}
 indications followed by the kind of test that was run and its display name.
 
+@smallformat
 @verbatim
 Waf: Entering directory `/home/craigdo/repos/ns-3-allinone-test/ns-3-dev/build'
 Waf: Leaving directory `/home/craigdo/repos/ns-3-allinone-test/ns-3-dev/build'
@@ -82,6 +101,7 @@ indications followed by the kind of test that was run and its display name.
 PASS: Example csma-broadcast
 PASS: Example csma-multicast
 @end verbatim
+@end smallformat
 
 This mode is indented to be used by users who are interested in determining if
 their distribution is working correctly, and by developers who are interested
@@ -134,6 +154,7 @@ the file ``results.txt''.
 
 You should find something similar to the following in that file:
 
+@smallformat
 @verbatim
 FAIL: Test Suite ``ns3-wifi-propagation-loss-models'' (real 0.02 user 0.01 system 0.00)
 PASS: Test Case "Check ... Friis ... model ..." (real 0.01 user 0.00 system 0.00)
@@ -146,6 +167,7 @@ FAIL: Test Suite ``ns3-wifi-propagation-loss-models'' (real 0.02 user 0.01 syste
 File: ../src/test/ns3wifi/propagation-loss-models-test-suite.cc
 Line: 360
 @end verbatim
+@end smallformat
 
 Notice that the Test Suite is composed of two Test Cases. The first test case
 checked the Friis propagation loss model and passed. The second test case
@@ -158,14 +180,14 @@ If you desire, you could just as easily have written an HTML file using the
 @code{--html} option as described above.
 
 Typically a user will run all tests at least once after downloading
-@command{ns-3} to ensure that his or her enviornment has been built correctly
+@command{ns-3} to ensure that his or her environment has been built correctly
 and is generating correct results according to the test suites. Developers
 will typically run the test suites before and after making a change to ensure
 that they have not introduced a regression with their changes. In this case,
 developers may not want to run all tests, but only a subset. For example,
 the developer might only want to run the unit tests periodically while making
 changes to a repository. In this case, @code{test.py} can be told to constrain
-the types of tests being run to a particular class of tests. The follwoing
+the types of tests being run to a particular class of tests. The following
 command will result in only the unit tests being run:
 
 @verbatim
@@ -188,6 +210,7 @@ to be listed. The following command
 
 will result in the following list being displayed:
 
+@smallformat
 @verbatim
 Waf: Entering directory `/home/craigdo/repos/ns-3-allinone-test/ns-3-dev/build'
 Waf: Leaving directory `/home/craigdo/repos/ns-3-allinone-test/ns-3-dev/build'
@@ -198,6 +221,7 @@ will result in the following list being displayed:
 example: Examples (to see if example programs run successfully)
 performance: Performance Tests (check to see if the system is as fast as expected)
 @end verbatim
+@end smallformat
 
 This list is displayed in increasing order of complexity of the tests. Any of these
 kinds of test can be provided as a constraint using the @code{--constraint} option.
@@ -211,6 +235,7 @@ to be listed. The following command,
 
 will result in a list of the test suite being displayed, similar to :
 
+@smallformat
 @verbatim
 Waf: Entering directory `/home/craigdo/repos/ns-3-allinone-test/ns-3-dev/build'
 Waf: Leaving directory `/home/craigdo/repos/ns-3-allinone-test/ns-3-dev/build'
@@ -222,6 +247,7 @@ will result in a list of the test suite being displayed, similar to :
 object-name-service
 random-number-generators
 @end verbatim
+@end smallformat
 
 Any of these listed suites can be selected to be run by itself using the
 @code{--suite} option as shown above.
@@ -308,7 +334,7 @@ are examples of this kind of test.
 @subsection Unit Tests
 
 Unit tests are more involved tests that go into detail to make sure that a
-piece of code works as advertized in isolation. There is really no reason
+piece of code works as advertised in isolation. There is really no reason
 for this kind of test to be built into an ns-3 module. It turns out, for
 example, that the unit tests for the object name service are about the same
 size as the object name service code itself. Unit tests are tests that
@@ -325,8 +351,8 @@ locally in the src/common directory.
 
 System tests are those that involve more than one module in the system. We
 have lots of this kind of test running in our current regression framework,
-but they are overloaded examples. We provide a new place for this kind of
-test in the directory ``src/tests''. The file
+but they are typically overloaded examples. We provide a new place
+for this kind of test in the directory ``src/test''. The file
 src/test/ns3tcp/ns3-interop-test-suite.cc is an example of this kind of
 test. It uses NSC TCP to test the ns-3 TCP implementation. Often there
 will be test vectors required for this kind of test, and they are stored in
|
||||
|
||||
You should see something like the following:
|
||||
|
||||
@smallformat
|
||||
@verbatim
|
||||
Waf: Entering directory `/home/craigdo/repos/ns-3-allinone-test/ns-3-dev/build'
|
||||
Waf: Leaving directory `/home/craigdo/repos/ns-3-allinone-test/ns-3-dev/build'
|
||||
@@ -389,6 +416,7 @@ You should see something like the following:
|
||||
--suite=suite-name: Run the test suite named ``suite-name''
|
||||
--verbose: Turn on messages in the run test suites
|
||||
@end verbatim
|
||||
@end smallformat
|
||||
|
||||
There are a number of things available to you which will be familiar to you if
|
||||
you have looked at @command{test.py}. This should be expected since the test-
|
||||
@@ -417,12 +445,15 @@ file to which the tests will write their XML status using the @code{--out}
 option. You need to be careful interpreting the results because the test
 suites will @emph{append} results onto this file. Try,
 
+@smallformat
 @verbatim
 ./waf --run "test-runner --basedir=`pwd` --suite=pcap-file-object --out=myfile.xml''
 @end verbatim
+@end smallformat
 
 If you look at the file @code{myfile.xml} you should see something like,
 
+@smallformat
 @verbatim
 <TestSuite>
 <SuiteName>pcap-file-object</SuiteName>
@@ -460,6 +491,7 @@ If you look at the file @code{myfile.xml} you should see something like,
 <SuiteTime>real 0.00 user 0.00 system 0.00</SuiteTime>
 </TestSuite>
 @end verbatim
+@end smallformat
 
 If you are familiar with XML this should be fairly self-explanatory. It is
 also not a complete XML file since test suites are designed to have their
@@ -471,12 +503,13 @@ section.
 
 The executables that run dedicated test programs use a TestRunner class. This
 class provides for automatic test registration and listing, as well as a way to
-exeute the individual tests. Individual test suites use C++ global constructors
+execute the individual tests. Individual test suites use C++ global
+constructors
 to add themselves to a collection of test suites managed by the test runner.
 The test runner is used to list all of the available tests and to select a test
 to be run. This is a quite simple class that provides three static methods to
 provide or Adding and Getting test suites to a collection of tests. See the
-doxygen for class @code{ns3::TestRunner} for details
+doxygen for class @code{ns3::TestRunner} for details.
 
 @node TestSuite
 @section Test Suite
@@ -1,18 +1,18 @@
 \input texinfo @c -*-texinfo-*-
 @c %**start of header
-@setfilename ns-3.info
-@settitle ns-3 manual
+@setfilename ns-3-testing.info
+@settitle ns-3 testing and validation
 @c @setchapternewpage odd
 @c %**end of header
 
 @ifinfo
 Documentation for the @command{ns-3} project is available in
-several documents and the wiki:
+Doxygen, several documents, and the wiki:
 @itemize @bullet
 @item @uref{http://www.nsnam.org/doxygen/index.html,,ns-3 Doxygen/Manual}: Documentation of the public APIs of the simulator
-@item @uref{http://www.nsnam.org/tutorial/index.html,,ns-3 Tutorial}
+@item @uref{http://www.nsnam.org/doc//index.html,,ns-3 Tutorial}
 @item Reference Manual
 @item ns-3 Testing and Validation (this document)
 @item @uref{http://www.nsnam.org/wiki/index.php,, ns-3 wiki}
 @end itemize
@@ -26,12 +26,12 @@ the document should be discussed on the ns-developers@@isi.edu mailing list.
 
 This is an @command{ns-3} reference manual.
 Primary documentation for the @command{ns-3} project is available in
-four forms:
+Doxygen, several documents, and the wiki:
 @itemize @bullet
 @item @uref{http://www.nsnam.org/doxygen/index.html,,ns-3 Doxygen}: Documentation of the public APIs of the simulator
 @item @uref{http://www.nsnam.org/docs/tutorial/index.html,,ns-3 Tutorial}
 @item @uref{http://www.nsnam.org/docs/manual/index.html,,ns-3 Manual}
-@item Testing and Validation (this document)
+@item ns-3 Testing and Validation (this document)
 @item @uref{http://www.nsnam.org/wiki/index.php,, ns-3 wiki}
 @end itemize
@@ -58,6 +58,9 @@ along with this program. If not, see @uref{http://www.gnu.org/licenses/}.
 @title ns-3 Testing and Validation
 @author ns-3 project
 @author feedback: ns-developers@@isi.edu
+
+@b{Simulator version: }
+@include VERSION
 @today{}
 
 @c @page
@@ -78,22 +81,30 @@ along with this program. If not, see @uref{http://www.gnu.org/licenses/}.
 For a pdf version of this document,
 see @uref{http://www.nsnam.org/docs/testing.pdf}.
 
+Simulator version:
+@include VERSION
 
 @insertcopying
 @end ifnottex
 
 @menu
 * Overview::
 * Background::
-* Testing framework::
+* TestingFramework::
 * How to write tests::
+* Random Variables::
+* Propagation Loss Models::
+* References::
 @end menu
 
 @include overview.texi
 @include background.texi
 @include testing-framework.texi
 @include how-to-write-tests.texi
 @include validation.texi
+@include random-variables.texi
+@include propagation-loss.texi
+@include references.texi
 
 @printindex cp
doc/testing/validation.texi (new file)
@@ -0,0 +1,8 @@
+@unnumbered Part II: Validation
+
+The goal of the remainder of this document is to list some validation
+results for the models of ns-3. This extends beyond test code; for instance,
+results that compare simulation results with actual field experiments or
+with textbook results, or even with the results of other simulators,
+are welcome here. If your validation results exist in another published
+report or web location, please add a reference here.