Merge with code.nsnam.org

Pavel Boyko
2009-09-14 10:19:55 +04:00
39 changed files with 16686 additions and 21 deletions

doc/testing/Makefile Normal file

@@ -0,0 +1,44 @@
TEXI2HTML = texi2html
TEXI2PDF = texi2dvi --pdf
EPSTOPDF = epstopdf
DIA = dia
CONVERT = convert
CSS = --css-include=testing.css
SPLIT = --split section
FIGURES = figures
VPATH = $(FIGURES)

IMAGES_EPS =
IMAGES_PNG = ${IMAGES_EPS:.eps=.png}
IMAGES_PDF = ${IMAGES_EPS:.eps=.pdf}
IMAGES = $(IMAGES_EPS) $(IMAGES_PNG) $(IMAGES_PDF)

CHAPTERS = \
	testing.texi \
	overview.texi \
	background.texi \
	testing-framework.texi \
	how-to-write-tests.texi \
	propagation-loss.texi

%.eps : %.dia; $(DIA) -t eps $< -e $@
%.png : %.dia; $(DIA) -t png $< -e $@
%.pdf : %.eps; $(EPSTOPDF) $< -o=$@

all: $(IMAGES) testing.pdf testing.html testing/testing.html

testing.pdf: $(IMAGES) $(CHAPTERS)
	$(TEXI2PDF) testing.texi

testing.html: $(IMAGES) $(CHAPTERS)
	$(TEXI2HTML) ${CSS} testing.texi

testing/testing.html: $(IMAGES) $(CHAPTERS)
	$(TEXI2HTML) ${CSS} ${SPLIT} testing.texi

figures-clean:
	rm -rf $(IMAGES)

clean: figures-clean
	rm -rf testing.aux testing.cp testing.cps testing.fn testing.ky testing.pg
	rm -rf testing.tp testing.vr testing.toc testing.log testing.pdf testing.html testing/
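# Typical usage (a sketch; assumes texi2html, texi2dvi/texinfo, dia and
# epstopdf are available on the PATH):
#   make         # build testing.pdf, testing.html and the split HTML tree
#   make clean   # remove all generated output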

doc/testing/background.texi Normal file

@@ -0,0 +1,215 @@
@c ========================================================================
@c Background
@c ========================================================================
@node Background
@chapter Background
Writing defect-free software is a difficult proposition. There are many
dimensions to the problem and there is much confusion regarding what is
meant by different terms in different contexts. We have found it worthwhile
to spend a little time reviewing the subject and defining some terms.
Software testing may be loosely defined as the process of executing a program
with the intent of finding errors. When one enters a discussion regarding
software testing, it quickly becomes apparent that there are many distinct
mind-sets with which one can approach the subject.
For example, one could break the process into broad functional categories
like ``correctness testing,'' ``performance testing,'' ``robustness testing''
and ``security testing.'' Another way to look at the problem is by life-cycle:
``requirements testing,'' ``design testing,'' ``acceptance testing,'' and
``maintenance testing.'' Yet another view is by the scope of the tested system.
In this case one may speak of ``unit testing,'' ``component testing,''
``integration testing,'' and ``system testing.'' These terms are also not
standardized in any way, and so ``maintenance testing'' and ``regression
testing'' may be heard interchangeably. Additionally, these terms are often
misused.
There are also a number of different philosophical approaches to software
testing. For example, some organizations advocate writing test programs
before actually implementing the desired software, yielding ``test-driven
development.'' Some organizations advocate testing from a customer perspective
as soon as possible, following a parallel with the agile development process:
``test early and test often.'' This is sometimes called ``agile testing.'' It
seems that there is at least one approach to testing for every development
methodology.
The @command{ns-3} project is not in the business of advocating for any one of
these processes, but the project as a whole has requirements that help inform
the test process.
Like all major software products, @command{ns-3} has a number of qualities that
must be present for the product to succeed. From a testing perspective, some
of these qualities that must be addressed are that @command{ns-3} must be
``correct,'' ``robust,'' ``performant'' and ``maintainable.'' Ideally there
should be metrics for each of these dimensions that are checked by the tests
to identify when the product fails to meet its expectations / requirements.
@node Correctness
@section Correctness
The essential purpose of testing is to determine that a piece of software
behaves ``correctly.'' For @command{ns-3} this means that if we simulate
something, the simulation should faithfully represent some physical entity or
process to a specified accuracy and precision.
It turns out that there are two perspectives from which one can view
correctness. Verifying that a particular process is implemented according
to its specification is generically called verification. The process of
deciding that the specification is correct is generically called validation.
@node ValidationAndVerification
@section Validation and Verification
A computer model is a mathematical or logical representation of something. It
can represent a vehicle, a frog or a networking card. Models can also represent
processes such as global warming, freeway traffic flow or a specification of a
networking protocol. Models can be completely faithful representations of a
logical process specification, but they necessarily can never completely
simulate a physical object or process. In most cases, a number of
simplifications are made to the model to make simulation computationally
tractable.
Every model has a @emph{target system} that it is attempting to simulate. The
first step in creating a simulation model is to identify this target system and
the level of detail and accuracy that the simulation is desired to reproduce.
In the case of a logical process, the target system may be identified as ``TCP
as defined by RFC 793.'' In this case, it will probably be desirable to create
a model that completely and faithfully reproduces RFC 793. In the case of a
physical process this will not be possible. If, for example, you would like to
simulate a wireless networking card, you may determine that you need, ``an
accurate MAC-level implementation of the 802.11 specification and [...] a
not-so-slow PHY-level model of the 802.11a specification.''
Once this is done, one can develop an abstract model of the target system. This
is typically an exercise in managing the tradeoffs between complexity, resource
requirements and accuracy. The process of developing an abstract model has been
called @emph{model qualification} in the literature. In the case of a TCP
protocol, this process results in a design for a collection of objects,
interactions and behaviors that will fully implement RFC 793 in @command{ns-3}.
In the case of the wireless card, this process results in a number of tradeoffs
to allow the physical layer to be simulated and the design of a network device
and channel for ns-3, along with the desired objects, interactions and behaviors.
This abstract model is then developed into an @command{ns-3} model that
implements the abstract model as a computer program. The process of getting the
implementation to agree with the abstract model is called @emph{model
verification} in the literature.
The process so far is open loop. What remains is to make a determination that a
given ns-3 model has some connection to some reality -- that a model is an
accurate representation of a real system, whether a logical process or a physical
entity.
If one is going to use a simulation model to try to predict how some real
system will behave, there must be some reason to believe the results --
i.e., one must be able to trust that an inference made from the model
translates into a correct prediction for the real system. The process of
getting the ns-3 model
behavior to agree with the desired target system behavior as defined by the model
qualification process is called @emph{model validation} in the literature. In the
case of a TCP implementation, you may want to compare the behavior of your ns-3
TCP model to some reference implementation in order to validate your model. In
the case of a wireless physical layer simulation, you may want to compare the
behavior of your model to that of real hardware in a controlled setting.
The @command{ns-3} testing environment provides tools to allow for both model
validation and testing, and encourages the publication of validation results.
@node Robustness
@section Robustness
Robustness is the quality of being able to withstand stresses, or changes in
environments, inputs or calculations, etc. A system or design is ``robust''
if it can deal with such changes with minimal loss of functionality.
This kind of testing is usually done with a particular focus. For example, the
system as a whole can be run on many different system configurations to
demonstrate that it can perform correctly in a large number of environments.
The system can also be stressed by operating close to or beyond capacity by
generating or simulating resource exhaustion of various kinds. This genre of
testing is called ``stress testing.''
The system and its components may be exposed to so-called ``clean tests'' that
demonstrate a positive result -- that is, that the system operates correctly in
response to a large variation of expected configurations.
The system and its components may also be exposed to ``dirty tests'' which
provide inputs outside the expected range. For example, if a module expects a
zero-terminated string representation of an integer, a dirty test might provide
an unterminated string of random characters to verify that the system does not
crash as a result of this unexpected input. Unfortunately, detecting such
``dirty'' input and taking preventive measures to ensure the system does not
fail catastrophically can require a huge amount of development overhead. In
order to reduce development time, a decision was taken early on in the project
to minimize the amount of parameter validation and error handling in the
@command{ns-3} codebase. For this reason, we do not spend much time on dirty
testing -- it would just uncover the results of the design decision we know
we took.
We do want to demonstrate that @command{ns-3} software does work across some set
of conditions. We borrow a couple of definitions to narrow this down a bit.
The @emph{domain of applicability} is a set of prescribed conditions for which
the model has been tested, compared against reality to the extent possible, and
judged suitable for use. The @emph{range of accuracy} is an agreement between
the computerized model and reality within a domain of applicability.
The @command{ns-3} testing environment provides tools to allow for setting up
and running test environments over multiple systems (buildbot) and provides
classes to encourage clean tests to verify the operation of the system over the
expected ``domain of applicability'' and ``range of accuracy.''
@node Performant
@section Performant
Okay, ``performant'' isn't a real English word. It is, however, a very concise
neologism that is quite often used to describe what we want @command{ns-3} to
be: powerful and fast enough to get the job done.
This is really about the broad subject of software performance testing. One of
the key things that is done is to compare two systems to find which performs
better (cf. benchmarks). This is used to demonstrate that, for example,
@code{ns-3} can perform a basic kind of simulation at least as fast as a
competing product, or can be used to identify parts of the system that perform
badly.
In the @code{ns-3} test framework, we provide support for timing various kinds
of tests.
@node Maintainability
@section Maintainability
A software product must be maintainable. This is, again, a very broad
statement, but a testing framework can help with the task. Once a model has
been developed, validated and verified, we can repeatedly execute the suite
of tests for the entire system to ensure that it remains valid and verified
over its lifetime.
When a feature stops functioning as intended after some kind of change to the
system is integrated, it is called generically a regression. Originally the
term regression referred to a change that caused a previously fixed bug to
reappear, but the term has evolved to describe any kind of change that breaks
existing functionality. There are many kinds of regressions that may occur
in practice.
A @emph{local regression} is one in which a change affects the changed component
directly. For example, if a component is modified to allocate and free memory
but stale pointers are used, the component itself fails.
A @emph{remote regression} is one in which a change to one component breaks
functionality in another component. This reflects violation of an implied but
possibly unrecognized contract between components.
An @emph{unmasked regression} is one that creates a situation where a previously
existing bug that had no effect is suddenly exposed in the system. This may
be as simple as exercising a code path for the first time.
A @emph{performance regression} is one that causes the performance requirements
of the system to be violated. For example, adding work to a low-level
function that is executed a large number of times may suddenly render the
system unusable from certain perspectives.
The @command{ns-3} testing framework provides tools for automating the process
used to validate and verify the code in nightly test suites to help quickly
identify possible regressions.

doc/testing/how-to-write-tests.texi Normal file

@@ -0,0 +1,8 @@
@c ========================================================================
@c How to write tests
@c ========================================================================
@node How to write tests
@chapter How to write tests
To be completed.

doc/testing/overview.texi Normal file

@@ -0,0 +1,16 @@
@c ========================================================================
@c Overview
@c ========================================================================
@node Overview
@chapter Overview
This document is concerned with the testing and validation of @command{ns-3}
software.
This document provides
@itemize @bullet
@item a description of the ns-3 testing framework;
@item a guide for model developers and new model contributors on how to write tests;
@item validation and verification results reported to date.
@end itemize

doc/testing/propagation-loss.texi Normal file

@@ -0,0 +1,121 @@
@node Propagation Loss Models
@chapter Propagation Loss Models
@anchor{chap:propagation-loss-models}
This chapter describes validation of ns-3 propagation loss models.
@section FriisPropagationLossModel
@subsection Model reference
From source: @uref{http://www.scribd.com/doc/6650712/Wireless-CommunicationsPrinciples-and-Practice-Theodore-S,, Wireless Communications: Principles and Practice, Theodore S. Rappaport, p. 71}
Given equation:
@verbatim
Pr = Pt*Gt*Gr*lmb^2/((4*pi)^2*d^2*L)
Pt = 10^(17.0206/10)/10^3 = .05035702
Pr = .05035702*.125^2/((4*pi)^2*d*1) = 4.98265e-6/d^2
bandwidth = 2.2*10^7
m_noiseFigure = 5.01187
noiseFloor = ((Thermal noise (K)* BOLTZMANN * bandwidth)* m_noiseFigure)
noiseFloor = ((290*1.3803*10^-23*2.2*10^7)*5.01187) = 4.41361e-13W
no interference, so SNR = Pr/4.41361e-13W
Distance :: Pr :: SNR
100 4.98265e-10W 1128.93
500 1.99306e-11W 45.1571
1000 4.98265e-12W 11.2893
2000 1.24566e-12W 2.82232
3000 5.53628e-13W 1.25436
4000 3.11416e-13W 0.70558
5000 1.99306e-13W 0.451571
6000 1.38407e-13W 0.313591
@end verbatim
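The reference numbers above are easy to reproduce. The following standalone
C++ sketch (not part of ns-3; unit antenna gains Gt = Gr = 1 and system loss
L = 1 are assumed, as in the source) recomputes Pr and SNR at each distance:
@verbatim
#include <cmath>
#include <cstdio>

int main (void)
{
  const double pi = 3.14159265358979;
  double pt = std::pow (10.0, 17.0206 / 10.0) / 1e3;        // ~0.05035702 W
  double lambda = 0.125;                                    // meters, 2.4 GHz
  double noiseFloor = (290 * 1.3803e-23 * 2.2e7) * 5.01187; // ~4.41e-13 W
  double d[] = { 100, 500, 1000, 2000, 3000, 4000, 5000, 6000 };
  for (int i = 0; i < 8; ++i)
    {
      // Friis: Pr = Pt * Gt * Gr * lambda^2 / ((4 * pi)^2 * d^2 * L)
      double pr = pt * lambda * lambda / (16 * pi * pi * d[i] * d[i]);
      std::printf ("%6.0f  %12.6g W  %10.6g\n", d[i], pr, pr / noiseFloor);
    }
  return 0;
}
@end verbatim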
@subsection Validation test
Test program available online at: @uref{http://xxx.xxx.com,,}
Taken at default settings (packetSize = 1000, numPackets = 1, lambda = 0.125, 802.11b at 2.4GHz):
@verbatim
Distance :: Pr :: SNR
100 4.98265e-10W 1128.93
500 1.99306e-11W 45.1571
1000 4.98265e-12W 11.2893
2000 1.24566e-12W 2.82232
3000 5.53628e-13W 1.25436
4000 3.11416e-13W 0.70558
5000 1.99306e-13W 0.451571
6000 1.38407e-13W 0.313591
7000 1.01687e-13W 0.230393
8000 7.78539e-14W 0.176395
@end verbatim
@subsection Discussion
As can be seen, the SNR output by the simulator and the SNR computed from the
source's equation are identical.
@section LogDistancePropagationLossModel
@subsection Model reference
From source: @uref{http://www.plextek.co.uk/papers/aps2005mcw.pdf,, Urban Propagation Measurements and Statistical Path Loss Model at 3.5 GHz, Marcus C. Walden, Frank J. Rowsell}
Given equation:
@verbatim
PL(dB) = PL(d0) + 10*n*log10(d/d0) + Xg
PL(1) from Friis at 2.4 GHz: 40.045997 dB
PL(dB) = 10*log10(.050357/Pr) = 40.045997 + 10*n*log10(d) + Xg
Pr = .050357/(10^((40.045997 + 10*n*log10(d) + Xg)/10))
bandwidth = 2.2*10^7
m_noiseFigure = 5.01187
no interference, so SNR = Pr/4.41361e-13W
@end verbatim
taking Xg to be constant at 0 to match ns-3 output:
@verbatim
Distance :: Pr :: SNR
10 4.98265e-9 11289.3
20 6.22831e-10 1411.16
40 7.78539e-11 176.407
60 2.30678e-11 52.2652
80 9.73173e-12 22.0494
100 4.98265e-12 11.2893
200 6.22831e-13 1.41116
500 3.98612e-14 0.090314
1000 4.98265e-15 0.011289
@end verbatim
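These values can be cross-checked with a similar standalone C++ sketch (an
illustration only; the shadowing term Xg is held at 0, as above):
@verbatim
#include <cmath>
#include <cstdio>

int main (void)
{
  double pl0 = 40.045997;          // PL(1m) in dB, from Friis at 2.4 GHz
  double n = 3.0;                  // path loss exponent
  double noiseFloor = 4.41361e-13; // W, as computed in the Friis section
  double d[] = { 10, 20, 40, 60, 80, 100, 200, 500, 1000 };
  for (int i = 0; i < 9; ++i)
    {
      double plDb = pl0 + 10 * n * std::log10 (d[i]); // Xg taken as 0
      double pr = 0.050357 / std::pow (10.0, plDb / 10.0);
      std::printf ("%6.0f  %12.6g W  %10.6g\n", d[i], pr, pr / noiseFloor);
    }
  return 0;
}
@end verbatim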
@subsection Validation test
Test program available online at: @uref{http://xxx.xxx.com,,}
Taken at default settings (packetSize = 1000, numPackets = 1, exponent = 3, reference loss = 46.6777, 802.11b at 2.4GHz):
@verbatim
Distance :: Pr :: SNR
10 4.98471e-9 11293.9
20 6.23089e-10 1411.74
40 7.78861e-11 176.468
60 2.30774e-11 52.2868
80 9.72576e-12 22.0585
100 4.98471e-12 11.2939
200 6.23089e-13 1.41174
500 3.98777e-14 0.0903516
1000 4.98471e-15 0.0112939
@end verbatim
@subsection Discussion
There is a ~0.04% error between these results. I do not believe this is
due to rounding, as the results taken from the source's equation
match exactly with the Friis results taken at one less power of ten.
(Both Friis and LogDistance can be modeled by Pt*Gt*Gr*lmb^2/((4*pi)^2*d^n*L),
where n is the exponent: n is 2 for Friis and 3 for LogDistance, which
accounts for the power of ten, i.e., Friis at 100m is equivalent to
LogDistance at 10m.) Perhaps ns-3 takes the random variable into account
despite its not being listed in the source.

doc/testing/testing-framework.texi Normal file

@@ -0,0 +1,567 @@
@c ========================================================================
@c Testing framework
@c ========================================================================
@node TestingFramework
@chapter Testing Framework
@node BuildBots
@section Buildbots
The @command{ns-3} testing framework is composed of several major pieces. At
the highest level are the buildbots (build robots). If you are unfamiliar with
this system, look at @uref{http://djmitche.github.com/buildbot/docs/0.7.11/}.
This is an open-source automated system that allows @command{ns-3} to be rebuilt
and tested each time something has changed. By running the buildbots on a number
of different systems we can ensure that @command{ns-3} builds and executes
properly on all of its supported systems.
Users (and developers) typically will not interact with the buildbot system other
than to read its messages regarding test results. If a failure is detected in
one of the automated build and test jobs, the buildbot will send an email to the
@emph{ns-developers} mailing list. This email will look something like:
@verbatim
The Buildbot has detected a new failure of osx-ppc-g++-4.2 on NsNam.
Full details are available at:
http://ns-regression.ee.washington.edu:8010/builders/osx-ppc-g%2B%2B-4.2/builds/0
Buildbot URL: http://ns-regression.ee.washington.edu:8010/
Buildslave for this Build: darwin-ppc
Build Reason: The web-page 'force build' button was pressed by 'ww': ww
Build Source Stamp: HEAD
Blamelist:
BUILD FAILED: failed shell_5 shell_6 shell_7 shell_8 shell_9 shell_10 shell_11 shell_12 shell_13 shell_14 shell_15
sincerely,
-The Buildbot
@end verbatim
In the full details URL shown in the email, one can search for the keyword
@code{failed} and select the @code{stdio} link for the corresponding step to see
the reason for the failure.
The buildbot will do its job quietly if there are no errors, and the system will
undergo build and test cycles every day to verify that all is well.
@node Testpy
@section Test.py
The buildbots use a Python program, @command{test.py}, that is responsible for
running all of the tests and collecting the resulting reports into a human-
readable form. This program is also available for use by users and developers.
@command{test.py} is very flexible in allowing the user to specify the number
and kind of tests to run, as well as the amount and kind of output to generate.
By default, @command{test.py} will run all available tests and report status
back in a very concise form. Running the command,
@verbatim
./test.py
@end verbatim
will result in a number of @code{PASS}, @code{FAIL}, @code{CRASH} or @code{SKIP}
indications followed by the kind of test that was run and its display name.
@verbatim
Waf: Entering directory `/home/craigdo/repos/ns-3-allinone-test/ns-3-dev/build'
Waf: Leaving directory `/home/craigdo/repos/ns-3-allinone-test/ns-3-dev/build'
'build' finished successfully (0.939s)
FAIL: TestSuite ns3-wifi-propagation-loss-models
PASS: TestSuite object-name-service
PASS: TestSuite pcap-file-object
PASS: TestSuite ns3-tcp-cwnd
...
PASS: TestSuite ns3-tcp-interoperability
PASS: Example csma-broadcast
PASS: Example csma-multicast
@end verbatim
This mode is intended to be used by users who are interested in determining if
their distribution is working correctly, and by developers who are interested
in determining if changes they have made have caused any regressions.
If one specifies an optional output style, one can generate detailed descriptions
of the tests and status. Available styles are @command{text} and @command{HTML}.
The buildbots will select the HTML option to generate HTML test reports for the
nightly builds using,
@verbatim
./test.py --html=nightly.html
@end verbatim
In this case, an HTML file named ``nightly.html'' would be created with a pretty
summary of the testing done. A ``human readable'' format is available for users
interested in the details.
@verbatim
./test.py --text=results.txt
@end verbatim
In the example above, the test suite checking the @command{ns-3} wireless
device propagation loss models failed. By default no further information is
provided.
To further explore the failure, @command{test.py} allows a single test suite
to be specified. Running the command,
@verbatim
./test.py --suite=ns3-wifi-propagation-loss-models
@end verbatim
results in that single test suite being run.
@verbatim
FAIL: TestSuite ns3-wifi-propagation-loss-models
@end verbatim
To find detailed information regarding the failure, one must specify the kind
of output desired. For example, most people will probably be interested in
a text file:
@verbatim
./test.py --suite=ns3-wifi-propagation-loss-models --text=results.txt
@end verbatim
This will result in that single test suite being run with the test status written to
the file ``results.txt''.
You should find something similar to the following in that file:
@verbatim
FAIL: Test Suite ``ns3-wifi-propagation-loss-models'' (real 0.02 user 0.01 system 0.00)
PASS: Test Case "Check ... Friis ... model ..." (real 0.01 user 0.00 system 0.00)
FAIL: Test Case "Check ... Log Distance ... model" (real 0.01 user 0.01 system 0.00)
Details:
Message: Got unexpected SNR value
Condition: [long description of what actually failed]
Actual: 176.395
Limit: 176.407 +- 0.0005
File: ../src/test/ns3wifi/propagation-loss-models-test-suite.cc
Line: 360
@end verbatim
Notice that the Test Suite is composed of two Test Cases. The first test case
checked the Friis propagation loss model and passed. The second test case
failed checking the Log Distance propagation model. In this case, an SNR of
176.395 was found, and the test expected a value of 176.407 correct to three
decimal places. The file which implemented the failing test is listed as well
as the line of code which triggered the failure.
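A check of this kind is typically written with a tolerance-aware assertion.
The following fragment is a sketch only (the @code{snr} variable and its
computation are hypothetical); it shows the shape of a check that would
produce the report above:
@verbatim
double snr = ComputeSnr ();  // hypothetical helper producing the measured value
NS_TEST_ASSERT_MSG_EQ_TOL (snr, 176.407, 0.0005, "Got unexpected SNR value");
@end verbatim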
If you desire, you could just as easily have written an HTML file using the
@code{--html} option as described above.
Typically a user will run all tests at least once after downloading
@command{ns-3} to ensure that his or her environment has been built correctly
and is generating correct results according to the test suites. Developers
will typically run the test suites before and after making a change to ensure
that they have not introduced a regression with their changes. In this case,
developers may not want to run all tests, but only a subset. For example,
the developer might only want to run the unit tests periodically while making
changes to a repository. In this case, @code{test.py} can be told to constrain
the types of tests being run to a particular class of tests. The following
command will result in only the unit tests being run:
@verbatim
./test.py --constrain=unit
@end verbatim
Similarly, the following command will result in only the example smoke tests
being run:
@verbatim
./test.py --constrain=example
@end verbatim
To see a quick list of the legal kinds of constraints, you can ask for them
to be listed. The following command
@verbatim
./test.py --kinds
@end verbatim
will result in the following list being displayed:
@verbatim
Waf: Entering directory `/home/craigdo/repos/ns-3-allinone-test/ns-3-dev/build'
Waf: Leaving directory `/home/craigdo/repos/ns-3-allinone-test/ns-3-dev/build'
'build' finished successfully (0.939s)
bvt: Build Verification Tests (to see if build completed successfully)
unit: Unit Tests (within modules to check basic functionality)
system: System Tests (spans modules to check integration of modules)
example: Examples (to see if example programs run successfully)
performance: Performance Tests (check to see if the system is as fast as expected)
@end verbatim
This list is displayed in increasing order of complexity of the tests. Any of these
kinds of test can be provided as a constraint using the @code{--constrain} option.
To see a quick list of all of the test suites available, you can ask for them
to be listed. The following command,
@verbatim
./test.py --list
@end verbatim
will result in a list of the test suites being displayed, similar to:
@verbatim
Waf: Entering directory `/home/craigdo/repos/ns-3-allinone-test/ns-3-dev/build'
Waf: Leaving directory `/home/craigdo/repos/ns-3-allinone-test/ns-3-dev/build'
'build' finished successfully (0.939s)
ns3-wifi-propagation-loss-models
ns3-tcp-cwnd
ns3-tcp-interoperability
pcap-file-object
object-name-service
random-number-generators
@end verbatim
Any of these listed suites can be selected to be run by itself using the
@code{--suite} option as shown above.
Similarly to test suites, one can run a single example program using the @code{--example}
option.
@verbatim
./test.py --example=udp-echo
@end verbatim
results in that single example being run.
@verbatim
PASS: Example udp-echo
@end verbatim
Normally when example programs are executed, they write a large amount of trace
file data. This is normally saved to the base directory of the distribution
(e.g., /home/user/ns-3-dev). When @command{test.py} runs an example, it really
is completely unconcerned with the trace files. It just wants to determine
if the example can be built and run without error. Since this is the case, the
trace files are written into a @code{/tmp/unchecked-traces} directory. If you
run the above example, you should be able to find the associated
@code{udp-echo.tr} and @code{udp-echo-n-1.pcap} files there.
The list of available examples is defined by the contents of the ``examples''
directory in the distribution. If you select an example for execution using
the @code{--example} option, @code{test.py} will not make any attempt to decide
if the example has been configured or not; it will just try to run it and
report the result of the attempt.
When @command{test.py} runs, by default it will first ensure that the system has
been completely built. This can be defeated by selecting the @code{--nowaf}
option.
@verbatim
./test.py --list --nowaf
@end verbatim
will result in a list of the currently built test suites being displayed, similar to:
@verbatim
ns3-wifi-propagation-loss-models
ns3-tcp-cwnd
ns3-tcp-interoperability
pcap-file-object
object-name-service
random-number-generators
@end verbatim
Note the absence of the @command{Waf} build messages.
Finally, @code{test.py} provides a @command{--verbose} option which will print
large amounts of information about its progress. It is not expected that this
will be terribly useful for most users.
@node TestTaxonomy
@section Test Taxonomy
As mentioned above, tests are grouped into a number of broadly defined
classifications to allow users to selectively run tests to address the different
kinds of testing that need to be done.
@itemize @bullet
@item Build Verification Tests
@item Unit Tests
@item System Tests
@item Examples
@item Performance Tests
@end itemize
@node BuildVerificationTests
@subsection Build Verification Tests
These are relatively simple tests that are built along with the distribution
and are used to make sure that the build is pretty much working. BVTs live in
the same source code that is built into the ns-3 code. Our current unit tests
live in the source files of the code they test and are built into the ns-3
modules, and so fit this description; they are examples of this kind of test.
@node UnitTests
@subsection Unit Tests
Unit tests are more involved tests that go into detail to make sure that a
piece of code works as advertised in isolation. There is really no reason
for this kind of test to be built into an ns-3 module. It turns out, for
example, that the unit tests for the object name service are about the same
size as the object name service code itself. Unit tests are tests that
check a single bit of functionality; they are not built into the ns-3 code,
but live in the same directory as the code they test. It is possible that
these tests check integration of multiple implementation files in a module
as well. The file src/core/names-test-suite.cc is an example of this kind
of test. The file src/common/pcap-file-test-suite.cc is another example
that uses a known good pcap file as a test vector file. This file is stored
locally in the src/common directory.
@node SystemTests
@subsection System Tests
System tests are those that involve more than one module in the system. We
have lots of this kind of test running in our current regression framework,
but they are overloaded examples. We provide a new place for this kind of
test in the directory ``src/test''. The file
src/test/ns3tcp/ns3-interop-test-suite.cc is an example of this kind of
test. It uses NSC TCP to test the ns-3 TCP implementation. Often there
will be test vectors required for this kind of test, and they are stored in
the directory where the test lives. For example,
ns3tcp-interop-response-vectors.pcap is a file consisting of a number of TCP
headers that are used as the expected responses of the ns-3 TCP under test
to a stimulus generated by the NSC TCP which is used as a ``known good''
implementation.
@node Examples
@subsection Examples
The examples are tested by the framework to make sure they build and will
run. Nothing else is checked, and currently the pcap files are just written off
into /tmp to be discarded. If the examples run (don't crash) they pass this
smoke test.
@node PerformanceTests
@subsection Performance Tests
Performance tests are those which exercise a particular part of the system
and determine if the tests have executed to completion in a reasonable time.
@node RunningTests
@section Running Tests
Tests are typically run using the high level @code{test.py} program. They
can also be run ``manually'' using a low-level test-runner executable directly
from @code{waf}.
@node RunningTestsUnderTestRunnerExecutable
@section Running Tests Under the Test Runner Executable
The test-runner is the bridge from generic Python code to @command{ns-3} code.
It is written in C++ and uses the automatic test discovery process in the
@command{ns-3} code to find and allow execution of all of the various tests.
Although it may not be used directly very often, it is good to understand how
@code{test.py} actually runs the various tests.
In order to execute the test-runner, you run it like any other ns-3 executable
-- using @code{waf}. To get a list of available options, you can type:
@verbatim
./waf --run "test-runner --help"
@end verbatim
You should see something like the following:
@verbatim
Waf: Entering directory `/home/craigdo/repos/ns-3-allinone-test/ns-3-dev/build'
Waf: Leaving directory `/home/craigdo/repos/ns-3-allinone-test/ns-3-dev/build'
'build' finished successfully (0.353s)
--basedir=dir: Set the base directory (where to find src) to ``dir''
--constrain=test-type: Constrain checks to test suites of type ``test-type''
--help: Print this message
--kinds: List all of the available kinds of tests
--list: List all of the test suites (optionally constrained by test-type)
--out=file-name: Set the test status output file to ``file-name''
--suite=suite-name: Run the test suite named ``suite-name''
--verbose: Turn on messages in the run test suites
@end verbatim
A number of the options will be familiar if you have looked at
@command{test.py}. This should be expected since the test-runner is just an
interface between @code{test.py} and @command{ns-3}. You
may notice that example-related commands are missing here. That is because
the examples are really not @command{ns-3} tests. @command{test.py} runs them
as if they were, in order to present a unified testing environment, but they
are really completely different and are not to be found here.
One new option that appears here is the @code{--basedir} option. It turns out
that the tests may need to reference the source directory of the @code{ns-3}
distribution to find local data, so a base directory is always required to run
a test. To run one of the tests directly from the test-runner, you will need
to specify the test suite to run along with the base directory. So you could do,
@verbatim
./waf --run "test-runner --basedir=`pwd` --suite=pcap-file-object"
@end verbatim
Note the ``backward'' quotation marks on the @code{pwd} command. This will run
the @code{pcap-file-object} test quietly. The only indication that
you will get that the test passed is the @emph{absence} of a message from
@code{waf} saying that the program returned something other than a zero
exit code. To get some output from the test, you need to specify an output
file to which the tests will write their XML status using the @code{--out}
option. You need to be careful interpreting the results because the test
suites will @emph{append} results onto this file. Try,
@verbatim
./waf --run "test-runner --basedir=`pwd` --suite=pcap-file-object --out=myfile.xml''
@end verbatim
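Because status is appended, a clean run typically removes any stale output
file first, for example:
@verbatim
rm -f myfile.xml
./waf --run "test-runner --basedir=`pwd` --suite=pcap-file-object --out=myfile.xml"
@end verbatim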
If you look at the file @code{myfile.xml} you should see something like,
@verbatim
<TestSuite>
  <SuiteName>pcap-file-object</SuiteName>
  <TestCase>
    <CaseName>Check to see that PcapFile::Open with mode ``w'' works</CaseName>
    <CaseResult>PASS</CaseResult>
    <CaseTime>real 0.00 user 0.00 system 0.00</CaseTime>
  </TestCase>
  <TestCase>
    <CaseName>Check to see that PcapFile::Open with mode ``r'' works</CaseName>
    <CaseResult>PASS</CaseResult>
    <CaseTime>real 0.00 user 0.00 system 0.00</CaseTime>
  </TestCase>
  <TestCase>
    <CaseName>Check to see that PcapFile::Open with mode ``a'' works</CaseName>
    <CaseResult>PASS</CaseResult>
    <CaseTime>real 0.00 user 0.00 system 0.00</CaseTime>
  </TestCase>
  <TestCase>
    <CaseName>Check to see that PcapFileHeader is managed correctly</CaseName>
    <CaseResult>PASS</CaseResult>
    <CaseTime>real 0.00 user 0.00 system 0.00</CaseTime>
  </TestCase>
  <TestCase>
    <CaseName>Check to see that PcapRecordHeader is managed correctly</CaseName>
    <CaseResult>PASS</CaseResult>
    <CaseTime>real 0.00 user 0.00 system 0.00</CaseTime>
  </TestCase>
  <TestCase>
    <CaseName>Check to see that PcapFile can read out a known good pcap file</CaseName>
    <CaseResult>PASS</CaseResult>
    <CaseTime>real 0.00 user 0.00 system 0.00</CaseTime>
  </TestCase>
  <SuiteResult>PASS</SuiteResult>
  <SuiteTime>real 0.00 user 0.00 system 0.00</SuiteTime>
</TestSuite>
@end verbatim
If you are familiar with XML this should be fairly self-explanatory. It is
also not a complete XML file since test suites are designed to have their
output appended to a master XML status file as described in the @command{test.py}
section.
@node ClassTestRunner
@section Class TestRunner
The executables that run dedicated test programs use a TestRunner class. This
class provides for automatic test registration and listing, as well as a way to
execute the individual tests. Individual test suites use C++ global constructors
to add themselves to a collection of test suites managed by the test runner.
The test runner is used to list all of the available tests and to select a test
to be run. This is a quite simple class that provides three static methods for
adding test suites to, and getting them from, a collection of tests. See the
Doxygen for class @code{ns3::TestRunner} for details.
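For orientation, a dedicated test program might walk the registered suites
along these lines. This is only a sketch; the method names
(@code{GetNTestSuites}, @code{GetTestSuite}, @code{GetName}) are assumptions
here, so consult the @code{ns3::TestRunner} Doxygen for the authoritative
signatures:
@verbatim
#include <iostream>
#include "ns3/test.h"

using namespace ns3;

int main (int argc, char *argv[])
{
  // Global constructors have already added every TestSuite to the
  // collection managed by the test runner (method names assumed, see above).
  for (uint32_t i = 0; i < TestRunner::GetNTestSuites (); ++i)
    {
      TestSuite *suite = TestRunner::GetTestSuite (i);
      std::cout << suite->GetName () << std::endl;
    }
  return 0;
}
@end verbatim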
@node TestSuite
@section Test Suite
All @command{ns-3} tests are classified into Test Suites and Test Cases. A
test suite is a collection of test cases that completely exercise a given kind
of functionality. As described above, test suites can be classified as:
@itemize @bullet
@item Build Verification Tests
@item Unit Tests
@item System Tests
@item Examples
@item Performance Tests
@end itemize
This classification is exported from the TestSuite class. This class is quite
simple, existing only as a place to export this type and to accumulate test
cases. From a user perspective, in order to create a new TestSuite in the
system one only has to define a new class that inherits from class @code{TestSuite}
and perform two duties: give the suite a name and a type, and add its test cases.
The following code will define a new class that can be run by @code{test.py}
as a ``unit'' test with the display name, ``my-test-suite-name''.
@verbatim
class MyTestSuite : public TestSuite
{
public:
  MyTestSuite ();
};

MyTestSuite::MyTestSuite ()
  : TestSuite ("my-test-suite-name", UNIT)
{
  AddTestCase (new MyTestCase);
}

MyTestSuite myTestSuite;
@end verbatim
The base class takes care of all of the registration and reporting required to
be a good citizen in the test framework.
@node TestCase
@section Test Case
Individual tests are created using a TestCase class. Common models for the use
of a test case include ``one test case per feature'' and ``one test case per
method.'' Mixtures of these models may be used.
In order to create a new test case in the system, all one has to do is to inherit
from the @code{TestCase} base class, override the constructor to give the test
case a name and override the @code{DoRun} method to run the test.
@verbatim
class MyTestCase : public TestCase
{
public:
  MyTestCase ();

private:
  virtual bool DoRun (void);
};

MyTestCase::MyTestCase ()
  : TestCase ("Check some bit of functionality")
{
}

bool
MyTestCase::DoRun (void)
{
  NS_TEST_ASSERT_MSG_EQ (true, true, "Some failure message");
  return GetErrorStatus ();
}
@end verbatim
@node Utilities
@section Utilities
There are a number of utilities of various kinds that are also part of the
testing framework. Examples include a generalized pcap file class useful for
storing test vectors; a generic container useful for transient storage of
test vectors during test execution; and tools for generating presentations
based on validation and verification testing results.

doc/testing/testing.css Normal file

@@ -0,0 +1,156 @@
body {
font-family: "Trebuchet MS", "Bitstream Vera Sans", verdana, lucida, arial, helvetica, sans-serif;
background: white;
color: black;
font-size: 11pt;
}
h1, h2, h3, h4, h5, h6 {
/* color: #990000; */
color: #009999;
}
pre {
font-size: 10pt;
background: #e0e0e0;
color: black;
}
a:link, a:visited {
font-weight: normal;
text-decoration: none;
color: #0047b9;
}
a:hover {
font-weight: normal;
text-decoration: underline;
color: #0047b9;
}
img {
border: 0px;
}
#main th {
font-size: 12pt;
background: #b0b0b0;
}
.odd {
font-size: 12pt;
background: white;
}
.even {
font-size: 12pt;
background: #e0e0e0;
}
.answer {
font-size: large;
font-weight: bold;
}
.answer p {
font-size: 12pt;
font-weight: normal;
}
.answer ul {
font-size: 12pt;
font-weight: normal;
}
#container {
position: absolute;
width: 100%;
height: 100%;
top: 0px;
}
#feedback {
color: #b0b0b0;
font-size: 9pt;
font-style: italic;
}
#header {
position: absolute;
margin: 0px;
top: 10px;
height:96px;
left: 175px;
right: 10em;
bottom: auto;
background: white;
clear: both;
}
#middle {
position: absolute;
left: 0;
height: auto;
width: 100%;
}
#main {
position: absolute;
top: 50px;
left: 175px;
right: 100px;
background: white;
padding: 0em 0em 0em 0em;
}
#navbar {
position: absolute;
top: 75px;
left: 0em;
width: 146px;
padding: 0px;
margin: 0px;
font-size: 10pt;
}
#navbar a:link, #navbar a:visited {
font-weight: normal;
text-decoration: none;
color: #0047b9;
}
#navbar a:hover {
font-weight: normal;
text-decoration: underline;
color: #0047b9;
}
#navbar dl {
width: 146px;
padding: 0;
margin: 0 0 10px 0px;
background: #99ffff url(images/box_bottom2.gif) no-repeat bottom left;
}
#navbar dt {
padding: 6px 10px;
font-size: 100%;
font-weight: bold;
background: #009999;
margin: 0px;
border-bottom: 1px solid #fff;
color: white;
background: #009999 url(images/box_top2.gif) no-repeat top left;
}
#navbar dd {
font-size: 100%;
margin: 0 0 0 0px;
padding: 6px 10px;
color: #0047b9;
}
dd#selected {
background: #99ffff url(images/arrow.gif) no-repeat;
background-position: 4px 10px;
}

doc/testing/testing.texi Normal file

@@ -0,0 +1,98 @@
\input texinfo @c -*-texinfo-*-
@c %**start of header
@setfilename ns-3.info
@settitle ns-3 Testing and Validation
@c @setchapternewpage odd
@c %**end of header
@ifinfo
Documentation for the @command{ns-3} project is available in
several documents and the wiki:
@itemize @bullet
@item @uref{http://www.nsnam.org/doxygen/index.html,,ns-3 Doxygen/Manual}: Documentation of the public APIs of the simulator
@item @uref{http://www.nsnam.org/tutorial/index.html,,ns-3 Tutorial}
@item @uref{http://www.nsnam.org/doc//index.html,,ns-3 Manual}
@item Reference Manual
@item @uref{http://www.nsnam.org/wiki/index.php,, ns-3 wiki}
@end itemize
This document is written in GNU Texinfo and is to be maintained in
revision control on the @command{ns-3} code server. Both PDF and HTML versions
should be available on the server. Changes to
the document should be discussed on the ns-developers@@isi.edu mailing list.
@end ifinfo
@copying
This is an @command{ns-3} reference manual.
Primary documentation for the @command{ns-3} project is available in
several forms:
@itemize @bullet
@item @uref{http://www.nsnam.org/doxygen/index.html,,ns-3 Doxygen}: Documentation of the public APIs of the simulator
@item @uref{http://www.nsnam.org/docs/tutorial/index.html,,ns-3 Tutorial}
@item @uref{http://www.nsnam.org/docs/manual/index.html,,ns-3 Manual}
@item Testing and Validation (this document)
@item @uref{http://www.nsnam.org/wiki/index.php,, ns-3 wiki}
@end itemize
This document is written in GNU Texinfo and is to be maintained in
revision control on the @command{ns-3} code server. Both PDF and HTML
versions should be available on the server. Changes to
the document should be discussed on the ns-developers@@isi.edu mailing list.
This software is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
This software is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see @uref{http://www.gnu.org/licenses/}.
@end copying
@titlepage
@title ns-3 Testing and Validation
@author ns-3 project
@author feedback: ns-developers@@isi.edu
@today{}
@c @page
@vskip 0pt plus 1filll
@insertcopying
@end titlepage
@c So the toc is printed at the start.
@anchor{Full Table of Contents}
@contents
@ifnottex
@node Top, Overview, Full Table of Contents
@top ns-3 Testing and Validation (html version)
For a pdf version of this document,
see @uref{http://www.nsnam.org/docs/testing.pdf}.
@insertcopying
@end ifnottex
@menu
* Overview::
* Background::
* TestingFramework::
* How to write tests::
* Propagation Loss Models::
@end menu
@include overview.texi
@include background.texi
@include testing-framework.texi
@include how-to-write-tests.texi
@include propagation-loss.texi
@printindex cp
@bye

src/common/known.pcap (new binary file, not shown)

src/common/pcap-file-test-suite.cc Normal file

@@ -0,0 +1,965 @@
/* -*- Mode:C++; c-file-style:"gnu"; indent-tabs-mode:nil; -*- */
/*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation;
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*/
#include <iostream>
#include <stdio.h>
#include <stdlib.h>
#include <sstream>
#include "ns3/test.h"
#include "ns3/pcap-file.h"
using namespace ns3;
// ===========================================================================
// Some utility functions for the tests.
// ===========================================================================
uint16_t
Swap (uint16_t val)
{
return ((val >> 8) & 0x00ff) | ((val << 8) & 0xff00);
}
uint32_t
Swap (uint32_t val)
{
return ((val >> 24) & 0x000000ff) | ((val >> 8) & 0x0000ff00) | ((val << 8) & 0x00ff0000) | ((val << 24) & 0xff000000);
}
bool
CheckFileExists (std::string filename)
{
FILE * p = fopen (filename.c_str (), "rb");
if (p == 0)
{
return false;
}
fclose (p);
return true;
}
bool
CheckFileLength (std::string filename, uint64_t sizeExpected)
{
FILE * p = fopen (filename.c_str (), "rb");
if (p == 0)
{
return false;
}
fseek (p, 0, SEEK_END);
uint64_t sizeActual = ftell (p);
fclose (p);
return sizeActual == sizeExpected;
}
// ===========================================================================
// Test case to make sure that the Pcap File Object can do its most basic job
// and create an empty pcap file.
// ===========================================================================
class WriteModeCreateTestCase : public TestCase
{
public:
WriteModeCreateTestCase ();
virtual ~WriteModeCreateTestCase ();
private:
virtual void DoSetup (void);
virtual bool DoRun (void);
virtual void DoTeardown (void);
std::string m_testFilename;
};
WriteModeCreateTestCase::WriteModeCreateTestCase ()
: TestCase ("Check to see that PcapFile::Open with mode \"w\" works")
{
}
WriteModeCreateTestCase::~WriteModeCreateTestCase ()
{
}
void
WriteModeCreateTestCase::DoSetup (void)
{
std::stringstream filename;
uint32_t n = rand ();
filename << n;
m_testFilename = "/tmp/" + filename.str () + ".pcap";
}
void
WriteModeCreateTestCase::DoTeardown (void)
{
remove (m_testFilename.c_str ());
}
bool
WriteModeCreateTestCase::DoRun (void)
{
PcapFile f;
//
// Opening a new file in write mode should result in an empty file of the
// given name.
//
bool err = f.Open (m_testFilename, "w");
NS_TEST_ASSERT_MSG_EQ (err, false, "Open (" << m_testFilename << ", \"w\") returns error");
f.Close ();
NS_TEST_ASSERT_MSG_EQ (CheckFileExists (m_testFilename), true,
"Open (" << m_testFilename << ", \"w\") does not create file");
NS_TEST_ASSERT_MSG_EQ (CheckFileLength (m_testFilename, 0), true,
"Open (" << m_testFilename << ", \"w\") does not result in an empty file");
//
// Calling Init() on a file created with "w" should result in a file just
// long enough to contain the pcap file header.
//
err = f.Open (m_testFilename, "w");
NS_TEST_ASSERT_MSG_EQ (err, false, "Open (" << m_testFilename << ", \"w\") returns error");
err = f.Init (1234, 5678, 7);
NS_TEST_ASSERT_MSG_EQ (err, false, "Init (1234, 5678, 7) returns error");
f.Close ();
NS_TEST_ASSERT_MSG_EQ (CheckFileLength (m_testFilename, 24), true,
"Init () does not result in a file with a pcap file header");
//
// Opening an existing file in write mode should result in that file being
// emptied.
//
err = f.Open (m_testFilename, "w");
NS_TEST_ASSERT_MSG_EQ (err, false, "Open (" << m_testFilename << ", \"w\") returns error");
f.Close ();
NS_TEST_ASSERT_MSG_EQ (CheckFileLength (m_testFilename, 0), true,
"Open (" << m_testFilename << ", \"w\") does not result in an empty file");
//
// Initialize the file again.
//
err = f.Open (m_testFilename, "w");
NS_TEST_ASSERT_MSG_EQ (err, false,
"Open (" << m_testFilename << ", \"w\") returns error");
err = f.Init (1234, 5678, 7);
NS_TEST_ASSERT_MSG_EQ (err, false, "Init (1234, 5678, 7) returns error");
//
// Now we should be able to write to it since it was opened in "w" mode.
// This is just a permissions check so we don't actually look at the
// data.
//
uint8_t buffer[128];
err = f.Write (0, 0, buffer, 128);
NS_TEST_ASSERT_MSG_EQ (err, false, "Write (write-only-file " << m_testFilename << ") returns error");
return false;
}
// ===========================================================================
// Test case to make sure that the Pcap File Object can open an existing pcap
// file.
// ===========================================================================
class ReadModeCreateTestCase : public TestCase
{
public:
ReadModeCreateTestCase ();
virtual ~ReadModeCreateTestCase ();
private:
virtual void DoSetup (void);
virtual bool DoRun (void);
virtual void DoTeardown (void);
std::string m_testFilename;
};
ReadModeCreateTestCase::ReadModeCreateTestCase ()
: TestCase ("Check to see that PcapFile::Open with mode \"r\" works")
{
}
ReadModeCreateTestCase::~ReadModeCreateTestCase ()
{
}
void
ReadModeCreateTestCase::DoSetup (void)
{
std::stringstream filename;
uint32_t n = rand ();
filename << n;
m_testFilename = "/tmp/" + filename.str () + ".pcap";
}
void
ReadModeCreateTestCase::DoTeardown (void)
{
remove (m_testFilename.c_str ());
}
bool
ReadModeCreateTestCase::DoRun (void)
{
PcapFile f;
//
// Opening a non-existing file in read mode should result in an error.
//
bool err = f.Open (m_testFilename, "r");
NS_TEST_ASSERT_MSG_EQ (err, true, "Open (non-existing-filename " << m_testFilename << ", \"r\") does not return error");
NS_TEST_ASSERT_MSG_EQ (CheckFileExists (m_testFilename), false,
"Open (" << m_testFilename << ", \"r\") unexpectedly created a file");
//
// Okay, now create an uninitialized file using previously tested operations
//
err = f.Open (m_testFilename, "w");
NS_TEST_ASSERT_MSG_EQ (err, false, "Open (filename, \"w\") returns error");
f.Close ();
//
// Opening this file should result in an error since it has no pcap file header.
//
err = f.Open (m_testFilename, "r");
NS_TEST_ASSERT_MSG_EQ (err, true, "Open (non-initialized-filename " << m_testFilename << ", \"r\") does not return error");
//
// Okay, now open that non-initialized file in write mode and initialize it
// Note that we open it in write mode to initialize it.
//
err = f.Open (m_testFilename, "w");
NS_TEST_ASSERT_MSG_EQ (err, false, "Open (" << m_testFilename << ", \"w\") returns error");
err = f.Init (1234, 5678, 7);
NS_TEST_ASSERT_MSG_EQ (err, false, "Init (1234, 5678, 7) returns error");
f.Close ();
//
// Opening this file should now work since it has a pcap file header.
//
err = f.Open (m_testFilename, "r");
NS_TEST_ASSERT_MSG_EQ (err, false, "Open (initialized-filename " << m_testFilename << ", \"r\") returns error");
//
// Now we should not be able to write to it since it was opened in "r" mode
// even if it has been initialized.
//
uint8_t buffer[128];
err = f.Write (0, 0, buffer, 128);
NS_TEST_ASSERT_MSG_EQ (err, true, "Write (read-only-file " << m_testFilename << ") does not return error");
f.Close ();
return false;
}
// ===========================================================================
// Test case to make sure that the Pcap File Object can open an existing pcap
// file for appending.
// ===========================================================================
class AppendModeCreateTestCase : public TestCase
{
public:
AppendModeCreateTestCase ();
virtual ~AppendModeCreateTestCase ();
private:
virtual void DoSetup (void);
virtual bool DoRun (void);
virtual void DoTeardown (void);
std::string m_testFilename;
};
AppendModeCreateTestCase::AppendModeCreateTestCase ()
: TestCase ("Check to see that PcapFile::Open with mode \"a\" works")
{
}
AppendModeCreateTestCase::~AppendModeCreateTestCase ()
{
}
void
AppendModeCreateTestCase::DoSetup (void)
{
std::stringstream filename;
uint32_t n = rand ();
filename << n;
m_testFilename = "/tmp/" + filename.str () + ".pcap";
}
void
AppendModeCreateTestCase::DoTeardown (void)
{
remove (m_testFilename.c_str ());
}
bool
AppendModeCreateTestCase::DoRun (void)
{
PcapFile f;
//
// Opening a non-existing file in append mode should result in an error.
//
bool err = f.Open (m_testFilename, "a");
NS_TEST_ASSERT_MSG_EQ (err, true, "Open (non-existing-filename " << m_testFilename << ", \"a\") does not return error");
f.Close ();
NS_TEST_ASSERT_MSG_EQ (CheckFileExists (m_testFilename), false,
"Open (" << m_testFilename << ", \"a\") unexpectedly created a file");
//
// Okay, now create an uninitialized file using previously tested operations
//
err = f.Open (m_testFilename, "w");
NS_TEST_ASSERT_MSG_EQ (err, false, "Open (" << m_testFilename << ", \"w\") returns error");
f.Close ();
//
// Opening this file should result in an error since it has no pcap file header.
//
err = f.Open (m_testFilename, "a");
NS_TEST_ASSERT_MSG_EQ (err, true, "Open (non-initialized-filename " << m_testFilename << ", \"a\") does not return error");
//
// Okay, now open that non-initialized file in write mode and initialize it.
//
err = f.Open (m_testFilename, "w");
NS_TEST_ASSERT_MSG_EQ (err, false, "Open (non-initialized-filename " << m_testFilename << ", \"w\") returns error");
err = f.Init (1234, 5678, 7);
NS_TEST_ASSERT_MSG_EQ (err, false, "Init (1234, 5678, 7) returns error");
f.Close ();
//
// Opening this file should now work since it has a pcap file header.
//
err = f.Open (m_testFilename, "a");
NS_TEST_ASSERT_MSG_EQ (err, false, "Open (initialized-filename " << m_testFilename << ", \"r\") returns error");
//
// We should be able to write to it since it was opened in "a" mode.
//
uint8_t buffer[128];
err = f.Write (0, 0, buffer, 128);
NS_TEST_ASSERT_MSG_EQ (err, false, "Write (append-mode-file " << m_testFilename << ") returns error");
f.Close ();
return false;
}
// ===========================================================================
// Test case to make sure that the Pcap File Object can write out correct pcap
// file headers in both endian cases, and then read them in correctly.
// ===========================================================================
class FileHeaderTestCase : public TestCase
{
public:
FileHeaderTestCase ();
virtual ~FileHeaderTestCase ();
private:
virtual void DoSetup (void);
virtual bool DoRun (void);
virtual void DoTeardown (void);
std::string m_testFilename;
};
FileHeaderTestCase::FileHeaderTestCase ()
: TestCase ("Check to see that PcapFileHeader is managed correctly")
{
}
FileHeaderTestCase::~FileHeaderTestCase ()
{
}
void
FileHeaderTestCase::DoSetup (void)
{
std::stringstream filename;
uint32_t n = rand ();
filename << n;
m_testFilename = "/tmp/" + filename.str () + ".pcap";
}
void
FileHeaderTestCase::DoTeardown (void)
{
remove (m_testFilename.c_str ());
}
bool
FileHeaderTestCase::DoRun (void)
{
PcapFile f;
//
// Create an uninitialized file using previously tested operations
//
bool err = f.Open (m_testFilename, "w");
NS_TEST_ASSERT_MSG_EQ (err, false, "Open (" << m_testFilename << ", \"w\") returns error");
//
// Initialize the pcap file header.
//
err = f.Init (1234, 5678, 7);
NS_TEST_ASSERT_MSG_EQ (err, false,
"Init (1234, 5678, 7) returns error");
f.Close ();
//
// Take a look and see what was done to the file
//
FILE *p = fopen (m_testFilename.c_str (), "r+b");
NS_TEST_ASSERT_MSG_NE (p, 0, "fopen(" << m_testFilename << ") should have been able to open a correctly created pcap file");
uint32_t val32;
uint16_t val16;
size_t result = fread (&val32, sizeof(val32), 1, p);
NS_TEST_ASSERT_MSG_EQ (result, 1, "Unable to fread() magic number");
NS_TEST_ASSERT_MSG_EQ (val32, 0xa1b2c3d4, "Magic number written incorrectly");
result = fread (&val16, sizeof(val16), 1, p);
NS_TEST_ASSERT_MSG_EQ (result, 1, "Unable to fread() version major");
NS_TEST_ASSERT_MSG_EQ (val16, 2, "Version major written incorrectly");
result = fread (&val16, sizeof(val16), 1, p);
NS_TEST_ASSERT_MSG_EQ (result, 1, "Unable to fread() version minor");
NS_TEST_ASSERT_MSG_EQ (val16, 4, "Version minor written incorrectly");
result = fread (&val32, sizeof(val32), 1, p);
NS_TEST_ASSERT_MSG_EQ (result, 1, "Unable to fread() time zone correction");
NS_TEST_ASSERT_MSG_EQ (val32, 7, "Time zone correction written incorrectly");
result = fread (&val32, sizeof(val32), 1, p);
NS_TEST_ASSERT_MSG_EQ (result, 1, "Unable to fread() sig figs");
NS_TEST_ASSERT_MSG_EQ (val32, 0, "Sig figs written incorrectly");
result = fread (&val32, sizeof(val32), 1, p);
NS_TEST_ASSERT_MSG_EQ (result, 1, "Unable to fread() snap length");
NS_TEST_ASSERT_MSG_EQ (val32, 5678, "Snap length written incorrectly");
result = fread (&val32, sizeof(val32), 1, p);
NS_TEST_ASSERT_MSG_EQ (result, 1, "Unable to fread() data link type");
NS_TEST_ASSERT_MSG_EQ (val32, 1234, "Data length type written incorrectly");
fclose (p);
p = 0;
//
// We wrote a native-endian file out correctly, now let's see if we can read
// it back in correctly.
//
err = f.Open (m_testFilename, "r");
NS_TEST_ASSERT_MSG_EQ (err, false, "Open (existing-initialized-file " << m_testFilename << ", \"r\") returns error");
NS_TEST_ASSERT_MSG_EQ (f.GetMagic (), 0xa1b2c3d4, "Read back magic number incorrectly");
NS_TEST_ASSERT_MSG_EQ (f.GetVersionMajor (), 2, "Read back version major incorrectly");
NS_TEST_ASSERT_MSG_EQ (f.GetVersionMinor (), 4, "Read back version minor incorrectly");
NS_TEST_ASSERT_MSG_EQ (f.GetTimeZoneOffset (), 7, "Read back time zone offset incorrectly");
NS_TEST_ASSERT_MSG_EQ (f.GetSigFigs (), 0, "Read back sig figs incorrectly");
NS_TEST_ASSERT_MSG_EQ (f.GetSnapLen (), 5678, "Read back snap len incorrectly");
NS_TEST_ASSERT_MSG_EQ (f.GetDataLinkType (), 1234, "Read back data link type incorrectly");
//
// Re-open the file to erase its contents.
//
err = f.Open (m_testFilename, "w");
NS_TEST_ASSERT_MSG_EQ (err, false, "Open (" << m_testFilename << ", \"w\") returns error");
//
// Initialize the pcap file header, turning on swap mode manually to force
// the pcap file header to be written out in foreign-endian form, whichever
// endian-ness that might be.
//
err = f.Init (1234, 5678, 7, true);
NS_TEST_ASSERT_MSG_EQ (err, false, "Init (1234, 5678, 7) returns error");
f.Close ();
//
// Take a look and see what was done to the file. Everything should now
// appear byte-swapped.
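// For example, the standard magic number 0xa1b2c3d4 should show up on disk
// in its byte-reversed form, 0xd4c3b2a1.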
//
p = fopen (m_testFilename.c_str (), "r+b");
NS_TEST_ASSERT_MSG_NE (p, 0, "fopen(" << m_testFilename << ") should have been able to open a correctly created pcap file");
result = fread (&val32, sizeof(val32), 1, p);
NS_TEST_ASSERT_MSG_EQ (result, 1, "Unable to fread() magic number");
NS_TEST_ASSERT_MSG_EQ (val32, Swap(uint32_t (0xa1b2c3d4)), "Magic number written incorrectly");
result = fread (&val16, sizeof(val16), 1, p);
NS_TEST_ASSERT_MSG_EQ (result, 1, "Unable to fread() version major");
NS_TEST_ASSERT_MSG_EQ (val16, Swap(uint16_t (2)), "Version major written incorrectly");
result = fread (&val16, sizeof(val16), 1, p);
NS_TEST_ASSERT_MSG_EQ (result, 1, "Unable to fread() version minor");
NS_TEST_ASSERT_MSG_EQ (val16, Swap(uint16_t (4)), "Version minor written incorrectly");
result = fread (&val32, sizeof(val32), 1, p);
NS_TEST_ASSERT_MSG_EQ (result, 1, "Unable to fread() time zone correction");
NS_TEST_ASSERT_MSG_EQ (val32, Swap(uint32_t (7)), "Version minor written incorrectly");
result = fread (&val32, sizeof(val32), 1, p);
NS_TEST_ASSERT_MSG_EQ (result, 1, "Unable to fread() sig figs");
NS_TEST_ASSERT_MSG_EQ (val32, 0, "Sig figs written incorrectly");
result = fread (&val32, sizeof(val32), 1, p);
NS_TEST_ASSERT_MSG_EQ (result, 1, "Unable to fread() snap length");
NS_TEST_ASSERT_MSG_EQ (val32, Swap(uint32_t (5678)), "Snap length written incorrectly");
result = fread (&val32, sizeof(val32), 1, p);
NS_TEST_ASSERT_MSG_EQ (result, 1, "Unable to fread() data link type");
NS_TEST_ASSERT_MSG_EQ (val32, Swap(uint32_t (1234)), "Data length type written incorrectly");
fclose (p);
p = 0;
//
// We wrote an opposite-endian file out correctly, now let's see if we can read
// it back in correctly.
//
err = f.Open (m_testFilename, "r");
NS_TEST_ASSERT_MSG_EQ (err, false, "Open (existing-initialized-file " << m_testFilename << ", \"r\") returns error");
NS_TEST_ASSERT_MSG_EQ (f.GetSwapMode (), true, "Byte-swapped file not correctly indicated");
NS_TEST_ASSERT_MSG_EQ (f.GetMagic (), 0xa1b2c3d4, "Read back magic number incorrectly");
NS_TEST_ASSERT_MSG_EQ (f.GetVersionMajor (), 2, "Read back version major incorrectly");
NS_TEST_ASSERT_MSG_EQ (f.GetVersionMinor (), 4, "Read back version minor incorrectly");
NS_TEST_ASSERT_MSG_EQ (f.GetTimeZoneOffset (), 7, "Read back time zone offset incorrectly");
NS_TEST_ASSERT_MSG_EQ (f.GetSigFigs (), 0, "Read back sig figs incorrectly");
NS_TEST_ASSERT_MSG_EQ (f.GetSnapLen (), 5678, "Read back snap len incorrectly");
NS_TEST_ASSERT_MSG_EQ (f.GetDataLinkType (), 1234, "Read back data link type incorrectly");
f.Close ();
return false;
}
// ===========================================================================
// Test case to make sure that the Pcap File Object can write pcap packet
// records in both endian cases, and then read them in correctly.
// ===========================================================================
class RecordHeaderTestCase : public TestCase
{
public:
RecordHeaderTestCase ();
virtual ~RecordHeaderTestCase ();
private:
virtual void DoSetup (void);
virtual bool DoRun (void);
virtual void DoTeardown (void);
std::string m_testFilename;
};
RecordHeaderTestCase::RecordHeaderTestCase ()
: TestCase ("Check to see that PcapRecordHeader is managed correctly")
{
}
RecordHeaderTestCase::~RecordHeaderTestCase ()
{
}
void
RecordHeaderTestCase::DoSetup (void)
{
std::stringstream filename;
uint32_t n = rand ();
filename << n;
m_testFilename = "/tmp/" + filename.str () + ".pcap";
}
void
RecordHeaderTestCase::DoTeardown (void)
{
remove (m_testFilename.c_str ());
}
bool
RecordHeaderTestCase::DoRun (void)
{
PcapFile f;
//
// Create an uninitialized file using previously tested operations
//
bool err = f.Open (m_testFilename, "w");
NS_TEST_ASSERT_MSG_EQ (err, false, "Open (" << m_testFilename << ", \"w\") returns error");
//
// Initialize the pcap file header.
//
err = f.Init (37, 43, -7);
NS_TEST_ASSERT_MSG_EQ (err, false, "Init (37, 43, -7) returns error");
//
// Initialize a buffer with a counting pattern to check the data later.
//
uint8_t bufferOut[128];
for (uint32_t i = 0; i < 128; ++i)
{
bufferOut[i] = i;
}
//
// Now we should be able to write a packet to it since it was opened in "w"
// mode. The packet data written should be limited to 43 bytes in length
// by the Init() call above.
//
err = f.Write (1234, 5678, bufferOut, 128);
NS_TEST_ASSERT_MSG_EQ (err, false, "Write (write-only-file " << m_testFilename << ") returns error");
f.Close ();
//
// Let's peek into the file and see what actually went out for that
// packet.
//
FILE *p = fopen (m_testFilename.c_str (), "r+b");
NS_TEST_ASSERT_MSG_NE (p, 0, "fopen() should have been able to open a correctly created pcap file");
//
// A pcap file header takes up 24 bytes, a pcap record header takes up 16 bytes
// and we wrote in 43 bytes, so the file must be 24 + 16 + 43 = 83 bytes
// long.  Let's just double check that this is exactly what happened.
//
fseek (p, 0, SEEK_END);
uint64_t size = ftell (p);
NS_TEST_ASSERT_MSG_EQ (size, 83, "Pcap file with one 43 byte packet is incorrect size");
//
// A pcap file header takes up 24 bytes, so we should see a pcap record header
// starting there in the file. We've tested this all before so we just assume
// it's all right and seek to just past that point.
//
fseek (p, 24, SEEK_SET);
uint32_t val32;
size_t result = fread (&val32, sizeof(val32), 1, p);
NS_TEST_ASSERT_MSG_EQ (result, 1, "Unable to fread() seconds timestamp");
NS_TEST_ASSERT_MSG_EQ (val32, 1234, "Seconds timestamp written incorrectly");
result = fread (&val32, sizeof(val32), 1, p);
NS_TEST_ASSERT_MSG_EQ (result, 1, "Unable to fread() microseconds timestamp");
NS_TEST_ASSERT_MSG_EQ (val32, 5678, "Microseconds timestamp written incorrectly");
result = fread (&val32, sizeof(val32), 1, p);
NS_TEST_ASSERT_MSG_EQ (result, 1, "Unable to fread() included length");
NS_TEST_ASSERT_MSG_EQ (val32, 43, "Included length written incorrectly");
result = fread (&val32, sizeof(val32), 1, p);
NS_TEST_ASSERT_MSG_EQ (result, 1, "Unable to fread() actual length");
NS_TEST_ASSERT_MSG_EQ (val32, 128, "Actual length written incorrectly");
//
// Take a look and see what went out into the file. The packet data
// should be unchanged (unswapped).
//
uint8_t bufferIn[128];
result = fread (bufferIn, 1, 43, p);
NS_TEST_ASSERT_MSG_EQ (result, 43, "Unable to fread() packet data of expected length");
for (uint32_t i = 0; i < 43; ++i)
{
NS_TEST_ASSERT_MSG_EQ (bufferIn[i], bufferOut[i], "Incorrect packet data written");
}
fclose (p);
p = 0;
//
// Let's see if the PcapFile object can figure out how to do the same thing and
// correctly read in a packet.
//
err = f.Open (m_testFilename, "r");
NS_TEST_ASSERT_MSG_EQ (err, false, "Open (" << m_testFilename << ", \"r\") of existing good file returns error");
uint32_t tsSec, tsUsec, inclLen, origLen, readLen;
err = f.Read (bufferIn, sizeof(bufferIn), tsSec, tsUsec, inclLen, origLen, readLen);
NS_TEST_ASSERT_MSG_EQ (err, false, "Read() of known good packet returns error");
NS_TEST_ASSERT_MSG_EQ (tsSec, 1234, "Incorrectly read seconds timestap from known good packet");
NS_TEST_ASSERT_MSG_EQ (tsUsec, 5678, "Incorrectly read microseconds timestap from known good packet");
NS_TEST_ASSERT_MSG_EQ (inclLen, 43, "Incorrectly read included length from known good packet");
NS_TEST_ASSERT_MSG_EQ (origLen, 128, "Incorrectly read original length from known good packet");
NS_TEST_ASSERT_MSG_EQ (readLen, 43, "Incorrectly constructed actual read length from known good packet given buffer size");
//
// Did the data come back correctly?
//
for (uint32_t i = 0; i < 43; ++i)
{
NS_TEST_ASSERT_MSG_EQ (bufferIn[i], bufferOut[i], "Incorrect packet data read from known good packet");
}
//
// We have to check to make sure that the pcap record header is swapped
// correctly. Open the file in write mode to clear the data.
//
err = f.Open (m_testFilename, "w");
NS_TEST_ASSERT_MSG_EQ (err, false, "Open (" << m_testFilename << ", \"w\") returns error");
//
// Initialize the pcap file header, forcing the object into swap mode.
//
err = f.Init (37, 43, -7, true);
NS_TEST_ASSERT_MSG_EQ (err, false, "Init (37, 43, -7) returns error");
//
// Now we should be able to write a packet to it since it was opened in "w"
// mode. The packet data written should be limited to 43 bytes in length
// by the Init() call above.
//
err = f.Write (1234, 5678, bufferOut, 128);
NS_TEST_ASSERT_MSG_EQ (err, false, "Write (write-only-file " << m_testFilename << ") returns error");
f.Close ();
//
// Let's peek into the file and see what actually went out for that
// packet.
//
p = fopen (m_testFilename.c_str (), "r+b");
NS_TEST_ASSERT_MSG_NE (p, 0, "fopen() should have been able to open a correctly created pcap file");
//
// A pcap file header takes up 24 bytes, a pcap record header takes up 16 bytes
// and we wrote in 43 bytes, so the file must be 24 + 16 + 43 = 83 bytes
// long.  Let's just double check that this is exactly what happened.
//
fseek (p, 0, SEEK_END);
size = ftell (p);
NS_TEST_ASSERT_MSG_EQ (size, 83, "Pcap file with one 43 byte packet is incorrect size");
//
// A pcap file header takes up 24 bytes, so we should see a pcap record header
// starting there in the file. We've tested this all before so we just assume
// it's all right and just seek past it.
//
fseek (p, 24, SEEK_SET);
result = fread (&val32, sizeof(val32), 1, p);
NS_TEST_ASSERT_MSG_EQ (result, 1, "Unable to fread() seconds timestamp");
NS_TEST_ASSERT_MSG_EQ (val32, Swap (uint32_t (1234)), "Swapped seconds timestamp written incorrectly");
result = fread (&val32, sizeof(val32), 1, p);
NS_TEST_ASSERT_MSG_EQ (result, 1, "Unable to fread() microseconds timestamp");
NS_TEST_ASSERT_MSG_EQ (val32, Swap (uint32_t (5678)), "Swapped microseconds timestamp written incorrectly");
result = fread (&val32, sizeof(val32), 1, p);
NS_TEST_ASSERT_MSG_EQ (result, 1, "Unable to fread() included length");
NS_TEST_ASSERT_MSG_EQ (val32, Swap (uint32_t (43)), "Swapped included length written incorrectly");
result = fread (&val32, sizeof(val32), 1, p);
NS_TEST_ASSERT_MSG_EQ (result, 1, "Unable to fread() actual length");
NS_TEST_ASSERT_MSG_EQ (val32, Swap (uint32_t (128)), "Swapped Actual length written incorrectly");
//
// Take a look and see what went out into the file. The packet data
// should be unchanged (unswapped).
//
result = fread (bufferIn, 1, 43, p);
NS_TEST_ASSERT_MSG_EQ (result, 43, "Unable to fread() packet data of expected length");
for (uint32_t i = 0; i < 43; ++i)
{
NS_TEST_ASSERT_MSG_EQ (bufferIn[i], bufferOut[i], "Incorrect packet data written");
}
fclose (p);
p = 0;
//
// Let's see if the PcapFile object can figure out how to do the same thing and
// correctly read in a packet. The record header info should come back to us
// swapped back into correct form.
//
err = f.Open (m_testFilename, "r");
NS_TEST_ASSERT_MSG_EQ (err, false, "Open (" << m_testFilename << ", \"r\") of existing good file returns error");
err = f.Read (bufferIn, sizeof(bufferIn), tsSec, tsUsec, inclLen, origLen, readLen);
NS_TEST_ASSERT_MSG_EQ (err, false, "Read() of known good packet returns error");
NS_TEST_ASSERT_MSG_EQ (tsSec, 1234, "Incorrectly read seconds timestap from known good packet");
NS_TEST_ASSERT_MSG_EQ (tsUsec, 5678, "Incorrectly read microseconds timestap from known good packet");
NS_TEST_ASSERT_MSG_EQ (inclLen, 43, "Incorrectly read included length from known good packet");
NS_TEST_ASSERT_MSG_EQ (origLen, 128, "Incorrectly read original length from known good packet");
NS_TEST_ASSERT_MSG_EQ (readLen, 43, "Incorrectly constructed actual read length from known good packet given buffer size");
//
// Did the data come back correctly (unchanged / unswapped)?
//
for (uint32_t i = 0; i < 43; ++i)
{
NS_TEST_ASSERT_MSG_EQ (bufferIn[i], bufferOut[i], "Incorrect packet data read from known good packet");
}
f.Close ();
return false;
}
// ===========================================================================
// Test case to make sure that the Pcap File Object can read out the contents
// of a known good pcap file.
// ===========================================================================
class ReadFileTestCase : public TestCase
{
public:
ReadFileTestCase ();
virtual ~ReadFileTestCase ();
private:
virtual void DoSetup (void);
virtual bool DoRun (void);
virtual void DoTeardown (void);
std::string m_testFilename;
};
ReadFileTestCase::ReadFileTestCase ()
: TestCase ("Check to see that PcapFile can read out a known good pcap file")
{
}
ReadFileTestCase::~ReadFileTestCase ()
{
}
void
ReadFileTestCase::DoSetup (void)
{
}
void
ReadFileTestCase::DoTeardown (void)
{
}
const uint32_t N_KNOWN_PACKETS = 6;
const uint32_t N_PACKET_BYTES = 16;
typedef struct PACKET_ENTRY {
uint32_t tsSec;
uint32_t tsUsec;
uint32_t inclLen;
uint32_t origLen;
uint16_t data[N_PACKET_BYTES];
} PacketEntry;
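//
// Note that each entry's data member holds N_PACKET_BYTES 16-bit words --
// the first 32 octets of the packet -- while the reads below use an
// N_PACKET_BYTES octet buffer; only the read length is checked against the
// file, not the packet contents themselves.
//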
PacketEntry knownPackets[] = {
{2, 3696, 46, 46, {0x0001, 0x0800, 0x0604, 0x0001, 0x0000, 0x0000, 0x0003, 0x0a01,
0x0201, 0xffff, 0xffff, 0xffff, 0x0a01, 0x0204, 0x0000, 0x0000}},
{2, 3707, 46, 46, {0x0001, 0x0800, 0x0604, 0x0002, 0x0000, 0x0000, 0x0006, 0x0a01,
0x0204, 0x0000, 0x0000, 0x0003, 0x0a01, 0x0201, 0x0000, 0x0000}},
{2, 3801, 1070, 1070, {0x4500, 0x041c, 0x0000, 0x0000, 0x3f11, 0x0000, 0x0a01, 0x0101,
0x0a01, 0x0204, 0xc001, 0x0009, 0x0408, 0x0000, 0x0000, 0x0000}},
{2, 3811, 46, 46, {0x0001, 0x0800, 0x0604, 0x0001, 0x0000, 0x0000, 0x0006, 0x0a01,
0x0204, 0xffff, 0xffff, 0xffff, 0x0a01, 0x0201, 0x0000, 0x0000}},
{2, 3822, 46, 46, {0x0001, 0x0800, 0x0604, 0x0002, 0x0000, 0x0000, 0x0003, 0x0a01,
0x0201, 0x0000, 0x0000, 0x0006, 0x0a01, 0x0204, 0x0000, 0x0000}},
{2, 3915, 1070, 1070, {0x4500, 0x041c, 0x0000, 0x0000, 0x4011, 0x0000, 0x0a01, 0x0204,
0x0a01, 0x0101, 0x0009, 0xc001, 0x0408, 0x0000, 0x0000, 0x0000}}
};
bool
ReadFileTestCase::DoRun (void)
{
PcapFile f;
//
// Open a known good pcap file from the test source directory.
//
std::string filename = NS_TEST_SOURCEDIR + "known.pcap";
bool err = f.Open (filename, "r");
NS_TEST_ASSERT_MSG_EQ (err, false, "Open (" << filename << ", \"w\") returns error");
//
// We are going to read out the file header and all of the packets to make
// sure that we read what we know, a priori, to be there.
//
// The packet data was gotten using "tcpdump -nn -tt -r known.pcap -x"
// and the timestamp and first 32 bytes of the resulting dump were
// duplicated in the structure above.
//
uint8_t data[N_PACKET_BYTES];
uint32_t tsSec, tsUsec, inclLen, origLen, readLen;
PacketEntry *p = knownPackets;
for (uint32_t i = 0; i < N_KNOWN_PACKETS; ++i, ++p)
{
err = f.Read (data, sizeof(data), tsSec, tsUsec, inclLen, origLen, readLen);
NS_TEST_ASSERT_MSG_EQ (err, false, "Read() of known good pcap file returns error");
NS_TEST_ASSERT_MSG_EQ (tsSec, p->tsSec, "Incorrectly read seconds timestamp from known good pcap file");
NS_TEST_ASSERT_MSG_EQ (tsUsec, p->tsUsec, "Incorrectly read microseconds timestamp from known good pcap file");
NS_TEST_ASSERT_MSG_EQ (inclLen, p->inclLen, "Incorrectly read included length from known good packet");
NS_TEST_ASSERT_MSG_EQ (origLen, p->origLen, "Incorrectly read original length from known good packet");
NS_TEST_ASSERT_MSG_EQ (readLen, N_PACKET_BYTES, "Incorrect actual read length from known good packet given buffer size");
}
//
// The file should now be at EOF since we've read all of the packets.
// Another packet read should return an error.
//
err = f.Read (data, 1, tsSec, tsUsec, inclLen, origLen, readLen);
NS_TEST_ASSERT_MSG_EQ (err, true, "Read() of known good pcap file at EOF does not return error");
f.Close ();
return false;
}
class PcapFileTestSuite : public TestSuite
{
public:
PcapFileTestSuite ();
};
PcapFileTestSuite::PcapFileTestSuite ()
: TestSuite ("pcap-file-object", UNIT)
{
AddTestCase (new WriteModeCreateTestCase);
AddTestCase (new ReadModeCreateTestCase);
AddTestCase (new AppendModeCreateTestCase);
AddTestCase (new FileHeaderTestCase);
AddTestCase (new RecordHeaderTestCase);
AddTestCase (new ReadFileTestCase);
}
PcapFileTestSuite pcapFileTestSuite;

src/common/pcap-file.cc Normal file

@@ -0,0 +1,519 @@
/* -*- Mode: C++; c-file-style: "gnu"; indent-tabs-mode:nil; -*- */
/*
* Copyright (c) 2009 University of Washington
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation;
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*/
#include <iostream>
#include <stdio.h>
#include <stdlib.h>
#include "pcap-file.h"
//
// This file is used as part of the ns-3 test framework, so please refrain from
// adding any ns-3 specific constructs such as Packet to this file.
//
namespace ns3 {
const uint32_t MAGIC = 0xa1b2c3d4; /**< Magic number identifying standard pcap file format */
const uint32_t SWAPPED_MAGIC = 0xd4c3b2a1; /**< Looks this way if byte swapping is required */
const uint32_t NS_MAGIC = 0xa1b23cd4; /**< Magic number identifying nanosec resolution pcap file format */
const uint32_t NS_SWAPPED_MAGIC = 0xd43cb2a1; /**< Looks this way if byte swapping is required */
const uint16_t VERSION_MAJOR = 2; /**< Major version of supported pcap file format */
const uint16_t VERSION_MINOR = 4; /**< Minor version of supported pcap file format */
const int32_t SIGFIGS_DEFAULT = 0; /**< Significant figures for timestamps (libpcap doesn't even bother) */
PcapFile::PcapFile ()
: m_filename (""),
m_filePtr (0),
m_haveFileHeader (false),
m_swapMode (false)
{
}
PcapFile::~PcapFile ()
{
Close ();
}
void
PcapFile::Close (void)
{
if (m_filePtr)
{
fclose (m_filePtr);
}
m_filePtr = 0;
m_filename = "";
m_haveFileHeader = false;
}
uint32_t
PcapFile::GetMagic (void)
{
return m_fileHeader.m_magicNumber;
}
uint16_t
PcapFile::GetVersionMajor (void)
{
return m_fileHeader.m_versionMajor;
}
uint16_t
PcapFile::GetVersionMinor (void)
{
return m_fileHeader.m_versionMinor;
}
int32_t
PcapFile::GetTimeZoneOffset (void)
{
return m_fileHeader.m_zone;
}
uint32_t
PcapFile::GetSigFigs (void)
{
return m_fileHeader.m_sigFigs;
}
uint32_t
PcapFile::GetSnapLen (void)
{
return m_fileHeader.m_snapLen;
}
uint32_t
PcapFile::GetDataLinkType (void)
{
return m_fileHeader.m_type;
}
bool
PcapFile::GetSwapMode (void)
{
return m_swapMode;
}
uint8_t
PcapFile::Swap (uint8_t val)
{
return val;
}
uint16_t
PcapFile::Swap (uint16_t val)
{
return ((val >> 8) & 0x00ff) | ((val << 8) & 0xff00);
}
uint32_t
PcapFile::Swap (uint32_t val)
{
return ((val >> 24) & 0x000000ff) | ((val >> 8) & 0x0000ff00) | ((val << 8) & 0x00ff0000) | ((val << 24) & 0xff000000);
}
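//
// As a quick check of the scalar overloads above: Swap on a uint8_t is an
// identity, Swap (uint16_t (0x0102)) yields 0x0201, and
// Swap (uint32_t (0xa1b2c3d4)) yields 0xd4c3b2a1 -- the SWAPPED_MAGIC
// constant defined at the top of this file.
//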
void
PcapFile::Swap (PcapFileHeader *from, PcapFileHeader *to)
{
to->m_magicNumber = Swap (from->m_magicNumber);
to->m_versionMajor = Swap (from->m_versionMajor);
to->m_versionMinor = Swap (from->m_versionMinor);
to->m_zone = Swap (uint32_t(from->m_zone));
to->m_sigFigs = Swap (from->m_sigFigs);
to->m_snapLen = Swap (from->m_snapLen);
to->m_type = Swap (from->m_type);
}
void
PcapFile::Swap (PcapRecordHeader *from, PcapRecordHeader *to)
{
to->m_tsSec = Swap (from->m_tsSec);
to->m_tsUsec = Swap (from->m_tsUsec);
to->m_inclLen = Swap (from->m_inclLen);
to->m_origLen = Swap (from->m_origLen);
}
bool
PcapFile::WriteFileHeader (void)
{
//
// If we're initializing the file, we need to write the pcap file header
// at the start of the file.
//
int result = fseek (m_filePtr, 0, SEEK_SET);
if (result)
{
return true;
}
//
// We have the ability to write out the pcap file header in a foreign endian
// format, so we need a temp place to swap on the way out.
//
PcapFileHeader header;
//
// the pointer headerOut selects either the swapped or non-swapped version of
// the pcap file header.
//
PcapFileHeader *headerOut = 0;
if (m_swapMode == false)
{
headerOut = &m_fileHeader;
}
else
{
Swap (&m_fileHeader, &header);
headerOut = &header;
}
//
// Watch out for memory alignment differences between machines, so write
// them all individually.
//
result = 0;
result |= (fwrite (&headerOut->m_magicNumber, sizeof(headerOut->m_magicNumber), 1, m_filePtr) != 1);
result |= (fwrite (&headerOut->m_versionMajor, sizeof(headerOut->m_versionMajor), 1, m_filePtr) != 1);
result |= (fwrite (&headerOut->m_versionMinor, sizeof(headerOut->m_versionMinor), 1, m_filePtr) != 1);
result |= (fwrite (&headerOut->m_zone, sizeof(headerOut->m_zone), 1, m_filePtr) != 1);
result |= (fwrite (&headerOut->m_sigFigs, sizeof(headerOut->m_sigFigs), 1, m_filePtr) != 1);
result |= (fwrite (&headerOut->m_snapLen, sizeof(headerOut->m_snapLen), 1, m_filePtr) != 1);
result |= (fwrite (&headerOut->m_type, sizeof(headerOut->m_type), 1, m_filePtr) != 1);
//
// If any of the fwrites above did not succeed in writing the correct
// number of objects, result will be nonzero and will indicate an error.
//
return result != 0;
}
bool
PcapFile::ReadAndVerifyFileHeader (void)
{
//
// Pcap file header is always at the start of the file
//
int result = fseek (m_filePtr, 0, SEEK_SET);
if (result)
{
return true;
}
//
// Watch out for memory alignment differences between machines, so read
// them all individually.
//
result = 0;
result |= (fread (&m_fileHeader.m_magicNumber, sizeof(m_fileHeader.m_magicNumber), 1, m_filePtr) != 1);
result |= (fread (&m_fileHeader.m_versionMajor, sizeof(m_fileHeader.m_versionMajor), 1, m_filePtr) != 1);
result |= (fread (&m_fileHeader.m_versionMinor, sizeof(m_fileHeader.m_versionMinor), 1, m_filePtr) != 1);
result |= (fread (&m_fileHeader.m_zone, sizeof(m_fileHeader.m_zone), 1, m_filePtr) != 1);
result |= (fread (&m_fileHeader.m_sigFigs, sizeof(m_fileHeader.m_sigFigs), 1, m_filePtr) != 1);
result |= (fread (&m_fileHeader.m_snapLen, sizeof(m_fileHeader.m_snapLen), 1, m_filePtr) != 1);
result |= (fread (&m_fileHeader.m_type, sizeof(m_fileHeader.m_type), 1, m_filePtr) != 1);
//
// If any of the freads above did not succeed in reading the correct number of
// objects, result will be nonzero.
//
if (result)
{
return true;
}
//
// There are four possible magic numbers that can be there. Normal and byte
// swapped versions of the standard magic number, and normal and byte swapped
// versions of the magic number indicating nanosecond resolution timestamps.
//
if (m_fileHeader.m_magicNumber != MAGIC && m_fileHeader.m_magicNumber != SWAPPED_MAGIC &&
m_fileHeader.m_magicNumber != NS_MAGIC && m_fileHeader.m_magicNumber != NS_SWAPPED_MAGIC)
{
return true;
}
//
// If the magic number is swapped, then we can assume that everything else we read
// is swapped.
//
m_swapMode = (m_fileHeader.m_magicNumber == SWAPPED_MAGIC || m_fileHeader.m_magicNumber == NS_SWAPPED_MAGIC);
if (m_swapMode)
{
Swap (&m_fileHeader, &m_fileHeader);
}
//
// We only deal with one version of the pcap file format.
//
if (m_fileHeader.m_versionMajor != VERSION_MAJOR || m_fileHeader.m_versionMinor != VERSION_MINOR)
{
return true;
}
//
// A quick reasonableness test:  the time zone offset must correspond to a
// real place on the planet.
//
if (m_fileHeader.m_zone < -12 || m_fileHeader.m_zone > 12)
{
return true;
}
m_haveFileHeader = true;
return false;
}
bool
PcapFile::Open (std::string const &filename, std::string const &mode)
{
//
// If opening a new file, implicit close of any existing file required.
//
Close ();
//
// All pcap files are binary files, so we just do this automatically.
//
std::string realMode = mode + "b";
//
// Our modes may be subtly different from the standard fopen semantics since
// we need to have a pcap file header to succeed in some cases; so we need
// to process different modes according to our own definitions of the modes.
//
// In the case of read modes, we must read, check and save the pcap file
// header as well as just opening the file.
//
// In the case of write modes, we just pass the call on through to the
// library.
//
// In the case of append modes, we change the semantics to require the
// given file to exist. We can't just create a file since we can't make up
// a pcap file header on our own.
//
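// A minimal sketch of the resulting client-side pattern (the filename is
// illustrative):
//
//   PcapFile f;
//   if (f.Open ("trace.pcap", "a") == false)
//     {
//       // the file existed with a valid pcap header; writes now append
//     }
//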
if (realMode == "rb" || realMode == "r+b")
{
m_filePtr = fopen (filename.c_str (), realMode.c_str ());
if (m_filePtr == 0)
{
return true;
}
m_filename = filename;
return ReadAndVerifyFileHeader ();
}
else if (realMode == "wb" || realMode == "w+b")
{
m_filePtr = fopen (filename.c_str (), realMode.c_str ());
if (m_filePtr)
{
m_filename = filename;
return false;
}
else
{
return true;
}
}
else if (realMode == "ab" || realMode == "a+b")
{
//
// Remember that semantics for append are different here. We never create
// a file since we can't make up a pcap file header. We first have to
// open the file in read-only mode and check to see that it exists and
// read the file header. If this all works out, then we can go ahead and
// open the file in append mode and seek to the end (implicitly).
//
m_filePtr = fopen (filename.c_str (), "rb");
if (m_filePtr == 0)
{
return true;
}
bool result = ReadAndVerifyFileHeader ();
if (result == true)
{
Close ();
return true;
}
//
// We have a properly initialized file and have the pcap file header
// loaded and checked. This means that the file meets all of the
// criteria for opening in append mode, but the file is in read-only mode
// now -- we must close it and open it in the correct mode.
//
fclose (m_filePtr);
m_filePtr = 0;
m_filePtr = fopen (filename.c_str (), realMode.c_str ());
if (m_filePtr == 0)
{
return true;
}
m_filename = filename;
return false;
}
else
{
return true;
}
}
bool
PcapFile::Init (uint32_t dataLinkType, uint32_t snapLen, int32_t timeZoneCorrection, bool swapMode)
{
//
// Initialize the in-memory file header.
//
m_fileHeader.m_magicNumber = MAGIC;
m_fileHeader.m_versionMajor = VERSION_MAJOR;
m_fileHeader.m_versionMinor = VERSION_MINOR;
m_fileHeader.m_zone = timeZoneCorrection;
m_fileHeader.m_sigFigs = 0;
m_fileHeader.m_snapLen = snapLen;
m_fileHeader.m_type = dataLinkType;
m_haveFileHeader = true;
m_swapMode = swapMode;
return WriteFileHeader ();
}
bool
PcapFile::Write (uint32_t tsSec, uint32_t tsUsec, uint8_t const * const data, uint32_t totalLen)
{
if (m_haveFileHeader == false)
{
return true;
}
uint32_t inclLen = totalLen > m_fileHeader.m_snapLen ? m_fileHeader.m_snapLen : totalLen;
PcapRecordHeader header;
header.m_tsSec = tsSec;
header.m_tsUsec = tsUsec;
header.m_inclLen = inclLen;
header.m_origLen = totalLen;
if (m_swapMode)
{
Swap (&header, &header);
}
//
// Watch out for memory alignment differences between machines, so write
// them all individually.
//
uint32_t result = 0;
result |= (fwrite (&header.m_tsSec, sizeof(header.m_tsSec), 1, m_filePtr) != 1);
result |= (fwrite (&header.m_tsUsec, sizeof(header.m_tsUsec), 1, m_filePtr) != 1);
result |= (fwrite (&header.m_inclLen, sizeof(header.m_inclLen), 1, m_filePtr) != 1);
result |= (fwrite (&header.m_origLen, sizeof(header.m_origLen), 1, m_filePtr) != 1);
result |= fwrite (data, 1, inclLen, m_filePtr) != inclLen;
return result != 0;
}
bool
PcapFile::Read (
uint8_t * const data,
uint32_t maxBytes,
uint32_t &tsSec,
uint32_t &tsUsec,
uint32_t &inclLen,
uint32_t &origLen,
uint32_t &readLen)
{
if (m_haveFileHeader == false)
{
return true;
}
PcapRecordHeader header;
//
// Watch out for memory alignment differences between machines, so read
// them all individually.
//
uint32_t result = 0;
result |= (fread (&header.m_tsSec, sizeof(header.m_tsSec), 1, m_filePtr) != 1);
result |= (fread (&header.m_tsUsec, sizeof(header.m_tsUsec), 1, m_filePtr) != 1);
result |= (fread (&header.m_inclLen, sizeof(header.m_inclLen), 1, m_filePtr) != 1);
result |= (fread (&header.m_origLen, sizeof(header.m_origLen), 1, m_filePtr) != 1);
//
// If any of the freads above did not succeed in reading the correct number of
// objects, result will be nonzero.
//
if (result)
{
return true;
}
if (m_swapMode)
{
Swap (&header, &header);
}
tsSec = header.m_tsSec;
tsUsec = header.m_tsUsec;
inclLen = header.m_inclLen;
origLen = header.m_origLen;
//
// We don't always want to force the client to keep a maximum length buffer
// around so we allow her to specify a minimum number of bytes to read.
// Usually 64 bytes is enough information to print all of the headers, so
// it isn't typically necessary to read all thousand bytes of an echo packet,
// for example, to figure out what is going on.
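// For instance, a Read with maxBytes == 64 against a fully stored 1500 byte
// record copies only the first 64 octets; the fseek logic below then skips
// the remainder so the next Read starts at the following record header.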
//
readLen = maxBytes < header.m_inclLen ? maxBytes : header.m_inclLen;
result = fread (data, 1, readLen, m_filePtr) != readLen;
if (result)
{
return result;
}
//
// To keep the file pointer pointed in the right place, however, we always
// need to account for the entire packet as stored originally.
//
if (readLen < header.m_inclLen)
{
uint64_t pos = ftell (m_filePtr);
int result = fseek (m_filePtr, pos + header.m_inclLen - readLen, SEEK_SET);
if (result)
{
return true;
}
}
return false;
}
} //namespace ns3

src/common/pcap-file.h Normal file

@@ -0,0 +1,194 @@
/* -*- Mode: C++; c-file-style: "gnu"; indent-tabs-mode:nil; -*- */
/*
* Copyright (c) 2009 University of Washington
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation;
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*/
#ifndef PCAP_FILE_H
#define PCAP_FILE_H
#include <string>
#include <stdint.h>
namespace ns3 {
/*
* A class representing a pcap file. This allows easy creation, writing and
* reading of files composed of stored packets, which may be viewed using
* standard tools.
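*
* A minimal write-side sketch (the filename, packet buffer, and data link
* type value are illustrative; 1 is the conventional pcap value for
* Ethernet):
*
*   PcapFile f;
*   f.Open ("trace.pcap", "w");            // create or truncate
*   f.Init (1);                            // write the pcap file header
*   f.Write (0, 0, packetData, packetLen); // append one packet record
*   f.Close ();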
*/
class PcapFile
{
public:
static const int32_t ZONE_DEFAULT = 0; /**< Time zone offset for current location */
static const uint32_t SNAPLEN_DEFAULT = 65535; /**< Default value for maximum octets to save per packet */
public:
PcapFile ();
~PcapFile ();
/**
* Create a new pcap file or open an existing pcap file. Semantics are
* similar to the C standard library function \c fopen, but differ in that
* positions in the file are based on packets not characters. For example
* if the file is opened for reading, the file position indicator (seek
* position) points to the beginning of the first packet in the file, not
* zero (which would point to the start of the pcap header).
*
* Possible modes are:
*
* \verbatim
* "r": Open a file for reading. The file must exist. The pcap header
* is assumed to exist in the file and will be read and checked.
* The file seek position indicator is set to point to the first
* packet on exit.
*
* "w": Create an empty file for writing. If a file with the same name
* already exists its content is erased and the file is treated as a
* new empty pcap file. The file is assumed not to have a pcap
* header and the caller is responsible for calling Init before saving
* any packet data. The file seek position indicator is set to point
* to the beginning of the file on exit since there will be no pcap
* header.
*
* "a": Append to an existing file. This mode allows for adding packet data
* to the end of an existing pcap file. The file must exist and have a
* valid pcap header written (N.B. this is different from standard fopen
* semantics). The file seek position indicator is set to point
* to the end of the file on exit.
*
* "r+": Open a file for update -- both reading and writing. The file must
* exist. The pcap header is assumed to have been written to the
* file and will be read and checked. The file seek position indicator
* is set to point to the first packet on exit.
*
* "w+": Create an empty file for both reading and writing. If a file with
* the same name already exists, its content is erased and the file is
* treated as a new empty pcap file. Since this new file will not have
* a pcap header, the caller is responsible for calling Init before
* saving any packet data. On exit, the file seek position indicator is
* set to point to the beginning of the file.
*
* "a+" Open a file for reading and appending. The file must exist and have a
* valid pcap header written (N.B. this is different from standard fopen
* semantics). The file seek position indicator is set to point
* to the end of the file on exit. Existing content is preserved.
* \endverbatim
*
* Since a pcap file is always a binary file, the file is automatically
* opened in binary mode.  For example, providing a mode string "a+"
* results in the underlying OS file being opened in "a+b" mode.
*
* \param filename String containing the name of the file.
*
* \param mode String containing the access mode for the file.
*
* \returns Error indication that should be interpreted as, "did an error
* happen"? That is, the method returns false if the open succeeds, true
* otherwise.  The errno variable will be set by the OS to provide a
* more descriptive failure indication.
*/
bool Open (std::string const &filename, std::string const &mode);
void Close (void);
/**
* Initialize the pcap file associated with this object. This file must have
* been previously opened with write permissions.
*
* \param dataLinkType A data link type as defined in the pcap library. If
* you want to make resulting pcap files visible in existing tools, the
* data link type must match existing definitions, such as PCAP_ETHERNET,
* PCAP_PPP, PCAP_80211, etc. If you are storing different kinds of packet
* data, such as naked TCP headers, you are at liberty to locally define your
* own data link types. According to the pcap-linktype man page, "well-known"
* pcap linktypes range from 0 to 177. If you use a large random number for
* your type, the chances of a collision are small.
*
* \param snapLen An optional maximum size for packets written to the file.
* Defaults to 65535. If packets exceed this length they are truncated.
*
* \param timeZoneCorrection An integer describing the offset of your local
* time zone from UTC/GMT. For example, Pacific Standard Time in the US is
* GMT-8, so one would enter -8 for that correction. Defaults to 0 (UTC).
*
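* \param swapMode If true, write the file header out in the opposite of
* this machine's native byte order.  This is mainly useful for testing the
* byte-swapping code paths; it defaults to false.
*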
* \returns false if the initialization succeeds, true otherwise.
*
* \warning Calling this method on an existing file will result in the loss
* of any existing data.
*/
bool Init (uint32_t dataLinkType,
uint32_t snapLen = SNAPLEN_DEFAULT,
int32_t timeZoneCorrection = ZONE_DEFAULT,
bool swapMode = false);
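/**
* Write the next packet to this pcap file.  The file must have been opened
* for writing, and Init must already have been called, since a record
* cannot be written to a file without a valid pcap file header.
*
* \param tsSec Seconds part of the packet timestamp.
* \param tsUsec Microseconds part of the packet timestamp.
* \param data Pointer to the packet data to store.
* \param totalLen Length of the original packet.  If this exceeds the
* snapLen provided to Init, only snapLen octets are stored in the file.
*
* \returns false if the write succeeds, true otherwise.
*/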
bool Write (uint32_t tsSec, uint32_t tsUsec, uint8_t const * const data, uint32_t totalLen);
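/**
* Read the next packet from this pcap file.  The file must have been opened
* for reading and must contain a valid pcap file header.
*
* \param data Buffer into which the packet data is copied.
* \param maxBytes Size of the buffer; at most this many octets are copied.
* \param tsSec Set to the seconds part of the packet timestamp.
* \param tsUsec Set to the microseconds part of the packet timestamp.
* \param inclLen Set to the number of octets of this packet stored in the file.
* \param origLen Set to the length of the original packet.
* \param readLen Set to the number of octets actually copied, which is the
* smaller of maxBytes and inclLen.
*
* \returns false if the read succeeds, true otherwise.
*/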
bool Read (uint8_t * const data,
uint32_t maxBytes,
uint32_t &tsSec,
uint32_t &tsUsec,
uint32_t &inclLen,
uint32_t &origLen,
uint32_t &readLen);
bool GetSwapMode (void);
uint32_t GetMagic (void);
uint16_t GetVersionMajor (void);
uint16_t GetVersionMinor (void);
int32_t GetTimeZoneOffset (void);
uint32_t GetSigFigs (void);
uint32_t GetSnapLen (void);
uint32_t GetDataLinkType (void);
private:
typedef struct {
uint32_t m_magicNumber; /**< Magic number identifying this as a pcap file */
uint16_t m_versionMajor; /**< Major version identifying the version of pcap used in this file */
uint16_t m_versionMinor; /**< Minor version identifying the version of pcap used in this file */
int32_t m_zone; /**< Time zone correction to be applied to timestamps of packets */
uint32_t m_sigFigs; /**< Unused by pretty much everybody */
uint32_t m_snapLen; /**< Maximum length of packet data stored in records */
uint32_t m_type; /**< Data link type of packet data */
} PcapFileHeader;
typedef struct {
uint32_t m_tsSec; /**< seconds part of timestamp */
uint32_t m_tsUsec; /**< microseconds part of timestamp (nsecs for PCAP_NSEC_MAGIC) */
uint32_t m_inclLen; /**< number of octets of packet saved in file */
uint32_t m_origLen; /**< actual length of original packet */
} PcapRecordHeader;
uint8_t Swap (uint8_t val);
uint16_t Swap (uint16_t val);
uint32_t Swap (uint32_t val);
void Swap (PcapFileHeader *from, PcapFileHeader *to);
void Swap (PcapRecordHeader *from, PcapRecordHeader *to);
bool WriteFileHeader (void);
bool ReadAndVerifyFileHeader (void);
std::string m_filename;
FILE *m_filePtr;
PcapFileHeader m_fileHeader;
bool m_haveFileHeader;
bool m_swapMode;
};
}//namespace ns3
#endif /* PCAP_FILE_H */


@@ -18,6 +18,8 @@ def build(bld):
'tag-buffer.cc',
'packet-tag-list.cc',
'ascii-writer.cc',
'pcap-file.cc',
'pcap-file-test-suite.cc',
]
headers = bld.new_task_gen('ns3header')
@@ -38,4 +40,5 @@ def build(bld):
'packet-tag-list.h',
'ascii-writer.h',
'sgi-hashmap.h',
'pcap-file.h',
]


@@ -0,0 +1,975 @@
/* -*- Mode:C++; c-file-style:"gnu"; indent-tabs-mode:nil; -*- */
/*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation;
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*/
#include "test.h"
#include "names.h"
using namespace ns3;
// ===========================================================================
// Cook up a couple of simple object classes that we can use in the object
// naming tests. They do nothing but be of the right type.
// ===========================================================================
class TestObject : public Object
{
public:
static TypeId GetTypeId (void)
{
static TypeId tid = TypeId ("TestObject")
.SetParent (Object::GetTypeId ())
.HideFromDocumentation ()
.AddConstructor<TestObject> ();
return tid;
}
TestObject () {}
virtual void Dispose (void) {}
};
class AlternateTestObject : public Object
{
public:
static TypeId GetTypeId (void)
{
static TypeId tid = TypeId ("AlternateTestObject")
.SetParent (Object::GetTypeId ())
.HideFromDocumentation ()
.AddConstructor<AlternateTestObject> ();
return tid;
}
AlternateTestObject () {}
virtual void Dispose (void) {}
};
// ===========================================================================
// Test case to make sure that the Object Name Service can do its most basic
// job and add associations between Objects using the lowest level add
// function, which is:
//
// Add (Ptr<Object> context, std::string name, Ptr<Object> object);
//
// All other add functions will just translate into this form, so this is the
// most basic Add functionality.
// ===========================================================================
class BasicAddTestCase : public TestCase
{
public:
BasicAddTestCase ();
virtual ~BasicAddTestCase ();
private:
virtual bool DoRun (void);
virtual void DoTeardown (void);
};
BasicAddTestCase::BasicAddTestCase ()
: TestCase ("Check low level Names::Add and Names::FindName functionality")
{
}
BasicAddTestCase::~BasicAddTestCase ()
{
}
void
BasicAddTestCase::DoTeardown (void)
{
Names::Delete ();
}
bool
BasicAddTestCase::DoRun (void)
{
std::string found;
Ptr<TestObject> objectOne = CreateObject<TestObject> ();
Names::Add (Ptr<Object> (0, false), "Name One", objectOne);
Ptr<TestObject> objectTwo = CreateObject<TestObject> ();
Names::Add (Ptr<Object> (0, false), "Name Two", objectTwo);
Ptr<TestObject> childOfObjectOne = CreateObject<TestObject> ();
Names::Add (objectOne, "Child", childOfObjectOne);
Ptr<TestObject> childOfObjectTwo = CreateObject<TestObject> ();
Names::Add (objectTwo, "Child", childOfObjectTwo);
found = Names::FindName (objectOne);
NS_TEST_ASSERT_MSG_EQ (found, "Name One", "Could not Names::Add and Names::FindName an Object");
found = Names::FindName (objectTwo);
NS_TEST_ASSERT_MSG_EQ (found, "Name Two", "Could not Names::Add and Names::FindName a second Object");
found = Names::FindName (childOfObjectOne);
NS_TEST_ASSERT_MSG_EQ (found, "Child", "Could not Names::Add and Names::FindName a child Object");
found = Names::FindName (childOfObjectTwo);
NS_TEST_ASSERT_MSG_EQ (found, "Child", "Could not Names::Add and Names::FindName a child Object");
return false;
}
// ===========================================================================
// Test case to make sure that the Object Name Service can correctly use a
// string context in the most basic ways
//
// Add (std::string context, std::string name, Ptr<Object> object);
//
// High level path-based functions will translate into this form, so this is
// the second most basic Add functionality.
// ===========================================================================
class StringContextAddTestCase : public TestCase
{
public:
StringContextAddTestCase ();
virtual ~StringContextAddTestCase ();
private:
virtual bool DoRun (void);
virtual void DoTeardown (void);
};
StringContextAddTestCase::StringContextAddTestCase ()
: TestCase ("Check string context Names::Add and Names::FindName functionality")
{
}
StringContextAddTestCase::~StringContextAddTestCase ()
{
}
void
StringContextAddTestCase::DoTeardown (void)
{
Names::Delete ();
}
bool
StringContextAddTestCase::DoRun (void)
{
std::string found;
Ptr<TestObject> objectOne = CreateObject<TestObject> ();
Names::Add ("/Names", "Name One", objectOne);
Ptr<TestObject> objectTwo = CreateObject<TestObject> ();
Names::Add ("/Names", "Name Two", objectTwo);
Ptr<TestObject> childOfObjectOne = CreateObject<TestObject> ();
Names::Add ("/Names/Name One", "Child", childOfObjectOne);
Ptr<TestObject> childOfObjectTwo = CreateObject<TestObject> ();
Names::Add ("/Names/Name Two", "Child", childOfObjectTwo);
found = Names::FindName (objectOne);
NS_TEST_ASSERT_MSG_EQ (found, "Name One", "Could not Names::Add and Names::FindName an Object");
found = Names::FindName (objectTwo);
NS_TEST_ASSERT_MSG_EQ (found, "Name Two", "Could not Names::Add and Names::FindName a second Object");
found = Names::FindName (childOfObjectOne);
NS_TEST_ASSERT_MSG_EQ (found, "Child", "Could not Names::Add and Names::FindName a child Object");
found = Names::FindName (childOfObjectTwo);
NS_TEST_ASSERT_MSG_EQ (found, "Child", "Could not Names::Add and Names::FindName a child Object");
return false;
}
// ===========================================================================
// Test case to make sure that the Object Name Service can correctly use a
// fully qualified path to add associations
//
// Add (std::string name, Ptr<Object> object);
// ===========================================================================
class FullyQualifiedAddTestCase : public TestCase
{
public:
FullyQualifiedAddTestCase ();
virtual ~FullyQualifiedAddTestCase ();
private:
virtual bool DoRun (void);
virtual void DoTeardown (void);
};
FullyQualifiedAddTestCase::FullyQualifiedAddTestCase ()
: TestCase ("Check fully qualified path Names::Add and Names::FindName functionality")
{
}
FullyQualifiedAddTestCase::~FullyQualifiedAddTestCase ()
{
}
void
FullyQualifiedAddTestCase::DoTeardown (void)
{
Names::Delete ();
}
bool
FullyQualifiedAddTestCase::DoRun (void)
{
std::string found;
Ptr<TestObject> objectOne = CreateObject<TestObject> ();
Names::Add ("/Names/Name One", objectOne);
Ptr<TestObject> objectTwo = CreateObject<TestObject> ();
Names::Add ("/Names/Name Two", objectTwo);
Ptr<TestObject> childOfObjectOne = CreateObject<TestObject> ();
Names::Add ("/Names/Name One/Child", childOfObjectOne);
Ptr<TestObject> childOfObjectTwo = CreateObject<TestObject> ();
Names::Add ("/Names/Name Two/Child", childOfObjectTwo);
found = Names::FindName (objectOne);
NS_TEST_ASSERT_MSG_EQ (found, "Name One", "Could not Names::Add and Names::FindName an Object");
found = Names::FindName (objectTwo);
NS_TEST_ASSERT_MSG_EQ (found, "Name Two", "Could not Names::Add and Names::FindName a second Object");
found = Names::FindName (childOfObjectOne);
NS_TEST_ASSERT_MSG_EQ (found, "Child", "Could not Names::Add and Names::FindName a child Object");
found = Names::FindName (childOfObjectTwo);
NS_TEST_ASSERT_MSG_EQ (found, "Child", "Could not Names::Add and Names::FindName a child Object");
return false;
}
// ===========================================================================
// Test case to make sure that the Object Name Service can correctly use a
// relative path to add associations.  This functionality is provided as a
// convenience so clients don't always have to provide the name service
// namespace name in all of their strings.
//
// Add (std::string name, Ptr<Object> object);
// ===========================================================================
class RelativeAddTestCase : public TestCase
{
public:
RelativeAddTestCase ();
virtual ~RelativeAddTestCase ();
private:
virtual bool DoRun (void);
virtual void DoTeardown (void);
};
RelativeAddTestCase::RelativeAddTestCase ()
: TestCase ("Check relative path Names::Add and Names::FindName functionality")
{
}
RelativeAddTestCase::~RelativeAddTestCase ()
{
}
void
RelativeAddTestCase::DoTeardown (void)
{
Names::Delete ();
}
bool
RelativeAddTestCase::DoRun (void)
{
std::string found;
Ptr<TestObject> objectOne = CreateObject<TestObject> ();
Names::Add ("Name One", objectOne);
Ptr<TestObject> objectTwo = CreateObject<TestObject> ();
Names::Add ("Name Two", objectTwo);
Ptr<TestObject> childOfObjectOne = CreateObject<TestObject> ();
Names::Add ("Name One/Child", childOfObjectOne);
Ptr<TestObject> childOfObjectTwo = CreateObject<TestObject> ();
Names::Add ("Name Two/Child", childOfObjectTwo);
found = Names::FindName (objectOne);
NS_TEST_ASSERT_MSG_EQ (found, "Name One", "Could not Names::Add and Names::FindName an Object");
found = Names::FindName (objectTwo);
NS_TEST_ASSERT_MSG_EQ (found, "Name Two", "Could not Names::Add and Names::FindName a second Object");
found = Names::FindName (childOfObjectOne);
NS_TEST_ASSERT_MSG_EQ (found, "Child", "Could not Names::Add and Names::FindName a child Object");
found = Names::FindName (childOfObjectTwo);
NS_TEST_ASSERT_MSG_EQ (found, "Child", "Could not Names::Add and Names::FindName a child Object");
return false;
}
// ===========================================================================
// Test case to make sure that the Object Name Service can rename objects in
// its most basic way, which is
//
// Rename (Ptr<Object> context, std::string oldname, std::string newname);
//
// All other rename functions will just translate into this form, so this is the
// most basic rename functionality.
// ===========================================================================
class BasicRenameTestCase : public TestCase
{
public:
BasicRenameTestCase ();
virtual ~BasicRenameTestCase ();
private:
virtual bool DoRun (void);
virtual void DoTeardown (void);
};
BasicRenameTestCase::BasicRenameTestCase ()
: TestCase ("Check low level Names::Rename functionality")
{
}
BasicRenameTestCase::~BasicRenameTestCase ()
{
}
void
BasicRenameTestCase::DoTeardown (void)
{
Names::Delete ();
}
bool
BasicRenameTestCase::DoRun (void)
{
std::string found;
Ptr<TestObject> objectOne = CreateObject<TestObject> ();
Names::Add (Ptr<Object> (0, false), "Name", objectOne);
Ptr<TestObject> childOfObjectOne = CreateObject<TestObject> ();
Names::Add (objectOne, "Child", childOfObjectOne);
found = Names::FindName (objectOne);
NS_TEST_ASSERT_MSG_EQ (found, "Name", "Could not Names::Add and Names::FindName an Object");
Names::Rename (Ptr<Object> (0, false), "Name", "New Name");
found = Names::FindName (objectOne);
NS_TEST_ASSERT_MSG_EQ (found, "New Name", "Could not Names::Rename an Object");
found = Names::FindName (childOfObjectOne);
NS_TEST_ASSERT_MSG_EQ (found, "Child", "Could not Names::Add and Names::FindName a child Object");
Names::Rename (objectOne, "Child", "New Child");
found = Names::FindName (childOfObjectOne);
NS_TEST_ASSERT_MSG_EQ (found, "New Child", "Could not Names::Rename a child Object");
return false;
}
// ===========================================================================
// Test case to make sure that the Object Name Service can rename objects
// using a string context
//
// Rename (std::string context, std::string oldname, std::string newname);
// ===========================================================================
class StringContextRenameTestCase : public TestCase
{
public:
StringContextRenameTestCase ();
virtual ~StringContextRenameTestCase ();
private:
virtual bool DoRun (void);
virtual void DoTeardown (void);
};
StringContextRenameTestCase::StringContextRenameTestCase ()
: TestCase ("Check string context-based Names::Rename functionality")
{
}
StringContextRenameTestCase::~StringContextRenameTestCase ()
{
}
void
StringContextRenameTestCase::DoTeardown (void)
{
Names::Delete ();
}
bool
StringContextRenameTestCase::DoRun (void)
{
std::string found;
Ptr<TestObject> objectOne = CreateObject<TestObject> ();
Names::Add ("/Names", "Name", objectOne);
Ptr<TestObject> childOfObjectOne = CreateObject<TestObject> ();
Names::Add ("/Names/Name", "Child", childOfObjectOne);
found = Names::FindName (objectOne);
NS_TEST_ASSERT_MSG_EQ (found, "Name", "Could not Names::Add and Names::FindName an Object");
Names::Rename ("/Names", "Name", "New Name");
found = Names::FindName (objectOne);
NS_TEST_ASSERT_MSG_EQ (found, "New Name", "Could not Names::Rename an Object");
found = Names::FindName (childOfObjectOne);
NS_TEST_ASSERT_MSG_EQ (found, "Child", "Could not Names::Add and Names::FindName a child Object");
Names::Rename ("/Names/New Name", "Child", "New Child");
found = Names::FindName (childOfObjectOne);
NS_TEST_ASSERT_MSG_EQ (found, "New Child", "Could not Names::Rename a child Object");
return false;
}
// ===========================================================================
// Test case to make sure that the Object Name Service can rename objects
// using a fully qualified path name
//
// Rename (std::string oldpath, std::string newname);
// ===========================================================================
class FullyQualifiedRenameTestCase : public TestCase
{
public:
FullyQualifiedRenameTestCase ();
virtual ~FullyQualifiedRenameTestCase ();
private:
virtual bool DoRun (void);
virtual void DoTeardown (void);
};
FullyQualifiedRenameTestCase::FullyQualifiedRenameTestCase ()
: TestCase ("Check fully qualified path Names::Rename functionality")
{
}
FullyQualifiedRenameTestCase::~FullyQualifiedRenameTestCase ()
{
}
void
FullyQualifiedRenameTestCase::DoTeardown (void)
{
Names::Delete ();
}
bool
FullyQualifiedRenameTestCase::DoRun (void)
{
std::string found;
Ptr<TestObject> objectOne = CreateObject<TestObject> ();
Names::Add ("/Names/Name", objectOne);
Ptr<TestObject> childOfObjectOne = CreateObject<TestObject> ();
Names::Add ("/Names/Name/Child", childOfObjectOne);
found = Names::FindName (objectOne);
NS_TEST_ASSERT_MSG_EQ (found, "Name", "Could not Names::Add and Names::FindName an Object");
Names::Rename ("/Names/Name", "New Name");
found = Names::FindName (objectOne);
NS_TEST_ASSERT_MSG_EQ (found, "New Name", "Could not Names::Rename an Object");
found = Names::FindName (childOfObjectOne);
NS_TEST_ASSERT_MSG_EQ (found, "Child", "Could not Names::Add and Names::FindName a child Object");
Names::Rename ("/Names/New Name/Child", "New Child");
found = Names::FindName (childOfObjectOne);
NS_TEST_ASSERT_MSG_EQ (found, "New Child", "Could not Names::Rename a child Object");
return false;
}
// ===========================================================================
// Test case to make sure that the Object Name Service can rename objects
// using a relative path name
//
// Rename (std::string oldpath, std::string newname);
// ===========================================================================
class RelativeRenameTestCase : public TestCase
{
public:
RelativeRenameTestCase ();
virtual ~RelativeRenameTestCase ();
private:
virtual bool DoRun (void);
virtual void DoTeardown (void);
};
RelativeRenameTestCase::RelativeRenameTestCase ()
: TestCase ("Check relative path Names::Rename functionality")
{
}
RelativeRenameTestCase::~RelativeRenameTestCase ()
{
}
void
RelativeRenameTestCase::DoTeardown (void)
{
Names::Delete ();
}
bool
RelativeRenameTestCase::DoRun (void)
{
std::string found;
Ptr<TestObject> objectOne = CreateObject<TestObject> ();
Names::Add ("Name", objectOne);
Ptr<TestObject> childOfObjectOne = CreateObject<TestObject> ();
Names::Add ("Name/Child", childOfObjectOne);
found = Names::FindName (objectOne);
NS_TEST_ASSERT_MSG_EQ (found, "Name", "Could not Names::Add and Names::FindName an Object");
Names::Rename ("Name", "New Name");
found = Names::FindName (objectOne);
NS_TEST_ASSERT_MSG_EQ (found, "New Name", "Could not Names::Rename an Object");
found = Names::FindName (childOfObjectOne);
NS_TEST_ASSERT_MSG_EQ (found, "Child", "Could not Names::Add and Names::FindName a child Object");
Names::Rename ("New Name/Child", "New Child");
found = Names::FindName (childOfObjectOne);
NS_TEST_ASSERT_MSG_EQ (found, "New Child", "Could not Names::Rename a child Object");
return false;
}
// ===========================================================================
// Test case to make sure that the Object Name Service can look up an object
// and return its fully qualified path name
//
// FindPath (Ptr<Object> object);
// ===========================================================================
class FindPathTestCase : public TestCase
{
public:
FindPathTestCase ();
virtual ~FindPathTestCase ();
private:
virtual bool DoRun (void);
virtual void DoTeardown (void);
};
FindPathTestCase::FindPathTestCase ()
: TestCase ("Check Names::FindPath functionality")
{
}
FindPathTestCase::~FindPathTestCase ()
{
}
void
FindPathTestCase::DoTeardown (void)
{
Names::Delete ();
}
bool
FindPathTestCase::DoRun (void)
{
std::string found;
Ptr<TestObject> objectOne = CreateObject<TestObject> ();
Names::Add ("Name", objectOne);
Ptr<TestObject> childOfObjectOne = CreateObject<TestObject> ();
Names::Add ("/Names/Name/Child", childOfObjectOne);
found = Names::FindPath (objectOne);
NS_TEST_ASSERT_MSG_EQ (found, "/Names/Name", "Could not Names::Add and Names::FindPath an Object");
found = Names::FindPath (childOfObjectOne);
NS_TEST_ASSERT_MSG_EQ (found, "/Names/Name/Child", "Could not Names::Add and Names::FindPath a child Object");
Ptr<TestObject> objectNotThere = CreateObject<TestObject> ();
found = Names::FindPath (objectNotThere);
NS_TEST_ASSERT_MSG_EQ (found, "", "Unexpectedly found a non-existent Object");
return false;
}
// ===========================================================================
// Test case to make sure that the Object Name Service can find Objects using
// the lowest level find function, which is:
//
// Find (Ptr<Object> context, std::string name);
// ===========================================================================
class BasicFindTestCase : public TestCase
{
public:
BasicFindTestCase ();
virtual ~BasicFindTestCase ();
private:
virtual bool DoRun (void);
virtual void DoTeardown (void);
};
BasicFindTestCase::BasicFindTestCase ()
: TestCase ("Check low level Names::Find functionality")
{
}
BasicFindTestCase::~BasicFindTestCase ()
{
}
void
BasicFindTestCase::DoTeardown (void)
{
Names::Delete ();
}
bool
BasicFindTestCase::DoRun (void)
{
Ptr<TestObject> found;
Ptr<TestObject> objectOne = CreateObject<TestObject> ();
Names::Add ("Name One", objectOne);
Ptr<TestObject> objectTwo = CreateObject<TestObject> ();
Names::Add ("Name Two", objectTwo);
Ptr<TestObject> childOfObjectOne = CreateObject<TestObject> ();
Names::Add ("Name One/Child", childOfObjectOne);
Ptr<TestObject> childOfObjectTwo = CreateObject<TestObject> ();
Names::Add ("Name Two/Child", childOfObjectTwo);
found = Names::Find<TestObject> (Ptr<Object> (0, false), "Name One");
NS_TEST_ASSERT_MSG_EQ (found, objectOne, "Could not find a previously named Object via object context");
found = Names::Find<TestObject> (Ptr<Object> (0, false), "Name Two");
NS_TEST_ASSERT_MSG_EQ (found, objectTwo, "Could not find a previously named Object via object context");
found = Names::Find<TestObject> (objectOne, "Child");
NS_TEST_ASSERT_MSG_EQ (found, childOfObjectOne, "Could not find a previously named child Object via object context");
found = Names::Find<TestObject> (objectTwo, "Child");
NS_TEST_ASSERT_MSG_EQ (found, childOfObjectTwo, "Could not find a previously named child Object via object context");
return false;
}
// ===========================================================================
// Test case to make sure that the Object Name Service can find Objects using
// a string context-based find function, which is:
//
// Find (std::string context, std::string name);
// ===========================================================================
class StringContextFindTestCase : public TestCase
{
public:
StringContextFindTestCase ();
virtual ~StringContextFindTestCase ();
private:
virtual bool DoRun (void);
virtual void DoTeardown (void);
};
StringContextFindTestCase::StringContextFindTestCase ()
: TestCase ("Check string context-based Names::Find functionality")
{
}
StringContextFindTestCase::~StringContextFindTestCase ()
{
}
void
StringContextFindTestCase::DoTeardown (void)
{
Names::Delete ();
}
bool
StringContextFindTestCase::DoRun (void)
{
Ptr<TestObject> found;
Ptr<TestObject> objectOne = CreateObject<TestObject> ();
Names::Add ("Name One", objectOne);
Ptr<TestObject> objectTwo = CreateObject<TestObject> ();
Names::Add ("Name Two", objectTwo);
Ptr<TestObject> childOfObjectOne = CreateObject<TestObject> ();
Names::Add ("Name One/Child", childOfObjectOne);
Ptr<TestObject> childOfObjectTwo = CreateObject<TestObject> ();
Names::Add ("Name Two/Child", childOfObjectTwo);
found = Names::Find<TestObject> ("/Names", "Name One");
NS_TEST_ASSERT_MSG_EQ (found, objectOne, "Could not find a previously named Object via string context");
found = Names::Find<TestObject> ("/Names", "Name Two");
NS_TEST_ASSERT_MSG_EQ (found, objectTwo, "Could not find a previously named Object via string context");
found = Names::Find<TestObject> ("/Names/Name One", "Child");
NS_TEST_ASSERT_MSG_EQ (found, childOfObjectOne, "Could not find a previously named child Object via string context");
found = Names::Find<TestObject> ("/Names/Name Two", "Child");
NS_TEST_ASSERT_MSG_EQ (found, childOfObjectTwo, "Could not find a previously named child Object via string context");
return false;
}
// ===========================================================================
// Test case to make sure that the Object Name Service can find Objects using
// a fully qualified path name-based find function, which is:
//
// Find (std::string name);
// ===========================================================================
class FullyQualifiedFindTestCase : public TestCase
{
public:
FullyQualifiedFindTestCase ();
virtual ~FullyQualifiedFindTestCase ();
private:
virtual bool DoRun (void);
virtual void DoTeardown (void);
};
FullyQualifiedFindTestCase::FullyQualifiedFindTestCase ()
: TestCase ("Check fully qualified path Names::Find functionality")
{
}
FullyQualifiedFindTestCase::~FullyQualifiedFindTestCase ()
{
}
void
FullyQualifiedFindTestCase::DoTeardown (void)
{
Names::Delete ();
}
bool
FullyQualifiedFindTestCase::DoRun (void)
{
Ptr<TestObject> found;
Ptr<TestObject> objectOne = CreateObject<TestObject> ();
Names::Add ("/Names/Name One", objectOne);
Ptr<TestObject> objectTwo = CreateObject<TestObject> ();
Names::Add ("/Names/Name Two", objectTwo);
Ptr<TestObject> childOfObjectOne = CreateObject<TestObject> ();
Names::Add ("/Names/Name One/Child", childOfObjectOne);
Ptr<TestObject> childOfObjectTwo = CreateObject<TestObject> ();
Names::Add ("/Names/Name Two/Child", childOfObjectTwo);
found = Names::Find<TestObject> ("/Names/Name One");
NS_TEST_ASSERT_MSG_EQ (found, objectOne, "Could not find a previously named Object via fully qualified path");
found = Names::Find<TestObject> ("/Names/Name Two");
NS_TEST_ASSERT_MSG_EQ (found, objectTwo, "Could not find a previously named Object via fully qualified path");
found = Names::Find<TestObject> ("/Names/Name One/Child");
NS_TEST_ASSERT_MSG_EQ (found, childOfObjectOne, "Could not find a previously named child Object via fully qualified path");
found = Names::Find<TestObject> ("/Names/Name Two/Child");
NS_TEST_ASSERT_MSG_EQ (found, childOfObjectTwo, "Could not find a previously named child Object via fully qualified path");
return false;
}
// ===========================================================================
// Test case to make sure that the Object Name Service can find Objects using
// a relative path name-based find function, which is:
//
// Find (std::string name);
// ===========================================================================
class RelativeFindTestCase : public TestCase
{
public:
RelativeFindTestCase ();
virtual ~RelativeFindTestCase ();
private:
virtual bool DoRun (void);
virtual void DoTeardown (void);
};
RelativeFindTestCase::RelativeFindTestCase ()
: TestCase ("Check relative path Names::Find functionality")
{
}
RelativeFindTestCase::~RelativeFindTestCase ()
{
}
void
RelativeFindTestCase::DoTeardown (void)
{
Names::Delete ();
}
bool
RelativeFindTestCase::DoRun (void)
{
Ptr<TestObject> found;
Ptr<TestObject> objectOne = CreateObject<TestObject> ();
Names::Add ("Name One", objectOne);
Ptr<TestObject> objectTwo = CreateObject<TestObject> ();
Names::Add ("Name Two", objectTwo);
Ptr<TestObject> childOfObjectOne = CreateObject<TestObject> ();
Names::Add ("Name One/Child", childOfObjectOne);
Ptr<TestObject> childOfObjectTwo = CreateObject<TestObject> ();
Names::Add ("Name Two/Child", childOfObjectTwo);
found = Names::Find<TestObject> ("Name One");
NS_TEST_ASSERT_MSG_EQ (found, objectOne, "Could not find a previously named Object via relative path");
found = Names::Find<TestObject> ("Name Two");
NS_TEST_ASSERT_MSG_EQ (found, objectTwo, "Could not find a previously named Object via relative path");
found = Names::Find<TestObject> ("Name One/Child");
NS_TEST_ASSERT_MSG_EQ (found, childOfObjectOne, "Could not find a previously named child Object via relative path");
found = Names::Find<TestObject> ("Name Two/Child");
NS_TEST_ASSERT_MSG_EQ (found, childOfObjectTwo, "Could not find a previously named child Object via relative path");
return false;
}
// ===========================================================================
// Test case to make sure that the Object Name Service can find Objects using
// a second type.
// ===========================================================================
class AlternateFindTestCase : public TestCase
{
public:
AlternateFindTestCase ();
virtual ~AlternateFindTestCase ();
private:
virtual bool DoRun (void);
virtual void DoTeardown (void);
};
AlternateFindTestCase::AlternateFindTestCase ()
: TestCase ("Check GetObject operation in Names::Find")
{
}
AlternateFindTestCase::~AlternateFindTestCase ()
{
}
void
AlternateFindTestCase::DoTeardown (void)
{
Names::Delete ();
}
bool
AlternateFindTestCase::DoRun (void)
{
Ptr<TestObject> testObject = CreateObject<TestObject> ();
Names::Add ("Test Object", testObject);
Ptr<AlternateTestObject> alternateTestObject = CreateObject<AlternateTestObject> ();
Names::Add ("Alternate Test Object", alternateTestObject);
Ptr<TestObject> foundTestObject;
Ptr<AlternateTestObject> foundAlternateTestObject;
foundTestObject = Names::Find<TestObject> ("Test Object");
NS_TEST_ASSERT_MSG_EQ (foundTestObject, testObject,
"Could not find a previously named TestObject via GetObject");
foundAlternateTestObject = Names::Find<AlternateTestObject> ("Alternate Test Object");
NS_TEST_ASSERT_MSG_EQ (foundAlternateTestObject, alternateTestObject,
"Could not find a previously named AlternateTestObject via GetObject");
foundAlternateTestObject = Names::Find<AlternateTestObject> ("Test Object");
NS_TEST_ASSERT_MSG_EQ (foundAlternateTestObject, 0,
"Unexpectedly able to GetObject<AlternateTestObject> on a TestObject");
foundTestObject = Names::Find<TestObject> ("Alternate Test Object");
NS_TEST_ASSERT_MSG_EQ (foundTestObject, 0,
"Unexpectedly able to GetObject<TestObject> on an AlternateTestObject");
return false;
}
class NamesTestSuite : public TestSuite
{
public:
NamesTestSuite ();
};
NamesTestSuite::NamesTestSuite ()
: TestSuite ("object-name-service", UNIT)
{
AddTestCase (new BasicAddTestCase);
AddTestCase (new StringContextAddTestCase);
AddTestCase (new FullyQualifiedAddTestCase);
AddTestCase (new RelativeAddTestCase);
AddTestCase (new BasicRenameTestCase);
AddTestCase (new StringContextRenameTestCase);
AddTestCase (new FullyQualifiedRenameTestCase);
AddTestCase (new RelativeRenameTestCase);
AddTestCase (new FindPathTestCase);
AddTestCase (new BasicFindTestCase);
AddTestCase (new StringContextFindTestCase);
AddTestCase (new FullyQualifiedFindTestCase);
AddTestCase (new RelativeFindTestCase);
AddTestCase (new AlternateFindTestCase);
}
NamesTestSuite namesTestSuite;
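// The static object above registers the suite with the framework via the
// TestSuite constructor (which calls TestRunner::AddTestSuite). A runner can
// then walk and execute every registered suite; a minimal sketch, using only
// the TestRunner API introduced in this commit, might look like:
//
//   for (uint32_t i = 0; i < TestRunner::GetNTestSuites (); ++i)
//     {
//       TestSuite *suite = TestRunner::GetTestSuite (i);
//       bool error = suite->Run ();
//     }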

434
src/core/rng-test-suite.cc Normal file
View File

@@ -0,0 +1,434 @@
/* -*- Mode:C++; c-file-style:"gnu"; indent-tabs-mode:nil; -*- */
/*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation;
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*/
#include <math.h>
#include <gsl/gsl_cdf.h>
#include <gsl/gsl_histogram.h>
#include <time.h>
#include <fstream>
#include "test.h"
#include "random-variable.h"
using namespace ns3;
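// Fill array[0 .. n-1] with n equally spaced values from start to end,
// inclusive; these are used as histogram bin edges below. For example,
// FillHistoRangeUniformly (r, 5, 0., 1.) yields {0.0, 0.25, 0.5, 0.75, 1.0}.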
void
FillHistoRangeUniformly (double *array, uint32_t n, double start, double end)
{
double increment = (end - start) / (n - 1.);
double d = start;
for (uint32_t i = 0; i < n; ++i)
{
array[i] = d;
d += increment;
}
}
// ===========================================================================
// Test case for uniform distribution random number generator
// ===========================================================================
class RngUniformTestCase : public TestCase
{
public:
static const uint32_t N_RUNS = 5;
static const uint32_t N_BINS = 50;
static const uint32_t N_MEASUREMENTS = 1000000;
RngUniformTestCase ();
virtual ~RngUniformTestCase ();
double ChiSquaredTest (UniformVariable &u);
private:
virtual bool DoRun (void);
};
RngUniformTestCase::RngUniformTestCase ()
: TestCase ("Uniform Random Number Generator")
{
}
RngUniformTestCase::~RngUniformTestCase ()
{
}
double
RngUniformTestCase::ChiSquaredTest (UniformVariable &u)
{
gsl_histogram * h = gsl_histogram_alloc (N_BINS);
gsl_histogram_set_ranges_uniform (h, 0., 1.);
for (uint32_t i = 0; i < N_MEASUREMENTS; ++i)
{
gsl_histogram_increment (h, u.GetValue ());
}
double tmp[N_BINS];
double expected = ((double)N_MEASUREMENTS / (double)N_BINS);
for (uint32_t i = 0; i < N_BINS; ++i)
{
tmp[i] = gsl_histogram_get (h, i);
tmp[i] -= expected;
tmp[i] *= tmp[i];
tmp[i] /= expected;
}
gsl_histogram_free (h);
double chiSquared = 0;
for (uint32_t i = 0; i < N_BINS; ++i)
{
chiSquared += tmp[i];
}
return chiSquared;
}
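// Note on the acceptance threshold used below: gsl_cdf_chisq_Qinv (0.05, N_BINS)
// is the chi-squared value that a sample drawn from the expected distribution
// would exceed only 5% of the time, so averaging the statistic over N_RUNS
// independent runs and requiring the mean to stay below that bound keeps the
// rate of spurious failures low.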
bool
RngUniformTestCase::DoRun (void)
{
SeedManager::SetSeed (time (0));
double sum = 0.;
double maxStatistic = gsl_cdf_chisq_Qinv (0.05, N_BINS);
for (uint32_t i = 0; i < N_RUNS; ++i)
{
UniformVariable u;
double result = ChiSquaredTest (u);
sum += result;
}
sum /= (double)N_RUNS;
NS_TEST_ASSERT_MSG_LT (sum, maxStatistic, "Chi-squared statistic out of range");
return false;
}
// ===========================================================================
// Test case for normal distribution random number generator
// ===========================================================================
class RngNormalTestCase : public TestCase
{
public:
static const uint32_t N_RUNS = 5;
static const uint32_t N_BINS = 50;
static const uint32_t N_MEASUREMENTS = 1000000;
RngNormalTestCase ();
virtual ~RngNormalTestCase ();
double ChiSquaredTest (NormalVariable &n);
private:
virtual bool DoRun (void);
};
RngNormalTestCase::RngNormalTestCase ()
: TestCase ("Normal Random Number Generator")
{
}
RngNormalTestCase::~RngNormalTestCase ()
{
}
double
RngNormalTestCase::ChiSquaredTest (NormalVariable &n)
{
gsl_histogram * h = gsl_histogram_alloc (N_BINS);
double range[N_BINS + 1];
FillHistoRangeUniformly (range, N_BINS + 1, -4., 4.);
range[0] = -std::numeric_limits<double>::max ();
range[N_BINS] = std::numeric_limits<double>::max ();
gsl_histogram_set_ranges (h, range, N_BINS + 1);
double expected[N_BINS];
double sigma = 1.;
for (uint32_t i = 0; i < N_BINS; ++i)
{
expected[i] = gsl_cdf_gaussian_P (range[i + 1], sigma) - gsl_cdf_gaussian_P (range[i], sigma);
expected[i] *= N_MEASUREMENTS;
}
for (uint32_t i = 0; i < N_MEASUREMENTS; ++i)
{
gsl_histogram_increment (h, n.GetValue ());
}
double tmp[N_BINS];
for (uint32_t i = 0; i < N_BINS; ++i)
{
tmp[i] = gsl_histogram_get (h, i);
tmp[i] -= expected[i];
tmp[i] *= tmp[i];
tmp[i] /= expected[i];
}
gsl_histogram_free (h);
double chiSquared = 0;
for (uint32_t i = 0; i < N_BINS; ++i)
{
chiSquared += tmp[i];
}
return chiSquared;
}
bool
RngNormalTestCase::DoRun (void)
{
SeedManager::SetSeed (time (0));
double sum = 0.;
double maxStatistic = gsl_cdf_chisq_Qinv (0.05, N_BINS);
for (uint32_t i = 0; i < N_RUNS; ++i)
{
NormalVariable n;
double result = ChiSquaredTest (n);
sum += result;
}
sum /= (double)N_RUNS;
NS_TEST_ASSERT_MSG_LT (sum, maxStatistic, "Chi-squared statistic out of range");
return false;
}
// ===========================================================================
// Test case for exponential distribution random number generator
// ===========================================================================
class RngExponentialTestCase : public TestCase
{
public:
static const uint32_t N_RUNS = 5;
static const uint32_t N_BINS = 50;
static const uint32_t N_MEASUREMENTS = 1000000;
RngExponentialTestCase ();
virtual ~RngExponentialTestCase ();
double ChiSquaredTest (ExponentialVariable &n);
private:
virtual bool DoRun (void);
};
RngExponentialTestCase::RngExponentialTestCase ()
: TestCase ("Exponential Random Number Generator")
{
}
RngExponentialTestCase::~RngExponentialTestCase ()
{
}
double
RngExponentialTestCase::ChiSquaredTest (ExponentialVariable &e)
{
gsl_histogram * h = gsl_histogram_alloc (N_BINS);
double range[N_BINS + 1];
FillHistoRangeUniformly (range, N_BINS + 1, 0., 10.);
range[N_BINS] = std::numeric_limits<double>::max ();
gsl_histogram_set_ranges (h, range, N_BINS + 1);
double expected[N_BINS];
double mu = 1.;
for (uint32_t i = 0; i < N_BINS; ++i)
{
expected[i] = gsl_cdf_exponential_P (range[i + 1], mu) - gsl_cdf_exponential_P (range[i], mu);
expected[i] *= N_MEASUREMENTS;
}
for (uint32_t i = 0; i < N_MEASUREMENTS; ++i)
{
gsl_histogram_increment (h, e.GetValue ());
}
double tmp[N_BINS];
for (uint32_t i = 0; i < N_BINS; ++i)
{
tmp[i] = gsl_histogram_get (h, i);
tmp[i] -= expected[i];
tmp[i] *= tmp[i];
tmp[i] /= expected[i];
}
gsl_histogram_free (h);
double chiSquared = 0;
for (uint32_t i = 0; i < N_BINS; ++i)
{
chiSquared += tmp[i];
}
return chiSquared;
}
bool
RngExponentialTestCase::DoRun (void)
{
SeedManager::SetSeed (time (0));
double sum = 0.;
double maxStatistic = gsl_cdf_chisq_Qinv (0.05, N_BINS);
for (uint32_t i = 0; i < N_RUNS; ++i)
{
ExponentialVariable e;
double result = ChiSquaredTest (e);
sum += result;
}
sum /= (double)N_RUNS;
NS_TEST_ASSERT_MSG_LT (sum, maxStatistic, "Chi-squared statistic out of range");
return false;
}
// ===========================================================================
// Test case for pareto distribution random number generator
// ===========================================================================
class RngParetoTestCase : public TestCase
{
public:
static const uint32_t N_RUNS = 5;
static const uint32_t N_BINS = 50;
static const uint32_t N_MEASUREMENTS = 1000000;
RngParetoTestCase ();
virtual ~RngParetoTestCase ();
double ChiSquaredTest (ParetoVariable &p);
private:
virtual bool DoRun (void);
};
RngParetoTestCase::RngParetoTestCase ()
: TestCase ("Pareto Random Number Generator")
{
}
RngParetoTestCase::~RngParetoTestCase ()
{
}
double
RngParetoTestCase::ChiSquaredTest (ParetoVariable &p)
{
gsl_histogram * h = gsl_histogram_alloc (N_BINS);
double range[N_BINS + 1];
FillHistoRangeUniformly (range, N_BINS + 1, 1., 10.);
range[N_BINS] = std::numeric_limits<double>::max ();
gsl_histogram_set_ranges (h, range, N_BINS + 1);
double expected[N_BINS];
double a = 1.5;
double b = 0.33333333;
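// These parameters appear to match the default ParetoVariable (mean 1.0,
// shape a = 1.5), for which the scale is b = mean * (a - 1) / a = 1/3.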
for (uint32_t i = 0; i < N_BINS; ++i)
{
expected[i] = gsl_cdf_pareto_P (range[i + 1], a, b) - gsl_cdf_pareto_P (range[i], a, b);
expected[i] *= N_MEASUREMENTS;
}
for (uint32_t i = 0; i < N_MEASUREMENTS; ++i)
{
gsl_histogram_increment (h, p.GetValue ());
}
double tmp[N_BINS];
for (uint32_t i = 0; i < N_BINS; ++i)
{
tmp[i] = gsl_histogram_get (h, i);
tmp[i] -= expected[i];
tmp[i] *= tmp[i];
tmp[i] /= expected[i];
}
gsl_histogram_free (h);
double chiSquared = 0;
for (uint32_t i = 0; i < N_BINS; ++i)
{
chiSquared += tmp[i];
}
return chiSquared;
}
bool
RngParetoTestCase::DoRun (void)
{
SeedManager::SetSeed (time (0));
double sum = 0.;
double maxStatistic = gsl_cdf_chisq_Qinv (0.05, N_BINS);
for (uint32_t i = 0; i < N_RUNS; ++i)
{
ParetoVariable p;
double result = ChiSquaredTest (p);
sum += result;
}
sum /= (double)N_RUNS;
NS_TEST_ASSERT_MSG_LT (sum, maxStatistic, "Chi-squared statistic out of range");
return false;
}
class RngTestSuite : public TestSuite
{
public:
RngTestSuite ();
};
RngTestSuite::RngTestSuite ()
: TestSuite ("random-number-generators", UNIT)
{
AddTestCase (new RngUniformTestCase);
AddTestCase (new RngNormalTestCase);
AddTestCase (new RngExponentialTestCase);
AddTestCase (new RngParetoTestCase);
}
RngTestSuite rngTestSuite;

View File

@@ -1,6 +1,6 @@
/* -*- Mode:C++; c-file-style:"gnu"; indent-tabs-mode:nil; -*- */
/*
* Copyright (c) 2005 INRIA
* Copyright (c) 2009 University of Washington
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
@@ -14,14 +14,530 @@
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*
* Author: Mathieu Lacage <mathieu.lacage@sophia.inria.fr>
*/
#include "test.h"
#include "abort.h"
#include <math.h>
namespace ns3 {
bool
TestDoubleIsEqual (const double x1, const double x2, const double epsilon)
{
int exponent;
double delta, difference;
//
// Find exponent of largest absolute value
//
{
double max = (fabs (x1) > fabs (x2)) ? x1 : x2;
frexp (max, &exponent);
}
//
// Form a neighborhood of size 2 * delta
//
delta = ldexp (epsilon, exponent);
difference = x1 - x2;
if (difference > delta || difference < -delta)
{
return false;
}
return true;
}
TestCase::TestCase (std::string name)
: m_name (name), m_verbose (false), m_basedir ("invalid"), m_ofs (0), m_error (false)
{
}
TestCase::~TestCase ()
{
}
void
TestCase::ReportStart (void)
{
DoReportStart ();
}
void
TestCase::ReportSuccess (void)
{
DoReportSuccess ();
}
void
TestCase::ReportFailure (
std::string cond,
std::string actual,
std::string limit,
std::string message,
std::string file,
int32_t line)
{
DoReportFailure (cond, actual, limit, message, file, line);
}
void
TestCase::ReportEnd (void)
{
DoReportEnd ();
}
bool
TestCase::Run (void)
{
DoReportStart ();
DoSetup ();
m_error |= DoRun ();
DoTeardown ();
if (m_error == false)
{
DoReportSuccess ();
}
DoReportEnd ();
return m_error;
}
void
TestCase::SetVerbose (bool verbose)
{
m_verbose = verbose;
}
void
TestCase::SetName (std::string name)
{
m_name = name;
}
std::string
TestCase::GetName (void)
{
return m_name;
}
void
TestCase::SetBaseDir (std::string basedir)
{
m_basedir = basedir;
}
std::string
TestCase::GetBaseDir (void)
{
return m_basedir;
}
std::string
TestCase::GetSourceDir (std::string file)
{
std::string::size_type relPathBegin = file.find_first_of ("/");
NS_ABORT_MSG_IF (relPathBegin == std::string::npos, "TestCase::GetSourceDir(): Internal Error");
std::string::size_type relPathEnd = file.find_last_of ("/");
NS_ABORT_MSG_IF (relPathEnd == std::string::npos, "TestCase::GetSourceDir(): Internal Error");
return GetBaseDir () + file.substr (relPathBegin, relPathEnd + 1 - relPathBegin);
}
void
TestCase::SetStream (std::ofstream *ofs)
{
m_ofs = ofs;
}
std::ofstream *
TestCase::GetStream (void)
{
return m_ofs;
}
void
TestCase::SetErrorStatus (bool error)
{
m_error = error;
}
bool
TestCase::GetErrorStatus (void)
{
return m_error;
}
void
TestCase::DoReportStart (void)
{
m_startTime = times (&m_startTimes);
if (m_ofs == 0)
{
return;
}
*m_ofs << " <TestCase>" << std::endl;
*m_ofs << " <CaseName>" << GetName () << "</CaseName>" << std::endl;
}
void
TestCase::DoReportSuccess (void)
{
if (m_ofs == 0)
{
return;
}
*m_ofs << " <CaseResult>PASS</CaseResult>" << std::endl;
}
void
TestCase::DoReportFailure (
std::string cond,
std::string actual,
std::string limit,
std::string message,
std::string file,
int32_t line)
{
if (m_ofs == 0)
{
return;
}
*m_ofs << " <CaseResult>FAIL</CaseResult>" << std::endl;
*m_ofs << " <CaseCondition>" << cond << "</CaseCondition>" << std::endl;
*m_ofs << " <CaseActual>" << actual << "</CaseActual>" << std::endl;
*m_ofs << " <CaseLimit>" << limit << "</CaseLimit>" << std::endl;
*m_ofs << " <CaseMessage>" << message << "</CaseMessage>" << std::endl;
*m_ofs << " <CaseFile>" << file << "</CaseFile>" << std::endl;
*m_ofs << " <CaseLine>" << line << "</CaseLine>" << std::endl;
m_error |= true;
}
void
TestCase::DoReportEnd (void)
{
static long ticksPerSecond = sysconf (_SC_CLK_TCK);
if (m_ofs == 0)
{
return;
}
struct tms endTimes;
clock_t endTime = times (&endTimes);
clock_t elapsed = endTime - m_startTime;
clock_t elapsedUsr = endTimes.tms_utime - m_startTimes.tms_utime;
clock_t elapsedSys = endTimes.tms_stime - m_startTimes.tms_stime;
(*m_ofs).precision (2);
*m_ofs << std::fixed;
*m_ofs << " <CaseTime>" << "real " << static_cast<double> (elapsed) / ticksPerSecond
<< " user " << static_cast<double> (elapsedUsr) / ticksPerSecond
<< " system " << static_cast<double> (elapsedSys) / ticksPerSecond
<< "</CaseTime>" << std::endl;
*m_ofs << " </TestCase>" << std::endl;
}
void
TestCase::DoSetup (void)
{
}
void
TestCase::DoTeardown (void)
{
}
TestSuite::TestSuite (std::string name, TestType type)
: m_name (name), m_verbose (false), m_basedir ("invalid"), m_ofs (0), m_type (type)
{
TestRunner::AddTestSuite (this);
}
TestSuite::~TestSuite ()
{
for (TestCaseVector_t::iterator i = m_tests.begin (); i != m_tests.end (); ++i)
{
delete *i;
*i = 0;
}
m_tests.erase (m_tests.begin (), m_tests.end ());
}
void
TestSuite::ReportStart (void)
{
DoReportStart ();
}
void
TestSuite::ReportSuccess (void)
{
DoReportSuccess ();
}
void
TestSuite::ReportFailure (void)
{
DoReportFailure ();
}
void
TestSuite::ReportEnd (void)
{
DoReportEnd ();
}
bool
TestSuite::Run (void)
{
DoReportStart ();
DoSetup ();
bool error = DoRun ();
DoTeardown ();
if (error == false)
{
DoReportSuccess ();
}
else
{
DoReportFailure ();
}
DoReportEnd ();
return error;
}
uint32_t
TestSuite::AddTestCase (TestCase *testCase)
{
uint32_t index = m_tests.size ();
m_tests.push_back (testCase);
return index;
}
uint32_t
TestSuite::GetNTestCases (void)
{
return m_tests.size ();
}
TestCase *
TestSuite::GetTestCase (uint32_t n)
{
return m_tests[n];
}
TestSuite::TestType
TestSuite::GetTestType (void)
{
return m_type;
}
void
TestSuite::SetVerbose (bool verbose)
{
m_verbose = verbose;
}
void
TestSuite::SetName (std::string name)
{
m_name = name;
}
std::string
TestSuite::GetName (void)
{
return m_name;
}
void
TestSuite::SetBaseDir (std::string basedir)
{
m_basedir = basedir;
}
std::string
TestSuite::GetBaseDir (void)
{
return m_basedir;
}
void
TestSuite::SetStream (std::ofstream *ofs)
{
m_ofs = ofs;
}
void
TestSuite::DoReportStart (void)
{
m_startTime = times (&m_startTimes);
if (m_ofs == 0)
{
return;
}
*m_ofs << "<TestSuite>" << std::endl;
*m_ofs << " <SuiteName>" << GetName () << "</SuiteName>" << std::endl;
}
void
TestSuite::DoReportFailure (void)
{
if (m_ofs == 0)
{
return;
}
*m_ofs << " <SuiteResult>FAIL</SuiteResult>" << std::endl;
}
void
TestSuite::DoReportSuccess (void)
{
if (m_ofs == 0)
{
return;
}
*m_ofs << " <SuiteResult>PASS</SuiteResult>" << std::endl;
}
void
TestSuite::DoReportEnd (void)
{
static long ticksPerSecond = sysconf (_SC_CLK_TCK);
if (m_ofs == 0)
{
return;
}
struct tms endTimes;
clock_t endTime = times (&endTimes);
clock_t elapsed = endTime - m_startTime;
clock_t elapsedUsr = endTimes.tms_utime - m_startTimes.tms_utime;
clock_t elapsedSys = endTimes.tms_stime - m_startTimes.tms_stime;
(*m_ofs).precision (2);
*m_ofs << std::fixed;
*m_ofs << " <SuiteTime>" << "real " << static_cast<double> (elapsed) / ticksPerSecond
<< " user " << static_cast<double> (elapsedUsr) / ticksPerSecond
<< " system " << static_cast<double> (elapsedSys) / ticksPerSecond
<< "</SuiteTime>" << std::endl;
*m_ofs << "</TestSuite>" << std::endl;
}
void
TestSuite::DoSetup (void)
{
}
bool
TestSuite::DoRun (void)
{
for (TestCaseVector_t::iterator i = m_tests.begin (); i != m_tests.end (); ++i)
{
(*i)->SetVerbose (m_verbose);
(*i)->SetBaseDir (m_basedir);
(*i)->SetStream (m_ofs);
bool err = (*i)->Run ();
if (err)
{
return err;
}
}
return false;
}
void
TestSuite::DoTeardown (void)
{
}
class TestRunnerImpl
{
public:
uint32_t AddTestSuite (TestSuite *testSuite);
uint32_t GetNTestSuites (void);
TestSuite *GetTestSuite (uint32_t n);
bool RunTestSuite (uint32_t n);
static TestRunnerImpl *Instance (void);
private:
TestRunnerImpl ();
~TestRunnerImpl ();
typedef std::vector<TestSuite *> TestSuiteVector_t;
TestSuiteVector_t m_suites;
};
TestRunnerImpl::TestRunnerImpl ()
{
}
TestRunnerImpl::~TestRunnerImpl ()
{
}
TestRunnerImpl *
TestRunnerImpl::Instance (void)
{
static TestRunnerImpl runner;
return &runner;
}
uint32_t
TestRunnerImpl::AddTestSuite (TestSuite *testSuite)
{
uint32_t index = m_suites.size ();
m_suites.push_back (testSuite);
return index;
}
uint32_t
TestRunnerImpl::GetNTestSuites (void)
{
return m_suites.size ();
}
TestSuite *
TestRunnerImpl::GetTestSuite (uint32_t n)
{
return m_suites[n];
}
uint32_t
TestRunner::AddTestSuite (TestSuite *testSuite)
{
return TestRunnerImpl::Instance ()->AddTestSuite (testSuite);
}
uint32_t
TestRunner::GetNTestSuites (void)
{
return TestRunnerImpl::Instance ()->GetNTestSuites ();
}
TestSuite *
TestRunner::GetTestSuite (uint32_t n)
{
return TestRunnerImpl::Instance ()->GetTestSuite (n);
}
} // namespace ns3
#ifdef RUN_SELF_TESTS
#include <iostream>
namespace ns3 {
@@ -66,7 +582,6 @@ TestManager::PrintTestNames (std::ostream &os)
}
}
std::ostream &
TestManager::Failure (void)
{

View File

@@ -1,6 +1,6 @@
/* -*- Mode:C++; c-file-style:"gnu"; indent-tabs-mode:nil; -*- */
/*
* Copyright (c) 2005 INRIA
* Copyright (c) 2009 University of Washington
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
@@ -14,18 +14,986 @@
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*
* Author: Mathieu Lacage <mathieu.lacage@sophia.inria.fr>
*/
#ifndef TEST_H
#define TEST_H
#include "abort.h"
#include <list>
#include <iostream>
#include <fstream>
#include <sstream>
#include <string>
#include <utility>
#include <ostream>
#include <vector>
#include <limits>
#include <stdint.h>
#include <sys/times.h>
//
// Note on below macros:
//
// When multiple statements are used in a macro, they should be bound together
// in a loop syntactically, so the macro can appear safely inside if clauses
// or other places that expect a single statement or a statement block. The
// "strange" do while construct is a generally expected best practice for
// defining a robust macro.
//
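// For example (an illustrative sketch, not code from this file), without the
// do/while wrapper a braced macro body followed by the caller's semicolon
// would break an unbraced if/else:
//
//   if (checksEnabled)
//     NS_TEST_ASSERT_MSG_EQ (a, b, "mismatch");   // expands to one statement,
//   else                                          // so the else still binds
//     RecordSkipped ();                           // (RecordSkipped is hypothetical)
//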
/**
* \brief Convenience macro to extract the source directory of the current
* source file.
*
* \see TestCase::GetSourceDir
*/
#define NS_TEST_SOURCEDIR \
TestCase::GetSourceDir (__FILE__)
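//
// For example, inside a test case method one might write (a sketch; the
// result depends on the base directory supplied by the test runner):
//
//   std::string dir = NS_TEST_SOURCEDIR;  // e.g. "<basedir>/src/core/"
//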
// ===========================================================================
// Test for equality (generic version)
// ===========================================================================
/**
* \internal
*/
#define NS_TEST_ASSERT_MSG_EQ_INTERNAL(actual, limit, msg, file, line) \
do { \
if (!((actual) == (limit))) \
{ \
std::ostringstream msgStream; \
msgStream << msg; \
std::ostringstream actualStream; \
actualStream << actual; \
std::ostringstream limitStream; \
limitStream << limit; \
ReportFailure (std::string (#actual) + " (actual) EQ " + std::string (#limit) + " (limit)", \
actualStream.str (), limitStream.str (), msgStream.str (), file, line); \
return true; \
} \
} while (false)
/**
* \brief Test that an actual and expected (limit) value are equal and report
* and abort if not.
*
* Check to see if the expected (limit) value is equal to the actual value found
* in a test case. If the two values are equal nothing happens, but if the
* comparison fails, an error is reported in a consistent way and the execution
* of the current test case is aborted.
*
* The message is interpreted as a stream, for example:
*
* \code
* NS_TEST_ASSERT_MSG_EQ (result, true,
* "cannot open file " << filename << " in test");
* \endcode
*
* is legal.
*
* \param actual Expression for the actual value found during the test.
* \param limit Expression for the expected value of the test.
* \param msg Message that is output if the test does not pass.
*
* \warning Do not use this macro if you are comparing floating point numbers
* (float or double) as it is unlikely to do what you expect. Use
* NS_TEST_ASSERT_MSG_EQ_TOL instead.
*/
#define NS_TEST_ASSERT_MSG_EQ(actual, limit, msg) \
NS_TEST_ASSERT_MSG_EQ_INTERNAL (actual, limit, msg, __FILE__, __LINE__)
/**
* \internal
*
* Implemented without a return statement, which allows use in methods
* (esp. callbacks) returning void.
*/
#define NS_TEST_EXPECT_MSG_EQ_INTERNAL(actual, limit, msg, file, line) \
do { \
if (!((actual) == (limit))) \
{ \
std::ostringstream msgStream; \
msgStream << msg; \
std::ostringstream actualStream; \
actualStream << actual; \
std::ostringstream limitStream; \
limitStream << limit; \
ReportFailure (std::string (#actual) + " (actual) EQ " + std::string (#limit) + " (limit)", \
actualStream.str (), limitStream.str (), msgStream.str (), file, line); \
} \
} while (false)
/**
* \brief Test that an actual and expected (limit) value are equal and report
* if not.
*
* Check to see if the expected (limit) value is equal to the actual value found
* in a test case. If the two values are equal nothing happens, but if the
* comparison fails, an error is reported in a consistent way. EXPECT* macros
* do not return if an error is detected.
*
* The message is interpreted as a stream, for example:
*
* \code
* NS_TEST_EXPECT_MSG_EQ (result, true,
* "cannot open file " << filename << " in test");
* \endcode
*
* is legal.
*
* \param actual Expression for the actual value found during the test.
* \param limit Expression for the expected value of the test.
* \param msg Message that is output if the test does not pass.
*
* \warning Do not use this macro if you are comparing floating point numbers
* (float or double) as it is unlikely to do what you expect. Use
* NS_TEST_EXPECT_MSG_EQ_TOL instead.
*/
#define NS_TEST_EXPECT_MSG_EQ(actual, limit, msg) \
NS_TEST_EXPECT_MSG_EQ_INTERNAL (actual, limit, msg, __FILE__, __LINE__)
// ===========================================================================
// Test for equality with a provided tolerance (use for floating point
// comparisons -- both float and double)
// ===========================================================================
/**
* \internal
*/
#define NS_TEST_ASSERT_MSG_EQ_TOL_INTERNAL(actual, limit, tol, msg, file, line) \
do { \
if ((actual) > (limit) + (tol) || (actual) < (limit) - (tol)) \
{ \
std::ostringstream msgStream; \
msgStream << msg; \
std::ostringstream actualStream; \
actualStream << actual; \
std::ostringstream limitStream; \
limitStream << limit << " +- " << tol; \
std::ostringstream condStream; \
condStream << #actual << " (actual) LT " << #limit << " (limit) + " << #tol << " (tol) AND " << \
#actual << " (actual) GT " << #limit << " (limit) - " << #tol << " (tol)"; \
ReportFailure (condStream.str (), actualStream.str (), limitStream.str (), msgStream.str (), file, line); \
return true; \
} \
} while (false)
/**
* \brief Test that actual and expected (limit) values are equal to plus or minus
* some tolerance and report and abort if not.
*
* Check to see if the expected (limit) value is equal to the actual value found
* in a test case to some tolerance. This is not the same thing as asking if
* two floating point numbers are equal to within some epsilon, but is useful for that
* case. This assertion is geared toward more of a measurement problem. Consider
* measuring a physical rod of some kind that you have ordered. You need to
* determine if it is "good." You won't measure the rod to an arbitrary precision
* of sixteen significant figures, you will measure the rod to determine if its
* length is within the tolerances you provided. For example, 12.00 inches plus
* or minus .005 inch may be just fine.
*
* In ns-3, you might want to measure a signal to noise ratio and check to see
* if the answer is what you expect. If you naively measure (double)1128.93 and
* compare this number with a constant 1128.93 you are almost certainly going to
* have your test fail because of floating point rounding errors. We provide a
* floating point comparison function ns3::TestDoubleIsEqual() but you will
* probably quickly find that is not what you want either. It may turn out to
* be the case that when you measured an snr that printed as 1128.93, what was
* actually measured was something more like 1128.9287653857625442 for example.
* Given that the double epsilon is on the order of 0.0000000000000009, you would
* need to provide sixteen significant figures of expected value for this kind of
* test to pass even with a typical test for floating point "approximate equality."
* That is clearly not required or desired. You really want to be able to provide
* 1128.93 along with a tolerance just like you provided 12 inches +- 0.005 inch
* above.
*
* This assertion is designed for real measurements by taking into account
* measurement tolerances. By doing so it also automatically compensates for
* floating point rounding errors. If you really want to check floating point
* equality down to the numeric_limits<double>::epsilon () range, consider using
* ns3::TestDoubleIsEqual().
*
* The message is interpreted as a stream, for example:
*
* \code
* NS_TEST_ASSERT_MSG_EQ_TOL (snr, 1128.93, 0.005, "wrong snr (" << snr << ") in test");
* \endcode
*
* is legal.
*
* \param actual Expression for the actual value found during the test.
* \param limit Expression for the expected value of the test.
* \param tol Tolerance of the test.
* \param msg Message that is output if the test does not pass.
*/
#define NS_TEST_ASSERT_MSG_EQ_TOL(actual, limit, tol, msg) \
NS_TEST_ASSERT_MSG_EQ_TOL_INTERNAL (actual, limit, tol, msg, __FILE__, __LINE__)
/**
* \internal
*
* Implemented without a return statement, which allows use in methods
* (esp. callbacks) returning void.
*/
#define NS_TEST_EXPECT_MSG_EQ_TOL_INTERNAL(actual, limit, tol, msg, file, line) \
do { \
if ((actual) > (limit) + (tol) || (actual) < (limit) - (tol)) \
{ \
std::ostringstream msgStream; \
msgStream << msg; \
std::ostringstream actualStream; \
actualStream << actual; \
std::ostringstream limitStream; \
limitStream << limit << " +- " << tol; \
std::ostringstream condStream; \
condStream << #actual << " (actual) LT " << #limit << " (limit) + " << #tol << " (tol) AND " << \
#actual << " (actual) GT " << #limit << " (limit) - " << #tol << " (tol)"; \
ReportFailure (condStream.str (), actualStream.str (), limitStream.str (), msgStream.str (), file, line); \
} \
} while (false)
/**
* \brief Test that actual and expected (limit) values are equal to plus or minus
* some tolerance and report if not.
*
* Check to see if the expected (limit) value is equal to the actual value found
* in a test case to some tolerance. This is not the same thing as asking if
* two floating point numbers are equal to within some epsilon, but is useful for that
* case. This assertion is geared toward more of a measurement problem. Consider
* measuring a physical rod of some kind that you have ordered. You need to
* determine if it is "good." You won't measure the rod to an arbitrary precision
* of sixteen significant figures, you will measure the rod to determine if its
* length is within the tolerances you provided. For example, 12.00 inches plus
* or minus .005 inch may be just fine.
*
* In ns-3, you might want to measure a signal to noise ratio and check to see
* if the answer is what you expect. If you naively measure (double)1128.93 and
* compare this number with a constant 1128.93 you are almost certainly going to
* have your test fail because of floating point rounding errors. We provide a
* floating point comparison function ns3::TestDoubleIsEqual() but you will
* probably quickly find that is not what you want either. It may turn out to
* be the case that when you measured an snr that printed as 1128.93, what was
* actually measured was something more like 1128.9287653857625442 for example.
* Given that the double epsilon is on the order of 0.0000000000000009, you would
* need to provide sixteen significant figures of expected value for this kind of
* test to pass even with a typical test for floating point "approximate equality."
* That is clearly not required or desired. You really want to be able to provide
* 1128.93 along with a tolerance just like you provided 12 inches +- 0.005 inch
* above.
*
* This assertion is designed for real measurements by taking into account
* measurement tolerances. By doing so it also automatically compensates for
* floating point rounding errors. If you really want to check floating point
* equality down to the numeric_limits<double>::epsilon () range, consider using
* ns3::TestDoubleIsEqual().
*
* The message is interpreted as a stream, for example:
*
* \code
* NS_TEST_EXPECT_MSG_EQ_TOL (snr, 1128.93, 0.005, "wrong snr (" << snr << ") in test");
* \endcode
*
* is legal.
*
* \param actual Expression for the actual value found during the test.
* \param limit Expression for the expected value of the test.
* \param tol Tolerance of the test.
* \param msg Message that is output if the test does not pass.
*/
#define NS_TEST_EXPECT_MSG_EQ_TOL(actual, limit, tol, msg) \
NS_TEST_EXPECT_MSG_EQ_TOL_INTERNAL (actual, limit, tol, msg, __FILE__, __LINE__)
// ===========================================================================
// Test for inequality
// ===========================================================================
/**
* \internal
*/
#define NS_TEST_ASSERT_MSG_NE_INTERNAL(actual, limit, msg, file, line) \
do { \
if (!((actual) != (limit))) \
{ \
std::ostringstream msgStream; \
msgStream << msg; \
std::ostringstream actualStream; \
actualStream << actual; \
std::ostringstream limitStream; \
limitStream << limit; \
ReportFailure (std::string (#actual) + " (actual) NE " + std::string (#limit) + " (limit)", \
actualStream.str (), limitStream.str (), msgStream.str (), file, line); \
return true; \
} \
} while (false)
/**
* \brief Test that an actual and expected (limit) value are not equal and
* report and abort if they are.
*
* Check to see if the expected (limit) value is not equal to the actual value
* found in a test case. If the two values are not equal nothing happens, but if
* the comparison fails, an error is reported in a consistent way and the
* execution of the current test case is aborted.
*
* The message is interpreted as a stream, for example:
*
* \code
* NS_TEST_ASSERT_MSG_NE (result, false,
* "cannot open file " << filename << " in test");
* \endcode
*
* is legal.
*
* \param actual Expression for the actual value found during the test.
* \param limit Expression for the expected value of the test.
* \param msg Message that is output if the test does not pass.
*
* \warning Do not use this macro if you are comparing floating point numbers
* (float or double). Use NS_TEST_ASSERT_MSG_FLNE instead.
*/
#define NS_TEST_ASSERT_MSG_NE(actual, limit, msg) \
NS_TEST_ASSERT_MSG_NE_INTERNAL (actual, limit, msg, __FILE__, __LINE__)
/**
* \internal
*
* Implemented without a return statement, which allows use in methods
* (callbacks) returning void.
*/
#define NS_TEST_EXPECT_MSG_NE_INTERNAL(actual, limit, msg, file, line) \
do { \
if (!((actual) != (limit))) \
{ \
std::ostringstream msgStream; \
msgStream << msg; \
std::ostringstream actualStream; \
actualStream << actual; \
std::ostringstream limitStream; \
limitStream << limit; \
ReportFailure (std::string (#actual) + " (actual) NE " + std::string (#limit) + " (limit)", \
actualStream.str (), limitStream.str (), msgStream.str (), file, line); \
} \
} while (false)
/**
* \brief Test that an actual and expected (limit) value are not equal and
* report if they are.
*
* Check to see if the expected (limit) value is not equal to the actual value
* found in a test case. If the two values are not equal nothing happens, but
* if the comparison fails, an error is reported in a consistent way. EXPECT*
* macros do not return if an error is detected.
*
* The message is interpreted as a stream, for example:
*
* \code
* NS_TEST_EXPECT_MSG_NE (result, false,
* "cannot open file " << filename << " in test");
* \endcode
*
* is legal.
*
* \param actual Expression for the actual value found during the test.
* \param limit Expression for the expected value of the test.
* \param msg Message that is output if the test does not pass.
*
* \warning Do not use this macro if you are comparing floating point numbers
* (float or double). Use NS_TEST_EXPECT_MSG_FLNE instead.
*/
#define NS_TEST_EXPECT_MSG_NE(actual, limit, msg) \
NS_TEST_EXPECT_MSG_NE_INTERNAL (actual, limit, msg, __FILE__, __LINE__)
// ===========================================================================
// Test for less than relation
// ===========================================================================
/**
* \internal
*/
#define NS_TEST_ASSERT_MSG_LT_INTERNAL(actual, limit, msg, file, line) \
do { \
if (!((actual) < (limit))) \
{ \
std::ostringstream msgStream; \
msgStream << msg; \
std::ostringstream actualStream; \
actualStream << actual; \
std::ostringstream limitStream; \
limitStream << limit; \
ReportFailure (std::string (#actual) + " (actual) LT " + std::string (#limit) + " (limit)", \
actualStream.str (), limitStream.str (), msgStream.str (), file, line); \
return true; \
} \
} while (false)
/**
* \brief Test that an actual value is less than a limit and report and abort
* if not.
*
* Check to see if the actual value found in a test case is less than the
* limit value. If the actual value is less, nothing happens, but if the
* check fails, an error is reported in a consistent way and the execution
* of the current test case is aborted.
*
* The message is interpreted as a stream.
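*
* For example (a hypothetical delay-bound check):
*
* \code
* NS_TEST_ASSERT_MSG_LT (delay, bound, "delay " << delay << " exceeds bound " << bound);
* \endcode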
*
* \param actual Expression for the actual value found during the test.
* \param limit Expression for the limit value of the test.
* \param msg Message that is output if the test does not pass.
*/
#define NS_TEST_ASSERT_MSG_LT(actual, limit, msg) \
NS_TEST_ASSERT_MSG_LT_INTERNAL (actual, limit, msg, __FILE__, __LINE__)
/**
* \internal
*
* Implemented without a return statement, which allows use in methods
* (callbacks) returning void.
*/
#define NS_TEST_EXPECT_MSG_LT_INTERNAL(actual, limit, msg, file, line) \
do { \
if (!((actual) < (limit))) \
{ \
std::ostringstream msgStream; \
msgStream << msg; \
std::ostringstream actualStream; \
actualStream << actual; \
std::ostringstream limitStream; \
limitStream << limit; \
ReportFailure (std::string (#actual) + " (actual) LT " + std::string (#limit) + " (limit)", \
actualStream.str (), limitStream.str (), msgStream.str (), file, line); \
} \
} while (false)
/**
* \brief Test that an actual value is less than a limit and report if not.
*
* Check to see if the actual value found in a test case is less than the
* limit value. If the actual value is less, nothing happens, but if the
* check fails, an error is reported in a consistent way. EXPECT* macros do
* not return if an error is detected.
*
* The message is interpreted as a stream.
*
* \param actual Expression for the actual value found during the test.
* \param limit Expression for the limit value of the test.
* \param msg Message that is output if the test does not pass.
*/
#define NS_TEST_EXPECT_MSG_LT(actual, limit, msg) \
NS_TEST_EXPECT_MSG_LT_INTERNAL (actual, limit, msg, __FILE__, __LINE__)
// ===========================================================================
// Test for greater than relation
// ===========================================================================
/**
* \internal
*/
#define NS_TEST_ASSERT_MSG_GT_INTERNAL(actual, limit, msg, file, line) \
do { \
if (!((actual) > (limit))) \
{ \
std::ostringstream msgStream; \
msgStream << msg; \
std::ostringstream actualStream; \
actualStream << actual; \
std::ostringstream limitStream; \
limitStream << limit; \
ReportFailure (std::string (#actual) + " (actual) GT " + std::string (#limit) + " (limit)", \
actualStream.str (), limitStream.str (), msgStream.str (), file, line); \
return true; \
} \
} while (false)
/**
* \brief Test that an actual value is greater than a limit and report and abort
* if not.
*
* Check to see if the actual value found in a test case is greater than the
* limit value. If the actual value is greater, nothing happens, but if the
* check fails, an error is reported in a consistent way and the execution
* of the current test case is aborted.
*
* The message is interpreted as a stream.
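*
* For example (a hypothetical throughput floor):
*
* \code
* NS_TEST_ASSERT_MSG_GT (throughput, minimum, "throughput " << throughput << " too low");
* \endcode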
*
* \param actual Expression for the actual value found during the test.
* \param limit Expression for the limit value of the test.
* \param msg Message that is output if the test does not pass.
*/
#define NS_TEST_ASSERT_MSG_GT(actual, limit, msg) \
NS_TEST_ASSERT_MSG_GT_INTERNAL (actual, limit, msg, __FILE__, __LINE__)
/**
* \internal
*
* Implemented without a return statement, which allows use in methods
* (callbacks) returning void.
*/
#define NS_TEST_EXPECT_MSG_GT_INTERNAL(actual, limit, msg, file, line) \
do { \
if (!((actual) > (limit))) \
{ \
std::ostringstream msgStream; \
msgStream << msg; \
std::ostringstream actualStream; \
actualStream << actual; \
std::ostringstream limitStream; \
limitStream << limit; \
ReportFailure (std::string (#actual) + " (actual) GT " + std::string (#limit) + " (limit)", \
actualStream.str (), limitStream.str (), msgStream.str (), file, line); \
} \
} while (false)
/**
* \brief Test that an actual value is greater than a limit and report if not.
*
* Check to see if the actual value found in a test case is greater than the
* limit value. If the actual value is greater, nothing happens, but if the
* check fails, an error is reported in a consistent way. EXPECT* macros do
* not return if an error is detected.
*
* The message is interpreted as a stream.
*
* \param actual Expression for the actual value found during the test.
* \param limit Expression for the limit value of the test.
* \param msg Message that is output if the test does not pass.
*/
#define NS_TEST_EXPECT_MSG_GT(actual, limit, msg) \
NS_TEST_EXPECT_MSG_GT_INTERNAL (actual, limit, msg, __FILE__, __LINE__)
namespace ns3 {
/**
* \brief Compare two double precision floating point numbers and declare them
* equal if they are within some epsilon of each other.
*
* Approximate comparison of floating point numbers near equality is trickier
* than one may expect and is well-discussed in the literature. Basic
* strategies revolve around a suggestion by Knuth to compare the floating
* point numbers as binary integers, supplying a maximum difference between
* them. This maximum difference is specified in Units in the Last Place (ulps)
* or a floating point epsilon.
*
* This routine is based on the GNU Scientific Library function gsl_fcmp.
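*
* For example (a sketch illustrating ordinary rounding error being absorbed):
*
* \code
* bool equal = TestDoubleIsEqual (0.1 + 0.2, 0.3);  // true: the difference,
*                                                   // about 5.6e-17, is within
*                                                   // the scaled epsilon
* \endcode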
*
* \param a The first of the double precision floating point numbers to compare
* \param b The second of the double precision floating point numbers to compare
* \param epsilon The tolerance within which the two numbers are declared equal
* \returns Returns true if the doubles are equal to a precision defined by epsilon
*/
bool TestDoubleIsEqual (const double a, const double b, const double epsilon = std::numeric_limits<double>::epsilon ());
/**
* \brief A single test case.
*/
class TestCase
{
public:
TestCase (std::string name);
virtual ~TestCase ();
/**
* \brief Run this test case.
* \returns Boolean sense of "an error has occurred."
*/
bool Run (void);
/**
* \brief Set the verbosity of this test case.
* \param verbose Whether or not to print "stuff."
*/
void SetVerbose (bool verbose);
/**
* \brief Set the name of this test case.
*/
void SetName (std::string name);
/**
* \brief Get the name of this test case.
*/
std::string GetName (void);
/**
* \brief Set the base directory of the ns-3 distribution.
*/
void SetBaseDir (std::string dir);
/**
* \brief Get the base directory of the ns-3 distribution.
*/
std::string GetBaseDir (void);
/**
* \brief Get the source directory of the current source file.
*
* One of the basic models of the test environment is that dedicated test-
* and response vectors live in the same directory as the source file. So it
* is a common operation to figure out what directory a given source file lives
* in.
*
* __FILE__ provides almost all of what we need, but in the gnu toolchain it
* comes out as something like "../src/core/pcap-file-test-suite.cc".
*
* We really don't want to have any dependency on the directory out of which a
* test is run, so we ask that any test-runner give us the base directory of the
* distribution, which is set via TestCase::SetBaseDir(). That string will look
* something like "/home/user/repos/ns-3-allinone/ns-3-dev".
*
* This function stitches the two pieces together and removes the file name to
* return something like "/home/user/repos/ns-3-allinone/ns-3-dev/src/core/".
*
* \param file The current source file name relative to the base directory.
* \returns The current source directory.
*
* \warning Always call this function as GetSourceDir (__FILE__) or use the
* convenience macro NS_TEST_SOURCEDIR.
*/
std::string GetSourceDir (std::string file);
/**
* \brief Set the stream to which status and result messages will be written.
*
* We really don't want to have to pass an ofstream around to every function
* and we especially don't want to have to make our clients plumb an ofstream
* around so we need to save it. Since file streams are not designed to be
* copied or assigned (what does it mean to have duplicate streams to a file)
* we have to stash a pointer to the stream.
*/
void SetStream (std::ofstream *ofs);
/**
* \brief Get the stream to which status and result messages will be written.
*/
std::ofstream *GetStream (void);
/**
* \brief Manually set the error status of this test case.
*/
void SetErrorStatus (bool error);
/**
* \brief Get the error status of this test case.
*/
bool GetErrorStatus (void);
void ReportStart (void);
void ReportSuccess (void);
void ReportFailure (std::string cond, std::string actual, std::string limit, std::string message,
std::string file, int32_t line);
void ReportEnd (void);
protected:
/**
* \internal
* \brief Implementation of reporting method for the start of the test case.
*/
virtual void DoReportStart (void);
/**
* \internal
* \brief Implementation of reporting method for success of the test case.
*/
virtual void DoReportSuccess (void);
/**
* \internal
* \brief Implementation of reporting method for failure of the test case.
*/
virtual void DoReportFailure (std::string cond, std::string actual, std::string limit, std::string message,
std::string file, int32_t line);
/**
* \internal
* \brief Implementation of reporting method for the end of the test case.
*/
virtual void DoReportEnd (void);
/**
* \internal
* \brief Implementation to do any local setup required for this test case.
*/
virtual void DoSetup (void);
/**
* \internal
* \brief Implementation to actually run this test case.
* \returns Boolean sense of "an error has occurred."
*/
virtual bool DoRun (void) = 0;
/**
* \internal
* \brief Implementation to do any local teardown required for this test case.
*/
virtual void DoTeardown (void);
private:
TestCase (TestCase& tc);
TestCase& operator= (TestCase& tc);
std::string m_name;
bool m_verbose;
std::string m_basedir;
std::ofstream *m_ofs;
bool m_error;
clock_t m_startTime;
struct tms m_startTimes;
};
/**
* \brief A suite of tests to run.
*/
class TestSuite
{
public:
enum TestType {
BVT = 1, /**< This test suite implements a Build Verification Test */
UNIT, /**< This test suite implements a Unit Test */
SYSTEM, /**< This test suite implements a System Test */
EXAMPLE, /**< This test suite implements an Example Test */
PERFORMANCE /**< This test suite implements a Performance Test */
};
/**
* \brief Construct a new test suite.
*
* \param name The name of the test suite.
* \param type The TestType of the test suite (defaults to UNIT test).
*/
TestSuite (std::string name, TestType type = UNIT);
/**
* \brief Destroy a test suite.
*/
virtual ~TestSuite ();
/**
* \brief Run this test suite.
*
* \returns Boolean sense of "an error has occurred."
*/
bool Run (void);
/**
* \brief Add an individual test case to this test suite.
*
* \param testCase Pointer to the test case object to be added.
* \returns Integer assigned as identifier of the provided test case.
*/
uint32_t AddTestCase (TestCase *testCase);
/**
* \brief Get the number of test cases that have been added to this test suite.
*
* \returns Number of test cases in the suite.
*/
uint32_t GetNTestCases (void);
/**
* \brief Get the test case at index i.
*/
TestCase *GetTestCase (uint32_t i);
/**
* \brief Get the kind of test this test suite implements.
*
* \returns the TestType of the suite.
*/
TestType GetTestType (void);
/**
* \brief Set the verbosity of this test suite.
* \param verbose Whether or not to print "stuff."
*/
void SetVerbose (bool verbose);
/**
* \brief Set the name of this test suite.
*/
void SetName (std::string name);
/**
* \brief Get the name of this test suite.
*/
std::string GetName (void);
/**
* \brief Set the base directory of the ns-3 distribution.
*/
void SetBaseDir (std::string basedir);
/**
* \brief Get the base directory of the ns-3 distribution.
*/
std::string GetBaseDir (void);
/**
* \brief Set the stream to which status and result messages will be written.
*
* We really don't want to have to pass an ofstream around to every function
* and we especially don't want to have to make our clients plumb an ofstream
* around so we need to save it. Since file streams are not designed to be
* copied or assigned (what does it mean to have duplicate streams to a file)
* we have to stash a pointer to the stream.
*/
void SetStream (std::ofstream *ofs);
void ReportStart (void);
void ReportSuccess (void);
void ReportFailure (void);
void ReportEnd (void);
protected:
/**
* \internal
* \brief Implementation of reporting method for the start of the test suite.
*/
virtual void DoReportStart (void);
/**
* \internal
* \brief Implementation of reporting method for success of the test suite.
*/
virtual void DoReportSuccess (void);
/**
* \internal
* \brief Implementation of reporting method for failure of the test suite.
*/
virtual void DoReportFailure (void);
/**
* \internal
* \brief Implementation of reporting method for the end of the test suite.
*/
virtual void DoReportEnd (void);
/**
 * \internal
 * \brief Implementation to do any local setup required for this test suite.
 */
virtual void DoSetup (void);
/**
 * \internal
 * \brief Implementation to actually run this test suite.
 * \returns Boolean sense of "an error has occurred."
 */
virtual bool DoRun (void);
/**
 * \internal
 * \brief Implementation to do any local tear down required for this test suite.
 */
virtual void DoTeardown (void);
private:
TestSuite (TestSuite& ts);
TestSuite& operator= (TestSuite& ts);
std::string m_name;
bool m_verbose;
std::string m_basedir;
std::ofstream *m_ofs;
TestType m_type;
clock_t m_startTime;
struct tms m_startTimes;
typedef std::vector<TestCase *> TestCaseVector_t;
TestCaseVector_t m_tests;
};
/**
* \brief A runner to execute tests.
*/
class TestRunner
{
public:
  /**
   * \brief Add a test suite to the runner's list of suites.
   */
  static uint32_t AddTestSuite (TestSuite *testSuite);
  /**
   * \brief Get the number of registered test suites.
   */
  static uint32_t GetNTestSuites (void);
  /**
   * \brief Get the test suite at index n.
   */
  static TestSuite *GetTestSuite (uint32_t n);
};
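//
// Illustrative sketch (not part of this header): a driver might walk the
// globally registered suites through the static TestRunner interface and
// run each one.  The function name is hypothetical.
//
inline bool
RunAllTestSuitesSketch (void)
{
  bool error = false;
  for (uint32_t i = 0; i < TestRunner::GetNTestSuites (); ++i)
    {
      TestSuite *suite = TestRunner::GetTestSuite (i);
      error |= suite->Run ();
    }
  return error;
}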
/**
* \brief A simple way to store test vectors (for stimulus or from responses)
*/
template <typename T>
class TestVectors
{
public:
TestVectors ();
virtual ~TestVectors ();
void Reserve (uint32_t reserve);
uint32_t Add (T vector);
uint32_t GetN (void) const;
T Get (uint32_t i) const;
private:
TestVectors (const TestVectors& tv);
TestVectors& operator= (const TestVectors& tv);
bool operator== (const TestVectors& tv) const;
typedef std::vector<T> TestVector_t;
TestVector_t m_vectors;
};
template <typename T>
TestVectors<T>::TestVectors ()
: m_vectors ()
{
}
template <typename T>
void
TestVectors<T>::Reserve (uint32_t reserve)
{
m_vectors.reserve (reserve);
}
template <typename T>
TestVectors<T>::~TestVectors ()
{
}
template <typename T>
uint32_t
TestVectors<T>::Add (T vector)
{
uint32_t index = m_vectors.size ();
m_vectors.push_back (vector);
return index;
}
template <typename T>
uint32_t
TestVectors<T>::GetN (void) const
{
return m_vectors.size ();
}
template <typename T>
T
TestVectors<T>::Get (uint32_t i) const
{
NS_ABORT_MSG_UNLESS (m_vectors.size () > i, "TestVectors::Get(): Bad index");
return m_vectors[i];
}
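//
// Illustrative usage sketch (not part of this header): responses are
// collected into a TestVectors container and read back by index.
//
inline void
TestVectorsUsageSketch (void)
{
  TestVectors<double> responses;
  responses.Reserve (2);
  responses.Add (1.5);                 // stored at index 0
  responses.Add (2.5);                 // stored at index 1
  NS_ABORT_MSG_UNLESS (responses.GetN () == 2, "Sketch expects two vectors");
  double second = responses.Get (1);   // 2.5; Get () aborts on a bad index
  (void) second;
}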
} // namespace ns3
//
// Original ns-3 unit test code for compatibility
//
#ifdef RUN_SELF_TESTS
namespace ns3 {


@@ -79,6 +79,7 @@ def build(bld):
'callback.cc',
'names.cc',
'vector.cc',
'names-test-suite.cc',
]
headers = bld.new_task_gen('ns3header')
@@ -149,3 +150,5 @@ def build(bld):
'system-condition.h',
])
if bld.env['ENABLE_GSL']:
core.source.extend(['rng-test-suite.cc'])


@@ -1,12 +1,5 @@
## -*- Mode: python; py-indent-offset: 4; indent-tabs-mode: nil; coding: utf-8; -*-
def configure(conf):
have_gsl = conf.pkg_check_modules('GSL', 'gsl', mandatory=False)
conf.env['ENABLE_GSL'] = have_gsl
conf.report_optional_feature("GSL", "GNU Scientific Library (GSL)",
conf.env['ENABLE_GSL'],
"GSL not found")
def build(bld):
obj = bld.create_ns3_module('wifi', ['node'])
obj.source = [
@@ -120,7 +113,6 @@ def build(bld):
if bld.env['ENABLE_GSL']:
obj.uselib = 'GSL GSLCBLAS M'
obj.env.append_value('CXXDEFINES', "ENABLE_GSL")
obj = bld.create_ns3_program('wifi-phy-test',
['core', 'simulator', 'mobility', 'node', 'wifi'])


@@ -26,6 +26,8 @@
#include "ns3/packet.h"
#include "ns3/ptr.h"
#include "ns3/object-factory.h"
#include "ns3/pcap-writer.h"
#include "ns3/ascii-writer.h"
namespace ns3 {

2825
src/node/packetbb.cc Normal file

File diff suppressed because it is too large

1745
src/node/packetbb.h Normal file

File diff suppressed because it is too large

3835
src/node/test-packetbb.cc Normal file

File diff suppressed because it is too large

@@ -45,6 +45,7 @@ def build(bld):
'ipv6.cc',
'ipv6-raw-socket-factory.cc',
'ipv6-routing-protocol.cc',
'packetbb.cc',
]
headers = bld.new_task_gen('ns3header')
@@ -91,4 +92,5 @@ def build(bld):
'ipv6.h',
'ipv6-raw-socket-factory.h',
'ipv6-routing-protocol.h',
'packetbb.h',
]


@@ -0,0 +1,381 @@
/* -*- Mode:C++; c-file-style:"gnu"; indent-tabs-mode:nil; -*- */
/*
* Copyright (c) 2009 University of Washington
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation;
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*/
#include "ns3/log.h"
#include "ns3/abort.h"
#include "ns3/test.h"
#include "ns3/pcap-file.h"
#include "ns3/config.h"
#include "ns3/string.h"
#include "ns3/uinteger.h"
#include "ns3/data-rate.h"
#include "ns3/inet-socket-address.h"
#include "ns3/point-to-point-helper.h"
#include "ns3/internet-stack-helper.h"
#include "ns3/ipv4-address-helper.h"
#include "ns3/packet-sink-helper.h"
#include "ns3/tcp-socket-factory.h"
#include "ns3/simulator.h"
using namespace ns3;
NS_LOG_COMPONENT_DEFINE ("Ns3CwndTest");
// ===========================================================================
// This is a simple test to demonstrate how a known good model (a reference
// implementation) may be used to test another model without resorting to
// storing stimulus or response vectors.
//
// Node zero contains the model under test, in this case the ns-3 TCP
// implementation. Node one contains the reference implementation that we
// assume will generate good test vectors for us. In this case, a Linux
// TCP implementation is used to stimulate the ns-3 TCP model with what we
// assume are perfectly good packets. We watch the ns-3 implementation to
// see what it does in the presence of these assumed good stimuli.
//
// The test is arranged as a typical ns-3 script, but we use the trace system
// to peek into the running system and monitor the ns-3 TCP.
//
// The topology is just two nodes communicating over a point-to-point network.
// The point-to-point network is chosen because it is simple and allows us to
// easily generate pcap traces we can use to separately verify that the ns-3
// implementation is responding correctly.  Once the operation is verified, we
// enter a list of responses that capture the response succinctly.
//
// node 0 node 1
// +----------------+ +----------------+
// | ns-3 TCP | | Linux TCP |
// +----------------+ +----------------+
// | 10.1.1.1 | | 10.1.1.2 |
// +----------------+ +----------------+
// | point-to-point | | point-to-point |
// +----------------+ +----------------+
// | |
// +---------------------+
// 5 Mbps, 2 ms
//
// ===========================================================================
//
class SimpleSource : public Application
{
public:
SimpleSource ();
virtual ~SimpleSource();
void Setup (Ptr<Socket> socket, Address address, uint32_t packetSize, uint32_t nPackets, DataRate dataRate);
private:
virtual void StartApplication (void);
virtual void StopApplication (void);
void ScheduleTx (void);
void SendPacket (void);
Ptr<Socket> m_socket;
Address m_peer;
uint32_t m_packetSize;
uint32_t m_nPackets;
DataRate m_dataRate;
EventId m_sendEvent;
bool m_running;
uint32_t m_packetsSent;
};
SimpleSource::SimpleSource ()
: m_socket (0),
m_peer (),
m_packetSize (0),
m_nPackets (0),
m_dataRate (0),
m_sendEvent (),
m_running (false),
m_packetsSent (0)
{
}
SimpleSource::~SimpleSource()
{
m_socket = 0;
}
void
SimpleSource::Setup (Ptr<Socket> socket, Address address, uint32_t packetSize, uint32_t nPackets, DataRate dataRate)
{
m_socket = socket;
m_peer = address;
m_packetSize = packetSize;
m_nPackets = nPackets;
m_dataRate = dataRate;
}
void
SimpleSource::StartApplication (void)
{
m_running = true;
m_packetsSent = 0;
m_socket->Bind ();
m_socket->Connect (m_peer);
SendPacket ();
}
void
SimpleSource::StopApplication (void)
{
m_running = false;
if (m_sendEvent.IsRunning ())
{
Simulator::Cancel (m_sendEvent);
}
if (m_socket)
{
m_socket->Close ();
}
}
void
SimpleSource::SendPacket (void)
{
Ptr<Packet> packet = Create<Packet> (m_packetSize);
m_socket->Send (packet);
if (++m_packetsSent < m_nPackets)
{
ScheduleTx ();
}
}
void
SimpleSource::ScheduleTx (void)
{
if (m_running)
{
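// The next transmission is scheduled one packet-serialization time in
// the future.  For the parameters used below (1040-byte packets at
// 5 Mbps), tNext = 1040 * 8 / 5e6 s, or about 1.66 ms.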
Time tNext (Seconds (m_packetSize * 8 / static_cast<double> (m_dataRate.GetBitRate ())));
m_sendEvent = Simulator::Schedule (tNext, &SimpleSource::SendPacket, this);
}
}
class Ns3TcpCwndTestCase : public TestCase
{
public:
Ns3TcpCwndTestCase ();
virtual ~Ns3TcpCwndTestCase ();
private:
virtual bool DoRun (void);
bool m_writeResults;
class CwndEvent {
public:
uint32_t m_oldCwnd;
uint32_t m_newCwnd;
};
TestVectors<CwndEvent> m_responses;
void CwndChange (uint32_t oldCwnd, uint32_t newCwnd);
};
Ns3TcpCwndTestCase::Ns3TcpCwndTestCase ()
: TestCase ("Check to see that the ns-3 TCP congestion window works as expected against liblinux2.6.26.so"),
m_writeResults (false)
{
}
Ns3TcpCwndTestCase::~Ns3TcpCwndTestCase ()
{
}
void
Ns3TcpCwndTestCase::CwndChange (uint32_t oldCwnd, uint32_t newCwnd)
{
CwndEvent event;
event.m_oldCwnd = oldCwnd;
event.m_newCwnd = newCwnd;
m_responses.Add (event);
}
bool
Ns3TcpCwndTestCase::DoRun (void)
{
//
// Just create two nodes. One (node zero) will be the node with the TCP
// under test which is the ns-3 TCP implementation. The other node (node
// one) will be the node with the reference implementation we use to drive
// the tests.
//
NodeContainer nodes;
nodes.Create (2);
//
// For this test we'll use a point-to-point net device. It's not as simple
// as a simple-net-device, but it provides nice places to hook trace events
// so we can see what's moving between our nodes.
//
PointToPointHelper pointToPoint;
pointToPoint.SetDeviceAttribute ("DataRate", StringValue ("5Mbps"));
pointToPoint.SetChannelAttribute ("Delay", StringValue ("2ms"));
//
// Install the point-to-point devices on both nodes and connect them up.
//
NetDeviceContainer devices;
devices = pointToPoint.Install (nodes);
//
// Install two variants of the internet stack. The first, on node zero
// uses the TCP under test, which is the default ns-3 TCP implementation.
//
InternetStackHelper stack;
stack.Install (nodes.Get (0));
//
// The other node, node one, is going to be set up to use a Linux TCP
// implementation that we consider a known good TCP.
//
std::string nscStack = "liblinux2.6.26.so";
stack.SetTcp ("ns3::NscTcpL4Protocol", "Library", StringValue("liblinux2.6.26.so"));
stack.Install (nodes.Get (1));
//
// Assign the address 10.1.1.1 to the TCP implementation under test (index
// zero) and 10.1.1.2 to the reference implementation (index one).
//
Ipv4AddressHelper address;
address.SetBase ("10.1.1.0", "255.255.255.252");
Ipv4InterfaceContainer interfaces = address.Assign (devices);
//
// We need a place to send our TCP data on the node with the reference TCP
// implementation. We aren't really concerned about what happens there, so
// just create a sink.
//
uint16_t sinkPort = 8080;
Address sinkAddress (InetSocketAddress(interfaces.GetAddress (1), sinkPort));
PacketSinkHelper packetSinkHelper ("ns3::TcpSocketFactory", InetSocketAddress (Ipv4Address::GetAny (), sinkPort));
ApplicationContainer sinkApps = packetSinkHelper.Install (nodes.Get (1));
sinkApps.Start (Seconds (0.));
sinkApps.Stop (Seconds (1.1));
//
// We want to look at changes in the ns-3 TCP congestion window. The
// congestion window is flow control imposed by the sender, so we need
// to crank up a flow from the ns-3 TCP node to the NSC TCP node and hook the
// CongestionWindow attribute on the socket. Normally one would use an on-off
// application to generate a flow, but this has a couple of problems. First,
// the socket of the on-off application is not created until Application Start
// time, so we wouldn't be able to hook the socket now at configuration time.
// Second, even if we could arrange a call after start time, the socket is not
// public.
//
// So, we can cook up a simple version of the on-off application that does what
// we want. On the plus side we don't need all of the complexity of the on-off
// application. On the minus side, we don't have a helper, so we have to get
// a little more involved in the details, but this is trivial.
//
// So first, we create a socket and do the trace connect on it; then we pass this
// socket into the constructor of our simple application which we then install
// in the node with the ns-3 TCP.
//
Ptr<Socket> ns3TcpSocket = Socket::CreateSocket (nodes.Get (0), TcpSocketFactory::GetTypeId ());
ns3TcpSocket->TraceConnectWithoutContext ("CongestionWindow", MakeCallback (&Ns3TcpCwndTestCase::CwndChange, this));
Ptr<SimpleSource> app = CreateObject<SimpleSource> ();
app->Setup (ns3TcpSocket, sinkAddress, 1040, 10, DataRate ("5Mbps"));
nodes.Get (0)->AddApplication (app);
app->Start (Seconds (1.));
app->Stop (Seconds (1.1));
//
// The idea here is that someone will look very closely at all of the
// communications between the reference TCP and the TCP under test in this
// simulation and determine that all of the responses are correct. We expect
// that this means generating a pcap trace file from the point-to-point link
// and examining the packets closely using tcpdump, wireshark or some such
// program. So we provide the ability to generate a pcap trace of the
// test execution for your perusal.
//
// Once the validation test is determined to be running exactly as expected,
// the set of congestion window changes is collected and hard coded into the
// test results which will then be checked during the actual execution of the
// test.
//
if (m_writeResults)
{
PointToPointHelper::EnablePcapAll ("tcp-cwnd");
}
Simulator::Stop (Seconds(2));
Simulator::Run ();
Simulator::Destroy ();
//
// As new acks are received by the TCP under test, the congestion window
// should be opened up by one segment (MSS bytes) each time. This should
// trigger a congestion window change event which we hooked and saved above.
// We should now be able to look through the saved response vectors and follow
// the congestion window as it opens up when the ns-3 TCP under test
// transmits its bits
//
// From inspecting the results, we know that we should see N_EVENTS congestion
// window change events. The window should expand N_EVENTS - 1 times (each
// time by MSS bytes) until it gets to its largest value. Then the application
// sending stops and the window should be slammed shut, with the last event
// reflecting the change from LARGEST_CWND back to MSS
//
const uint32_t MSS = 536;
const uint32_t N_EVENTS = 21;
const uint32_t LARGEST_CWND = MSS * N_EVENTS;
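// With MSS = 536 and N_EVENTS = 21, LARGEST_CWND = 536 * 21 = 11256 bytes.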
CwndEvent event;
NS_TEST_ASSERT_MSG_EQ (m_responses.GetN (), N_EVENTS, "Unexpected number of cwnd change events");
for (uint32_t i = 0, from = 536, to = 1072; i < N_EVENTS - 1; ++i, from += 536, to += 536)
{
event = m_responses.Get (i);
NS_TEST_ASSERT_MSG_EQ (event.m_oldCwnd, from, "Wrong old cwnd value in cwnd change event " << i);
NS_TEST_ASSERT_MSG_EQ (event.m_newCwnd, to, "Wrong new cwnd value in cwnd change event " << i);
}
event = m_responses.Get (N_EVENTS - 1);
NS_TEST_ASSERT_MSG_EQ (event.m_oldCwnd, LARGEST_CWND, "Wrong old cwnd value in cwnd change event " << N_EVENTS - 1);
NS_TEST_ASSERT_MSG_EQ (event.m_newCwnd, MSS, "Wrong new cwnd value in cwnd change event " << N_EVENTS - 1);
return GetErrorStatus ();
}
class Ns3TcpCwndTestSuite : public TestSuite
{
public:
Ns3TcpCwndTestSuite ();
};
Ns3TcpCwndTestSuite::Ns3TcpCwndTestSuite ()
: TestSuite ("ns3-tcp-cwnd", SYSTEM)
{
AddTestCase (new Ns3TcpCwndTestCase);
}
Ns3TcpCwndTestSuite ns3TcpCwndTestSuite;

Binary file not shown.


@@ -0,0 +1,307 @@
/* -*- Mode:C++; c-file-style:"gnu"; indent-tabs-mode:nil; -*- */
/*
* Copyright (c) 2009 University of Washington
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation;
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*/
#include "ns3/log.h"
#include "ns3/abort.h"
#include "ns3/test.h"
#include "ns3/pcap-file.h"
#include "ns3/config.h"
#include "ns3/string.h"
#include "ns3/uinteger.h"
#include "ns3/inet-socket-address.h"
#include "ns3/point-to-point-helper.h"
#include "ns3/internet-stack-helper.h"
#include "ns3/ipv4-address-helper.h"
#include "ns3/ipv4-header.h"
#include "ns3/packet-sink-helper.h"
#include "ns3/on-off-helper.h"
#include "ns3/simulator.h"
using namespace ns3;
NS_LOG_COMPONENT_DEFINE ("Ns3TcpInteropTest");
const bool WRITE_VECTORS = false; // hack hack hack
const uint32_t PCAP_LINK_TYPE = 1187373553; // Some large random number -- we use it to verify that the data was written by this program
const uint32_t PCAP_SNAPLEN = 64; // Don't bother to save much data
// ===========================================================================
// This is a simple test to demonstrate how a known good model (a reference
// implementation) may be used to test another model in a relatively simple
// way.
//
// Node zero contains the model under test, in this case the ns-3 TCP
// implementation. Node one contains the reference implementation that we
// assume will generate good test vectors for us. In this case, a Linux
// TCP implementation is used to stimulate the ns-3 TCP model with what we
// assume are perfectly good packets. We watch the ns-3 implementation to
// see what it does in the presence of these assumed good stimuli.
//
// The test is arranged as a typical ns-3 script, but we use the trace system
// to peek into the running system and monitor the ns-3 TCP.
//
// The topology is just two nodes communicating over a point-to-point network.
// The point-to-point network is chosen because it is simple and allows us to
// easily generate pcap traces we can use to separately verify that the ns-3
// implementation is responding correctly.  Once the operation is verified, we
// capture a set of response vectors that are then checked in the test to
// ensure that the ns-3 TCP continues to respond correctly over time.
//
// node 0 node 1
// +----------------+ +----------------+
// | ns-3 TCP | | Linux TCP |
// +----------------+ +----------------+
// | 10.1.1.1 | | 10.1.1.2 |
// +----------------+ +----------------+
// | point-to-point | | point-to-point |
// +----------------+ +----------------+
// | |
// +---------------------+
// 5 Mbps, 2 ms
//
// ===========================================================================
class Ns3TcpInteroperabilityTestCase : public TestCase
{
public:
Ns3TcpInteroperabilityTestCase ();
virtual ~Ns3TcpInteroperabilityTestCase ();
private:
virtual void DoSetup (void);
virtual bool DoRun (void);
virtual void DoTeardown (void);
void Ipv4L3Tx (std::string context, Ptr<const Packet> packet, uint32_t interfaceIndex);
std::string m_pcapFilename;
PcapFile m_pcapFile;
bool m_writeVectors;
};
Ns3TcpInteroperabilityTestCase::Ns3TcpInteroperabilityTestCase ()
: TestCase ("Check to see that the ns-3 TCP can work with liblinux2.6.26.so"), m_writeVectors(WRITE_VECTORS)
{
}
Ns3TcpInteroperabilityTestCase::~Ns3TcpInteroperabilityTestCase ()
{
}
void
Ns3TcpInteroperabilityTestCase::DoSetup (void)
{
//
// We expect there to be a file called "ns3tcp-interop-response-vectors.pcap"
// in the source directory of this file.
//
m_pcapFilename = NS_TEST_SOURCEDIR + "ns3tcp-interop-response-vectors.pcap";
if (m_writeVectors)
{
m_pcapFile.Open (m_pcapFilename, "w");
m_pcapFile.Init(PCAP_LINK_TYPE, PCAP_SNAPLEN);
}
else
{
m_pcapFile.Open (m_pcapFilename, "r");
NS_ABORT_MSG_UNLESS (m_pcapFile.GetDataLinkType () == PCAP_LINK_TYPE, "Wrong response vectors in directory");
}
}
void
Ns3TcpInteroperabilityTestCase::DoTeardown (void)
{
m_pcapFile.Close ();
}
void
Ns3TcpInteroperabilityTestCase::Ipv4L3Tx (std::string context, Ptr<const Packet> packet, uint32_t interfaceIndex)
{
//
// We're not testing IP so remove and toss the header. In order to do this,
// though, we need to copy the packet since we have a const version.
//
Ptr<Packet> p = packet->Copy ();
Ipv4Header ipHeader;
p->RemoveHeader (ipHeader);
//
// What is left is the TCP header and any data that may be sent. We aren't
// sending any TCP data, so we expect what remains is only TCP header, which
// is a small thing to save.
//
if (m_writeVectors)
{
//
// Save the TCP under test response for later testing.
//
Time tNow = Simulator::Now ();
int64_t tMicroSeconds = tNow.GetMicroSeconds ();
m_pcapFile.Write (uint32_t (tMicroSeconds / 1000000),
uint32_t (tMicroSeconds % 1000000),
p->PeekData(),
p->GetSize ());
}
else
{
//
// Read the TCP under test expected response from the expected vector
// file and see if it still does the right thing.
//
uint8_t expected[PCAP_SNAPLEN];
uint32_t tsSec, tsUsec, inclLen, origLen, readLen;
m_pcapFile.Read (expected, sizeof(expected), tsSec, tsUsec, inclLen, origLen, readLen);
uint8_t const *actual = p->PeekData();
uint32_t result = memcmp(actual, expected, readLen);
//
// Avoid streams of errors -- only report the first.
//
if (GetErrorStatus () == false)
{
NS_TEST_EXPECT_MSG_EQ (result, 0, "Expected data comparison error");
}
}
}
bool
Ns3TcpInteroperabilityTestCase::DoRun (void)
{
//
// Just create two nodes. One (node zero) will be the node with the TCP
// under test which is the ns-3 TCP implementation. The other node (node
// one) will be the node with the reference implementation we use to drive
// the tests.
//
NodeContainer nodes;
nodes.Create (2);
//
// For this test we'll use a point-to-point net device. It's not as simple
// as a simple-net-device, but it provides nice places to hook trace events
// so we can see what's moving between our nodes.
//
PointToPointHelper pointToPoint;
pointToPoint.SetDeviceAttribute ("DataRate", StringValue ("5Mbps"));
pointToPoint.SetChannelAttribute ("Delay", StringValue ("2ms"));
//
// Install the point-to-point devices on both nodes and connect them up.
//
NetDeviceContainer devices;
devices = pointToPoint.Install (nodes);
//
// Install two variants of the internet stack. The first, on node zero
// uses the TCP under test, which is the default ns-3 TCP implementation.
//
InternetStackHelper stack;
stack.Install (nodes.Get (0));
//
// The other node, node one, is going to be set up to use a Linux TCP
// implementation that we consider a known good TCP.
//
std::string nscStack = "liblinux2.6.26.so";
stack.SetTcp ("ns3::NscTcpL4Protocol", "Library", StringValue("liblinux2.6.26.so"));
stack.Install (nodes.Get (1));
//
// Assign the address 10.1.1.1 to the TCP implementation under test (index
// zero) and 10.1.1.2 to the reference implementation (index one).
//
Ipv4AddressHelper address;
address.SetBase ("10.1.1.0", "255.255.255.252");
Ipv4InterfaceContainer interfaces = address.Assign (devices);
//
// We need a place for the TCP data to go on the node with the TCP under
// test, so just create a sink on node zero.
//
uint16_t sinkPort = 8080;
Address sinkAddress (InetSocketAddress(interfaces.GetAddress (0), sinkPort));
PacketSinkHelper packetSinkHelper ("ns3::TcpSocketFactory", InetSocketAddress (Ipv4Address::GetAny (), sinkPort));
ApplicationContainer sinkApps = packetSinkHelper.Install (nodes.Get (0));
sinkApps.Start (Seconds (0.));
//
// We need something to shove data down the pipe, so we create an on-off
// application on the source node with the reference TCP implementation.
// The default behavior is to send for one second, then go quiet for one
// second, and repeat.
//
OnOffHelper onOffHelper ("ns3::TcpSocketFactory", sinkAddress);
onOffHelper.SetAttribute ("MaxBytes", UintegerValue (100000));
ApplicationContainer sourceApps = onOffHelper.Install(nodes.Get(1));
sourceApps.Start (Seconds (1.));
sourceApps.Stop (Seconds (10.));
//
// There are currently a limited number of trace hooks in the ns-3 TCP code.
// Rather than editing TCP to insert a bunch of trace hooks, we can just
// intercept the packets at the IPv4 layer. See internet-stack-helper.cc
// for complete description of the trace hooks. We're interested in the
// responses of the TCP under test, which implies we need to hook the node
// zero Ipv4 layer three transmit trace source. We'll then get all of the
// responses we need
//
Config::Connect ("/NodeList/0/$ns3::Ipv4L3Protocol/Tx",
MakeCallback (&Ns3TcpInteroperabilityTestCase::Ipv4L3Tx, this));
//
// The idea here is that someone will look very closely at all of the
// communications between the reference TCP and the TCP under test in this
// simulation and determine that all of the responses are correct. We expect
// that this means generating a pcap trace file from the point-to-point link
// and examining the packets closely using tcpdump, wireshark or some such
// program. So we provide the ability to generate a pcap trace of the
// test execution for your perusal.
//
// Once the validation test is determined to be running exactly as expected,
// we allow you to generate a file that contains the response vectors that
// will be checked during the actual execution of the test.
//
if (m_writeVectors)
{
PointToPointHelper::EnablePcapAll ("tcp-interop");
}
Simulator::Stop (Seconds(20));
Simulator::Run ();
Simulator::Destroy ();
return GetErrorStatus ();
}
class Ns3TcpInteroperabilityTestSuite : public TestSuite
{
public:
Ns3TcpInteroperabilityTestSuite ();
};
Ns3TcpInteroperabilityTestSuite::Ns3TcpInteroperabilityTestSuite ()
: TestSuite ("ns3-tcp-interoperability", SYSTEM)
{
AddTestCase (new Ns3TcpInteroperabilityTestCase);
}
Ns3TcpInteroperabilityTestSuite ns3TcpInteroperabilityTestSuite;

8
src/test/ns3tcp/ns3tcp.h Normal file

@@ -0,0 +1,8 @@
/**
* \ingroup tests
* \defgroup Ns3TcpTests ns-3 TCP Implementation Tests
*
* \section Ns3TcpTestsOverview ns-3 Tcp Implementation Tests Overview
*
 * ns-3 has a TCP implementation and we test it a little.
*/

1
src/test/ns3tcp/waf vendored Normal file

@@ -0,0 +1 @@
exec "`dirname "$0"`"/../../../waf "$@"

16
src/test/ns3tcp/wscript Normal file

@@ -0,0 +1,16 @@
## -*- Mode: python; py-indent-offset: 4; indent-tabs-mode: nil; coding: utf-8; -*-
def configure(conf):
pass
def build(bld):
ns3tcp = bld.create_ns3_module('ns3tcp')
ns3tcp.source = [
'ns3tcp-interop-test-suite.cc',
'ns3tcp-cwnd-test-suite.cc',
]
headers = bld.new_task_gen('ns3header')
headers.module = 'ns3tcp'
headers.source = [
'ns3tcp.h',
]


@@ -0,0 +1,8 @@
/**
* \ingroup tests
* \defgroup Ns3WifiTests ns-3 Wifi Implementation Tests
*
* \section Ns3WifiTestsOverview ns-3 Wifi Implementation Tests Overview
*
 * ns-3 has a Wifi implementation and we test it a little.
*/


@@ -0,0 +1,579 @@
/* -*- Mode:C++; c-file-style:"gnu"; indent-tabs-mode:nil; -*- */
/*
* Copyright (c) 2009 The Boeing Company
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation;
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*/
#include "ns3/log.h"
#include "ns3/abort.h"
#include "ns3/test.h"
#include "ns3/pcap-file.h"
#include "ns3/config.h"
#include "ns3/string.h"
#include "ns3/uinteger.h"
#include "ns3/data-rate.h"
#include "ns3/inet-socket-address.h"
#include "ns3/internet-stack-helper.h"
#include "ns3/ipv4-address-helper.h"
#include "ns3/tcp-socket-factory.h"
#include "ns3/yans-wifi-helper.h"
#include "ns3/propagation-loss-model.h"
#include "ns3/propagation-delay-model.h"
#include "ns3/yans-wifi-channel.h"
#include "ns3/yans-wifi-phy.h"
#include "ns3/wifi-net-device.h"
#include "ns3/mobility-helper.h"
#include "ns3/constant-position-mobility-model.h"
#include "ns3/nqos-wifi-mac-helper.h"
#include "ns3/simulator.h"
using namespace ns3;
NS_LOG_COMPONENT_DEFINE ("Ns3WifiPropagationLossModelsTest");
// ===========================================================================
// This is a simple test to validate propagation loss models of ns-3 wifi.
//
// The basic configuration is,
//
// node 0 node 1
// +------------+ +------------+
// | ns-3 UDP | | ns-3 UDP |
// +------------+ +------------+
// | 10.1.1.1 | | 10.1.1.2 |
// +------------+ +------------+
// | wifi | | wifi |
// +------------+ +------------+
// | |
// (((*))) (((*)))
//
// |<-- distance -->|
//
//
// We vary the propagation loss model and the distance between the nodes,
// looking at the received power and SNR for a packet sent between them. We
// compare the values we find with those from an "authoritative source."
// ===========================================================================
//
class Ns3FriisPropagationLossModelTestCase : public TestCase
{
public:
Ns3FriisPropagationLossModelTestCase ();
virtual ~Ns3FriisPropagationLossModelTestCase ();
private:
virtual bool DoRun (void);
void SendPacket (uint32_t i, Ptr<Socket> socket, uint32_t size);
void Receive (Ptr<Packet> p, double snr, WifiMode mode, enum WifiPreamble preamble);
uint32_t m_gotCallbacks;
Ptr<Node> m_receiver;
uint32_t m_vectorIndex;
typedef struct {
Vector m_position;
double m_snr;
double m_tolerance;
} TestVector;
TestVectors<TestVector> m_testVectors;
};
Ns3FriisPropagationLossModelTestCase::Ns3FriisPropagationLossModelTestCase ()
: TestCase ("Check to see that the ns-3 Friis propagation loss model provides correct SNR values"),
    m_gotCallbacks (0), m_receiver (0), m_vectorIndex (0), m_testVectors ()
{
}
Ns3FriisPropagationLossModelTestCase::~Ns3FriisPropagationLossModelTestCase ()
{
}
void
Ns3FriisPropagationLossModelTestCase::Receive (Ptr<Packet> p, double snr, WifiMode mode, enum WifiPreamble preamble)
{
TestVector testVector = m_testVectors.Get (m_vectorIndex++);
if (GetErrorStatus () == false)
{
NS_TEST_EXPECT_MSG_EQ_TOL (snr, testVector.m_snr, testVector.m_tolerance, "Got unexpected SNR value");
}
++m_gotCallbacks;
}
void
Ns3FriisPropagationLossModelTestCase::SendPacket (uint32_t i, Ptr<Socket> socket, uint32_t size)
{
TestVector testVector = m_testVectors.Get (i);
m_receiver->GetObject<ConstantPositionMobilityModel> ()->SetPosition (testVector.m_position);
socket->Send (Create<Packet> (size));
}
bool
Ns3FriisPropagationLossModelTestCase::DoRun (void)
{
//
// We want to test the propagation loss model calculations at a few chosen
// distances and compare the results to those we have manually calculated
// according to the model documentation. The following "TestVector" objects
// will drive the test.
//
// For example, the first test vector provides a position to which the
// receiver node will be moved prior to transmitting a packet. It also
// specifies that when the packet is received, the SNR should be found
// to be 1128.93 +- 0.005 in the ReceiveOkCallback.
//
TestVector testVector;
testVector.m_position = Vector (100, 0, 0);
testVector.m_snr = 1128.93;
testVector.m_tolerance = 0.005;
m_testVectors.Add (testVector);
testVector.m_position = Vector (500, 0, 0);
testVector.m_snr = 45.1571;
testVector.m_tolerance = 0.00005;
m_testVectors.Add (testVector);
testVector.m_position = Vector (1000, 0, 0);
testVector.m_snr = 11.2893;
testVector.m_tolerance = 0.00005;
m_testVectors.Add (testVector);
testVector.m_position = Vector (2000, 0, 0);
testVector.m_snr = 2.82232;
testVector.m_tolerance = 0.000005;
m_testVectors.Add (testVector);
//
// Disable fragmentation for frames shorter than 2200 bytes.
//
Config::SetDefault ("ns3::WifiRemoteStationManager::FragmentationThreshold", StringValue ("2200"));
//
// Turn off RTS/CTS for frames shorter than 2200 bytes.
//
Config::SetDefault ("ns3::WifiRemoteStationManager::RtsCtsThreshold", StringValue ("2200"));
//
// Create the two nodes in the system. Data will be sent from node zero to
// node one.
//
NodeContainer nodes;
nodes.Create (2);
//
// Save a Ptr<Node> to the receiver node so we can get at its mobility model
// and change its position (distance) later.
//
m_receiver = nodes.Get (1);
//
// Use the regular WifiHelper to orchestrate hooking the various pieces of
// the wifi system together. Tell it that we want to use an 802.11b phy.
//
WifiHelper wifi;
wifi.SetStandard (WIFI_PHY_STANDARD_80211b);
//
// Create a physical layer helper and tell it we don't want any receiver
// gain.
//
YansWifiPhyHelper wifiPhy = YansWifiPhyHelper::Default ();
wifiPhy.Set ("RxGain", DoubleValue (0) );
//
// Create the channel helper and tell it that signals will be moving at the
// speed of light.
//
YansWifiChannelHelper wifiChannel;
wifiChannel.SetPropagationDelay ("ns3::ConstantSpeedPropagationDelayModel");
//
// The propagation loss model is one of our independent variables in the
// test.
//
wifiChannel.AddPropagationLoss ("ns3::FriisPropagationLossModel",
"Lambda", DoubleValue (0.125),
"SystemLoss", DoubleValue (1.));
//
// Create a yans wifi channel and tell the phy helper to use it.
//
wifiPhy.SetChannel (wifiChannel.Create ());
//
// Create a non-quality-of-service mac layer and set it to ad-hoc mode.
//
NqosWifiMacHelper wifiMac = NqosWifiMacHelper::Default ();
wifi.SetRemoteStationManager ("ns3::ConstantRateWifiManager",
"DataMode", StringValue ("wifib-1mbs"),
"ControlMode",StringValue ("wifib-1mbs"));
wifiMac.SetType ("ns3::AdhocWifiMac");
//
// Create the wifi devices.
//
NetDeviceContainer devices = wifi.Install (wifiPhy, wifiMac, nodes);
//
// We need to reach down into the receiving wifi device's phy layer and hook
// the appropriate trace event to get the snr. This isn't one of the usual
// events so it takes some poking around to get there from here.
//
Ptr<YansWifiPhy> phy = devices.Get (1)->GetObject<WifiNetDevice> ()->GetPhy ()->GetObject<YansWifiPhy> ();
phy->SetReceiveOkCallback (MakeCallback (&Ns3FriisPropagationLossModelTestCase::Receive, this));
//
// Add mobility models to both nodes. This is used to place the two nodes a
// fixed distance apart. Node zero (the sender) is always at the origin and
// Node one (the receiver) is moved along the x-axis to a given distance from
// the origin. This distance is the second independent variable in our test.
//
MobilityHelper mobility;
Ptr<ListPositionAllocator> positionAlloc = CreateObject<ListPositionAllocator> ();
positionAlloc->Add (Vector (0., 0., 0.));
positionAlloc->Add (Vector (0., 0., 0.));
mobility.SetPositionAllocator (positionAlloc);
mobility.SetMobilityModel ("ns3::ConstantPositionMobilityModel");
mobility.Install (nodes);
//
// In order to use UDP sockets, we need to install the ns-3 internet stack
// on our nodes.
//
InternetStackHelper internet;
internet.Install (nodes);
//
// Assign IP addresses to our nodes. The source node is going to end up
// as 10.1.1.1 and the destination will be 10.1.1.2
//
Ipv4AddressHelper addresses;
addresses.SetBase ("10.1.1.0", "255.255.255.0");
Ipv4InterfaceContainer interfaces = addresses.Assign (devices);
//
// The destination is the wifi device on node one.
//
InetSocketAddress destaddr = InetSocketAddress (interfaces.GetAddress (1), 80);
//
// We just want to send packets from the source to the destination node, so
// the simplest thing is to cook something up manually.
//
TypeId tid = TypeId::LookupByName ("ns3::UdpSocketFactory");
Ptr<Socket> dest = Socket::CreateSocket (nodes.Get (1), tid);
dest->Bind (destaddr);
Ptr<Socket> source = Socket::CreateSocket (nodes.Get (0), tid);
source->Connect (destaddr);
//
// Schedule the packet sends, one packet per simulated second.
//
for (uint32_t i = 0; i < m_testVectors.GetN (); ++i)
{
Time t = Seconds (1. * i);
Simulator::Schedule (t, &Ns3FriisPropagationLossModelTestCase::SendPacket, this, i, source, 1000);
}
Simulator::Stop (Seconds(1. * m_testVectors.GetN ()));
Simulator::Run ();
source->Close ();
source = 0;
dest->Close ();
dest = 0;
m_receiver = 0;
Simulator::Destroy ();
//
// If we've already reported an error, just leave it at that.
//
if (GetErrorStatus () == false)
{
NS_TEST_ASSERT_MSG_EQ (m_gotCallbacks, m_testVectors.GetN (), "Did not get expected number of ReceiveOkCallbacks");
}
return GetErrorStatus ();
}
class Ns3LogDistancePropagationLossModelTestCase : public TestCase
{
public:
Ns3LogDistancePropagationLossModelTestCase ();
virtual ~Ns3LogDistancePropagationLossModelTestCase ();
private:
virtual bool DoRun (void);
void SendPacket (uint32_t i, Ptr<Socket> socket, uint32_t size);
void Receive (Ptr<Packet> p, double snr, WifiMode mode, enum WifiPreamble preamble);
uint32_t m_gotCallbacks;
Ptr<Node> m_receiver;
uint32_t m_vectorIndex;
typedef struct {
Vector m_position;
double m_snr;
double m_tolerance;
} TestVector;
TestVectors<TestVector> m_testVectors;
};
Ns3LogDistancePropagationLossModelTestCase::Ns3LogDistancePropagationLossModelTestCase ()
: TestCase ("Check to see that the ns-3 Log Distance propagation loss model provides correct SNR values"),
    m_gotCallbacks (0), m_receiver (0), m_vectorIndex (0), m_testVectors ()
{
}
Ns3LogDistancePropagationLossModelTestCase::~Ns3LogDistancePropagationLossModelTestCase ()
{
}
void
Ns3LogDistancePropagationLossModelTestCase::Receive (Ptr<Packet> p, double snr, WifiMode mode, enum WifiPreamble preamble)
{
TestVector testVector = m_testVectors.Get (m_vectorIndex++);
if (GetErrorStatus () == false)
{
NS_TEST_EXPECT_MSG_EQ_TOL (snr, testVector.m_snr, testVector.m_tolerance, "Got unexpected SNR value");
}
++m_gotCallbacks;
}
void
Ns3LogDistancePropagationLossModelTestCase::SendPacket (uint32_t i, Ptr<Socket> socket, uint32_t size)
{
TestVector testVector = m_testVectors.Get (i);
m_receiver->GetObject<ConstantPositionMobilityModel> ()->SetPosition (testVector.m_position);
socket->Send (Create<Packet> (size));
}
bool
Ns3LogDistancePropagationLossModelTestCase::DoRun (void)
{
//
// We want to test the propagation loss model calculations at a few chosen
// distances and compare the results to those we have manually calculated
// according to the model documentation. The following "TestVector" objects
// will drive the test.
//
// For example, the first test vector provides a position to which the
// receiver node will be moved prior to transmitting a packet. It also
// specifies that when the packet is received, the SNR should be found
// to be 11289.3 +- 0.05 in the ReceiveOkCallback.
//
TestVector testVector;
testVector.m_position = Vector (10, 0, 0);
testVector.m_snr = 11289.3;
testVector.m_tolerance = 0.05;
m_testVectors.Add (testVector);
testVector.m_position = Vector (20, 0, 0);
testVector.m_snr = 1411.16;
testVector.m_tolerance = 0.005;
m_testVectors.Add (testVector);
testVector.m_position = Vector (40, 0, 0);
testVector.m_snr = 176.407;
testVector.m_tolerance = 0.0005;
m_testVectors.Add (testVector);
testVector.m_position = Vector (80, 0, 0);
testVector.m_snr = 22.0494;
testVector.m_tolerance = 0.00005;
m_testVectors.Add (testVector);
//
// Disable fragmentation for frames shorter than 2200 bytes.
//
Config::SetDefault ("ns3::WifiRemoteStationManager::FragmentationThreshold", StringValue ("2200"));
//
// Turn off RTS/CTS for frames shorter than 2200 bytes.
//
Config::SetDefault ("ns3::WifiRemoteStationManager::RtsCtsThreshold", StringValue ("2200"));
//
// Create the two nodes in the system. Data will be sent from node zero to
// node one.
//
NodeContainer nodes;
nodes.Create (2);
//
// Save a Ptr<Node> to the receiver node so we can get at its mobility model
// and change its position (distance) later.
//
m_receiver = nodes.Get (1);
//
// Use the regular WifiHelper to orchestrate hooking the various pieces of
// the wifi system together. Tell it that we want to use an 802.11b phy.
//
WifiHelper wifi;
wifi.SetStandard (WIFI_PHY_STANDARD_80211b);
//
// Create a physical layer helper and tell it we don't want any receiver
// gain.
//
YansWifiPhyHelper wifiPhy = YansWifiPhyHelper::Default ();
wifiPhy.Set ("RxGain", DoubleValue (0) );
//
// Create the channel helper and tell it that signals will be moving at the
// speed of light.
//
YansWifiChannelHelper wifiChannel;
wifiChannel.SetPropagationDelay ("ns3::ConstantSpeedPropagationDelayModel");
//
// The propagation loss model is one of our independent variables in the
// test.
//
wifiChannel.AddPropagationLoss ("ns3::LogDistancePropagationLossModel",
"Exponent", DoubleValue(3),
"ReferenceLoss", DoubleValue(40.045997));
//
// Create a yans wifi channel and tell the phy helper to use it.
//
wifiPhy.SetChannel (wifiChannel.Create ());
//
// Create a non-quality-of-service mac layer and set it to ad-hoc mode.
//
NqosWifiMacHelper wifiMac = NqosWifiMacHelper::Default ();
wifi.SetRemoteStationManager ("ns3::ConstantRateWifiManager",
"DataMode", StringValue ("wifib-1mbs"),
"ControlMode",StringValue ("wifib-1mbs"));
wifiMac.SetType ("ns3::AdhocWifiMac");
//
// Create the wifi devices.
//
NetDeviceContainer devices = wifi.Install (wifiPhy, wifiMac, nodes);
//
// We need to reach down into the receiving wifi device's phy layer and hook
// the appropriate trace event to get the snr. This isn't one of the usual
// events so it takes some poking around to get there from here.
//
Ptr<YansWifiPhy> phy = devices.Get (1)->GetObject<WifiNetDevice> ()->GetPhy ()->GetObject<YansWifiPhy> ();
phy->SetReceiveOkCallback (MakeCallback (&Ns3LogDistancePropagationLossModelTestCase::Receive, this));
//
// Add mobility models to both nodes. This is used to place the two nodes a
// fixed distance apart. Node zero (the sender) is always at the origin and
// Node one (the receiver) is moved along the x-axis to a given distance from
// the origin. This distance is the second independent variable in our test.
//
MobilityHelper mobility;
Ptr<ListPositionAllocator> positionAlloc = CreateObject<ListPositionAllocator> ();
positionAlloc->Add (Vector (0., 0., 0.));
positionAlloc->Add (Vector (0., 0., 0.));
mobility.SetPositionAllocator (positionAlloc);
mobility.SetMobilityModel ("ns3::ConstantPositionMobilityModel");
mobility.Install (nodes);
//
// In order to use UDP sockets, we need to install the ns-3 internet stack
// on our nodes.
//
InternetStackHelper internet;
internet.Install (nodes);
//
// Assign IP addresses to our nodes. The source node is going to end up
// as 10.1.1.1 and the destination will be 10.1.1.2
//
Ipv4AddressHelper addresses;
addresses.SetBase ("10.1.1.0", "255.255.255.0");
Ipv4InterfaceContainer interfaces = addresses.Assign (devices);
//
// The destination is the wifi device on node one.
//
InetSocketAddress destaddr = InetSocketAddress (interfaces.GetAddress (1), 80);
//
// We just want to send packets from the source to the destination node, so
// the simplest thing is to cook something up manually.
//
TypeId tid = TypeId::LookupByName ("ns3::UdpSocketFactory");
Ptr<Socket> dest = Socket::CreateSocket (nodes.Get (1), tid);
dest->Bind (destaddr);
Ptr<Socket> source = Socket::CreateSocket (nodes.Get (0), tid);
source->Connect (destaddr);
//
// Schedule the packet sends, one packet per simulated second.
//
for (uint32_t i = 0; i < m_testVectors.GetN (); ++i)
{
Time t = Seconds (1. * i);
Simulator::Schedule (t, &Ns3LogDistancePropagationLossModelTestCase::SendPacket, this, i, source, 1000);
}
Simulator::Stop (Seconds(1. * m_testVectors.GetN ()));
Simulator::Run ();
source->Close ();
source = 0;
dest->Close ();
dest = 0;
m_receiver = 0;
Simulator::Destroy ();
//
// If we've already reported an error, just leave it at that.
//
if (GetErrorStatus () == false)
{
NS_TEST_ASSERT_MSG_EQ (m_gotCallbacks, m_testVectors.GetN (), "Did not get expected number of ReceiveOkCallbacks");
}
return GetErrorStatus ();
}
class Ns3WifiPropagationLossModelsTestSuite : public TestSuite
{
public:
Ns3WifiPropagationLossModelsTestSuite ();
};
Ns3WifiPropagationLossModelsTestSuite::Ns3WifiPropagationLossModelsTestSuite ()
: TestSuite ("ns3-wifi-propagation-loss-models", SYSTEM)
{
AddTestCase (new Ns3FriisPropagationLossModelTestCase);
AddTestCase (new Ns3LogDistancePropagationLossModelTestCase);
}
Ns3WifiPropagationLossModelsTestSuite ns3WifiPropagationLossModelsTestSuite;

1
src/test/ns3wifi/waf vendored Normal file

@@ -0,0 +1 @@
exec "`dirname "$0"`"/../../../waf "$@"

15
src/test/ns3wifi/wscript Normal file

@@ -0,0 +1,15 @@
## -*- Mode: python; py-indent-offset: 4; indent-tabs-mode: nil; coding: utf-8; -*-
def configure(conf):
pass
def build(bld):
ns3wifi = bld.create_ns3_module('ns3wifi')
ns3wifi.source = [
'propagation-loss-models-test-suite.cc',
]
headers = bld.new_task_gen('ns3header')
headers.module = 'ns3wifi'
headers.source = [
'ns3wifi.h',
]


@@ -41,6 +41,8 @@ all_modules = (
'devices/mesh/flame',
'applications/ping6',
'applications/radvd',
'test/ns3tcp',
'test/ns3wifi',
)
def set_options(opt):
@@ -61,7 +63,6 @@ def configure(conf):
conf.sub_config('core')
conf.sub_config('simulator')
conf.sub_config('devices/emu')
conf.sub_config('devices/wifi')
conf.sub_config('devices/tap-bridge')
conf.sub_config('contrib')
conf.sub_config('internet-stack')

886
test.py Executable file

@@ -0,0 +1,886 @@
#! /usr/bin/env python
## -*- Mode: python; py-indent-offset: 4; indent-tabs-mode: nil; coding: utf-8; -*-
#
# Copyright (c) 2009 University of Washington
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation;
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
#
import os
import sys
import optparse
import subprocess
import multiprocessing
import threading
import Queue
import signal
import random
import xml.dom.minidom
#
# XXX This should really be part of a waf command to list the configuration
# items relative to optional ns-3 pieces.
#
# A list of interesting configuration items in the waf configuration
# cache which we may be interested in when deciding on which examples
# to run and how to run them. These are set by waf during the
# configuration phase and the corresponding assignments are usually
# found in the associated subdirectory wscript files.
#
interesting_config_items = [
"NS3_BUILDDIR",
"NS3_MODULE_PATH",
"ENABLE_EMU",
"ENABLE_GSL",
"ENABLE_GTK_CONFIG_STORE",
"ENABLE_LIBXML2",
"ENABLE_NSC",
"ENABLE_PYTHON_BINDINGS",
"ENABLE_PYTHON_SCANNING",
"ENABLE_REAL_TIME",
"ENABLE_STATIC_NS3",
"ENABLE_SUDO",
"ENABLE_TAP",
"ENABLE_THREADING",
]
#
# A list of examples to run as smoke tests just to ensure that they remain
# buildable and runnable over time. Also a condition under which to run
# the example (from the waf configuration).
#
# XXX Shouldn't this be read from a configuration file somewhere rather than
# hardcoded?
#
example_tests = [
("run-tests", "False"),
("csma-bridge", "True"),
("csma-bridge-one-hop", "True"),
("csma-broadcast", "True"),
("csma-multicast", "True"),
("csma-one-subnet", "True"),
("csma-packet-socket", "True"),
("csma-ping", "True"),
("csma-raw-ip-socket", "True"),
("csma-star", "True"),
("dynamic-global-routing", "True"),
("first", "True"),
("global-routing-slash32", "True"),
("hello-simulator", "True"),
("mixed-global-routing", "True"),
("mixed-wireless", "True"),
("object-names", "True"),
("realtime-udp-echo", "ENABLE_REAL_TIME == True"),
("second", "True"),
("simple-alternate-routing", "True"),
("simple-error-model", "True"),
("simple-global-routing", "True"),
("simple-point-to-point-olsr", "True"),
("simple-wifi-frame-aggregation", "True"),
("star", "True"),
("static-routing-slash32", "True"),
("tcp-large-transfer", "True"),
("tcp-nsc-zoo", "ENABLE_NSC == True"),
("tcp-star-server", "True"),
("test-ipv6", "True"),
("third", "True"),
("udp-echo", "True"),
("wifi-wired-bridging", "True"),
]
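#
# Each entry pairs an example name with a Python expression; the expression
# is presumably evaluated against the configuration values read in by
# read_waf_config() below to decide whether the example should be run.
#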
#
# Most of the examples produce gangs of trace files, so we want to find
# somewhere to put them that won't pollute the current directory. One
# obvious place is somewhere in /tmp.
#
TMP_TRACES_DIR = "/tmp/unchecked-traces"
#
# The test suites are going to want to output status. They are running
# concurrently. This means that unless we are careful, the output of
# the test suites will be interleaved. Rather than introducing a lock
# file that could unintentionally start serializing execution, we ask
# the tests to write their output to a temporary directory and then
# put together the final output file when we "join" the test tasks back
# to the main thread.
#
TMP_OUTPUT_DIR = "/tmp/testpy"
def get_node_text(node):
for child in node.childNodes:
if child.nodeType == child.TEXT_NODE:
return child.nodeValue
return "None"
#
# A simple example of writing a text file with a test result summary.
#
def translate_to_text(results_file, text_file):
f = open(text_file, 'w')
dom = xml.dom.minidom.parse(results_file)
for suite in dom.getElementsByTagName("TestSuite"):
result = get_node_text(suite.getElementsByTagName("SuiteResult")[0])
name = get_node_text(suite.getElementsByTagName("SuiteName")[0])
time = get_node_text(suite.getElementsByTagName("SuiteTime")[0])
output = "%s: Test Suite \"%s\" (%s)\n" % (result, name, time)
f.write(output)
if result != "CRASH":
for case in suite.getElementsByTagName("TestCase"):
result = get_node_text(case.getElementsByTagName("CaseResult")[0])
name = get_node_text(case.getElementsByTagName("CaseName")[0])
time = get_node_text(case.getElementsByTagName("CaseTime")[0])
output = " %s: Test Case \"%s\" (%s)\n" % (result, name, time)
f.write(output)
if result == "FAIL":
f.write(" Details:\n")
f.write(" Message: %s\n" % get_node_text(case.getElementsByTagName("CaseMessage")[0]))
f.write(" Condition: %s\n" % get_node_text(case.getElementsByTagName("CaseCondition")[0]))
f.write(" Actual: %s\n" % get_node_text(case.getElementsByTagName("CaseActual")[0]))
f.write(" Limit: %s\n" % get_node_text(case.getElementsByTagName("CaseLimit")[0]))
f.write(" File: %s\n" % get_node_text(case.getElementsByTagName("CaseFile")[0]))
f.write(" Line: %s\n" % get_node_text(case.getElementsByTagName("CaseLine")[0]))
for example in dom.getElementsByTagName("Example"):
result = get_node_text(example.getElementsByTagName("Result")[0])
name = get_node_text(example.getElementsByTagName("Name")[0])
output = "%s: Example \"%s\"\n" % (result, name)
f.write(output)
f.close()
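#
# For example (file names are hypothetical):
#
#   translate_to_text("/tmp/testpy/results.xml", "results.txt")
#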
#
# A simple example of writing an HTML file with a test result summary.
#
def translate_to_html(results_file, html_file):
f = open(html_file, 'w')
f.write("<html>\n")
f.write("<body>\n")
f.write("<center><h1>ns-3 Test Results</h1></center>\n")
dom = xml.dom.minidom.parse(results_file)
f.write("<h2>Test Suites</h2>\n")
for suite in dom.getElementsByTagName("TestSuite"):
name = get_node_text(suite.getElementsByTagName("SuiteName")[0])
result = get_node_text(suite.getElementsByTagName("SuiteResult")[0])
time = get_node_text(suite.getElementsByTagName("SuiteTime")[0])
if result == "PASS":
f.write("<h3 style=\"color:green\">%s: %s (%s)</h3>\n" % (result, name, time))
else:
f.write("<h3 style=\"color:red\">%s: %s (%s)</h3>\n" % (result, name, time))
f.write("<table border=\"1\">\n")
f.write("<th> Result </th>\n")
if result == "CRASH":
f.write("<tr>\n")
f.write("<td style=\"color:red\">%s</td>\n" % result)
f.write("</tr>\n")
f.write("</table>\n")
continue
f.write("<th>Test Case Name</th>\n")
f.write("<th> Time </th>\n")
if result == "FAIL":
f.write("<th>Details</th>\n")
for case in suite.getElementsByTagName("TestCase"):
f.write("<tr>\n")
name = get_node_text(case.getElementsByTagName("CaseName")[0])
result = get_node_text(case.getElementsByTagName("CaseResult")[0])
time = get_node_text(case.getElementsByTagName("CaseTime")[0])
if result == "FAIL":
f.write("<td style=\"color:red\">%s</td>\n" % result)
f.write("<td>%s</td>\n" % name)
f.write("<td>%s</td>\n" % time)
f.write("<td>")
f.write("<b>Message: </b>%s, " % get_node_text(case.getElementsByTagName("CaseMessage")[0]))
f.write("<b>Condition: </b>%s, " % get_node_text(case.getElementsByTagName("CaseCondition")[0]))
f.write("<b>Actual: </b>%s, " % get_node_text(case.getElementsByTagName("CaseActual")[0]))
f.write("<b>Limit: </b>%s, " % get_node_text(case.getElementsByTagName("CaseLimit")[0]))
f.write("<b>File: </b>%s, " % get_node_text(case.getElementsByTagName("CaseFile")[0]))
f.write("<b>Line: </b>%s" % get_node_text(case.getElementsByTagName("CaseLine")[0]))
f.write("</td>\n")
else:
f.write("<td style=\"color:green\">%s</td>\n" % result)
f.write("<td>%s</td>\n" % name)
f.write("<td>%s</td>\n" % time)
f.write("<td></td>\n")
f.write("</tr>\n")
f.write("</table>\n")
f.write("<h2>Examples</h2>\n")
f.write("<table border=\"1\">\n")
f.write("<th> Result </th>\n")
f.write("<th>Example Name</th>\n")
for example in dom.getElementsByTagName("Example"):
f.write("<tr>\n")
result = get_node_text(example.getElementsByTagName("Result")[0])
if result == "FAIL":
f.write("<td style=\"color:red\">%s</td>\n" % result)
else:
f.write("<td style=\"color:green\">%s</td>\n" % result)
name = get_node_text(example.getElementsByTagName("Name")[0])
f.write("<td>%s</td>\n" % name)
f.write("</tr>\n")
f.write("</table>\n")
f.write("</body>\n")
f.write("</html>\n")
f.close()
#
# Python Control-C handling is broken in the presence of multiple threads.
# Signals get delivered to the runnable/running thread by default and if
# it is blocked, the signal is simply ignored. So we hook sigint and set
# a global variable telling the system to shut down gracefully.
#
thread_exit = False
def sigint_hook(signal, frame):
global thread_exit
thread_exit = True
return 0
#
# Waf can be configured to compile in debug or optimized modes. In each
# case, the resulting build goes into a different directory. If we want
# the tests to run against the correct code base, we have to figure out which
# mode waf is running in. This is called its active variant.
#
# XXX This function pokes around in the waf internal state file. To be a
# little less hacky, we should add a command to waf to return this info
# and use that result.
#
def read_waf_active_variant():
for line in open("build/c4che/default.cache.py").readlines():
if line.startswith("NS3_ACTIVE_VARIANT"):
exec(line, globals())
break
if options.verbose:
print "NS3_ACTIVE_VARIANT == %s" % NS3_ACTIVE_VARIANT
#
# In general, the build process itself naturally takes care of figuring out
# which tests are built into the test runner. For example, if waf configure
# determines that ENABLE_EMU is false due to some missing dependency,
# the tests for the emu net device simply will not be built and will
# therefore not be included in the built test runner.
#
# Examples, however, are a different story. In that case, we are just given
# a list of examples that could be run. Instead of just failing (for
# example, running tcp-nsc-zoo when NSC is not present), we look into the
# waf saved configuration for relevant configuration items.
#
# XXX This function pokes around in the waf internal state file. To be a
# little less hacky, we should add a command to waf to return this info
# and use that result.
#
def read_waf_config():
for line in open("build/c4che/%s.cache.py" % NS3_ACTIVE_VARIANT).readlines():
for item in interesting_config_items:
if line.startswith(item):
exec(line, globals())
if options.verbose:
for item in interesting_config_items:
print "%s ==" % item, eval(item)
#
# It seems pointless to fork a process to run waf to fork a process to run
# the test runner, so we just run the test runner directly. The main thing
# that waf would do for us would be to sort out the shared library path but
# we can deal with that easily and do so here.
#
# There can be many different ns-3 repositories on a system, and each has
# its own shared libraries, so ns-3 doesn't hardcode a shared library search
# path -- it is cooked up dynamically, so we do that too.
#
def make_library_path():
global LIBRARY_PATH
LIBRARY_PATH = "LD_LIBRARY_PATH='"
if sys.platform == "darwin":
LIBRARY_PATH = "DYLD_LIBRARY_PATH='"
elif sys.platform == "win32":
LIBRARY_PATH = "PATH='"
elif sys.platform == "cygwin":
LIBRARY_PATH = "PATH='"
for path in NS3_MODULE_PATH:
LIBRARY_PATH = LIBRARY_PATH + path + ":"
LIBRARY_PATH = LIBRARY_PATH + "'"
def run_job_synchronously(shell_command, directory):
cmd = "%s %s/%s/%s" % (LIBRARY_PATH, NS3_BUILDDIR, NS3_ACTIVE_VARIANT, shell_command)
if options.verbose:
print "Synchronously execute %s" % cmd
proc = subprocess.Popen(cmd, shell=True, cwd=directory, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout_results = proc.communicate()[0]
return (proc.returncode, stdout_results)
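#
# Typical use, mirroring what the worker threads below do (the suite name
# is a hypothetical example):
#
#   (rc, out) = run_job_synchronously("utils/test-runner --suite=my-suite", os.getcwd())
#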
#
# This class defines a unit of testing work. It will typically refer to
# a test suite to run using the test-runner, or an example to run directly.
#
class Job():
def __init__(self):
self.is_break = False
self.is_example = False
self.shell_command = ""
self.display_name = ""
self.cwd = ""
self.tmp_file_name = ""
self.returncode = False
#
# A job is either a standard job or a special job indicating that a worker
# thread should exit. This special job is indicated by setting is_break
# to true.
#
def set_is_break(self, is_break):
self.is_break = is_break
#
# Examples are treated differently than standard test suites. This is
# mostly because they are completely unaware that they are being run as
# tests. So we have to do some special case processing to make them look
# like tests.
#
def set_is_example(self, is_example):
self.is_example = is_example
#
# This is the shell command that will be executed in the job. For example,
#
# "utils/test-runner --suite=some-test-suite"
#
def set_shell_command(self, shell_command):
self.shell_command = shell_command
#
# This is the display name of the job, typically the test suite or example
# name. For example,
#
# "some-test-suite" or "udp-echo"
#
def set_display_name(self, display_name):
self.display_name = display_name
#
# This is the base directory of the repository out of which the tests are
# being run. It will be used deep down in the testing framework to determine
# where the source directory of the test was, and therefore where to find
# provided test vectors. For example,
#
# "/home/user/repos/ns-3-dev"
#
def set_basedir(self, basedir):
self.basedir = basedir
#
# This is the current working directory that will be given to an executing
# test as it is being run. It will be used for examples to tell them where
# to write all of the pcap files that we will be carefully ignoring. For
# example,
#
# "/tmp/unchecked-traces"
#
def set_cwd(self, cwd):
self.cwd = cwd
#
# This is the temporary results file name that will be given to an executing
# test as it is being run. We will be running all of our tests in parallel
# so there must be multiple temporary output files. These will be collected
# into a single XML file at the end and then be deleted. The file names are
# just giant random numbers, for example
#
# "/tmp/testpy/5437925246732857"
#
def set_tmp_file_name(self, tmp_file_name):
self.tmp_file_name = tmp_file_name
#
# The return code received when the job process is executed.
#
def set_returncode(self, returncode):
self.returncode = returncode
#
# The worker thread class that handles the actual running of a given test.
# Once spawned, it receives requests for work through its input_queue and
# ships the results back through the output_queue.
#
class worker_thread(threading.Thread):
def __init__(self, input_queue, output_queue):
threading.Thread.__init__(self)
self.input_queue = input_queue
self.output_queue = output_queue
def run(self):
while True:
job = self.input_queue.get()
#
# Worker threads continue running until explicitly told to stop with
# a special job.
#
if job.is_break:
return
#
# If the global interrupt handler sets the thread_exit variable,
# we stop doing real work and just report back that a "break" in the
# normal command processing has happened.
#
if thread_exit == True:
job.set_is_break(True)
self.output_queue.put(job)
continue
#
# Otherwise go about the business of running tests as normal.
#
else:
if options.verbose:
print "Launch %s" % job.shell_command
if job.is_example:
#
# If we have an example, the shell command is all we need to
# know. It will be something like "examples/udp-echo"
#
(job.returncode, standard_out) = run_job_synchronously(job.shell_command, job.cwd)
else:
#
# If we're a test suite, we need to provide a little more info
# to the test runner, specifically the base directory and temp
# file name
#
(job.returncode, standard_out) = run_job_synchronously(job.shell_command + " --basedir=%s --out=%s" %
(job.basedir, job.tmp_file_name), job.cwd)
if options.verbose:
print standard_out
self.output_queue.put(job)
#
# This is the main function that does the work of interacting with the test-runner
# itself.
#
def run_tests():
#
# Run waf to make sure that everything is built, configured and ready to go
# unless we are explicitly told not to.
#
if options.nowaf == False:
proc = subprocess.Popen("./waf", shell=True)
proc.communicate()
#
# Pull some interesting configuration information out of waf, primarily
# so we can know where executables can be found, but also to tell us what
# pieces of the system have been built. This will tell us what examples
# are runnable.
#
read_waf_active_variant()
read_waf_config()
make_library_path()
#
# There are a couple of options that imply we can exit before starting
# up a bunch of threads and running tests. Let's detect these cases and
# handle them without doing all of the hard work.
#
if options.kinds:
(rc, standard_out) = run_job_synchronously("utils/test-runner --kinds", os.getcwd())
print standard_out
if options.list:
(rc, standard_out) = run_job_synchronously("utils/test-runner --list", os.getcwd())
print standard_out
if options.kinds or options.list:
return
#
# We communicate results in two ways. First, a simple message relating
# PASS, FAIL, or SKIP is always written to the standard output. It is
# expected that this will be one of the main use cases. A developer can
# just run test.py with no options and see that all of the tests still
# pass.
#
# The second main use case is when detailed status is requested (with the
# --text or --html options). Typically this will be text if a developer
# finds a problem, or HTML for nightly builds. In these cases, an
# XML file is written containing the status messages from the test suites.
# This file is then read and translated into text or HTML. It is expected
# that nobody will really be interested in the XML, so we write it to
# somewhere in /tmp with a random name to avoid collisions. Just in case
# some strange once-in-a-lifetime error occurs, we always write the info
# so it can be found; we just may not use it.
#
# When we run examples as smoke tests, they are going to want to create
# lots and lots of trace files. We aren't really interested in the contents
# of the trace files, so we also just stash them off in /tmp somewhere.
#
if not os.path.exists(TMP_OUTPUT_DIR):
os.makedirs(TMP_OUTPUT_DIR)
if not os.path.exists(TMP_TRACES_DIR):
os.makedirs(TMP_TRACES_DIR)
#
# Create the main output file and start filling it with XML. We need to
# do this since the tests will just append individual results to this file.
#
xml_results_file = TMP_OUTPUT_DIR + "%d.xml" % random.randint(0, sys.maxint)
f = open(xml_results_file, 'w')
f.write('<?xml version="1.0"?>\n')
f.write('<TestResults>\n')
f.close()
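#
# Once everything has run, the finished results file has roughly this
# shape (a sketch assembled from the element names written below; the
# example name is illustrative):
#
#   <?xml version="1.0"?>
#   <TestResults>
#     <TestSuite> ... </TestSuite>
#     <Example>
#       <Name>udp-echo</Name>
#       <Result>PASS</Result>
#     </Example>
#   </TestResults>
#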
#
# We need to figure out what test suites to execute. We are either given one
# suite or example explicitly via the --suite or --example option, or we
# need to call into the test runner and ask it to list all of the available
# test suites. Further, we need to provide the constraint information if it
# has been given to us.
#
# This translates into allowing the following options with respect to the
# suites
#
# ./test.py: run all of the suites
# ./test.py --constrain=unit: run all unit suites
# ./test.py --suite=some-test-suite: run the single suite
# ./test.py --example=udp-echo: run no test suites
# ./test.py --suite=some-suite --example=some-example: run the single suite
#
# We can also use the --constrain option to provide an ordering of test
# execution quite easily.
#
if len(options.suite):
suites = options.suite + "\n"
elif len(options.example) == 0:
if len(options.constrain):
(rc, suites) = run_job_synchronously("utils/test-runner --list --constrain=%s" % options.constrain, os.getcwd())
else:
(rc, suites) = run_job_synchronously("utils/test-runner --list", os.getcwd())
else:
suites = ""
#
# suite_list will either be a single test suite name that the user has
# indicated she wants to run or a list of test suites provided by
# the test-runner possibly according to user provided constraints.
# We go through the trouble of setting up the parallel execution
# even in the case of a single suite to avoid having to process the
# results in two different places.
#
suite_list = suites.split('\n')
#
# We now have a possibly large number of test suites to run, so we want to
# run them in parallel. We're going to spin up a number of worker threads
# that will run our test jobs for us.
#
# XXX Need to figure out number of CPUs without the multiprocessing
# dependency since multiprocessing is not standard until Python 2.6
#
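# A portable fallback might look like this (a sketch only; the code below
# simply relies on multiprocessing being available):
#
#   try:
#       import multiprocessing
#       processors = multiprocessing.cpu_count()
#   except ImportError:
#       processors = 1
#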
input_queue = Queue.Queue(0)
output_queue = Queue.Queue(0)
jobs = 0
threads = []
processors = multiprocessing.cpu_count()
for i in range(processors):
thread = worker_thread(input_queue, output_queue)
threads.append(thread)
thread.start()
#
# We now have worker threads spun up, and a list of work to do. So, run
# through the list of test suites and dispatch a job to run each one.
#
# Dispatching will run with unlimited speed and the worker threads will
# execute as fast as possible from the queue.
#
for test in suite_list:
if len(test):
job = Job()
job.set_is_example(False)
job.set_display_name(test)
job.set_tmp_file_name(TMP_OUTPUT_DIR + "%d" % random.randint(0, sys.maxint))
job.set_cwd(os.getcwd())
job.set_basedir(os.getcwd())
job.set_shell_command("utils/test-runner --suite='%s'" % test)
if options.verbose:
print "Queue %s" % test
input_queue.put(job)
jobs = jobs + 1
#
# We've taken care of the discovered or specified test suites. Now we
# have to deal with examples run as smoke tests. We have a list of all of
# the example programs it makes sense to try and run. Each example will
# have a condition associated with it that must evaluate to true for us
# to try and execute it. This is used to determine if the example has
# a dependency that is not satisfied. For example, if an example depends
# on NSC being configured by waf, that example should have a condition
# that evaluates to true if NSC is enabled. For example,
#
# ("tcp-nsc-zoo", "ENABLE_NSC == True"),
#
# In this case, the example "tcp-nsc-zoo" will only be run if we find the
# waf configuration variable "ENABLE_NSC" to be True.
#
# We don't care at all how the trace files come out, so we just write them
# to a single temporary directory.
#
# XXX As it stands, all of the trace files have unique names, and so file
# collisions can only happen if two instances of an example are running in
# two versions of the test.py process concurrently. We may want to create
# uniquely named temporary traces directories to avoid this problem.
#
# We need to figure out what examples to execute. We are either given one
# suite or example explicitly via the --suite or --example option, or we
# need to walk the list of examples looking for available example
# conditions.
#
# This translates into allowing the following options with respect to the
# suites
#
# ./test.py: run all of the examples
# ./test.py --constrain=unit: run no examples
# ./test.py --constrain=example: run all of the examples
# ./test.py --suite=some-test-suite: run no examples
# ./test.py --example=some-example: run the single example
# ./test.py --suite=some-suite --example=some-example: run the single example
#
# XXX could use constrain to separate out examples used for performance
# testing
#
if len(options.suite) == 0 and len(options.example) == 0:
if len(options.constrain) == 0 or options.constrain == "example":
for test, condition in example_tests:
if eval(condition) == True:
job = Job()
job.set_is_example(True)
job.set_display_name(test)
job.set_tmp_file_name("")
job.set_cwd(TMP_TRACES_DIR)
job.set_basedir(os.getcwd())
job.set_shell_command("examples/%s" % test)
if options.verbose:
print "Queue %s" % test
input_queue.put(job)
jobs = jobs + 1
elif len(options.example):
#
# If you tell me to run an example, I will try and run the example
# irrespective of any condition.
#
job = Job()
job.set_is_example(True)
job.set_display_name(options.example)
job.set_tmp_file_name("")
job.set_cwd(TMP_TRACES_DIR)
job.set_basedir(os.getcwd())
job.set_shell_command("examples/%s" % options.example)
if options.verbose:
print "Queue %s" % test
input_queue.put(job)
jobs = jobs + 1
#
# Tell the worker threads to pack up and go home for the day. Each one
# will exit when it sees its is_break job.
#
for i in range(processors):
job = Job()
job.set_is_break(True)
input_queue.put(job)
#
# Now all of the tests have been dispatched, so all we have to do here
# in the main thread is to wait for them to complete. Keyboard interrupt
# handling is broken as mentioned above. We use a signal handler to catch
# sigint and set a global variable. When the worker threads sense this
# they stop doing real work and will just start throwing jobs back at us
# with is_break set to True. In this case, there are no real results so we
# ignore them. If there are real results, we always print PASS or FAIL to
# standard out as a quick indication of what happened.
#
for i in range(jobs):
job = output_queue.get()
if job.is_break:
continue
if job.is_example:
kind = "Example"
else:
kind = "TestSuite"
if job.returncode == 0:
status = "PASS"
else:
status = "FAIL"
print "%s: %s %s" % (status, kind, job.display_name)
if job.is_example == True:
#
# Examples are the odd man out here. They are written without any
# knowledge that they are going to be run as a test, so we need to
# cook up some kind of output for them. We're writing an xml file,
# so we do some simple XML that says we ran the example.
#
# XXX We could add some timing information to the examples, i.e. run
# them through time and print the results here.
#
f = open(xml_results_file, 'a')
f.write('<Example>\n')
example_name = " <Name>%s</Name>\n" % job.display_name
f.write(example_name)
if job.returncode == 0:
f.write(' <Result>PASS</Result>\n')
elif job.returncode == 1:
f.write(' <Result>FAIL</Result>\n')
else:
f.write(' <Result>CRASH</Result>\n')
f.write('</Example>\n')
f.close()
else:
#
# If we're not running an example, we're running a test suite.
# These puppies are running concurrently and generating output
# that is written to a temporary file to avoid collisions.
#
# Now that we are executing sequentially in the main thread, we can
# concatenate the contents of the associated temp file to the main
# results file and remove that temp file.
#
# One thing to consider is that a test suite can crash just as
# well as any other program, so we need to deal with that
# possibility as well. If it ran correctly it will return 0
# if it passed, or 1 if it failed. In this case, we can count
# on the results file it saved being complete. If it crashed, it
# will return some other code, and the file should be considered
# corrupt and useless. If the suite didn't create any XML, then
# we're going to have to do it ourselves.
#
if job.returncode == 0 or job.returncode == 1:
f_to = open(xml_results_file, 'a')
f_from = open(job.tmp_file_name, 'r')
f_to.write(f_from.read())
f_to.close()
f_from.close()
else:
f = open(xml_results_file, 'a')
f.write("<TestSuite>\n")
f.write(" <Name>%s</Name>\n" % job.display_name)
f.write(' <Result>CRASH</Result>\n')
f.write("</TestSuite>\n")
f.close()
os.remove(job.tmp_file_name)
#
# We have all of the tests run and the results written out. One final
# bit of housekeeping is to wait for all of the threads to close down
# so we can exit gracefully.
#
for thread in threads:
thread.join()
#
# Back at the beginning of time, we started the body of an XML document
# since the test suites and examples were going to just write their
# individual pieces. So, we need to finish off and close out the XML
# document.
#
f = open(xml_results_file, 'a')
f.write('</TestResults>\n')
f.close()
#
# The last things to do are to translate the XML results file to "human
# readable form" if the user asked for it
#
if len(options.html):
translate_to_html(xml_results_file, options.html)
if len(options.text):
translate_to_text(xml_results_file, options.text)
def main(argv):
random.seed()
parser = optparse.OptionParser()
parser.add_option("-c", "--constrain", action="store", type="string", dest="constrain", default="",
metavar="KIND",
help="constrain the test-runner by kind of test")
parser.add_option("-e", "--example", action="store", type="string", dest="example", default="",
metavar="EXAMPLE",
help="specify a single example to run")
parser.add_option("-k", "--kinds", action="store_true", dest="kinds", default=False,
help="print the kinds of tests available")
parser.add_option("-l", "--list", action="store_true", dest="list", default=False,
help="print the list of known tests")
parser.add_option("-n", "--nowaf", action="store_true", dest="nowaf", default=False,
help="do not run waf before starting testing")
parser.add_option("-s", "--suite", action="store", type="string", dest="suite", default="",
metavar="TEST-SUITE",
help="specify a single test suite to run")
parser.add_option("-v", "--verbose", action="store_true", dest="verbose", default=False,
help="print progress and informational messages")
parser.add_option("-w", "--web", "--html", action="store", type="string", dest="html", default="",
metavar="HTML-FILE",
help="write detailed test results into HTML-FILE.html")
parser.add_option("-t", "--text", action="store", type="string", dest="text", default="",
metavar="TEXT-FILE",
help="write detailed test results into TEXT-FILE.txt")
global options
options = parser.parse_args()[0]
signal.signal(signal.SIGINT, sigint_hook)
run_tests()
return 0
if __name__ == '__main__':
sys.exit(main(sys.argv))

248
utils/test-runner.cc Normal file
View File

@@ -0,0 +1,248 @@
/* -*- Mode:C++; c-file-style:"gnu"; indent-tabs-mode:nil; -*- */
/*
* Copyright (c) 2009 University of Washington
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation;
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*/
#include "ns3/test.h"
#include <iostream>
#include <fstream>
#include <string>
using namespace ns3;
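//
// Typical invocations, mirroring how test.py (above) drives this program
// (the suite name and paths are hypothetical examples):
//
//   ./test-runner --list --constrain=unit
//   ./test-runner --suite=my-suite --basedir=/home/user/ns-3-dev --out=/tmp/results.xml
//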
//
// Run one of the test suites. Returns an integer with the boolean sense of
// "an error has occurred." That is, 0 == false -> no error; 1 == true -> an
// error occurred.
//
int
main (int argc, char *argv[])
{
bool doVerbose = false;
bool doList = false;
bool doHelp = false;
bool doSuite = false;
bool doKinds = false;
bool haveBasedir = false;
bool haveOutfile = false;
bool haveType = false;
std::string suiteName;
std::string basedir;
std::string outfileName;
std::string typeName;
for (int i = 1; i < argc; ++i)
{
std::string arg(argv[i]);
if (arg.find ("--basedir=") != std::string::npos)
{
basedir = arg.substr (arg.find_first_of ("=") + 1, 9999);
haveBasedir = true;
}
if (arg.find ("--constrain=") != std::string::npos)
{
typeName = arg.substr (arg.find_first_of ("=") + 1, 9999);
haveType = true;
}
if (arg.compare ("--help") == 0)
{
doHelp = true;
}
if (arg.compare ("--kinds") == 0)
{
doKinds = true;
}
if (arg.compare ("--list") == 0)
{
doList = true;
}
if (arg.find ("--out=") != std::string::npos)
{
outfileName = arg.substr (arg.find_first_of ("=") + 1, 9999);
haveOutfile = true;
}
if (arg.find ("--suite=") != std::string::npos)
{
suiteName = arg.substr (arg.find_first_of ("=") + 1, 9999);
doSuite = true;
}
if (arg.compare ("--verbose") == 0)
{
doVerbose = true;
}
}
//
// A help request trumps everything else. If we have one, just print the help
// and leave.
//
if (doHelp)
{
std::cout << " --basedir=dir: Set the base directory (where to find src) to \"dir\"" << std::endl;
std::cout << " --constrain=test-type: Constrain checks to test suites of type \"test-type\"" << std::endl;
std::cout << " --help: Print this message" << std::endl;
std::cout << " --kinds: List all of the available kinds of tests" << std::endl;
std::cout << " --list: List all of the test suites (optionally constrained by test-type)" << std::endl;
std::cout << " --out=file-name: Set the test status output file to \"file-name\"" << std::endl;
std::cout << " --suite=suite-name: Run the test suite named \"suite-name\"" << std::endl;
std::cout << " --verbose: Turn on messages in the run test suites" << std::endl;
return false;
}
//
// A kinds request trumps everything remaining. If we are asked, just
// print the list of types and leave.
//
if (doKinds)
{
//
// Coming up with a string to represent a test type is completely up to
// us here. We just define the types as being a string composed of the
// enum defined in test.h converted to lower case.
//
std::cout << " bvt: Build Verification Tests (to see if build completed successfully)" << std::endl;
std::cout << " unit: Unit Tests (within modules to check basic functionality)" << std::endl;
std::cout << " system: System Tests (spans modules to check integration of modules)" << std::endl;
std::cout << " example: Examples (to see if example programs run successfully)" << std::endl;
std::cout << " performance: Performance Tests (check to see if the system is as fast as expected)" << std::endl;
return false;
}
//
// A list request is the first functional request. It trumps running the
// actual tests. If we get a list request, we don't run anything, we just
// do the requested list which may or may not be qualified by a typename.
//
if (doList)
{
for (uint32_t i = 0; i < TestRunner::GetNTestSuites (); ++i)
{
TestSuite *suite = TestRunner::GetTestSuite (i);
//
// Filter the tests listed by type if requested.
//
if (haveType)
{
TestSuite::TestType type = suite->GetTestType ();
if (typeName == "bvt" && type != TestSuite::BVT)
{
continue;
}
if (typeName == "unit" && type != TestSuite::UNIT)
{
continue;
}
if (typeName == "system" && type != TestSuite::SYSTEM)
{
continue;
}
if (typeName == "example" && type != TestSuite::EXAMPLE)
{
continue;
}
if (typeName == "performance" && type != TestSuite::PERFORMANCE)
{
continue;
}
}
//
// This creates a list of test suite names that can be used by the
// high level test manager to get a list of all tests. It will then
// typically launch individual tests in parallel, calling back here
// with a specific "suite=" to run.
//
std::cout << suite->GetName () << std::endl;
}
return false;
}
//
// If we haven't been asked to run a test suite, we are just going to happily
// try and run everything. Test suites are possibly going to need to figure
// out where their source directory is, and to do that they will need to know
// where the base directory of the distribution is (the directory in which
// "src" is found). We could try and run without it, but when it is needed,
// the test will fail with an assertion. So to be safe, we require a basedir
// to proceed.
//
if (haveBasedir == false)
{
std::cout << "Must specify a base directory to run tests (use --basedir option)" << std::endl;
return true;
}
//
// If given an output file, we just append the output of each test suite
// we're asked to run to the end of that file. We need to append since the
// higher level test runner may be just running a number of tests back to
// back. We leave it up to that code to decide how to deal with possible
// parallel operation -- we just append to a file here. If no output file
// is specified, we don't do any output and just return the sense of error
// given by the test.
//
std::ofstream *pofs = 0;
std::ofstream ofs;
if (!outfileName.empty ())
{
ofs.open (outfileName.c_str (), std::fstream::out | std::fstream::app);
pofs = &ofs;
}
//
// If we have a specified test suite to run, then we only run that suite.
// The default case is to "run everything. We don't expect this to be done
// much since typically higher level code will be running suites in parallel
// but we'll do it if asked.
//
bool result = false;
for (uint32_t i = 0; i < TestRunner::GetNTestSuites (); ++i)
{
TestSuite *testSuite = TestRunner::GetTestSuite (i);
if (doSuite == false || (doSuite == true && suiteName == testSuite->GetName ()))
{
testSuite->SetBaseDir (basedir);
testSuite->SetStream (pofs);
testSuite->SetVerbose (doVerbose);
result |= testSuite->Run ();
}
}
ofs.close();
return result;
}

View File

@@ -9,6 +9,11 @@ def build(bld):
unit_tests.source = 'run-tests.cc'
## link unit test program with all ns3 modules
unit_tests.uselib_local = 'ns3'
test_runner = bld.create_ns3_program('test-runner', ['core'])
test_runner.install_path = None # do not install
test_runner.source = 'test-runner.cc'
test_runner.uselib_local = 'ns3'
obj = bld.create_ns3_program('bench-simulator', ['simulator'])
obj.source = 'bench-simulator.cc'

View File

@@ -358,8 +358,15 @@ def configure(conf):
else:
conf.report_optional_feature("static", "Static build", False,
"option --enable-static not selected")
have_gsl = conf.pkg_check_modules('GSL', 'gsl', mandatory=False)
conf.env['ENABLE_GSL'] = have_gsl
conf.report_optional_feature("GSL", "GNU Scientific Library (GSL)",
conf.env['ENABLE_GSL'],
"GSL not found")
if have_gsl:
conf.env.append_value('CXXDEFINES', "ENABLE_GSL")
conf.env.append_value('CCDEFINES', "ENABLE_GSL")
# Write a summary of optional features status
print "---- Summary of optional NS-3 features:"