Differences between revisions 3 and 24 (spanning 21 versions)
Revision 3 as of 2011-10-18 05:23:59
Size: 4193
Editor: KaiJaeger
Comment:
Revision 24 as of 2016-09-20 14:03:46
Size: 20504
Editor: KaiJaeger
Comment:
Deletions are marked like this. Additions are marked like this.
Line 1: Line 1:
## page was renamed from AplAplTemplate
Line 7: Line 6:
|| /!\ Work in progress ||
Line 13: Line 10:
You might find the framework flexible enough to suit your own needs when it comes to implementing tests.
You might find the framework flexible enough to suit your own needs when it comes to implementing tests, even independently from the APLTree project.
Line 19: Line 17:
Note that test case causing a crash are considered "broken" Test cases that do not get the expected result are considered "failing".
Note that test cases causing a crash are considered "broken". Test cases that do not return the expected result are considered "failing".
Line 23: Line 21:
 1. The `#.Tester` class assumes that all your tests are hosted by an ordinary namespace. All methods take a ref to that namespace as an argument.
 1. All test functions inside that namespace are expected to start their names with the string `Test_` followed by digits.
 1. The `#.Tester` class assumes that all your tests are hosted by an ordinary namespace. All methods running test cases take a ref to that namespace as an argument.
 1. All test functions inside that namespace are expected to start their names with the string `Test_` followed by digits. '''Update''': since version 1.1 test functions can be grouped. For example, in order to group all functions testing a method `foo` the test functions are named `Test_foo_001` (or `Test_foo_01` or `Test_foo_00001`), `Test_foo_002`, `Test_foo_003` and so on.
Line 27: Line 25:
    * Run with error trapping and return a log (vector of text vectors) reporting details. Run this before checking in.
    * Run without error trapping. Use this for investigating why test cases unexpectedly fail.
    * Run "batch mode". It is assumed that those test cases that need a human being in front of the monitor are '''not''' executed when running in batch mode.
    * Run with error trapping and return a log (vector of text vectors) reporting details. Run this before checking your code into a Code Management System like !SubVersion. See `Tester.Run` for details.
    * Run without error trapping. Use this for investigating why test cases crash or fail. See `Tester.RunDebug` for details.
    * Run "batch mode". It is assumed that those test cases that need a human being in front of the monitor are '''not''' executed when running in batch mode. See `Tester.RunBatchTests` for details. (Of course it is best not to rely on a human being, but this cannot always be avoided.)
Line 31: Line 29:
    1. `stopFlag` (0=use error trapping)
    1. `debugFlag` (0=use error trapping)
Line 33: Line 31:
 1. Every test function must return a scalar integer with one of:
    * `0`: the test case is okay.
    * `1`: the test case failed.
    * `-1`: means the test case did not execute itself because it is not designed to run in batch mode.
 1. Every test function must return a result. Use one of the following niladic functions as explicit result:

|| `∆OK` || Passed ||
|| `∆Failed` || Unexpected result||
|| `∆NoBatchTest` || Not executed because `batchFlag` was 1.||
|| `∆InActive` || Not executed because the test case is inactive (not ready, buggy, whatever)||
|| `∆LinuxOnly` || Not executed because runs under Linux only.||
|| `∆LinuxOrMacOnly` || Not executed because runs under Linux/Mac OS only.||
|| `∆LinuxOrWindowsOnly`|| Not executed because runs under Linux/Windows only.||
|| `∆MacOrWindowsOnly` || Not executed because runs under Mac OS/Windows only.||
|| `∆MacOnly` || Not executed because runs under Mac OS only.||
|| `∆NoAcreTests` || Not executed because it's acre related.||
|| `∆WindowsOnly` || Not executed because runs under Windows only.||

These niladic functions are made available as Helpers - see there.
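
Taken together, a test function might look like this minimal sketch. This is a hypothetical example: the name `Test_foo_001`, the expression being checked and the comment are illustrative; only the header signature and the `∆OK`/`∆Failed` constants follow the rules stated above:

{{{
     R←Test_foo_001(debugFlag batchFlag);res
    ⍝ Exercise a hypothetical method "foo"
     R←∆Failed            ⍝ pessimistic default
     res←+/1 2 3          ⍝ stands for the call you actually want to test
     :If res=6
         R←∆OK
     :EndIf
}}}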

== Work flow ==

No matter which of the `Run*` functions (`Run`, `RunBatchTests`, `RunDebug`, `RunThese`) you are going to call, the workflow is always the same:

=== INI files ===

First of all, the `Run*` method checks whether there is a file `testcases_{computername}.ini`. If it exists, that INI file is processed. Use this to specify important computer-dependent variables.

It then checks whether there is a file `testcases.ini`. If it exists, that INI file is processed too. Use this to specify general settings that do not depend on a certain computer/environment.

Note that if one or both INI files exist there will be a flat namespace `{yourTestNamespace}.INI`, meaning that sections in the INI files are ignored. An example: if your test functions are hosted by a namespace "foo" and your INI file specifies a variable `hello` holding "world" then:

{{{
       'world'≡#.foo.INI.hello
1
}}}

=== Initialisation ===

Now the `Run*` method checks whether there is a function `Initial` in the hosting namespace. If that is the case then the function is executed.

Note that the function must be either niladic or monadic and must not return a result. If the function is monadic then a Boolean is passed via the right argument telling the function that the test suite is going to run in batch mode if true.

Use this function to create stuff that's needed by '''all''' test cases, or tell the user something important - of course only if the batch flag is false.

It might be a good idea to call a function tidying up in line 1, just in case a failing test case has left something behind; see below for details.

Another thing to mention is that `Initial` is not a bad place to establish the Helpers - see there for details.
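
An `Initial` function might look like this sketch. It assumes a `Cleanup` function as described below exists in the same namespace; the session message is, of course, illustrative:

{{{
     ∇ Initial batchFlag
      ⍝ Prepare everything needed by ALL test cases; monadic, no result
       Cleanup                            ⍝ remove leftovers of earlier runs
       #.Tester.EstablishHelpersIn ⎕THIS  ⍝ make the Helpers available
       :If ~batchFlag
           ⎕←'Note: some test cases will ask for confirmation!'
       :EndIf
     ∇
}}}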

=== Finally: running the test cases ===

Now the test cases are executed one by one.

=== Tidying up ===

After the last test case was executed the `Run*` function checks whether there is a function "Cleanup" in the namespace hosting your test cases. If that's the case then this function is executed. Such a function must be a niladic, no-result returning function.
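
Such a `Cleanup` function could be as simple as the following sketch; the name it erases is just an example:

{{{
     ∇ Cleanup
      ⍝ Niladic, no result: remove anything the test cases may have left behind
       {}⎕EX'TestTempData'   ⍝ "TestTempData" is a hypothetical global
     ∇
}}}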

=== INI file again ===

Finally the namespace "INI" holding variables populated from your INI file(s) is deleted.

== Helpers ==
Over time it emerged that certain helpers are very useful in the namespace hosting the test cases. Therefore with version 1.4.0 a method `EstablishHelpersIn` was introduced that takes a reference as the right argument. That ref should point to the namespace hosting the test cases. If you are executing `#.Tester.EstablishHelpersIn` from within that namespace you can specify an empty vector as the right argument: it then defaults to `⎕IO⊃⎕RSI`.

One of the helpers is the method `ListHelpers` which returns a table with all names in the first column and the first line of the function (which is expected to be a comment telling what the function actually does) in the second column:

{{{
      ListHelpers
 Run                       Run all test cases
 RunDebug                  Run all test cases with DEBUG flag on
 RunThese                  Run just the specified tests.
 RunBatchTests             Run all batch tests
 RunBatchTestsInDebugMode  Run all batch tests in debug mode (no error trapping) and with stopFlag←1.
 E                         Get all functions into the editor starting their names with "Test_".
 L                         Prints a list with all test cases and the first comment line to the session.
 G                         Prints all groups to the session.
 FailsIf                   Usage : →FailsIf x, where x is a boolean scalar
 PassesIf                  Usage : →PassesIf x, where x is a boolean scalar
 GoToTidyUp                Returns either an empty vector or "Label" which defaults to ∆TidyUp
 RenameTestFnsTo           Renames a test function and tell acre.
 ListHelpers               Lists all helpers available from the `Tester` class.
 ⍝ Niladic functions returning an integer constant, see table above.
}}}

The helpers fall into four groups:

=== Running test cases ===

`Run`, `RunDebug`, `RunThese`, `RunBatchTests` and `RunBatchTestsInDebugMode` run all or selected test cases with or without error trapping. They are shortcuts for calling the methods in `Tester`.

=== Flow control ===

`FailsIf`, `PassesIf` and `GoToTidyUp` control the flow in test functions.
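
A sketch of how these helpers are typically used in a test function: `→FailsIf x` leaves the function (keeping the pessimistic `∆Failed`) when `x` is true, while `→GoToTidyUp x` jumps to the `∆TidyUp` label. The test function itself is hypothetical:

{{{
     R←Test_000(debugFlag batchFlag);tmp
    ⍝ Sketch: flow control in a test case
     R←∆Failed
     tmp←2×3
     →FailsIf tmp≠6           ⍝ exit with ∆Failed if the check fails
     →GoToTidyUp tmp>100      ⍝ jump to ∆TidyUp (keeping ∆Failed) on failure
     R←∆OK
    ∆TidyUp:                  ⍝ tidy up here; executed on success as well
}}}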

=== Test function template ===

In case the namespace has no test functions in it (yet) the `EstablishHelpersIn` function also creates a test function template named `Test_000`.

=== "Constants" ===

These niladic functions behave like constants and are useful for assigning explicit results in test functions.

=== Misc ===

`E`, `G` and `L` allow the user to edit all test functions, list all test cases, and list all test groups, if any.

=== Examples ===

The test template contains examples for how to use the flow control functions.

The `Run*` functions are discussed later.

The most important helper function is probably `L`: with a large number of test functions it's the only reasonable way to gather information. Since version 1.8.0 `L` requires a right argument (empty or group name) and an optional left argument (test case numbers).

Examples from the `Fire` development workspace:

{{{
      ⍴L ''
99 2
      G
acre
InternalMethods
List
Misc
Replace
ReportGhosts
Search
      L 'List'
Test_List_001             Unnamed NSs & GUI instances are ignored.
Test_List_002             Just GUI instances are ignored
Test_List_003             Nothing is ignored at all; instances of GUI objects are treated like namespaces
Test_List_004             Nothing is ignored but class instances
Test_List_005             Scan all object types but with "Recursive"≡None
Test_List_006             Scan all object types but with "Recursive"≡Just one level
Test_List_007             Search for "Hello" with a ref and an unnamed namespace
    1 7 99 L 'List'
Test_List_001             Unnamed NSs & GUI instances are ignored.
Test_List_007             Search for "Hello" with a ref and an unnamed namespace
}}}

=== RunThese ===

A particularly helpful method while developing/enhancing stuff is `RunThese`. It allows you to run just selected test functions rather than a whole test suite, while still processing any INI files and executing `Initial` and `Cleanup` if they exist.

`RunThese` offers the following options:

{{{
      RunThese 1 2 ⍝ Execute test cases 1 & 2 that do not belong to any group.
      RunThese ¯1 ¯2 ⍝ Same but stop just before a test case is actually executed.
      RunThese 'Group1' ⍝ Execute all test cases belonging to group 1.
      RunThese 'Group1' (2 3) ⍝ Execute test cases 2 & 3 of group 1.
      RunThese 'Group1' 2 3 ⍝ Same as before.
      RunThese 'L' ⍝ Execute all test cases of the group that start with "L" - if there is just one
      RunThese 'L*' ⍝ Execute all test cases of all groups starting with "L"
      RunThese 'Test_Group1_001' ⍝ Executes just this test case.
      RunThese 0 ⍝ Run all test cases but stop just before the execution.
}}}

== Best Practices ==

 * Try to keep your test cases simple and test just one thing at a time, for example just one method at a time, and only if that method is simple.
 * For more complex methods create a group with the method name as group name.
 * Create everything you need on the fly and tidy up afterwards. Or more precisely: tidy up (leftovers!), prepare, test, tidy up again.
 * Avoid a test case relying on changes made by an earlier test case. It's a tempting thing to do but you will almost certainly regret this at one stage or another.
 * Notice that the DRY principle (don't repeat yourself) can and should be ignored in test cases.
Line 39: Line 194:
<<SeeSaw(section="methods", toshow="<<Show>> the methods", tohide="<<Hide>> the methods", bg="#FEE1A5", speed="Slow")>>
{{{{#!wiki seesaw/methods/methods-bg/hide

Note that for convenience there are a couple of Helpers which are shortcuts for some of the methods listed here - see there.
Line 53: Line 212:
Returns the comment expected in line 1 of all test cases found in "refToTestNamespace".
Returns the name and the comment expected in line 1 of all test cases found in "refToTestNamespace".
Line 61: Line 220:

Sets a global variable `TestCasesExecutedAt` in the namespace hosting the test cases.
Line 72: Line 233:
Sets a global variable `TestCasesExecutedAt` in the namespace hosting the test cases.
Line 78: Line 241:
Runs all test cases in "refToTestNamespace" <b>without</b> error trapping. If a test case encounters an invalid result it stops. Use this function to investigate the details after "Run" detected a problem.
This will work only if you use a particualar strategy when checking results in a test case
Runs all test cases in "refToTestNamespace" '''without''' error trapping. If a test case encounters an invalid result it stops. Use this function to investigate the details after "Run" detected a problem.

This will work only if you use a particular strategy when checking results in a test case.

Sets a global variable `TestCasesExecutedAt` in the namespace hosting the test cases.
Line 83: Line 249:
{{{
      {log}←testCaseNos RunTheseIn refToTestNamespace
      {log}←(groupName testCaseNos) RunTheseIn refToTestNamespace
}}}

Like `RunDebug` but it runs just "testCaseNos" in "refToTestNamespace". Use this in order to debug one or some particular test cases.

If a "groupName" was specified then "testCaseNos" may be empty, in which case all test cases of the given group are executed.

Sets a global variable `TestCasesExecutedAt` in the namespace hosting the test cases.
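
For example (assuming your test cases live in a namespace `#.TestCases`; the group name is illustrative):

{{{
      log←2 10 #.Tester.RunTheseIn #.TestCases           ⍝ debug test cases 2 and 10
      log←('MyGroup' ⍬) #.Tester.RunTheseIn #.TestCases  ⍝ all test cases of group "MyGroup"
}}}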
Line 87: Line 264:
}}}}
Line 88: Line 267:
<<SeeSaw(section="samples", toshow="<<Show>> the examples", tohide="<<Hide>> the examples", bg="#FEE1A5", speed="Slow")>>
{{{{#!wiki seesaw/samples/samples-bg/hide

{{{
      )load APLTree\WinFile
Loaded....
      )cs #.TestCases
      ⊣#.Tester.EstablishHelpersIn ⎕THIS
 Run            Run all test cases
 RunDebug       Run all test cases with DEBUG flag on
 RunThese       Run just the specified tests
 RunBatchTests  Run all batch tests
 E              Get all functions starting their names with "Test_" into the editor.
 L              Prints a list with all test cases and the first comment line to the session
 FailsIf        Usage : →FailsIf x, where x is a boolean scalar
 PassesIf       Usage : →PassesIf x, where x is a boolean scalar
      L
Exercise "MkDir"
Exercise "RmDir"
Exercise "Dir"
Exercise "DirX"
Line 89: Line 289:
      Run
--- Tests started at 2011-10-18 07:15:25 on #.TestCases -------------------------------------
  Test_001: Exercise "MkDir"
  Test_002: Exercise "RmDir"
  Test_003: Exercise "Dir"
  Test_004: Exercise "DirX"
  Test_005: Exercise "cd"
  Test_006: Exercise "Delete"
  Test_007: Exercise "DateOf"
  Test_008: Exercise "DoesExistDir"
  Test_009: Exercise "DoesExistFile"
  Test_010: Exercise "DoesExist"
  Test_011: Exercise "GetAllDrives"
  Test_012: Exercise "GetDriveAndType"
  Test_013: Exercise "IsDirEmpty"
  Test_014: Exercise "ListDirXIndices"
  Test_015: Exercise "ListFileAttributes"
  Test_016: Exercise "YoungerThan"
  Test_017: Exercise "GetTempFileName"
  Test_018: Exercise "GetTempPath"
  Test_019: Exercise "WriteAnsiFile" and "ReadAnsiFile"
  Test_020: Exercise "CopyTo"
  Test_021: Exercise "MoveTo"
  Test_022: Exercise "ListDirsOnly"
  Test_023: Exercise "CheckPath"
  Test_024: Exercise "PWD"
  Test_025: Exercise "IsFilenameOkay"
  Test_026: Exercise "IsFoldername"
  Test_MyGroup_002: Exercise MyGroup -2-
  Test_MyGroup_020: Exercise MyGroup -20-
  Test_MyGroup_199: Exercise MyGroup -199-
 ----------------------------------------------------------------------------------------------
   29 test cases executed
   0 test cases failed
   0 test cases broken

      RunBatchTests
0

      RunThese 2 10 ⍝ Just test cases 2 and 10
--- Tests started at 2011-10-18 07:15:25 on #.TestCases -------------------------------------
  Test_002: Exercise "RmDir"
  Test_010: Exercise "DoesExist"
 ----------------------------------------------------------------------------------------------
   2 test cases executed
   0 test cases failed
   0 test cases broken

      RunThese ¯2 ⍝ A negative number...
--- Tests started at 2013-01-08 14:35:28 on #.TestCases --------------------------------------------------------

Run__[76] ⍝ ... stop just before the test case gets executed
      →⎕LC ⍝
  Test_002 (1 of 1) : Exercise "RmDir"
 -----------------------------------------------------------------------------------------------------------------
   1 test case executed
   0 test cases failed
   0 test cases broken

⍝ Use `RunThese 0` for executing all test cases with stops.

      RunThese 'MyGroup' (2 199) ⍝ Test cases 2 and 199 of "MyGroup"
--- Tests started at 2011-10-18 07:15:25 on #.TestCases -------------------------------------
  Test_MyGroup_002: Exercise MyGroup -2-
  Test_MyGroup_199: Exercise MyGroup -199-
 ----------------------------------------------------------------------------------------------
   2 test cases executed
   0 test cases failed
   0 test cases broken

⍝ Since version 1.7.0 you can specify the group name with or without a prefix `Test_`:

      RunThese 'Test_MyGroup' (2 199) ⍝ Test cases 2 and 199 of "MyGroup"
--- Tests started at 2011-10-18 07:15:25 on #.TestCases -------------------------------------
  Test_MyGroup_002: Exercise MyGroup -2-
  Test_MyGroup_199: Exercise MyGroup -199-
 ----------------------------------------------------------------------------------------------
   2 test cases executed
   0 test cases failed
   0 test cases broken

      RunThese ('MyGroup' ⍬) ⍝ All test cases of group "MyGroup"
--- Tests started at 2011-10-18 07:15:25 on #.TestCases -------------------------------------
  Test_002: Exercise MyGroup -2-
  Test_020: Exercise MyGroup -20-
  Test_199: Exercise MyGroup -199-
 ----------------------------------------------------------------------------------------------
   3 test cases executed
   0 test cases failed
   0 test cases broken

      RunDebug 1 ⍝ Stop just before any test case gets executed
--- Tests started at 2012-12-21 09:05:59 on #.TestCases --------------------------------------

Run__[71]
⍝ Now you can trace into the test case if you want.

      RunDebug ¯4 ⍝ Stop at test case number 4
--- Tests started at 2011-10-18 07:16:03 on #.TestCases -------------------------------------

Run__[71]
⍝ Now you can trace into test case number 4.
}}}

}}}}
Line 93: Line 398:

<<Include(APLTreeDownloads)>>
Line 99: Line 406:
== Download ==

Tester


Tester is part of the CategoryAplTree project.

Overview

The purpose of this class is to provide a framework for testing all the projects of the APLTree project. Only with such a framework is it possible to make changes to the test suite without too much hassle.

You might find the framework flexible enough to suit your own needs when it comes to implementing tests, even independently from the APLTree project.

Details

Terminology

Note that test cases causing a crash are considered "broken". Test cases that do not return the expected result are considered "failing".

Assumptions and preconditions

  1. The #.Tester class assumes that all your tests are hosted by an ordinary namespace. All methods running test cases take a ref to that namespace as an argument.

  2. All test functions inside that namespace are expected to start their names with the string Test_ followed by digits. Update: since version 1.1 test functions can be grouped. For example, in order to group all functions testing a method foo the test functions are named Test_foo_001 (or Test_foo_01 or Test_foo_00001), Test_foo_002, Test_foo_003 and so on.

  3. The first line after the header must carry a comment telling what is actually tested. Keep in mind that later you might need to find out which test cases are testing a certain method!
  4. It is assumed that you have three different scenarios when you want to run test cases with a different behaviour:
    • Run with error trapping and return a log (vector of text vectors) reporting details. Run this before checking your code into a Code Management System like SubVersion. See Tester.Run for details.

    • Run without error trapping. Use this for investigating why test cases crash or fail. See Tester.RunDebug for details.

    • Run "batch mode". It is assumed that those test cases that need a human being in front of the monitor are not executed when running in batch mode. See Tester.RunBatchTests for details. (Of course it is best not to rely on a human being, but this cannot always be avoided.)

  5. Every test function must accept a right argument which is a two-item vector of Booleans:
    1. debugFlag (0=use error trapping)

    2. batchFlag (0=does not run in batch mode)

  6. Every test function must return a result. Use one of the following niladic functions as explicit result:

 ∆OK                   Passed
 ∆Failed               Unexpected result
 ∆NoBatchTest          Not executed because batchFlag was 1.
 ∆InActive             Not executed because the test case is inactive (not ready, buggy, whatever)
 ∆LinuxOnly            Not executed because runs under Linux only.
 ∆LinuxOrMacOnly       Not executed because runs under Linux/Mac OS only.
 ∆LinuxOrWindowsOnly   Not executed because runs under Linux/Windows only.
 ∆MacOrWindowsOnly     Not executed because runs under Mac OS/Windows only.
 ∆MacOnly              Not executed because runs under Mac OS only.
 ∆NoAcreTests          Not executed because it's acre related.
 ∆WindowsOnly          Not executed because runs under Windows only.

These niladic functions are made available as Helpers - see there.

Work flow

No matter which of the Run* functions (Run, RunBatchTests, RunDebug, RunThese) you are going to call, the workflow is always the same:

INI files

First of all, the Run* method checks whether there is a file testcases_{computername}.ini. If it exists, that INI file is processed. Use this to specify important computer-dependent variables.

It then checks whether there is a file testcases.ini. If it exists, that INI file is processed too. Use this to specify general settings that do not depend on a certain computer/environment.

Note that if one or both INI files exist there will be a flat namespace {yourTestNamespace}.INI, meaning that sections in the INI files are ignored. An example: if your test functions are hosted by a namespace "foo" and your INI file specifies a variable hello holding "world" then:

       'world'≡#.foo.INI.hello
1

Initialisation

Now the Run* method checks whether there is a function Initial in the hosting namespace. If that is the case then the function is executed.

Note that the function must be either niladic or monadic and must not return a result. If the function is monadic then a Boolean is passed via the right argument telling the function that the test suite is going to run in batch mode if true.

Use this function to create stuff that's needed by all test cases, or tell the user something important - of course only if the batch flag is false.

It might be a good idea to call a function tidying up in line 1, just in case a failing test case has left something behind; see below for details.

Another thing to mention is that Initial is not a bad place to establish the Helpers - see there for details.

Finally: running the test cases

Now the test cases are executed one by one.

Tidying up

After the last test case was executed the Run* function checks whether there is a function "Cleanup" in the namespace hosting your test cases. If that's the case then this function is executed. Such a function must be a niladic, no-result returning function.

INI file again

Finally the namespace "INI" holding variables populated from your INI file(s) is deleted.

Helpers

Over time it emerged that certain helpers are very useful in the namespace hosting the test cases. Therefore with version 1.4.0 a method EstablishHelpersIn was introduced that takes a reference as the right argument. That ref should point to the namespace hosting the test cases. If you are executing #.Tester.EstablishHelpersIn from within that namespace you can specify an empty vector as the right argument: it then defaults to ⎕IO⊃⎕RSI.

One of the helpers is the method ListHelpers which returns a table with all names in the first column and the first line of the function (which is expected to be a comment telling what the function actually does) in the second column:

      ListHelpers
 Run                       Run all test cases                                                           
 RunDebug                  Run all test cases with DEBUG flag on                                        
 RunThese                  Run just the specified tests.                                                
 RunBatchTests             Run all batch tests                                                          
 RunBatchTestsInDebugMode  Run all batch tests in debug mode (no error trapping) and with stopFlag←1.   
 E                         Get all functions into the editor starting their names with "Test_".         
 L                         Prints a list with all test cases and the first comment line to the session. 
 G                         Prints all groups to the session.                                            
 FailsIf                   Usage : →FailsIf x, where x is a boolean scalar                              
 PassesIf                  Usage : →PassesIf x, where x is a boolean scalar                             
 GoToTidyUp                Returns either an empty vector or "Label" which defaults to ∆TidyUp          
 RenameTestFnsTo           Renames a test function and tell acre.                                       
 ListHelpers               Lists all helpers available from the `Tester` class.                         
 ⍝ Niladic functions returning an integer constant, see table above.

The helpers fall into four groups:

Running test cases

Run, RunDebug, RunThese, RunBatchTests and RunBatchTestsInDebugMode run all or selected test cases with or without error trapping. They are shortcuts for calling the methods in Tester.

Flow control

FailsIf, PassesIf and GoToTidyUp control the flow in test functions.

Test function template

In case the namespace has no test functions in it (yet) the EstablishHelpersIn function also creates a test function template named Test_000.

"Constants"

These niladic functions behave like constants and are useful for assigning explicit results in test functions.

Misc

E, G and L allow the user to edit all test functions, list all test cases, and list all test groups, if any.

Examples

The test template contains examples for how to use the flow control functions.

The Run* functions are discussed later.

The most important helper function is probably L: with a large number of test functions it's the only reasonable way to gather information. Since version 1.8.0 L requires a right argument (empty or group name) and an optional left argument (test case numbers).

Examples from the Fire development workspace:

      ⍴L ''
99 2
      G
acre           
InternalMethods
List           
Misc           
Replace        
ReportGhosts   
Search         
      L 'List'
Test_List_001             Unnamed NSs & GUI instances are ignored.                                        
Test_List_002             Just GUI instances are ignored                                                  
Test_List_003             Nothing is ignored at all; instances of GUI objects are treated like namespaces
Test_List_004             Nothing is ignored but class instances                                          
Test_List_005             Scan all object types but with "Recursive"≡None                                 
Test_List_006             Scan all object types but with "Recursive"≡Just one level                       
Test_List_007             Search for "Hello" with a ref and an unnamed namespace                          
    1 7 99 L 'List'
Test_List_001             Unnamed NSs & GUI instances are ignored.               
Test_List_007             Search for "Hello" with a ref and an unnamed namespace   

RunThese

A particularly helpful method while developing/enhancing stuff is RunThese. It allows you to run just selected test functions rather than a whole test suite, while still processing any INI files and executing Initial and Cleanup if they exist.

RunThese offers the following options:

      RunThese 1 2    ⍝ Execute test cases 1 & 2 that do not belong to any group.
      RunThese ¯1 ¯2  ⍝ Same but stop just before a test case is actually executed.
      RunThese 'Group1' ⍝ Execute all test cases belonging to group 1.
      RunThese 'Group1' (2 3) ⍝ Execute test cases 2 & 3 of group 1.
      RunThese 'Group1' 2 3   ⍝ Same as before.
      RunThese 'L'    ⍝ Execute all test cases of the group that start with "L" - if there is just one
      RunThese 'L*'   ⍝ Execute all test cases of all groups starting with "L"
      RunThese 'Test_Group1_001'  ⍝ Executes just this test case.
      RunThese 0      ⍝ Run all test cases but stop just before the execution.

Best Practices

  • Try to keep your test cases simple and test just one thing at a time, for example just one method at a time, and only if that method is simple.
  • For more complex methods create a group with the method name as group name.
  • Create everything you need on the fly and tidy up afterwards. Or more precisely: tidy up (leftovers!), prepare, test, tidy up again.
  • Avoid a test case relying on changes made by an earlier test case. It's a tempting thing to do but you will almost certainly regret this at one stage or another.
  • Notice that the DRY principle (don't repeat yourself) can and should be ignored in test cases.

Methods

Show the methods

Note that for convenience there are a couple of Helpers which are shortcuts for some of the methods listed here - see there.

GetAllTestFns

r←GetAllTestFns refToTestNamespace

Returns the names of all test functions found in namespace "refToTestNamespace".

ListTestCases

r←ListTestCases refToTestNamespace;list

Returns the name and the comment expected in line 1 of all test cases found in "refToTestNamespace".

Run

{log}←Run refToTestNamespace;flags

Runs all test cases in "refToTestNamespace" with error trapping. Broken as well as failing tests are reported in the session as such but they don't stop the program from carrying on.

Sets a global variable TestCasesExecutedAt in the namespace hosting the test cases.

RunBatchTests

{(rc log)}←RunBatchTests refToTestNamespace

Runs all test cases in "refToTestNamespace" but tells the test functions that this is a batch run, meaning that test cases which report to the session for any reason, as well as those that need a human being for interaction, should quit silently. Returns 0 for okay or 1 in case one or more test cases are broken or failed. This method can run in a runtime environment as well as in an automated test environment.

Sets a global variable TestCasesExecutedAt in the namespace hosting the test cases.

RunDebug

{log}←{stop}RunDebug refToTestNamespace

Runs all test cases in "refToTestNamespace" without error trapping. If a test case encounters an invalid result it stops. Use this function to investigate the details after "Run" detected a problem.

This will work only if you use a particular strategy when checking results in a test case.

Sets a global variable TestCasesExecutedAt in the namespace hosting the test cases.

RunTheseIn

      {log}←testCaseNos RunTheseIn refToTestNamespace
      {log}←(groupName testCaseNos) RunTheseIn refToTestNamespace

Like RunDebug but it runs just "testCaseNos" in "refToTestNamespace". Use this in order to debug one or some particular test cases.

If a "groupName" was specified then "testCaseNos" may be empty, in which case all test cases of the given group are executed.

Sets a global variable TestCasesExecutedAt in the namespace hosting the test cases.

Version

Returns version (x.y.z) and date (yyyy-mm-dd) of the class #.Tester

Examples

Show the examples

      )load APLTree\WinFile
Loaded....
      )cs #.TestCases
      ⊣#.Tester.EstablishHelpersIn ⎕THIS
 Run            Run all test cases                                                          
 RunDebug       Run all test cases with DEBUG flag on                                       
 RunThese       Run just the specified tests                                                
 RunBatchTests  Run all batch tests                                                         
 E              Get all functions starting their names with "Test_" into the editor.        
 L              Prints a list with all test cases and the first comment line to the session 
 FailsIf        Usage : →FailsIf x, where x is a boolean scalar                             
 PassesIf       Usage : →PassesIf x, where x is a boolean scalar                            
      L
Exercise "MkDir"                           
Exercise "RmDir"                           
Exercise "Dir"                             
Exercise "DirX"                          
...
      Run
--- Tests started at 2011-10-18 07:15:25  on #.TestCases -------------------------------------
  Test_001: Exercise "MkDir"
  Test_002: Exercise "RmDir"
  Test_003: Exercise "Dir"
  Test_004: Exercise "DirX"
  Test_005: Exercise "cd"
  Test_006: Exercise "Delete"
  Test_007: Exercise "DateOf"
  Test_008: Exercise "DoesExistDir"
  Test_009: Exercise "DoesExistFile"
  Test_010: Exercise "DoesExist"
  Test_011: Exercise "GetAllDrives"
  Test_012: Exercise "GetDriveAndType"
  Test_013: Exercise "IsDirEmpty"
  Test_014: Exercise "ListDirXIndices"
  Test_015: Exercise "ListFileAttributes"
  Test_016: Exercise "YoungerThan"
  Test_017: Exercise "GetTempFileName"
  Test_018: Exercise "GetTempPath"
  Test_019: Exercise "WriteAnsiFile" and "ReadAnsiFile"
  Test_020: Exercise "CopyTo"
  Test_021: Exercise "MoveTo"
  Test_022: Exercise "ListDirsOnly"
  Test_023: Exercise "CheckPath"
  Test_024: Exercise "PWD"
  Test_025: Exercise "IsFilenameOkay"
  Test_026: Exercise "IsFoldername"
  Test_MyGroup_002: Exercise MyGroup -2-
  Test_MyGroup_020: Exercise MyGroup -20-
  Test_MyGroup_199: Exercise MyGroup -199-
 ----------------------------------------------------------------------------------------------
   29 test cases executed
   0 test cases failed
   0 test cases broken

      RunBatchTests
0

      RunThese 2 10     ⍝ Just test cases 2 and 10
--- Tests started at 2011-10-18 07:15:25  on #.TestCases -------------------------------------
  Test_002: Exercise "RmDir"
  Test_010: Exercise "DoesExist"
 ----------------------------------------------------------------------------------------------
   2 test cases executed
   0 test cases failed
   0 test cases broken

      RunThese ¯2     ⍝ A negative number...
--- Tests started at 2013-01-08 14:35:28  on #.TestCases --------------------------------------------------------

Run__[76]             ⍝ ... stop just before the test case gets executed
      →⎕LC ⍝
  Test_002 (1 of 1) : Exercise "RmDir"
 -----------------------------------------------------------------------------------------------------------------
   1 test case executed
   0 test cases failed
   0 test cases broken

⍝ Use `RunThese 0` for executing all test cases with stops.

      RunThese 'MyGroup' (2 199 )  ⍝ Test cases 2 and 199 of "MyGroup"
--- Tests started at 2011-10-18 07:15:25  on #.TestCases -------------------------------------
  Test_MyGroup_002: Exercise MyGroup -2-
  Test_MyGroup_199: Exercise MyGroup -199-
 ----------------------------------------------------------------------------------------------
   2 test cases executed
   0 test cases failed
   0 test cases broken

⍝ Since version 1.7.0 you can specify the group name with or without a prefix `Test_`:

      RunThese 'Test_MyGroup' (2 199 )  ⍝ Test cases 2 and 199 of "MyGroup"
--- Tests started at 2011-10-18 07:15:25  on #.TestCases -------------------------------------
  Test_MyGroup_002: Exercise MyGroup -2-
  Test_MyGroup_199: Exercise MyGroup -199-
 ----------------------------------------------------------------------------------------------
   2 test cases executed
   0 test cases failed
   0 test cases broken

      RunThese ('MyGroup' ⍬)  ⍝ All test cases of group "MyGroup"
--- Tests started at 2011-10-18 07:15:25  on #.TestCases -------------------------------------
  Test_MyGroup_002: Exercise MyGroup -2-
  Test_MyGroup_020: Exercise MyGroup -20-
  Test_MyGroup_199: Exercise MyGroup -199-
 ----------------------------------------------------------------------------------------------
   3 test cases executed
   0 test cases failed
   0 test cases broken

      RunDebug 1   ⍝ Stop just before any test case gets executed
--- Tests started at 2012-12-21 09:05:59  on #.TestCases --------------------------------------

Run__[71]
⍝ Now you can trace into the test case if you want.

      RunDebug ¯4   ⍝ Stop at test case number 4
--- Tests started at 2011-10-18 07:16:03  on #.TestCases -------------------------------------

Run__[71]
⍝ Now you can trace into test case number 4.

Project Page

For bug reports, future enhancements and a full version history, see Tester/ProjectPage.

Version Information

Original author: KaiJaeger

Responsible: KaiJaeger

Email: kai@aplteam.com


CategoryAplTree

Tester (last edited 2018-03-03 11:47:23 by KaiJaeger)