SAF-Test Framework Overview

version 0.9

Content

This specification is organized into six parts, listed below:

  • Overview
    • Purpose
    • Scope
    • Framework organization
    • Spec Directory
  • Test Framework Structure
    • Environment Tool
    • Directory Structure for SAF-Test Framework
    • Test Case Naming
    • Test Case Return Code
    • License
    • Result File Format
  • Test Suite Development Requirements
    • Conformance Testing Definition
    • API Specification Coverage Requirement
    • Adding Test Suites to the Framework
    • Example assertion.xml for SAF Specification A.01.01
    • Test Case Assertion for SAF Specification B.01.01 and higher
    • Example coverage.txt
    • Test Case Reuse
  • Test Suite Execution Step by Step
    • Preparation
    • Building
    • Executing
  • Roles and Responsibilities
  • Related Links

Overview

Purpose

The service availability solution helps meet end-user expectations for voice, data and multimedia services delivered with the dependability of traditional telecommunications. The Service Availability™ Forum (SAF) is addressing this by fostering an ecosystem to enable the use of commercial off-the-shelf building blocks in the creation of high availability network infrastructure products, systems and services. SAF will accomplish this through developing and publishing high availability and management software interface specifications as well as promoting and facilitating their adoption by the industry.

The Service Availability Forum Test (SAF-Test) project intends to provide conformance test suites for the SAF APIs, independent of any particular implementation. SAF-Test focuses on API compliance, not functional testing. Users can run these test suites against their own implementation.

SAF defines many specifications. Currently, SAF-Test supports the AIS and HPI specifications.

This document has the following purposes:

  • Outline the source code directory structure for developers contributing to the test suite.
  • Outline how to execute the tests in this framework and obtain the result files.

Scope

This document presents the concepts and policies for creating test suites for the AIS and HPI specifications, including test categories, approach and methodology, and rules for execution. Details for writing test cases are not included here.

Framework organization

To aid in development and testing, a build framework has been set up, consisting of the following items:

  • Spec directories: a directory for HPI and for each AIS service (e.g. AIS-mem-<version>)
  • Automated scripts to run the tests and collect the result file
  • Log directory
  • Doc directory
  • COPYING

Spec Directory

Each specification directory should contain the following:

  • header files
  • LDFLAGS
  • Automated scripts to run the tests and collect the result file
  • Log directory

Test Framework Structure

Environment Tool

As with most test environments, general utilities are created to support the execution, data collection, and report analysis of test suites. A directory is created to allow a repository for logging all the execution information. It is the goal of this framework that support utilities and tools will be created in such a way that they can be shared among other test suites within the test framework.

There are no restrictions on the required use of any particular test harness. Rather, if a particular test suite requires a special test harness, then that test harness is included with that test suite. The test harness must be in a separate directory from the actual test cases.

In conjunction with the environment tools that support execution of the framework test suites, information about the status of development efforts and execution nuances needs to be posted to a common location. Currently, the following tools exist in this framework:

run_tests.sh -- runs all test cases or a single test case

report.sh -- summarizes all execution results of the test cases

other tools -- scripts that are convenient for developing and maintaining test cases

Directory Structure for SAF-Test Framework

The following summarizes the general form of the directory structure (a concrete example follows the list):

  • saf-test/AUTHORS -- lists all the developers in this project
  • saf-test/COPYING -- license file (currently GPL, version 2)
  • saf-test/ChangeLog -- records the release history of this project
  • saf-test/README -- README file
  • saf-test/Makefile -- builds all the test suites in this framework
  • saf-test/run_tests.sh -- the tool to run one or all available tests in the test suite and log the results
  • saf-test/report.sh -- the tool to summarize all execution logs and create test reports
  • saf-test/doc -- this directory includes all documents
  • saf-test/log -- this directory includes compile result information and is created automatically during the build
  • saf-test/<spec>/LDFLAGS -- library which is to be tested
  • saf-test/<spec>/Makefile -- builds all the related test cases
  • saf-test/<spec>/run_tests.sh -- the tool to run one or all tests for the current specification and log the results
  • saf-test/<spec>/report.sh -- the tool to summarize all execution logs and create test reports
  • saf-test/<spec>/include -- header files of the SAF specifications
  • saf-test/<spec>/log -- this directory includes compile result information and is created automatically during the build
  • saf-test/<spec>/src/<service>/coverage.txt -- states which APIs are tested and which APIs are NOT tested
  • saf-test/<spec>/src/<service>/<function area>/<APIName>/<Case Name> -- test cases for an API
  • saf-test/<spec>/src/<service>/<function area>/<APIName>/manual/<Case Name> -- manual test cases for an API
  • saf-test/<spec>/src/<service>/<function area>/<APIName>/assertion.xml -- specifies which tests are included
  • saf-test/<spec>/src/<service>/<function area>/<APIName>/coverage.txt -- specifies which assertions are covered by the current tests (Yes/No)
  • saf-test/utilities -- this directory includes tools that help users develop and maintain test cases
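
For illustration only, a test for the saClmClusterNodeGet API in a hypothetical spec directory AIS-clm-B.01.01 (the spec, service and function area names below are assumptions invented for this example) would be laid out roughly as follows:

saf-test/
    Makefile
    run_tests.sh
    report.sh
    AIS-clm-B.01.01/                      (<spec>)
        LDFLAGS
        Makefile
        include/
        log/
        src/
            clm/                          (<service>)
                coverage.txt
                membership/               (<function area>)
                    saClmClusterNodeGet/  (<APIName>)
                        assertion.xml
                        coverage.txt
                        1.c
                        2.c
                        manual/
                            3.c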

 

Test Case Naming

All test cases are created in the saf-test/<spec>/src/<service>/<function area>/<APIName>/ directory or the saf-test/<spec>/src/<service>/<function area>/<APIName>/manual/ directory. A test case can be named with a number, such as 1.c, 2.c, etc., but the purpose of every test case must be described clearly in assertion.xml.

Test Case Return Code

The test framework defines macros that describe the return status of a test case, as shown in the table below. Every test case must return one of these codes from its main function. The macros are declared in saf-test/include/saf_test.h, which every test case must include. A minimal sketch of this convention follows the table.

Macro-definition        Value   Meaning

SAF_TEST_PASS           0       Test case passes.

SAF_TEST_FAIL           1       Test case fails because the implementation does not comply with the SAF specification.

SAF_TEST_BLOCK          2       Test case is blocked, e.g. it was killed by another process.

SAF_TEST_NOTSUPPORT     3       Used in SAF HPI spec tests. The test case cannot run successfully because the current platform lacks some specific hardware.

SAF_TEST_UNRESOLVED     4       The API under test could not produce the expected result because another API failed.

SAF_TEST_UNKNOWN        5       The result does not match any of the other five codes.

For example, to test a finalize function we must first call the initialize function; if initialize fails, the subsequent finalize call cannot succeed, and in that situation the test case should return SAF_TEST_UNRESOLVED.
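
As a minimal sketch of this convention (some_api_under_test() is a placeholder invented for this example, not a real SAF API), a typical automatic test case looks roughly like this:

#include <stdio.h>

#include "saf_test.h"

/* Hypothetical stand-in for the SAF API being tested. */
static int some_api_under_test(void)
{
        return 0;   /* pretend the call succeeded */
}

int main(void)
{
        if (some_api_under_test() != 0) {
                printf("        API did not return the expected value\n");
                return SAF_TEST_FAIL;      /* implementation does not comply */
        }

        return SAF_TEST_PASS;              /* expected behavior observed */
}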

License

SAF-Test is an open source project. All tests are licensed under the GPL, version 2.

Result File Format

The result files are produced by run_tests.sh. This framework provides two formats for result files: text and XML.
The text output should follow the grammar below:

 

SAF-Test Result

Spec Name= <Spec Name>

Total run test cases: <how many test cases run>

Pass: < the number of test cases which returned SAF_TEST_PASS >

Fail: < the number of test cases which returned SAF_TEST_FAIL>

Block: < the number of test cases which returned SAF_TEST_BLOCK>

Notsupport: < the number of test cases which returned SAF_TEST_NOTSUPPORT>

Unresolved: < the number of test cases which returned SAF_TEST_UNRESOLVED>

Unknown reason: < the number of test cases which returned SAF_TEST_UNKNOWN>

Finish testing

Spec Name= <Spec Name>

Total run test cases: …

Pass: …

Fail: …

Block: …

Notsupport: …

Unresolved: …

Unknown reason: …

Finish testing

 

 

The output should be in XML following the grammar below:

 

<?xml version="1.0" ?>

<Report Title="SAF-Test">

<Spec Name="<Spec Name>">

<Data>

Total: <how many test cases run>

Pass: < the number of test cases which returned SAF_TEST_PASS>

Fail: < the number of test cases which returned SAF_TEST_FAIL>

Block: < the number of test cases which returned SAF_TEST_BLOCK>

Notsupport: < the number of test cases which returned SAF_TEST_NOTSUPPORT>

Unresolved: < the number of test cases which returned SAF_TEST_UNRESOLVED>

Unknown: < the number of test cases which returned SAF_TEST_UNKNOWN>

</Data>

</Spec>

<Spec Name="…">

<Data>

 Total: …

Pass: …

Fail: …

Block: …

Notsupport: …

Unresolved: …

Unknown: …

</Data>

</Spec>

</Report>

 

Test Suite Development Requirements

Conformance Testing Definition

Conformance tests verify that every API in the specification is implemented, can be called successfully, and returns the correct value given correct and incorrect parameters. As the number of possible types and values of the parameters can be large, it is difficult to be absolutely comprehensive. However, well-written and reviewed conformance tests can still provide sufficient confidence that a given implementation is complete and will work. A conformance test for a particular specification should:

  • contain a description of all test cases
  • validate output of each method given all possible (within reason) combinations of input and output

API Specification Coverage Requirement

To do API conformance testing, you must determine the exact composition and characteristics of each API and then create individual test cases to cover it. The following items need to be tested for each API:

  • Parameter Selection -- Minimum, maximum, normal, and error conditions for each range of values (see the sketch after this list). For example, if a parameter has a range of 0-255, the conformance tests would include tests for the following values:
    • 0, 255 : minimum and maximum values
    • 1 - 254 : a few numbers in the normal range
    • < 0, > 255 : error conditions
  • Parameter Combination -- Each API may have more than one parameter. How the different parameters work together needs to be tested. The tester should predict the results for a reasonable number of parameter combinations and create tests to verify the predicted behavior.
  • Call Sequence -- The order in which APIs are called may cause different outcomes within the product under test. If this condition exists for the product, the tester should take the call sequence into consideration.
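
A hedged sketch of what such parameter-selection coverage could look like is shown below. The function set_level() and the error code ERR_INVALID_PARAM are invented for this illustration and stand in for whatever API and error value the specification actually defines:

#include "saf_test.h"

#define ERR_INVALID_PARAM (-1)   /* hypothetical error code for this example */

/* Stub standing in for the API under test: accepts 0-255, rejects the rest. */
static int set_level(int level)
{
        return (level >= 0 && level <= 255) ? 0 : ERR_INVALID_PARAM;
}

int main(void)
{
        /* minimum, maximum and a normal value must be accepted */
        if (set_level(0) != 0 || set_level(255) != 0 || set_level(128) != 0)
                return SAF_TEST_FAIL;

        /* values outside the documented range must be rejected */
        if (set_level(-1) != ERR_INVALID_PARAM || set_level(256) != ERR_INVALID_PARAM)
                return SAF_TEST_FAIL;

        return SAF_TEST_PASS;
}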

 

For SAF Specification B.01.01 (both AIS and HPI), contributed conformance test cases are expected to cover the specification documents line by line. The following additional items are required:

  • Line by Line -- Test coverage should be thorough and ensure that every line in the specification, including sequence descriptions, is considered.
  • Return Values -- Return value coverage should follow these rules:
    • At least one test should be written for each possible return code.
    • If a return code can only be produced by following a particular scenario (calling sequence or parameter value), then at least one test of that scenario should exist.

 

Which parts are not covered?

The SAF specifications include many sections that present the system model and type definitions, which help the user understand and implement the specification. This content is difficult to cover with test cases. For example, in every AIS service specification, the following parts are not expected to be covered by conformance tests:

  • Document Introduction
  • Overview
  • SA *** Service API/Service Model
  • SA *** Service API/Include File and Library Name
  • SA *** Service API/Type Definitions

In the HPI specification, the following parts are not expected to be covered:

  • Document Introduction
  • Hardware Platform Interface Overview
  • The HPI Model
  • API Conventions
  • Data Type Definitions

Adding Test Suites to the Framework

To add a test case:

  • In the appropriate directory of the test framework, write the test case in C, following the test case naming rule and the return code definitions. If the test case is automatic, put it in saf-test/<spec>/src/<service>/<function area>/<APIName>/; if the test case is manual, put it in saf-test/<spec>/src/<service>/<function area>/<APIName>/manual/.
  • Describe your test case in the assertion.xml file (description and expected return value -- see below).
  • Add a line to the coverage.txt file to indicate that this test is covered.
  • Add any required libraries to saf-test/<spec>/LDFLAGS.

 

Example assertion.xml for SAF Specification A.01.01

<assertions>
<assertion id="1">
   saMsgInitialize: Called with correct parameters.
   Expected return: SA_OK
</assertion>
<assertion id="2">
   saMsgInitialize(): Called with invalid version number (-1)
   Expected return: SA_ERR_INVALID_PARAM
</assertion>
</assertions>

 

Test Case Assertion for SAF Specification B.01.01 and higher

Every test case has an assertion sentence that must describe which page and which line(s) of the specification the test case covers; there can be more than one test case for one line range.

 

The assertion format below is the assertion.xml template for SAF Specification B.01.01 and higher.

<assertions spec="SPECNAME" function="APINAME">
    <assertion id="<CASE1>" line="P<PAGE1>-<LINE1>: P<PAGE1>-<LINE2>">
        Case description 1
    </assertion>
    <assertion id="<CASE2>" line="P<PAGE1>-<LINE3>: P<PAGE2>-<LINE4>" manual="VALUE">
        Case description 2
    </assertion>
    <assertion id="<CASE3-SUB1>" line="P<PAGE2>-<LINE5>: P<PAGE2>-<LINE6>" manual="VALUE">
        Case description 3
    </assertion>
    <assertion id="<CASE3-SUB2>" line="P<PAGE2>-<LINE5>: P<PAGE2>-<LINE6>">
        Case description 4
    </assertion>
    ...
</assertions>

 

Assertion Rules for SAF Specification B.01.01 and higher:

1.  The assertion must declare the specification name and the API name tested by these test cases.
2.  Every case is identified by a case number, which is either a main case number alone or a combination of a main case number and a sub-case number.
3.  Main case numbers are incremented as integers.
4.  One main case number corresponds to one and only one line range, composed of page and line numbers in the SAF specification document of the current API.
5.  One main case number can have several sub-case numbers, which are also incremented as integers and follow the main case number after a '-'. All sub-cases of one main case correspond to one line range and test it from different perspectives.
6.  The line ranges in assertion.xml must appear in order from top to bottom and must not overlap.
7.  Case descriptions summarize the main function of the case or sub-case.
8.  If the test case is a manual case, use the manual attribute and set its value to "yes".

 

assertion.xml example:

<assertions spec="AIS-B.01.01" function="saClmClusterNodeGet">
    <assertion id="1" line="P21-30: P21-34">
        ...
    </assertion>
    <assertion id="2" line="..." manual="yes">
        ...
    </assertion>
    <assertion id="3-1" line="P21-38: P22-1">
        Call saClmSelectionObjectGet(), before saClmFinalize() is invoked,
        check if the returned selectionObject is valid.
    </assertion>
    <assertion id="3-2" line="P21-38: P22-1">
        Call saClmSelectionObjectGet(), then invoke saClmFinalize(),
        check if the returned selectionObject is invalid.
    </assertion>
    ...
</assertions>

Example coverage.txt

The following is a sample coverage.txt file:

 

This document defines the coverage for the <function name> function.
 
Assertion                  Covered?
  1                          YES
  2                          YES
  3                          NO
 
<Other customized description sentences>
 

Test Case Reuse

The following situation may occur: one case, 1.c, designed for one line range of the specification, can also test conformance for a second line range that is to be covered by 3.c. In this situation, we write a pseudo test case as below: case 3 simply executes case 1 and, on success, returns the same value as case 1.

 

/*
 * Copyright …
 *
 * Authors:
 *
 * Spec:        AIS-B.01.01
 * Function:    saClmClusterNodeGet
 * Description:
 *     saClmClusterNodeGet is called with nodeId set to zero.
 *     Return should be SA_AIS_OK.
 *     Same as case 1.
 * Line:        P31-22:P31-23
 */

#include <stdio.h>
#include <unistd.h>

#include "saf_test.h"

int main(int argc, char *argv[])
{
        char same_case[] = "./1.test";

        /* Re-run case 1; on success execl() does not return and the exit
         * status of 1.test becomes this case's result. */
        execl(same_case, same_case, (char *)NULL);

        /* execl() only returns on failure. */
        printf("        Can't find case %s\n", same_case);
        return SAF_TEST_UNKNOWN;
}

 

In assertion.xml, we describe this kind of reuse as follows:

<assertions spec="AIS-B.01.01" function="saClmClusterNodeGet">
    <assertion id="1" line="…">

    </assertion>
    <assertion id="2" line="…">

    </assertion>
    <assertion id="3" line="P31-22:P31-23">
        saClmClusterNodeGet is called with nodeId set to zero.
        Return should be SA_AIS_OK.
        Same as case 1.
    </assertion>
    ...
</assertions>

Test Suite Execution Step by Step

Preparation

In order to execute the test suites, the user needs to prepare the test environment as follows:

  • Set up the hardware and software environment for the test object, i.e. the implementation instance of the SAF specification, such as openAIS or openHPI.
  • Install and configure the implementation instance correctly, so that the libraries and header files that follow the specification exist before testing.
  • Download a SAF-Test release from the SAF-Test project home page and extract it into a directory.
  • Make sure all libraries required by the test cases are installed, such as glibc. saf-test/<spec>/LDFLAGS provides this dependency information.

Building

Follow these steps to build:

Step 1: If the test suites to be run are for SAF Specification A.01.01, the tester needs to follow these additional steps first:

  • Link with the library you want to test. Do this by placing the library name in the saf-test/<spec>/LDFLAGS file. For example, if you want to test a library called "cms", put "-lcms" in LDFLAGS.
  • Copy the header files you want to test to saf-test/<spec>/include.
  • Edit saf-test/<spec>/include/im_spec.h so that it includes these header files (see the sketch below).
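
For A.01.01 suites, im_spec.h simply pulls in the implementation headers that were copied into saf-test/<spec>/include. A hypothetical example follows; the header names are placeholders for whatever files your implementation provides:

/* im_spec.h -- includes the implementation headers under test.
 * The header names below are placeholders for this example. */
#ifndef IM_SPEC_H
#define IM_SPEC_H

#include "saAis.h"
#include "saMsg.h"

#endif /* IM_SPEC_H */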

Step 2: The user can build the test cases at two levels:

  • Go to the top level of the test framework, i.e. saf-test/, and execute the make command. All the test cases in the test framework will be built.
  • Go to the specification level of the test framework, i.e. saf-test/<spec>/, and execute the make command. All the test cases in that directory will be built.

The log directory will be created automatically; the tester can view the build results there.

Executing

The user can execute the test suites at two levels:

  • Execution at the top level of the test framework, i.e. saf-test/. All the test cases in the test framework can be run.
  • Execution at the specification level of the test framework, i.e. saf-test/<spec>/. All the test cases in that directory can be run.

 

Option 1 -- executing the test suites in a specification sub-directory

1.  Go to the specification level of the test framework, i.e. saf-test/<spec>/, and execute run_tests.sh. All the automatic test cases in the directory are run. The command looks like the line below; the --help parameter shows the details.

        run_tests.sh [all|Path]

2.  If the user wants to run the manual test cases, go into each saf-test/<spec>/src/<service>/<function area>/<APIName>/manual/ directory and execute the test cases manually.

3.  Execute saf-test/<spec>/report.sh to summarize the report file for this run; the user will be prompted to enter the manual test case execution results one by one. The manual results are counted together with the automatic cases. The command looks like the line below; the --help parameter shows the details.

        report.sh [all|PATH]

4.  saf-test/<spec>/result.txt will include all running results. The report file shows how many cases were run, how many passed, how many failed, etc. The log files are in the saf-test/<spec>/log/ directory, including the automatic case log, the error log and the manual case log.

Option 2 -- executing the test suites in the top-level directory

1.  Go to the top level of the test framework, i.e. saf-test/, and execute run_tests.sh. All the automatic test cases in the framework are run. The command looks like the line below; the --help parameter shows the details.

        run_tests.sh [all|SPEC]

2.  If the user wants to run the manual test cases, go into each saf-test/<spec>/src/<service>/<function area>/<APIName>/manual/ directory and execute the test cases manually.

3.  Execute saf-test/report.sh to summarize the report file for this run; the user will be prompted to enter the manual test case execution results one by one. The manual results are counted together with the automatic cases. The command looks like the line below; the --help parameter shows the details.

        report.sh [all|SPEC]

4.  saf-test/<spec>/result.txt and saf-test/<spec>/result.xml will include all running results. The report file shows how many cases were run, how many passed, how many failed, etc. The log files are in the saf-test/<spec>/log/ directory, including the automatic case log, the error log and the manual case log.

Roles and Responsibilities

The following roles are currently defined:

Project Maintainer -- The person who maintains the SAF-Test project. This person's responsibility is to administer users, maintain CVS, and maintain releases. This person is the guiding spirit of the project.

Suite Maintainer -- The person who owns the SAF-Test suites, which were probably not developed by this person. This person is responsible for fixing test suite bugs or reporting bugs to the developers of the tested library.

Suite Contributor -- An individual in the open source community who has provided a test suite for SAF-Test. A Suite Contributor works with the Suite Maintainer to get the suites added and to hand maintenance of the test suites over to the Suite Maintainer.

Suite Tester -- The person responsible for running the full test suites in SAF-Test and reporting bugs to the Suite Maintainer.

Web Administrator -- The person with write access to the project website. This person is responsible for ensuring that data on the website is up to date and accurate, fixing any website bugs, and adding or updating test suite information on the web at Suite Contributors' requests.

Related Links

  • http://sourceforge.net/projects/saftest -- SAF Test SourceForge project page
  • http://saftest.sourceforge.net -- SAF Test home page
  • www.saforum.org -- Service Availability Forum home page