author:    Yeoh Ee Peng <ee.peng.yeoh@intel.com>  2019-02-14 13:50:37 +0800
committer: Richard Purdie <richard.purdie@linuxfoundation.org>  2019-02-21 12:34:00 +0000
commit:    1fd5ebdb06224489ad056e261962e23ece36fc87
tree:      790b33a5498a9f97642ead84ce66dfd354bd8626 /scripts/lib/resulttool/template
parent:    95bd530b772f97e7329749b403bf9e2dff12ff7f
resulttool: enable merge, store, report and regression analysis
OEQA outputs test results into JSON files, and these files are
archived by the Autobuilder during QA releases. For example, each
oe-selftest run by the Autobuilder on a different host distro
generates a testresults.json file.
These scripts were developed as test result tools to manage
these testresults.json files.
Using the "store" operation, the user can store multiple testresults.json
files, as well as the pre-configured directories used to hold those files.
Using the "merge" operation, the user can merge multiple testresults.json
files into a target file.
Using the "report" operation, the user can view the test result summary
for all available testresults.json files inside an ordinary directory
or a git repository.
Using the "regression-file" operation, the user can perform regression
analysis on the specified testresults.json files. Using the "regression-dir"
and "regression-git" operations, the user can perform regression analysis
on directories and git repositories respectively.
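At their core, the regression operations compare per-testcase statuses between a base and a target result set. The sketch below illustrates that idea; the function name, data, and "PASSED worsened to non-PASSED" policy are illustrative assumptions, not the actual resulttool code:

```python
# Hypothetical sketch of a regression check between two result sets;
# not the actual resulttool implementation.

def find_regressions(base_results, target_results):
    """Return test cases whose status worsened from base to target."""
    regressions = {}
    for testcase, base in base_results.items():
        target = target_results.get(testcase)
        if target is None:
            continue  # test case absent in target; not counted here
        # Assumed regression criterion: base run passed, target run did not.
        if base["status"] == "PASSED" and target["status"] != "PASSED":
            regressions[testcase] = (base["status"], target["status"])
    return regressions

# Invented sample data in the testresults.json "result" shape.
base = {"oescripts.TestA": {"status": "PASSED", "log": ""},
        "oescripts.TestB": {"status": "PASSED", "log": ""}}
target = {"oescripts.TestA": {"status": "PASSED", "log": ""},
          "oescripts.TestB": {"status": "FAILED", "log": "boom"}}
print(find_regressions(base, target))
```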
These resulttool operations expect each testresults.json file to use
the JSON format below.
{
    "<testresult_1>": {
        "configuration": {
            "<config_name_1>": "<config_value_1>",
            "<config_name_2>": "<config_value_2>",
            ...
            "<config_name_n>": "<config_value_n>",
        },
        "result": {
            "<testcase_namespace_1>": {
                "status": "<PASSED or FAILED or ERROR or SKIPPED>",
                "log": "<failure or error logging>"
            },
            "<testcase_namespace_2>": {
                "status": "<PASSED or FAILED or ERROR or SKIPPED>",
                "log": "<failure or error logging>"
            },
            ...
            "<testcase_namespace_n>": {
                "status": "<PASSED or FAILED or ERROR or SKIPPED>",
                "log": "<failure or error logging>"
            },
        }
    },
    ...
    "<testresult_n>": {
        "configuration": {
            "<config_name_1>": "<config_value_1>",
            "<config_name_2>": "<config_value_2>",
            ...
            "<config_name_n>": "<config_value_n>",
        },
        "result": {
            "<testcase_namespace_1>": {
                "status": "<PASSED or FAILED or ERROR or SKIPPED>",
                "log": "<failure or error logging>"
            },
            "<testcase_namespace_2>": {
                "status": "<PASSED or FAILED or ERROR or SKIPPED>",
                "log": "<failure or error logging>"
            },
            ...
            "<testcase_namespace_n>": {
                "status": "<PASSED or FAILED or ERROR or SKIPPED>",
                "log": "<failure or error logging>"
            },
        }
    },
}
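A file in this format is plain JSON, so it can be summarized in a few lines of Python. The snippet below is an illustrative sketch (not part of resulttool), with invented sample data, showing a per-entry status tally:

```python
import json
from collections import Counter

# Illustrative sketch: tally statuses in a testresults.json-style document.
# The result_id, configuration keys, and test cases below are invented.
document = json.loads("""
{
    "runtime_core-image-minimal_qemux86_20190214": {
        "configuration": {"TEST_TYPE": "runtime", "MACHINE": "qemux86"},
        "result": {
            "ping.PingTest.test_ping": {"status": "PASSED", "log": ""},
            "ssh.SSHTest.test_ssh": {"status": "FAILED", "log": "timeout"}
        }
    }
}
""")

for result_id, entry in document.items():
    # Count PASSED/FAILED/ERROR/SKIPPED across this entry's test cases.
    counts = Counter(tc["status"] for tc in entry["result"].values())
    print(result_id, dict(counts))
```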
To use these scripts, first source the OE build environment, then run
the entry-point script to see its help output.
$ resulttool
To store test results from OEQA automated tests, execute:
$ resulttool store <source_dir> <git_branch>
To merge multiple testresults.json files, execute:
$ resulttool merge <base_result_file> <target_result_file>
To generate a test report, execute:
$ resulttool report <source_dir>
To perform regression analysis on files, execute:
$ resulttool regression-file <base_result_file> <target_result_file>
To perform regression analysis on directories, execute:
$ resulttool regression-dir <base_result_dir> <target_result_dir>
To perform regression analysis on git branches, execute:
$ resulttool regression-git <source_dir> <base_branch> <target_branch>
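Conceptually, the merge operation folds the entries of one testresults.json document into another. A minimal sketch, assuming target entries win on key collision (an assumption for illustration, not necessarily resulttool's actual policy):

```python
# Hypothetical sketch of merging two testresults.json documents.
# Assumption: entries from the target override same-named entries
# in the base; this is not confirmed as resulttool's actual behaviour.

def merge_results(base_doc, target_doc):
    merged = dict(base_doc)    # shallow copy of the base entries
    merged.update(target_doc)  # target entries win on key collision
    return merged

# Invented top-level entries in the testresults.json shape.
base = {"run_1": {"configuration": {}, "result": {}}}
target = {"run_2": {"configuration": {}, "result": {}}}
print(sorted(merge_results(base, target)))
```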
[YOCTO# 13012]
[YOCTO# 12654]
(From OE-Core rev: 78a322d7be402a5b9b5abf26ad35670a8535408a)
Signed-off-by: Yeoh Ee Peng <ee.peng.yeoh@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Diffstat (limited to 'scripts/lib/resulttool/template')
-rw-r--r--  scripts/lib/resulttool/template/test_report_full_text.txt  35
1 file changed, 35 insertions(+), 0 deletions(-)
diff --git a/scripts/lib/resulttool/template/test_report_full_text.txt b/scripts/lib/resulttool/template/test_report_full_text.txt
new file mode 100644
index 0000000000..bc4874ba4b
--- /dev/null
+++ b/scripts/lib/resulttool/template/test_report_full_text.txt
@@ -0,0 +1,35 @@
+==============================================================================================================
+Test Report (Count of passed, failed, skipped group by file_dir, result_id)
+==============================================================================================================
+--------------------------------------------------------------------------------------------------------------
+{{ 'file_dir'.ljust(max_len_dir) }} | {{ 'result_id'.ljust(max_len_result_id) }} | {{ 'passed'.ljust(10) }} | {{ 'failed'.ljust(10) }} | {{ 'skipped'.ljust(10) }}
+--------------------------------------------------------------------------------------------------------------
+{% for report in test_count_reports |sort(attribute='test_file_dir_result_id') %}
+{{ report.file_dir.ljust(max_len_dir) }} | {{ report.result_id.ljust(max_len_result_id) }} | {{ (report.passed|string).ljust(10) }} | {{ (report.failed|string).ljust(10) }} | {{ (report.skipped|string).ljust(10) }}
+{% endfor %}
+--------------------------------------------------------------------------------------------------------------
+
+==============================================================================================================
+Test Report (Percent of passed, failed, skipped group by file_dir, result_id)
+==============================================================================================================
+--------------------------------------------------------------------------------------------------------------
+{{ 'file_dir'.ljust(max_len_dir) }} | {{ 'result_id'.ljust(max_len_result_id) }} | {{ 'passed_%'.ljust(10) }} | {{ 'failed_%'.ljust(10) }} | {{ 'skipped_%'.ljust(10) }}
+--------------------------------------------------------------------------------------------------------------
+{% for report in test_percent_reports |sort(attribute='test_file_dir_result_id') %}
+{{ report.file_dir.ljust(max_len_dir) }} | {{ report.result_id.ljust(max_len_result_id) }} | {{ (report.passed|string).ljust(10) }} | {{ (report.failed|string).ljust(10) }} | {{ (report.skipped|string).ljust(10) }}
+{% endfor %}
+--------------------------------------------------------------------------------------------------------------
+
+==============================================================================================================
+Test Report (Failed test cases group by file_dir, result_id)
+==============================================================================================================
+--------------------------------------------------------------------------------------------------------------
+{% for report in test_count_reports |sort(attribute='test_file_dir_result_id') %}
+{% if report.failed_testcases %}
+file_dir | result_id : {{ report.file_dir }} | {{ report.result_id }}
+{% for testcase in report.failed_testcases %}
+{{ testcase }}
+{% endfor %}
+{% endif %}
+{% endfor %}
+--------------------------------------------------------------------------------------------------------------
\ No newline at end of file
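The template's column alignment rests entirely on Python's str.ljust, which Jinja2 calls on the string values. The same alignment can be reproduced in plain Python without Jinja2; the widths and sample values below are chosen for illustration (the template computes max_len_dir and max_len_result_id from the data):

```python
# Reproduce the template's column alignment with str.ljust.
# Widths and sample values are illustrative assumptions; the real
# template derives max_len_dir / max_len_result_id from the reports.
max_len_dir, max_len_result_id = 20, 30

header = " | ".join([
    "file_dir".ljust(max_len_dir),
    "result_id".ljust(max_len_result_id),
    "passed".ljust(10),
    "failed".ljust(10),
    "skipped".ljust(10),
])

row = " | ".join([
    "testresults".ljust(max_len_dir),
    "oe-selftest-fedora".ljust(max_len_result_id),
    str(42).ljust(10),   # counts are cast to str before padding,
    str(1).ljust(10),    # mirroring (report.passed|string) in the
    str(3).ljust(10),    # template
])

print(header)
print(row)
```

Because every cell in a column is padded to the same width, the "passed" value lands directly under the "passed" heading, which is exactly how the template keeps its tables aligned in plain text.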