(From OE-Core rev: 17d825c88499c6f7e3ffe22fae099871d0850abc)
Signed-off-by: Alexis Lothoré <alexis.lothore@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
resulttool searches for the "status" field, not "STATUS", in results.
This fix is more to avoid confusion than to fix anything, since the updated
tests are about regression.can_be_compared, which does not check for "status".
(From OE-Core rev: df342c81d02dea3f677966602089d4a972bb09bd)
Signed-off-by: Alexis Lothoré <alexis.lothore@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
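As an aside (not part of the commit above), the per-testcase entries read by resulttool use the lowercase "status" key, so a unit-test fixture has to use the same casing. A hypothetical fixture (the testcase name is made up) might look like:

    # Hypothetical fixture data; the per-testcase key must be "status"
    # (lowercase), matching what resulttool reads, not "STATUS".
    result = {
        "ptestresult.example.test_one": {
            "status": "PASSED",
            "log": "",
        },
    }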
There are multiple issues with the tests for the ptest fixup mechanism
introduced in c45d58f003e8d8b323169ca9d479dc49c43a9974:
- the feature does not impact regression.can_be_compared but rather the
comparison step in regression.regression_common
- the "status" field was wrong ("STATUS"), which prevents test comparison
- there is a typo in the babeltrace2 ptest result name
(From OE-Core rev: 69205e2e2117fc469fcf3ae4b2174482db1d6297)
Signed-off-by: Alexis Lothoré <alexis.lothore@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Multiple filters have been added to resulttool to ensure that test
results are compared only to relevant older test results.
Add some unit tests covering the added filters:
- tests should only be compared when the machine matches
- tests run as oeselftest should only be compared when
OESELFTEST_METADATA matches
- many ptests should match despite non-static names
- runtime tests should only be matched against tests with matching ltp
test content
(From OE-Core rev: b84302ef56f2516742a496aef43b89d4c3decd37)
Signed-off-by: Alexis Lothoré <alexis.lothore@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
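As a rough illustration only (simplified names, not resulttool's actual API), the kind of comparison filtering these tests cover can be sketched as:

    # Simplified sketch: two result sets are only worth comparing when the
    # relevant configuration values match.
    def can_be_compared(base_cfg, target_cfg):
        # Results from different machines are never comparable.
        if base_cfg.get("MACHINE") != target_cfg.get("MACHINE"):
            return False
        # oe-selftest runs must also share the same OESELFTEST_METADATA.
        if base_cfg.get("OESELFTEST_METADATA") != target_cfg.get("OESELFTEST_METADATA"):
            return False
        return True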
(From OE-Core rev: 80cfa56d133bd3abbb1f37272607d8e15ce70861)
Signed-off-by: Alexis Lothoré <alexis.lothore@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Where there isn't a copyright statement, add one to make it explicit.
Also add an MIT license identifier where one is missing.
(From OE-Core rev: bb731d1f3d2a1d50ec0aed864dbca54cf795b040)
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Currently, the code sums all of the different machine results into a
single report of the test results. This can lead to confusion as to
which machine may be experiencing issues. Modify the code to store the
results on a per-machine basis and report them accordingly.
(From OE-Core rev: 16d4031ea5df8a4ddfdb937d35464c09e1abd10e)
Signed-off-by: Jon Mason <jdmason@kudzu.us>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
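A minimal sketch of the idea, assuming each result entry records its MACHINE in the configuration section (illustrative names, not the actual report code):

    from collections import defaultdict

    # Group result entries by the MACHINE recorded in each entry's
    # configuration, so every machine gets its own summary.
    def group_results_by_machine(result_entries):
        per_machine = defaultdict(list)
        for entry in result_entries:
            machine = entry["configuration"].get("MACHINE", "unknown")
            per_machine[machine].append(entry)
        return per_machine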
This adds SPDX license headers in place of the wide assortment of things
currently in our script headers. We default to GPL-2.0-only except for the
oeqa code, where it was clearly submitted and marked as MIT for the most part,
and some scripts which had the "or later" GPL versioning.
The patch also drops other obsolete bits of file headers where they were
encountered, such as editor modelines, obsolete maintainer information or
the phrase "All rights reserved", which is now obsolete and not required in
copyright headers (in this case it is actually confusing for licensing, as all
rights were not reserved).
More work is needed for OE-Core but this takes care of the bulk of the scripts
and meta/lib directories.
The top level LICENSE files are tweaked to match the new structure and the
SPDX naming.
(From OE-Core rev: f8c9c511b5f1b7dbd45b77f345cb6c048ae6763e)
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
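For illustration, a script header in the style this change converges on (the copyright line here is a placeholder, and oeqa code would carry MIT instead):

    #
    # Copyright (C) <year> <copyright holder>
    #
    # SPDX-License-Identifier: GPL-2.0-only
    #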
This is a combined patch of the various tweaks and improvements I
made to resulttool:
* Avoid subprocess.run() as it's a Python 3.6 feature and we
have autobuilder workers with 3.5.
* Avoid python keywords as variable names
* Simplify dict accesses using .get()
* Rename resultsutils -> resultutils to match the resultstool ->
resulttool rename
* Formalised the handling of "file_name" to "TESTSERIES", which the code
will now add into the json configuration data if it's not present, based
on the directory name.
* When we don't have failed test cases, print something saying so
instead of an empty table
* Tweak the table headers in the report to be more readable (reference
"Test Series" instead of file_id and ID instead of results_id)
* Improve/simplify the max string length handling
* Merge the counts and percentage data into one table in the report
since printing two reports of the same data confuses the user
* Removed the confusing header in the regression report
* Show matches, then regressions, then unmatched runs in the regression
report, and also remove chatty, unneeded output
* Try harder to "pair" up matching configurations to reduce noise in
the regressions report
* Abstracted the "mapping" table concept used for pairing in the
regression code to general code in resultutils
* Created multiple mappings for results analysis, results storage and
'flattening' results data in a merge
* Simplify the merge command to take a source and a destination,
letting the destination be a directory or a file, removing the need for
an output directory parameter
* Add the 'IMAGE_PKGTYPE' and 'DISTRO' config options to the regression
mappings
* Have the store command place the testresults files in a layout from
the mapping, making commits into the git repo for results storage more
useful for simple comparison purposes
* Set the oe-git-archive tag format appropriately for oeqa results
storage (and simplify the commit messages closer to their defaults)
* Fix oe-git-archive to use the commit/branch data from the results file
* Cleaned up the command option help to match other changes
* Follow the model of git branch/tag processing used by oe-build-perf-report
and use that to read the data using git show to avoid changing branches
* Add ptest summary to the report command
* Update the tests to match the above changes
(From OE-Core rev: ff2c029b568f70aa9960dde04ddd207829812ea0)
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
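To illustrate the dict access cleanup mentioned above (a hypothetical helper, not the actual resulttool code):

    # Prefer dict.get() with a default over explicit key membership checks
    # when reading optional configuration values such as "TESTSERIES".
    def get_testseries(configuration):
        return configuration.get("TESTSERIES", "unknown")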
OEQA outputs test results into json files, and these files are
archived by the Autobuilder during QA releases. For example, each oe-selftest
run by the Autobuilder for a different host distro generates a
testresults.json file.
These scripts were developed as test result tools to manage
these testresults.json files.
Using the "store" operation, the user can store multiple testresults.json
files as well as the pre-configured directories used to hold those files.
Using the "merge" operation, the user can merge multiple testresults.json
files into a target file.
Using the "report" operation, the user can view the test result summary
for all available testresults.json files inside an ordinary directory
or a git repository.
Using the "regression-file" operation, the user can perform regression
analysis on the specified testresults.json files. Using the "regression-dir"
and "regression-git" operations, the user can perform regression analysis
on a directory or a git repository respectively.
These resulttool operations expect the testresults.json file to use
the json format below.
{
    "<testresult_1>": {
        "configuration": {
            "<config_name_1>": "<config_value_1>",
            "<config_name_2>": "<config_value_2>",
            ...
            "<config_name_n>": "<config_value_n>",
        },
        "result": {
            "<testcase_namespace_1>": {
                "status": "<PASSED or FAILED or ERROR or SKIPPED>",
                "log": "<failure or error logging>"
            },
            "<testcase_namespace_2>": {
                "status": "<PASSED or FAILED or ERROR or SKIPPED>",
                "log": "<failure or error logging>"
            },
            ...
            "<testcase_namespace_n>": {
                "status": "<PASSED or FAILED or ERROR or SKIPPED>",
                "log": "<failure or error logging>"
            },
        }
    },
    ...
    "<testresult_n>": {
        "configuration": {
            "<config_name_1>": "<config_value_1>",
            "<config_name_2>": "<config_value_2>",
            ...
            "<config_name_n>": "<config_value_n>",
        },
        "result": {
            "<testcase_namespace_1>": {
                "status": "<PASSED or FAILED or ERROR or SKIPPED>",
                "log": "<failure or error logging>"
            },
            "<testcase_namespace_2>": {
                "status": "<PASSED or FAILED or ERROR or SKIPPED>",
                "log": "<failure or error logging>"
            },
            ...
            "<testcase_namespace_n>": {
                "status": "<PASSED or FAILED or ERROR or SKIPPED>",
                "log": "<failure or error logging>"
            },
        }
    },
}
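As a minimal sketch (not part of the original scripts), a file with this layout can be walked in Python as follows:

    import json

    # Read a testresults.json file and count passed test cases per result set.
    with open("testresults.json") as f:
        testresults = json.load(f)

    for result_id, entry in testresults.items():
        statuses = [case["status"] for case in entry["result"].values()]
        print(result_id, statuses.count("PASSED"), "of", len(statuses), "passed")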
To use these scripts, first source the OE environment, then run the
entry point script to see its help output.
$ resulttool
To store test results from oeqa automated tests, execute the below
$ resulttool store <source_dir> <git_branch>
To merge multiple testresults.json files, execute the below
$ resulttool merge <base_result_file> <target_result_file>
To generate a test report, execute the below
$ resulttool report <source_dir>
To perform regression file analysis, execute the below
$ resulttool regression-file <base_result_file> <target_result_file>
To perform regression dir analysis, execute the below
$ resulttool regression-dir <base_result_dir> <target_result_dir>
To perform regression git analysis, execute the below
$ resulttool regression-git <source_dir> <base_branch> <target_branch>
[YOCTO# 13012]
[YOCTO# 12654]
(From OE-Core rev: 78a322d7be402a5b9b5abf26ad35670a8535408a)
Signed-off-by: Yeoh Ee Peng <ee.peng.yeoh@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>