path: root/meta/lib/oeqa/core/runner.py
Commit message (Author, Date; Files changed, Lines -/+)
* oeqa/core/runner.py: Fix OETestTag listing (Nathan Rossi, 2019-09-07; 1 file changed, -11/+8)

Use the __oeqa_testtags attribute added by OETestTag and display no, one or
more tags separated by a comma. Also change the formatting of the output so
that the list of tests is formatted as "<test> (<tags>)" and remove the table
header for "id" (it is no longer available).

(From OE-Core rev: d62e577a4e1a5f6accbce9f7bff7317a1162d72d)

Signed-off-by: Nathan Rossi <nathan@nathanrossi.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
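As a rough illustration of the "<test> (<tags>)" formatting described above (not the actual runner.py code; the helper name is made up):

    def format_test_line(case):
        # Tags are attached by the OETestTag decorator; fall back to no tags.
        tags = getattr(case, '__oeqa_testtags', None) or []
        suffix = " (%s)" % ", ".join(tags) if tags else ""
        return "%s%s" % (case.id(), suffix)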
* oeqa/core: Implement proper extra result collection and serialization (Nathan Rossi, 2019-09-07; 1 file changed, -4/+37)

Implement handling of extra result (e.g. ptestresult) collection with the
addition of an "extraresults" extraction function in OETestResult.

In order to be able to serialize and deserialize the extraresults data, allow
OETestResult add* calls to take a details kwarg. The subunit module can handle
cross-process transfer of binary data for the details kwarg, with a TestResult
proxy class sitting in between to encode to and decode from json.

(From OE-Core rev: b0831d43606415807af80e2aa1d0566d0b8c209c)

Signed-off-by: Nathan Rossi <nathan@nathanrossi.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
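A sketch of how a dict of extra results could be carried in a details entry, assuming testtools' Content/ContentType as the container; the function names are illustrative, not the proxy class added by this commit:

    import json

    from testtools.content import Content
    from testtools.content_type import ContentType

    def encode_extraresults(extraresults):
        # Pack the dict as JSON bytes so subunit can move it between processes.
        data = json.dumps(extraresults).encode('utf-8')
        return {'extraresults': Content(ContentType('application', 'json'),
                                        lambda: [data])}

    def decode_extraresults(details):
        content = details.get('extraresults')
        if content is None:
            return {}
        return json.loads(b''.join(content.iter_bytes()).decode('utf-8'))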
* oeqa/runner: Fix subunit setupClass/setupModule failure handling (Richard Purdie, 2019-05-12; 1 file changed, -2/+2)

The string format for subunit setupClass/setupModule failures is slightly
different, so tweak the regex to correctly handle both cases.

(From OE-Core rev: 425fbba9348420df72de5c6292f50acaf2023d0b)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
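For context, class/module level failures are reported with ids shaped like "setUpClass (<module>.<Class>)" rather than the usual "<module>.<Class>.<test>", so a single pattern has to accept both forms. An illustrative regex (not the one in runner.py):

    import re

    test_id_re = re.compile(r'^(setUp(Class|Module) \()?(?P<id>[\w.]+)\)?$')

    for tid in ("oeqa.selftest.cases.foo.FooTests.test_bar",
                "setUpClass (oeqa.selftest.cases.foo.FooTests)"):
        print(test_id_re.match(tid).group('id'))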
* oeqa/core/runner: Handle unexpectedSuccesses (Richard Purdie, 2019-05-09; 1 file changed, -2/+8)

Instead of showing:

RESULTS - ptest.PtestRunnerTest.test_ptestrunner - Testcase 1600: UNKNOWN (32.30s)

map unexpectedSuccesses to PASSED and improve the way they're displayed. We
expect/allow ptest runner to fail, but if it passes we should handle it
correctly.

(From OE-Core rev: 7885939ca3dea9704185e93469fe515c17eb8017)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
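A simplified sketch of the mapping (not the actual OETestResult code): fold unexpectedSuccesses into the PASSED bucket when summarising a unittest result object.

    def summarise(result):
        status = {}
        for test, _ in result.failures:
            status[test.id()] = 'FAILED'
        for test, _ in result.errors:
            status[test.id()] = 'ERROR'
        for test, _ in result.skipped:
            status[test.id()] = 'SKIPPED'
        for test in result.unexpectedSuccesses:
            # Expected to fail, but passed anyway: report it as a pass.
            status[test.id()] = 'PASSED'
        return status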
* meta/lib+scripts: Convert to SPDX license headers (Richard Purdie, 2019-05-09; 1 file changed, -1/+4)

This adds SPDX license headers in place of the wide assortment of things
currently in our script headers. We default to GPL-2.0-only, except for the
oeqa code, which was clearly submitted and marked as MIT for the most part,
and some scripts which had the "or later" GPL versioning.

The patch also drops other obsolete bits of file headers where they were
encountered, such as editor modelines, obsolete maintainer information and the
phrase "All rights reserved", which is now obsolete and not required in
copyright headers (in this case it's actually confusing for licensing, as all
rights were not reserved).

More work is needed for OE-Core, but this takes care of the bulk of the
scripts and meta/lib directories.

The top level LICENSE files are tweaked to match the new structure and the
SPDX naming.

(From OE-Core rev: f8c9c511b5f1b7dbd45b77f345cb6c048ae6763e)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
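For reference, the style of header this converges on looks roughly like the following (MIT shown for the oeqa code; exact copyright lines vary per file):

    #
    # SPDX-License-Identifier: MIT
    #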
* oeqa: Drop OETestID (Richard Purdie, 2019-05-09; 1 file changed, -28/+3)

These IDs refer to Testopia, which we're no longer using. We now use the test
names to definitively reference tests, so the IDs can be dropped along with
their supporting code.

(From OE-Core rev: 8e2d0575e4e7036b5f60e632f377a8ab2b96ead8)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
* oeqa/core/runner: dump stdout and stderr of each test case (Alberto Mardegan, 2019-04-26; 1 file changed, -4/+16)

Some CI pipelines might perform further processing of the test output (for
instance, to plot some metrics into a chart). However, since `thud` we have
switched away from the XML-based jUnit reporting, and at the same time we lost
the ability to collect the stdout and stderr of the various tests.

We now restore this functionality by adding `stdout` and `stderr` keys to the
JSON reports. This behavior is off by default; in order to enable it, one must
set the `TESTREPORT_FULLLOGS` variable in the bitbake configuration.

(From OE-Core rev: fd0048630ece5b21efb3a79e97046be0ab2a1514)

Signed-off-by: Alberto Mardegan <amardegan@luxoft.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
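A minimal sketch of the shape of such a per-test record, assuming the output has been captured into StringIO buffers; only the 'stdout'/'stderr' key names come from the commit message, the rest is illustrative:

    import io

    def test_record(test_id, status, stdout_buf, stderr_buf, full_logs=False):
        record = {'status': status}
        if full_logs:
            # Only attach the captured output when full-log reporting
            # has been requested.
            record['stdout'] = stdout_buf.getvalue()
            record['stderr'] = stderr_buf.getvalue()
        return {test_id: record}

    out, err = io.StringIO("all good\n"), io.StringIO("")
    print(test_record('ping.PingTest.test_ping', 'PASSED', out, err, full_logs=True))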
* oeqa/core/runner: Correctly mark up regexes (Richard Purdie, 2018-11-14; 1 file changed, -2/+2)

Avoid the warning "DeprecationWarning: invalid escape sequence \(" by marking
the regexes correctly.

(From OE-Core rev: cb49980fa4a158d5529902df731dec61a8c9b3d4)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
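The warning comes from backslash escapes such as "\(" inside ordinary string literals; a raw string is the usual fix. An illustrative example (not the exact pattern from the file):

    import re

    # A non-raw 'setUpClass \(...' literal triggers the DeprecationWarning;
    # the raw-string form below is equivalent and warning-free.
    pattern = re.compile(r'setUpClass \((?P<cls>[\w.]+)\)')
    print(pattern.match('setUpClass (oeqa.selftest.cases.foo.FooTests)').group('cls'))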
* oeqa/runner: Sort the test result output by result class (Richard Purdie, 2018-11-14; 1 file changed, -1/+10)

We want to see failures/errors listed last, since this is the most easily
visible part of the log on consoles or autobuilder output. It makes human
processing easier, rather than having to scroll up and scan for a single
failure.

(From OE-Core rev: 7954b19020c28a4120bc1671aa81b9e1e2b05fa2)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
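A sketch of the ordering idea (the status list is illustrative): sort the per-test lines so passes print first and failures/errors end up at the bottom of the log.

    ORDER = ['PASSED', 'SKIPPED', 'UNEXPECTED-PASS', 'EXPECTED-FAIL', 'ERROR', 'FAILED']

    def sorted_results(results):
        # results: iterable of (test_id, status) tuples
        return sorted(results, key=lambda r: ORDER.index(r[1]) if r[1] in ORDER else 0)

    print(sorted_results([('t.a', 'FAILED'), ('t.b', 'PASSED'), ('t.c', 'SKIPPED')]))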
* oeqa/runner: Always show a summary of success/fail/error/skip counts (Richard Purdie, 2018-11-14; 1 file changed, -3/+1)

It's useful to have the counts of success/failure/error/skipped at the end of
the results to allow for easier human reading of what happened.

(From OE-Core rev: 080d8900d470a8e7f929b0c5c2765ad461744fbb)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
* oeqa/runtime/ptest: Inject results+logs into stored json results file (Richard Purdie, 2018-11-07; 1 file changed, -0/+8)

This allows the ptest results from ptest-runner, run in an image, to be
transferred over to the resulting json results output. Each test is given a
pass/skip/fail so individual results can be monitored, and the raw log output
from ptest-runner is also dumped into the results json file, since this makes
after-the-fact debugging much easier. Currently the log output is not split up
per test, but that would make a good future enhancement.

I attempted to implement this as python subTests; however, it failed as the
output was too confusing: subTests don't support any kind of log output
handling, subTest successes aren't logged, and it was making things far more
complex than they needed to be.

We mark ptest-runner as "EXPECTEDFAILURE" since it's unlikely every ptest will
pass currently and we don't want that to fail the whole image test run. It's
assumed there would be later analysis of the json output to determine
regressions. We do have to change the test runner code so that
'unexpectedsuccess' is not a failure.

Also, the test names are manipulated to remove spaces and brackets, with "_"
used as a replacement and any duplicate occurrences truncated.

(From OE-Core rev: a13e088942e2a3c3521e98954a394e61a15234e8)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
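A sketch of the name clean-up described in the last paragraph, under my reading of "duplicate occurrences truncated" as collapsing repeated underscores; not the actual code:

    import re

    def sanitise_ptest_name(name):
        name = re.sub(r'[ ()\[\]]', '_', name)   # spaces and brackets become "_"
        return re.sub(r'_+', '_', name)          # collapse runs of "_"

    print(sanitise_ptest_name('glibc ptest (part 2)'))   # glibc_ptest_part_2_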
* oeqa/core/runner: Don't add empty log entries (Richard Purdie, 2018-10-29; 1 file changed, -1/+4)

There is no point in adding empty log entries to the json result files, only
add them if there is log data.

(From OE-Core rev: da6aef0b6bd4655805f7f743858d4e04341f75ed)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
* oeqa/core/runner: write testresult to json files (Yeoh Ee Peng, 2018-10-29; 1 file changed, -1/+34)

As part of the solution to replace Testopia for storing test results, OEQA
needs to output the testresult into a single json file, where the json
testresult file will be stored in a git repository by the future
test-case-management tools.

The json testresult file will store more than one set of results, where each
set of results is uniquely identified by a result_id. The result_id would be
something like "runtime-qemux86-core-image-sato", meaning a runtime test with
target machine equal to qemux86, running on the core-image-sato image. The
json testresult file will only store the latest test content for a given
result_id.

The json testresult file contains the configuration (eg. COMMIT, BRANCH,
MACHINE, IMAGE), result (eg. PASSED, FAILED, ERROR), test log, and result_id.

Based on the destination json testresult file directory provided, multiple
instances of bitbake could be trying to write the json testresult to a single
testresult file, so a lockfile alongside the results file directory is used to
prevent races.

Also, the library class inside this patch will be reused by the future
test-case-management tools to write json testresults for manually executed
test cases.

(From OE-Core rev: 00e03b5004f1eb6d59295544b3a8620504278f51)

Signed-off-by: Yeoh Ee Peng <ee.peng.yeoh@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
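A simplified stand-in for the locking and result_id keying described above (the real code uses a lockfile helper alongside the results directory; fcntl here is just for the sketch):

    import fcntl
    import json
    import os

    def write_testresult(json_file, result_id, configuration, result):
        with open(json_file + '.lock', 'w') as lock:
            fcntl.flock(lock, fcntl.LOCK_EX)
            try:
                data = {}
                if os.path.exists(json_file):
                    with open(json_file) as f:
                        data = json.load(f)
                # Only the latest set of results is kept for a given result_id.
                data[result_id] = {'configuration': configuration, 'result': result}
                with open(json_file, 'w') as f:
                    json.dump(data, f, sort_keys=True, indent=4)
            finally:
                fcntl.flock(lock, fcntl.LOCK_UN)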
* oeqa/core/runner: refactor for OEQA to write json testresult (Yeoh Ee Peng, 2018-10-18; 1 file changed, -41/+29)

Refactor the original _getDetailsNotPassed method to return testresult details
(test status and log), which will be reused by future OEQA code to write json
testresult. Take the opportunity to consolidate and simplify the logic used to
gather test status and log within the TestResult instance.

(From OE-Core rev: 79ee7d1c371a86edeb61c99679985118da657e5d)

Signed-off-by: Yeoh Ee Peng <ee.peng.yeoh@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
* oeqa/runner: Print any errors/failures early (Richard Purdie, 2018-08-23; 1 file changed, -0/+8)

It's a pain to have to wait until oe-selftest finishes to see the failures,
for example.

(From OE-Core rev: 4c499a1b10a0c2647b6a753b8f9cd934ae4ad0da)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
* oeqa/runner: Use the proper logger functions instead of print() (Richard Purdie, 2018-08-23; 1 file changed, -1/+1)

(From OE-Core rev: 5ccd2284e5dd994230e9e229b7931d049c9f46c0)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
* oeqa: Remove xmlrunner (Richard Purdie, 2018-07-18; 1 file changed, -45/+9)

This isn't present on modern distros by default and doesn't work with
testtools, requiring multiple code paths in the code. Remove it in favour of
finding a better replacement for results collection/analysis.

(From OE-Core rev: 8001d933a8dc86004db014777f094d718086687d)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
* oeqa/runner: Simplify code (Richard Purdie, 2018-07-18; 1 file changed, -10/+5)

There doesn't appear to be any reason we need this _results indirection any
more, so remove it.

(From OE-Core rev: b618261811c48ff3b98eab1b340a8cd09ef183c6)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
* oeqa: Add selftest parallelisation support (Richard Purdie, 2018-07-18; 1 file changed, -4/+20)

This allows oe-selftest to take a -j option which specifies how much test
parallelisation to use. Currently this is "module" based, with each module
being split and run in a separate build directory. Further splitting could be
done, but this seems a good compromise between test setup and parallelism.

You need python-testtools and python-subunit installed to use this, but only
when the -j option is specified.

See notes posted to the openembedded-architecture list for more details about
the design choices here. Some of this functionality may make more sense in the
oeqa core ultimately.

(From OE-Core rev: 326ababfd620ae5ea29bf486b9d68ba3d60cad30)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
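A heavily reduced stand-in for the -j behaviour: the real implementation splits per module into separate build directories and streams results back over subunit/testtools, whereas this sketch just runs module suites in worker processes.

    import multiprocessing
    import unittest

    def run_module(module_name):
        suite = unittest.defaultTestLoader.loadTestsFromName(module_name)
        result = unittest.TextTestRunner(buffer=True).run(suite)
        return (module_name, result.testsRun, len(result.failures) + len(result.errors))

    def run_parallel(module_names, jobs):
        with multiprocessing.Pool(jobs) as pool:
            return pool.map(run_module, module_names)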
* oeqa: Default to buffer mode for tests (Richard Purdie, 2018-07-16; 1 file changed, -5/+0)

Currently some tests run in buffer mode and some don't. Those that don't can
corrupt stdout/stderr. Switch to using buffer mode everywhere so we're
consistent.

If there is useful output on stdout/stderr, it will be displayed if the test
fails.

(From OE-Core rev: 978548c0abde2cb94c2782538552f39bdf2bf630)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
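The behaviour being standardised on is unittest's own buffer mode; a small self-contained example:

    import unittest

    class Noisy(unittest.TestCase):
        def test_quiet_pass(self):
            # With buffer=True this output is captured and discarded because
            # the test passes; it would be replayed if the test failed.
            print("you will not see this")

    unittest.TextTestRunner(buffer=True).run(
        unittest.defaultTestLoader.loadTestsFromTestCase(Noisy))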
* oeqa/runner: Ensure we don't print misleading results output (Richard Purdie, 2018-07-16; 1 file changed, -3/+11)

The current code assumes that if something isn't a failure of some kind, it
was a pass. When test case IDs weren't matching, this led to very confusing
output where things would fail, then be listed as passing.

This adds code to track successes, ensuring we don't end up in this position
again, with unmatched entries being listed as UNKNOWN.

(From OE-Core rev: 4374c296d8963e4f6a1aa7bef7983ad0a1c2fcff)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
* oeqa/core/runner: Improve test case comparison (Richard Purdie, 2018-07-16; 1 file changed, -2/+2)

We can directly compare the test case IDs rather than their representations;
then, if we're using subunit to split the tests, the comparisons still work as
intended.

(From OE-Core rev: 72e5f46f75454ba4c445c65c1cbc616a9e72fc6e)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
* oeqa/runner: Pass the value of buffer, don't force to True (Richard Purdie, 2017-11-11; 1 file changed, -1/+1)

The value could be False, in which case we should pass that through.

(From OE-Core rev: d0a3379bbcbcd8153bd59ccdb56d40fff7ad6c6b)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
* oeqa/runqemu: Only show stdout/stderr upon test failure (Richard Purdie, 2017-11-09; 1 file changed, -0/+8)

In general we don't need to see the output of runqemu; however, if it fails we
do. Use the buffer option that already exists in TestResult, but allow us to
trigger it on a per-test basis.

(From OE-Core rev: 86bef95bccbe9dad3dc3445ea365439861966179)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
* oeqa/core/runner: OEStreamLogger don't buffer test execution writes (Aníbal Limón, 2017-07-30; 1 file changed, -4/+8)

Since the OEQA framework uses Python logging functionality to report test
results, there is a class that wraps PyUnit writes into logging commands
(OEStreamLogger). Don't buffer the writes made during the actual test
execution, so we have insight into what is currently executing.

This fix changes the test output format a little, adding a '\n' before the
test result. For example:

From:

test_nonmatching_checksum (lic_checksum.LicenseTests) ... ok

To:

test_nonmatching_checksum (lic_checksum.LicenseTests) ...
ok

This is because of the new line added by the PyUnit StreamLogger; currently we
don't have a way to identify when a test execution starts at report level
(write msg).

[YOCTO #11827]

(From OE-Core rev: 4c2276469f58a88f864eb374c00dbbaace702de4)

Signed-off-by: Aníbal Limón <anibal.limon@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
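An illustrative adapter in the spirit of OEStreamLogger (not the actual class): every non-empty write from the unittest stream is forwarded to the logger immediately instead of being held back until the run finishes.

    import logging

    class StreamToLogger(object):
        def __init__(self, logger):
            self.logger = logger

        def write(self, msg):
            # Forward non-empty writes straight to the logger, unbuffered.
            if msg.strip():
                self.logger.info(msg.rstrip('\n'))

        def flush(self):
            pass

    logging.basicConfig(level=logging.INFO)
    StreamToLogger(logging.getLogger("oeqa")).write("test_example ... ok\n")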
* oeqa/core/runner: Don't log details twice if test fails (Aníbal Limón, 2017-06-13; 1 file changed, -2/+0)

The details of a test failure appear higher up in the unittest output, so
don't log the actual failure twice.

[YOCTO #11622]

(From OE-Core rev: 0f2e81c2a4458ad0ec6bab2710952ac2c2bbf1af)

Signed-off-by: Aníbal Limón <anibal.limon@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
* oeqa/core/loader: Allow unittest.TestCase's to be executed (Aníbal Limón, 2017-06-12; 1 file changed, -11/+14)

Currently there is a restriction to only execute tests that inherit from
OETestCase, but in some circumstances the features of the OEQA framework
aren't needed, so we need to support basic unittests.

[YOCTO #10828]

(From OE-Core rev: baac26f1b36e89e07637b738dd31ec7356f05a02)

Signed-off-by: Aníbal Limón <anibal.limon@linux.intel.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
* oeqa/core: Add list tests support in context and runner (Aníbal Limón, 2017-06-02; 1 file changed, -0/+98)

A common operation is to list tests; currently only selftest supports it. This
change enables this functionality in the core framework.

(From OE-Core rev: 7e803f1a855d3091a772b13efd3cc8e9c0c766e9)

Signed-off-by: Aníbal Limón <anibal.limon@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
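The core of a test lister is just a walk over the (possibly nested) TestSuite; a minimal sketch, not the OETestRunner listing code:

    import unittest

    def list_tests(suite):
        for item in suite:
            if isinstance(item, unittest.TestSuite):
                list_tests(item)
            else:
                print(item.id())

    list_tests(unittest.defaultTestLoader.discover('.'))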
* oeqa/core/runner: OETestResult add internal _tc_map_results (Aníbal Limón, 2017-06-02; 1 file changed, -0/+2)

This method assigns results into the TestContext; create an internal one to
support changing the implementation in the threaded version.

[YOCTO #11450]

(From OE-Core rev: 8a6a9cb816d78e4cf71b79c35b579918d31053f2)

Signed-off-by: Aníbal Limón <anibal.limon@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
* oeqa/core: Move OETestContext.log{Summary, Details} into OETestResult (Aníbal Limón, 2017-06-02; 1 file changed, -0/+88)

These methods are used to write the results into the log, so it makes sense to
have them defined in OETestResult, because they are a formatting of the result
itself.

[YOCTO #11450]

(From OE-Core rev: 33a783f59ed4e232f41f8b09dfa7955f2ddc2f80)

Signed-off-by: Aníbal Limón <anibal.limon@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
* oeqa/core/runner: OETestResult remove unneeded override of startTest (Aníbal Limón, 2017-06-02; 1 file changed, -3/+0)

I overrode this method earlier to keep track of results and forgot to remove
it; it is no longer needed.

(From OE-Core rev: 63606ffaaac1b84ddcad8a1c1006f8110050e20e)

Signed-off-by: Aníbal Limón <anibal.limon@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
* oeqa/core: Don't expose OEStreamLogger in OETestContext (Aníbal Limón, 2017-06-02; 1 file changed, -0/+3)

The OEStreamLogger class is used to redirect PyUnit output to a certain
logger, so there is no need to expose it at the OETestContext level; only
OETestRunner needs to know about it.

[YOCTO #11450]

(From OE-Core rev: 8787fba3df8acd9d2438669d20b1a5060caa9022)

Signed-off-by: Aníbal Limón <anibal.limon@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
* oeqa/core/runner: Append PID in the test result folder name (Leonardo Sandoval, 2017-04-13; 1 file changed, -1/+1)

A case was found (see the bugzilla entry) where two runners were running in
the same second, creating identical test result folders, so one of them (the
second runner) was not able to create the folder because the other had already
created it, raising the following exception (much text removed from the log):

    NOTE: Executing RunQueue Tasks
    NOTE: Running task 1 of 2 (/home/pokybuild/yocto-autobuilder/yocto-worker/nightly-mips/build/meta/recipes-core/images/core-image-minimal.bb:do_testsdkext)
    NOTE: Running task 2 of 2 (/home/pokybuild/yocto-autobuilder/yocto-worker/nightly-mips/build/meta/recipes-sato/images/core-image-sato.bb:do_testsdkext)
    NOTE: recipe core-image-sato-1.0-r0: task do_testsdkext: Started
    NOTE: recipe core-image-minimal-1.0-r0: task do_testsdkext: Started
    .
    .
    The stack trace of python calls that resulted in this exception/failure was:
    File: 'exec_python_func() autogenerated', lineno: 2, function: <module>
         0001:
     *** 0002:do_testsdkext(d)
         0003:
    File: '/home/pokybuild/yocto-autobuilder/yocto-worker/nightly-mips/build/meta/classes/testsdk.bbclass', lineno: 188, function: do_testsdkext
         0184:
         0185:testsdkext_main[vardepsexclude] =+ "BB_ORIGENV"
         0186:
         0187:python do_testsdkext() {
     *** 0188:    testsdkext_main(d)
         0189:}
         0190:addtask testsdkext
         0191:do_testsdkext[nostamp] = "1"
         0192:
    File: '/home/pokybuild/yocto-autobuilder/yocto-worker/nightly-mips/build/meta/classes/testsdk.bbclass', lineno: 171, function: testsdkext_main
         0167:    except Exception as e:
         0168:        import traceback
         0169:        bb.fatal("Loading tests failed:\n%s" % traceback.format_exc())
         0170:
     *** 0171:    result = tc.runTests()
    .
    .
    File: '/usr/lib64/python3.5/os.py', lineno: 241, function: makedirs
         0237:        cdir = bytes(curdir, 'ASCII')
         0238:    if tail == cdir:           # xxx/newdir/. exists if xxx/newdir exists
         0239:        return
         0240:    try:
     *** 0241:        mkdir(name, mode)
         0242:    except OSError:
         0243:        # Cannot rely on checking for EEXIST, since the operating system
         0244:        # could give priority to other errors like EACCES or EROFS
         0245:        if not exist_ok or not path.isdir(name):
    Exception: FileExistsError: [Errno 17] File exists: '/home/pokybuild/yocto-autobuilder/yocto-worker/nightly-mips/build/build/TestResults_20170409130114'

[YOCTO #11318]

(From OE-Core rev: 10aa09d39c1b70ce2a88f59601f1f8dbc96ed817)

Signed-off-by: Leonardo Sandoval <leonardo.sandoval.gonzalez@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
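A sketch of the fix itself: including the PID makes the result directory name unique even when two runners start within the same second (the naming scheme here is illustrative).

    import os
    import time

    def result_dir(base):
        return os.path.join(base, 'TestResults_%s_%s' % (
            time.strftime('%Y%m%d%H%M%S'), os.getpid()))

    print(result_dir('/tmp'))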
* oeqa/core/runner.py: OEStreamLogger fix logging (Aníbal Limón, 2017-02-15; 1 file changed, -1/+1)

Change the conditional to avoid losing stack traces when logging into the
logger. The logic only needs to avoid logging empty lines.

(From OE-Core rev: 545dd180cc27a69732f2332565038bd06d01c014)

Signed-off-by: Aníbal Limón <anibal.limon@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
* oeqa/core: Add base OEQA framework (Aníbal Limón, 2017-01-23; 1 file changed, -0/+76)

case: Defines the OETestCase base class that provides custom methods/attrs
defined by the framework. Every OETestCase instance contains a reference to
the test data (d), the test context (tc) and the logger. Also implements
_oe{SetUp,TearDown}Class to do the special handling of OEQA decorators and
validations.

runner: Defines OETestRunner/OETestResult with support for RAW and XML result
logs.

exception: Custom exceptions related to the OEQA framework, based on the
OEQAException class.

[YOCTO #10230]
[YOCTO #10233]

(From OE-Core rev: c466086ccc4d4bb02d578a821cfb945945bfd529)

Signed-off-by: Aníbal Limón <anibal.limon@linux.intel.com>
Signed-off-by: Mariano Lopez <mariano.lopez@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>