author     Richard Purdie <richard.purdie@linuxfoundation.org>  2018-11-02 13:13:43 +0000
committer  Richard Purdie <richard.purdie@linuxfoundation.org>  2018-12-16 14:31:27 +0000
commit     ed5c12f11f7e18006f2de9eace333523ffb07002 (patch)
tree       8d98ce5307f7933804e368a581b7e4012667e849 /meta/lib/oeqa/core/runner.py
parent     eee5b0d104e6b36277a79afc9b02fdbffecc6aea (diff)
download   poky-ed5c12f11f7e18006f2de9eace333523ffb07002.tar.gz
oeqa/runtime/ptest: Inject results+logs into stored json results file
This allows the ptest results from ptest-runner, run in an image, to be transferred over to the resulting json results output. Each test is given a pass/skip/fail so individual results can be monitored, and the raw log output from ptest-runner is also dumped into the results json file, which makes after-the-fact debugging much easier. Currently the log output is not split up per test, but that would make a good future enhancement.

I attempted to implement this as python subTests, however it failed as the output was too confusing: subTests don't support any kind of log output handling, subTest successes aren't logged, and it was making things far more complex than they needed to be.

We mark ptest-runner as "EXPECTEDFAILURE" since it's unlikely every ptest will currently pass and we don't want that to fail the whole image test run. It's assumed there would be later analysis of the json output to determine regressions. We do have to change the test runner code so that 'unexpectedsuccess' is not treated as a failure.

Also, the test names are manipulated to replace spaces and brackets with "_", with any duplicate occurrences collapsed.

(From OE-Core rev: a13e088942e2a3c3521e98954a394e61a15234e8)

(From OE-Core rev: 526ceab9d0e43f73635bb92e8dd7763ef75ad33b)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
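[Editorial note: a minimal sketch of the test-name sanitization described above. This is not the actual OE-Core implementation; the helper name and the exact bracket set are assumptions.]

    import re

    def sanitize_test_name(name):
        # Hypothetical helper: replace spaces and brackets with "_",
        # then collapse any run of consecutive "_" into a single one,
        # as the commit message describes.
        name = re.sub(r"[ ()\[\]]", "_", name)
        return re.sub(r"_+", "_", name)

    # e.g. sanitize_test_name("glib (subprocess test)") -> "glib_subprocess_test_"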
Diffstat (limited to 'meta/lib/oeqa/core/runner.py')
-rw-r--r--  meta/lib/oeqa/core/runner.py | 8 ++++++++
1 file changed, 8 insertions(+), 0 deletions(-)
diff --git a/meta/lib/oeqa/core/runner.py b/meta/lib/oeqa/core/runner.py
index 7e0b61b587..9a64e494f4 100644
--- a/meta/lib/oeqa/core/runner.py
+++ b/meta/lib/oeqa/core/runner.py
@@ -107,7 +107,11 @@ class OETestResult(_TestResult):
 
     def logDetails(self, json_file_dir=None, configuration=None, result_id=None):
         self.tc.logger.info("RESULTS:")
+
         result = {}
+        if hasattr(self.tc, "extraresults"):
+            result = self.tc.extraresults
+
         for case_name in self.tc._registry['cases']:
             case = self.tc._registry['cases'][case_name]
 
@@ -126,6 +130,10 @@ class OETestResult(_TestResult):
         tresultjsonhelper = OETestResultJSONHelper()
         tresultjsonhelper.dump_testresult_file(json_file_dir, configuration, result_id, result)
 
+    def wasSuccessful(self):
+        # Override as unexpected successes aren't failures for us
+        return (len(self.failures) == len(self.errors) == 0)
+
 class OEListTestsResult(object):
     def wasSuccessful(self):
         return True
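[Editorial note: a hedged sketch of how a test context might feed data through the extraresults hook added in the first hunk. The class and result-key names are hypothetical; only the hasattr pattern mirrors the diff above.]

    # Hypothetical test context: any object exposing an "extraresults"
    # attribute is picked up by logDetails() and used as the starting
    # result dictionary before per-case statuses are added.
    class DummyTestContext:
        extraresults = {
            "ptestresult.glib2.tst-gvariant": "PASSED",          # per-test status
            "ptestresult.rawlogs": {"log": "ptest-runner output ..."},
        }

    tc = DummyTestContext()
    result = {}
    if hasattr(tc, "extraresults"):
        result = tc.extraresults
    # "result" now starts from the injected ptest data, so the dumped
    # json file carries both the pass/skip/fail statuses and the raw logs.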