diff options
Diffstat (limited to 'documentation/test-manual')
-rw-r--r-- | documentation/test-manual/test-manual-intro.rst                  | 486
-rw-r--r-- | documentation/test-manual/test-manual-test-process.rst           | 103
-rw-r--r-- | documentation/test-manual/test-manual-understand-autobuilder.rst | 287
-rw-r--r-- | documentation/test-manual/test-manual.rst                        |  12
4 files changed, 888 insertions, 0 deletions

diff --git a/documentation/test-manual/test-manual-intro.rst b/documentation/test-manual/test-manual-intro.rst
new file mode 100644
index 0000000000..491c4bad9a
--- /dev/null
+++ b/documentation/test-manual/test-manual-intro.rst
@@ -0,0 +1,486 @@
*****************************************
The Yocto Project Test Environment Manual
*****************************************

.. _test-welcome:

Welcome
=======

Welcome to the Yocto Project Test Environment Manual! This manual is a
work in progress. It contains information about the testing
environment used by the Yocto Project to make sure each major and minor
release works as intended. All of the project's testing infrastructure and
processes are publicly visible and available so that the community can
see what testing is being performed, how it is being done, and the current
status of the tests and the project at any given time. Other
organizations can therefore leverage the process and testing
environment used by the Yocto Project to create their own automated,
production test environment, building upon the foundations from the
project core.

Currently, the Yocto Project Test Environment Manual has no projected
release date. This manual is a work in progress and is being initially
loaded with information from the README files and notes from key
engineers:

- *``yocto-autobuilder2``:* This
  ```README.md`` <http://git.yoctoproject.org/clean/cgit.cgi/yocto-autobuilder2/tree/README.md>`__
  is the main README, which details how to set up the Yocto Project
  Autobuilder. The ``yocto-autobuilder2`` repository represents the
  Yocto Project's console UI plugin to Buildbot and the configuration
  necessary to configure Buildbot to perform the testing the project
  requires.

- *``yocto-autobuilder-helper``:* This
  ```README`` <http://git.yoctoproject.org/clean/cgit.cgi/yocto-autobuilder-helper/tree/README>`__
  and repository contain the Yocto Project Autobuilder Helper scripts and
  configuration. The ``yocto-autobuilder-helper`` repository contains
  the "glue" logic that defines which tests to run and how to run them.
  As a result, it can be used by any Continuous Integration (CI) system
  to run builds, support getting the correct code revisions, configure
  builds and layers, run builds, and collect results. The code is
  independent of any CI system, which means the code can work with Buildbot,
  Jenkins, or others. This repository has a branch per release of the
  project, defining the tests to run on a per-release basis.

.. _test-yocto-project-autobuilder-overview:

Yocto Project Autobuilder Overview
==================================

The Yocto Project Autobuilder collectively refers to the software,
tools, scripts, and procedures used by the Yocto Project to test
released software across supported hardware in an automated and regular
fashion. Basically, during the development of a Yocto Project release,
the Autobuilder tests if things work. The Autobuilder builds all test
targets and runs all the tests.

The Yocto Project now uses the standard upstream
`Buildbot <https://docs.buildbot.net/0.9.15.post1/>`__ (version 9) to
drive its integration and testing. Buildbot Nine has a plug-in interface
that the Yocto Project customizes using code from the
``yocto-autobuilder2`` repository, adding its own console UI plugin. The
resulting UI plug-in allows you to visualize builds in a way suited to
the project's needs.

A ``helper`` layer provides configuration and job management through
scripts found in the ``yocto-autobuilder-helper`` repository. The
``helper`` layer contains the bulk of the build configuration
information and is release-specific, which makes it highly customizable
on a per-project basis. The layer is CI system-agnostic and contains a
number of helper scripts that can generate build configurations from
simple JSON files.
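
As an illustration of the idea only (a hypothetical sketch, not the real
``config.json`` schema or the actual helper scripts), a JSON description of a
target can be turned into BitBake-style configuration lines with a few lines
of Python:

```python
import json

# Hypothetical, simplified example: a JSON "overrides" entry describing a
# target is rendered as configuration assignments. The real schema is
# defined by config.json in the yocto-autobuilder-helper repository.
SAMPLE = """
{
    "overrides": {
        "qemux86-64": {
            "MACHINE": "qemux86-64",
            "DISTRO": "poky"
        }
    }
}
"""

def generate_conf(config_text, target):
    """Render one target's overrides as auto.conf-style assignments."""
    config = json.loads(config_text)
    overrides = config["overrides"][target]
    return "".join('%s = "%s"\n' % (k, v) for k, v in sorted(overrides.items()))

print(generate_conf(SAMPLE, "qemux86-64"))
```

The point is that the build matrix lives in data, not code, so adding a new
target configuration is an edit to a JSON file rather than to the CI system.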

.. note::

   The project uses Buildbot for historical reasons but also because
   many of the project developers have knowledge of Python. It is
   possible to use the outer layers from another Continuous Integration
   (CI) system such as
   `Jenkins <https://en.wikipedia.org/wiki/Jenkins_(software)>`__
   instead of Buildbot.

The following figure shows the Yocto Project Autobuilder stack with a
topology that includes a controller and a cluster of workers:

.. _test-project-tests:

Yocto Project Tests - Types of Testing Overview
===============================================

The Autobuilder tests different elements of the project by using
the following types of tests:

- *Build Testing:* Tests whether specific configurations build by
  varying ```MACHINE`` <&YOCTO_DOCS_REF_URL;#var-MACHINE>`__,
  ```DISTRO`` <&YOCTO_DOCS_REF_URL;#var-DISTRO>`__, other configuration
  options, and the specific target images being built (or ``world``). These
  builds are used to trigger all the different test configurations on the
  Autobuilder. Builds usually cover many different targets for
  different architectures, machines, and distributions, as well as
  different configurations, such as different init systems. The
  Autobuilder tests literally hundreds of configurations and targets.

- *Sanity Checks During the Build Process:* Tests initiated through
  the ```insane`` <&YOCTO_DOCS_REF_URL;#ref-classes-insane>`__
  class. These checks ensure the output of the builds is correct.
  For example, does the ELF architecture in the generated binaries
  match the target system? ARM binaries would not work on a MIPS
  system!

- *Build Performance Testing:* Tests whether or not commonly used steps
  during builds work efficiently and avoid regressions. Tests to time
  commonly used usage scenarios are run through ``oe-build-perf-test``.
  These tests are run on isolated machines so that the time
  measurements of the tests are accurate and no other processes
  interfere with the timing results. The project currently tests
  performance on two different distributions, Fedora and Ubuntu, to
  ensure we have no single point of failure and can ensure the
  different distros work effectively.

- *eSDK Testing:* Image tests initiated through the following command::

     $ bitbake image -c testsdkext

  The tests utilize the ``testsdkext`` class and the ``do_testsdkext``
  task.

- *Feature Testing:* Various scenario-based tests are run through the
  `OpenEmbedded
  Self-Test <&YOCTO_DOCS_REF_URL;#testing-and-quality-assurance>`__
  (oe-selftest). We test oe-selftest on each of the main distributions
  we support.

- *Image Testing:* Image tests initiated through the following command::

     $ bitbake image -c testimage

  The tests utilize the
  ```testimage*`` <&YOCTO_DOCS_REF_URL;#ref-classes-testimage*>`__
  classes and the
  ```do_testimage`` <&YOCTO_DOCS_REF_URL;#ref-tasks-testimage>`__ task.

- *Layer Testing:* The Autobuilder can test whether
  specific layers work with the rest of the system. The layers tested
  may be selected by members of the project. Some key community layers
  are also tested periodically.

- *Package Testing:* A Package Test (ptest) runs tests against packages
  built by the OpenEmbedded build system on the target machine. See the
  "`Testing Packages With
  ptest <&YOCTO_DOCS_DEV_URL;#testing-packages-with-ptest>`__" section
  in the Yocto Project Development Tasks Manual and the
  "`Ptest <&YOCTO_WIKI_URL;/wiki/Ptest>`__" Wiki page for more
  information on ptest.

- *SDK Testing:* Image tests initiated through the following command::

     $ bitbake image -c testsdk

  The tests utilize the
  ```testsdk`` <&YOCTO_DOCS_REF_URL;#ref-classes-testsdk>`__ class and
  the ``do_testsdk`` task.

- *Unit Testing:* Unit tests on various components of the system run
  through ``oe-selftest`` and
  ```bitbake-selftest`` <&YOCTO_DOCS_REF_URL;#testing-and-quality-assurance>`__.

- *Automatic Upgrade Helper:* This target tests whether new versions of
  software are available and whether we can automatically upgrade to
  those new versions. If so, this target emails the maintainers with a
  patch to let them know this is possible.

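The ELF-architecture sanity check mentioned above can be illustrated with a
short sketch. This is only an illustration of the principle; the real checks
live in the ``insane`` class and use OE-Core's own ELF handling:

```python
import struct

# Illustrative only: map a few e_machine values to architecture names.
EM = {3: "x86", 8: "mips", 40: "arm", 62: "x86-64", 183: "aarch64"}

def elf_machine(header):
    """Return the architecture name encoded in an ELF file header."""
    if header[:4] != b"\x7fELF":
        raise ValueError("not an ELF file")
    # e_ident[EI_DATA] at offset 5 selects the byte order
    endian = "<" if header[5] == 1 else ">"
    # e_machine is a 16-bit field at offset 18 (after e_ident and e_type)
    (machine,) = struct.unpack_from(endian + "H", header, 18)
    return EM.get(machine, "unknown")

# A minimal fake little-endian x86-64 header: magic, 64-bit class, LSB data,
# version, padding, then e_type and e_machine
hdr = b"\x7fELF" + bytes([2, 1, 1]) + b"\x00" * 9 + struct.pack("<HH", 2, 62)
print(elf_machine(hdr))  # x86-64
```

A sanity check like this, run against every generated binary, catches a
mis-targeted toolchain long before anyone tries to boot the image.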
.. _test-test-mapping:

How Tests Map to Areas of Code
==============================

Tests map into the codebase as follows:

- *bitbake-selftest*:

  These tests are self-contained and test BitBake as well as its APIs,
  which include the fetchers. The tests are located in
  ``bitbake/lib/*/tests``.

  From within the BitBake repository, run the following::

     $ bitbake-selftest

  To skip tests that access the Internet, use the ``BB_SKIP_NETTEST``
  variable when running ``bitbake-selftest`` as follows::

     $ BB_SKIP_NETTEST=yes bitbake-selftest

  Use this variable when you wish to skip tests that access the network,
  which are mostly necessary to test the fetcher modules.

  The default output is quiet and just prints a summary of what was
  run. To see more information, there is a verbose option::

     $ bitbake-selftest -v

  To specify individual test modules to run, append the test module
  name to the ``bitbake-selftest`` command. For example, to specify the
  tests for the ``bb.tests.data`` module, run::

     $ bitbake-selftest bb.tests.data

  You can also specify individual tests by defining the full name and
  module plus the class path of the test, for example::

     $ bitbake-selftest bb.tests.data.TestOverrides.test_one_override

  The tests are based on `Python
  unittest <https://docs.python.org/3/library/unittest.html>`__.

- *oe-selftest*:

  - These tests use OE to test the workflows, which include testing
    specific features, behaviors of tasks, and API unit tests.

  - The tests can take advantage of parallelism through the "-j"
    option, which can specify a number of threads to spread the tests
    across. Note that all tests from a given class of tests will run
    in the same thread. To parallelize large numbers of tests you can
    split the class into multiple units.

  - The tests are based on Python unittest.

  - The code for the tests resides in
    ``meta/lib/oeqa/selftest/cases/``.

  - To run all the tests, enter the following command::

       $ oe-selftest -a

  - To run a specific test, use the following command form, where
    ``testname`` is the name of the specific test::

       $ oe-selftest -r testname

    For example, the following command would run the ``tinfoil``
    ``getVar`` API test::

       $ oe-selftest -r tinfoil.TinfoilTests.test_getvar

    It is also possible to run a set of tests. For example, the
    following command will run all of the ``tinfoil`` tests::

       $ oe-selftest -r tinfoil

- *testimage:*

  - These tests build an image, boot it, and run tests against the
    image's content.

  - The code for these tests resides in
    ``meta/lib/oeqa/runtime/cases/``.

  - You need to set the
    ```IMAGE_CLASSES`` <&YOCTO_DOCS_REF_URL;#var-IMAGE_CLASSES>`__
    variable as follows::

       IMAGE_CLASSES += "testimage"

  - Run the tests using the following command form::

       $ bitbake image -c testimage

- *testsdk:*

  - These tests build an SDK, install it, and then run tests against
    that SDK.

  - The code for these tests resides in ``meta/lib/oeqa/sdk/cases/``.

  - Run the tests using the following command form::

       $ bitbake image -c testsdk

- *testsdk_ext:*

  - These tests build an extended SDK (eSDK), install that eSDK, and
    run tests against the eSDK.

  - The code for these tests resides in ``meta/lib/oeqa/esdk``.

  - To run the tests, use the following command form::

       $ bitbake image -c testsdkext

- *oe-build-perf-test:*

  - These tests run through commonly used usage scenarios and measure
    the performance times.

  - The code for these tests resides in ``meta/lib/oeqa/buildperf``.

  - To run the tests, use the following command form::

       $ oe-build-perf-test options

    The command takes a number of options,
    such as where to place the test results. The Autobuilder Helper
    Scripts include the ``build-perf-test-wrapper`` script with
    examples of how to use ``oe-build-perf-test`` from the command
    line.

    Use the ``oe-git-archive`` command to store test results into a
    Git repository.

    Use the ``oe-build-perf-report`` command to generate text reports
    and HTML reports with graphs of the performance data. For
    examples, see
    http://downloads.yoctoproject.org/releases/yocto/yocto-2.7/testresults/buildperf-centos7/perf-centos7.yoctoproject.org_warrior_20190414204758_0e39202.html
    and
    http://downloads.yoctoproject.org/releases/yocto/yocto-2.7/testresults/buildperf-centos7/perf-centos7.yoctoproject.org_warrior_20190414204758_0e39202.txt.

  - The tests are contained in ``lib/oeqa/buildperf/test_basic.py``.

Test Examples
=============

This section provides example tests for each of the tests listed in the
`How Tests Map to Areas of Code <#test-test-mapping>`__ section.

For oeqa tests, testcases reside in the main test directory at
``meta/lib/oeqa/selftest/cases``.

For bitbake-selftest, testcases reside in the ``lib/bb/tests/``
directory.

.. _bitbake-selftest-example:

``bitbake-selftest``
--------------------

A simple test example from ``lib/bb/tests/data.py`` is::

   class DataExpansions(unittest.TestCase):
       def setUp(self):
           self.d = bb.data.init()
           self.d["foo"] = "value_of_foo"
           self.d["bar"] = "value_of_bar"
           self.d["value_of_foo"] = "value_of_'value_of_foo'"

       def test_one_var(self):
           val = self.d.expand("${foo}")
           self.assertEqual(str(val), "value_of_foo")

In this example, a ``DataExpansions`` class of tests is created,
derived from standard Python unittest. The class has a common ``setUp``
function which is shared by all the tests in the class. A simple test is
then added to test that when a variable is expanded, the correct value
is found.

BitBake selftests are straightforward Python unittest tests. Refer to the
Python unittest documentation for additional information on writing
these tests at: https://docs.python.org/3/library/unittest.html.

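To experiment with the same test shape outside of BitBake, the datastore can
be replaced by a plain dictionary and a minimal ``${var}`` expander. This is a
self-contained stand-in for ``bb.data``, purely for illustration:

```python
import re
import unittest

def expand(d, s):
    """Minimal stand-in for bb.data expansion: replace ${var} from a dict."""
    return re.sub(r"\$\{(\w+)\}", lambda m: d.get(m.group(1), m.group(0)), s)

class DataExpansionsDemo(unittest.TestCase):
    def setUp(self):
        # Shared by all tests in the class, like the real setUp above
        self.d = {"foo": "value_of_foo", "bar": "value_of_bar"}

    def test_one_var(self):
        self.assertEqual(expand(self.d, "${foo}"), "value_of_foo")

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.TestLoader().loadTestsFromTestCase(DataExpansionsDemo))
print(result.wasSuccessful())  # True
```

The real ``bb.data`` expansion is considerably richer (nested references,
Python expressions, overrides), but the test structure is identical.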
.. _oe-selftest-example:

``oe-selftest``
---------------

These tests are more complex due to the setup required behind the scenes
for full builds. Rather than directly using Python's unittest, the code
wraps most of the standard objects. The tests can be simple, such as
testing a command from within the OE build environment using the
following example::

   class BitbakeLayers(OESelftestTestCase):
       def test_bitbakelayers_showcrossdepends(self):
           result = runCmd('bitbake-layers show-cross-depends')
           self.assertTrue('aspell' in result.output,
                           msg="No dependencies were shown. bitbake-layers show-cross-depends output: %s" % result.output)

This example, taken from ``meta/lib/oeqa/selftest/cases/bblayers.py``,
creates a testcase from the ``OESelftestTestCase`` class, derived
from ``unittest.TestCase``, which runs the ``bitbake-layers`` command
and checks the output to ensure it contains something we know should be
there.

The ``oeqa.utils.commands`` module contains helpers which can assist
with common tasks, including:

- *Obtaining the value of a bitbake variable:* Use
  ``oeqa.utils.commands.get_bb_var()`` or use
  ``oeqa.utils.commands.get_bb_vars()`` for more than one variable.

- *Running a bitbake invocation for a build:* Use
  ``oeqa.utils.commands.bitbake()``

- *Running a command:* Use ``oeqa.utils.commands.runCmd()``

There is also a ``oeqa.utils.commands.runqemu()`` function for launching
the ``runqemu`` command for testing things within a running, virtualized
image.

You can run these tests in parallel. Parallelism works per test class,
so tests within a given test class should always run in the same build,
while tests in different classes or modules may be split into different
builds. There is no data store available for these tests since the tests
launch the ``bitbake`` command and exist outside of its context. As a
result, common bitbake library functions (bb.\*) are also unavailable.

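Conceptually, ``runCmd()`` is a thin wrapper around subprocess execution that
returns an object exposing ``status`` and ``output`` attributes, which is what
the assertion in the example above inspects. A simplified stand-in (not the
actual oeqa implementation) might look like:

```python
import subprocess

class CommandResult:
    """Holds the exit status and combined output of a command."""
    def __init__(self, status, output):
        self.status = status
        self.output = output

def runCmd(command):
    """Run a shell command, capturing stdout and stderr together."""
    proc = subprocess.run(command, shell=True,
                          stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
                          universal_newlines=True)
    return CommandResult(proc.returncode, proc.stdout)

result = runCmd('echo hello')
print(result.status, result.output.strip())  # 0 hello
```

Bundling status and output into one result object keeps the test assertions
short and lets failure messages quote the full command output.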
.. _testimage-example:

``testimage``
-------------

These tests are run once an image is up and running, either on target
hardware or under QEMU. As a result, they are assumed to be running in a
target image environment, as opposed to a host build environment. A
simple example from ``meta/lib/oeqa/runtime/cases/python.py`` contains
the following::

   class PythonTest(OERuntimeTestCase):
       @OETestDepends(['ssh.SSHTest.test_ssh'])
       @OEHasPackage(['python3-core'])
       def test_python3(self):
           cmd = "python3 -c \"import codecs; print(codecs.encode('Uryyb, jbeyq', 'rot13'))\""
           status, output = self.target.run(cmd)
           msg = 'Exit status was not 0. Output: %s' % output
           self.assertEqual(status, 0, msg=msg)

In this example, the ``OERuntimeTestCase`` class wraps
``unittest.TestCase``. Within the test, ``self.target`` represents the
target system, where commands can be run on it using the ``run()``
method.

To ensure certain test or package dependencies are met, you can use the
``OETestDepends`` and ``OEHasPackage`` decorators. For example, the test
in this example would only make sense if ``python3-core`` is installed in
the image.

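The effect of a package-gating decorator can be sketched with standard
unittest machinery. This is a hypothetical illustration of the idea behind
``OEHasPackage`` (the real decorator inspects the image manifest, not a
hard-coded set):

```python
import unittest

# Hypothetical stand-in for the image's package manifest
IMAGE_MANIFEST = {"ssh", "python3-core"}

def has_package(pkgs):
    """Skip the decorated test unless all named packages are present."""
    def decorator(func):
        missing = [p for p in pkgs if p not in IMAGE_MANIFEST]
        if missing:
            return unittest.skip("missing packages: %s" % ", ".join(missing))(func)
        return func
    return decorator

class PythonDemo(unittest.TestCase):
    @has_package(["python3-core"])
    def test_present(self):
        self.assertTrue(True)

    @has_package(["perl"])
    def test_skipped(self):
        self.fail("should have been skipped")

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.TestLoader().loadTestsFromTestCase(PythonDemo))
print(result.wasSuccessful())  # True (the perl test is skipped, not run)
```

Skipping rather than failing keeps the same test suite usable across images
with very different package sets.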
.. _testsdk_ext-example:

``testsdk_ext``
---------------

These tests are run against built extensible SDKs (eSDKs). The tests can
assume that the eSDK environment has already been set up. An example from
``meta/lib/oeqa/sdk/cases/devtool.py`` contains the following::

   class DevtoolTest(OESDKExtTestCase):
       @classmethod
       def setUpClass(cls):
           myapp_src = os.path.join(cls.tc.esdk_files_dir, "myapp")
           cls.myapp_dst = os.path.join(cls.tc.sdk_dir, "myapp")
           shutil.copytree(myapp_src, cls.myapp_dst)
           subprocess.check_output(['git', 'init', '.'], cwd=cls.myapp_dst)
           subprocess.check_output(['git', 'add', '.'], cwd=cls.myapp_dst)
           subprocess.check_output(['git', 'commit', '-m', "'test commit'"], cwd=cls.myapp_dst)

       @classmethod
       def tearDownClass(cls):
           shutil.rmtree(cls.myapp_dst)

       def _test_devtool_build(self, directory):
           self._run('devtool add myapp %s' % directory)
           try:
               self._run('devtool build myapp')
           finally:
               self._run('devtool reset myapp')

       def test_devtool_build_make(self):
           self._test_devtool_build(self.myapp_dst)

In this example, the ``devtool`` command is tested to see whether a
sample application can be built with the ``devtool build`` command
within the eSDK.

.. _testsdk-example:

``testsdk``
-----------

These tests are run against built SDKs. The tests can assume that an SDK
has already been extracted and its environment file has been sourced. A
simple example from ``meta/lib/oeqa/sdk/cases/python2.py`` contains the
following::

   class Python3Test(OESDKTestCase):
       def setUp(self):
           if not (self.tc.hasHostPackage("nativesdk-python3-core") or
                   self.tc.hasHostPackage("python3-core-native")):
               raise unittest.SkipTest("No python3 package in the SDK")

       def test_python3(self):
           cmd = "python3 -c \"import codecs; print(codecs.encode('Uryyb, jbeyq', 'rot13'))\""
           output = self._run(cmd)
           self.assertEqual(output, "Hello, world\n")

In this example, if ``nativesdk-python3-core`` has been installed into
the SDK, the code runs the ``python3`` interpreter with a basic command
to check it is working correctly. The test would only run if Python 3 is
installed in the SDK.

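The command in this test simply rot13-decodes an obfuscated string; the
expected behaviour can be reproduced directly with the same standard-library
call:

```python
import codecs

# The test above runs this inside the SDK's python3; the rot13 codec
# decodes the obfuscated string back to plain text.
output = codecs.encode('Uryyb, jbeyq', 'rot13')
print(output)  # Hello, world
```

Using an obfuscated input means a correct result proves the interpreter
actually executed the code, rather than echoing the input back.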
.. _oe-build-perf-test-example:

``oe-build-perf-test``
----------------------

The performance tests usually measure how long operations take and the
resource utilisation as that happens. An example from
``meta/lib/oeqa/buildperf/test_basic.py`` contains the following::

   class Test3(BuildPerfTestCase):
       def test3(self):
           """Bitbake parsing (bitbake -p)"""
           # Drop all caches and parse
           self.rm_cache()
           oe.path.remove(os.path.join(self.bb_vars['TMPDIR'], 'cache'), True)
           self.measure_cmd_resources(['bitbake', '-p'], 'parse_1',
                                      'bitbake -p (no caches)')
           # Drop tmp/cache
           oe.path.remove(os.path.join(self.bb_vars['TMPDIR'], 'cache'), True)
           self.measure_cmd_resources(['bitbake', '-p'], 'parse_2',
                                      'bitbake -p (no tmp/cache)')
           # Parse with fully cached data
           self.measure_cmd_resources(['bitbake', '-p'], 'parse_3',
                                      'bitbake -p (cached)')

This example shows how three specific parsing timings are measured, with
and without various caches, to show how BitBake's parsing performance
trends over time.

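Conceptually, ``measure_cmd_resources()`` times a command and records the
measurement under a name and legend. A much-simplified sketch of that idea
(the real implementation also records CPU usage, I/O, and buildstats data)
is:

```python
import subprocess
import time

def measure_cmd(cmd, name, legend):
    """Time a command and report the wall-clock duration under a name."""
    start = time.perf_counter()
    subprocess.run(cmd, check=True)
    elapsed = time.perf_counter() - start
    print("%s: %s: %.3fs" % (name, legend, elapsed))
    return elapsed

# Hypothetical usage mirroring the parse_1/parse_2/parse_3 pattern above
measure_cmd(["python3", "-c", "pass"], "parse_1", "python3 startup (no caches)")
```

Giving each measurement a stable name is what lets ``oe-build-perf-report``
plot the same metric across many builds and reveal regressions.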
.. _test-writing-considerations:

Considerations When Writing Tests
=================================

When writing good tests, there are several things to keep in mind. Since
things running on the Autobuilder are accessed concurrently by multiple
workers, consider the following:

**Running "cleanall" is not permitted.**

This can delete files from ``DL_DIR`` which would potentially break other
builds running in parallel. If this is required, ``DL_DIR`` must be set to
an isolated directory.

**Running "cleansstate" is not permitted.**

This can delete files from ``SSTATE_DIR`` which would potentially break
other builds running in parallel. If this is required, ``SSTATE_DIR`` must
be set to an isolated directory. Alternatively, you can use the "-f"
option with the ``bitbake`` command to "taint" tasks by changing the
sstate checksums to ensure sstate cache items will not be reused.

**Tests should not change the metadata.**

This is particularly true for oe-selftests, since these can run in
parallel and changing metadata leads to changing checksums, which
confuses BitBake while running in parallel. If changing metadata is
necessary, copy layers to a temporary location and modify them there.
Some tests, such as the devtool tests, need to change metadata; to
prevent the metadata from being changed permanently, they set up
temporary copies of that data first.
diff --git a/documentation/test-manual/test-manual-test-process.rst b/documentation/test-manual/test-manual-test-process.rst
new file mode 100644
index 0000000000..19c9b565de
--- /dev/null
+++ b/documentation/test-manual/test-manual-test-process.rst
@@ -0,0 +1,103 @@
***********************************
Project Testing and Release Process
***********************************

.. _test-daily-devel:

Day to Day Development
======================

This section details how the project tests changes, through automation
on the Autobuilder or with the assistance of QA teams, through to making
releases.

The project aims to test changes against our test matrix before those
changes are merged into the master branch. As such, changes are queued
up in batches either in the ``master-next`` branch in the main trees, or
in user trees such as ``ross/mut`` in ``poky-contrib`` (Ross Burton
helps review and test patches and this is his testing tree).

We have two broad categories of test builds, "full" and
"quick". On the Autobuilder, these can be seen as "a-quick" and
"a-full", simply for ease of sorting in the UI. Use our Autobuilder
console view to see where we manage most test-related items, available
at: https://autobuilder.yoctoproject.org/typhoon/#/console.

Builds are triggered manually when the test branches are ready. The
builds are monitored by the SWAT team. For additional information, see
https://wiki.yoctoproject.org/wiki/Yocto_Build_Failure_Swat_Team.
If successful, the changes would usually be merged to the ``master``
branch. If not successful, someone would respond to the changes on the
mailing list explaining that there was a failure in testing. The choice
of quick or full would depend on the type of changes and the speed with
which the result was required.

The Autobuilder does build the ``master`` branch once daily for several
reasons, in particular, to ensure the current ``master`` branch does
build, but also to keep ``yocto-testresults``
(http://git.yoctoproject.org/cgit.cgi/yocto-testresults/),
buildhistory
(http://git.yoctoproject.org/cgit.cgi/poky-buildhistory/), and
our sstate up to date. On the weekend, there is a ``master-next`` build
instead to ensure the test results are updated for the less frequently
run targets.

Performance builds (buildperf-\* targets in the console) are triggered
separately every six hours and automatically push their results to the
buildstats repository at:
http://git.yoctoproject.org/cgit.cgi/yocto-buildstats/.

The "quick" targets have been selected to be the ones which catch the
most failures or give the most valuable data. We run "fast" ptests in
this case, for example, but not the ones which take a long time. The
quick target doesn't include \*-lsb builds for all architectures, or
some world builds, and doesn't trigger performance tests or ltp testing.
The full build includes all these things and is slower but more
comprehensive.

.. _test-release-builds:

Release Builds
==============

The project typically has two major releases a year with a six-month
cadence, in April and October. Between these there would be a number of
milestone releases (usually four) with the final one being stabilization
only, along with point releases of our stable branches.

The build and release process for these project releases is similar to
that in `Day to Day Development <#test-daily-devel>`__, in that the
a-full target of the Autobuilder is used, but in addition the form is
configured to generate and publish artefacts, and the milestone number,
version, release candidate number, and other information are entered.
The box to "generate an email to QA" is also checked.

When the build completes, an email is sent out using the ``send-qa-email``
script in the ``yocto-autobuilder-helper`` repository to the list of
people configured for that release. Release builds are placed into a
directory in https://autobuilder.yocto.io/pub/releases on the
Autobuilder which is included in the email. The process from here is
more manual and control is effectively passed to release engineering.
The next steps include:

- QA teams respond to the email saying which tests they plan to run and
  when the results will be available.

- QA teams run their tests and share their results in the
  yocto-testresults-contrib repository, along with a summary of their
  findings.

- Release engineering prepares the release as per their process.

- Test results from the QA teams are included into the release in
  separate directories and also uploaded to the yocto-testresults
  repository alongside the other test results for the given revision.

- The QA report in the final release is regenerated using ``resulttool``
  to include the new test results and the test summaries from the teams
  (as headers to the generated report).

- The release is checked against the release checklist and release
  readiness criteria.

- A final decision on whether to release is made by the YP TSC, who have
  final oversight on release readiness.
diff --git a/documentation/test-manual/test-manual-understand-autobuilder.rst b/documentation/test-manual/test-manual-understand-autobuilder.rst new file mode 100644 index 0000000000..69700088aa --- /dev/null +++ b/documentation/test-manual/test-manual-understand-autobuilder.rst | |||
@@ -0,0 +1,287 @@ | |||
1 | ******************************************* | ||
2 | Understanding the Yocto Project Autobuilder | ||
3 | ******************************************* | ||
4 | |||
5 | Execution Flow within the Autobuilder | ||
6 | ===================================== | ||
7 | |||
8 | The “a-full” and “a-quick” targets are the usual entry points into the | ||
9 | Autobuilder and it makes sense to follow the process through the system | ||
10 | starting there. This is best visualised from the Autobuilder Console | ||
11 | view (`https://autobuilder.yoctoproject.org/typhoon/#/console <#>`__). | ||
12 | |||
13 | Each item along the top of that view represents some “target build” and | ||
14 | these targets are all run in parallel. The ‘full’ build will trigger the | ||
15 | majority of them, the “quick” build will trigger some subset of them. | ||
The Autobuilder effectively runs whichever configuration is defined for
each of those targets on a separate buildbot worker. To understand the
configuration, you need to look at the entry in the ``config.json`` file
within the ``yocto-autobuilder-helper`` repository. The targets are
defined in the ``overrides`` section. A quick example could be
``qemux86-64``, which looks like::

   "qemux86-64" : {
       "MACHINE" : "qemux86-64",
       "TEMPLATE" : "arch-qemu",
       "step1" : {
           "extravars" : [
               "IMAGE_FSTYPES_append = ' wic wic.bmap'"
           ]
       }
   },

To expand that, you need the "arch-qemu" entry from the ``templates``
section, which looks like::

   "arch-qemu" : {
       "BUILDINFO" : true,
       "BUILDHISTORY" : true,
       "step1" : {
           "BBTARGETS" : "core-image-sato core-image-sato-dev core-image-sato-sdk core-image-minimal core-image-minimal-dev core-image-sato:do_populate_sdk",
           "SANITYTARGETS" : "core-image-minimal:do_testimage core-image-sato:do_testimage core-image-sato-sdk:do_testimage core-image-sato:do_testsdk"
       },
       "step2" : {
           "SDKMACHINE" : "x86_64",
           "BBTARGETS" : "core-image-sato:do_populate_sdk core-image-minimal:do_populate_sdk_ext core-image-sato:do_populate_sdk_ext",
           "SANITYTARGETS" : "core-image-sato:do_testsdk core-image-minimal:do_testsdkext core-image-sato:do_testsdkext"
       },
       "step3" : {
           "BUILDHISTORY" : false,
           "EXTRACMDS" : ["${SCRIPTSDIR}/checkvnc; DISPLAY=:1 oe-selftest ${HELPERSTMACHTARGS} -j 15"],
           "ADDLAYER" : ["${BUILDDIR}/../meta-selftest"]
       }
   },

Combining these two entries, you can see that "qemux86-64" is a
three-step build where ``bitbake BBTARGETS`` would be run, followed by
``bitbake SANITYTARGETS``, for each step; all for
``MACHINE="qemux86-64"`` but with differing SDKMACHINE settings. In
step 1 an extra variable is added to the ``auto.conf`` file to enable
wic image generation.
44 | |||
45 | While not every detail of this is covered here, you can see how the | ||
46 | templating mechanism allows quite complex configurations to be built up | ||
47 | yet allows duplication and repetition to be kept to a minimum. | ||
48 | |||
The different build targets are designed to allow for parallelisation:
different machines are usually built in parallel, while operations
using the same machine and metadata run sequentially, with the aim of
optimising build efficiency as much as possible.
53 | |||
54 | The ``config.json`` file is processed by the scripts in the Helper | ||
55 | repository in the ``scripts`` directory. The following section details | ||
56 | how this works. | ||
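As a sketch of what that processing amounts to, expanding a target can
be thought of as overlaying the target's ``overrides`` entry onto the
template it names. The following is an illustrative simplification of
that merge idea, not the Helper's actual code:

```python
# Simplified sketch of combining an "overrides" entry with its TEMPLATE
# from the "templates" section of config.json. Illustration only; the
# real logic lives in the yocto-autobuilder-helper scripts.

def expand_target(config, target):
    """Merge a target's template with its own overriding values."""
    entry = dict(config["overrides"][target])
    template_name = entry.pop("TEMPLATE", None)
    merged = dict(config["templates"].get(template_name, {}))
    for key, value in entry.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            # Merge step dictionaries (e.g. "step1") one level deep so
            # the override can add keys without replacing the template's.
            merged[key] = {**merged[key], **value}
        else:
            merged[key] = value
    return merged

config = {
    "templates": {
        "arch-qemu": {
            "BUILDINFO": True,
            "step1": {"BBTARGETS": "core-image-sato"},
        }
    },
    "overrides": {
        "qemux86-64": {
            "MACHINE": "qemux86-64",
            "TEMPLATE": "arch-qemu",
            "step1": {"extravars": ["IMAGE_FSTYPES_append = ' wic wic.bmap'"]},
        }
    },
}

expanded = expand_target(config, "qemux86-64")
```

Here ``expanded`` carries the template's ``BUILDINFO`` and
``BBTARGETS`` settings alongside the target's own ``MACHINE`` and
``extravars`` values.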
57 | |||
58 | .. _test-autobuilder-target-exec-overview: | ||
59 | |||
60 | Autobuilder Target Execution Overview | ||
61 | ===================================== | ||
62 | |||
63 | For each given target in a build, the Autobuilder executes several | ||
64 | steps. These are configured in ``yocto-autobuilder2/builders.py`` and | ||
65 | roughly consist of: | ||
66 | |||
67 | 1. *Run ``clobberdir``* | ||
68 | |||
69 | This cleans out any previous build. Old builds are left around to | ||
70 | allow easier debugging of failed builds. For additional information, | ||
71 | see ```clobberdir`` <#test-clobberdir>`__. | ||
72 | |||
73 | 2. *Obtain yocto-autobuilder-helper* | ||
74 | |||
75 | This step clones the ``yocto-autobuilder-helper`` git repository. | ||
76 | This is necessary to prevent the requirement to maintain all the | ||
77 | release or project-specific code within Buildbot. The branch chosen | ||
78 | matches the release being built so we can support older releases and | ||
79 | still make changes in newer ones. | ||
80 | |||
81 | 3. *Write layerinfo.json* | ||
82 | |||
   This transfers the data entered in the Buildbot UI when the build
   was configured to the Helper scripts.
85 | |||
86 | 4. *Call scripts/shared-repo-unpack* | ||
87 | |||
88 | This is a call into the Helper scripts to set up a checkout of all | ||
89 | the pieces this build might need. It might clone the BitBake | ||
90 | repository and the OpenEmbedded-Core repository. It may clone the | ||
91 | Poky repository, as well as additional layers. It will use the data | ||
92 | from the ``layerinfo.json`` file to help understand the | ||
93 | configuration. It will also use a local cache of repositories to | ||
94 | speed up the clone checkouts. For additional information, see | ||
95 | `Autobuilder Clone Cache <#test-autobuilder-clone-cache>`__. | ||
96 | |||
97 | This step has two possible modes of operation. If the build is part | ||
of a parent build, it's possible that all the repositories needed may
99 | already be available, ready in a pre-prepared directory. An "a-quick" | ||
100 | or "a-full" build would prepare this before starting the other | ||
101 | sub-target builds. This is done for two reasons: | ||
102 | |||
-  the upstream may change during a build, for example from a forced
   push, and this ensures we have matching content for the whole build
105 | |||
106 | - if 15 Workers all tried to pull the same data from the same repos, | ||
107 | we can hit resource limits on upstream servers as they can think | ||
108 | they are under some kind of network attack | ||
109 | |||
110 | This pre-prepared directory is shared among the Workers over NFS. If | ||
111 | the build is an individual build and there is no "shared" directory | ||
112 | available, it would clone from the cache and the upstreams as | ||
113 | necessary. This is considered the fallback mode. | ||
114 | |||
115 | 5. *Call scripts/run-config* | ||
116 | |||
   This is another call into the Helper scripts where it's expected
   that the main functionality of this target will be executed.
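The decision in step 4 between the two modes can be sketched as
follows; the function shape, directory handling and callables here are
illustrative assumptions, not the real ``shared-repo-unpack`` script:

```python
import os
import shutil

def unpack_repos(shared_dir, cache_clone, upstream_clone, dest):
    """Sketch of the shared-repo-unpack decision described above.

    shared_dir: directory pre-prepared by a parent a-quick/a-full build
    cache_clone/upstream_clone: callables performing the fallback clone
    """
    if shared_dir and os.path.isdir(shared_dir):
        # A parent build already fetched everything; copy it so every
        # sub-target build sees identical repository content.
        shutil.copytree(shared_dir, dest)
        return "shared"
    # Fallback mode: clone from the local cache, then top up from upstream.
    cache_clone(dest)
    upstream_clone(dest)
    return "fallback"
```

The shared path avoids both the forced-push consistency problem and the
many-Workers-hitting-one-server problem described above.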
119 | |||
120 | .. _test-autobuilder-tech: | ||
121 | |||
122 | Autobuilder Technology | ||
123 | ====================== | ||
124 | |||
125 | The Autobuilder has Yocto Project-specific functionality to allow builds | ||
126 | to operate with increased efficiency and speed. | ||
127 | |||
128 | .. _test-clobberdir: | ||
129 | |||
130 | clobberdir | ||
131 | ---------- | ||
132 | |||
When deleting files, the Autobuilder uses ``clobberdir``, which is a
special script that moves files to a special location, rather than
deleting them immediately. Files in this location are later deleted by
an ``rm`` command run under ``ionice -c 3``, meaning the deletion only
happens when there is idle IO capacity on the Worker. The Autobuilder
Worker Janitor runs this deletion. See `Autobuilder Worker
Janitor <#test-autobuilder-worker-janitor>`__.
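The core idea can be sketched in a few lines of Python; the
trash-directory layout and names here are assumptions for illustration,
not the real script:

```python
import os
import uuid

def clobberdir(path, trash_root):
    """Move a build directory aside instead of deleting it in place.

    The rename is near-instant; a janitor process later removes the
    trash_root contents with something like:
        ionice -c 3 rm -rf <trash_root>/<entry>
    """
    if not os.path.exists(path):
        return None
    os.makedirs(trash_root, exist_ok=True)
    target = os.path.join(trash_root, "trash-" + uuid.uuid4().hex)
    os.rename(path, target)  # atomic when on the same filesystem
    return target
```

The point of the design is that the expensive ``rm`` happens later, off
the critical path of the build.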
140 | |||
141 | .. _test-autobuilder-clone-cache: | ||
142 | |||
143 | Autobuilder Clone Cache | ||
144 | ----------------------- | ||
145 | |||
146 | Cloning repositories from scratch each time they are required was slow | ||
147 | on the Autobuilder. We therefore have a stash of commonly used | ||
148 | repositories pre-cloned on the Workers. Data is fetched from these | ||
149 | during clones first, then "topped up" with later revisions from any | ||
upstream when necessary. The cache is maintained by the Autobuilder
151 | Worker Janitor. See `Autobuilder Worker | ||
152 | Janitor <#test-autobuilder-worker-janitor>`__. | ||
153 | |||
154 | .. _test-autobuilder-worker-janitor: | ||
155 | |||
156 | Autobuilder Worker Janitor | ||
157 | -------------------------- | ||
158 | |||
This is a process running on each Worker that performs two basic
operations: background file deletion at IO idle (see
```clobberdir`` <#test-clobberdir>`__) and maintenance of a cache of
cloned repositories to improve the speed at which the system can check
out repositories.
164 | |||
165 | .. _test-shared-dl-dir: | ||
166 | |||
167 | Shared DL_DIR | ||
168 | ------------- | ||
169 | |||
170 | The Workers are all connected over NFS which allows DL_DIR to be shared | ||
171 | between them. This reduces network accesses from the system and allows | ||
172 | the build to be sped up. Usage of the directory within the build system | ||
173 | is designed to be able to be shared over NFS. | ||
174 | |||
175 | .. _test-shared-sstate-cache: | ||
176 | |||
177 | Shared SSTATE_DIR | ||
178 | ----------------- | ||
179 | |||
180 | The Workers are all connected over NFS which allows the ``sstate`` | ||
181 | directory to be shared between them. This means once a Worker has built | ||
an artefact, all the others can benefit from it. Usage of the directory
within the build system is designed for sharing over NFS.
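Assuming such a setup, each Worker's build configuration would simply
point both variables at the NFS mounts, along the lines of the
following ``local.conf`` fragment (the paths here are illustrative, not
the Autobuilder's actual mount points):

```
DL_DIR = "/srv/autobuilder/downloads"
SSTATE_DIR = "/srv/autobuilder/sstate"
```

With this in place, any source tarball or sstate artefact fetched or
built by one Worker is immediately visible to all the others.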
184 | |||
185 | .. _test-resulttool: | ||
186 | |||
187 | Resulttool | ||
188 | ---------- | ||
189 | |||
190 | All of the different tests run as part of the build generate output into | ||
191 | ``testresults.json`` files. This allows us to determine which tests ran | ||
192 | in a given build and their status. Additional information, such as | ||
193 | failure logs or the time taken to run the tests, may also be included. | ||
194 | |||
Resulttool is part of OpenEmbedded-Core and is used to manipulate these
JSON results files. It has the ability to merge files together, display
reports of the test results and compare different result files.
198 | |||
199 | For details, see `https://wiki.yoctoproject.org/wiki/Resulttool <#>`__. | ||
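Conceptually, merging result files is a union of their result sets,
keyed by a result identifier that encodes the test configuration. A
minimal sketch of that idea (an illustration only, with made-up result
ids, not resulttool's actual implementation):

```python
def merge_results(*result_sets):
    """Union of testresults.json-style dictionaries.

    Later inputs win when two files carry the same result id.
    """
    merged = {}
    for results in result_sets:
        merged.update(results)
    return merged

# Two hypothetical testresults.json payloads, keyed by a result id.
a = {"oeselftest/qemux86-64": {"result": {"test_a": {"status": "PASSED"}}}}
b = {"testimage/core-image-sato": {"result": {"test_b": {"status": "FAILED"}}}}
merged = merge_results(a, b)
```

Because each id describes a distinct test configuration, a merged file
still records exactly which tests ran in which configuration.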
200 | |||
201 | .. _test-run-config-tgt-execution: | ||
202 | |||
203 | run-config Target Execution | ||
204 | =========================== | ||
205 | |||
206 | The ``scripts/run-config`` execution is where most of the work within | ||
207 | the Autobuilder happens. It runs through a number of steps; the first | ||
208 | are general setup steps that are run once and include: | ||
209 | |||
210 | 1. Set up any ``buildtools-tarball`` if configured. | ||
211 | |||
212 | 2. Call "buildhistory-init" if buildhistory is configured. | ||
213 | |||
214 | For each step that is configured in ``config.json``, it will perform the | ||
215 | following: | ||
216 | |||
220 | 1. Add any layers that are specified using the | ||
221 | ``bitbake-layers add-layer`` command (logging as stepXa) | ||
222 | |||
223 | 2. Call the ``scripts/setup-config`` script to generate the necessary | ||
224 | ``auto.conf`` configuration file for the build | ||
225 | |||
226 | 3. Run the ``bitbake BBTARGETS`` command (logging as stepXb) | ||
227 | |||
228 | 4. Run the ``bitbake SANITYTARGETS`` command (logging as stepXc) | ||
229 | |||
230 | 5. Run the ``EXTRACMDS`` command, which are run within the BitBake build | ||
231 | environment (logging as stepXd) | ||
232 | |||
233 | 6. Run the ``EXTRAPLAINCMDS`` command(s), which are run outside the | ||
234 | BitBake build environment (logging as stepXd) | ||
235 | |||
236 | 7. Remove any layers added in `step | ||
237 | 1 <#test-run-config-add-layers-step>`__ using the | ||
238 | ``bitbake-layers remove-layer`` command (logging as stepXa) | ||
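Taken together, the steps above can be sketched as a function that
produces the commands, and the stepXa-stepXd log names, for one
configured step; a simplified illustration rather than the actual
``run-config`` code:

```python
def plan_step_commands(stepnum, step):
    """Return (logname, command) pairs for one config.json step.

    Mirrors the ordering described above: add layers, build targets,
    sanity targets, extra commands, then remove the layers again.
    """
    cmds = []
    for layer in step.get("ADDLAYER", []):
        cmds.append(("step%da" % stepnum, "bitbake-layers add-layer %s" % layer))
    if step.get("BBTARGETS"):
        cmds.append(("step%db" % stepnum, "bitbake %s" % step["BBTARGETS"]))
    if step.get("SANITYTARGETS"):
        cmds.append(("step%dc" % stepnum, "bitbake %s" % step["SANITYTARGETS"]))
    for cmd in step.get("EXTRACMDS", []):
        cmds.append(("step%dd" % stepnum, cmd))
    for layer in step.get("ADDLAYER", []):
        cmds.append(("step%da" % stepnum, "bitbake-layers remove-layer %s" % layer))
    return cmds
```

For a hypothetical step 3 with one added layer, this yields commands
logged as step3a, step3b, step3c, step3d and step3a in turn.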
239 | |||
240 | Once the execution steps above complete, ``run-config`` executes a set | ||
241 | of post-build steps, including: | ||
242 | |||
243 | 1. Call ``scripts/publish-artifacts`` to collect any output which is to | ||
244 | be saved from the build. | ||
245 | |||
246 | 2. Call ``scripts/collect-results`` to collect any test results to be | ||
247 | saved from the build. | ||
248 | |||
249 | 3. Call ``scripts/upload-error-reports`` to send any error reports | ||
250 | generated to the remote server. | ||
251 | |||
4. Clean up the build directory using
   ```clobberdir`` <#test-clobberdir>`__ if the build was successful,
   else rename it to "build-renamed" for potential future debugging.
255 | |||
256 | .. _test-deploying-yp-autobuilder: | ||
257 | |||
258 | Deploying Yocto Autobuilder | ||
259 | =========================== | ||
260 | |||
The most up to date information about how to set up and deploy your own
Autobuilder can be found in README.md in the ``yocto-autobuilder2``
repository.
264 | |||
265 | We hope that people can use the ``yocto-autobuilder2`` code directly but | ||
266 | it is inevitable that users will end up needing to heavily customise the | ||
267 | ``yocto-autobuilder-helper`` repository, particularly the | ||
268 | ``config.json`` file as they will want to define their own test matrix. | ||
269 | |||
The Autobuilder supports two customization options:
271 | |||
272 | - variable substitution | ||
273 | |||
274 | - overlaying configuration files | ||
275 | |||
The standard ``config.json`` minimally attempts to allow substitution
of the paths. The Helper script repository includes a
``local-example.json`` file to show how you could override these from a
separate configuration file. Pass the following into the environment of
the Autobuilder::

   $ ABHELPER_JSON="config.json local-example.json"

As another example, you could also pass the following into the
environment::

   $ ABHELPER_JSON="config.json /some/location/local.json"

One issue users often run into is validation of the ``config.json``
files. A tip for minimizing issues from invalid JSON files is to use a
Git ``pre-commit-hook.sh`` script to verify the JSON file before
committing it. Create a symbolic link as follows::

   $ ln -s ../../scripts/pre-commit-hook.sh .git/hooks/pre-commit
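The effect of listing several files in ``ABHELPER_JSON`` is that later
files overlay earlier ones. That overlay idea can be sketched as a
recursive dictionary merge (illustrative only; the keys used in the
test data and the Helper's real loading code may differ):

```python
import json

def overlay(base, extra):
    """Recursively overlay one JSON-style dict on another.

    Values from 'extra' win; nested dicts are merged key by key so a
    local file can override a single setting without copying the rest.
    """
    result = dict(base)
    for key, value in extra.items():
        if isinstance(value, dict) and isinstance(result.get(key), dict):
            result[key] = overlay(result[key], value)
        else:
            result[key] = value
    return result

def load_config(files):
    """Load a space-separated list of config files, as ABHELPER_JSON does."""
    config = {}
    for name in files.split():
        with open(name) as f:
            config = overlay(config, json.load(f))
    return config
```

With this shape, ``config.json local-example.json`` means the local
file only needs to carry the handful of settings it changes.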
diff --git a/documentation/test-manual/test-manual.rst b/documentation/test-manual/test-manual.rst new file mode 100644 index 0000000000..1bca408106 --- /dev/null +++ b/documentation/test-manual/test-manual.rst | |||
@@ -0,0 +1,12 @@ | |||
1 | ===================================== | ||
2 | Yocto Project Test Environment Manual | ||
3 | ===================================== | ||
4 | |||
5 | .. toctree:: | ||
6 | :caption: Table of Contents | ||
7 | :numbered: | ||
8 | |||
9 | test-manual-intro | ||
10 | test-manual-test-process | ||
11 | test-manual-understand-autobuilder | ||
12 | |||