Copyright © 2010-2018 Linux Foundation
Permission is granted to copy, distribute and/or modify this document under the terms of the Creative Commons Attribution-Share Alike 2.0 UK: England & Wales as published by Creative Commons.
This version of the Yocto Project Test Environment Manual is for the 2.6 release of the Yocto Project. To be sure you have the latest version of the manual for this release, go to the Yocto Project documentation page and select the manual from that site. Manuals from the site are more up-to-date than manuals derived from the Yocto Project released TAR files.
If you located this manual through a web search, the version of the manual might not be the one you want (e.g. the search might have returned a manual much older than the Yocto Project version with which you are working). You can see all Yocto Project major releases by visiting the Releases page. If you need a version of this manual for a different Yocto Project release, visit the Yocto Project documentation page and select the manual set by using the "ACTIVE RELEASES DOCUMENTATION" or "DOCUMENTS ARCHIVE" pull-down menus.
To report any inaccuracies or problems with this manual, send an email to the Yocto Project discussion group at yocto@yoctoproject.org or log into the freenode #yocto channel.
Revision History

Revision | Date | Comments
---|---|---
Revision 2.7 | TBD | Released with the Yocto Project 2.7 Release.
Table of Contents
Welcome to the Yocto Project Test Environment Manual! This manual is a work in progress. The manual contains information about the testing environment used by the Yocto Project to make sure each major and minor release works as planned. Other organizations can leverage off the process and testing environment used by the Yocto Project to create their own automated, production test environment.
Currently, the Yocto Project Test Environment Manual has no projected release date. This manual is a work in progress and is being initially loaded with information from the README files and notes from key engineers:

- yocto-autobuilder: This README.md is not maintained. However, some information from this README file still applies, though it could need some modification. In particular, the information about setting up headless sanity tests and build history still applies, and the sections on these will be changing. The yocto-autobuilder repository is obsolete and is no longer maintained. The new "Autobuilder" is maintained in the yocto-autobuilder2 repository.

- yocto-autobuilder2: This README.md is the main README for the Yocto Project Autobuilder. The yocto-autobuilder2 repository represents the Yocto Project's testing codebase and exists to configure and use Buildbot for testing.

- yocto-autobuilder-helper: This README is for a valid Autobuilder Git repository that contains the Yocto Project Autobuilder Helper Scripts. The yocto-autobuilder-helper repository contains the "glue" logic that connects any Continuous Integration (CI) system to the builds: it supports getting the correct code revisions, configures builds and layers, runs builds, and collects results. The code is independent of any CI system, which means the code can work with Buildbot, Jenkins, or others.
The Yocto Project Autobuilder collectively refers to the software, tools, scripts, and procedures used by the Yocto Project to test released software across supported hardware in an automated and regular fashion. Basically, during the development of a Yocto Project release, the Autobuilder tests if things work. The Autobuilder builds all test targets and runs all the tests.
The Yocto Project uses the unpatched Buildbot Nine to drive its integration and testing. Buildbot Nine has a plug-in interface that the Yocto Project customizes using code from the yocto-autobuilder2 repository. The resulting customized UI plug-in allows you to visualize builds in a way suited to the project.

A "helper" layer provides configuration and job management through scripts found in the yocto-autobuilder-helper repository. The helper layer contains the bulk of the build configuration information and is release-specific, which makes it highly customizable on a per-project basis. The layer is CI system-agnostic and contains a number of helper scripts that can generate build configurations from simple JSON files.
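To make the idea concrete, a build configuration in one of these JSON files might look like the fragment below. The key names and values here are illustrative assumptions rather than the actual yocto-autobuilder-helper schema:

```json
{
    "overrides": {
        "qemux86-quick": {
            "MACHINE": "qemux86",
            "DISTRO": "poky",
            "BBTARGETS": "core-image-minimal core-image-sato",
            "SANITYTARGETS": "core-image-minimal:do_testimage"
        }
    }
}
```

The helper scripts would read a file such as this and expand each named entry into a concrete build configuration for a worker.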
The following figure shows the Yocto Project Autobuilder stack with a topology that includes a controller and a cluster of workers:
Two kinds of tests exist within the Yocto Project:
Beyond these types of testing, the Autobuilder tests different pieces by using the following types of tests:
- Build Testing: Triggers builds of all the different test configurations on the Autobuilder. Builds usually cover each target for different architectures, machines, and distributions.

- Build Performance Testing: Tests that time commonly used usage scenarios are run through oe-build-perf-test.

- eSDK Testing: Image tests initiated through the following command:

  $ bitbake image -c testsdkext

  The tests utilize the testsdkext class and the do_testsdkext task.

- Feature Testing: Various scenario-based tests are run through the OpenEmbedded Self-Test (oe-selftest).

- Image Testing: Image tests initiated through the following command:

  $ bitbake image -c testimage

  The tests utilize the testimage* classes and the do_testimage task.

- Package Testing: A Package Test (ptest) runs tests against packages built by the OpenEmbedded build system on the target machine. See the "Testing Packages With ptest" section in the Yocto Project Development Tasks Manual and the "Ptest" Wiki page for more information on ptest.

- Sanity Checks During the Build Process: Tests initiated through the insane class.

- SDK Testing: Image tests initiated through the following command:

  $ bitbake image -c testsdk

  The tests utilize the testsdk class and the do_testsdk task.

- Unit Testing: Unit tests on various components of the system are run through oe-selftest and bitbake-selftest.
Tests map into the codebase as follows:

- bitbake-selftest: These tests are self-contained and test BitBake as well as its APIs, which include the fetchers. The tests are located in bitbake/lib/*/tests and are based on Python unittest. From within the BitBake repository, run the following:

  $ bitbake-selftest

- oe-selftest: These tests use OpenEmbedded to test workflows, which include testing specific features, behaviors of tasks, and API unit tests. The tests can take advantage of parallelism through the "-j" option, which runs the tests in multiple threads. The tests are based on Python unittest, and the code for the tests resides in meta/lib/oeqa/selftest. To run all the tests, enter the following command:

  $ oe-selftest -a

  To run a specific test, use the following command form, where testname is the name of the specific test:

  $ oe-selftest -r testname

- testimage: These tests build an image, boot it, and run tests against the image's content. The code for these tests resides in meta/lib/oeqa/runtime. You need to set the IMAGE_CLASSES variable as follows:

  IMAGE_CLASSES += "testimage"

  Run the tests using the following command form:

  $ bitbake image -c testimage

- testsdk: These tests build an SDK, install it, and then run tests against that SDK. The code for these tests resides in meta/lib/oeqa/sdk. Run the tests using the following command form:

  $ bitbake image -c testsdk

- testsdk_ext: These tests build an extended SDK (eSDK), install that eSDK, and then run tests against the eSDK. The code for these tests resides in meta/lib/oeqa/esdk. To run the tests, use the following command form:

  $ bitbake image -c testsdkext

- oe-build-perf-test: These tests run through commonly used usage scenarios and measure the performance times. The code for these tests resides in NEED A DIRECTORY HERE. NEED SOME INFORMATION ON HOW TO ENABLE THIS TEST OR INCLUDE IT HERE. some setting. Run the tests using the following command form:

  $ some command
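As an illustration of the testimage setup above, a minimal local.conf fragment might look like the following sketch; the TEST_SUITES line is optional and the suite list shown is only an example:

```
# Enable the testimage machinery so "bitbake <image> -c testimage" works.
IMAGE_CLASSES += "testimage"

# Optionally restrict which runtime tests are run (example list).
TEST_SUITES = "ping ssh df"
```

With this in place, the do_testimage task boots the image (under QEMU by default) and runs the configured runtime tests against it.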
This section provides example tests for each of the tests listed in the "How Tests Map to Areas of Code" section.
bitbake-selftest: Content here.

oe-selftest: NEED CONTENT HERE.

testimage: NEED CONTENT HERE.

testsdk_ext: NEED CONTENT HERE.

testsdk: NEED CONTENT HERE.

oe-build-perf-test: NEED CONTENT HERE.
The following is going to be the replacement content for the section on "Nightly Builds". We are not sure what we are going to call these builds; we need a name to replace "Nightly Builds".

Here is the content from Richard's email:

In 1.6, we actually dropped the "nightly" bit pretty much everywhere. They are now named MACHINE or MACHINE-DISTRO, e.g. qemuarm or qemuarm-lsb (which tests poky-lsb with qemuarm). We now parallelise not just by architecture but by machine, so machine and real hardware are now separate. The flow is therefore to build the images+sdks, then test the images+sdks, trying to do as much as possible in parallel. We have two types of build trigger, "quick" and "full". "quick" runs all the things which commonly fail and one random oe-selftest. "full" runs all our targets, runs oe-selftest on all distros, and includes ptest and build performance tests. It's slower but more complete and would be used for release builds.
The following information is based on the yocto-autobuilder2 README.md file. I am making an assumption that we do not want to refer to the Autobuilder stuff as "Autobuilder2". My guess is that since this is the first documentation of any automated test environment and process in the Yocto Project user documentation, we will treat it as the start of things.
Automatic testing is based on the workers executing builds using Buildbot Nine, configured for specific build jobs triggered in an automatic and regular fashion. Worker configuration and triggering are accomplished through the Yocto Project Autobuilder layer and a set of helper scripts.
The configuration and helper scripts have as little code and as few custom Buildbot extensions as possible. The configuration collects required input from the user to furnish the helper scripts with the input needed for workers to accomplish their builds. The input consists of minimal user-customizable parameters used to trigger the helper build scripts.
Each builder maps to a named configuration in the helper scripts. The configuration is created with the steps and properties required to invoke the helper scripts for a worker's builds.
Each worker has a custom scheduler created for it; the scheduler contains parameters that can supply custom versions of the required values for the helper script parameters.
Following is the code layout for the Autobuilder:

- builders.py: Configures the builders with the minimal buildsteps needed to invoke the Yocto Project Autobuilder helper scripts.

- lib/wiki.py: Implements functionality related to MediaWiki. The wikilog plug-in uses this functionality; effectively, it provides helper functions for the plug-in. The implementation is based on buildbot.util.service.HTTPClient.

- reporters/wikilog.py: A custom plug-in that is a Buildbot service that listens for build failures and then writes information about the failure to the configured wiki page.

- steps/writelayerinfo.py: Implements a simple, custom buildstep that iterates the repo_, branch_, and commit_ properties, which are set by the schedulers, and then writes a JSON file with the user's values.

- config.py: Contains all values that might need changing to redeploy the Autobuilder code elsewhere.

- master.cfg: Performs most configuration by making calls into other scripts. Configuration specific to a worker cluster (i.e. a controller URL) resides here.

- schedulers.py: Sets up the force schedulers with controls for modifying inputs for each worker.

- services.py: Configures the IRC, mail, and wikilog reporters.

- workers.py: Configures the worker objects.

- www.py: Sets up the web user interface.
The goal is to keep custom code to a minimum throughout the Autobuilder. The few customizations implemented support the Yocto Project Autobuilder Helper Script workflows and help replicate the workflows established with the Yocto Autobuilder layer. In particular, the following files accomplish this customization:

- writelayerinfo.py
- wikilog.py
- wiki.py
Steps to deploy the Yocto Project Autobuilder assume each target system has a copy of Buildbot installed. Additionally, various pieces of functionality require that a copy of the Autobuilder Helper Scripts (i.e. yocto-autobuilder-helper) is available in the home directory (~/yocto-autobuilder-helper) of the user running Buildbot.
The following sections provide steps for Yocto Autobuilder deployment.
Follow these steps to deploy the Yocto Autobuilder on an upstream controller:

Create the Master Yocto Controller:

$ buildbot create-master yocto-controller

Change Your Working Directory to the Master Yocto Controller:

$ cd yocto-controller

Create a Local Git Repository of the Yocto Project Autobuilder:

$ git clone https://git.yoctoproject.org/git/yocto-autobuilder2 yoctoabb

The previous command creates the local repository in a yoctoabb directory inside the Master Yocto Controller directory.

Change Your Working Directory Back to the Master Yocto Controller:

$ cd ..
Create a Relative Symbolic Link to master.cfg:

$ ln -rs yocto-controller/yoctoabb/master.cfg yocto-controller/master.cfg

The previous command sets up a relative symbolic link to master.cfg using a link of the same name.
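The relative-link behavior of ln -rs can be demonstrated in isolation using empty placeholder files; this sketch only shows what the flag does, not the real deployment:

```shell
# Recreate the controller layout with placeholder files.
mkdir -p yocto-controller/yoctoabb
touch yocto-controller/yoctoabb/master.cfg

# -s makes a symbolic link; -r stores the target relative to the
# directory that contains the link.
ln -rs yocto-controller/yoctoabb/master.cfg yocto-controller/master.cfg

# The stored target is relative to yocto-controller/, not absolute.
readlink yocto-controller/master.cfg
```

Because the stored target is relative, the link keeps working if the whole yocto-controller tree is moved or mounted at a different path.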
Update the Buildbot URL in master.cfg: Use your $EDITOR to edit the master.cfg file. Find the following line and replace the URL with the URL for your Buildbot:

c['buildbotURL'] = "https://autobuilder.yoctoproject.org/main/"

Enable Services in services.py: Use your $EDITOR to edit the services.py file and set appropriate configuration values to enable the desired services.

Enable Authorization in www.py: Use your $EDITOR to edit the www.py file and configure authorization if desired.

Modify Configuration Options in config.py: Use your $EDITOR to edit the config.py file and modify configuration options such as worker configurations.

Start Buildbot:

$ buildbot start yocto-controller

Create a Local Git Repository of the Yocto Autobuilder Helper Scripts: Move up a directory so that you are above the yocto-controller location and clone the repository:

$ cd ..
$ git clone https://git.yoctoproject.org/git/yocto-autobuilder-helper
Follow these steps to deploy the Yocto Autobuilder on an upstream worker:

Create the Worker:

$ buildbot-worker create-worker yocto-worker localhost example-worker pass

The worker name does not have to be example-worker. For example, you can pass `hostname` to use the host's configured name.

Start the Worker:

$ buildbot-worker start yocto-worker
This case has yet to be defined. It requires a custom config.json file for yocto-autobuilder-helper.
If you plan on using the Yocto Project Autobuilder to run headless sanity testing, you need to do the following:

- Install the TightVNC client and server.

- Create a bank of tap network devices (tap devs) by running the runqemu-gen-tapdevs script found in the Source Directory at https://git.yoctoproject.org/cgit/cgit.cgi/poky/tree/scripts. You must disable interface control on these new tap devices.

- Add "xterm*vt100*geometry: 80x50+10+10" to .Xdefaults.

- Set up and start the TightVNC session as the Autobuilder user.

- Manually connect to the VNC session at least once prior to running a QEMU sanity test.
The production Yocto Autobuilder uses a cluster of build workers. The cluster shares the same SSTATE_DIR and DL_DIR through an NFS4-mounted Network Attached Storage (NAS) device. The main nightly trigger pre-populates the DL_DIR, which saves the workers from having to do a lot of downloading. In theory, you could also run your build workers with NO_NETWORK to enforce a single point for populating DL_DIR.
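A sketch of the shared-storage settings each worker's local configuration might carry follows. The NAS paths here are assumptions, and note that BitBake's variable for disabling network access is BB_NO_NETWORK:

```
# Hypothetical NAS mount points shared by every worker in the cluster.
SSTATE_DIR = "/nas/sstate"
DL_DIR = "/nas/downloads"

# Optionally forbid fetching so that only the nightly trigger
# populates DL_DIR.
BB_NO_NETWORK = "1"
```

Sharing SSTATE_DIR lets any worker reuse build artifacts produced by another, which is what makes the parallel cluster efficient.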
Running multiple build workers is fairly simple, but does require some setup:

- Ensure the settings in autobuilder.conf are valid for each worker. Certain variables set within this file work with the local configurations on each worker.

- Within yocto-controller/controller.cfg, add your worker to the c['workers'] list inside the BUILDWORKERS section.

- For each worker, change the WORKER SETTINGS section of yocto-worker/buildbot.tac to match the settings in controller.cfg.

Workers must reside in the same path as the build controller, even if they are on completely different machines.
Build History is used to track changes to packages and images. By default, the Autobuilder does not collect build history. The production Autobuilder does have this functionality enabled.
Setting up build history requires the following steps:
Create an empty Git repository. Make a single commit to it, and then create and push branches for each of the nightly core architectures (e.g. mips, ppc, x86, and so forth).
Find a central location to create a clone for the repository created in the previous step. This works best if you have a setup similar to the production Autobuilder (i.e. NAS with many workers).
Run the following:
# This is an example of how to set up a local build history checkout.
# Paths obviously are situationally dependent.
$ mkdir /nas/buildhistory
$ cd /nas/buildhistory
$ git clone ssh://git@git.myproject.org/buildhistory
$ git clone ssh://git@git.myproject.org/buildhistory nightly-arm
$ git clone ssh://git@git.myproject.org/buildhistory nightly-x86
$ git clone ssh://git@git.myproject.org/buildhistory nightly-x86-64
$ git clone ssh://git@git.myproject.org/buildhistory nightly-ppc
$ git clone ssh://git@git.myproject.org/buildhistory nightly-mips
$ for x in `ls | grep nightly`; do cd $x; git checkout $x; cd /nas/buildhistory; done
Within the autobuilder.conf of each worker, change the following:

BUILD_HISTORY_COLLECT = True
BUILD_HISTORY_DIR = "/nas/buildhistory"
BUILD_HISTORY_REPO = "ssh://git@git.myproject.org/buildhistory"
Yocto Autobuilder: The Git repository is at http://git.yoctoproject.org/clean/cgit.cgi/yocto-autobuilder/tree/. The Yocto Autobuilder is essentially an extension to vanilla Buildbot; the extension mainly addresses configuration file handling and Yocto-specific build steps. For better maintainability, the Autobuilder (see Autobuilder.py located at http://git.yoctoproject.org/clean/cgit.cgi/yocto-autobuilder/tree/lib/python2.7/site-packages/autobuilder) handles configuration from multiple files. Additional build steps, such as CheckOutLayers.py or CreateBBLayersConf, are Yocto-specific and simplify the worker's configuration.
TightVNC: Virtual Network Computing (VNC) is a client/server software package that allows remote network access to graphical desktops. With VNC, you can access your machine from anywhere, provided that your machine is connected to the Internet. VNC is free (released under the GNU General Public License) and available on most platforms.
TightVNC is an enhanced version of VNC, which includes new features, improvements, optimizations, and bug fixes over the original VNC version. See the list of features at http://www.tightvnc.com/intro.php.
You need TightVNC in order to run headless sanity tests. See the bullet on headless sanity tests for more information.
Files Used for Yocto-Autobuilder Configuration:

- config/autobuilder.conf: Used to set Autobuilder-wide parameters, such as where various build artifacts are published (e.g. DL_DIR and SSTATE_DIR) and whether build artifacts should be published, which is necessary for production Autobuilders but not for desktop builders.

- buildset-config/yoctoAB.conf: The main Yocto Project Autobuilder configuration file. Documentation for this file and its associated format is in the README-NEW-AUTOBUILDER file.
The helper scripts work in conjunction with the Yocto Project Autobuilder. These scripts do the actual build configuration and execution for tests on a per-release basis.

You can use pre-commit-hook.sh to verify the JSON file before committing it. Create a symbolic link as follows:

$ ln -s ../../scripts/pre-commit-hook.sh .git/hooks/pre-commit
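The kind of check such a hook performs can be sketched as a small shell function. This is an illustration, not the actual pre-commit-hook.sh, and it assumes python3 is on the PATH:

```shell
# validate_json FILE: succeed only if FILE parses as JSON.
validate_json() {
    python3 -m json.tool "$1" > /dev/null 2>&1
}

# A hook built on this would run validate_json over each staged .json
# file and abort the commit (exit non-zero) on the first failure.
printf '{"machines": ["qemux86"]}\n' > sample.json
validate_json sample.json && echo "sample.json: valid JSON"
```

Catching malformed JSON at commit time is cheaper than letting a broken config.json reach the Autobuilder and fail a build.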
Most users will have to customize the helper script repository to meet their needs. The repository is located at http://git.yoctoproject.org/clean/cgit.cgi/yocto-autobuilder-helper. The scripts themselves should be more generically reusable, while config.json is less reusable because it represents the Yocto Project Autobuilder test matrix.

Two customization options are possible: 1) variable substitution, and 2) overlaying configuration files. The standard config.json minimally attempts to allow substitution of the paths. The helper script repository includes a local-example.json file that shows how you could override these from a separate configuration file.
Pass the following into the environment of the autobuilder:
ABHELPER_JSON="config.json local-example.json"
As another example, you could also pass the following into the environment:
ABHELPER_JSON="config.json /some/location/local.json"