The Yocto Project Development Environment This chapter takes a look at the Yocto Project development environment and provides a detailed look at what goes on during development in that environment. The chapter presents Yocto Project development environment concepts that help you understand how work is accomplished in an open source environment, which is very different from work accomplished in a closed, proprietary environment. Specifically, this chapter addresses open source philosophy, workflows, Git, source repositories, licensing, recipe syntax, and development concepts.
Open Source Philosophy Open source philosophy is characterized by software development directed by peer production and collaboration through an active community of developers. Contrast this to the more standard centralized development models used by commercial software companies where a finite set of developers produces a product for sale using a defined set of procedures that ultimately result in an end product whose architecture and source material are closed to the public. Open source projects conceptually have differing concurrent agendas, approaches, and production. These facets of the development process can come from anyone in the public (community) that has a stake in the software project. The open source environment contains new copyright, licensing, domain, and consumer issues that differ from the more traditional development environment. In an open source environment, the end product, source material, and documentation are all available to the public at no cost. A benchmark example of an open source project is the Linux kernel, which was initially conceived and created by Finnish computer science student Linus Torvalds in 1991. Conversely, a good example of a non-open source project is the Windows family of operating systems developed by Microsoft Corporation. Wikipedia has a good historical description of the Open Source Philosophy here. You can also find helpful information on how to participate in the Linux Community here.
Workflows This section provides workflow concepts using the Yocto Project and Git. In particular, the information covers basic practices that describe roles and actions in a collaborative development environment. If you are familiar with this type of development environment, you might not want to read this section. The Yocto Project files are maintained using Git in "master" branches whose Git histories track every change and whose structures provide branches for all diverging functionality. Although there is no need to use Git, many open source projects do so. For the Yocto Project, a key individual called the "maintainer" is responsible for the "master" branch of a given Git repository. The "master" branch is the "upstream" repository from which final or most recent builds of the project occur. The maintainer is responsible for accepting changes from other developers and for organizing the underlying branch structure to reflect release strategies and so forth. For information on finding out who is responsible for (maintains) a particular area of code, see the "Submitting a Change to the Yocto Project" section of the Yocto Project Development Manual. The Yocto Project poky Git repository also has an upstream contribution Git repository named poky-contrib. You can see all the branches in this repository using the web interface of the Source Repositories organized within the "Poky Support" area. These branches temporarily hold changes to the project that have been submitted or committed by the Yocto Project development team and by community members who contribute to the project. The maintainer determines if the changes are qualified to be moved from the "contrib" branches into the "master" branch of the Git repository. Developers (including contributing community members) create and maintain cloned repositories of the upstream "master" branch. The cloned repositories are local to their development platforms and are used to develop changes. When a developer is satisfied with a particular feature or change, they "push" the changes to the appropriate "contrib" repository. Developers are responsible for keeping their local repository up-to-date with "master". They are also responsible for resolving any conflicts that might arise within files that are being worked on simultaneously by more than one person. All this work is done locally on the developer’s machine before anything is pushed to a "contrib" area and examined at the maintainer’s level. A somewhat formal method exists by which developers commit changes and push them into the "contrib" area and subsequently request that the maintainer include them into "master". This process is called "submitting a patch" or "submitting a change." For information on submitting patches and changes, see the "Submitting a Change to the Yocto Project" section in the Yocto Project Development Manual. To summarize the development workflow: a single point of entry exists for changes into the project’s "master" branch of the Git repository, which is controlled by the project’s maintainer. And, a set of developers exists who independently develop, test, and submit changes to "contrib" areas for the maintainer to examine. The maintainer then chooses which changes are going to become a permanent part of the project. While each development environment is unique, there are some best practices or methods that help development run smoothly. The following list describes some of these practices.
For more information about Git workflows, see the workflow topics in the Git Community Book. Make Small Changes: It is best to keep the changes you commit small as compared to bundling many disparate changes into a single commit. This practice not only keeps things manageable but also allows the maintainer to more easily include or refuse changes. It is also good practice to leave the repository in a state that allows you to still successfully build your project. In other words, do not commit half of a feature, then add the other half as a separate, later commit. Each commit should take you from one buildable project state to another buildable state. Use Branches Liberally: It is very easy to create, use, and delete local branches in your working Git repository. You can name these branches anything you like. It is helpful to give them names associated with the particular feature or change on which you are working. Once you are done with a feature or change and have merged it into your local master branch, simply discard the temporary branch. Merge Changes: The git merge command allows you to take the changes from one branch and fold them into another branch. This process is especially helpful when more than a single developer might be working on different parts of the same feature. Merging changes also automatically identifies any collisions or "conflicts" that might happen as a result of the same lines of code being altered by two different developers. Manage Branches: Because branches are easy to use, you should use a system where branches indicate varying levels of code readiness. For example, you can have a "work" branch to develop in, a "test" branch where the code or change is tested, a "stage" branch where changes are ready to be committed, and so forth. As your project develops, you can merge code across the branches to reflect ever-increasing stable states of the development. Use Push and Pull: The push-pull workflow is based on the concept of developers "pushing" local commits to a remote repository, which is usually a contribution repository. This workflow is also based on developers "pulling" known states of the project down into their local development repositories. The workflow easily allows you to pull changes submitted by other developers from the upstream repository into your work area, ensuring that you have the most recent software on which to develop. The Yocto Project has two scripts named create-pull-request and send-pull-request that ship with the release to facilitate this workflow. You can find these scripts in the scripts folder of the Source Directory. For information on how to use these scripts, see the "Using Scripts to Push a Change Upstream and Request a Pull" section in the Yocto Project Development Manual. Patch Workflow: This workflow allows you to notify the maintainer through an email that you have a change (or patch) you would like considered for the "master" branch of the Git repository. To send this type of change, you format the patch and then send the email using the Git commands git format-patch and git send-email. For information on how to use these commands, see the "Submitting a Change to the Yocto Project" section in the Yocto Project Development Manual.
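As a minimal sketch of the patch workflow, assuming a hypothetical branch name, recipe file, and mailing list placeholder (always confirm the correct list and options in the "Submitting a Change to the Yocto Project" section), the commands look something like this:
    $ git checkout -b fix/example-typo origin/master
        (edit the files, then stage and commit with a sign-off)
    $ git add meta/recipes-example/example/example_1.0.bb
    $ git commit -s -m "example: fix typo in SUMMARY"
    $ git format-patch -1
    $ git send-email --to=<project mailing list address> 0001-example-fix-typo-in-SUMMARY.patch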
Git The Yocto Project makes extensive use of Git, which is a free, open source distributed version control system. Git supports distributed development, non-linear development, and can handle large projects. It is best that you have some fundamental understanding of how Git tracks projects and how to work with Git if you are going to use the Yocto Project for development. This section provides a quick overview of how Git works and provides you with a summary of some essential Git commands. Notes For more information on Git, see . If you need to download Git, it is recommended that you add Git to your system through your distribution's "software store" (e.g. for Ubuntu, use the Ubuntu Software feature). For the Git download page, see . For examples beyond the limited few in this section on how to use Git with the Yocto Project, see the "Working With Yocto Project Source Files" section in the Yocto Project Development Manual.
Repositories, Tags, and Branches As mentioned briefly in the previous section and also in the "Workflows" section, the Yocto Project maintains source repositories at . If you look at this web-interface of the repositories, each item is a separate Git repository. Git repositories use branching techniques that track content change (not files) within a project (e.g. a new feature or updated documentation). Creating a tree-like structure based on project divergence allows for excellent historical information over the life of a project. This methodology also allows for an environment from which you can do lots of local experimentation on projects as you develop changes or new features. A Git repository represents all development efforts for a given project. For example, the Git repository poky contains all changes and developments for Poky over the course of its entire life. That means that all changes that make up all releases are captured. The repository maintains a complete history of changes. You can create a local copy of any repository by "cloning" it with the git clone command. When you clone a Git repository, you end up with an identical copy of the repository on your development system. Once you have a local copy of a repository, you can take steps to develop locally. For examples on how to clone Git repositories, see the "Working With Yocto Project Source Files" section in the Yocto Project Development Manual. It is important to understand that Git tracks content change and not files. Git uses "branches" to organize different development efforts. For example, the poky repository has several branches that include the current "&DISTRO_NAME_NO_CAP;" branch, the "master" branch, and many branches for past Yocto Project releases. You can see all the branches by going to and clicking on the [...] link beneath the "Branch" heading. Each of these branches represents a specific area of development. The "master" branch represents the current or most recent development. All other branches represent offshoots of the "master" branch. When you create a local copy of a Git repository, the copy has the same set of branches as the original. This means you can use Git to create a local working area (also called a branch) that tracks a specific development branch from the upstream source Git repository. In other words, you can define your local Git environment to work on any development branch in the repository. To help illustrate, consider the following example Git commands: $ cd ~ $ git clone git://git.yoctoproject.org/poky $ cd poky $ git checkout -b &DISTRO_NAME_NO_CAP; origin/&DISTRO_NAME_NO_CAP; In the previous example, after moving to the home directory, the git clone command creates a local copy of the upstream poky Git repository. By default, Git checks out the "master" branch for your work. After changing the working directory to the new local repository (i.e. poky), the git checkout command creates and checks out a local branch named "&DISTRO_NAME_NO_CAP;", which tracks the upstream "origin/&DISTRO_NAME_NO_CAP;" branch. Changes you make while in this branch would ultimately affect the upstream "&DISTRO_NAME_NO_CAP;" branch of the poky repository. It is important to understand that when you create and checkout a local working branch based on a branch name, your local environment matches the "tip" of that particular development branch at the time you created your local branch, which could be different from the files in the "master" branch of the upstream repository.
In other words, creating and checking out a local branch based on the "&DISTRO_NAME_NO_CAP;" branch name is not the same as cloning and checking out the "master" branch of the repository. Keep reading to see how you create a local snapshot of a Yocto Project Release. Git uses "tags" to mark specific changes in a repository. Typically, a tag is used to mark a special point such as the final change before a project is released. You can see the tags used with the poky Git repository by going to and clicking on the [...] link beneath the "Tag" heading. Some key tags for the poky repository are jethro-14.0.3, morty-16.0.1, pyro-17.0.0, and &DISTRO_NAME_NO_CAP;-&POKYVERSION;. These tags represent Yocto Project releases. When you create a local copy of the Git repository, you also have access to all the tags in the upstream repository. Similar to branches, you can create and checkout a local working Git branch based on a tag name. When you do this, you get a snapshot of the Git repository that reflects the state of the files when the change was made associated with that tag. The most common use is to checkout a working branch that matches a specific Yocto Project release. Here is an example: $ cd ~ $ git clone git://git.yoctoproject.org/poky $ cd poky $ git fetch --all --tags --prune $ git checkout tags/pyro-17.0.0 -b my-pyro-17.0.0 In this example, the name of the top-level directory of your local Yocto Project repository is poky. After moving to the poky directory, the git fetch command makes all the upstream tags available locally in your repository. Finally, the git checkout command creates and checks out a branch named "my-pyro-17.0.0" that is based on the specific change upstream in the repository associated with the "pyro-17.0.0" tag. The files in your repository now exactly match that particular Yocto Project release as it is tagged in the upstream Git repository. It is important to understand that when you create and checkout a local working branch based on a tag, your environment matches a specific point in time and not the entire development branch (i.e. the "tip" of the branch).
Basic Commands Git has an extensive set of commands that lets you manage changes and perform collaboration over the life of a project. Conveniently though, you can manage with a small set of basic operations and workflows once you understand the basic philosophy behind Git. You do not have to be an expert in Git to be functional. A good place to look for instruction on a minimal set of Git commands is here. If you do not know much about Git, you should educate yourself by visiting the links previously mentioned. The following list of Git commands briefly describes some basic Git operations as a way to get started. As with any set of commands, this list (in most cases) simply shows the base command and omits the many arguments they support. See the Git documentation for complete descriptions and strategies on how to use these commands: git init: Initializes an empty Git repository. You cannot use Git commands unless you have a .git repository. git clone: Creates a local clone of a Git repository that is on equal footing with a fellow developer’s Git repository or an upstream repository. git add: Locally stages updated file contents to the index that Git uses to track changes. You must stage all files that have changed before you can commit them. git commit: Creates a local "commit" that documents the changes you made. Only changes that have been staged can be committed. Commits are used for historical purposes, for determining if a maintainer of a project will allow the change, and for ultimately pushing the change from your local Git repository into the project’s upstream repository. git status: Reports any modified files that possibly need to be staged and gives you a status of where you stand regarding local commits as compared to the upstream repository. git checkout branch-name: Changes your working branch. This command is analogous to "cd". git checkout -b working-branch: Creates and checks out a working branch on your local machine that you can use to isolate your work. It is a good idea to use local branches when adding specific features or changes. Using isolated branches facilitates easy removal of changes if they do not work out. git branch: Displays the existing local branches associated with your local repository. The branch that you have currently checked out is noted with an asterisk character. git branch -D branch-name: Deletes an existing local branch. You need to be in a local branch other than the one you are deleting in order to delete branch-name. git pull: Retrieves information from an upstream Git repository and places it in your local Git repository. You use this command to make sure you are synchronized with the repository from which you are basing changes (e.g. the "master" branch). git push: Sends all your committed local changes to the upstream Git repository that your local repository is tracking (e.g. a contribution repository). The maintainer of the project draws from these repositories to merge changes (commits) into the appropriate branch of the project's upstream repository. git merge: Combines or adds changes from one local branch of your repository with another branch. When you create a local Git repository, the default branch is named "master". A typical workflow is to create a temporary branch that is based off "master" that you would use for isolated work.
You would make your changes in that isolated branch, stage and commit them locally, switch to the "master" branch, and then use the git merge command to apply the changes from your isolated branch into the currently checked out branch (e.g. "master"). After the merge is complete and if you are done with working in that isolated branch, you can safely delete the isolated branch. git cherry-pick: Choose and apply specific commits from one branch into another branch. There are times when you might not be able to merge all the changes in one branch with another but need to pick out certain ones. gitk: Provides a GUI view of the branches and changes in your local Git repository. This command is a good way to graphically see where things have diverged in your local repository. You need to install the gitk package on your development system to use this command. git log: Reports a history of your commits to the repository. This report lists all commits regardless of whether you have pushed them upstream or not. git diff: Displays line-by-line differences between a local working file and the same file as understood by Git. This command is useful to see what you have changed in any given file.
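As a minimal sketch of how these commands fit together, a typical local workflow might look like the following; the branch and file names are hypothetical:
    $ git checkout -b bugfix master
        (edit somefile.c as needed)
    $ git add somefile.c
    $ git commit -m "somefile: fix buffer handling"
    $ git checkout master
    $ git merge bugfix
    $ git branch -d bugfix
After the merge completes, the final command deletes the now-unneeded "bugfix" branch.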
Yocto Project Source Repositories The Yocto Project team maintains complete source repositories for all Yocto Project files at . This web-based source code browser is organized into categories by function such as IDE Plugins, Matchbox, Poky, Yocto Linux Kernel, and so forth. From the interface, you can click on any particular item in the "Name" column and see the URL at the bottom of the page that you need to clone a Git repository for that particular item. Having a local Git repository of the Source Directory, which is usually named "poky", allows you to make changes, contribute to the history, and ultimately enhance the Yocto Project's tools, Board Support Packages, and so forth. For any supported release of Yocto Project, you can also go to the Yocto Project Website and select the "Downloads" tab and get a released tarball of the poky repository or any supported BSP tarballs. Unpacking these tarballs gives you a snapshot of the released files. Notes The recommended method for setting up the Yocto Project Source Directory and the files for supported BSPs (e.g., meta-intel) is to use Git to create a local copy of the upstream repositories. Be sure to always work in matching branches for both the selected BSP repository and the Source Directory (i.e. poky) repository. For example, if you have checked out the "master" branch of poky and you are going to use meta-intel, be sure to checkout the "master" branch of meta-intel. In summary, here is where you can get the project files needed for development: Source Repositories: This area contains IDE Plugins, Matchbox, Poky, Poky Support, Tools, Yocto Linux Kernel, and Yocto Metadata Layers. You can create local copies of Git repositories for each of these areas. For steps on how to view and access these upstream Git repositories, see the "Accessing Source Repositories" Section in the Yocto Project Development Manual. Index of /releases: This is an index of releases such as the Eclipse Yocto Plug-in, miscellaneous support, Poky, Pseudo, installers for cross-development toolchains, and all released versions of Yocto Project in the form of images or tarballs. Downloading and extracting these files does not produce a local copy of the Git repository but rather a snapshot of a particular release or image. For steps on how to view and access these files, see the "Accessing Index of Releases" section in the Yocto Project Development Manual. "Downloads" page for the Yocto Project Website: This section will change due to reworking of the YP Website. The Yocto Project website includes a "Downloads" tab that allows you to download any Yocto Project release and Board Support Package (BSP) in tarball form. The tarballs are similar to those found in the Index of /releases: area. For steps on how to use the "Downloads" page, see the "Using the Downloads Page" section in the Yocto Project Development Manual.
Licensing Because open source projects are open to the public, they have different licensing structures in place. License evolution for both Open Source and Free Software has an interesting history. If you are interested in this history, you can find basic information here: Open source license history Free software license history In general, the Yocto Project is broadly licensed under the Massachusetts Institute of Technology (MIT) License. MIT licensing permits the reuse of software within proprietary software as long as the license is distributed with that software. MIT is also compatible with the GNU General Public License (GPL). Patches to the Yocto Project follow the upstream licensing scheme. You can find information on the MIT license here. You can find information on the GNU GPL here. When you build an image using the Yocto Project, the build process uses a known list of licenses to ensure compliance. You can find this list in the Source Directory at meta/files/common-licenses. Once the build completes, the list of all licenses found and used during that build are kept in the Build Directory at tmp/deploy/licenses. If a module requires a license that is not in the base list, the build process generates a warning during the build. These tools make it easier for a developer to be certain of the licenses with which their shipped products must comply. However, even with these tools it is still up to the developer to resolve potential licensing issues. The base list of licenses used by the build process is a combination of the Software Package Data Exchange (SPDX) list and the Open Source Initiative (OSI) projects. SPDX Group is a working group of the Linux Foundation that maintains a specification for a standard format for communicating the components, licenses, and copyrights associated with a software package. OSI is a corporation dedicated to the Open Source Definition and the effort for reviewing and approving licenses that conform to the Open Source Definition (OSD). You can find a list of the combined SPDX and OSI licenses that the Yocto Project uses in the meta/files/common-licenses directory in your Source Directory. For information that can help you maintain compliance with various open source licensing during the lifecycle of a product created using the Yocto Project, see the "Maintaining Open Source License Compliance During Your Product's Lifecycle" section.
Recipe Syntax Understanding recipe file syntax is important for writing recipes. The following list overviews the basic items that make up a BitBake recipe file. For more complete BitBake syntax descriptions, see the "Syntax and Operators" chapter of the BitBake User Manual. Variable Assignments and Manipulations: Variable assignments allow a value to be assigned to a variable. The assignment can be static text or might include the contents of other variables. In addition to the assignment, appending and prepending operations are also supported. The following example shows some of the ways you can use variables in recipes: S = "${WORKDIR}/postfix-${PV}" CFLAGS += "-DNO_ASM" SRC_URI_append = " file://fixup.patch" Functions: Functions provide a series of actions to be performed. You usually use functions to override the default implementation of a task function or to complement a default function (i.e. append or prepend to an existing function). Standard functions use sh shell syntax, although access to OpenEmbedded variables and internal methods is also available. The following is an example function from the sed recipe: do_install () { autotools_do_install install -d ${D}${base_bindir} mv ${D}${bindir}/sed ${D}${base_bindir}/sed rmdir ${D}${bindir}/ } It is also possible to implement new functions that are called between existing tasks as long as the new functions are not replacing or complementing the default functions. You can implement functions in Python instead of shell. Neither of these options is seen in the majority of recipes. Keywords: BitBake recipes use only a few keywords. You use keywords to include common functions (inherit), load parts of a recipe from other files (include and require), and export variables to the environment (export). The following example shows the use of some of these keywords: export POSTCONF = "${STAGING_BINDIR}/postconf" inherit autoconf require otherfile.inc Comments: Any lines that begin with the hash character (#) are treated as comment lines and are ignored: # This is a comment This next list summarizes the most important and most commonly used parts of the recipe syntax. For more information on these parts of the syntax, you can reference the Syntax and Operators chapter in the BitBake User Manual. Line Continuation: \ - Use the backward slash (\) character to split a statement over multiple lines. Place the slash character at the end of the line that is to be continued on the next line: VAR = "A really long \ line" You cannot have any characters including spaces or tabs after the slash character. Using Variables: ${...} - Use the ${VARNAME} syntax to access the contents of a variable: SRC_URI = "${SOURCEFORGE_MIRROR}/libpng/zlib-${PV}.tar.gz" It is important to understand that the value of a variable expressed in this form does not get substituted automatically. The expansion of these expressions happens on-demand later (e.g. usually when a function that makes reference to the variable executes). This behavior ensures that the values are most appropriate for the context in which they are finally used. On the rare occasion that you do need the variable expression to be expanded immediately, you can use the := operator instead of = when you make the assignment, but this is not generally needed. Quote All Assignments: "value" - Use double quotes around the value in all variable assignments.
VAR1 = "${OTHERVAR}" VAR2 = "The version is ${PV}" Conditional Assignment: ?= - Conditional assignment is used to assign a value to a variable, but only when the variable is currently unset. Use the question mark followed by the equal sign (?=) to make a "soft" assignment used for conditional assignment. Typically, "soft" assignments are used in the local.conf file for variables that are allowed to come through from the external environment. Here is an example where VAR1 is set to "New value" if it is currently empty. However, if VAR1 has already been set, it remains unchanged: VAR1 ?= "New value" In this next example, VAR1 is left with the value "Original value": VAR1 = "Original value" VAR1 ?= "New value" Appending: += - Use the plus character followed by the equals sign (+=) to append values to existing variables. This operator adds a space between the existing content of the variable and the new content. Here is an example: SRC_URI += "file://fix-makefile.patch" Prepending: =+ - Use the equals sign followed by the plus character (=+) to prepend values to existing variables. This operator adds a space between the new content and the existing content of the variable. Here is an example: VAR =+ "Starts" Appending: _append - Use the _append operator to append values to existing variables. This operator does not add any additional space. Also, the operator is applied after all the += and =+ operators have been applied and after all = assignments have occurred. The following example shows the space being explicitly added to the start to ensure the appended value is not merged with the existing value: SRC_URI_append = " file://fix-makefile.patch" You can also use the _append operator with overrides, which results in the actions only being performed for the specified target or machine: SRC_URI_append_sh4 = " file://fix-makefile.patch" Prepending: _prepend - Use the _prepend operator to prepend values to existing variables. This operator does not add any additional space. Also, the operator is applied after all the += and =+ operators have been applied and after all = assignments have occurred. The following example shows the space being explicitly added to the end to ensure the prepended value is not merged with the existing value: CFLAGS_prepend = "-I${S}/myincludes " You can also use the _prepend operator with overrides, which results in the actions only being performed for the specified target or machine: CFLAGS_prepend_sh4 = "-I${S}/myincludes " Overrides: - You can use overrides to set a value conditionally, typically based on how the recipe is being built. For example, to set the KBRANCH variable's value to "standard/base" for any target MACHINE, except for qemuarm where it should be set to "standard/arm-versatile-926ejs", you would do the following: KBRANCH = "standard/base" KBRANCH_qemuarm = "standard/arm-versatile-926ejs" Overrides are also used to separate alternate values of a variable in other situations. For example, when setting variables such as FILES and RDEPENDS that are specific to individual packages produced by a recipe, you should always use an override that specifies the name of the package. Indentation: Use spaces for indentation rather than tabs. For shell functions, both currently work. However, it is a policy decision of the Yocto Project to use tabs in shell functions. Realize that some layers have a policy to use spaces for all indentation.
Using Python for Complex Operations: ${@python_code} - For more advanced processing, it is possible to use Python code during variable assignments (e.g. search and replacement on a variable). You indicate Python code using the ${@python_code} syntax for the variable assignment: SRC_URI = "ftp://ftp.info-zip.org/pub/infozip/src/zip${@d.getVar('PV',1).replace('.', '')}.tgz" Shell Function Syntax: Write shell functions as if you were writing a shell script when you describe a list of actions to take. You should ensure that your script works with a generic sh and that it does not require any bash or other shell-specific functionality. The same considerations apply to various system utilities (e.g. sed, grep, awk, and so forth) that you might wish to use. If in doubt, you should check with multiple implementations - including those from BusyBox.
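To see how these pieces fit together, here is a minimal sketch of a hypothetical recipe; the project name, URL, and checksum placeholder are illustrative only:
    SUMMARY = "Example utility"
    LICENSE = "MIT"
    LIC_FILES_CHKSUM = "file://COPYING;md5=<checksum of the license file>"

    SRC_URI = "http://example.com/releases/example-${PV}.tar.gz \
               file://fix-makefile.patch"

    S = "${WORKDIR}/example-${PV}"

    inherit autotools

    CFLAGS_append = " -DNO_ASM"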
Development Concepts This section takes a more detailed look inside the development process. The following diagram represents development at a high level. The remainder of this chapter expands on the fundamental input, output, process, and Metadata blocks that make up development in the Yocto Project environment. In general, development consists of several functional areas: User Configuration: Metadata you can use to control the build process. Metadata Layers: Various layers that provide software, machine, and distro Metadata. Source Files: Upstream releases, local projects, and SCMs. Build System: Processes under the control of BitBake. This block expands on how BitBake fetches source, applies patches, completes compilation, analyzes output for package generation, creates and tests packages, generates images, and generates cross-development tools. Package Feeds: Directories containing output packages (RPM, DEB, or IPK) produced by the build system, which are subsequently used in the construction of an image or SDK. These feeds can also be copied and shared using a web server or other means to facilitate extending or updating existing images on devices at runtime if runtime package management is enabled. Images: Images produced by the development process. Application Development SDK: Cross-development tools that are produced along with an image or separately with BitBake.
User Configuration User configuration helps define the build. Through user configuration, you can tell BitBake the target architecture for which you are building the image, where to store downloaded source, and other build properties. The following figure shows an expanded representation of the "User Configuration" box of the general Yocto Project Development Environment figure: BitBake needs some basic configuration files in order to complete a build. These files are *.conf files. The minimally necessary ones reside as example files in the Source Directory. For simplicity, this section refers to the Source Directory as the "Poky Directory." When you clone the poky Git repository or you download and unpack a Yocto Project release, you can set up the Source Directory to be named anything you want. For this discussion, the cloned repository uses the default name poky. The Poky repository is primarily an aggregation of existing repositories. It is not a canonical upstream source. The meta-poky layer inside Poky contains a conf directory that has example configuration files. These example files are used as a basis for creating actual configuration files when you source the build environment script (i.e. &OE_INIT_FILE;). Sourcing the build environment script creates a Build Directory if one does not already exist. BitBake uses the Build Directory for all its work during builds. The Build Directory has a conf directory that contains default versions of your local.conf and bblayers.conf configuration files. These default configuration files are created only if versions do not already exist in the Build Directory at the time you source the build environment setup script. Because the Poky repository is fundamentally an aggregation of existing repositories, some users might be familiar with running the &OE_INIT_FILE; script in the context of separate OpenEmbedded-Core and BitBake repositories rather than a single Poky repository. This discussion assumes the script is executed from within a cloned or unpacked version of Poky. Depending on where the script is sourced, different sub-scripts are called to set up the Build Directory (Yocto or OpenEmbedded). Specifically, the script scripts/oe-setup-builddir inside the poky directory sets up the Build Directory and seeds the directory (if necessary) with configuration files appropriate for the Yocto Project development environment. The scripts/oe-setup-builddir script uses the $TEMPLATECONF variable to determine which sample configuration files to locate. The local.conf file provides many basic variables that define a build environment. Here is a list of a few. To see the default configurations in a local.conf file created by the build environment script, see the local.conf.sample in the meta-poky layer: Parallelism Options: Controlled by the BB_NUMBER_THREADS, PARALLEL_MAKE, and BB_NUMBER_PARSE_THREADS variables. Target Machine Selection: Controlled by the MACHINE variable. Download Directory: Controlled by the DL_DIR variable. Shared State Directory: Controlled by the SSTATE_DIR variable. Build Output: Controlled by the TMPDIR variable. Configurations set in the conf/local.conf file can also be set in the conf/site.conf and conf/auto.conf configuration files. The bblayers.conf file tells BitBake what layers you want considered during the build. By default, the layers listed in this file include layers minimally needed by the build system. However, you must manually add any custom layers you have created. 
You can find more information on working with the bblayers.conf file in the "Enabling Your Layer" section in the Yocto Project Development Manual. The files site.conf and auto.conf are not created by the environment initialization script. If you want the site.conf file, you need to create that yourself. The auto.conf file is typically created by an autobuilder: site.conf: You can use the conf/site.conf configuration file to configure multiple build directories. For example, suppose you had several build environments and they shared some common features. You can set these default build properties here. A good example is perhaps the packaging format to use through the PACKAGE_CLASSES variable. One useful scenario for using the conf/site.conf file is to extend your BBPATH variable to include the path to a conf/site.conf. Then, when BitBake looks for Metadata using BBPATH, it finds the conf/site.conf file and applies your common configurations found in the file. To override configurations in a particular build directory, alter the similar configurations within that build directory's conf/local.conf file. auto.conf: The file is usually created and written to by an autobuilder. The settings put into the file are typically the same as you would find in the conf/local.conf or the conf/site.conf files. You can edit all configuration files to further define any particular build environment. This process is represented by the "User Configuration Edits" box in the figure. When you launch your build with the bitbake target command, BitBake sorts out the configurations to ultimately define your build environment. It is important to understand that the OpenEmbedded build system reads the configuration files in a specific order: site.conf, auto.conf, and local.conf. And, the build system applies the normal assignment statement rules. Because the files are parsed in a specific order, variable assignments for the same variable could be affected. For example, if the auto.conf file and the local.conf set variable1 to different values, because the build system parses local.conf after auto.conf, variable1 is assigned the value from the local.conf file.
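As a minimal sketch (the values and the hypothetical meta-mylayer path are illustrative), a generated conf/local.conf might be edited to contain settings such as:
    MACHINE ?= "qemux86"
    DL_DIR ?= "${TOPDIR}/downloads"
    SSTATE_DIR ?= "${TOPDIR}/sstate-cache"
    TMPDIR = "${TOPDIR}/tmp"
    PACKAGE_CLASSES ?= "package_rpm"
    BB_NUMBER_THREADS ?= "4"
    PARALLEL_MAKE ?= "-j 4"
while conf/bblayers.conf lists the layers BitBake considers during the build:
    BBLAYERS ?= " \
      /home/user/poky/meta \
      /home/user/poky/meta-poky \
      /home/user/poky/meta-yocto-bsp \
      /home/user/poky/meta-mylayer \
      "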
Metadata, Machine Configuration, and Policy Configuration The previous section described the user configurations that define BitBake's global behavior. This section takes a closer look at the layers the build system uses to further control the build. These layers provide Metadata for the software, machine, and policy. In general, three types of layer input exist: Policy Configuration: Distribution Layers provide top-level or general policies for the image or SDK being built. For example, this layer would dictate whether BitBake produces RPM or IPK packages. Machine Configuration: Board Support Package (BSP) layers provide machine configurations. This type of information is specific to a particular target architecture. Metadata: Software layers contain user-supplied recipe files, patches, and append files. The following figure shows an expanded representation of the Metadata, Machine Configuration, and Policy Configuration input (layers) boxes of the general Yocto Project Development Environment figure: In general, all layers have a similar structure. They all contain a licensing file (e.g. COPYING) if the layer is to be distributed, a README file as good practice and especially if the layer is to be distributed, a configuration directory, and recipe directories. The Yocto Project has many layers that can be used. You can see a web-interface listing of them on the Source Repositories page. The layers are shown at the bottom categorized under "Yocto Metadata Layers." These layers are fundamentally a subset of the OpenEmbedded Metadata Index, which lists all layers provided by the OpenEmbedded community. Layers exist in the Yocto Project Source Repositories that cannot be found in the OpenEmbedded Metadata Index. These layers are either deprecated or experimental in nature. BitBake uses the conf/bblayers.conf file, which is part of the user configuration, to find what layers it should be using as part of the build. For more information on layers, see the "Understanding and Creating Layers" section in the Yocto Project Development Manual.
Distro Layer The distribution layer provides policy configurations for your distribution. Best practices dictate that you isolate these types of configurations into their own layer. Settings you provide in conf/distro/distro.conf override similar settings that BitBake finds in your conf/local.conf file in the Build Directory. The following list provides some explanation and references for what you typically find in the distribution layer: classes: Class files (.bbclass) hold common functionality that can be shared among recipes in the distribution. When your recipes inherit a class, they take on the settings and functions for that class. You can read more about class files in the "Classes" section. conf: This area holds configuration files for the layer (conf/layer.conf), the distribution (conf/distro/distro.conf), and any distribution-wide include files. recipes-*: Recipes and append files that affect common functionality across the distribution. This area could include recipes and append files to add distribution-specific configuration, initialization scripts, custom image recipes, and so forth.
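For illustration, a hypothetical distribution layer might provide a conf/distro/mydistro.conf along the lines of the following sketch; the distribution name and values are examples only:
    DISTRO = "mydistro"
    DISTRO_NAME = "My Example Distro"
    DISTRO_VERSION = "1.0"
    MAINTAINER = "dev@example.com"
    PACKAGE_CLASSES ?= "package_ipk"
    DISTRO_FEATURES_append = " pam"
Builds then select this policy by setting DISTRO = "mydistro" in the local.conf file.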
BSP Layer The BSP Layer provides machine configurations. Everything in this layer is specific to the machine for which you are building the image or the SDK. A common structure or form is defined for BSP layers. You can learn more about this structure in the Yocto Project Board Support Package (BSP) Developer's Guide. In order for a BSP layer to be considered compliant with the Yocto Project, it must meet some structural requirements. The BSP Layer's configuration directory contains configuration files for the machine (conf/machine/machine.conf) and, of course, the layer (conf/layer.conf). The remainder of the layer is dedicated to specific recipes by function: recipes-bsp, recipes-core, recipes-graphics, and recipes-kernel. Metadata can exist for multiple formfactors, graphics support systems, and so forth. While the figure shows several recipes-* directories, not all these directories appear in all BSP layers.
Software Layer The software layer provides the Metadata for additional software packages used during the build. This layer does not include Metadata that is specific to the distribution or the machine, which are found in their respective layers. This layer contains any new recipes that your project needs in the form of recipe files.
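For illustration, a hypothetical software layer named meta-mylayer would carry a conf/layer.conf similar to the following sketch, which registers the layer's recipes with BitBake:
    BBPATH .= ":${LAYERDIR}"
    BBFILES += "${LAYERDIR}/recipes-*/*/*.bb \
                ${LAYERDIR}/recipes-*/*/*.bbappend"
    BBFILE_COLLECTIONS += "mylayer"
    BBFILE_PATTERN_mylayer = "^${LAYERDIR}/"
    BBFILE_PRIORITY_mylayer = "6"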
Sources In order for the OpenEmbedded build system to create an image or any target, it must be able to access source files. The general Yocto Project Development Environment figure represents source files using the "Upstream Project Releases", "Local Projects", and "SCMs (optional)" boxes. The figure represents mirrors, which also play a role in locating source files, with the "Source Mirror(s)" box. The method by which source files are ultimately organized is a function of the project. For example, for released software, projects tend to use tarballs or other archived files that can capture the state of a release guaranteeing that it is statically represented. On the other hand, for a project that is more dynamic or experimental in nature, a project might keep source files in a repository controlled by a Source Control Manager (SCM) such as Git. Pulling source from a repository allows you to control the point in the repository (the revision) from which you want to build software. Finally, a combination of the two might exist, which would give the consumer a choice when deciding where to get source files. BitBake uses the SRC_URI variable to point to source files regardless of their location. Each recipe must have a SRC_URI variable that points to the source. Another area that plays a significant role in where source files come from is pointed to by the DL_DIR variable. This area is a cache that can hold previously downloaded source. You can also instruct the OpenEmbedded build system to create tarballs from Git repositories, which is not the default behavior, and store them in the DL_DIR by using the BB_GENERATE_MIRROR_TARBALLS variable. Judicious use of a DL_DIR directory can save the build system a trip across the Internet when looking for files. A good method for using a download directory is to have DL_DIR point to an area outside of your Build Directory. Doing so allows you to safely delete the Build Directory if needed without fear of removing any downloaded source file. The remainder of this section provides a deeper look into the source files and the mirrors. Here is a more detailed look at the source file area of the base figure:
Upstream Project Releases Upstream project releases exist anywhere in the form of an archived file (e.g. tarball or zip file). These files correspond to individual recipes. For example, the figure uses specific releases each for BusyBox, Qt, and Dbus. An archive file can be for any released product that can be built using a recipe.
Local Projects Local projects are custom bits of software the user provides. These bits reside somewhere local to a project - perhaps a directory into which the user checks in items (e.g. a local directory containing a development source tree used by the group). The canonical method through which to include a local project is to use the externalsrc class to include that local project. You use either the local.conf or a recipe's append file to override or set the recipe to point to the local directory on your disk to pull in the whole source tree. For information on how to use the externalsrc class, see the "externalsrc.bbclass" section.
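For example, a hypothetical append file for the recipe (or the recipe itself) could pull source from a local tree as in the following sketch; the paths are illustrative:
    inherit externalsrc
    EXTERNALSRC = "/home/user/src/myproject"
    EXTERNALSRC_BUILD = "/home/user/src/myproject-build"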
Source Control Managers (Optional) Another place the build system can get source files from is through an SCM such as Git or Subversion. In this case, a repository is cloned or checked out. The do_fetch task inside BitBake uses the SRC_URI variable and the argument's prefix to determine the correct fetcher module. For information on how to have the OpenEmbedded build system generate tarballs for Git repositories and place them in the DL_DIR directory, see the BB_GENERATE_MIRROR_TARBALLS variable. When fetching a repository, BitBake uses the SRCREV variable to determine the specific revision from which to build.
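For example, a hypothetical recipe could point SRC_URI at a Git repository and select the revision with SRCREV; the URL is illustrative, and in practice you would normally pin SRCREV to a specific commit rather than use ${AUTOREV}:
    SRC_URI = "git://git.example.com/myproject.git;protocol=https;branch=master"
    SRCREV = "${AUTOREV}"
    S = "${WORKDIR}/git"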
Source Mirror(s) Two kinds of mirrors exist: pre-mirrors and regular mirrors. The PREMIRRORS and MIRRORS variables point to these, respectively. BitBake checks pre-mirrors before looking upstream for any source files. Pre-mirrors are appropriate when you have a shared directory that is not a directory defined by the DL_DIR variable. A Pre-mirror typically points to a shared directory that is local to your organization. Regular mirrors can be any site across the Internet that is used as an alternative location for source code should the primary site not be functioning for some reason or another.
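As a minimal sketch that ties these pieces together (the download path and mirror URL are illustrative), a local.conf fragment might keep downloads outside the Build Directory, generate mirror tarballs for Git repositories, and consult an internal pre-mirror before going upstream:
    DL_DIR = "/home/user/yocto-downloads"
    BB_GENERATE_MIRROR_TARBALLS = "1"
    PREMIRRORS_prepend = "\
        git://.*/.*   http://mirror.example.com/sources/ \n \
        ftp://.*/.*   http://mirror.example.com/sources/ \n \
        http://.*/.*  http://mirror.example.com/sources/ \n \
        https://.*/.* http://mirror.example.com/sources/ \n"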
Package Feeds When the OpenEmbedded build system generates an image or an SDK, it gets the packages from a package feed area located in the Build Directory. The general Yocto Project Development Environment figure shows this package feeds area in the upper-right corner. This section looks a little closer into the package feeds area used by the build system. Here is a more detailed look at the area: Package feeds are an intermediary step in the build process. The OpenEmbedded build system provides classes to generate different package types, and you specify which classes to enable through the PACKAGE_CLASSES variable. Before placing the packages into package feeds, the build process validates them with generated output quality assurance checks through the insane class. The package feed area resides in the Build Directory. The directory the build system uses to temporarily store packages is determined by a combination of variables and the particular package manager in use. See the "Package Feeds" box in the illustration and note the information to the right of that area. In particular, the following defines where package files are kept: DEPLOY_DIR: Defined as tmp/deploy in the Build Directory. DEPLOY_DIR_*: Depending on the package manager used, the package type sub-folder. Given RPM, IPK, or DEB packaging and tarball creation, the DEPLOY_DIR_RPM, DEPLOY_DIR_IPK, DEPLOY_DIR_DEB, or DEPLOY_DIR_TAR variables are used, respectively. PACKAGE_ARCH: Defines architecture-specific sub-folders. For example, packages could exist for the i586 or qemux86 architectures. BitBake uses the do_package_write_* tasks to generate packages and place them into the package holding area (e.g. do_package_write_ipk for IPK packages). See the "do_package_write_deb", "do_package_write_ipk", "do_package_write_rpm", and "do_package_write_tar" sections for additional information. As an example, consider a scenario where the IPK package manager is being used and package architecture support for both i586 and qemux86 exists. Packages for the i586 architecture are placed in build/tmp/deploy/ipk/i586, while packages for the qemux86 architecture are placed in build/tmp/deploy/ipk/qemux86.
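For example, to have the build system produce IPK packages and populate the tmp/deploy/ipk area as described above, you could set the following in your local.conf; choosing package_ipk here is purely illustrative:
    PACKAGE_CLASSES ?= "package_ipk"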
BitBake The OpenEmbedded build system uses BitBake to produce images. You can see from the general Yocto Project Development Environment figure, the BitBake area consists of several functional areas. This section takes a closer look at each of those areas. Separate documentation exists for the BitBake tool. See the BitBake User Manual for reference material on BitBake.
Source Fetching The first stages of building a recipe are to fetch and unpack the source code: The do_fetch and do_unpack tasks fetch the source files and unpack them into the work directory. For every local file (e.g. file://) that is part of a recipe's SRC_URI statement, the OpenEmbedded build system takes a checksum of the file for the recipe and inserts the checksum into the signature for the do_fetch task. If any local file has been modified, the do_fetch task and all tasks that depend on it are re-executed. By default, everything is accomplished in the Build Directory, which has a defined structure. For additional general information on the Build Directory, see the "build/" section. Unpacked source files are pointed to by the S variable. Each recipe has an area in the Build Directory where the unpacked source code resides. The name of that directory for any given recipe is defined from several different variables. You can see the variables that define these directories by looking at the figure: TMPDIR - The base directory where the OpenEmbedded build system performs all its work during the build. PACKAGE_ARCH - The architecture of the built package or packages. TARGET_OS - The operating system of the target device. PN - The name of the built package. PV - The version of the recipe used to build the package. PR - The revision of the recipe used to build the package. WORKDIR - The location within TMPDIR where a specific package is built. S - Contains the unpacked source files for a given recipe.
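For example, a hypothetical recipe that fetches a released tarball provides checksums that the do_fetch task verifies and sets S to the directory into which the archive unpacks; the project name and placeholder checksums are illustrative:
    SRC_URI = "http://example.com/releases/myproject-${PV}.tar.gz"
    SRC_URI[md5sum] = "<md5 checksum of the tarball>"
    SRC_URI[sha256sum] = "<sha256 checksum of the tarball>"
    S = "${WORKDIR}/myproject-${PV}"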
Patching Once source code is fetched and unpacked, BitBake locates patch files and applies them to the source files: The do_patch task processes recipes by using the SRC_URI variable to locate applicable patch files, which by default are *.patch or *.diff files, or any file if "apply=yes" is specified for the file in SRC_URI. BitBake finds and applies multiple patches for a single recipe in the order in which it finds the patches. Patches are applied to the recipe's source files located in the S directory. For more information on how the source directories are created, see the "Source Fetching" section.
Configuration and Compilation After source code is patched, BitBake executes tasks that configure and compile the source code: This step in the build process consists of the following tasks: do_prepare_recipe_sysroot: This task sets up the two sysroots in ${WORKDIR} (i.e. recipe-sysroot and recipe-sysroot-native) so that the sysroots contain the contents of the do_populate_sysroot tasks of the recipes on which the recipe containing the task depends. A sysroot exists for both the target and for the native binaries, which run on the host system. do_configure: This task configures the source by enabling and disabling any build-time and configuration options for the software being built. Configurations can come from the recipe itself as well as from an inherited class. Additionally, the software itself might configure itself depending on the target for which it is being built. The configurations handled by the do_configure task are specific to source code configuration for the source code being built by the recipe. If you are using the autotools class, you can add additional configuration options by using the EXTRA_OECONF or PACKAGECONFIG_CONFARGS variables. For information on how these variables work within that class, see the meta/classes/autotools.bbclass file. do_compile: Once a configuration task has been satisfied, BitBake compiles the source using the do_compile task. Compilation occurs in the directory pointed to by the B variable. Realize that the B directory is, by default, the same as the S directory. do_install: Once compilation is done, BitBake executes the do_install task. This task copies files from the B directory and places them in a holding area pointed to by the D variable.
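For example, a hypothetical recipe that inherits the autotools class could influence do_configure as shown in this sketch; the configure options and the "gtk" feature name are illustrative:
    EXTRA_OECONF += "--disable-static"
    PACKAGECONFIG ??= ""
    PACKAGECONFIG[gtk] = "--with-gtk,--without-gtk,gtk+"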
Package Splitting After source code is configured and compiled, the OpenEmbedded build system analyzes the results and splits the output into packages: The do_package and do_packagedata tasks combine to analyze the files found in the D directory and split them into subsets based on available packages and files. The analyzing process involves the following as well as other items: splitting out debugging symbols, looking at shared library dependencies between packages, and looking at package relationships. The do_packagedata task creates package metadata based on the analysis such that the OpenEmbedded build system can generate the final packages. Working, staged, and intermediate results of the analysis and package splitting process use these areas: PKGD - The destination directory for packages before they are split. PKGDATA_DIR - A shared, global-state directory that holds data generated during the packaging process. PKGDESTWORK - A temporary work area used by the do_package task. PKGDEST - The parent directory for packages after they have been split. The FILES variable defines the files that go into each package in PACKAGES. If you want details on how this is accomplished, you can look at the package class. Depending on the type of packages being created (RPM, DEB, or IPK), the do_package_write_* task creates the actual packages and places them in the Package Feed area, which is ${TMPDIR}/deploy. You can see the "Package Feeds" section for more detail on that part of the build process. Support for creating feeds directly from the deploy/* directories does not exist. Creating such feeds usually requires some kind of feed maintenance mechanism that would upload the new packages into an official package feed (e.g. the Ångström distribution). This functionality is highly distribution-specific and thus is not provided out of the box.
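For example, a hypothetical recipe could split an optional tool into its own package by extending PACKAGES and assigning its files with FILES; the package and file names are illustrative:
    PACKAGES =+ "${PN}-tools"
    FILES_${PN}-tools = "${bindir}/mytool"
    RDEPENDS_${PN}-tools = "${PN}"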
Image Generation Once packages are split and stored in the Package Feeds area, the OpenEmbedded build system uses BitBake to generate the root filesystem image. The image generation process consists of several stages and depends on several tasks and variables. The do_rootfs task creates the root filesystem (file and directory structure) for an image. This task uses several key variables to help create the list of packages to actually install:
IMAGE_INSTALL: Lists the base set of packages to install from the Package Feeds area.
PACKAGE_EXCLUDE: Specifies packages that should not be installed.
IMAGE_FEATURES: Specifies features to include in the image. Most of these features map to additional packages for installation.
PACKAGE_CLASSES: Specifies the package backend to use and consequently helps determine where to locate packages within the Package Feeds area.
IMAGE_LINGUAS: Determines the language(s) for which additional language support packages are installed.
PACKAGE_INSTALL: The final list of packages passed to the package manager for installation into the image.
With IMAGE_ROOTFS pointing to the location of the filesystem under construction and the PACKAGE_INSTALL variable providing the final list of packages to install, the root filesystem is created. Package installation is under control of the package manager (e.g. dnf/rpm, opkg, or apt/dpkg) regardless of whether or not package management is enabled for the target. At the end of the process, if package management is not enabled for the target, the package manager's data files are deleted from the root filesystem. As part of the final stage of package installation, post-installation scripts that are part of the packages are run. Any scripts that fail to run on the build host are run on the target when the target system is first booted. If you are using a read-only root filesystem, all the post-installation scripts must succeed during the package installation phase, since they cannot be deferred to first boot on a read-only root filesystem. The final stages of the do_rootfs task handle post processing. Post processing includes creation of a manifest file and optimizations. The manifest file (.manifest) resides in the same directory as the root filesystem image and lists, line by line, the installed packages. The manifest file is useful for the testimage class, for example, to determine whether or not to run specific tests. See the IMAGE_MANIFEST variable for additional information. Optimizing processes run across the image include mklibs, prelink, and any other post-processing commands as defined by the ROOTFS_POSTPROCESS_COMMAND variable. The mklibs process optimizes the size of the libraries, while the prelink process optimizes the dynamic linking of shared libraries to reduce the start-up time of executables. After the root filesystem is built, processing begins on the image through the do_image task. The build system runs any pre-processing commands as defined by the IMAGE_PREPROCESS_COMMAND variable. This variable specifies a list of functions to call before the OpenEmbedded build system creates the final image output files. The do_image task dynamically creates other do_image_* tasks as needed, which include compressing the root filesystem image to reduce the overall size of the image. The process turns everything into an image file or a set of image files. The formats used for the root filesystem depend on the IMAGE_FSTYPES variable. The final task involved in image creation is the do_image_complete task.
This task completes the image by applying any image post processing as defined through the IMAGE_POSTPROCESS_COMMAND variable. The variable specifies a list of functions to call once the OpenEmbedded build system has created the final image output files. The entire image generation process is run under Pseudo. Running under Pseudo ensures that the files in the root filesystem have correct ownership.
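To make the variables involved in image generation more concrete, the following sketch pairs hypothetical local.conf settings with a post-processing function of the kind run by ROOTFS_POSTPROCESS_COMMAND. The package name, features, and the build-stamp function are examples only, not required settings, and the function itself would live in an image recipe or class rather than in local.conf.

    # Hypothetical local.conf additions feeding the do_rootfs and do_image tasks.
    PACKAGE_CLASSES = "package_rpm"
    IMAGE_FSTYPES = "ext4 tar.bz2"
    EXTRA_IMAGE_FEATURES = "debug-tweaks"
    IMAGE_INSTALL_append = " htop"
    IMAGE_LINGUAS = "en-us"

    # Hypothetical image recipe fragment: a custom function run against
    # ${IMAGE_ROOTFS} during the post-processing stage of do_rootfs.
    ROOTFS_POSTPROCESS_COMMAND += "example_add_build_stamp; "
    example_add_build_stamp() {
        echo "${DISTRO_VERSION}" > ${IMAGE_ROOTFS}/etc/build-stamp
    }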
SDK Generation The OpenEmbedded build system uses BitBake to generate the Software Development Kit (SDK) installer script for both the standard and extensible SDKs. For more information on the cross-development toolchain generation, see the "Cross-Development Toolchain Generation" section. For information on advantages gained when building a cross-development toolchain using the do_populate_sdk task, see the "Building an SDK Installer" section in the Yocto Project Software Development Kit (SDK) Developer's Guide. Like image generation, the SDK script process consists of several stages and depends on many variables. The do_populate_sdk and do_populate_sdk_ext tasks use several key variables to help create the list of packages to actually install. For information on these variables, see the "Application Development SDK" section. The do_populate_sdk task helps create the standard SDK and handles two parts: a target part and a host part. The target part is the part built for the target hardware and includes libraries and headers. The host part is the part of the SDK that runs on the SDKMACHINE. The do_populate_sdk_ext task helps create the extensible SDK and handles host and target parts differently than the do_populate_sdk task does for the standard SDK. For the extensible SDK, the task encapsulates the build system, which includes everything needed (host and target) for the SDK. Regardless of the type of SDK being constructed, the tasks perform some cleanup after which a cross-development environment setup script and any needed configuration files are created. The final output is the cross-development toolchain installation script (.sh file), which includes the environment setup script.
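For example, assuming an image named core-image-minimal (any buildable image recipe works), the two installers could be generated with the following commands; the exact installer filename depends on the distribution, machine, and release being built:

    $ bitbake -c populate_sdk core-image-minimal        # standard SDK installer
    $ bitbake -c populate_sdk_ext core-image-minimal    # extensible SDK installer

Both commands write a self-extracting .sh installer into the deploy/sdk area of the Build Directory, as described in the "Application Development SDK" section later in this chapter.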
Stamp Files and the Rerunning of Tasks For each task that completes successfully, BitBake writes a stamp file into the STAMPS_DIR directory. The beginning of the stamp file's filename is determined by the STAMP variable, and the end of the name consists of the task's name and current input checksum. This naming scheme assumes that BB_SIGNATURE_HANDLER is "OEBasicHash", which is almost always the case in current OpenEmbedded. To determine if a task needs to be rerun, BitBake checks if a stamp file with a matching input checksum exists for the task. If such a stamp file exists, the task's output is assumed to exist and still be valid. If the file does not exist, the task is rerun. The stamp mechanism is more general than the shared state (sstate) cache mechanism described in the "Setscene Tasks and Shared State" section. BitBake avoids rerunning any task that has a valid stamp file, not just tasks that can be accelerated through the sstate cache. However, you should realize that stamp files only serve as a marker that some work has been done and that these files do not record task output. The actual task output would usually be somewhere in TMPDIR (e.g. in some recipe's WORKDIR). What the sstate cache mechanism adds is a way to cache task output that can then be shared between build machines. Since STAMPS_DIR is usually a subdirectory of TMPDIR, removing TMPDIR also removes STAMPS_DIR, which means tasks are properly rerun to repopulate TMPDIR. If you want some task to always be considered "out of date", you can mark it with the nostamp varflag. If some other task depends on such a task, then that task is also always considered out of date, which might not be what you want. For details on how to view information about a task's signature, see the "Viewing Task Variable Dependencies" section.
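As a small, hypothetical illustration of the nostamp varflag, the recipe fragment below adds a custom task that is always considered out of date; the task name and its body are placeholders:

    # Hypothetical recipe fragment: a custom task that never gets a valid stamp,
    # so BitBake reruns it on every build.
    do_example_report() {
        bbnote "Regenerating report; this task always reruns because of nostamp"
    }
    addtask example_report after do_compile before do_build
    do_example_report[nostamp] = "1"

If you only need to rerun a single task once, you can instead force it manually with a command such as bitbake -f -c compile recipename, which invalidates the existing stamp for that task.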
Setscene Tasks and Shared State The description of tasks so far assumes that BitBake needs to build everything and that no prebuilt objects are available. BitBake does support skipping tasks if prebuilt objects are available. These objects are usually made available in the form of a shared state (sstate) cache. For information on variables affecting sstate, see the SSTATE_DIR and SSTATE_MIRRORS variables. The idea of a setscene task (i.e. do_taskname_setscene) is a version of the task where, instead of building something, BitBake can skip to the end result and simply place a set of files into specific locations as needed. In some cases, it makes sense to have a setscene task variant (e.g. generating package files in the do_package_write_* task). In other cases, it does not make sense (e.g. for the do_patch or do_unpack tasks) because the work involved would be equal to or greater than the underlying task. In the OpenEmbedded build system, the common tasks that have setscene variants are do_package, do_package_write_*, do_deploy, do_packagedata, and do_populate_sysroot. Notice that these are most of the tasks whose output is an end result. The OpenEmbedded build system has knowledge of the relationship between these tasks and other tasks that precede them. For example, if BitBake runs do_populate_sysroot_setscene for something, there is little point in running any of the do_fetch, do_unpack, do_patch, do_configure, do_compile, and do_install tasks. However, if do_package needs to be run, BitBake would need to run those other tasks. It becomes more complicated if everything can come from an sstate cache because some objects are simply not required at all. For example, you do not need a compiler or native tools, such as quilt, if there is nothing to compile or patch. If the do_package_write_* packages are available from sstate, BitBake does not need the do_package task data. To handle all these complexities, BitBake runs in two phases. The first is the "setscene" stage. During this stage, BitBake first checks the sstate cache for any targets it is planning to build. BitBake does a fast check to see if the object exists rather than performing a complete download. If nothing exists in the cache, the setscene stage completes and the second phase, the main build, proceeds. If objects are found in the sstate cache, the OpenEmbedded build system works backwards from the end targets specified by the user. For example, if an image is being built, the OpenEmbedded build system first looks for the packages needed for that image and the tools needed to construct an image. If those are available, the compiler is not needed. Thus, the compiler is not even downloaded. If something was found to be unavailable, or the download or setscene task fails, the OpenEmbedded build system then tries to install dependencies, such as the compiler, from the cache. The function specified by the BB_HASHCHECK_FUNCTION variable checks the availability of objects in the sstate cache and returns a list of the objects that are available. The function specified by the BB_SETSCENE_DEPVALID variable determines whether a given dependency needs to be followed; for each relationship it is passed, the function returns a True or False value.
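The following hypothetical local.conf fragment shows how a build might point at a local sstate directory and at a shared sstate mirror; the path and URL are placeholders for your own infrastructure, and the PATH token is expanded by the build system to the path of each requested sstate object:

    # Hypothetical local.conf additions for reusing prebuilt (sstate) objects.
    SSTATE_DIR = "/srv/yocto/sstate-cache"

    # Check this mirror for prebuilt objects before rebuilding from scratch.
    SSTATE_MIRRORS ?= "file://.* http://sstate.example.com/cache/PATH \n"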
Images The images produced by the OpenEmbedded build system are compressed forms of the root filesystem that are ready to boot on a target device. You can see from the general Yocto Project Development Environment figure that BitBake output, in part, consists of images. This section looks more closely at this output. For a list of example images that the Yocto Project provides, see the "Images" chapter. Images are written out to the Build Directory inside the tmp/deploy/images/machine/ folder as shown in the figure. This folder contains any files expected to be loaded on the target device. The DEPLOY_DIR variable points to the deploy directory, while the DEPLOY_DIR_IMAGE variable points to the appropriate directory containing images for the current configuration.
kernel-image: A kernel binary file. The KERNEL_IMAGETYPE variable setting determines the naming scheme for the kernel image file. Depending on that variable, the file could begin with a variety of naming strings. The deploy/images/machine directory can contain multiple image files for the machine.
root-filesystem-image: Root filesystems for the target device (e.g. *.ext3 or *.bz2 files). The IMAGE_FSTYPES variable setting determines the root filesystem image type. The deploy/images/machine directory can contain multiple root filesystems for the machine.
kernel-modules: Tarballs that contain all the modules built for the kernel. Kernel module tarballs exist for legacy purposes and can be suppressed by setting the MODULE_TARBALL_DEPLOY variable to "0". The deploy/images/machine directory can contain multiple kernel module tarballs for the machine.
bootloaders: Bootloaders supporting the image, if applicable to the target machine. The deploy/images/machine directory can contain multiple bootloaders for the machine.
symlinks: The deploy/images/machine folder contains a symbolic link that points to the most recently built file for each machine. These links might be useful for external scripts that need to obtain the latest version of each file.
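As a hedged sketch, the following machine or local configuration values influence which of the artifacts described above appear under tmp/deploy/images/machine/; the specific values are illustrative, not defaults:

    # Hypothetical configuration values affecting the deployed image artifacts.
    KERNEL_IMAGETYPE = "zImage"       # determines the kernel-image file type/name
    MODULE_TARBALL_DEPLOY = "0"       # suppresses the legacy kernel-modules tarball

To find the exact output directory for your current configuration, you can query the build environment, for example with bitbake -e core-image-minimal | grep '^DEPLOY_DIR_IMAGE=' (core-image-minimal is just an example target).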
Application Development SDK In the general Yocto Project Development Environment figure, the output labeled "Application Development SDK" represents an SDK. The SDK generation process differs depending on whether you build a standard SDK (e.g. bitbake -c populate_sdk imagename) or an extensible SDK (e.g. bitbake -c populate_sdk_ext imagename). This section takes a closer look at this output. The specific form of this output is a self-extracting SDK installer (*.sh) that, when run, installs the SDK, which consists of a cross-development toolchain, a set of libraries and headers, and an SDK environment setup script. Running this installer essentially sets up your cross-development environment. You can think of the cross-toolchain as the "host" part because it runs on the SDK machine. You can think of the libraries and headers as the "target" part because they are built for the target hardware. The environment setup script is added so that you can initialize the environment before using the tools. The Yocto Project supports several methods by which you can set up this cross-development environment. These methods include downloading pre-built SDK installers or building and installing your own SDK installer. For background information on cross-development toolchains in the Yocto Project development environment, see the "Cross-Development Toolchain Generation" section. For information on setting up a cross-development environment, see the Yocto Project Software Development Kit (SDK) Developer's Guide. Once built, the SDK installers are written out to the deploy/sdk folder inside the Build Directory as shown in the figure at the beginning of this section. Depending on the type of SDK, several variables exist that help configure these files. The following list shows the variables associated with a standard SDK:
DEPLOY_DIR: Points to the deploy directory.
SDKMACHINE: Specifies the architecture of the machine on which the cross-development tools are run to create packages for the target hardware.
SDKIMAGE_FEATURES: Lists the features to include in the "target" part of the SDK.
TOOLCHAIN_HOST_TASK: Lists packages that make up the host part of the SDK (i.e. the part that runs on the SDKMACHINE). When you use bitbake -c populate_sdk imagename to create the SDK, a set of default packages apply. This variable allows you to add more packages.
TOOLCHAIN_TARGET_TASK: Lists packages that make up the target part of the SDK (i.e. the part built for the target hardware).
SDKPATH: Defines the default SDK installation path offered by the installation script.
This next list shows the variables associated with an extensible SDK:
DEPLOY_DIR: Points to the deploy directory.
SDK_EXT_TYPE: Controls whether or not shared state artifacts are copied into the extensible SDK. By default, all required shared state artifacts are copied into the SDK.
SDK_INCLUDE_PKGDATA: Specifies whether or not packagedata is included in the extensible SDK for all recipes in the "world" target.
SDK_INCLUDE_TOOLCHAIN: Specifies whether or not the toolchain is included when building the extensible SDK.
SDK_LOCAL_CONF_WHITELIST: A list of variables allowed through from the build system configuration into the extensible SDK configuration.
SDK_LOCAL_CONF_BLACKLIST: A list of variables not allowed through from the build system configuration into the extensible SDK configuration.
SDK_INHERIT_BLACKLIST: A list of classes to remove from the INHERIT value globally within the extensible SDK configuration.
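To tie several of these variables together, here is a hypothetical local.conf fragment that customizes a standard SDK before running bitbake -c populate_sdk. The machine architecture, feature, and extra packages shown are examples only, not recommended or default values:

    # Hypothetical local.conf additions for the standard SDK installer.
    SDKMACHINE = "x86_64"
    SDKIMAGE_FEATURES += "staticdev-pkgs"
    # Add packages beyond the defaults to the host and target parts of the SDK.
    TOOLCHAIN_HOST_TASK_append = " nativesdk-cmake"
    TOOLCHAIN_TARGET_TASK_append = " libpng-dev"

After installing the generated .sh installer, you source the environment setup script it provides to initialize the cross-development environment before using the tools.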