From 6846d4d00bc3a9d4e188ad9c8cfdf6e45cd1ba06 Mon Sep 17 00:00:00 2001 From: Michael Opdenacker Date: Fri, 9 Dec 2022 19:01:55 +0100 Subject: manuals: define proper numbered lists Using "#." instead of "1.", "2.", "3.", etc. (From yocto-docs rev: 11c2585acd0fa6c330702af2359ce5a9e47cde1f) Signed-off-by: Michael Opdenacker Reported-by: Quentin Schulz Signed-off-by: Richard Purdie --- documentation/dev-manual/bmaptool.rst | 6 +- documentation/dev-manual/building.rst | 46 +++++------ documentation/dev-manual/changes.rst | 48 +++++------ documentation/dev-manual/debugging.rst | 22 ++--- documentation/dev-manual/gobject-introspection.rst | 18 ++--- documentation/dev-manual/layers.rst | 18 ++--- documentation/dev-manual/packages.rst | 4 +- documentation/dev-manual/qemu.rst | 12 +-- documentation/dev-manual/quilt.rst | 16 ++-- documentation/dev-manual/runtime-testing.rst | 28 +++---- documentation/dev-manual/start.rst | 94 +++++++++++----------- documentation/dev-manual/upgrading-recipes.rst | 28 +++---- documentation/dev-manual/wayland.rst | 4 +- documentation/dev-manual/wic.rst | 8 +- 14 files changed, 176 insertions(+), 176 deletions(-) (limited to 'documentation/dev-manual') diff --git a/documentation/dev-manual/bmaptool.rst b/documentation/dev-manual/bmaptool.rst index 4ee6f5e48b..9add72cf3b 100644 --- a/documentation/dev-manual/bmaptool.rst +++ b/documentation/dev-manual/bmaptool.rst @@ -28,18 +28,18 @@ Following, is an example that shows how to flash a Wic image. Realize that while this example uses a Wic image, you can use Bmaptool to flash any type of image. Use these steps to flash an image using Bmaptool: -1. *Update your local.conf File:* You need to have the following set +#. *Update your local.conf File:* You need to have the following set in your ``local.conf`` file before building your image:: IMAGE_FSTYPES += "wic wic.bmap" -2. *Get Your Image:* Either have your image ready (pre-built with the +#. 
*Get Your Image:* Either have your image ready (pre-built with the :term:`IMAGE_FSTYPES` setting previously mentioned) or take the step to build the image:: $ bitbake image -3. *Flash the Device:* Flash the device with the image by using Bmaptool +#. *Flash the Device:* Flash the device with the image by using Bmaptool depending on your particular setup. The following commands assume the image resides in the :term:`Build Directory`'s ``deploy/images/`` area: diff --git a/documentation/dev-manual/building.rst b/documentation/dev-manual/building.rst index 2798dd3e98..3064974cc5 100644 --- a/documentation/dev-manual/building.rst +++ b/documentation/dev-manual/building.rst @@ -43,11 +43,11 @@ The following figure and list overviews the build process: .. image:: figures/bitbake-build-flow.png :width: 100% -1. *Set up Your Host Development System to Support Development Using the +#. *Set up Your Host Development System to Support Development Using the Yocto Project*: See the ":doc:`start`" section for options on how to get a build host ready to use the Yocto Project. -2. *Initialize the Build Environment:* Initialize the build environment +#. *Initialize the Build Environment:* Initialize the build environment by sourcing the build environment script (i.e. :ref:`structure-core-script`):: @@ -66,7 +66,7 @@ The following figure and list overviews the build process: event, it's typically cleaner to locate the :term:`Build Directory` somewhere outside of your source directory. -3. *Make Sure Your* ``local.conf`` *File is Correct*: Ensure the +#. *Make Sure Your* ``local.conf`` *File is Correct*: Ensure the ``conf/local.conf`` configuration file, which is found in the :term:`Build Directory`, is set up how you want it. 
This file defines many aspects of the build environment including the target machine architecture @@ -74,7 +74,7 @@ The following figure and list overviews the build process: the build (:term:`PACKAGE_CLASSES`), and a centralized tarball download directory through the :term:`DL_DIR` variable. -4. *Build the Image:* Build the image using the ``bitbake`` command:: +#. *Build the Image:* Build the image using the ``bitbake`` command:: $ bitbake target @@ -273,12 +273,12 @@ loading modules needed to locate and mount the final root filesystem. Follow these steps to create an :term:`Initramfs` image: -1. *Create the :term:`Initramfs` Image Recipe:* You can reference the +#. *Create the :term:`Initramfs` Image Recipe:* You can reference the ``core-image-minimal-initramfs.bb`` recipe found in the ``meta/recipes-core`` directory of the :term:`Source Directory` as an example from which to work. -2. *Decide if You Need to Bundle the :term:`Initramfs` Image Into the Kernel +#. *Decide if You Need to Bundle the :term:`Initramfs` Image Into the Kernel Image:* If you want the :term:`Initramfs` image that is built to be bundled in with the kernel image, set the :term:`INITRAMFS_IMAGE_BUNDLE` variable to ``"1"`` in your ``local.conf`` configuration file and set the @@ -290,7 +290,7 @@ Follow these steps to create an :term:`Initramfs` image: :term:`CONFIG_INITRAMFS_SOURCE` variable, allowing the :term:`Initramfs` image to be built into the kernel normally. -3. *Optionally Add Items to the Initramfs Image Through the Initramfs +#. *Optionally Add Items to the Initramfs Image Through the Initramfs Image Recipe:* If you add items to the :term:`Initramfs` image by way of its recipe, you should use :term:`PACKAGE_INSTALL` rather than :term:`IMAGE_INSTALL`. :term:`PACKAGE_INSTALL` gives more direct control of @@ -298,7 +298,7 @@ Follow these steps to create an :term:`Initramfs` image: necessarily want that are set by the :ref:`image ` or :ref:`core-image ` classes. -4. 
*Build the Kernel Image and the Initramfs Image:* Build your kernel +#. *Build the Kernel Image and the Initramfs Image:* Build your kernel image using BitBake. Because the :term:`Initramfs` image recipe is a dependency of the kernel image, the :term:`Initramfs` image is built as well and bundled with the kernel image if you used the @@ -316,7 +316,7 @@ to override it. To achieve this, you need to perform some additional steps: -1. *Create a multiconfig for your Initramfs image:* You can perform the steps +#. *Create a multiconfig for your Initramfs image:* You can perform the steps on ":ref:`dev-manual/building:building images for multiple targets using multiple configurations`" to create a separate multiconfig. For the sake of simplicity let's assume such multiconfig is called: ``initramfscfg.conf`` and contains the variables:: @@ -324,7 +324,7 @@ To achieve this, you need to perform some additional steps: TMPDIR="${TOPDIR}/tmp-initramfscfg" TCLIBC="musl" -2. *Set additional Initramfs variables on your main configuration:* +#. *Set additional Initramfs variables on your main configuration:* Additionally, on your main configuration (``local.conf``) you need to set the variables:: @@ -599,13 +599,13 @@ are a couple of areas to experiment with: - ``glibc``: In general, follow this process: - 1. Remove ``glibc`` features from + #. Remove ``glibc`` features from :term:`DISTRO_FEATURES` that you think you do not need. - 2. Build your distribution. + #. Build your distribution. - 3. If the build fails due to missing symbols in a package, determine + #. If the build fails due to missing symbols in a package, determine if you can reconfigure the package to not need those features. For example, change the configuration to not support wide character support as is done for ``ncurses``. Or, if support for those @@ -837,13 +837,13 @@ build. Follow these steps to populate your Downloads directory: -1. *Create a Clean Downloads Directory:* Start with an empty downloads +#. 
*Create a Clean Downloads Directory:* Start with an empty downloads directory (:term:`DL_DIR`). You start with an empty downloads directory by either removing the files in the existing directory or by setting :term:`DL_DIR` to point to either an empty location or one that does not yet exist. -2. *Generate Tarballs of the Source Git Repositories:* Edit your +#. *Generate Tarballs of the Source Git Repositories:* Edit your ``local.conf`` configuration file as follows:: DL_DIR = "/home/your-download-dir/" @@ -856,7 +856,7 @@ Follow these steps to populate your Downloads directory: :term:`BB_GENERATE_MIRROR_TARBALLS` variable for more information. -3. *Populate Your Downloads Directory Without Building:* Use BitBake to +#. *Populate Your Downloads Directory Without Building:* Use BitBake to fetch your sources but inhibit the build:: $ bitbake target --runonly=fetch @@ -865,7 +865,7 @@ Follow these steps to populate your Downloads directory: a "snapshot" of the source files in the form of tarballs, which can be used for the build. -4. *Optionally Remove Any Git or other SCM Subdirectories From the +#. *Optionally Remove Any Git or other SCM Subdirectories From the Downloads Directory:* If you want, you can clean up your downloads directory by removing any Git or other Source Control Management (SCM) subdirectories such as ``${DL_DIR}/git2/*``. The tarballs @@ -879,7 +879,7 @@ any machine and at any time. Follow these steps to build your target using the files in the downloads directory: -1. *Using Local Files Only:* Inside your ``local.conf`` file, add the +#. *Using Local Files Only:* Inside your ``local.conf`` file, add the :term:`SOURCE_MIRROR_URL` variable, inherit the :ref:`own-mirrors ` class, and use the :term:`BB_NO_NETWORK` variable to your ``local.conf``:: @@ -894,11 +894,11 @@ directory: BitBake's fetching process in step 3 stays local, which means files from your "own-mirror" are used. -2. *Start With a Clean Build:* You can start with a clean build by +#. 
*Start With a Clean Build:* You can start with a clean build by removing the ``${``\ :term:`TMPDIR`\ ``}`` directory or using a new :term:`Build Directory`. -3. *Build Your Target:* Use BitBake to build your target:: +#. *Build Your Target:* Use BitBake to build your target:: $ bitbake target @@ -925,16 +925,16 @@ directory: If you do have recipes that use :term:`AUTOREV`, you can take steps to still use the recipes in an offline build. Do the following: - 1. Use a configuration generated by enabling :ref:`build + #. Use a configuration generated by enabling :ref:`build history `. - 2. Use the ``buildhistory-collect-srcrevs`` command to collect the + #. Use the ``buildhistory-collect-srcrevs`` command to collect the stored :term:`SRCREV` values from the build's history. For more information on collecting these values, see the ":ref:`dev-manual/build-quality:build history package information`" section. - 3. Once you have the correct source revisions, you can modify + #. Once you have the correct source revisions, you can modify those recipes to set :term:`SRCREV` to specific versions of the software. diff --git a/documentation/dev-manual/changes.rst b/documentation/dev-manual/changes.rst index 8ccbf0d7ee..9cb25f3549 100644 --- a/documentation/dev-manual/changes.rst +++ b/documentation/dev-manual/changes.rst @@ -22,40 +22,40 @@ steps, see the Yocto Project Use the following general steps to submit a bug: -1. Open the Yocto Project implementation of :yocto_bugs:`Bugzilla <>`. +#. Open the Yocto Project implementation of :yocto_bugs:`Bugzilla <>`. -2. Click "File a Bug" to enter a new bug. +#. Click "File a Bug" to enter a new bug. -3. Choose the appropriate "Classification", "Product", and "Component" +#. Choose the appropriate "Classification", "Product", and "Component" for which the bug was found. Bugs for the Yocto Project fall into one of several classifications, which in turn break down into several products and components. 
For example, for a bug against the ``meta-intel`` layer, you would choose "Build System, Metadata & Runtime", "BSPs", and "bsps-meta-intel", respectively. -4. Choose the "Version" of the Yocto Project for which you found the +#. Choose the "Version" of the Yocto Project for which you found the bug (e.g. &DISTRO;). -5. Determine and select the "Severity" of the bug. The severity +#. Determine and select the "Severity" of the bug. The severity indicates how the bug impacted your work. -6. Choose the "Hardware" that the bug impacts. +#. Choose the "Hardware" that the bug impacts. -7. Choose the "Architecture" that the bug impacts. +#. Choose the "Architecture" that the bug impacts. -8. Choose a "Documentation change" item for the bug. Fixing a bug might +#. Choose a "Documentation change" item for the bug. Fixing a bug might or might not affect the Yocto Project documentation. If you are unsure of the impact to the documentation, select "Don't Know". -9. Provide a brief "Summary" of the bug. Try to limit your summary to +#. Provide a brief "Summary" of the bug. Try to limit your summary to just a line or two and be sure to capture the essence of the bug. -10. Provide a detailed "Description" of the bug. You should provide as +#. Provide a detailed "Description" of the bug. You should provide as much detail as you can about the context, behavior, output, and so forth that surrounds the bug. You can even attach supporting files for output from logs by using the "Add an attachment" button. -11. Click the "Submit Bug" button submit the bug. A new Bugzilla number +#. Click the "Submit Bug" button submit the bug. A new Bugzilla number is assigned to the bug and the defect is logged in the bug tracking system. @@ -162,16 +162,16 @@ The following sections provide procedures for submitting a change. Preparing Changes for Submission -------------------------------- -1. *Make Your Changes Locally:* Make your changes in your local Git +#. 
*Make Your Changes Locally:* Make your changes in your local Git repository. You should make small, controlled, isolated changes. Keeping changes small and isolated aids review, makes merging/rebasing easier and keeps the change history clean should anyone need to refer to it in future. -2. *Stage Your Changes:* Stage your changes by using the ``git add`` +#. *Stage Your Changes:* Stage your changes by using the ``git add`` command on each file you changed. -3. *Commit Your Changes:* Commit the change by using the ``git commit`` +#. *Commit Your Changes:* Commit the change by using the ``git commit`` command. Make sure your commit information follows standards by following these accepted conventions: @@ -257,7 +257,7 @@ Here is the general procedure on how to submit a patch through email without using the scripts once the steps in :ref:`dev-manual/changes:preparing changes for submission` have been followed: -1. *Format the Commit:* Format the commit into an email message. To +#. *Format the Commit:* Format the commit into an email message. To format commits, use the ``git format-patch`` command. When you provide the command, you must include a revision list or a number of patches as part of the command. For example, either of these two @@ -289,7 +289,7 @@ without using the scripts once the steps in or to OpenEmbedded, you might consider requesting a contrib area and the necessary associated rights. -2. *Send the patches via email:* Send the patches to the recipients and +#. *Send the patches via email:* Send the patches to the recipients and relevant mailing lists by using the ``git send-email`` command. .. note:: @@ -352,7 +352,7 @@ been followed: in the `Git Community Book `__. -1. *Push Your Commits to a "Contrib" Upstream:* If you have arranged for +#. 
*Push Your Commits to a "Contrib" Upstream:* If you have arranged for permissions to push to an upstream contrib repository, push the change to that repository:: @@ -367,7 +367,7 @@ been followed: $ git push meta-intel-contrib your_name/README -2. *Determine Who to Notify:* Determine the maintainer or the mailing +#. *Determine Who to Notify:* Determine the maintainer or the mailing list that you need to notify for the change. Before submitting any change, you need to be sure who the maintainer @@ -395,7 +395,7 @@ been followed: lists `" section in the Yocto Project Reference Manual. -3. *Make a Pull Request:* Notify the maintainer or the mailing list that +#. *Make a Pull Request:* Notify the maintainer or the mailing list that you have pushed a change by making a pull request. The Yocto Project provides two scripts that conveniently let you @@ -486,30 +486,30 @@ branch can be obtained from the With this in mind, the steps to submit a change for a stable branch are as follows: -1. *Identify the bug or CVE to be fixed:* This information should be +#. *Identify the bug or CVE to be fixed:* This information should be collected so that it can be included in your submission. See :ref:`dev-manual/vulnerabilities:checking for vulnerabilities` for details about CVE tracking. -2. *Check if the fix is already present in the master branch:* This will +#. *Check if the fix is already present in the master branch:* This will result in the most straightforward path into the stable branch for the fix. - a. *If the fix is present in the master branch --- submit a backport request + #. *If the fix is present in the master branch --- submit a backport request by email:* You should send an email to the relevant stable branch maintainer and the mailing list with details of the bug or CVE to be fixed, the commit hash on the master branch that fixes the issue and the stable branches which you would like this fix to be backported to. - b. 
*If the fix is not present in the master branch --- submit the fix to the + #. *If the fix is not present in the master branch --- submit the fix to the master branch first:* This will ensure that the fix passes through the project's usual patch review and test processes before being accepted. It will also ensure that bugs are not left unresolved in the master branch itself. Once the fix is accepted in the master branch a backport request can be submitted as above. - c. *If the fix is unsuitable for the master branch --- submit a patch + #. *If the fix is unsuitable for the master branch --- submit a patch directly for the stable branch:* This method should be considered as a last resort. It is typically necessary when the master branch is using a newer version of the software which includes an upstream fix for the diff --git a/documentation/dev-manual/debugging.rst b/documentation/dev-manual/debugging.rst index f433e8e6a9..921022475f 100644 --- a/documentation/dev-manual/debugging.rst +++ b/documentation/dev-manual/debugging.rst @@ -297,11 +297,11 @@ If you are unsure whether a variable dependency is being picked up automatically for a given task, you can list the variable dependencies BitBake has determined by doing the following: -1. Build the recipe containing the task:: +#. Build the recipe containing the task:: $ bitbake recipename -2. Inside the :term:`STAMPS_DIR` +#. Inside the :term:`STAMPS_DIR` directory, find the signature data (``sigdata``) file that corresponds to the task. The ``sigdata`` files contain a pickled Python database of all the metadata that went into creating the input @@ -319,7 +319,7 @@ BitBake has determined by doing the following: the cached task output. The ``siginfo`` files contain exactly the same information as ``sigdata`` files. -3. Run ``bitbake-dumpsig`` on the ``sigdata`` or ``siginfo`` file. Here +#. Run ``bitbake-dumpsig`` on the ``sigdata`` or ``siginfo`` file. 
Here is an example:: $ bitbake-dumpsig ${BUILDDIR}/tmp/stamps/i586-poky-linux/db/6.0.30-r1.do_fetch.sigdata.7c048c18222b16ff0bcee2000ef648b1 @@ -992,7 +992,7 @@ site `__. The following steps show you how to debug using the GNU project debugger. -1. *Configure your build system to construct the companion debug +#. *Configure your build system to construct the companion debug filesystem:* In your ``local.conf`` file, set the following:: @@ -1012,7 +1012,7 @@ debugger. the full filesystem for debugging. Subsequent steps in this procedure show how to combine the partial filesystem with the full filesystem. -2. *Configure the system to include gdbserver in the target filesystem:* +#. *Configure the system to include gdbserver in the target filesystem:* Make the following addition in your ``local.conf`` file:: @@ -1021,7 +1021,7 @@ debugger. The change makes sure the ``gdbserver`` package is included. -3. *Build the environment:* +#. *Build the environment:* Use the following command to construct the image and the companion Debug Filesystem:: @@ -1057,7 +1057,7 @@ debugger. the actual image (e.g. ``gdb-cross-i586``). The suggestion is usually the actual name you want to use. -4. *Set up the* ``debugfs``\ *:* +#. *Set up the* ``debugfs``\ *:* Run the following commands to set up the ``debugfs``:: @@ -1066,7 +1066,7 @@ debugger. $ tar xvfj build-dir/tmp/deploy/images/machine/image.rootfs.tar.bz2 $ tar xvfj build-dir/tmp/deploy/images/machine/image-dbg.rootfs.tar.bz2 -5. *Set up GDB:* +#. *Set up GDB:* Install the SDK (if you built one) and then source the correct environment file. Sourcing the environment file puts the SDK in your @@ -1075,7 +1075,7 @@ debugger. If you are using the build system, Gdb is located in `build-dir`\ ``/tmp/sysroots/``\ `host`\ ``/usr/bin/``\ `architecture`\ ``/``\ `architecture`\ ``-gdb`` -6. *Boot the target:* +#. *Boot the target:* For information on how to run QEMU, see the `QEMU Documentation `__. @@ -1084,7 +1084,7 @@ debugger. 
Be sure to verify that your host can access the target via TCP. -7. *Debug a program:* +#. *Debug a program:* Debugging a program involves running gdbserver on the target and then running Gdb on the host. The example in this step debugs ``gzip``: @@ -1116,7 +1116,7 @@ debugger. user's ``~/.gdbinit`` file. Upon starting, Gdb automatically runs whatever commands are in that file. -8. *Deploying without a full image rebuild:* +#. *Deploying without a full image rebuild:* In many cases, during development you want a quick method to deploy a new binary to the target and debug it, without waiting for a full diff --git a/documentation/dev-manual/gobject-introspection.rst b/documentation/dev-manual/gobject-introspection.rst index 89f21b7d10..28e51240c3 100644 --- a/documentation/dev-manual/gobject-introspection.rst +++ b/documentation/dev-manual/gobject-introspection.rst @@ -39,11 +39,11 @@ Enabling the Generation of Introspection Data Enabling the generation of introspection data (GIR files) in your library package involves the following: -1. Inherit the +#. Inherit the :ref:`gobject-introspection <ref-classes-gobject-introspection>` class. -2. Make sure introspection is not disabled anywhere in the recipe or +#. Make sure introspection is not disabled anywhere in the recipe or from anything the recipe includes. Also, make sure that "gobject-introspection-data" is not in :term:`DISTRO_FEATURES_BACKFILL_CONSIDERED` @@ -51,7 +51,7 @@ library package involves the following: :term:`MACHINE_FEATURES_BACKFILL_CONSIDERED`. In either of these conditions, nothing will happen. -3. Try to build the recipe. If you encounter build errors that look like +#. Try to build the recipe. 
If you encounter build errors that look like something is unable to find ``.so`` libraries, check where these libraries are located in the source tree and add the following to the recipe:: GIR_EXTRA_LIBS_PATH = "${B}/something" @@ -63,7 +63,7 @@ library package involves the following: See recipes in the ``oe-core`` repository that use that :term:`GIR_EXTRA_LIBS_PATH` variable as an example. -4. Look for any other errors, which probably mean that introspection +#. Look for any other errors, which probably mean that introspection support in a package is not entirely standard, and thus breaks down in a cross-compilation environment. For such cases, custom-made fixes are needed. A good place to ask and receive help in these cases is @@ -116,21 +116,21 @@ Testing that Introspection Works in an Image Use the following procedure to test if generating introspection data is working in an image: -1. Make sure that "gobject-introspection-data" is not in +#. Make sure that "gobject-introspection-data" is not in :term:`DISTRO_FEATURES_BACKFILL_CONSIDERED` and that "qemu-usermode" is not in :term:`MACHINE_FEATURES_BACKFILL_CONSIDERED`. -2. Build ``core-image-sato``. +#. Build ``core-image-sato``. -3. Launch a Terminal and then start Python in the terminal. +#. Launch a Terminal and then start Python in the terminal. -4. Enter the following in the terminal:: +#. Enter the following in the terminal:: >>> from gi.repository import GLib >>> GLib.get_host_name() -5. For something a little more advanced, enter the following see: +#. For something a little more advanced, enter the following and see: https://python-gtk-3-tutorial.readthedocs.io/en/latest/introduction.html Known Issues diff --git a/documentation/dev-manual/layers.rst b/documentation/dev-manual/layers.rst index ad22524833..2d809562d1 100644 --- a/documentation/dev-manual/layers.rst +++ b/documentation/dev-manual/layers.rst @@ -28,14 +28,14 @@ Creating Your Own Layer Follow these general steps to create your layer without using tools: -1. 
*Check Existing Layers:* Before creating a new layer, you should be +#. *Check Existing Layers:* Before creating a new layer, you should be sure someone has not already created a layer containing the Metadata you need. You can see the :oe_layerindex:`OpenEmbedded Metadata Index <>` for a list of layers from the OpenEmbedded community that can be used in the Yocto Project. You could find a layer that is identical or close to what you need. -2. *Create a Directory:* Create the directory for your layer. When you +#. *Create a Directory:* Create the directory for your layer. When you create the layer, be sure to create the directory in an area not associated with the Yocto Project :term:`Source Directory` (e.g. the cloned ``poky`` repository). @@ -58,7 +58,7 @@ Follow these general steps to create your layer without using tools: "meta-" string are appended to several variables used in the configuration. -3. *Create a Layer Configuration File:* Inside your new layer folder, +#. *Create a Layer Configuration File:* Inside your new layer folder, you need to create a ``conf/layer.conf`` file. It is easiest to take an existing layer configuration file and copy that to your layer's ``conf`` directory and then modify the file as needed. @@ -128,7 +128,7 @@ Follow these general steps to create your layer without using tools: variable is a good way to indicate if your particular layer is current. -4. *Add Content:* Depending on the type of layer, add the content. If +#. *Add Content:* Depending on the type of layer, add the content. If the layer adds support for a machine, add the machine configuration in a ``conf/machine/`` file within the layer. If the layer adds distro policy, add the distro configuration in a ``conf/distro/`` @@ -141,7 +141,7 @@ Follow these general steps to create your layer without using tools: Yocto Project, see the ":ref:`bsp-guide/bsp:example filesystem layout`" section in the Yocto Project Board Support Package (BSP) Developer's Guide. -5. 
*Optionally Test for Compatibility:* If you want permission to use +#. *Optionally Test for Compatibility:* If you want permission to use the Yocto Project Compatibility logo with your layer or application that uses your layer, perform the steps to apply for compatibility. See the @@ -292,13 +292,13 @@ The Yocto Project Compatibility Program consists of a layer application process that requests permission to use the Yocto Project Compatibility Logo for your layer and application. The process consists of two parts: -1. Successfully passing a script (``yocto-check-layer``) that when run +#. Successfully passing a script (``yocto-check-layer``) that when run against your layer, tests it against constraints based on experiences of how layers have worked in the real world and where pitfalls have been found. Getting a "PASS" result from the script is required for successful compatibility registration. -2. Completion of an application acceptance form, which you can find at +#. Completion of an application acceptance form, which you can find at :yocto_home:`/webform/yocto-project-compatible-registration`. To be granted permission to use the logo, you need to satisfy the @@ -870,10 +870,10 @@ checked out first), or into a completely independent location. The replication of the layers is performed by running the ``setup-layers`` script provided above: -1. Clone the bootstrap layer or some other repository to obtain +#. Clone the bootstrap layer or some other repository to obtain the json config and the setup script that can use it. -2. Run the script directly with no options:: +#. Run the script directly with no options:: alex@Zen2:/srv/work/alex/my-build$ meta-alex/setup-layers Note: not checking out source meta-alex, use --force-bootstraplayer-checkout to override. 
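The layer-creation steps covered by the ``layers.rst`` hunks above (create a directory, then a ``conf/layer.conf``) can be sketched as a short shell session. This is an illustrative sketch only: the layer name ``meta-example``, the priority ``6``, and the ``LAYERSERIES_COMPAT`` value are assumptions, not taken from the patch.

```shell
# Sketch of the "Create a Directory" and "Create a Layer Configuration
# File" steps. "meta-example" and the compat series are assumptions.
mkdir -p meta-example/conf
cat > meta-example/conf/layer.conf <<'EOF'
# We have a conf directory, add it to BBPATH
BBPATH .= ":${LAYERDIR}"
# We have recipes-* directories, add them to BBFILES
BBFILES += "${LAYERDIR}/recipes-*/*/*.bb \
            ${LAYERDIR}/recipes-*/*/*.bbappend"
BBFILE_COLLECTIONS += "meta-example"
BBFILE_PATTERN_meta-example = "^${LAYERDIR}/"
BBFILE_PRIORITY_meta-example = "6"
LAYERSERIES_COMPAT_meta-example = "kirkstone"
EOF
```

From an initialized build directory, ``bitbake-layers add-layer meta-example`` would then register the new layer in ``conf/bblayers.conf``.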
diff --git a/documentation/dev-manual/packages.rst b/documentation/dev-manual/packages.rst index afd8bfc945..2decdcb253 100644 --- a/documentation/dev-manual/packages.rst +++ b/documentation/dev-manual/packages.rst @@ -554,10 +554,10 @@ to use. In your configuration, you use the :term:`PACKAGE_CLASSES` variable to specify the format: -1. Open the ``local.conf`` file inside your :term:`Build Directory` (e.g. +#. Open the ``local.conf`` file inside your :term:`Build Directory` (e.g. ``poky/build/conf/local.conf``). -2. Select the desired package format as follows:: +#. Select the desired package format as follows:: PACKAGE_CLASSES ?= "package_packageformat" diff --git a/documentation/dev-manual/qemu.rst b/documentation/dev-manual/qemu.rst index 084e67580d..d431ea4b99 100644 --- a/documentation/dev-manual/qemu.rst +++ b/documentation/dev-manual/qemu.rst @@ -44,13 +44,13 @@ To use QEMU, you need to have QEMU installed and initialized as well as have the proper artifacts (i.e. image files and root filesystems) available. Follow these general steps to run QEMU: -1. *Install QEMU:* QEMU is made available with the Yocto Project a +#. *Install QEMU:* QEMU is made available with the Yocto Project a number of ways. One method is to install a Software Development Kit (SDK). See ":ref:`sdk-manual/intro:the qemu emulator`" section in the Yocto Project Application Development and the Extensible Software Development Kit (eSDK) manual for information on how to install QEMU. -2. *Setting Up the Environment:* How you set up the QEMU environment +#. *Setting Up the Environment:* How you set up the QEMU environment depends on how you installed QEMU: - If you cloned the ``poky`` repository or you downloaded and @@ -66,7 +66,7 @@ available. Follow these general steps to run QEMU: . poky_sdk/environment-setup-core2-64-poky-linux -3. *Ensure the Artifacts are in Place:* You need to be sure you have a +#. 
*Ensure the Artifacts are in Place:* You need to be sure you have a pre-built kernel that will boot in QEMU. You also need the target root filesystem for your target machine's architecture: @@ -84,7 +84,7 @@ available. Follow these general steps to run QEMU: Extensible Software Development Kit (eSDK) manual for information on how to extract a root filesystem. -4. *Run QEMU:* The basic ``runqemu`` command syntax is as follows:: +#. *Run QEMU:* The basic ``runqemu`` command syntax is as follows:: $ runqemu [option ] [...] @@ -184,7 +184,7 @@ the system does not need root privileges to run. It uses a user space NFS server to avoid that. Follow these steps to set up for running QEMU using an NFS server. -1. *Extract a Root Filesystem:* Once you are able to run QEMU in your +#. *Extract a Root Filesystem:* Once you are able to run QEMU in your environment, you can use the ``runqemu-extract-sdk`` script, which is located in the ``scripts`` directory along with the ``runqemu`` script. @@ -198,7 +198,7 @@ using an NFS server. runqemu-extract-sdk ./tmp/deploy/images/qemux86-64/core-image-sato-qemux86-64.tar.bz2 test-nfs -2. *Start QEMU:* Once you have extracted the file system, you can run +#. *Start QEMU:* Once you have extracted the file system, you can run ``runqemu`` normally with the additional location of the file system. You can then also make changes to the files within ``./test-nfs`` and see those changes appear in the image in real time. Here is an diff --git a/documentation/dev-manual/quilt.rst b/documentation/dev-manual/quilt.rst index 1dd9ff02d4..24343e2fac 100644 --- a/documentation/dev-manual/quilt.rst +++ b/documentation/dev-manual/quilt.rst @@ -20,32 +20,32 @@ form of a patch all using Quilt. Follow these general steps: -1. *Find the Source Code:* Temporary source code used by the +#. *Find the Source Code:* Temporary source code used by the OpenEmbedded build system is kept in the :term:`Build Directory`. 
See the ":ref:`dev-manual/temporary-source-code:finding temporary source code`" section to learn how to locate the directory that has the temporary source code for a particular package. -2. *Change Your Working Directory:* You need to be in the directory that +#. *Change Your Working Directory:* You need to be in the directory that has the temporary source code. That directory is defined by the :term:`S` variable. -3. *Create a New Patch:* Before modifying source code, you need to +#. *Create a New Patch:* Before modifying source code, you need to create a new patch. To create a new patch file, use ``quilt new`` as below:: $ quilt new my_changes.patch -4. *Notify Quilt and Add Files:* After creating the patch, you need to +#. *Notify Quilt and Add Files:* After creating the patch, you need to notify Quilt about the files you plan to edit. You notify Quilt by adding the files to the patch you just created:: $ quilt add file1.c file2.c file3.c -5. *Edit the Files:* Make your changes in the source code to the files +#. *Edit the Files:* Make your changes in the source code to the files you added to the patch. -6. *Test Your Changes:* Once you have modified the source code, the +#. *Test Your Changes:* Once you have modified the source code, the easiest way to test your changes is by calling the :ref:`ref-tasks-compile` task as shown in the following example:: @@ -65,7 +65,7 @@ Follow these general steps: the ":ref:`dev-manual/disk-space:conserving disk space during builds`" section. -7. *Generate the Patch:* Once your changes work as expected, you need to +#. *Generate the Patch:* Once your changes work as expected, you need to use Quilt to generate the final patch that contains all your modifications:: @@ -78,7 +78,7 @@ Follow these general steps: You can find the resulting patch file in the ``patches/`` subdirectory of the source (:term:`S`) directory. -8. *Copy the Patch File:* For simplicity, copy the patch file into a +#. 
*Copy the Patch File:* For simplicity, copy the patch file into a directory named ``files``, which you can create in the same directory that holds the recipe (``.bb``) file or the append (``.bbappend``) file. Placing the patch here guarantees that the OpenEmbedded build diff --git a/documentation/dev-manual/runtime-testing.rst b/documentation/dev-manual/runtime-testing.rst index 88b3ed541b..36ccf746ee 100644 --- a/documentation/dev-manual/runtime-testing.rst +++ b/documentation/dev-manual/runtime-testing.rst @@ -84,25 +84,25 @@ In order to run tests, you need to do the following: Once you start running the tests, the following happens: -1. A copy of the root filesystem is written to ``${WORKDIR}/testimage``. +#. A copy of the root filesystem is written to ``${WORKDIR}/testimage``. -2. The image is booted under QEMU using the standard ``runqemu`` script. +#. The image is booted under QEMU using the standard ``runqemu`` script. -3. A default timeout of 500 seconds occurs to allow for the boot process +#. A default timeout of 500 seconds occurs to allow for the boot process to reach the login prompt. You can change the timeout period by setting :term:`TEST_QEMUBOOT_TIMEOUT` in the ``local.conf`` file. -4. Once the boot process is reached and the login prompt appears, the +#. Once the boot process is reached and the login prompt appears, the tests run. The full boot log is written to ``${WORKDIR}/testimage/qemu_boot_log``. -5. Each test module loads in the order found in :term:`TEST_SUITES`. You can +#. Each test module loads in the order found in :term:`TEST_SUITES`. You can find the full output of the commands run over SSH in ``${WORKDIR}/testimgage/ssh_target_log``. -6. If no failures occur, the task running the tests ends successfully. +#. If no failures occur, the task running the tests ends successfully. You can find the output from the ``unittest`` in the task log at ``${WORKDIR}/temp/log.do_testimage``. 
@@ -117,13 +117,13 @@ For automated deployment, a "controller image" is installed onto the hardware once as part of setup. Then, each time tests are to be run, the following occurs: -1. The controller image is booted into and used to write the image to be +#. The controller image is booted into and used to write the image to be tested to a second partition. -2. The device is then rebooted using an external script that you need to +#. The device is then rebooted using an external script that you need to provide. -3. The device boots into the image to be tested. +#. The device boots into the image to be tested. When running tests (independent of whether the image has been deployed automatically or not), the device is expected to be connected to a @@ -188,11 +188,11 @@ not need any information in this section. You can skip down to the If you did set :term:`TEST_TARGET` to "SystemdbootTarget", you also need to perform a one-time setup of your controller image by doing the following: -1. *Set EFI_PROVIDER:* Be sure that :term:`EFI_PROVIDER` is as follows:: +#. *Set EFI_PROVIDER:* Be sure that :term:`EFI_PROVIDER` is as follows:: EFI_PROVIDER = "systemd-boot" -2. *Build the controller image:* Build the ``core-image-testmaster`` image. +#. *Build the controller image:* Build the ``core-image-testmaster`` image. The ``core-image-testmaster`` recipe is provided as an example for a "controller" image and you can customize the image recipe as you would any other recipe. @@ -219,13 +219,13 @@ perform a one-time setup of your controller image by doing the following: - Another partition labeled "testrootfs" where test images get deployed. -3. *Install image:* Install the image that you just built on the target +#. *Install image:* Install the image that you just built on the target system. The final thing you need to do when setting :term:`TEST_TARGET` to "SystemdbootTarget" is to set up the test image: -1. *Set up your local.conf file:* Make sure you have the following +#. 
*Set up your local.conf file:* Make sure you have the following statements in your ``local.conf`` file:: IMAGE_FSTYPES += "tar.gz" @@ -233,7 +233,7 @@ The final thing you need to do when setting :term:`TEST_TARGET` to TEST_TARGET = "SystemdbootTarget" TEST_TARGET_IP = "192.168.2.3" -2. *Build your test image:* Use BitBake to build the image:: +#. *Build your test image:* Use BitBake to build the image:: $ bitbake core-image-sato diff --git a/documentation/dev-manual/start.rst b/documentation/dev-manual/start.rst index b02e961608..498734a04d 100644 --- a/documentation/dev-manual/start.rst +++ b/documentation/dev-manual/start.rst @@ -29,7 +29,7 @@ however, keep in mind, the procedure here is simply a starting point. You can build off these steps and customize the procedure to fit any particular working environment and set of practices. -1. *Determine Who is Going to be Developing:* You first need to +#. *Determine Who is Going to be Developing:* You first need to understand who is going to be doing anything related to the Yocto Project and determine their roles. Making this determination is essential to completing subsequent steps, which are to get your @@ -52,7 +52,7 @@ particular working environment and set of practices. automated tests that are used to ensure all application and core system development meets desired quality standards. -2. *Gather the Hardware:* Based on the size and make-up of the team, +#. *Gather the Hardware:* Based on the size and make-up of the team, get the hardware together. Ideally, any development, build, or test engineer uses a system that runs a supported Linux distribution. These systems, in general, should be high performance (e.g. dual, @@ -66,13 +66,13 @@ particular working environment and set of practices. building Yocto Project development containers to be run under Docker, which is described later. -3. *Understand the Hardware Topology of the Environment:* Once you +#. 
*Understand the Hardware Topology of the Environment:* Once you understand the hardware involved and the make-up of the team, you can understand the hardware topology of the development environment. You can get a visual idea of the machines and their roles across the development environment. -4. *Use Git as Your Source Control Manager (SCM):* Keeping your +#. *Use Git as Your Source Control Manager (SCM):* Keeping your :term:`Metadata` (i.e. recipes, configuration files, classes, and so forth) and any software you are developing under the control of an SCM system that is compatible @@ -109,7 +109,7 @@ particular working environment and set of practices. Documentation on how to create interfaces and frontends for Git. -5. *Set up the Application Development Machines:* As mentioned earlier, +#. *Set up the Application Development Machines:* As mentioned earlier, application developers are creating applications on top of existing software stacks. Following are some best practices for setting up machines used for application development: @@ -128,7 +128,7 @@ particular working environment and set of practices. - Use multiple toolchains installed locally into different locations to allow development across versions. -6. *Set up the Core Development Machines:* As mentioned earlier, core +#. *Set up the Core Development Machines:* As mentioned earlier, core developers work on the contents of the operating system itself. Following are some best practices for setting up machines used for developing images: @@ -145,7 +145,7 @@ particular working environment and set of practices. - Share layers amongst the developers of a particular project and contain the policy configuration that defines the project. -7. *Set up an Autobuilder:* Autobuilders are often the core of the +#. *Set up an Autobuilder:* Autobuilders are often the core of the development environment. It is here that changes from individual developers are brought together and centrally tested. 
Based on this automated build and test environment, subsequent decisions about @@ -183,12 +183,12 @@ particular working environment and set of practices. - Allows scheduling of builds so that resources can be used efficiently. -8. *Set up Test Machines:* Use a small number of shared, high +#. *Set up Test Machines:* Use a small number of shared, high performance systems for testing purposes. Developers can use these systems for wider, more extensive testing while they continue to develop locally using their primary development system. -9. *Document Policies and Change Flow:* The Yocto Project uses a +#. *Document Policies and Change Flow:* The Yocto Project uses a hierarchical structure and a pull model. There are scripts to create and send pull requests (i.e. ``create-pull-request`` and ``send-pull-request``). This model is in line with other open source @@ -213,7 +213,7 @@ particular working environment and set of practices. possible. Chances are if you have discovered the need for changes, someone else in the community needs them also. -10. *Development Environment Summary:* Aside from the previous steps, +#. *Development Environment Summary:* Aside from the previous steps, here are best practices within the Yocto Project development environment: @@ -296,7 +296,7 @@ Setting Up a Native Linux Host Follow these steps to prepare a native Linux machine as your Yocto Project Build Host: -1. *Use a Supported Linux Distribution:* You should have a reasonably +#. *Use a Supported Linux Distribution:* You should have a reasonably current Linux-based host system. You will have the best results with a recent release of Fedora, openSUSE, Debian, Ubuntu, RHEL or CentOS as these releases are frequently tested against the Yocto Project and @@ -306,10 +306,10 @@ Project Build Host: section in the Yocto Project Reference Manual and the wiki page at :yocto_wiki:`Distribution Support `. -2. *Have Enough Free Memory:* Your system should have at least 50 Gbytes +#. 
*Have Enough Free Memory:* Your system should have at least 50 Gbytes of free disk space for building images. -3. *Meet Minimal Version Requirements:* The OpenEmbedded build system +#. *Meet Minimal Version Requirements:* The OpenEmbedded build system should be able to run on any modern distribution that has the following versions for Git, tar, Python, gcc and make. @@ -329,7 +329,7 @@ Project Build Host: ":ref:`ref-manual/system-requirements:required git, tar, python, make and gcc versions`" section in the Yocto Project Reference Manual for information. -4. *Install Development Host Packages:* Required development host +#. *Install Development Host Packages:* Required development host packages vary depending on your build host and what you want to do with the Yocto Project. Collectively, the number of required packages is large if you want to be able to cover all cases. @@ -361,7 +361,7 @@ Yocto Project on a Windows, Mac, or Linux machine. Follow these general steps to prepare a Windows, Mac, or Linux machine as your Yocto Project build host: -1. *Determine What Your Build Host Needs:* +#. *Determine What Your Build Host Needs:* `Docker `__ is a software container platform that you need to install on the build host. Depending on your build host, you might have to install different @@ -370,20 +370,20 @@ as your Yocto Project build host: Platforms `__" your build host needs to run containers. -2. *Choose What To Install:* Depending on whether or not your build host +#. *Choose What To Install:* Depending on whether or not your build host meets system requirements, you need to install "Docker CE Stable" or the "Docker Toolbox". Most situations call for Docker CE. However, if you have a build host that does not meet requirements (e.g. Pre-Windows 10 or Windows 10 "Home" version), you must install Docker Toolbox instead. -3. *Go to the Install Site for Your Platform:* Click the link for the +#. 
*Go to the Install Site for Your Platform:* Click the link for the Docker edition associated with your build host's native software. For example, if your build host is running Microsoft Windows Version 10 and you want the Docker CE Stable edition, click that link under "Supported Platforms". -4. *Install the Software:* Once you have understood all the +#. *Install the Software:* Once you have understood all the pre-requisites, you can download and install the appropriate software. Follow the instructions for your specific machine and the type of the software you need to install: @@ -412,15 +412,15 @@ as your Yocto Project build host: Ubuntu `__ for Linux build hosts running the Ubuntu distribution. -5. *Optionally Orient Yourself With Docker:* If you are unfamiliar with +#. *Optionally Orient Yourself With Docker:* If you are unfamiliar with Docker and the container concept, you can learn more here - https://docs.docker.com/get-started/. -6. *Launch Docker or Docker Toolbox:* You should be able to launch +#. *Launch Docker or Docker Toolbox:* You should be able to launch Docker or the Docker Toolbox and have a terminal shell on your development host. -7. *Set Up the Containers to Use the Yocto Project:* Go to +#. *Set Up the Containers to Use the Yocto Project:* Go to https://github.com/crops/docker-win-mac-docs/wiki and follow the directions for your particular build host (i.e. Linux, Mac, or Windows). @@ -453,7 +453,7 @@ in which you can develop using the Yocto Project. Follow these general steps to prepare a Windows machine using WSL 2 as your Yocto Project build host: -1. *Make sure your Windows machine is capable of running WSL 2:* +#. *Make sure your Windows machine is capable of running WSL 2:* While all Windows 11 and Windows Server 2022 builds support WSL 2, the first versions of Windows 10 and Windows Server 2019 didn't. @@ -469,7 +469,7 @@ your Yocto Project build host: Microsoft Windows [Version 10.0.19041.153] -2. 
*Install the Linux distribution of your choice inside WSL 2:*
+#. *Install the Linux distribution of your choice inside WSL 2:*
    Once you know your version of Windows supports WSL 2, you can
    install the distribution of your choice from the Microsoft Store.
    Open the Microsoft Store and search for Linux. While there are
@@ -479,7 +479,7 @@ your Yocto Project build host:
    making your selection, simply click "Get" to download and install
    the distribution.
 
-3. *Check which Linux distribution WSL 2 is using:* Open a Windows
+#. *Check which Linux distribution WSL 2 is using:* Open a Windows
    PowerShell and run::
 
       C:\WINDOWS\system32> wsl -l -v
 
@@ -489,13 +489,13 @@ your Yocto Project build host:
    Note that WSL 2 supports running as many different Linux
    distributions as you want to install.
 
-4. *Optionally Get Familiar with WSL:* You can learn more on
+#. *Optionally Get Familiar with WSL:* You can learn more on
    https://docs.microsoft.com/en-us/windows/wsl/wsl2-about.
 
-5. *Launch your WSL Distibution:* From the Windows start menu simply
+#. *Launch your WSL Distribution:* From the Windows start menu simply
    launch your WSL distribution just like any other application.
 
-6. *Optimize your WSL 2 storage often:* Due to the way storage is
+#. *Optimize your WSL 2 storage often:* Due to the way storage is
    handled on WSL 2, the storage space used by the underlying Linux
    distribution is not reflected immediately, and since BitBake heavily
    uses storage, after several builds, you may be unaware you are
@@ -597,14 +597,14 @@ repository at :yocto_git:`/poky`.
 
 Use the following procedure to locate the latest upstream copy of the
 ``poky`` Git repository:
 
-1. *Access Repositories:* Open a browser and go to
+#. *Access Repositories:* Open a browser and go to
    :yocto_git:`/` to access the GUI-based interface into the Yocto
    Project source repositories.
 
-2. 
*Select the Repository:* Click on the repository in which you are interested (e.g. ``poky``). -3. *Find the URL Used to Clone the Repository:* At the bottom of the +#. *Find the URL Used to Clone the Repository:* At the bottom of the page, note the URL used to clone that repository (e.g. :yocto_git:`/poky`). @@ -630,7 +630,7 @@ of a given component. Follow these steps to locate and download a particular tarball: -1. *Access the Index of Releases:* Open a browser and go to +#. *Access the Index of Releases:* Open a browser and go to :yocto_dl:`Index of Releases `. The list represents released components (e.g. ``bitbake``, ``sato``, and so on). @@ -642,14 +642,14 @@ Follow these steps to locate and download a particular tarball: historically used for very early releases and exists now only for retroactive completeness. -2. *Select a Component:* Click on any released component in which you +#. *Select a Component:* Click on any released component in which you are interested (e.g. ``yocto``). -3. *Find the Tarball:* Drill down to find the associated tarball. For +#. *Find the Tarball:* Drill down to find the associated tarball. For example, click on ``yocto-&DISTRO;`` to view files associated with the Yocto Project &DISTRO; release. -4. *Download the Tarball:* Click the tarball to download and save a +#. *Download the Tarball:* Click the tarball to download and save a snapshot of the given component. Using the Downloads Page @@ -661,13 +661,13 @@ release. Rather than Git repositories, these files represent snapshot tarballs similar to the tarballs located in the Index of Releases described in the ":ref:`dev-manual/start:accessing index of releases`" section. -1. *Go to the Yocto Project Website:* Open The +#. *Go to the Yocto Project Website:* Open The :yocto_home:`Yocto Project Website <>` in your browser. -2. *Get to the Downloads Area:* Select the "DOWNLOADS" item from the +#. 
*Get to the Downloads Area:* Select the "DOWNLOADS" item from the pull-down "SOFTWARE" tab menu near the top of the page. -3. *Select a Yocto Project Release:* Use the menu next to "RELEASE" to +#. *Select a Yocto Project Release:* Use the menu next to "RELEASE" to display and choose a recent or past supported Yocto Project release (e.g. &DISTRO_NAME_NO_CAP;, &DISTRO_NAME_NO_CAP_MINUS_ONE;, and so forth). @@ -679,7 +679,7 @@ described in the ":ref:`dev-manual/start:accessing index of releases`" section. You can use the "RELEASE ARCHIVE" link to reveal a menu of all Yocto Project releases. -4. *Download Tools or Board Support Packages (BSPs):* From the +#. *Download Tools or Board Support Packages (BSPs):* From the "DOWNLOADS" page, you can download tools or BSPs as well. Just scroll down the page and look for what you need. @@ -707,10 +707,10 @@ Cloning the ``poky`` Repository Follow these steps to create a local version of the upstream :term:`Poky` Git repository. -1. *Set Your Directory:* Change your working directory to where you want +#. *Set Your Directory:* Change your working directory to where you want to create your local copy of ``poky``. -2. *Clone the Repository:* The following example command clones the +#. *Clone the Repository:* The following example command clones the ``poky`` repository and uses the default name "poky" for your local repository:: @@ -766,13 +766,13 @@ and then specifically check out that development branch. Further development on top of the branch that occurs after check it out can occur. -1. *Switch to the Poky Directory:* If you have a local poky Git +#. *Switch to the Poky Directory:* If you have a local poky Git repository, switch to that directory. If you do not have the local copy of poky, see the ":ref:`dev-manual/start:cloning the \`\`poky\`\` repository`" section. -2. *Determine Existing Branch Names:* +#. 
*Determine Existing Branch Names:* :: $ git branch -a @@ -793,7 +793,7 @@ and then specifically check out that development branch. remotes/origin/zeus-next ... and so on ... -3. *Check out the Branch:* Check out the development branch in which you +#. *Check out the Branch:* Check out the development branch in which you want to work. For example, to access the files for the Yocto Project &DISTRO; Release (&DISTRO_NAME;), use the following command:: @@ -827,19 +827,19 @@ similar to checking out by branch name except you use tag names. Checking out a branch based on a tag gives you a stable set of files not affected by development on the branch above the tag. -1. *Switch to the Poky Directory:* If you have a local poky Git +#. *Switch to the Poky Directory:* If you have a local poky Git repository, switch to that directory. If you do not have the local copy of poky, see the ":ref:`dev-manual/start:cloning the \`\`poky\`\` repository`" section. -2. *Fetch the Tag Names:* To checkout the branch based on a tag name, +#. *Fetch the Tag Names:* To checkout the branch based on a tag name, you need to fetch the upstream tags into your local repository:: $ git fetch --tags $ -3. *List the Tag Names:* You can list the tag names now:: +#. *List the Tag Names:* You can list the tag names now:: $ git tag 1.1_M1.final @@ -861,7 +861,7 @@ similar to checking out by branch name except you use tag names. yocto_1.5_M5.rc8 -4. *Check out the Branch:* +#. *Check out the Branch:* :: $ git checkout tags/yocto-&DISTRO; -b my_yocto_&DISTRO; diff --git a/documentation/dev-manual/upgrading-recipes.rst b/documentation/dev-manual/upgrading-recipes.rst index c41e3e1a5d..dd220cc6c8 100644 --- a/documentation/dev-manual/upgrading-recipes.rst +++ b/documentation/dev-manual/upgrading-recipes.rst @@ -51,12 +51,12 @@ commit messages in the layer's tree for the changes made to recipes. The following steps describe how to set up the AUH utility: -1. 
*Be Sure the Development Host is Set Up:* You need to be sure that +#. *Be Sure the Development Host is Set Up:* You need to be sure that your development host is set up to use the Yocto Project. For information on how to set up your host, see the ":ref:`dev-manual/start:Preparing the Build Host`" section. -2. *Make Sure Git is Configured:* The AUH utility requires Git to be +#. *Make Sure Git is Configured:* The AUH utility requires Git to be configured because AUH uses Git to save upgrades. Thus, you must have Git user and email configured. The following command shows your configurations:: @@ -69,7 +69,7 @@ The following steps describe how to set up the AUH utility: $ git config --global user.name some_name $ git config --global user.email username@domain.com -3. *Clone the AUH Repository:* To use AUH, you must clone the repository +#. *Clone the AUH Repository:* To use AUH, you must clone the repository onto your development host. The following command uses Git to create a local copy of the repository on your system:: @@ -84,7 +84,7 @@ The following steps describe how to set up the AUH utility: AUH is not part of the :term:`OpenEmbedded-Core (OE-Core)` or :term:`Poky` repositories. -4. *Create a Dedicated Build Directory:* Run the :ref:`structure-core-script` +#. *Create a Dedicated Build Directory:* Run the :ref:`structure-core-script` script to create a fresh :term:`Build Directory` that you use exclusively for running the AUH utility:: @@ -95,7 +95,7 @@ The following steps describe how to set up the AUH utility: recommended as existing settings could cause AUH to fail or behave undesirably. -5. *Make Configurations in Your Local Configuration File:* Several +#. *Make Configurations in Your Local Configuration File:* Several settings are needed in the ``local.conf`` file in the build directory you just created for AUH. 
Make these following configurations: @@ -128,13 +128,13 @@ The following steps describe how to set up the AUH utility: DISTRO_FEATURES:append = " ptest" -6. *Optionally Start a vncserver:* If you are running in a server +#. *Optionally Start a vncserver:* If you are running in a server without an X11 session, you need to start a vncserver:: $ vncserver :1 $ export DISPLAY=:1 -7. *Create and Edit an AUH Configuration File:* You need to have the +#. *Create and Edit an AUH Configuration File:* You need to have the ``upgrade-helper/upgrade-helper.conf`` configuration file in your :term:`Build Directory`. You can find a sample configuration file in the :yocto_git:`AUH source repository `. @@ -346,17 +346,17 @@ you can manually edit the recipe files to upgrade the versions. To manually upgrade recipe versions, follow these general steps: -1. *Change the Version:* Rename the recipe such that the version (i.e. +#. *Change the Version:* Rename the recipe such that the version (i.e. the :term:`PV` part of the recipe name) changes appropriately. If the version is not part of the recipe name, change the value as it is set for :term:`PV` within the recipe itself. -2. *Update* :term:`SRCREV` *if Needed*: If the source code your recipe builds +#. *Update* :term:`SRCREV` *if Needed*: If the source code your recipe builds is fetched from Git or some other version control system, update :term:`SRCREV` to point to the commit hash that matches the new version. -3. *Build the Software:* Try to build the recipe using BitBake. Typical +#. *Build the Software:* Try to build the recipe using BitBake. Typical build failures include the following: - License statements were updated for the new version. For this @@ -377,22 +377,22 @@ To manually upgrade recipe versions, follow these general steps: issues. If a patch is necessary and failing, you need to rebase it into the new version. -4. *Optionally Attempt to Build for Several Architectures:* Once you +#. 
*Optionally Attempt to Build for Several Architectures:* Once you successfully build the new software for a given architecture, you could test the build for other architectures by changing the :term:`MACHINE` variable and rebuilding the software. This optional step is especially important if the recipe is to be released publicly. -5. *Check the Upstream Change Log or Release Notes:* Checking both these +#. *Check the Upstream Change Log or Release Notes:* Checking both these reveals if there are new features that could break backwards-compatibility. If so, you need to take steps to mitigate or eliminate that situation. -6. *Optionally Create a Bootable Image and Test:* If you want, you can +#. *Optionally Create a Bootable Image and Test:* If you want, you can test the new software by booting it onto actual hardware. -7. *Create a Commit with the Change in the Layer Repository:* After all +#. *Create a Commit with the Change in the Layer Repository:* After all builds work and any testing is successful, you can create commits for any changes in the layer holding your upgraded recipe. diff --git a/documentation/dev-manual/wayland.rst b/documentation/dev-manual/wayland.rst index bcbf40acc5..097be9cbde 100644 --- a/documentation/dev-manual/wayland.rst +++ b/documentation/dev-manual/wayland.rst @@ -78,13 +78,13 @@ Alternatively, you can run Weston through the command-line interpretor (CLI), which is better suited for development work. To run Weston under the CLI, you need to do the following after your image is built: -1. Run these commands to export ``XDG_RUNTIME_DIR``:: +#. Run these commands to export ``XDG_RUNTIME_DIR``:: mkdir -p /tmp/$USER-weston chmod 0700 /tmp/$USER-weston export XDG_RUNTIME_DIR=/tmp/$USER-weston -2. Launch Weston in the shell:: +#. 
*Launch Weston in the shell::
 
       weston

diff --git a/documentation/dev-manual/wic.rst b/documentation/dev-manual/wic.rst
index 7ed887b270..d698cec77c 100644
--- a/documentation/dev-manual/wic.rst
+++ b/documentation/dev-manual/wic.rst
@@ -641,7 +641,7 @@ modify the kernel.
 The following example examines the contents of the Wic image, deletes
 the existing kernel, and then inserts a new kernel:
 
-1. *List the Partitions:* Use the ``wic ls`` command to list all the
+#. *List the Partitions:* Use the ``wic ls`` command to list all the
    partitions in the Wic image::
 
       $ wic ls tmp/deploy/images/qemux86/core-image-minimal-qemux86.wic
 
@@ -652,7 +652,7 @@ the existing kernel, and then inserts a new kernel:
 The previous output shows two partitions in the
 ``core-image-minimal-qemux86.wic`` image.
 
-2. *Examine a Particular Partition:* Use the ``wic ls`` command again
+#. *Examine a Particular Partition:* Use the ``wic ls`` command again
    but in a different form to examine a particular partition.
 
    .. note::
 
@@ -700,12 +700,12 @@ the existing kernel, and then inserts a new kernel:
       Add mtools_skip_check=1 to your .mtoolsrc file to skip this test
 
-3. *Remove the Old Kernel:* Use the ``wic rm`` command to remove the
+#. *Remove the Old Kernel:* Use the ``wic rm`` command to remove the
    ``vmlinuz`` file (kernel)::
 
       $ wic rm tmp/deploy/images/qemux86/core-image-minimal-qemux86.wic:1/vmlinuz
 
-4. *Add In the New Kernel:* Use the ``wic cp`` command to add the
+#. *Add In the New Kernel:* Use the ``wic cp`` command to add the
    updated kernel to the Wic image. Depending on how you built your
    kernel, it could be in different places. If you used ``devtool`` and
    an SDK to build your kernel, it resides in the ``tmp/work`` directory
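The change this patch makes is purely mechanical: every explicit reST enumerator ("1.", "2.", "3.", …) at the start of a list item becomes the auto-enumerator "#.", so items renumber themselves when steps are inserted or removed. As a rough sketch of that substitution (an assumption for illustration only; the actual patch may well have been edited by hand), a regex rewrite like the following would produce the same result:

```python
import re

# Sketch: replace explicit reST enumerators ("1.", "2.", ...) at the
# start of a list item with the auto-enumerator "#.", as this patch does.
# Deliberately naive: it rewrites any line beginning with "<digits>. ",
# so literal blocks would need to be excluded before real use.
def renumber(text: str) -> str:
    return re.sub(r"(?m)^(\s*)\d+\.(\s)", r"\1#.\2", text)

sample = "1. *Get Your Image:* build it\n2. *Flash the Device:* write it\n"
print(renumber(sample))
# #. *Get Your Image:* build it
# #. *Flash the Device:* write it
```

With "#.", Sphinx numbers the rendered list automatically, which is why the patch can drop the hand-maintained numbers without changing the output.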