Commit message (Author, Age; Files changed, Lines -/+)
* recipes/golang: improve reproducibility (Changqing Li, 4 days ago; 8 files, -17/+20)

  As described in [1], cgo embeds cgo_ldflags in its intermediary output, so the
  content ID is influenced by cgo_ldflags. '--sysroot=xxx' includes the build path,
  which makes the binary non-reproducible. These recipes build successfully without
  --sysroot, so remove it.

  [1] https://git.openembedded.org/openembedded-core/commit/?id=1797741aad02b8bf429fac4b81e30cdda64b5448

  Signed-off-by: Changqing Li <changqing.li@windriver.com>
  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* nerdctl: add -buildmode=pie to avoid textrel QA error (Chen Qi, 4 days ago; 1 file, -1/+1)

  On qemuarm, building nerdctl fails with a textrel QA error. Add '-buildmode=pie'
  to fix this issue.

  Signed-off-by: Chen Qi <Qi.Chen@windriver.com>
  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
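  A minimal sketch of the kind of one-line change described above; the exact
  variable depends on how the recipe invokes 'go build' (GOBUILDFLAGS, the
  go.bbclass convention, is assumed here):

      GOBUILDFLAGS:append = " -buildmode=pie"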
* vruntime: add BBMASK to reduce multiconfig parse time (Bruce Ashfield, 4 days ago; 4 files, -0/+244)

  The vruntime multiconfigs (vruntime-aarch64, vruntime-x86-64) trigger a full
  BitBake parse of all layers, but only need ~318 recipes to build the vdkr/vpdmn
  container runtime stacks. BBMASK set in the vruntime distro conf only affects
  parsing for those multiconfigs; the main build is unaffected.

  Add three .inc files, each independently disableable, that mask unused recipes:

  - vruntime-bbmask.inc: meta-virtualization layer (~88 masks covering
    virtualization platforms, unused container orchestration/tooling, and
    individual go libraries)
  - vruntime-bbmask-oe-core.inc: oe-core graphics subdirs, multimedia, sato,
    and rt categories
  - vruntime-bbmask-meta-oe.inc: meta-oe, meta-networking categories, plus
    entire meta-python, meta-filesystems, and meta-webserver layers

  Mask patterns were generated from bitbake -g dependency graph analysis of both
  aarch64 and x86-64 targets, with all 318 needed PNs (including -native variants)
  cross-checked against the patterns. Orphaned bbappend files in other layers are
  also masked to prevent parse errors.

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
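  An illustrative sketch of how a distro conf can pull in the mask includes
  described above (the .inc names match the commit; the BBMASK patterns shown
  are hypothetical examples, not the actual ones):

      # conf/distro/vruntime-*.conf
      require conf/distro/include/vruntime-bbmask.inc
      require conf/distro/include/vruntime-bbmask-oe-core.inc
      require conf/distro/include/vruntime-bbmask-meta-oe.inc

      # conf/distro/include/vruntime-bbmask-oe-core.inc
      BBMASK += "meta/recipes-graphics/ meta/recipes-multimedia/ meta/recipes-sato/ meta/recipes-rt/"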
* vcontainer: fix yocto-check-layer mcdepends parse error (Bruce Ashfield, 4 days ago; 4 files, -21/+37)

  Fix yocto-check-layer failure:

      ERROR: Multiconfig dependency mc::vruntime-x86-64:vpdmn-initramfs-create:do_deploy depends on nonexistent multiconfig configuration named configuration vruntime-x86-64

  Several recipes and classes declared static mcdepends referencing the
  vruntime-aarch64 and vruntime-x86-64 multiconfigs. When parsed without
  BBMULTICONFIG set (e.g. yocto-check-layer), BitBake validates these and fails
  because the referenced multiconfigs don't exist.

  Move mcdepends into anonymous python functions and only set them when the
  target multiconfig exists in BBMULTICONFIG, following the pattern established
  in meta/classes-recipe/kernel-fit-image.bbclass.

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
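  A minimal sketch of that conditional-mcdepends pattern (the task and multiconfig
  names are taken from the error above; the actual recipes and tasks differ):

      python () {
          if 'vruntime-x86-64' in (d.getVar('BBMULTICONFIG') or '').split():
              # only declare the dependency when the multiconfig actually exists
              d.appendVarFlag('do_deploy', 'mcdepends',
                              ' mc::vruntime-x86-64:vpdmn-initramfs-create:do_deploy')
      }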
* go-mod-vcs: fix do_rm_work permission failure on module cache (Bruce Ashfield, 4 days ago; 1 file, -12/+4)

  go build creates read-only files in the module cache during do_compile. The
  previous do_fix_go_mod_permissions task ran before do_compile, so it could not
  catch these files, causing do_rm_work to fail with permission errors.

  Replace the standalone task with a do_compile postfunc that fixes module cache
  permissions after compilation finishes. This covers all go-mod-vcs recipes
  regardless of how they invoke go build.

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
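  A minimal sketch of such a postfunc, assuming the module cache lives under
  ${WORKDIR}/pkg/mod (the actual path used by go-mod-vcs may differ):

      fix_go_mod_cache_permissions() {
          # go leaves module cache files read-only; make them writable so
          # do_rm_work can remove the work directory later
          if [ -d "${WORKDIR}/pkg/mod" ]; then
              chmod -R u+w "${WORKDIR}/pkg/mod"
          fi
      }
      do_compile[postfuncs] += "fix_go_mod_cache_permissions"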
* docker-registry: add native support (Bruce Ashfield, 4 days ago; 1 file, -0/+1)

  This is required for several of the scripts and capabilities providing local
  registry support.

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
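  Given the one-line diff, this is presumably the usual class extension; a
  sketch, assuming BBCLASSEXTEND is how it is done:

      BBCLASSEXTEND = "native"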
* container-registry: add tests and documentation for secure registry (Bruce Ashfield, 4 days ago; 3 files, -1/+1195)

  Add comprehensive test coverage and documentation for the secure registry
  infrastructure.

  Tests added:
  - TestRegistryAuthentication - auth modes (none, home, authfile, credsfile,
    env, creds, token) for push and import
  - TestSecureRegistryTLSOnly - TLS-only mode using running registry
  - TestSecureRegistryWithAuth - isolated TLS+auth instance on port 5001
  - TestDockerRegistryConfig - static analysis of bbclass/recipe logic
  - TestContainerCrossInstallSecure - auto IMAGE_INSTALL verification
  - TestVcontainerSecureRegistry - script pattern verification for virtio-9p CA
    transport, daemon _9p=1, shared folder reads

  README.md: Document authentication modes (none, home, authfile, credsfile, env),
  secure registry setup, PKI generation, target integration, and CI/CD examples.

  conftest.py: Add --secure-registry pytest option and skip_secure fixture for
  tests requiring openssl/htpasswd.

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* image-oci: fix process substitution for dash/busybox compatibility (Bruce Ashfield, 4 days ago; 1 file, -6/+7)

  Replace bash-specific process substitution (< <(find ...)) with POSIX-compatible
  piped find | while constructs. Replace $((...)) arithmetic with expr for broader
  shell compatibility.

  This fixes OCI image delta-copy on systems where /bin/sh is dash or busybox ash
  rather than bash.

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
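  A generic illustration of the substitution described (not the actual image-oci
  code):

      # bash-only:  while read -r f; do ...; done < <(find "$dir" -type f)
      # POSIX sh / dash / busybox ash:
      find "$dir" -type f | while read -r f; do
          cp "$f" "$dest/"          # whatever per-file action the class performs
      done
      # arithmetic:  count=$((count + 1))  ->  count=$(expr "$count" + 1)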
* vcontainer: add secure registry support with virtio-9p CA transport (Bruce Ashfield, 4 days ago; 4 files, -6/+246)

  Enable vdkr/vcontainer to pull from TLS-secured registries by transporting the
  CA certificate via virtio-9p shared folder.

  vcontainer-common.sh: Add --secure-registry, --ca-cert, --registry-user,
  --registry-password CLI options. Auto-detect bundled CA cert at registry/ca.crt
  in the tarball and enable secure mode automatically.

  vrunner.sh: Copy CA cert to the virtio-9p shared folder for both daemon and
  non-daemon modes. Fix daemon mode missing _9p=1 kernel cmdline parameter which
  prevented the init script from mounting the shared folder.

  vdkr-init.sh: Read CA cert from /mnt/share/ca.crt (virtio-9p) instead of
  base64-decoding from kernel cmdline (which caused truncation for large
  certificates). Install cert to /etc/docker/certs.d/{host}/ca.crt for Docker TLS
  verification. Support optional credential passing for authenticated registries.

  vcontainer-tarball.bb: Add script files to SRC_URI for proper file tracking and
  rebuild triggers.

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* container-registry: add target image TLS integration (Bruce Ashfield, 4 days ago; 4 files, -40/+288)

  Install CA certificates and registry configuration into target images so they
  can pull from the secure registry at runtime.

  docker-registry-config.bb: When CONTAINER_REGISTRY_SECURE=1, install the CA cert
  to /etc/docker/certs.d/{host}/ca.crt instead of adding insecure-registries to
  daemon.json. Translates localhost/127.0.0.1 to 10.0.2.2 for QEMU targets where
  the host registry is accessed via slirp networking.

  container-oci-registry-config.bb: Same secure mode support for podman/CRI-O with
  insecure=false in registries.conf.

  container-registry-ca.bb: New recipe that installs the CA certificate to Docker,
  podman/CRI-O, and system trust store paths on the target.

  container-cross-install.bbclass: Auto-add docker-registry-config or
  container-oci-registry-config to IMAGE_INSTALL when CONTAINER_REGISTRY_SECURE=1,
  based on the configured container engine.

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* container-registry: add secure registry infrastructure with TLS and auth (Bruce Ashfield, 4 days ago; 3 files, -60/+1076)

  Add opt-in secure registry mode with auto-generated TLS certificates and
  htpasswd authentication.

  New BitBake variables:
  - CONTAINER_REGISTRY_SECURE - Enable TLS (HTTPS) for local registry
  - CONTAINER_REGISTRY_AUTH - Enable htpasswd auth (requires SECURE=1)
  - CONTAINER_REGISTRY_USERNAME/PASSWORD - Credential configuration
  - CONTAINER_REGISTRY_CERT_DAYS/CA_DAYS - Certificate validity
  - CONTAINER_REGISTRY_CERT_SAN - Custom SAN entries

  The bbclass validates conflicting settings (AUTH without SECURE) and provides
  credential helper functions for skopeo push operations. PKI infrastructure
  (CA + server cert with SAN) is auto-generated at bitbake build time via
  openssl-native. The generated helper script supports both TLS-only and TLS+auth
  modes.

  The script now supports environment variable overrides for
  CONTAINER_REGISTRY_STORAGE, CONTAINER_REGISTRY_URL, and
  CONTAINER_REGISTRY_NAMESPACE, uses per-port PID files to allow multiple
  instances, and auto-generates config files when running from an overridden
  storage path.

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
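  A possible local.conf snippet using the variables listed above (values are
  illustrative only):

      CONTAINER_REGISTRY_SECURE = "1"
      CONTAINER_REGISTRY_AUTH = "1"
      CONTAINER_REGISTRY_USERNAME = "builder"
      CONTAINER_REGISTRY_PASSWORD = "changeme"
      CONTAINER_REGISTRY_CERT_SAN = "DNS:buildhost.example.com"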
* lxc: restore DEBUG_PREFIX_MAP in TARGET_LDFLAGS for LTO reproducibility (Ricardo Salveti, 4 days ago; 1 file, -0/+3)

  oe-core [1] removed DEBUG_PREFIX_MAP from TARGET_LDFLAGS to avoid passing
  prefix-map options via the linker flags. This is fine for most projects since
  DEBUG_PREFIX_MAP is also provided via CFLAGS at configure time.

  However, lxc enables LTO by default, which causes link-time code generation to
  (re)emit debug information during the link step. Without DEBUG_PREFIX_MAP on the
  link command line, TMPDIR/WORKDIR paths can leak into DWARF, triggering the
  buildpaths QA check and breaking reproducibility.

  Append DEBUG_PREFIX_MAP back to TARGET_LDFLAGS for lxc to ensure prefix-map
  options are visible during LTO link-time compilation.

  [1] https://git.openembedded.org/openembedded-core/commit/?id=1797741aad02b8bf429fac4b81e30cdda64b5448

  Signed-off-by: Ricardo Salveti <ricardo.salveti@oss.qualcomm.com>
  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
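  A sketch of the kind of change described, assuming it lands in the lxc recipe
  itself:

      # re-add prefix-map options at link time so LTO code generation
      # emits DWARF without TMPDIR/WORKDIR build paths
      TARGET_LDFLAGS:append = " ${DEBUG_PREFIX_MAP}"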
* container-cross-install: add tests and documentation for custom service files (Bruce Ashfield, 4 days ago; 3 files, -1/+294)

  Add pytest tests to verify CONTAINER_SERVICE_FILE varflag support:

  TestCustomServiceFileSupport (unit tests, no build required):
  - test_bbclass_has_service_file_support
  - test_bundle_class_has_service_file_support
  - test_service_file_map_syntax
  - test_install_custom_service_function

  TestCustomServiceFileBoot (boot tests, require built image):
  - test_systemd_services_directory_exists
  - test_container_services_present
  - test_container_service_enabled
  - test_custom_service_content
  - test_podman_quadlet_directory

  Documentation updates:
  - docs/container-bundling.md: Add "Custom Service Files" section with variable
    format, usage examples for both BUNDLED_CONTAINERS and container-bundle
    packages, and example .service/.container files
  - tests/README.md: Add test class entries to structure diagram and "What the
    Tests Check" table

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* container-cross-install: add CONTAINER_SERVICE_FILE support (Bruce Ashfield, 4 days ago; 2 files, -1/+236)

  Add support for custom systemd service files (Docker) or Quadlet container files
  (Podman) instead of auto-generated ones for container autostart.

  For containers requiring specific startup configuration (ports, volumes,
  capabilities, dependencies), users can now provide custom service files using
  the CONTAINER_SERVICE_FILE varflag:

      CONTAINER_SERVICE_FILE[container-name] = "${UNPACKDIR}/myservice.service"

  For BUNDLED_CONTAINERS in image recipes:

      SRC_URI += "file://myapp.service"
      BUNDLED_CONTAINERS = "myapp-container:docker:autostart"
      CONTAINER_SERVICE_FILE[myapp-container] = "${UNPACKDIR}/myapp.service"

  For container-bundle packages:

      SRC_URI = "file://myapp.service"
      CONTAINER_BUNDLES = "myapp-container:autostart"
      CONTAINER_SERVICE_FILE[myapp-container] = "${UNPACKDIR}/myapp.service"

  Implementation:
  - container-cross-install.bbclass: Add get_container_service_file_map() to build
    varflag map, install_custom_service() for BUNDLED_CONTAINERS, and
    install_custom_service_from_bundle() for bundle packages
  - container-bundle.bbclass: Install custom service files to
    ${datadir}/container-bundles/${runtime}/services/

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* image-oci: add host layer type and delta-only copying (Bruce Ashfield, 4 days ago; 2 files, -14/+90)

  Add two enhancements to multi-layer OCI image support:

  1. Delta-only copying for directories/files layers:
     - directories and files layers now only copy content that doesn't already
       exist in the bundle rootfs from earlier layers
     - Prevents duplication when a directories layer references paths that were
       already populated by a packages layer
     - Logs show "delta: N copied, M skipped" for visibility

  2. New 'host' layer type for build machine content:
     - Copies files from the build machine filesystem (outside Yocto)
     - Format: name:host:source_path:dest_path
     - Multiple pairs: name:host:src1:dst1+src2:dst2
     - Emits warning at parse time about reproducibility impact
     - Fatal error if source path doesn't exist
     - Use case: deployment-specific config, certificates, keys that cannot be
       packaged in recipes

  Example:

      OCI_LAYERS = "\
          base:packages:busybox \
          app:directories:/opt/myapp \
          certs:host:/etc/ssl/certs/ca.crt:/etc/ssl/certs/ca.crt \
      "

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* vcontainer: enable incremental builds by default (Bruce Ashfield, 4 days ago; 4 files, -10/+38)

  Previously, vcontainer recipes had [nostamp] flags that forced all tasks to
  rebuild on every bitbake invocation, even when nothing changed. This was added
  as a workaround for dependency tracking issues but caused slow rebuild times.

  Changes:
  - Make [nostamp] conditional on VCONTAINER_FORCE_BUILD variable
  - Default to normal stamp-based caching for faster incremental builds
  - file-checksums on do_rootfs still tracks init script changes
  - Add VCONTAINER_FORCE_BUILD status to the tarball build banner

  To enable the old always-rebuild behavior (for debugging dependency issues),
  set in local.conf:

      VCONTAINER_FORCE_BUILD = "1"

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
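  A minimal sketch of making a [nostamp] flag conditional on a variable, as
  described above (the task name is illustrative):

      python () {
          if d.getVar('VCONTAINER_FORCE_BUILD') == '1':
              # restore the old always-rebuild behaviour on request
              d.setVarFlag('do_rootfs', 'nostamp', '1')
      }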
* vcontainer-tarball: build all architectures via single bitbake command (Bruce Ashfield, 4 days ago; 2 files, -6/+45)

  Previously, building vcontainer-tarball required multiple bitbake invocations or
  complex command lines to build both x86_64 and aarch64 blobs. This was a
  usability issue.

  Changes:
  - mcdepends now triggers builds for BOTH architectures automatically
  - VCONTAINER_ARCHITECTURES defaults to "x86_64 aarch64" (was auto-detect)
  - Add informational banner at parse time showing what will be built
  - Fix duplicate sanity check messages when multiconfig is active

  Usage is now simply:

      bitbake vcontainer-tarball

  To build only one architecture, set in local.conf:

      VCONTAINER_ARCHITECTURES = "x86_64"

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* vdkr-init: improve Docker daemon startup logging and error handling (Bruce Ashfield, 4 days ago; 1 file, -5/+17)

  Improve debugging capabilities when the Docker daemon fails to start:

  - Log dockerd output to /var/log/docker.log instead of /dev/null
  - Capture docker info exit code and output for diagnostics
  - Show docker info error on every 10th iteration while waiting
  - Include last docker info output and docker.log tail on failure
  - Extend sleep on failure from 2s to 5s for log review

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* packagegroups: add container build aggregates (Bruce Ashfield, 4 days ago; 3 files, -0/+118)

  Add packagegroup recipes to simplify building all container-related artifacts:

  - packagegroup-container-images: Build all OCI container images (recipes
    inheriting image-oci)
  - packagegroup-container-bundles: Build all container bundles (recipes
    inheriting container-bundle)
  - packagegroup-container-demo: Build all demo containers and bundles

  Usage:

      bitbake packagegroup-container-images
      bitbake packagegroup-container-bundles
      bitbake packagegroup-container-demo

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* vcontainer: add host-side idle timeout with QMP shutdown (Bruce Ashfield, 4 days ago; 3 files, -20/+154)

  Implement reliable idle timeout for vmemres daemon mode using host-side
  monitoring with QMP-based shutdown, and container-aware idle detection via a
  virtio-9p shared file.

  Host-side changes (vrunner.sh):
  - Add -no-reboot flag to QEMU for clean exit semantics
  - Spawn background watchdog when daemon starts
  - Watchdog monitors activity file timestamp
  - Check interval scales to idle timeout (timeout/5, clamped 10-60s)
  - Read container status from shared file (guest writes via virtio-9p)
  - Only shutdown if no containers are running
  - Send QMP "quit" command for graceful shutdown
  - Watchdog auto-exits if QEMU dies (no zombie processes)
  - Touch activity file in daemon_send() for user activity tracking

  Config changes (vcontainer-common.sh):
  - Add idle-timeout to build_runner_args() so it's always passed

  Guest-side changes (vcontainer-init-common.sh):
  - Add watchdog that writes container status to /mnt/share/.containers_running
  - Host reads this file instead of socket commands (avoids output corruption)
  - Close inherited virtio-serial fd 3 in watchdog subshell to prevent leaks
  - Guest-side shutdown logic preserved but disabled (QMP more reliable)
  - Handle Yocto read-only-rootfs volatile directories (/var/volatile)

  The shared file approach avoids sending container check commands through the
  daemon socket, which previously caused output corruption on the single-stream
  virtio-serial channel.

  The idle timeout is configurable via:

      vdkr vconfig idle-timeout <secs>

  Default: 1800 seconds (30 minutes)

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
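  A hypothetical sketch of such a host-side watchdog loop; the paths, variable
  names, and the use of 'nc -U' to reach the QMP socket are assumptions, not the
  actual vrunner.sh code:

      while kill -0 "$QEMU_PID" 2>/dev/null; do
          sleep "$CHECK_INTERVAL"                              # timeout/5, clamped 10-60s
          now=$(date +%s)
          last=$(stat -c %Y "$ACTIVITY_FILE" 2>/dev/null || echo "$now")
          running=$(cat "$SHARE_DIR/.containers_running" 2>/dev/null || echo 1)
          if [ "$running" = "0" ] && [ "$(expr "$now" - "$last")" -ge "$IDLE_TIMEOUT" ]; then
              # ask QEMU to exit gracefully via QMP
              printf '{"execute":"qmp_capabilities"}\n{"execute":"quit"}\n' | nc -U "$QMP_SOCKET"
              break
          fi
      done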
* vcontainer: consolidate initramfs-create recipes (Bruce Ashfield, 4 days ago; 3 files, -80/+33)

  Update vcontainer-initramfs-create.inc to use the image-based approach:

  - Depend on tiny-initramfs-image for cpio.gz (replaces file extraction)
  - Depend on rootfs-image for squashfs (unchanged)
  - Remove DEPENDS on squashfs-tools-native (no longer extracting files)

  Update recipe files to use the consolidated inc:

  - vdkr-initramfs-create_1.0.bb
  - vpdmn-initramfs-create_1.0.bb

  Boot flow remains unchanged: QEMU boots kernel + tiny initramfs -> preinit
  mounts rootfs.img from /dev/vda -> switch_root into rootfs.img -> vdkr-init.sh
  or vpdmn-init.sh runs

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* vcontainer: add tiny initramfs image infrastructure (Bruce Ashfield, 4 days ago; 5 files, -3/+142)

  Add proper Yocto image recipes for the tiny initramfs used by vdkr/vpdmn in the
  switch_root boot flow:

  - vcontainer-tiny-initramfs-image.inc: Shared image configuration
  - vcontainer-preinit_1.0.bb: Preinit script package (shared)
  - vdkr-tiny-initramfs-image.bb: Tiny initramfs for vdkr
  - vpdmn-tiny-initramfs-image.bb: Tiny initramfs for vpdmn

  The tiny initramfs contains only busybox and a preinit script that:

  1. Mounts devtmpfs, proc, sysfs
  2. Mounts the squashfs rootfs.img from /dev/vda
  3. Creates tmpfs overlay for writes
  4. Performs switch_root to the real rootfs

  This replaces ad-hoc file extraction with proper image-based builds, improving
  reproducibility and maintainability.

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
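  A hypothetical busybox preinit sketch following the four steps above (device
  names and mount points mirror the description, not the actual script):

      #!/bin/sh
      mount -t devtmpfs devtmpfs /dev
      mount -t proc proc /proc
      mount -t sysfs sysfs /sys
      mkdir -p /rootfs.ro /rootfs.rw /newroot
      mount -t squashfs -o ro /dev/vda /rootfs.ro        # read-only rootfs.img
      mount -t tmpfs tmpfs /rootfs.rw                    # writable overlay store
      mkdir -p /rootfs.rw/upper /rootfs.rw/work
      mount -t overlay overlay \
          -o lowerdir=/rootfs.ro,upperdir=/rootfs.rw/upper,workdir=/rootfs.rw/work /newroot
      exec switch_root /newroot /sbin/init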
* vcontainer-tarball: add nativesdk-expect dependency (Bruce Ashfield, 4 days ago; 1 file, -0/+1)

  Add expect to the vcontainer SDK toolchain for interactive testing and
  automation scripts.

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* linux-yocto: add iptables legacy kernel config for Docker (Bruce Ashfield, 4 days ago; 1 file, -1/+10)

  Kernel 6.18+ split iptables into legacy/nftables backends. Docker requires the
  legacy iptables support, so add the kernel configuration for the full dependency
  chain:

  - CONFIG_NETFILTER_XTABLES_LEGACY=y
  - CONFIG_IP_NF_IPTABLES_LEGACY=m
  - CONFIG_IP_NF_FILTER=m
  - CONFIG_IP_NF_NAT=m
  - CONFIG_IP_NF_TARGET_MASQUERADE=m

  Without these, Docker's iptables rules fail to load on 6.18+ kernels.

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* vcontainer: add sanity checks and auto-enable virtfs for QEMU (Bruce Ashfield, 4 days ago; 3 files, -4/+12)

  Fix virtio-9p (virtfs) support for container-cross-install batch imports, which
  provides ~50x speedup over base64-over-serial. The issue was that native recipes
  don't see target DISTRO_FEATURES, so qemu-system-native wasn't getting virtfs
  enabled.

  Fix by:
  - layer.conf: Propagate virtualization to DISTRO_FEATURES_NATIVE when vcontainer
    or virtualization is in target DISTRO_FEATURES
  - qemu-system-native: Check DISTRO_FEATURES_NATIVE for virtfs enable
  - container-cross-install: Prepend native sysroot to PATH so vrunner finds the
    QEMU with virtfs support

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* vcontainer: fix runc/crun conflict in multiconfig builds (Bruce Ashfield, 4 days ago; 3 files, -12/+23)

  The vruntime distro is used for multiconfig builds of both vdkr (Docker/runc)
  and vpdmn (Podman/crun) images. When CONTAINER_PROFILE or
  VIRTUAL-RUNTIME_container_runtime is set, containerd and podman pull their
  preferred runtime via RDEPENDS, causing package conflicts.

  Fix by having vruntime distro NOT participate in CONTAINER_PROFILE:

  - Set VIRTUAL-RUNTIME_container_runtime="" to prevent automatic runtime selection
  - Explicitly install runc in vdkr-rootfs-image.bb
  - Explicitly install crun in vpdmn-rootfs-image.bb

  This allows both images to be built in the same multiconfig without conflicts,
  while standard container-host images continue to use CONTAINER_PROFILE normally.

  Also add kernel-modules to vdkr-rootfs-image for overlay filesystem support.

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* crun: add RCONFLICTS to prevent runc package conflict (Bruce Ashfield, 4 days ago; 1 file, -0/+7)

  When CRUN_AS_RUNC is enabled (default), crun creates a /usr/bin/runc symlink
  that conflicts with the runc package's /usr/bin/runc binary. Add RCONFLICTS to
  declare this conflict so package managers prevent both from being installed
  simultaneously.

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
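  A sketch of the kind of declaration meant here; how CRUN_AS_RUNC is actually
  tested in the recipe is an assumption:

      # only conflict with runc when the /usr/bin/runc symlink is shipped
      RCONFLICTS:${PN} += "${@'runc' if d.getVar('CRUN_AS_RUNC') == '1' else ''}"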
* vcontainer: add sanity checks and auto-enable virtfs for QEMU (Bruce Ashfield, 4 days ago; 2 files, -0/+46)

  Add sanity check that warns when the vcontainer distro feature is enabled but
  BBMULTICONFIG is missing the required vruntime-* multiconfigs.

  Add qemu-system-native bbappend to auto-enable virtfs (virtio-9p) when the
  vcontainer or virtualization distro feature is set. This is required for the
  fast batch-import path in container-cross-install.

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* container-bundles: add multilayer container bundle recipe (Bruce Ashfield, 4 days ago; 1 file, -0/+27)

  Add demo recipe that bundles app-container-multilayer to demonstrate multi-layer
  OCI images with container-cross-install.

  Usage:

      IMAGE_INSTALL:append:pn-container-image-host = " multilayer-container-bundle"

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* vcontainer: add multi-arch OCI support (Bruce Ashfield, 4 days ago; 5 files, -11/+1259)

  Add functions to detect and handle the multi-architecture OCI Image Index format
  with automatic platform selection during import. Also add oci-multiarch.bbclass
  for build-time multi-arch OCI creation.

  Runtime support (vcontainer-common.sh):
  - is_oci_image_index() - detect multi-arch OCI images
  - get_oci_platforms() - list available platforms
  - select_platform_manifest() - select manifest for target architecture
  - extract_platform_oci() - extract single platform to new OCI dir
  - normalize_arch_to_oci/from_oci() - architecture name mapping
  - Update vimport to auto-select platform from multi-arch images

  Build-time support (oci-multiarch.bbclass):
  - Create OCI Image Index from multiconfig builds
  - Collect images from vruntime-aarch64, vruntime-x86-64
  - Combine blobs and create unified manifest list

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* container-registry: abstract config and add multi-directory push (Bruce Ashfield, 4 days ago; 2 files, -45/+352)

  Abstract registry configuration for Docker/Podman compatibility and add
  multi-directory scanning for easy multi-arch manifest list creation.

  - Support both DOCKER_REGISTRY_INSECURE and CONTAINER_REGISTRY_INSECURE
  - Add DEPLOY_DIR_IMAGES to scan all machine directories
  - Support push by path (single OCI) and push by name (all archs)
  - Add environment variable overrides for flexibility
  - Single 'push' command now creates multi-arch manifest lists

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* container-cross-install: fix image naming and default runtime (Bruce Ashfield, 4 days ago; 1 file, -17/+56)

  Fix extract_container_info() to properly handle multi-part container names and
  add automatic runtime detection based on CONTAINER_PROFILE.

  - Fix multi-part name parsing (app-container-multilayer-latest-oci now correctly
    becomes app-container-multilayer:latest)
  - Add CONTAINER_DEFAULT_RUNTIME from CONTAINER_PROFILE
  - Add CONTAINER_IMPORT_TIMEOUT_BASE/PER for dynamic timeout scaling

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
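  Illustrative-only parsing of an artifact name of that shape (not the actual
  extract_container_info() code):

      artifact=app-container-multilayer-latest-oci
      base=${artifact%-oci}       # app-container-multilayer-latest
      tag=${base##*-}             # latest
      name=${base%-*}             # app-container-multilayer
      echo "${name}:${tag}"       # app-container-multilayer:latest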
* vcontainer: add virtio-9p fast path for batch imports (Bruce Ashfield, 4 days ago; 4 files, -50/+284)

  Add virtio-9p filesystem support for faster storage output during batch
  container imports, replacing the slow base64-over-console method.

  - Add --timeout option for configurable import timeouts
  - Mount virtio-9p share in batch-import mode
  - Parse _9p=1 kernel parameter for 9p availability
  - Write storage.tar directly to the shared filesystem
  - Reduces import time from ~600s to ~11s for large containers

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
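  For reference, the usual shape of a virtio-9p share (illustrative QEMU and guest
  commands, not the actual vrunner/init code):

      # host: export a directory to the guest
      qemu-system-x86_64 ... -virtfs local,path=$SHARE_DIR,mount_tag=share0,security_model=none
      # guest: mount it when _9p=1 is present on the kernel cmdline
      mount -t 9p -o trans=virtio,version=9p2000.L share0 /mnt/share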
* image-oci: add layer caching for multi-layer OCI builds (Bruce Ashfield, 4 days ago; 4 files, -2/+731)

  Add layer caching to speed up multi-layer OCI image rebuilds. When enabled,
  pre-installed package layers are cached to disk and restored on subsequent
  builds, avoiding repeated package installation.

  New variables:
  - OCI_LAYER_CACHE: Enable/disable caching (default "1")
  - OCI_LAYER_CACHE_DIR: Cache location (default ${TOPDIR}/oci-layer-cache/${MACHINE})

  Cache key is computed from:
  - Layer name and type
  - Sorted package list
  - Package versions from PKGDATA_DIR
  - MACHINE and TUNE_PKGARCH

  Cache automatically invalidates when:
  - Package versions change
  - Layer definition changes
  - Architecture changes

  Benefits:
  - First build: ~10-30s per layer (cache miss, packages installed)
  - Subsequent builds: ~1s per layer (cache hit, files copied)
  - Shared across recipes with identical layer definitions

  Build log shows cache status:

      NOTE: OCI Cache HIT: Layer 'base' (be88c180f651416b)
      NOTE: OCI: Pre-installed packages for 3 layers (cache: 3 hits, 0 misses)

  Also adds comprehensive pytest suite for multi-layer OCI functionality including
  tests for 1/2/3 layer modes and cache behavior.

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
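  A sketch of how such a cache key could be derived from the inputs listed above
  (hypothetical code; the real bbclass may differ):

      def oci_layer_cache_key(layer_name, layer_type, packages, pkg_versions, machine, tune_pkgarch):
          import hashlib
          parts = [layer_name, layer_type, machine, tune_pkgarch]
          parts += sorted("%s=%s" % (p, pkg_versions.get(p, "")) for p in packages)
          # 16 hex chars, matching the hash shown in the "OCI Cache HIT" log line
          return hashlib.sha256("\n".join(parts).encode()).hexdigest()[:16]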
* image-oci: add multi-layer OCI image support with OCI_LAYERS (Bruce Ashfield, 4 days ago; 4 files, -34/+488)

  Add support for creating multi-layer OCI images with explicit layer definitions
  via the OCI_LAYERS variable. This enables fine-grained control over container
  layer composition.

  New variables:
  - OCI_LAYER_MODE: Set to "multi" for explicit layer definitions
  - OCI_LAYERS: Define layers as "name:type:content" entries
    - packages: Install specific packages in a layer
    - directories: Copy directories from IMAGE_ROOTFS
    - files: Copy specific files from IMAGE_ROOTFS

  Package installation uses Yocto's package manager classes (RpmPM, OpkgPM) for
  consistency with do_rootfs, rather than calling dnf/opkg directly.

  Example usage:

      OCI_LAYER_MODE = "multi"
      OCI_LAYERS = "\
          base:packages:base-files+base-passwd+netbase \
          shell:packages:busybox \
          app:packages:curl \
      "

  This creates a 3-layer OCI image with discrete base, shell, and app layers that
  can be shared and cached independently.

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* recipes: add multi-layer OCI example recipes (Bruce Ashfield, 4 days ago; 3 files, -0/+102)

  Add example recipes demonstrating multi-layer OCI image building:

  alpine-oci-base_3.19.bb:
  - Fetches Alpine 3.19 from Docker Hub using container-bundle
  - Uses CONTAINER_BUNDLE_DEPLOY for use as an OCI_BASE_IMAGE source
  - Pinned digest for reproducible builds

  app-container-alpine.bb:
  - Demonstrates external base image usage
  - Layers Yocto packages (busybox) on top of Alpine
  - Uses OCI_IMAGE_CMD for Docker-like behavior

  app-container-layered.bb:
  - Demonstrates local base image usage
  - Layers Yocto packages on top of container-base
  - Uses OCI_IMAGE_CMD for Docker-like behavior

  Both app containers produce 2-layer OCI images where the base layer is shared,
  reducing storage and transfer costs.

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* docs: add OCI multi-layer and vshell documentation (Bruce Ashfield, 4 days ago; 1 file, -0/+46)

  Update container-bundling.md with:

  - New "OCI Multi-Layer Images" section explaining:
    - Single vs multi-layer image differences
    - OCI_BASE_IMAGE usage (recipe name or path)
    - OCI_IMAGE_CMD vs OCI_IMAGE_ENTRYPOINT behavior
    - When to use CMD (base images) vs ENTRYPOINT (wrapper tools)

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* vcontainer: add vshell command for VM debug access (Bruce Ashfield, 4 days ago; 1 file, -0/+20)

  Add vshell command to vdkr/vpdmn for interactive shell access to the QEMU VM
  where Docker/Podman runs. This is useful for debugging container issues directly
  inside the virtual environment.

  Usage:

      vdkr vmemres start
      vdkr vshell
      # Now inside VM, can run docker commands directly
      docker images
      docker inspect <image>
      exit

  The vshell command requires the memory-resident daemon to be running
  (vmemres start). It opens an interactive shell via the daemon's
  --daemon-interactive mode.

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* container-bundle: add CONTAINER_BUNDLE_DEPLOY for base layer use (Bruce Ashfield, 4 days ago; 1 file, -0/+65)

  Add CONTAINER_BUNDLE_DEPLOY variable to enable dual-use of container-bundle:

  1. Target packages (existing): Creates installable packages for target container
     storage (Docker/Podman)
  2. Base layer source (new): When CONTAINER_BUNDLE_DEPLOY = "1", also deploys the
     fetched OCI image to DEPLOY_DIR_IMAGE for use as a base layer via
     OCI_BASE_IMAGE

  This enables fetching external images (docker.io, quay.io) and using them as
  base layers for Yocto-built container images.

  Example usage:

      # recipes-containers/oci-base-images/alpine-oci-base_3.19.bb
      inherit container-bundle
      CONTAINER_BUNDLES = "docker.io/library/alpine:3.19"
      CONTAINER_DIGESTS[docker.io_library_alpine_3.19] = "sha256:..."
      CONTAINER_BUNDLE_DEPLOY = "1"

      # Then in your app container recipe:
      OCI_BASE_IMAGE = "alpine-oci-base"
      IMAGE_INSTALL = "myapp"

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* image-oci: add multi-layer OCI support and CMD default (Bruce Ashfield, 4 days ago; 2 files, -15/+395)

  Add support for multi-layer OCI images, enabling base + app layer builds.

  Multi-layer support:
  - Add OCI_BASE_IMAGE variable to specify base layer (recipe name or path)
  - Add OCI_BASE_IMAGE_TAG for selecting base image tag (default: latest)
  - Resolve base image type (recipe/path/remote) at parse time
  - Copy base OCI layout before adding new layer via umoci repack
  - Fix merged-usr whiteout ordering issue for non-merged-usr base images
    (replaces problematic whiteouts with filtered entries to avoid Docker pull
    failures when layering merged-usr on traditional layout)

  CMD/ENTRYPOINT behavior change:
  - Add OCI_IMAGE_CMD variable (default: "/bin/sh")
  - Change OCI_IMAGE_ENTRYPOINT default to empty string
  - This makes `docker run image /bin/sh` work as expected (like Docker Hub images)
  - OCI_IMAGE_ENTRYPOINT_ARGS still works for legacy compatibility
  - Fix shlex.split() for proper shell quoting in CMD/ENTRYPOINT values

  The multi-layer feature requires the umoci backend (default). The sloci backend
  only supports single-layer images and will error if OCI_BASE_IMAGE is set.

  Example usage:

      OCI_BASE_IMAGE = "container-base"
      IMAGE_INSTALL = "myapp"
      OCI_IMAGE_CMD = "/usr/bin/myapp"

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* container-registry: add management commands and documentation (Bruce Ashfield, 4 days ago; 3 files, -17/+272)

  Registry management commands:
  - delete <image>:<tag>: Remove tagged images from registry
  - gc: Garbage collection with dry-run preview and confirmation
  - push <image> --tag: Explicit tags now require image name (prevents
    accidentally tagging all images with same version)

  Config improvements:
  - Copy config to storage directory with baked-in storage path
  - Fixes gc which reads config directly (not via env var)
  - All registry files now in ${TOPDIR}/container-registry/

  Documentation:
  - Development Loop workflow (build, push, pull, test)
  - Build-time OCI labels (revision, branch, created)
  - Complete command reference

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* image-oci: add build-time metadata labels for traceability (Bruce Ashfield, 4 days ago; 2 files, -1/+81)

  Automatically embed source and build information into OCI images using standard
  OCI annotations (opencontainers.org image-spec):

  - org.opencontainers.image.revision: git commit SHA
  - org.opencontainers.image.ref.name: git branch name
  - org.opencontainers.image.created: ISO 8601 build timestamp
  - org.opencontainers.image.version: PV (if meaningful)

  New variables:
  - OCI_IMAGE_REVISION: explicit SHA override (auto-detects from TOPDIR)
  - OCI_IMAGE_BRANCH: explicit branch override (auto-detects from TOPDIR)
  - OCI_IMAGE_BUILD_DATE: explicit timestamp override (auto-generated)
  - OCI_IMAGE_APP_RECIPE: hook for future cross-recipe extraction

  Set any variable to "none" to disable that specific label.

  This enables 1:1 traceability between container images and source code,
  following industry best practices for CI/CD and release management.

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
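  A hypothetical sketch of the kind of TOPDIR auto-detection described (the class
  may implement this differently):

      def detect_git_revision(d):
          import subprocess
          try:
              return subprocess.check_output(
                  ['git', '-C', d.getVar('TOPDIR'), 'rev-parse', 'HEAD'],
                  stderr=subprocess.DEVNULL).decode().strip()
          except (subprocess.CalledProcessError, OSError):
              return 'none'    # "none" disables the label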
* container-registry: add industry-standard tag strategies (Bruce Ashfield, 4 days ago; 3 files, -25/+247)

  Add comprehensive tag support for registry push operations.

  Tag strategies (CONTAINER_REGISTRY_TAG_STRATEGY):
  - sha/git: short git commit hash for traceability
  - branch: git branch name (sanitized) for dev workflows
  - semver: nested SemVer tags (1.2.3 -> 1.2.3, 1.2, 1)
  - timestamp: YYYYMMDD-HHMMSS format
  - version: single version tag from PV
  - latest: the "latest" tag
  - arch: append architecture suffix

  Helper script enhancements:
  - push --tag <tag>: explicit tags (repeatable)
  - push --strategy <strategies>: override tag strategy
  - push --version <ver>: version for semver strategy
  - Baked-in defaults from bitbake variables
  - Environment variable overrides supported

  This aligns with industry practices:
  - Git SHA for CI/CD traceability
  - SemVer nested tags for release management
  - Branch tags for feature development

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
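  For example, the nested SemVer expansion can be done with plain POSIX parameter
  expansion (illustrative only, not the helper script's code):

      ver=1.2.3
      tags="$ver ${ver%.*} ${ver%%.*}"     # -> "1.2.3 1.2 1"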
* tests: add container registry pytest tests (Bruce Ashfield, 4 days ago; 5 files, -5/+1106)

  Add pytest tests for registry functionality:

  - test_vdkr_registry.py: vconfig registry, image commands, CLI override
  - test_container_registry_script.py: start/stop/push/import/list/tags
  - conftest.py: --registry-url, --registry-script options

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* vdkr: add registry configuration and pull fallback (Bruce Ashfield, 4 days ago; 5 files, -5/+493)

  Add registry support to vdkr:

  - vconfig registry command for persistent config
  - --registry flag for one-off usage
  - Registry-first, Docker Hub fallback for pulls
  - Baked-in registry config via CONTAINER_REGISTRY_URL
  - Image commands (inspect, history, rmi, images) work without transform

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* container-registry: add local OCI registry infrastructure (Bruce Ashfield, 4 days ago; 10 files, -0/+1422)

  Add container registry support for Yocto container workflows:

  - container-registry.bbclass with helper functions
  - container-registry-index.bb generates helper script with baked paths
  - docker-registry-config.bb for Docker daemon on targets
  - container-oci-registry-config.bb for Podman/Skopeo/Buildah targets
  - IMAGE_FEATURES container-registry for easy target configuration

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* tests: increase stop command timeouts to 30 seconds (Bruce Ashfield, 4 days ago; 1 file, -10/+10)

  Docker stop has a default 10-second grace period before SIGKILL, so test
  timeouts of 10 seconds were insufficient. Increase all stop timeouts to 30
  seconds to account for the grace period plus command overhead.

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* tests: add --network=host backward compatibility test (Bruce Ashfield, 4 days ago; 2 files, -4/+67)

  Add test_network_host_backward_compat to verify that explicit --network=host
  still works with the new bridge networking default. Uses busybox httpd with a
  configurable port since static port forwards now map host_port -> host_port on
  the VM (for bridge networking's Docker -p handling).

  Also update test docstrings to reflect bridge networking as the new default and
  add port 8082 to TEST_PORTS for orphan cleanup.

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* vcontainer: fix ps -q to suppress port forward display (Bruce Ashfield, 4 days ago; 2 files, -5/+13)

  When using `ps -q` or `ps --quiet`, only container IDs should be output. The
  port forward registry display was being included, which broke cleanup code that
  expected just container IDs.

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* vrunner: update static port forwarding for bridge networking (Bruce Ashfield, 4 days ago; 1 file, -5/+8)

  Update the static port forwarding (used at QEMU startup) to match the dynamic
  QMP port forwarding change. With bridge networking:

  - QEMU forwards host:port -> VM:port
  - Docker's iptables handles VM:port -> container:port

  Previously the static port forward went directly to the container port
  (host:8080 -> VM:80), which doesn't work with bridge networking where Docker
  needs to handle the final hop.

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
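  Illustrative QEMU user-network forwards for the two schemes (ports follow the
  example above; not the actual vrunner code):

      # old: forward straight to the container port inside the VM
      -netdev user,id=net0,hostfwd=tcp::8080-:80
      # new: forward host:8080 -> VM:8080; Docker's iptables maps VM:8080 -> container port
      -netdev user,id=net0,hostfwd=tcp::8080-:8080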