Commit message | Author | Age | Files | Lines
* container-cross-install: add tests and documentation for custom service files (branch container-cross-install; Bruce Ashfield, 2026-02-06; 3 files, -1/+294)

  Add pytest tests to verify CONTAINER_SERVICE_FILE varflag support:

  TestCustomServiceFileSupport (unit tests, no build required):
  - test_bbclass_has_service_file_support
  - test_bundle_class_has_service_file_support
  - test_service_file_map_syntax
  - test_install_custom_service_function

  TestCustomServiceFileBoot (boot tests, require built image):
  - test_systemd_services_directory_exists
  - test_container_services_present
  - test_container_service_enabled
  - test_custom_service_content
  - test_podman_quadlet_directory

  Documentation updates:
  - docs/container-bundling.md: Add "Custom Service Files" section with
    variable format, usage examples for both BUNDLED_CONTAINERS and
    container-bundle packages, and example .service/.container files
  - tests/README.md: Add test class entries to the structure diagram and
    "What the Tests Check" table

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* container-cross-install: add CONTAINER_SERVICE_FILE support (Bruce Ashfield, 2026-02-06; 2 files, -1/+236)

  Add support for custom systemd service files (Docker) or Quadlet
  container files (Podman) instead of auto-generated ones for container
  autostart.

  For containers requiring specific startup configuration (ports, volumes,
  capabilities, dependencies), users can now provide custom service files
  using the CONTAINER_SERVICE_FILE varflag:

    CONTAINER_SERVICE_FILE[container-name] = "${UNPACKDIR}/myservice.service"

  For BUNDLED_CONTAINERS in image recipes:

    SRC_URI += "file://myapp.service"
    BUNDLED_CONTAINERS = "myapp-container:docker:autostart"
    CONTAINER_SERVICE_FILE[myapp-container] = "${UNPACKDIR}/myapp.service"

  For container-bundle packages:

    SRC_URI = "file://myapp.service"
    CONTAINER_BUNDLES = "myapp-container:autostart"
    CONTAINER_SERVICE_FILE[myapp-container] = "${UNPACKDIR}/myapp.service"

  Implementation:
  - container-cross-install.bbclass: Add get_container_service_file_map()
    to build the varflag map, install_custom_service() for
    BUNDLED_CONTAINERS, and install_custom_service_from_bundle() for
    bundle packages
  - container-bundle.bbclass: Install custom service files to
    ${datadir}/container-bundles/${runtime}/services/

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
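  As a rough illustration only (the commit does not include these files;
  all contents below are assumptions), a custom myapp.service for the
  Docker case and a Quadlet myapp.container for the Podman case might
  look like:

```
# myapp.service (Docker; illustrative contents, not from the commit)
[Unit]
Description=MyApp container
After=docker.service
Requires=docker.service

[Service]
ExecStart=/usr/bin/docker run --rm --name myapp -p 8080:80 myapp-container:latest
ExecStop=/usr/bin/docker stop myapp
Restart=on-failure

[Install]
WantedBy=multi-user.target

# myapp.container (Podman Quadlet; illustrative contents, not from the commit)
[Container]
Image=localhost/myapp-container:latest
PublishPort=8080:80

[Install]
WantedBy=multi-user.target
```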
* image-oci: add host layer type and delta-only copying (Bruce Ashfield, 2026-02-05; 2 files, -14/+90)

  Add two enhancements to multi-layer OCI image support:

  1. Delta-only copying for directories/files layers:
     - directories and files layers now only copy content that doesn't
       already exist in the bundle rootfs from earlier layers
     - Prevents duplication when a directories layer references paths
       that were already populated by a packages layer
     - Logs show "delta: N copied, M skipped" for visibility

  2. New 'host' layer type for build machine content:
     - Copies files from the build machine filesystem (outside Yocto)
     - Format: name:host:source_path:dest_path
     - Multiple pairs: name:host:src1:dst1+src2:dst2
     - Emits a warning at parse time about reproducibility impact
     - Fatal error if the source path doesn't exist
     - Use case: deployment-specific config, certificates, keys that
       cannot be packaged in recipes

  Example:

    OCI_LAYERS = "\
        base:packages:busybox \
        app:directories:/opt/myapp \
        certs:host:/etc/ssl/certs/ca.crt:/etc/ssl/certs/ca.crt \
    "

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* vcontainer: enable incremental builds by default (Bruce Ashfield, 2026-02-05; 4 files, -10/+38)

  Previously, vcontainer recipes had [nostamp] flags that forced all
  tasks to rebuild on every bitbake invocation, even when nothing
  changed. This was added as a workaround for dependency tracking issues
  but caused slow rebuild times.

  Changes:
  - Make [nostamp] conditional on the VCONTAINER_FORCE_BUILD variable
  - Default to normal stamp-based caching for faster incremental builds
  - file-checksums on do_rootfs still tracks init script changes
  - Add VCONTAINER_FORCE_BUILD status to the tarball build banner

  To enable the old always-rebuild behavior (for debugging dependency
  issues), set in local.conf:

    VCONTAINER_FORCE_BUILD = "1"

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* vcontainer-tarball: build all architectures via single bitbake command (Bruce Ashfield, 2026-02-05; 2 files, -6/+45)

  Previously, building vcontainer-tarball required multiple bitbake
  invocations or complex command lines to build both x86_64 and aarch64
  blobs. This was a usability issue.

  Changes:
  - mcdepends now triggers builds for BOTH architectures automatically
  - VCONTAINER_ARCHITECTURES defaults to "x86_64 aarch64" (was auto-detect)
  - Add an informational banner at parse time showing what will be built
  - Fix duplicate sanity check messages when multiconfig is active

  Usage is now simply:

    bitbake vcontainer-tarball

  To build only one architecture, set in local.conf:

    VCONTAINER_ARCHITECTURES = "x86_64"

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* vdkr-init: improve Docker daemon startup logging and error handling (Bruce Ashfield, 2026-01-24; 1 file, -5/+17)

  Improve debugging capabilities when the Docker daemon fails to start:

  - Log dockerd output to /var/log/docker.log instead of /dev/null
  - Capture docker info exit code and output for diagnostics
  - Show the docker info error on every 10th iteration while waiting
  - Include last docker info output and docker.log tail on failure
  - Extend sleep on failure from 2s to 5s for log review

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* packagegroups: add container build aggregates (Bruce Ashfield, 2026-01-24; 3 files, -0/+118)

  Add packagegroup recipes to simplify building all container-related
  artifacts:

  - packagegroup-container-images: Build all OCI container images
    (recipes inheriting image-oci)
  - packagegroup-container-bundles: Build all container bundles
    (recipes inheriting container-bundle)
  - packagegroup-container-demo: Build all demo containers and bundles

  Usage:

    bitbake packagegroup-container-images
    bitbake packagegroup-container-bundles
    bitbake packagegroup-container-demo

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* vcontainer: add host-side idle timeout with QMP shutdown (Bruce Ashfield, 2026-01-24; 3 files, -20/+154)

  Implement a reliable idle timeout for vmemres daemon mode using
  host-side monitoring with QMP-based shutdown, and container-aware idle
  detection via a virtio-9p shared file.

  Host-side changes (vrunner.sh):
  - Add -no-reboot flag to QEMU for clean exit semantics
  - Spawn a background watchdog when the daemon starts
  - Watchdog monitors the activity file timestamp
  - Check interval scales to idle timeout (timeout/5, clamped 10-60s)
  - Read container status from the shared file (guest writes via
    virtio-9p)
  - Only shut down if no containers are running
  - Send the QMP "quit" command for graceful shutdown
  - Watchdog auto-exits if QEMU dies (no zombie processes)
  - Touch the activity file in daemon_send() for user activity tracking

  Config changes (vcontainer-common.sh):
  - Add idle-timeout to build_runner_args() so it's always passed

  Guest-side changes (vcontainer-init-common.sh):
  - Add a watchdog that writes container status to
    /mnt/share/.containers_running
  - Host reads this file instead of socket commands (avoids output
    corruption)
  - Close inherited virtio-serial fd 3 in the watchdog subshell to
    prevent leaks
  - Guest-side shutdown logic preserved but disabled (QMP more reliable)
  - Handle Yocto read-only-rootfs volatile directories (/var/volatile)

  The shared file approach avoids sending container check commands
  through the daemon socket, which previously caused output corruption on
  the single-stream virtio-serial channel.

  The idle timeout is configurable via:

    vdkr vconfig idle-timeout <secs>

  Default: 1800 seconds (30 minutes)

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
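  The interval-scaling rule described in the commit (timeout/5, clamped
  to 10-60s) can be sketched as a small shell helper; the function name
  is illustrative, not the actual code:

```shell
#!/bin/sh
# Sketch of the watchdog check-interval scaling:
# interval = idle_timeout / 5, clamped to the 10..60 second range.
# (check_interval is an illustrative name, not taken from vrunner.sh.)
check_interval() {
    interval=$(($1 / 5))
    [ "$interval" -lt 10 ] && interval=10
    [ "$interval" -gt 60 ] && interval=60
    echo "$interval"
}

check_interval 1800   # default 30-minute timeout -> 60
check_interval 60     # -> 12
check_interval 20     # -> 10
```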
* vcontainer: consolidate initramfs-create recipes (Bruce Ashfield, 2026-01-23; 3 files, -80/+33)

  Update vcontainer-initramfs-create.inc to use the image-based approach:

  - Depend on tiny-initramfs-image for cpio.gz (replaces file extraction)
  - Depend on rootfs-image for squashfs (unchanged)
  - Remove DEPENDS on squashfs-tools-native (no longer extracting files)

  Update recipe files to use the consolidated inc:
  - vdkr-initramfs-create_1.0.bb
  - vpdmn-initramfs-create_1.0.bb

  Boot flow remains unchanged: QEMU boots kernel + tiny initramfs ->
  preinit mounts rootfs.img from /dev/vda -> switch_root into rootfs.img
  -> vdkr-init.sh or vpdmn-init.sh runs

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* vcontainer: add tiny initramfs image infrastructure (Bruce Ashfield, 2026-01-23; 5 files, -3/+142)

  Add proper Yocto image recipes for the tiny initramfs used by
  vdkr/vpdmn in the switch_root boot flow:

  - vcontainer-tiny-initramfs-image.inc: Shared image configuration
  - vcontainer-preinit_1.0.bb: Preinit script package (shared)
  - vdkr-tiny-initramfs-image.bb: Tiny initramfs for vdkr
  - vpdmn-tiny-initramfs-image.bb: Tiny initramfs for vpdmn

  The tiny initramfs contains only busybox and a preinit script that:
  1. Mounts devtmpfs, proc, sysfs
  2. Mounts the squashfs rootfs.img from /dev/vda
  3. Creates a tmpfs overlay for writes
  4. Performs switch_root to the real rootfs

  This replaces ad-hoc file extraction with proper image-based builds,
  improving reproducibility and maintainability.

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* vcontainer-tarball: add nativesdk-expect dependency (Bruce Ashfield, 2026-01-23; 1 file, -0/+1)

  Add expect to the vcontainer SDK toolchain for interactive testing and
  automation scripts.

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* linux-yocto: add iptables legacy kernel config for Docker (Bruce Ashfield, 2026-01-23; 1 file, -1/+10)

  Kernel 6.18+ split iptables into legacy/nftables backends. Docker
  requires the legacy iptables support, so add the kernel configuration
  for the full dependency chain:

  - CONFIG_NETFILTER_XTABLES_LEGACY=y
  - CONFIG_IP_NF_IPTABLES_LEGACY=m
  - CONFIG_IP_NF_FILTER=m
  - CONFIG_IP_NF_NAT=m
  - CONFIG_IP_NF_TARGET_MASQUERADE=m

  Without these, Docker's iptables rules fail to load on 6.18+ kernels.

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
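  A typical way to apply such options in a linux-yocto build is a config
  fragment pulled in via a bbappend (the file names below are
  illustrative assumptions, not from the commit):

```
# linux-yocto_%.bbappend (illustrative)
FILESEXTRAPATHS:prepend := "${THISDIR}/files:"
SRC_URI += "file://docker-iptables-legacy.cfg"

# files/docker-iptables-legacy.cfg
CONFIG_NETFILTER_XTABLES_LEGACY=y
CONFIG_IP_NF_IPTABLES_LEGACY=m
CONFIG_IP_NF_FILTER=m
CONFIG_IP_NF_NAT=m
CONFIG_IP_NF_TARGET_MASQUERADE=m
```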
* vcontainer: add sanity checks and auto-enable virtfs for QEMU (Bruce Ashfield, 2026-01-23; 3 files, -4/+12)

  Fix virtio-9p (virtfs) support for container-cross-install batch
  imports, which provides ~50x speedup over base64-over-serial.

  The issue was that native recipes don't see target DISTRO_FEATURES, so
  qemu-system-native wasn't getting virtfs enabled. Fix by:

  - layer.conf: Propagate virtualization to DISTRO_FEATURES_NATIVE when
    vcontainer or virtualization is in target DISTRO_FEATURES
  - qemu-system-native: Check DISTRO_FEATURES_NATIVE for virtfs enable
  - container-cross-install: Prepend the native sysroot to PATH so
    vrunner finds the QEMU with virtfs support

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* vcontainer: fix runc/crun conflict in multiconfig builds (Bruce Ashfield, 2026-01-23; 3 files, -12/+23)

  The vruntime distro is used for multiconfig builds of both vdkr
  (Docker/runc) and vpdmn (Podman/crun) images. When CONTAINER_PROFILE or
  VIRTUAL-RUNTIME_container_runtime is set, containerd and podman pull
  their preferred runtime via RDEPENDS, causing package conflicts.

  Fix by having the vruntime distro NOT participate in CONTAINER_PROFILE:

  - Set VIRTUAL-RUNTIME_container_runtime = "" to prevent automatic
    runtime selection
  - Explicitly install runc in vdkr-rootfs-image.bb
  - Explicitly install crun in vpdmn-rootfs-image.bb

  This allows both images to be built in the same multiconfig without
  conflicts, while standard container-host images continue to use
  CONTAINER_PROFILE normally.

  Also add kernel-modules to vdkr-rootfs-image for overlay filesystem
  support.

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* crun: add RCONFLICTS to prevent runc package conflict (Bruce Ashfield, 2026-01-23; 1 file, -0/+3)

  When CRUN_AS_RUNC is enabled (default), crun creates a /usr/bin/runc
  symlink that conflicts with the runc package's /usr/bin/runc binary.

  Add RCONFLICTS to declare this conflict so package managers prevent
  both from being installed simultaneously.

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
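  In recipe terms the declaration presumably amounts to something like
  the following (a sketch; the commit likely conditions it on
  CRUN_AS_RUNC, which is not shown here):

```
# crun recipe fragment (sketch): refuse co-installation with the runc
# package, since both would own /usr/bin/runc
RCONFLICTS:${PN} = "runc"
```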
* vcontainer: add sanity checks and auto-enable virtfs for QEMU (Bruce Ashfield, 2026-01-21; 2 files, -0/+46)

  Add a sanity check that warns when the vcontainer distro feature is
  enabled but BBMULTICONFIG is missing the required vruntime-*
  multiconfigs.

  Add a qemu-system-native bbappend to auto-enable virtfs (virtio-9p)
  when the vcontainer or virtualization distro feature is set. This is
  required for the fast batch-import path in container-cross-install.

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* container-bundles: add multilayer container bundle recipe (Bruce Ashfield, 2026-01-21; 1 file, -0/+27)

  Add a demo recipe that bundles app-container-multilayer to demonstrate
  multi-layer OCI images with container-cross-install.

  Usage:

    IMAGE_INSTALL:append:pn-container-image-host = " multilayer-container-bundle"

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* vcontainer: add multi-arch OCI support (Bruce Ashfield, 2026-01-21; 5 files, -11/+1259)

  Add functions to detect and handle the multi-architecture OCI Image
  Index format with automatic platform selection during import. Also add
  oci-multiarch.bbclass for build-time multi-arch OCI creation.

  Runtime support (vcontainer-common.sh):
  - is_oci_image_index(): detect multi-arch OCI images
  - get_oci_platforms(): list available platforms
  - select_platform_manifest(): select manifest for target architecture
  - extract_platform_oci(): extract a single platform to a new OCI dir
  - normalize_arch_to_oci/from_oci(): architecture name mapping
  - Update vimport to auto-select the platform from multi-arch images

  Build-time support (oci-multiarch.bbclass):
  - Create an OCI Image Index from multiconfig builds
  - Collect images from vruntime-aarch64, vruntime-x86-64
  - Combine blobs and create a unified manifest list

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
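  The architecture mapping is the simplest piece to illustrate: OCI image
  indexes use Go-style platform names, while the build uses uname-style
  names. A sketch of what normalize_arch_to_oci() plausibly does (the
  mapping table is an assumption based on common conventions, not the
  actual code):

```shell
#!/bin/sh
# Sketch: map uname -m style names to OCI platform.architecture values.
normalize_arch_to_oci() {
    case "$1" in
        x86_64)        echo "amd64" ;;
        aarch64|arm64) echo "arm64" ;;
        armv7l)        echo "arm" ;;
        *)             echo "$1" ;;   # pass unknown names through
    esac
}

normalize_arch_to_oci x86_64    # -> amd64
normalize_arch_to_oci aarch64   # -> arm64
```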
* container-registry: abstract config and add multi-directory push (Bruce Ashfield, 2026-01-21; 2 files, -45/+352)

  Abstract registry configuration for Docker/Podman compatibility and add
  multi-directory scanning for easy multi-arch manifest list creation.

  - Support both DOCKER_REGISTRY_INSECURE and CONTAINER_REGISTRY_INSECURE
  - Add DEPLOY_DIR_IMAGES to scan all machine directories
  - Support push by path (single OCI) and push by name (all archs)
  - Add environment variable overrides for flexibility
  - Single 'push' command now creates multi-arch manifest lists

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* container-cross-install: fix image naming and default runtime (Bruce Ashfield, 2026-01-21; 1 file, -17/+56)

  Fix extract_container_info() to properly handle multi-part container
  names and add automatic runtime detection based on CONTAINER_PROFILE.

  - Fix multi-part name parsing (app-container-multilayer-latest-oci now
    correctly becomes app-container-multilayer:latest)
  - Add CONTAINER_DEFAULT_RUNTIME from CONTAINER_PROFILE
  - Add CONTAINER_IMPORT_TIMEOUT_BASE/PER for dynamic timeout scaling

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
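  The corrected parsing can be sketched in shell: strip the trailing -oci
  marker, then split the last dash-separated field off as the tag (the
  function body below is an illustration of the described behavior, not
  the actual bbclass code):

```shell
#!/bin/sh
# Sketch: parse "<name>-<tag>-oci" into "<name>:<tag>", where <name>
# itself may contain dashes (the multi-part case the commit fixes).
extract_container_info() {
    base="${1%-oci}"      # drop the trailing "-oci" suffix
    tag="${base##*-}"     # last dash-separated field is the tag
    name="${base%-*}"     # everything before it is the image name
    echo "${name}:${tag}"
}

extract_container_info app-container-multilayer-latest-oci
# -> app-container-multilayer:latest
```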
* vcontainer: add virtio-9p fast path for batch imports (Bruce Ashfield, 2026-01-21; 4 files, -50/+284)

  Add virtio-9p filesystem support for faster storage output during batch
  container imports, replacing the slow base64-over-console method.

  - Add --timeout option for configurable import timeouts
  - Mount the virtio-9p share in batch-import mode
  - Parse the _9p=1 kernel parameter for 9p availability
  - Write storage.tar directly to the shared filesystem
  - Reduces import time from ~600s to ~11s for large containers

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
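  Guest-side detection of the share presumably boils down to scanning the
  kernel command line for the _9p=1 token, along these lines (a sketch;
  the real init script would read /proc/cmdline):

```shell
#!/bin/sh
# Sketch: check a kernel command-line string for the _9p=1 parameter.
have_9p() {
    case " $1 " in
        *" _9p=1 "*) return 0 ;;
        *)           return 1 ;;
    esac
}

if have_9p "console=ttyS0 _9p=1 root=/dev/vda"; then
    echo "9p share available"
fi
```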
* image-oci: add layer caching for multi-layer OCI builds (Bruce Ashfield, 2026-01-21; 4 files, -2/+731)

  Add layer caching to speed up multi-layer OCI image rebuilds. When
  enabled, pre-installed package layers are cached to disk and restored
  on subsequent builds, avoiding repeated package installation.

  New variables:
  - OCI_LAYER_CACHE: Enable/disable caching (default "1")
  - OCI_LAYER_CACHE_DIR: Cache location
    (default ${TOPDIR}/oci-layer-cache/${MACHINE})

  The cache key is computed from:
  - Layer name and type
  - Sorted package list
  - Package versions from PKGDATA_DIR
  - MACHINE and TUNE_PKGARCH

  The cache automatically invalidates when:
  - Package versions change
  - The layer definition changes
  - The architecture changes

  Benefits:
  - First build: ~10-30s per layer (cache miss, packages installed)
  - Subsequent builds: ~1s per layer (cache hit, files copied)
  - Shared across recipes with identical layer definitions

  The build log shows cache status:

    NOTE: OCI Cache HIT: Layer 'base' (be88c180f651416b)
    NOTE: OCI: Pre-installed packages for 3 layers (cache: 3 hits, 0 misses)

  Also adds a comprehensive pytest suite for multi-layer OCI
  functionality, including tests for 1/2/3 layer modes and cache
  behavior.

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
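  The key derivation can be sketched as hashing the layer definition plus
  a sorted package list. This simplified version (names and field order
  are assumptions) deliberately omits the per-package version lookup from
  PKGDATA_DIR that the real implementation includes:

```shell
#!/bin/sh
# Sketch of the cache-key idea: identical inputs hash to the same short
# key regardless of package order; any input change invalidates it.
oci_layer_cache_key() {
    layer="$1"; type="$2"; pkgs="$3"; machine="$4"; arch="$5"
    # Word-split the package list and sort it for order independence.
    sorted=$(printf '%s\n' $pkgs | sort | tr '\n' ' ')
    printf '%s|%s|%s|%s|%s' "$layer" "$type" "$sorted" "$machine" "$arch" \
        | sha256sum | cut -c1-16
}

oci_layer_cache_key base packages "busybox base-files" qemux86-64 core2-64
```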
* image-oci: add multi-layer OCI image support with OCI_LAYERS (Bruce Ashfield, 2026-01-21; 4 files, -34/+488)

  Add support for creating multi-layer OCI images with explicit layer
  definitions via the OCI_LAYERS variable. This enables fine-grained
  control over container layer composition.

  New variables:
  - OCI_LAYER_MODE: Set to "multi" for explicit layer definitions
  - OCI_LAYERS: Define layers as "name:type:content" entries
    - packages: Install specific packages in a layer
    - directories: Copy directories from IMAGE_ROOTFS
    - files: Copy specific files from IMAGE_ROOTFS

  Package installation uses Yocto's package manager classes (RpmPM,
  OpkgPM) for consistency with do_rootfs, rather than calling dnf/opkg
  directly.

  Example usage:

    OCI_LAYER_MODE = "multi"
    OCI_LAYERS = "\
        base:packages:base-files+base-passwd+netbase \
        shell:packages:busybox \
        app:packages:curl \
    "

  This creates a 3-layer OCI image with discrete base, shell, and app
  layers that can be shared and cached independently.

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* recipes: add multi-layer OCI example recipes (Bruce Ashfield, 2026-01-21; 3 files, -0/+102)

  Add example recipes demonstrating multi-layer OCI image building:

  alpine-oci-base_3.19.bb:
  - Fetches Alpine 3.19 from Docker Hub using container-bundle
  - Uses CONTAINER_BUNDLE_DEPLOY for use as an OCI_BASE_IMAGE source
  - Pinned digest for reproducible builds

  app-container-alpine.bb:
  - Demonstrates external base image usage
  - Layers Yocto packages (busybox) on top of Alpine
  - Uses OCI_IMAGE_CMD for Docker-like behavior

  app-container-layered.bb:
  - Demonstrates local base image usage
  - Layers Yocto packages on top of container-base
  - Uses OCI_IMAGE_CMD for Docker-like behavior

  Both app containers produce 2-layer OCI images where the base layer is
  shared, reducing storage and transfer costs.

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* docs: add OCI multi-layer and vshell documentation (Bruce Ashfield, 2026-01-21; 1 file, -0/+46)

  Update container-bundling.md with a new "OCI Multi-Layer Images"
  section explaining:

  - Single vs multi-layer image differences
  - OCI_BASE_IMAGE usage (recipe name or path)
  - OCI_IMAGE_CMD vs OCI_IMAGE_ENTRYPOINT behavior
  - When to use CMD (base images) vs ENTRYPOINT (wrapper tools)

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* vcontainer: add vshell command for VM debug access (Bruce Ashfield, 2026-01-21; 1 file, -0/+20)

  Add a vshell command to vdkr/vpdmn for interactive shell access to the
  QEMU VM where Docker/Podman runs. This is useful for debugging
  container issues directly inside the virtual environment.

  Usage:

    vdkr vmemres start
    vdkr vshell
    # Now inside the VM; can run docker commands directly
    docker images
    docker inspect <image>
    exit

  The vshell command requires the memory-resident daemon to be running
  (vmemres start). It opens an interactive shell via the daemon's
  --daemon-interactive mode.

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* container-bundle: add CONTAINER_BUNDLE_DEPLOY for base layer use (Bruce Ashfield, 2026-01-21; 1 file, -0/+65)

  Add the CONTAINER_BUNDLE_DEPLOY variable to enable dual-use of
  container-bundle:

  1. Target packages (existing): Creates installable packages for target
     container storage (Docker/Podman)
  2. Base layer source (new): When CONTAINER_BUNDLE_DEPLOY = "1", also
     deploys the fetched OCI image to DEPLOY_DIR_IMAGE for use as a base
     layer via OCI_BASE_IMAGE

  This enables fetching external images (docker.io, quay.io) and using
  them as base layers for Yocto-built container images.

  Example usage:

    # recipes-containers/oci-base-images/alpine-oci-base_3.19.bb
    inherit container-bundle
    CONTAINER_BUNDLES = "docker.io/library/alpine:3.19"
    CONTAINER_DIGESTS[docker.io_library_alpine_3.19] = "sha256:..."
    CONTAINER_BUNDLE_DEPLOY = "1"

    # Then in your app container recipe:
    OCI_BASE_IMAGE = "alpine-oci-base"
    IMAGE_INSTALL = "myapp"

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* image-oci: add multi-layer OCI support and CMD default (Bruce Ashfield, 2026-01-21; 2 files, -15/+395)

  Add support for multi-layer OCI images, enabling base + app layer
  builds.

  Multi-layer support:
  - Add OCI_BASE_IMAGE variable to specify the base layer (recipe name
    or path)
  - Add OCI_BASE_IMAGE_TAG for selecting the base image tag
    (default: latest)
  - Resolve the base image type (recipe/path/remote) at parse time
  - Copy the base OCI layout before adding the new layer via umoci repack
  - Fix merged-usr whiteout ordering issue for non-merged-usr base images
    (replaces problematic whiteouts with filtered entries to avoid Docker
    pull failures when layering merged-usr on a traditional layout)

  CMD/ENTRYPOINT behavior change:
  - Add OCI_IMAGE_CMD variable (default: "/bin/sh")
  - Change the OCI_IMAGE_ENTRYPOINT default to the empty string
  - This makes `docker run image /bin/sh` work as expected (like Docker
    Hub images)
  - OCI_IMAGE_ENTRYPOINT_ARGS still works for legacy compatibility
  - Fix shlex.split() for proper shell quoting in CMD/ENTRYPOINT values

  The multi-layer feature requires the umoci backend (default). The sloci
  backend only supports single-layer images and will error if
  OCI_BASE_IMAGE is set.

  Example usage:

    OCI_BASE_IMAGE = "container-base"
    IMAGE_INSTALL = "myapp"
    OCI_IMAGE_CMD = "/usr/bin/myapp"

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* container-registry: add management commands and documentation (Bruce Ashfield, 2026-01-21; 3 files, -17/+272)

  Registry management commands:
  - delete <image>:<tag>: Remove tagged images from the registry
  - gc: Garbage collection with dry-run preview and confirmation
  - push <image> --tag: Explicit tags now require an image name (prevents
    accidentally tagging all images with the same version)

  Config improvements:
  - Copy config to the storage directory with a baked-in storage path
  - Fixes gc, which reads the config directly (not via env var)
  - All registry files now live in ${TOPDIR}/container-registry/

  Documentation:
  - Development Loop workflow (build, push, pull, test)
  - Build-time OCI labels (revision, branch, created)
  - Complete command reference

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* image-oci: add build-time metadata labels for traceability (Bruce Ashfield, 2026-01-21; 2 files, -1/+81)

  Automatically embed source and build information into OCI images using
  standard OCI annotations (opencontainers.org image-spec):

  - org.opencontainers.image.revision: git commit SHA
  - org.opencontainers.image.ref.name: git branch name
  - org.opencontainers.image.created: ISO 8601 build timestamp
  - org.opencontainers.image.version: PV (if meaningful)

  New variables:
  - OCI_IMAGE_REVISION: explicit SHA override (auto-detects from TOPDIR)
  - OCI_IMAGE_BRANCH: explicit branch override (auto-detects from TOPDIR)
  - OCI_IMAGE_BUILD_DATE: explicit timestamp override (auto-generated)
  - OCI_IMAGE_APP_RECIPE: hook for future cross-recipe extraction

  Set any variable to "none" to disable that specific label.

  This enables 1:1 traceability between container images and source code,
  following industry best practices for CI/CD and release management.

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* container-registry: add industry-standard tag strategies (Bruce Ashfield, 2026-01-21; 3 files, -25/+247)

  Add comprehensive tag support for registry push operations.

  Tag strategies (CONTAINER_REGISTRY_TAG_STRATEGY):
  - sha/git: short git commit hash for traceability
  - branch: git branch name (sanitized) for dev workflows
  - semver: nested SemVer tags (1.2.3 -> 1.2.3, 1.2, 1)
  - timestamp: YYYYMMDD-HHMMSS format
  - version: single version tag from PV
  - latest: the "latest" tag
  - arch: append architecture suffix

  Helper script enhancements:
  - push --tag <tag>: explicit tags (repeatable)
  - push --strategy <strategies>: override tag strategy
  - push --version <ver>: version for the semver strategy
  - Baked-in defaults from bitbake variables
  - Environment variable overrides supported

  This aligns with industry practices:
  - Git SHA for CI/CD traceability
  - SemVer nested tags for release management
  - Branch tags for feature development

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
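  The nested-SemVer expansion (1.2.3 -> 1.2.3, 1.2, 1) is simple to
  sketch in shell; the function name is illustrative:

```shell
#!/bin/sh
# Sketch of nested SemVer tag expansion: a full X.Y.Z version also gets
# its X.Y and X prefixes as tags, so "track the latest 1.2.x" or
# "track the latest 1.x" pulls keep working across releases.
semver_tags() {
    ver="$1"
    major="${ver%%.*}"                        # X
    minor="${ver#*.}"; minor="${minor%%.*}"   # Y
    echo "$ver ${major}.${minor} ${major}"
}

semver_tags 1.2.3   # -> 1.2.3 1.2 1
```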
* tests: add container registry pytest tests (Bruce Ashfield, 2026-01-21; 5 files, -5/+1106)

  Add pytest tests for registry functionality:

  - test_vdkr_registry.py: vconfig registry, image commands, CLI override
  - test_container_registry_script.py: start/stop/push/import/list/tags
  - conftest.py: --registry-url, --registry-script options

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* vdkr: add registry configuration and pull fallback (Bruce Ashfield, 2026-01-21; 5 files, -5/+493)

  Add registry support to vdkr:

  - vconfig registry command for persistent config
  - --registry flag for one-off usage
  - Registry-first, Docker Hub fallback for pulls
  - Baked-in registry config via CONTAINER_REGISTRY_URL
  - Image commands (inspect, history, rmi, images) work without transform

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* container-registry: add local OCI registry infrastructure (Bruce Ashfield, 2026-01-21; 10 files, -0/+1422)

  Add container registry support for Yocto container workflows:

  - container-registry.bbclass with helper functions
  - container-registry-index.bb generates a helper script with baked
    paths
  - docker-registry-config.bb for the Docker daemon on targets
  - container-oci-registry-config.bb for Podman/Skopeo/Buildah targets
  - IMAGE_FEATURES container-registry for easy target configuration

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* tests: increase stop command timeouts to 30 seconds (Bruce Ashfield, 2026-01-21; 1 file, -10/+10)

  Docker stop has a default 10-second grace period before SIGKILL, so
  test timeouts of 10 seconds were insufficient. Increase all stop
  timeouts to 30 seconds to account for the grace period plus command
  overhead.

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* tests: add --network=host backward compatibility test (Bruce Ashfield, 2026-01-21; 2 files, -4/+67)

  Add test_network_host_backward_compat to verify that explicit
  --network=host still works with the new bridge networking default.

  Uses busybox httpd with a configurable port, since static port forwards
  now map host_port -> host_port on the VM (for bridge networking's
  Docker -p handling).

  Also update test docstrings to reflect bridge networking as the new
  default, and add port 8082 to TEST_PORTS for orphan cleanup.

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* vcontainer: fix ps -q to suppress port forward display (Bruce Ashfield, 2026-01-21; 2 files, -5/+13)

  When using `ps -q` or `ps --quiet`, only container IDs should be
  output. The port forward registry display was being included, which
  broke cleanup code that expected just container IDs.

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* vrunner: update static port forwarding for bridge networking (Bruce Ashfield, 2026-01-21; 1 file, -5/+8)

  Update the static port forwarding (used at QEMU startup) to match the
  dynamic QMP port forwarding change. With bridge networking:

  - QEMU forwards host:port -> VM:port
  - Docker's iptables handles VM:port -> container:port

  Previously the static port forward went directly to the container port
  (host:8080 -> VM:80), which doesn't work with bridge networking where
  Docker needs to handle the final hop.

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* tests: add cleanup for orphan QEMU and stale test state (Bruce Ashfield, 2026-01-21; 1 file, -0/+98)

  Add a session-scoped autouse fixture that, at session start:

  1. Kills any QEMU processes holding ports used by tests (8080, 8081,
     8888, etc.) to handle orphans from manual testing or crashed runs
  2. Cleans up corrupt test state directories (docker-state.img with
     "needs journal recovery") to ensure tests start fresh

  This ensures tests don't fail due to leftover state from previous runs
  or manual testing.

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* recipes: fix multiconfig build comments (Bruce Ashfield, 2026-01-21; 1 file, -2/+2)

  Fix the bitbake multiconfig commands in rootfs recipe comments. The
  multiconfig names are vruntime-aarch64 and vruntime-x86-64, not
  vdkr-*/vpdmn-*.

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* tests: add bridge networking test (Bruce Ashfield, 2026-01-21; 1 file, -2/+70)

  Add test_multiple_containers_same_internal_port() to verify the key
  benefit of bridge networking: multiple containers can listen on the
  same internal port with different host port mappings.

  The test runs two nginx containers both listening on port 80
  internally, mapped to host ports 8080 and 8081, and verifies both are
  accessible.

  Also update the TestPortForwarding docstring to reflect the change from
  host networking to bridge networking.

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* vcontainer: update CLI for bridge networking (Bruce Ashfield, 2026-01-21; 1 file, -12/+39)

  Update the CLI wrapper to work with Docker bridge networking:

  - qmp_add_hostfwd(): Change the QEMU port forward from
    host_port->guest_port to host_port->host_port. With bridge
    networking, Docker's iptables handles the final hop
    (VM:host_port -> container:guest_port).
  - Default network mode: Remove the --network=host default. Docker's
    bridge is now the default, giving each container its own IP. Users
    can still explicitly use --network=host for legacy behavior.
  - Update help text to document the new bridge networking behavior.

  Port forwarding flow is now:

    Host:8080 -> QEMU -> VM:8080 -> Docker iptables -> Container:80

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* vdkr: enable Docker bridge networking (Bruce Ashfield, 2026-01-21; 2 files, -4/+6)

  Enable Docker's default bridge network (docker0, 172.17.0.0/16) inside
  the QEMU VM to allow multiple containers to listen on the same internal
  port with different host port mappings.

  Changes:
  - Add the iptables package to vdkr-rootfs-image for Docker NAT rules
  - Change dockerd options in vdkr-init.sh:
    - Set --iptables=true (was false)
    - Remove --bridge=none to enable the default docker0 bridge

  This enables the workflow:

    vdkr run -d -p 8080:80 --name nginx1 nginx:alpine
    vdkr run -d -p 8081:80 --name nginx2 nginx:alpine
    # Both work: each container gets its own 172.17.0.x IP

  Previously with --network=host (the old default), both containers would
  try to bind port 80 on the VM's single IP, causing conflicts.

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* vcontainer-common: fix QMP netdev ID for hostfwd commands (Bruce Ashfield, 2026-01-21, 1 file, -2/+2)
|   The QEMU netdev is named 'net0', not 'user.0'. Fix the hostfwd_add
|   and hostfwd_remove commands to use the correct netdev ID.
|
|   Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
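As a minimal sketch of what such a hostfwd command looks like when sent through QMP's `human-monitor-command` (the helper name and exact JSON layout here are illustrative assumptions, not the actual vcontainer-common.sh code):

```shell
#!/bin/bash
# Build the QMP payload that adds a host port forward. The netdev ID in
# the monitor command must match the id= given to -netdev ('net0' here);
# the legacy 'user.0' name does not exist under explicit id naming.
# qmp_hostfwd_json is a hypothetical helper, not the real project code.
qmp_hostfwd_json() {
    local host_port="$1" guest_port="$2"
    printf '{"execute": "human-monitor-command", "arguments": {"command-line": "hostfwd_add net0 tcp::%s-:%s"}}\n' \
        "$host_port" "$guest_port"
}

qmp_hostfwd_json 8080 80
```

The resulting JSON line would be written to the QEMU QMP socket (after the usual `qmp_capabilities` handshake) to forward host port 8080 to guest port 80 at runtime.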
* vcontainer-common: remove 'local' keywords from case blocks (Bruce Ashfield, 2026-01-21, 1 file, -10/+10)
|   The 'local' keyword can only be used inside functions, not in the
|   main script's case blocks. Remove 'local' from variables in the
|   run, ps, and vmemres case handlers.
|
|   Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
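A short sketch of the scoping rule behind this fix (the function and variable names are made up for illustration):

```shell
# 'local' is only legal inside a function body; using it at the top
# level of a script (including inside a case block in the main script)
# fails with an error such as "local: can only be used in a function".
set_inside() {
    local name="function-scope"   # valid: we are inside a function
    echo "$name"
}
set_inside

case "run" in
    run)
        # At top level, a plain assignment must be used instead of
        # 'local'; the variable is simply global to the script.
        mode="top-level"
        echo "$mode"
        ;;
esac
```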
* vcontainer-init: fix idle timeout detection to distinguish EOF from timeout (Bruce Ashfield, 2026-01-21, 1 file, -5/+9)
|   The 'read -t' command returns different exit codes:
|
|     0    - successful read
|     1    - EOF (connection closed)
|     >128 - timeout (typically 142)
|
|   Previously, any empty result was treated as a timeout, but when the
|   host closes the connection after PING/PONG, read returns EOF
|   immediately. This caused the daemon to shut down right after
|   startup.
|
|   Now we capture the exit code and only trigger an idle shutdown when
|   read actually times out (exit code > 128).
|
|   Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* vrunner: fix file descriptor close syntax for coproc (Bruce Ashfield, 2026-01-21, 1 file, -1/+1)
|   The syntax {SOCAT[0]}<&- is invalid - curly braces are for
|   allocating new file descriptors, not referencing existing ones. Use
|   eval with ${SOCAT[0]} to properly close the coproc file
|   descriptors.
|
|   Fixes the "SOCAT[0]: ambiguous redirect" error when using daemon
|   mode.
|
|   Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
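A bash-only sketch of the pattern (a trivial `cat` coproc stands in for the real socat coproc; variable names are illustrative):

```shell
#!/bin/bash
# ${CAT[0]} / ${CAT[1]} hold fd *numbers*, but writing {CAT[0]}<&-
# asks bash to allocate a NEW descriptor into that name, which is why
# it fails. Expand the value first, here via eval on plain copies.
coproc CAT { cat; }
in_fd=${CAT[0]} out_fd=${CAT[1]} cat_pid=$CAT_PID

echo "hello" >&"$out_fd"          # write a line to the coproc
IFS= read -r reply <&"$in_fd"     # read it back from the coproc
echo "$reply"

eval "exec ${out_fd}>&-"          # close write end; cat sees EOF, exits
eval "exec ${in_fd}<&-"           # close read end
wait "$cat_pid" 2>/dev/null || true
echo "closed"
```

Copying the fds into plain variables before the coproc exits also sidesteps bash unsetting the `CAT` array when the coprocess terminates.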
* tests: add tests for auto-start and dynamic port forwarding (Bruce Ashfield, 2026-01-21, 2 files, -0/+250)
|   Add test coverage for the new vmemres features:
|
|   TestAutoStartDaemon:
|   - test_auto_start_on_first_command: Verify the daemon auto-starts
|   - test_no_daemon_flag: Verify --no-daemon uses ephemeral mode
|   - test_vconfig_auto_daemon: Test the auto-daemon config setting
|   - test_vconfig_idle_timeout: Test the idle-timeout config setting
|
|   TestDynamicPortForwarding:
|   - test_dynamic_port_forward_run: 'run -d -p' adds a forward
|     dynamically
|   - test_port_forward_cleanup_on_stop: Forwards are removed on stop
|   - test_port_forward_cleanup_on_rm: Forwards are removed on rm
|   - test_multiple_dynamic_port_forwards: Multiple containers work
|
|   TestPortForwardRegistry:
|   - test_port_forward_cleared_on_memres_stop: Registry is cleared
|
|   Also add an ensure_busybox() helper to both VdkrRunner and
|   VpdmnRunner.
|
|   Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* vcontainer: add dynamic port forwarding via QEMU QMP (Bruce Ashfield, 2026-01-21, 2 files, -17/+291)
|   Add dynamic port forwarding for containers using QEMU's QMP (QEMU
|   Machine Protocol). This enables running multiple detached
|   containers with different port mappings without needing to declare
|   ports upfront.
|
|   Key changes:
|
|   - vrunner.sh: Add a QMP socket (-qmp unix:...) to daemon startup
|   - vcontainer-common.sh:
|     * QMP helper functions (qmp_send, qmp_add_hostfwd,
|       qmp_remove_hostfwd)
|     * A port forward registry (port-forwards.txt) for tracking
|       mappings
|     * Parse the -p, -d, and --name flags in the run command
|     * Add port forwards via QMP for detached containers
|     * An enhanced ps command that shows host port forwards
|     * Cleanup of port forwards on stop/rm/vmemres stop
|
|   Usage:
|
|     vdkr run -d -p 8080:80 nginx    # Adds an 8080->80 forward
|     vdkr run -d -p 3000:3000 myapp  # Adds a 3000->3000 forward
|     vdkr ps                         # Shows containers + port forwards
|     vdkr stop nginx                 # Removes the 8080->80 forward
|
|   This solves the problem where an auto-started vmemres couldn't
|   handle multiple containers with different port mappings. QMP allows
|   adding and removing port forwards at runtime without restarting the
|   daemon.
|
|   Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
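The registry idea behind stop/rm cleanup can be sketched in a few lines of shell (the file format and function names below are assumptions about the shape, not the actual port-forwards.txt implementation):

```shell
# One "name host_port guest_port" record per container forward, so
# stop/rm can look up and remove exactly the forwards that container
# added. Uses a temp file; the real code keeps a persistent registry.
REG=$(mktemp)

pf_add()    { echo "$1 $2 $3" >> "$REG"; }
pf_remove() { grep -v "^$1 " "$REG" > "$REG.tmp"; mv "$REG.tmp" "$REG"; }
pf_lookup() { awk -v n="$1" '$1 == n { print $2 "->" $3 }' "$REG"; }

pf_add nginx1 8080 80
pf_add nginx2 8081 80

pf_lookup nginx1      # the forward to tear down when nginx1 stops
pf_remove nginx1
cat "$REG"            # nginx2's forward survives

rm -f "$REG"
```

On `stop`, the looked-up mapping would be passed to a QMP `hostfwd_remove`, mirroring how it was added.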
* vcontainer: add auto-start daemon with idle timeout (Bruce Ashfield, 2026-01-21, 4 files, -15/+102)
|   Add automatic daemon startup and idle-timeout cleanup for
|   vdkr/vpdmn:
|
|   - The vmemres daemon auto-starts on the first command (no manual
|     start needed)
|   - The daemon auto-stops after an idle timeout (default: 30 minutes)
|   - A --no-daemon flag for ephemeral mode (single-shot QEMU)
|   - New config keys: idle-timeout, auto-daemon
|
|   Changes:
|
|   - vcontainer-init-common.sh: Parse idle_timeout from the kernel
|     cmdline; add a 'read -t' timeout to the daemon loop for
|     auto-shutdown
|   - vrunner.sh: Add an --idle-timeout option and pass it on the
|     kernel cmdline
|   - vcontainer-common.sh: Auto-start logic in run_runtime_command(),
|     the --no-daemon flag, and config defaults
|   - container-cross-install.bbclass: Add --no-daemon for explicit
|     ephemeral mode during Yocto builds
|
|   Configuration:
|
|     vdkr vconfig idle-timeout 3600   # 1 hour timeout
|     vdkr vconfig auto-daemon false   # Disable auto-start
|
|   Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
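A sketch of pulling an `idle_timeout=` value out of a kernel command line, as the init script must do (the real parsing lives in vcontainer-init-common.sh; this function is a guess at the shape, using the commit's 30-minute default):

```shell
# Scan the space-separated cmdline for idle_timeout=<seconds>;
# fall back to 1800 (30 minutes) when the parameter is absent.
parse_idle_timeout() {
    local tok val=1800
    for tok in $1; do
        case "$tok" in
            idle_timeout=*) val=${tok#idle_timeout=} ;;
        esac
    done
    echo "$val"
}

parse_idle_timeout "console=ttyS0 idle_timeout=3600 quiet"   # 1-hour value
parse_idle_timeout "console=ttyS0 quiet"                     # default
```

In the VM, the same function would be fed `$(cat /proc/cmdline)` and its result used as the `read -t` timeout in the daemon loop.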