path: root/recipes-containers/vcontainer
Commit message | Author | Age | Files | Lines
* vcontainer-tarball: set S to UNPACKDIR for do_qa_unpack check (Bruce Ashfield, 40 hours ago, 1 file, -0/+3)

  The recipe only has file:// SRC_URI entries, which unpack directly into
  UNPACKDIR, not a ${BP} subdirectory. The new do_qa_unpack QA check in
  insane.bbclass warns when S doesn't exist after unpack. Set S explicitly
  to satisfy the check.

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
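The described fix is a one-line recipe change; a minimal fragment of what it likely looks like (assuming the stock UNPACKDIR variable from current oe-core):

```
# file:// entries land directly in UNPACKDIR, so point S there
# to satisfy the do_qa_unpack existence check
S = "${UNPACKDIR}"
```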
* vcontainer: fix daemon mode missing shared directory for 9p (Bruce Ashfield, 2026-02-26, 1 file, -1/+6)

  DAEMON_SHARE_DIR was referenced in the CA certificate copy and idle
  watchdog paths but never assigned, causing "cp: cannot create regular
  file /ca.crt: Permission denied" when starting the daemon. Create the
  share directory under DAEMON_SOCKET_DIR and register it as a 9p mount,
  matching the path expected by daemon_run().

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* vxn: add host-side OCI image cache and fix Docker iptables conflict (Bruce Ashfield, 2026-02-26, 1 file, -6/+268)

  Add a host-side OCI image cache at ~/.vxn/images/ for the vdkr/vpdmn
  standalone Xen path. Images pulled via skopeo are stored in a
  content-addressed layout (refs/ symlinks + store/ OCI dirs) so subsequent
  runs hit the cache without network access. New commands on Xen: pull,
  images, rmi, tag, inspect, image <subcmd>. The run path is unchanged;
  cache integration into hv_prepare_container is deferred to a follow-up.

  Also fix a Docker iptables conflict: when docker-moby and
  vxn-docker-config coexist on Dom0, Docker's default FORWARD DROP policy
  blocks DHCP for Xen DomU vifs on xenbr0. Adding "iptables": false to
  daemon.json prevents Docker from modifying iptables, since VM-based
  containers manage their own network stack.

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
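The daemon.json key mentioned above is the standard Docker daemon option; a minimal /etc/docker/daemon.json carrying just that setting would look like:

```json
{
  "iptables": false
}
```

With this in place dockerd no longer installs its FORWARD DROP policy, leaving the Xen bridge (xenbr0) traffic untouched.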
* vxn: add Docker/Podman integration and CLI frontends (Bruce Ashfield, 2026-02-26, 4 files, -40/+167)

  Add vdkr/vpdmn as Dom0 target packages with Xen auto-detection, native
  Docker/Podman config sub-packages, and OCI runtime fixes for Docker
  compatibility (JSON logging, root.path, kill --all, monitor PID
  lifecycle).

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* vxn: add controlling terminal and clean up interactive output (Bruce Ashfield, 2026-02-26, 1 file, -9/+8)

  Use setsid -c to establish a controlling terminal for the container
  shell, fixing "can't access tty; job control turned off" and enabling
  Ctrl-C signal delivery. Run in a subshell so setsid() succeeds without
  forking (PID 1 is already a session leader).

  Remove [vxn] diagnostic markers from interactive output now that
  terminal mode is working. Suppress the mount warning on the read-only
  input disk.

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* vxn: fix terminal mode hang and enable interactive container support (Bruce Ashfield, 2026-02-26, 4 files, -17/+67)

  The containerd shim's Create RPC hung indefinitely because go-runc
  captures the OCI runtime's stdout via a pipe, and cmd.Wait() blocks
  until all holders of the pipe's write end close it. The background
  monitor subshell inherited this pipe fd and held it open, preventing the
  shim from ever proceeding to ReceiveMaster() or calling Start.

  Fix by closing inherited stdout/stderr in the terminal-mode monitor with
  exec >/dev/null before entering the domain poll loop. Non-terminal mode
  is unaffected because the shim configures IO via FIFO dup2, where
  cmd.Wait() only waits for process exit.

  Additional changes for terminal mode support:
  - vxn-sendtty: set PTY to raw mode (cfmakeraw) before sending the fd
  - vxn-oci-runtime: wait up to 5s for the xenconsoled PTY, capture the
    sendtty return code, write a persistent debug file to
    /root/vxn-tty-debug, log every runtime invocation, remove stale debug
    logging
  - vxn-init.sh: add [vxn] diagnostic markers for terminal visibility,
    suppress kernel console messages early in interactive mode
  - vcontainer-preinit.sh: suppress kernel messages in quiet mode

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
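The pipe semantics behind the hang can be demonstrated in plain bash (an illustrative sketch, not the vxn code itself): a reader on a pipe only sees EOF once every holder of the write end closes it, so a background job that closes its inherited stdout up front lets the reader return immediately instead of waiting for the job to exit.

```shell
#!/bin/bash
start=$SECONDS
# The command substitution is the "reader" (like go-runc's cmd.Wait on the
# stdout pipe). The background job inherits the pipe's write end; closing
# it with 'exec >/dev/null' releases the reader at once. Without that exec
# line, the substitution would block for the full 3 seconds.
v=$( echo hi; { exec >/dev/null 2>&1; sleep 3; } & )
waited=$((SECONDS - start))
echo "captured: $v (waited ${waited}s)"
```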
* vcontainer: inject vxn-init.sh into vdkr and vpdmn rootfs images (Bruce Ashfield, 2026-02-26, 2 files, -0/+8)

  Install vxn-init.sh alongside the existing init scripts in both vdkr and
  vpdmn rootfs images. The Xen backend selects it at boot via the
  vcontainer.init=/vxn-init.sh kernel command line parameter. Add
  file-checksums tracking so the rootfs rebuilds when the script changes.

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* vcontainer: add bundle command for OCI runtime bundle creation (Bruce Ashfield, 2026-02-26, 1 file, -0/+118)

  Add a 'bundle' command to the vcontainer CLI for creating OCI runtime
  bundles from container images. It pulls the image via skopeo, extracts
  layers into rootfs/, resolves entrypoint/cmd/env from the OCI config,
  and generates config.json. Supports command override via the --
  separator. Only available on the Xen (vxn) backend.

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* vcontainer: generalize init scripts for pluggable hypervisor backends (Bruce Ashfield, 2026-02-26, 3 files, -15/+22)

  Make the preinit and guest init scripts hypervisor-agnostic:
  - vcontainer-preinit.sh: add a vcontainer.init= cmdline parameter for
    init script selection and vcontainer.blk= for the block device prefix
    (QEMU uses /dev/vda, Xen uses /dev/xvda)
  - vdkr-init.sh, vpdmn-init.sh: use a NINE_P_TRANSPORT variable for the
    9p mount transport (virtio for QEMU, xen for Xen)

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* vcontainer: add QEMU hypervisor backend and register in recipes (Bruce Ashfield, 2026-02-26, 3 files, -1/+267)

  Add vrunner-backend-qemu.sh implementing the hypervisor interface for
  QEMU (arch setup, KVM detection, disk/network/9p options, VM lifecycle,
  QMP control).

  Register the backend scripts in the vcontainer-native and
  vcontainer-tarball recipes so they are available in both build-time and
  standalone tarball contexts.

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* vxn: add containerd OCI runtime integration (Bruce Ashfield, 2026-02-26, 5 files, -0/+782)

  Add a shell-based OCI runtime (vxn-oci-runtime) that enables containerd
  to manage Xen DomU containers through the standard runc shim.
  Non-terminal container output flows back to ctr via the shim's pipe
  mechanism.

  New files:
  - vxn-oci-runtime: OCI runtime (create/start/state/kill/delete/features/logs)
  - vxn-sendtty.c: SCM_RIGHTS helper for terminal mode PTY passing
  - containerd-shim-vxn-v2: PATH trick wrapper for runc shim coexistence
  - containerd-config-vxn.toml: CRI config (vxn default, runc fallback)
  - vctr: convenience wrapper injecting --runtime io.containerd.vxn.v2

  Key design:
  - The monitor subprocess uses wait on xl console (not sleep-polling) for
    instant reaction when the domain dies, then extracts output markers
    and writes to stdout (shim pipe -> containerd FIFO -> ctr client)
  - cmd_state checks monitor PID liveness (not domain status) to prevent a
    premature cleanup race that killed the monitor before output
  - cmd_delete always destroys remnant domains (no --force needed)
  - Coexists with runc: /usr/libexec/vxn/shim/runc symlink + PATH trick

  Verified: vctr run --rm, vctr run -d, vxn standalone, vxn daemon mode.

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* vxn: add per-container DomU lifecycle and memres persistent DomU (Bruce Ashfield, 2026-02-26, 4 files, -154/+922)

  Per-container DomU lifecycle:
  - run -d: per-container DomU with daemon loop and PTY-based IPC
  - ps: show Running vs Exited(code) via ===STATUS=== PTY query
  - exec/stop/rm: send commands to the per-container DomU
  - logs: retrieve entrypoint output from the running DomU
  - Entrypoint death detection with a configurable grace period
  - Graceful error messages for ~25 unsupported commands
  - Command quoting fix: word-count+cut preserves internal spaces

  Memres (persistent DomU for fast container dispatch):
  - vxn memres start/stop/status/list for persistent DomU management
  - vxn run auto-dispatches to memres via xl block-attach + RUN_CONTAINER
  - The guest daemon loop handles ===RUN_CONTAINER===: mount the
    hot-plugged xvdb, extract the OCI rootfs, chroot exec the entrypoint,
    unmount, report
  - Falls back to ephemeral mode when memres is occupied (PING timeout)
  - Xen-specific memres list shows xl domains and orphan detection

  Tested: vxn memres start + vxn run --rm alpine echo hello + vxn run
  --rm hello-world both produce correct output.

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* vxn: fix non-interactive mode for clean container output (Bruce Ashfield, 2026-02-26, 2 files, -10/+41)

  Fix several issues preventing non-interactive mode (vxn --no-daemon run)
  from showing clean container output:
  - Fix console capture: check DAEMON_MODE instead of DAEMON_SOCKET in the
    Xen backend so ephemeral runs use xl console capture instead of the
    daemon socat bridge (DAEMON_SOCKET is always set; DAEMON_MODE is only
    "start" for actual daemon launches)
  - Fix a race condition: add post-loop marker detection after the VM
    exits, with a 2s delay for xl console to flush its buffer
  - Add stdbuf -oL to xl console for line-buffered output
  - Suppress mke2fs stdout (was only redirecting stderr)
  - Suppress kernel console messages during the VM lifecycle in
    non-verbose mode
  - Fix grep -P (Perl regex) for BusyBox compatibility in exit code
    parsing
  - Preserve the temp directory on failure for debugging
  - Fix hardcoded "QEMU" in error messages to "VM"

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
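On the grep -P point: BusyBox grep has no PCRE support, so patterns relying on -P have to be rewritten with POSIX tools. A sketch of the kind of substitution involved (the log line format and variable names here are made up for illustration, not taken from the vxn scripts):

```shell
#!/bin/sh
# Hypothetical console line carrying an exit code marker
line='===EXIT=== code=42'

# grep -P 'code=\K\d+' would need PCRE; a POSIX sed capture
# works on BusyBox, dash, and bash alike:
code=$(printf '%s\n' "$line" | sed -n 's/.*code=\([0-9][0-9]*\).*/\1/p')
echo "$code"
```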
* vxn: add Xen DomU container runtime with OCI image support (Bruce Ashfield, 2026-02-26, 5 files, -311/+1403)

  vxn runs OCI containers as Xen DomU guests: the VM IS the container. No
  Docker/containerd runs inside the guest; the init script directly mounts
  the container rootfs and execs the entrypoint via chroot.

  Host-side (Dom0):
  - vxn.sh: Docker-like CLI wrapper (sets HYPERVISOR=xen)
  - vrunner-backend-xen.sh: Xen xl backend for vrunner
  - hv_prepare_container(): pulls OCI images via skopeo, resolves the
    entrypoint from the OCI config using jq on the host
  - xl create for VM lifecycle (PVH on aarch64, PV on x86_64)
  - Bridge networking with iptables DNAT for port forwards
  - Console capture via xl console for ephemeral mode

  Guest-side (DomU):
  - vxn-init.sh: mounts the container rootfs from the input disk, extracts
    OCI layers, execs the entrypoint via chroot
  - Supports containers with or without /bin/sh
  - grep/sed fallback for OCI config parsing (no jq needed)
  - Daemon mode with a command loop on hvc1
  - vcontainer-init-common.sh: hypervisor detection, head -n fix
  - vcontainer-preinit.sh: init selection via vcontainer.init=

  Build system:
  - vxn-initramfs-create.inc: assembles boot blobs from the vruntime
    multiconfig, injects vxn-init.sh into the rootfs squashfs
  - vxn_1.0.bb: Dom0 package with scripts + blobs
  - nostamp on the install/package chain (blobs from DEPLOY_DIR are
    untracked by sstate)
  - vxn.cfg: Xen PV kernel config fragment

  Tested: vxn -it --no-daemon run --rm hello-world

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* vcontainer-tarball: remove parse-time banner (Bruce Ashfield, 2026-02-11, 1 file, -34/+3)

  The anonymous python function prints a banner unconditionally at parse
  time, which means it appears when building any recipe (e.g.
  xen-image-minimal), not just vcontainer-tarball.

  Remove the parse-time banner, since the post-build banner in
  do_populate_sdk:append() already provides the same information and only
  fires when actually building the tarball.

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* vcontainer: fix yocto-check-layer mcdepends parse error (Bruce Ashfield, 2026-02-09, 2 files, -10/+20)

  Fix a yocto-check-layer failure:

    ERROR: Multiconfig dependency
    mc::vruntime-x86-64:vpdmn-initramfs-create:do_deploy depends on
    nonexistent multiconfig configuration named configuration
    vruntime-x86-64

  Several recipes and classes declared static mcdepends referencing the
  vruntime-aarch64 and vruntime-x86-64 multiconfigs. When parsed without
  BBMULTICONFIG set (e.g. by yocto-check-layer), BitBake validates these
  and fails because the referenced multiconfigs don't exist.

  Move the mcdepends into anonymous python functions and only set them
  when the target multiconfig exists in BBMULTICONFIG, following the
  pattern established in meta/classes-recipe/kernel-fit-image.bbclass.

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
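A sketch of the guard pattern the commit describes, as it might appear in one of the recipes (the exact task and flag values here are reconstructed from the error message, not copied from the layer):

```
python () {
    # Only declare the mcdepends when the multiconfig is actually
    # configured; otherwise BitBake rejects the reference at parse time.
    if 'vruntime-x86-64' in (d.getVar('BBMULTICONFIG') or '').split():
        d.appendVarFlag('do_deploy', 'mcdepends',
            ' mc::vruntime-x86-64:vpdmn-initramfs-create:do_deploy')
}
```

This keeps yocto-check-layer (which parses with no BBMULTICONFIG) happy while preserving the cross-multiconfig dependency in real builds.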
* vcontainer: add secure registry support with virtio-9p CA transport (Bruce Ashfield, 2026-02-09, 4 files, -6/+246)

  Enable vdkr/vcontainer to pull from TLS-secured registries by
  transporting the CA certificate via a virtio-9p shared folder.

  vcontainer-common.sh: Add --secure-registry, --ca-cert, --registry-user,
  and --registry-password CLI options. Auto-detect a bundled CA cert at
  registry/ca.crt in the tarball and enable secure mode automatically.

  vrunner.sh: Copy the CA cert to the virtio-9p shared folder for both
  daemon and non-daemon modes. Fix daemon mode missing the _9p=1 kernel
  cmdline parameter, which prevented the init script from mounting the
  shared folder.

  vdkr-init.sh: Read the CA cert from /mnt/share/ca.crt (virtio-9p)
  instead of base64-decoding it from the kernel cmdline (which caused
  truncation for large certificates). Install the cert to
  /etc/docker/certs.d/{host}/ca.crt for Docker TLS verification. Support
  optional credential passing for authenticated registries.

  vcontainer-tarball.bb: Add the script files to SRC_URI for proper file
  tracking and rebuild triggers.

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* vcontainer: enable incremental builds by default (Bruce Ashfield, 2026-02-09, 4 files, -10/+38)

  Previously, vcontainer recipes had [nostamp] flags that forced all tasks
  to rebuild on every bitbake invocation, even when nothing changed. This
  was added as a workaround for dependency tracking issues but caused slow
  rebuild times.

  Changes:
  - Make [nostamp] conditional on the VCONTAINER_FORCE_BUILD variable
  - Default to normal stamp-based caching for faster incremental builds
  - file-checksums on do_rootfs still tracks init script changes
  - Add VCONTAINER_FORCE_BUILD status to the tarball build banner

  To enable the old always-rebuild behavior (for debugging dependency
  issues), set in local.conf:

    VCONTAINER_FORCE_BUILD = "1"

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* vcontainer-tarball: build all architectures via single bitbake command (Bruce Ashfield, 2026-02-09, 1 file, -6/+40)

  Previously, building vcontainer-tarball required multiple bitbake
  invocations or complex command lines to build both x86_64 and aarch64
  blobs. This was a usability issue.

  Changes:
  - mcdepends now triggers builds for BOTH architectures automatically
  - VCONTAINER_ARCHITECTURES defaults to "x86_64 aarch64" (was
    auto-detect)
  - Add an informational banner at parse time showing what will be built
  - Fix duplicate sanity check messages when multiconfig is active

  Usage is now simply:

    bitbake vcontainer-tarball

  To build only one architecture, set in local.conf:

    VCONTAINER_ARCHITECTURES = "x86_64"

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* vdkr-init: improve Docker daemon startup logging and error handling (Bruce Ashfield, 2026-02-09, 1 file, -5/+17)

  Improve debugging capabilities when the Docker daemon fails to start:
  - Log dockerd output to /var/log/docker.log instead of /dev/null
  - Capture the docker info exit code and output for diagnostics
  - Show the docker info error on every 10th iteration while waiting
  - Include the last docker info output and docker.log tail on failure
  - Extend the sleep on failure from 2s to 5s for log review

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* vcontainer: add host-side idle timeout with QMP shutdown (Bruce Ashfield, 2026-02-09, 3 files, -20/+154)

  Implement a reliable idle timeout for vmemres daemon mode using
  host-side monitoring with QMP-based shutdown, and container-aware idle
  detection via a virtio-9p shared file.

  Host-side changes (vrunner.sh):
  - Add the -no-reboot flag to QEMU for clean exit semantics
  - Spawn a background watchdog when the daemon starts
  - The watchdog monitors the activity file timestamp
  - The check interval scales to the idle timeout (timeout/5, clamped
    10-60s)
  - Read container status from the shared file (the guest writes it via
    virtio-9p)
  - Only shut down if no containers are running
  - Send the QMP "quit" command for graceful shutdown
  - The watchdog auto-exits if QEMU dies (no zombie processes)
  - Touch the activity file in daemon_send() for user activity tracking

  Config changes (vcontainer-common.sh):
  - Add idle-timeout to build_runner_args() so it's always passed

  Guest-side changes (vcontainer-init-common.sh):
  - Add a watchdog that writes container status to
    /mnt/share/.containers_running
  - The host reads this file instead of socket commands (avoids output
    corruption)
  - Close the inherited virtio-serial fd 3 in the watchdog subshell to
    prevent leaks
  - Guest-side shutdown logic is preserved but disabled (QMP is more
    reliable)
  - Handle Yocto read-only-rootfs volatile directories (/var/volatile)

  The shared file approach avoids sending container check commands through
  the daemon socket, which previously caused output corruption on the
  single-stream virtio-serial channel.

  The idle timeout is configurable via:

    vdkr vconfig idle-timeout <secs>

  Default: 1800 seconds (30 minutes)

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* vcontainer: consolidate initramfs-create recipes (Bruce Ashfield, 2026-02-09, 3 files, -80/+33)

  Update vcontainer-initramfs-create.inc to use the image-based approach:
  - Depend on tiny-initramfs-image for cpio.gz (replaces file extraction)
  - Depend on rootfs-image for squashfs (unchanged)
  - Remove DEPENDS on squashfs-tools-native (no longer extracting files)

  Update the recipe files to use the consolidated inc:
  - vdkr-initramfs-create_1.0.bb
  - vpdmn-initramfs-create_1.0.bb

  The boot flow remains unchanged: QEMU boots kernel + tiny initramfs ->
  preinit mounts rootfs.img from /dev/vda -> switch_root into rootfs.img
  -> vdkr-init.sh or vpdmn-init.sh runs

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* vcontainer: add tiny initramfs image infrastructure (Bruce Ashfield, 2026-02-09, 5 files, -3/+142)

  Add proper Yocto image recipes for the tiny initramfs used by
  vdkr/vpdmn in the switch_root boot flow:
  - vcontainer-tiny-initramfs-image.inc: Shared image configuration
  - vcontainer-preinit_1.0.bb: Preinit script package (shared)
  - vdkr-tiny-initramfs-image.bb: Tiny initramfs for vdkr
  - vpdmn-tiny-initramfs-image.bb: Tiny initramfs for vpdmn

  The tiny initramfs contains only busybox and a preinit script that:
  1. Mounts devtmpfs, proc, sysfs
  2. Mounts the squashfs rootfs.img from /dev/vda
  3. Creates a tmpfs overlay for writes
  4. Performs switch_root to the real rootfs

  This replaces ad-hoc file extraction with proper image-based builds,
  improving reproducibility and maintainability.

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* vcontainer-tarball: add nativesdk-expect dependency (Bruce Ashfield, 2026-02-09, 1 file, -0/+1)

  Add expect to the vcontainer SDK toolchain for interactive testing and
  automation scripts.

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* vcontainer: fix runc/crun conflict in multiconfig builds (Bruce Ashfield, 2026-02-09, 2 files, -10/+6)

  The vruntime distro is used for multiconfig builds of both vdkr
  (Docker/runc) and vpdmn (Podman/crun) images. When CONTAINER_PROFILE or
  VIRTUAL-RUNTIME_container_runtime is set, containerd and podman pull
  their preferred runtime via RDEPENDS, causing package conflicts.

  Fix by having the vruntime distro NOT participate in CONTAINER_PROFILE:
  - Set VIRTUAL-RUNTIME_container_runtime="" to prevent automatic runtime
    selection
  - Explicitly install runc in vdkr-rootfs-image.bb
  - Explicitly install crun in vpdmn-rootfs-image.bb

  This allows both images to be built in the same multiconfig without
  conflicts, while standard container-host images continue to use
  CONTAINER_PROFILE normally.

  Also add kernel-modules to vdkr-rootfs-image for overlay filesystem
  support.

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* vcontainer: add multi-arch OCI support (Bruce Ashfield, 2026-02-09, 1 file, -4/+239)

  Add functions to detect and handle the multi-architecture OCI Image
  Index format with automatic platform selection during import. Also add
  oci-multiarch.bbclass for build-time multi-arch OCI creation.

  Runtime support (vcontainer-common.sh):
  - is_oci_image_index() - detect multi-arch OCI images
  - get_oci_platforms() - list available platforms
  - select_platform_manifest() - select the manifest for the target
    architecture
  - extract_platform_oci() - extract a single platform to a new OCI dir
  - normalize_arch_to_oci/from_oci() - architecture name mapping
  - Update vimport to auto-select the platform from multi-arch images

  Build-time support (oci-multiarch.bbclass):
  - Create an OCI Image Index from multiconfig builds
  - Collect images from vruntime-aarch64, vruntime-x86-64
  - Combine blobs and create a unified manifest list

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* vcontainer: add virtio-9p fast path for batch imports (Bruce Ashfield, 2026-02-09, 4 files, -50/+284)

  Add virtio-9p filesystem support for faster storage output during batch
  container imports, replacing the slow base64-over-console method.
  - Add a --timeout option for configurable import timeouts
  - Mount the virtio-9p share in batch-import mode
  - Parse the _9p=1 kernel parameter for 9p availability
  - Write storage.tar directly to the shared filesystem
  - Reduces import time from ~600s to ~11s for large containers

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* vcontainer: add vshell command for VM debug access (Bruce Ashfield, 2026-02-09, 1 file, -0/+20)

  Add a vshell command to vdkr/vpdmn for interactive shell access to the
  QEMU VM where Docker/Podman runs. This is useful for debugging container
  issues directly inside the virtual environment.

  Usage:

    vdkr vmemres start
    vdkr vshell
    # Now inside the VM, run docker commands directly
    docker images
    docker inspect <image>
    exit

  The vshell command requires the memory-resident daemon to be running
  (vmemres start). It opens an interactive shell via the daemon's
  --daemon-interactive mode.

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* vdkr: add registry configuration and pull fallback (Bruce Ashfield, 2026-02-09, 5 files, -5/+493)

  Add registry support to vdkr:
  - vconfig registry command for persistent config
  - --registry flag for one-off usage
  - Registry-first, Docker Hub fallback for pulls
  - Baked-in registry config via CONTAINER_REGISTRY_URL
  - Image commands (inspect, history, rmi, images) work without transform

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* vcontainer: fix ps -q to suppress port forward display (Bruce Ashfield, 2026-02-09, 1 file, -1/+9)

  When using `ps -q` or `ps --quiet`, only container IDs should be output.
  The port forward registry display was being included, which broke
  cleanup code that expected just container IDs.

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* vrunner: update static port forwarding for bridge networking (Bruce Ashfield, 2026-02-09, 1 file, -5/+8)

  Update the static port forwarding (used at QEMU startup) to match the
  dynamic QMP port forwarding change. With bridge networking:
  - QEMU forwards host:port -> VM:port
  - Docker's iptables handles VM:port -> container:port

  Previously the static port forward went directly to the container port
  (host:8080 -> VM:80), which doesn't work with bridge networking, where
  Docker needs to handle the final hop.

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* recipes: fix multiconfig build comments (Bruce Ashfield, 2026-02-09, 1 file, -2/+2)

  Fix the bitbake multiconfig commands in the rootfs recipe comments. The
  multiconfig names are vruntime-aarch64 and vruntime-x86-64, not
  vdkr-*/vpdmn-*.

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* vcontainer: update CLI for bridge networking (Bruce Ashfield, 2026-02-09, 1 file, -12/+39)

  Update the CLI wrapper to work with Docker bridge networking:
  - qmp_add_hostfwd(): Change the QEMU port forward from
    host_port->guest_port to host_port->host_port. With bridge networking,
    Docker's iptables handles the final hop (VM:host_port ->
    container:guest_port).
  - Default network mode: Remove the --network=host default. Docker's
    bridge is now the default, giving each container its own IP. Users can
    still explicitly use --network=host for legacy behavior.
  - Update the help text to document the new bridge networking behavior.

  The port forwarding flow is now:
  Host:8080 -> QEMU -> VM:8080 -> Docker iptables -> Container:80

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* vdkr: enable Docker bridge networking (Bruce Ashfield, 2026-02-09, 2 files, -4/+6)

  Enable Docker's default bridge network (docker0, 172.17.0.0/16) inside
  the QEMU VM to allow multiple containers to listen on the same internal
  port with different host port mappings.

  Changes:
  - Add the iptables package to vdkr-rootfs-image for Docker NAT rules
  - Change dockerd options in vdkr-init.sh:
    - Set --iptables=true (was false)
    - Remove --bridge=none to enable the default docker0 bridge

  This enables the workflow:

    vdkr run -d -p 8080:80 --name nginx1 nginx:alpine
    vdkr run -d -p 8081:80 --name nginx2 nginx:alpine
    # Both work - each container gets its own 172.17.0.x IP

  Previously with --network=host (the old default), both containers would
  try to bind port 80 on the VM's single IP, causing conflicts.

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* vcontainer-common: fix QMP netdev ID for hostfwd commands (Bruce Ashfield, 2026-02-09, 1 file, -2/+2)

  The QEMU netdev is named 'net0', not 'user.0'. Fix the hostfwd_add and
  hostfwd_remove commands to use the correct netdev ID.

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
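For context, hostfwd_add/hostfwd_remove are HMP monitor commands, so over QMP they are typically issued through human-monitor-command with the netdev ID as the first argument. A sketch of the corrected wire message (the port numbers are illustrative):

```json
{ "execute": "human-monitor-command",
  "arguments": { "command-line": "hostfwd_add net0 tcp:127.0.0.1:8080-:80" } }
```

With the wrong ID ('user.0' instead of 'net0'), QEMU rejects the command because no netdev by that name exists.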
* vcontainer-common: remove 'local' keywords from case blocks (Bruce Ashfield, 2026-02-09, 1 file, -10/+10)

  The 'local' keyword can only be used inside functions, not in the main
  script's case blocks. Remove 'local' from variables in the run, ps, and
  vmemres case handlers.

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
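A minimal illustration of the rule this commit enforces (function and variable names are made up, not from vcontainer-common.sh): 'local' is only legal inside a function body, so top-level case handlers must use plain assignment.

```shell
#!/bin/sh
set_name() {
  local name="inner"   # valid: 'local' inside a function
  echo "$name"
}
inner=$(set_name)

case "run" in
  run)
    # 'local name=...' here would fail with
    # "local: can only be used in a function"; plain assignment works.
    name="outer"
    ;;
esac
echo "$inner $name"
```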
* vcontainer-init: fix idle timeout detection to distinguish EOF from timeout (Bruce Ashfield, 2026-02-09, 1 file, -5/+9)

  The read -t command returns different exit codes:
  - 0: successful read
  - 1: EOF (connection closed)
  - >128: timeout (typically 142)

  Previously, any empty result was treated as a timeout, but when the host
  closes the connection after PING/PONG, read returns EOF immediately.
  This caused the daemon to shut down right after startup.

  Now we capture the exit code and only trigger idle shutdown when read
  actually times out (exit code > 128).

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
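The three read -t outcomes described above can be exercised directly in bash (a standalone sketch; the probe function is illustrative, not the daemon loop itself):

```shell
#!/bin/bash
# Classify the result of a 1-second read: data, EOF, or timeout.
probe() {
  read -r -t 1 line
  rc=$?
  if [ "$rc" -eq 0 ]; then echo data
  elif [ "$rc" -gt 128 ]; then echo timeout    # only this case is "idle"
  else echo eof                                # host closed the connection
  fi
}

first=$(echo hello | probe)     # successful read -> rc 0
second=$(probe < /dev/null)     # immediate EOF    -> rc 1
third=$(sleep 2 | probe)        # no data in 1s    -> rc > 128
echo "$first $second $third"
```

Both the EOF and timeout cases leave the line variable empty, which is why testing the output alone misclassified EOF as idle.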
* vrunner: fix file descriptor close syntax for coproc (Bruce Ashfield, 2026-02-09, 1 file, -1/+1)

  The syntax {SOCAT[0]}<&- is invalid: curly braces are for allocating new
  file descriptors, not referencing existing ones. Use eval with
  ${SOCAT[0]} to properly close the coproc file descriptors.

  Fixes the "SOCAT[0]: ambiguous redirect" error when using daemon mode.

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
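The eval pattern can be shown with a trivial coproc in place of socat (bash-only sketch; the cat coproc stands in for the daemon's socat bridge):

```shell
#!/bin/bash
coproc CAT { cat; }
# Save the fds and pid: bash may clear the COPROC variables once the
# coproc terminates.
cat_in=${CAT[1]} cat_out=${CAT[0]} cat_pid=$CAT_PID

echo hello >&"$cat_in"
read -r reply <&"$cat_out"

# 'exec {CAT[1]}>&-' would try to ALLOCATE a new fd named CAT[1];
# eval expands the stored fd number first so exec can close it.
eval "exec ${cat_in}>&-"

wait "$cat_pid"    # cat sees EOF on stdin and exits cleanly
status=$?
echo "$reply $status"
```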
* vcontainer: add dynamic port forwarding via QEMU QMP (Bruce Ashfield, 2026-02-09, 2 files, -17/+291)

  Add dynamic port forwarding for containers using QEMU's QMP (QEMU
  Machine Protocol). This enables running multiple detached containers
  with different port mappings without needing to declare ports upfront.

  Key changes:
  - vrunner.sh: Add a QMP socket (-qmp unix:...) to daemon startup
  - vcontainer-common.sh:
    - QMP helper functions (qmp_send, qmp_add_hostfwd, qmp_remove_hostfwd)
    - Port forward registry (port-forwards.txt) for tracking mappings
    - Parse -p, -d, --name flags in the run command
    - Add port forwards via QMP for detached containers
    - Enhanced ps command to show host port forwards
    - Clean up port forwards on stop/rm/vmemres stop

  Usage:

    vdkr run -d -p 8080:80 nginx    # Adds 8080->80 forward
    vdkr run -d -p 3000:3000 myapp  # Adds 3000->3000 forward
    vdkr ps                         # Shows containers + port forwards
    vdkr stop nginx                 # Removes 8080->80 forward

  This solves the problem where an auto-started vmemres couldn't handle
  multiple containers with different port mappings. QMP allows adding and
  removing port forwards at runtime without restarting the daemon.

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* vcontainer: add auto-start daemon with idle timeout (Bruce Ashfield, 2026-02-09, 3 files, -15/+100)

  Add automatic daemon startup and idle timeout cleanup for vdkr/vpdmn:
  - The vmemres daemon auto-starts on the first command (no manual start
    needed)
  - The daemon auto-stops after the idle timeout (default: 30 minutes)
  - --no-daemon flag for ephemeral mode (single-shot QEMU)
  - New config keys: idle-timeout, auto-daemon

  Changes:
  - vcontainer-init-common.sh: Parse idle_timeout from the cmdline, add a
    read -t timeout to the daemon loop for auto-shutdown
  - vrunner.sh: Add a --idle-timeout option, pass it to the kernel cmdline
  - vcontainer-common.sh: Auto-start logic in run_runtime_command(),
    --no-daemon flag, config defaults
  - container-cross-install.bbclass: Add --no-daemon for explicit
    ephemeral mode during Yocto builds

  Configuration:

    vdkr vconfig idle-timeout 3600   # 1 hour timeout
    vdkr vconfig auto-daemon false   # Disable auto-start

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* docs: fix dead references to vdkr-native and obsolete test classes (Bruce Ashfield, 2026-02-09, 4 files, -4/+4)

  Update references to reflect the current architecture:
  - Change vdkr-native/vpdmn-native to vcontainer-native in comments
  - Remove TestContainerCrossTools and TestContainerCrossInitramfs from
    the README
  - Fix the build command: vdkr-native -> vcontainer-tarball

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* vcontainer: default to --network=host for container run (Bruce Ashfield, 2026-02-09, 1 file, -7/+27)

  Docker bridge networking is intentionally disabled in vdkr (dockerd runs
  with --bridge=none --iptables=false). Rather than requiring users to
  explicitly add --network=host to every container run command, make it
  the default.

  This simplifies port forwarding workflows:

    vdkr memres start -p 8080:80
    vdkr run -d --rm nginx:alpine   # Just works, no --network=host needed

  Users can still override with --network=none if they explicitly want no
  networking. Updates the help text and examples to reflect the new
  default.

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* vcontainer: add SDK-based standalone tarball (Bruce Ashfield, 2026-02-09, 6 files, -402/+834)

  Add a vcontainer-tarball.bb recipe that creates a relocatable standalone
  distribution of the vdkr and vpdmn container tools using Yocto SDK
  infrastructure.

  Features:
  - Auto-detects available architectures from built blobs
  - Custom SDK installer with vcontainer-specific messages
  - nativesdk-qemu-vcontainer.bb: minimal QEMU without OpenGL deps

  Recipe changes:
  - DELETE vdkr-native_1.0.bb (functionality moved to the SDK)
  - DELETE vpdmn-native_1.0.bb (functionality moved to the SDK)
  - ADD vcontainer-tarball.bb (SDK tarball recipe)
  - ADD toolchain-shar-extract.sh (SDK installer template)
  - ADD nativesdk-qemu-vcontainer.bb (minimal QEMU for the SDK)

  Usage:

    MACHINE=qemux86-64 bitbake vcontainer-tarball
    ./tmp/deploy/sdk/vcontainer-standalone.sh -d /opt/vcontainer -y
    source /opt/vcontainer/init-env.sh
    vdkr images

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* vcontainer: add documentation (Bruce Ashfield, 2026-02-09, 1 file, -0/+368)

  Add documentation for vcontainer/vdkr/vpdmn:
  - README.md: Update the layer README with the vcontainer distro feature
  - recipes-containers/vcontainer/README.md: Comprehensive vdkr/vpdmn
    usage documentation, including CLI commands, build instructions, and
    an architecture overview

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* vcontainer: add vpdmn Podman support (Bruce Ashfield, 2026-02-09, 7 files, -0/+610)

  Add vpdmn - a Podman CLI wrapper for cross-architecture container
  operations:

  Scripts:
  - vpdmn.sh: Podman CLI entry point (vpdmn-x86_64, vpdmn-aarch64)
  - vpdmn-init.sh: Podman init script for the QEMU guest

  Recipes:
  - vpdmn-native: Installs the vpdmn CLI wrappers
  - vpdmn-rootfs-image: Builds a Podman rootfs with crun, netavark, skopeo
  - vpdmn-initramfs-create: Creates the bootable initramfs blob

  The vpdmn CLI provides Podman-compatible commands executed inside a
  QEMU-emulated environment. Unlike vdkr, Podman is daemonless, which
  simplifies the guest init process.

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* vcontainer: add vdkr Docker support (Bruce Ashfield, 2026-02-09, 7 files, -0/+626)

  Add vdkr - a Docker CLI wrapper for cross-architecture container
  operations:

  Scripts:
  - vdkr.sh: Docker CLI entry point (vdkr-x86_64, vdkr-aarch64)
  - vdkr-init.sh: Docker init script for the QEMU guest

  Recipes:
  - vdkr-native: Installs the vdkr CLI wrappers
  - vdkr-rootfs-image: Builds a Docker rootfs with containerd, runc,
    skopeo
  - vdkr-initramfs-create: Creates the bootable initramfs blob

  The vdkr CLI provides Docker-compatible commands executed inside a
  QEMU-emulated environment, enabling cross-architecture container
  operations during Yocto builds.

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
* vcontainer: add shared infrastructure and runner (Bruce Ashfield, 2026-02-09, 6 files, -0/+4307)

  Add core vcontainer infrastructure shared by vdkr and vpdmn:

  Scripts:
  - vrunner.sh: QEMU runner supporting both Docker and Podman runtimes
  - vcontainer-common.sh: Shared CLI functions and command handling
  - vcontainer-init-common.sh: Shared init script functions for the QEMU
    guest
  - vdkr-preinit.sh: Initramfs preinit for switch_root to the squashfs
    overlay

  Recipes:
  - vcontainer-native: Installs vrunner.sh and the shared scripts
  - vcontainer-initramfs-create.inc: Shared initramfs build logic

  Features:
  - Docker/Podman-compatible commands: images, pull, load, save, run, exec
  - Memory-resident mode for fast command execution
  - KVM acceleration when the host matches the target
  - Interactive mode with volume mounts
  - Squashfs rootfs with tmpfs overlay

  Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>