Improve debugging capabilities when Docker daemon fails to start:
- Log dockerd output to /var/log/docker.log instead of /dev/null
- Capture docker info exit code and output for diagnostics
- Show docker info error on every 10th iteration while waiting
- Include last docker info output and docker.log tail on failure
- Extend sleep on failure from 2s to 5s for log review
Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
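The retry loop described above can be sketched as follows. This is an illustrative shape only, assuming a generic probe command; the real loop in vdkr-init.sh probes with `docker info` and sleeps between attempts:

```shell
# Sketch of the diagnostic wait loop (names are illustrative).
wait_for_daemon() {
    # $1: probe command (e.g. "docker info"), $2: max iterations
    local probe="$1" max="${2:-30}" i=0 rc=1 out=""
    while [ "$i" -lt "$max" ]; do
        out=$($probe 2>&1)
        rc=$?
        [ "$rc" -eq 0 ] && return 0
        # surface the probe error on every 10th iteration so the log
        # shows progress without flooding the console
        if [ $(( i % 10 )) -eq 9 ]; then
            echo "still waiting (rc=$rc): $out" >&2
        fi
        # sleep 1   # the real script sleeps between probes
        i=$(( i + 1 ))
    done
    # on failure, report the captured exit code and last probe output
    echo "daemon did not start (rc=$rc): $out" >&2
    return 1
}
```

On failure the real script additionally tails /var/log/docker.log, which now captures dockerd's own output.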
Implement a reliable idle timeout for vmemres daemon mode using
host-side monitoring with QMP-based shutdown, and container-aware
idle detection via a virtio-9p shared file.
Host-side changes (vrunner.sh):
- Add -no-reboot flag to QEMU for clean exit semantics
- Spawn background watchdog when daemon starts
- Watchdog monitors activity file timestamp
- Check interval scales to idle timeout (timeout/5, clamped 10-60s)
- Read container status from shared file (guest writes via virtio-9p)
- Only shutdown if no containers are running
- Send QMP "quit" command for graceful shutdown
- Watchdog auto-exits if QEMU dies (no zombie processes)
- Touch activity file in daemon_send() for user activity tracking
Config changes (vcontainer-common.sh):
- Add idle-timeout to build_runner_args() so it's always passed
Guest-side changes (vcontainer-init-common.sh):
- Add watchdog that writes container status to /mnt/share/.containers_running
- Host reads this file instead of socket commands (avoids output corruption)
- Close inherited virtio-serial fd 3 in watchdog subshell to prevent leaks
- Guest-side shutdown logic preserved but disabled (QMP more reliable)
- Handle Yocto read-only-rootfs volatile directories (/var/volatile)
The shared file approach avoids sending container check commands through
the daemon socket, which previously caused output corruption on the
single-stream virtio-serial channel.
The idle timeout is configurable via: vdkr vconfig idle-timeout <secs>
Default: 1800 seconds (30 minutes)
Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
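The interval clamping described above is simple arithmetic; a sketch (the function name is an assumption, the real logic lives in vrunner.sh):

```shell
# Poll-interval calculation for the idle watchdog: timeout/5,
# clamped to the 10..60 second range.
watchdog_interval() {
    local timeout="$1" interval
    interval=$(( timeout / 5 ))
    [ "$interval" -lt 10 ] && interval=10
    [ "$interval" -gt 60 ] && interval=60
    echo "$interval"
}

# The watchdog loop then has roughly this shape:
#   while kill -0 "$qemu_pid" 2>/dev/null; do
#       sleep "$(watchdog_interval "$idle_timeout")"
#       # compare the activity-file mtime against the timeout and
#       # read .containers_running before sending QMP "quit"
#   done
```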
Add proper Yocto image recipes for the tiny initramfs used by
vdkr/vpdmn in the switch_root boot flow:
- vcontainer-tiny-initramfs-image.inc: Shared image configuration
- vcontainer-preinit_1.0.bb: Preinit script package (shared)
- vdkr-tiny-initramfs-image.bb: Tiny initramfs for vdkr
- vpdmn-tiny-initramfs-image.bb: Tiny initramfs for vpdmn
The tiny initramfs contains only busybox and a preinit script that:
1. Mounts devtmpfs, proc, sysfs
2. Mounts the squashfs rootfs.img from /dev/vda
3. Creates tmpfs overlay for writes
4. Performs switch_root to the real rootfs
This replaces ad-hoc file extraction with proper image-based builds,
improving reproducibility and maintainability.
Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
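The four boot steps can be sketched as a busybox-sh preinit. The mountpoints and overlay layout here are illustrative assumptions, not the shipped preinit script; the DRY_RUN switch exists only so the sequence can be printed safely:

```shell
# Preinit sketch: mount pseudo-filesystems, mount the squashfs image,
# layer a writable tmpfs over it, then switch_root.
run() { if [ "${DRY_RUN:-0}" = 1 ]; then echo "$*"; else "$@"; fi; }

preinit() {
    run mount -t devtmpfs dev /dev
    run mount -t proc proc /proc
    run mount -t sysfs sys /sys
    # the squashfs rootfs.img is presented as the first virtio disk
    run mount -t squashfs /dev/vda /rootfs.ro
    # writable tmpfs layered over the read-only squashfs (illustrative layout)
    run mount -t tmpfs tmpfs /rootfs.rw
    run mkdir -p /rootfs.rw/upper /rootfs.rw/work
    run mount -t overlay overlay \
        -o lowerdir=/rootfs.ro,upperdir=/rootfs.rw/upper,workdir=/rootfs.rw/work \
        /rootfs
    run exec switch_root /rootfs /sbin/init
}

DRY_RUN=1 preinit   # print the boot steps without touching the system
```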
Add functions to detect and handle multi-architecture OCI Image Index
format with automatic platform selection during import. Also add
oci-multiarch.bbclass for build-time multi-arch OCI creation.
Runtime support (vcontainer-common.sh):
- is_oci_image_index() - detect multi-arch OCI images
- get_oci_platforms() - list available platforms
- select_platform_manifest() - select manifest for target architecture
- extract_platform_oci() - extract single platform to new OCI dir
- normalize_arch_to_oci/from_oci() - architecture name mapping
- Update vimport to auto-select platform from multi-arch images
Build-time support (oci-multiarch.bbclass):
- Create OCI Image Index from multiconfig builds
- Collect images from vruntime-aarch64, vruntime-x86-64
- Combine blobs and create unified manifest list
Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
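The detection and name-mapping pieces can be sketched like this (an illustrative subset; the real functions in vcontainer-common.sh cover more cases):

```shell
# A multi-arch OCI layout carries the image-index media type in index.json.
is_oci_image_index() {
    grep -q 'application/vnd.oci.image.index.v1+json' "$1/index.json" 2>/dev/null
}

# Map kernel/Yocto architecture names to OCI platform names and back.
normalize_arch_to_oci() {
    case "$1" in
        x86_64|x86-64) echo "amd64" ;;
        aarch64)       echo "arm64" ;;
        *)             echo "$1" ;;   # pass unknown names through
    esac
}

normalize_arch_from_oci() {
    case "$1" in
        amd64) echo "x86_64" ;;
        arm64) echo "aarch64" ;;
        *)     echo "$1" ;;
    esac
}
```

Platform selection then amounts to matching the normalized name against each manifest's platform.architecture field in the index.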
Add virtio-9p filesystem support for faster storage output during batch
container imports, replacing the slow base64-over-console method.
- Add --timeout option for configurable import timeouts
- Mount virtio-9p share in batch-import mode
- Parse _9p=1 kernel parameter for 9p availability
- Write storage.tar directly to shared filesystem
- Reduces import time from ~600s to ~11s for large containers
Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
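The guest-side check and mount reduce to something like this (the share tag and mountpoint are assumptions; the real init parses /proc/cmdline):

```shell
# Return success if _9p=1 appears as a word on the kernel command line.
have_9p() {
    case " $1 " in
        *" _9p=1 "*) return 0 ;;
        *)           return 1 ;;
    esac
}

mount_share() {
    # virtio-9p share exported by the host; the 'share0' tag is illustrative
    mount -t 9p -o trans=virtio,version=9p2000.L share0 /mnt/share
}

# if have_9p "$(cat /proc/cmdline)"; then mount_share; fi
# storage.tar is then written directly under the share instead of
# being base64-encoded over the console
```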
Add vshell command to vdkr/vpdmn for interactive shell access to the
QEMU VM where Docker/Podman runs. This is useful for debugging container
issues directly inside the virtual environment.
Usage:
vdkr vmemres start
vdkr vshell
# Now inside VM, can run docker commands directly
docker images
docker inspect <image>
exit
The vshell command requires the memory-resident daemon to be running
(vmemres start). It opens an interactive shell via the daemon's
--daemon-interactive mode.
Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Add registry support to vdkr:
- vconfig registry command for persistent config
- --registry flag for one-off usage
- Registry-first, Docker Hub fallback for pulls
- Baked-in registry config via CONTAINER_REGISTRY_URL
- Image commands (inspect, history, rmi, images) work without transform
Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
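The registry-first fallback can be sketched as below. PULL_CMD is an indirection added here only so the sketch is self-contained; the real wrapper invokes the runtime directly. CONTAINER_REGISTRY_URL follows the commit text:

```shell
# Try the configured registry first, then fall back to Docker Hub
# (the unprefixed reference).
pull_with_fallback() {
    local image="$1" registry="${CONTAINER_REGISTRY_URL:-}"
    local puller="${PULL_CMD:-docker pull}"   # indirection for illustration
    if [ -n "$registry" ]; then
        $puller "$registry/$image" && return 0
    fi
    $puller "$image"
}
```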
When using `ps -q` or `ps --quiet`, only container IDs should be output.
The port forward registry display was being included, which broke cleanup
code that expected just container IDs.
Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Update the static port forwarding (used at QEMU startup) to match
the dynamic QMP port forwarding change. With bridge networking:
- QEMU forwards host:port -> VM:port
- Docker's iptables handles VM:port -> container:port
Previously the static port forward went directly to the container
port (host:8080 -> VM:80), which doesn't work with bridge networking
where Docker needs to handle the final hop.
Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Update the CLI wrapper to work with Docker bridge networking:
- qmp_add_hostfwd(): Change QEMU port forward from host_port->guest_port
to host_port->host_port. With bridge networking, Docker's iptables
handles the final hop (VM:host_port -> container:guest_port).
- Default network mode: Remove --network=host default. Docker's bridge
is now the default, giving each container its own IP. Users can still
explicitly use --network=host for legacy behavior.
- Update help text to document the new bridge networking behavior.
Port forwarding flow is now:
Host:8080 -> QEMU -> VM:8080 -> Docker iptables -> Container:80
Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Enable Docker's default bridge network (docker0, 172.17.0.0/16) inside
the QEMU VM to allow multiple containers to listen on the same internal
port with different host port mappings.
Changes:
- Add iptables package to vdkr-rootfs-image for Docker NAT rules
- Change dockerd options in vdkr-init.sh:
- Set --iptables=true (was false)
- Remove --bridge=none to enable default docker0 bridge
This enables the workflow:
vdkr run -d -p 8080:80 --name nginx1 nginx:alpine
vdkr run -d -p 8081:80 --name nginx2 nginx:alpine
# Both work - each container gets its own 172.17.0.x IP
Previously with --network=host (the old default), both containers would
try to bind port 80 on the VM's single IP, causing conflicts.
Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
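The resulting dockerd invocation changes roughly as follows (a sketch of the two option sets only; the full flag list in vdkr-init.sh may differ):

```shell
# before: bridge networking disabled, no NAT
# dockerd --bridge=none --iptables=false ... &

# after: default docker0 bridge (172.17.0.0/16) with iptables NAT;
# the iptables package must be present in the rootfs for this to work
dockerd --iptables=true ... &
```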
The QEMU netdev is named 'net0', not 'user.0'. Fix the hostfwd_add
and hostfwd_remove commands to use the correct netdev ID.
Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
The 'local' keyword can only be used inside functions, not in the
main script's case blocks. Remove 'local' from variables in the
run, ps, and vmemres case handlers.
Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
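The bash restriction is easy to demonstrate (a minimal reproduction, not the vdkr code):

```shell
# inside a function: 'local' is valid
f() { local x=1; echo "$x"; }
f    # prints 1

# at the top level of a script (e.g. a main case block): an error
bash -c 'case run in run) local y=1 ;; esac' 2>/dev/null \
    || echo "local outside a function fails"
```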
The bash read -t builtin returns different exit codes:
- 0: successful read
- 1: EOF (connection closed)
- >128: timeout (typically 142)
Previously, any empty result was treated as timeout, but when the host
closes the connection after PING/PONG, read returns EOF immediately.
This caused the daemon to shut down right after startup.
Now we capture the exit code and only trigger idle shutdown when
read actually times out (exit code > 128).
Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
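The distinction can be demonstrated directly in bash (classify_read is an illustrative helper, not the daemon's code):

```shell
# Map read's exit status onto the three cases described above.
classify_read() {
    if [ "$1" -eq 0 ]; then echo data
    elif [ "$1" -gt 128 ]; then echo timeout   # only this triggers idle shutdown
    else echo eof                              # peer closed: keep running
    fi
}

read -t 1 line <<< "PING";      classify_read $?   # data
read -t 1 line < /dev/null;     classify_read $?   # eof
read -t 0.5 line < <(sleep 1);  classify_read $?   # timeout (rc is typically 142)
```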
The syntax {SOCAT[0]}<&- is invalid - curly braces are for allocating
new file descriptors, not referencing existing ones. Use eval with
${SOCAT[0]} to properly close the coproc file descriptors.
Fixes: "SOCAT[0]: ambiguous redirect" error when using daemon mode.
Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
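A minimal demonstration of the fix (using a cat coproc rather than socat, so it is self-contained):

```shell
coproc CAT { cat; }

# invalid: bash parses {CAT[0]} as a request to allocate a *new* fd,
# producing the "ambiguous redirect" error:
#   exec {CAT[0]}<&-

# valid: expand the stored fd numbers first, then close them
eval "exec ${CAT[0]}<&- ${CAT[1]}>&-"
wait "$CAT_PID"   # cat sees EOF on its stdin and exits cleanly
```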
Add dynamic port forwarding for containers using QEMU's QMP (QEMU
Machine Protocol). This enables running multiple detached containers
with different port mappings without needing to declare ports upfront.
Key changes:
- vrunner.sh: Add QMP socket (-qmp unix:...) to daemon startup
- vcontainer-common.sh:
* QMP helper functions (qmp_send, qmp_add_hostfwd, qmp_remove_hostfwd)
* Port forward registry (port-forwards.txt) for tracking mappings
* Parse -p, -d, --name flags in run command
* Add port forwards via QMP for detached containers
* Enhanced ps command to show host port forwards
* Cleanup port forwards on stop/rm/vmemres stop
Usage:
vdkr run -d -p 8080:80 nginx # Adds 8080->80 forward
vdkr run -d -p 3000:3000 myapp # Adds 3000->3000 forward
vdkr ps # Shows containers + port forwards
vdkr stop nginx # Removes 8080->80 forward
This solves the problem where auto-started vmemres couldn't handle
multiple containers with different port mappings. QMP allows adding
and removing port forwards at runtime without restarting the daemon.
Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
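The QMP plumbing can be sketched as follows. The socket path is illustrative; hostfwd_add/hostfwd_remove are HMP commands, so they go through QMP's human-monitor-command wrapper, and 'net0' is the netdev id the runner uses:

```shell
QMP_SOCK=/tmp/vmemres-qmp.sock   # illustrative path

# QMP requires a capabilities handshake before any command is accepted.
qmp_send() {
    printf '%s\n%s\n' '{"execute":"qmp_capabilities"}' "$1" \
        | socat - "UNIX-CONNECT:$QMP_SOCK"
}

# Build the wrapped HMP command for adding a host->guest forward.
hostfwd_add_cmd() {
    printf '{"execute":"human-monitor-command","arguments":{"command-line":"hostfwd_add net0 tcp::%s-:%s"}}' "$1" "$2"
}

qmp_add_hostfwd() { qmp_send "$(hostfwd_add_cmd "$1" "$2")"; }
# e.g. qmp_add_hostfwd 8080 80   # after "vdkr run -d -p 8080:80 nginx"
```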
Add automatic daemon startup and idle timeout cleanup for vdkr/vpdmn:
- vmemres daemon auto-starts on first command (no manual start needed)
- Daemon auto-stops after idle timeout (default: 30 minutes)
- --no-daemon flag for ephemeral mode (single-shot QEMU)
- New config keys: idle-timeout, auto-daemon
Changes:
- vcontainer-init-common.sh: Parse idle_timeout from cmdline, add
read -t timeout to daemon loop for auto-shutdown
- vrunner.sh: Add --idle-timeout option, pass to kernel cmdline
- vcontainer-common.sh: Auto-start logic in run_runtime_command(),
--no-daemon flag, config defaults
- container-cross-install.bbclass: Add --no-daemon for explicit
ephemeral mode during Yocto builds
Configuration:
vdkr vconfig idle-timeout 3600 # 1 hour timeout
vdkr vconfig auto-daemon false # Disable auto-start
Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Update references to reflect the current architecture:
- Change vdkr-native/vpdmn-native to vcontainer-native in comments
- Remove TestContainerCrossTools and TestContainerCrossInitramfs from README
- Fix build command: vdkr-native → vcontainer-tarball
Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Docker bridge networking is intentionally disabled in vdkr (dockerd runs
with --bridge=none --iptables=false). Rather than requiring users to
explicitly add --network=host to every container run command, make it
the default.
This simplifies port forwarding workflows:
vdkr memres start -p 8080:80
vdkr run -d --rm nginx:alpine # Just works, no --network=host needed
Users can still override with --network=none if they explicitly want
no networking.
Updates help text and examples to reflect the new default.
Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Add a vcontainer-tarball.bb recipe that creates a relocatable standalone
distribution of the vdkr and vpdmn container tools using the Yocto SDK
infrastructure.
Features:
- Auto-detects available architectures from built blobs
- Custom SDK installer with vcontainer-specific messages
- nativesdk-qemu-vcontainer.bb: minimal QEMU without OpenGL deps
Recipe changes:
- DELETE vdkr-native_1.0.bb (functionality moved to SDK)
- DELETE vpdmn-native_1.0.bb (functionality moved to SDK)
- ADD vcontainer-tarball.bb (SDK tarball recipe)
- ADD toolchain-shar-extract.sh (SDK installer template)
- ADD nativesdk-qemu-vcontainer.bb (minimal QEMU for SDK)
Usage:
MACHINE=qemux86-64 bitbake vcontainer-tarball
./tmp/deploy/sdk/vcontainer-standalone.sh -d /opt/vcontainer -y
source /opt/vcontainer/init-env.sh
vdkr images
Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Add vpdmn - Podman CLI wrapper for cross-architecture container operations:
Scripts:
- vpdmn.sh: Podman CLI entry point (vpdmn-x86_64, vpdmn-aarch64)
- vpdmn-init.sh: Podman init script for QEMU guest
Recipes:
- vpdmn-native: Installs vpdmn CLI wrappers
- vpdmn-rootfs-image: Builds Podman rootfs with crun, netavark, skopeo
- vpdmn-initramfs-create: Creates bootable initramfs blob
The vpdmn CLI provides Podman-compatible commands executed inside a
QEMU-emulated environment. Unlike vdkr, Podman is daemonless which
simplifies the guest init process.
Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Add vdkr - Docker CLI wrapper for cross-architecture container operations:
Scripts:
- vdkr.sh: Docker CLI entry point (vdkr-x86_64, vdkr-aarch64)
- vdkr-init.sh: Docker init script for QEMU guest
Recipes:
- vdkr-native: Installs vdkr CLI wrappers
- vdkr-rootfs-image: Builds Docker rootfs with containerd, runc, skopeo
- vdkr-initramfs-create: Creates bootable initramfs blob
The vdkr CLI provides Docker-compatible commands executed inside a
QEMU-emulated environment, enabling cross-architecture container
operations during Yocto builds.
Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Add core vcontainer infrastructure shared by vdkr and vpdmn:
Scripts:
- vrunner.sh: QEMU runner supporting both Docker and Podman runtimes
- vcontainer-common.sh: Shared CLI functions and command handling
- vcontainer-init-common.sh: Shared init script functions for QEMU guest
- vdkr-preinit.sh: Initramfs preinit for switch_root to squashfs overlay
Recipes:
- vcontainer-native: Installs vrunner.sh and shared scripts
- vcontainer-initramfs-create.inc: Shared initramfs build logic
Features:
- Docker/Podman-compatible commands: images, pull, load, save, run, exec
- Memory resident mode for fast command execution
- KVM acceleration when host matches target
- Interactive mode with volume mounts
- Squashfs rootfs with tmpfs overlay
Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>