# vdkr & vpdmn - Emulated Docker/Podman for Cross-Architecture Development
Execute Docker or Podman commands inside a QEMU-emulated target environment.
| Tool | Runtime | State Directory |
|------|---------|-----------------|
| `vdkr` | Docker (dockerd + containerd) | `~/.vdkr/<arch>/` |
| `vpdmn` | Podman (daemonless) | `~/.vpdmn/<arch>/` |
## Quick Start
```bash
# Build and install SDK (see "Standalone SDK" section for full instructions)
MACHINE=qemux86-64 bitbake vcontainer-tarball
./tmp/deploy/sdk/vcontainer-standalone.sh -d /tmp/vcontainer -y
source /tmp/vcontainer/init-env.sh
# List images (uses host architecture by default)
vdkr images
# Explicit architecture
vdkr -a aarch64 images
# Import an OCI container
vdkr vimport ./my-container-oci/ myapp:latest
# Export storage for deployment
vdkr --storage /tmp/docker-storage.tar vimport ./container-oci/ myapp:latest
# Clean persistent state
vdkr clean
```
## Architecture Selection
vdkr detects the target architecture automatically. Override with:
| Method | Example | Priority |
|--------|---------|----------|
| `--arch` / `-a` flag | `vdkr -a aarch64 images` | Highest |
| Executable name | `vdkr-x86_64 images` | 2nd |
| `VDKR_ARCH` env var | `export VDKR_ARCH=aarch64` | 3rd |
| Config file | `~/.config/vdkr/arch` | 4th |
| Host architecture | `uname -m` | Lowest |
**Set default architecture:**
```bash
mkdir -p ~/.config/vdkr
echo "aarch64" > ~/.config/vdkr/arch
```
**Backwards-compatible symlinks:**
```bash
vdkr-aarch64 images # Same as: vdkr -a aarch64 images
vdkr-x86_64 images # Same as: vdkr -a x86_64 images
```
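The priority chain above can be pictured with a small sketch (a simplified illustration, not the actual implementation; `detect_arch` is a hypothetical name):

```bash
# Hypothetical sketch of the lookup order: flag > executable name >
# VDKR_ARCH > config file > host architecture.
detect_arch() {
    if [ -n "$1" ]; then echo "$1"; return; fi          # --arch/-a flag wins
    name=$(basename "$0")
    case "$name" in
        vdkr-*) echo "${name#vdkr-}"; return ;;          # vdkr-<arch> symlink
    esac
    if [ -n "$VDKR_ARCH" ]; then echo "$VDKR_ARCH"; return; fi   # env var
    if [ -r "$HOME/.config/vdkr/arch" ]; then                    # config file
        cat "$HOME/.config/vdkr/arch"; return
    fi
    uname -m                                             # fall back to the host
}
```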
## Commands
### Docker-Compatible (same syntax as Docker)
| Command | Description |
|---------|-------------|
| `images` | List images |
| `run [opts] <image> [cmd]` | Run a command in a container |
| `import <tarball> [name:tag]` | Import rootfs tarball |
| `load -i <file>` | Load Docker image archive |
| `save -o <file> <image>` | Save image to archive |
| `pull <image>` | Pull image from registry |
| `tag <source> <target>` | Tag an image |
| `rmi <image>` | Remove an image |
| `ps`, `rm`, `logs`, `start`, `stop` | Container management |
| `exec [opts] <container> <cmd>` | Execute in running container |
### Extended Commands (vdkr-specific)
| Command | Description |
|---------|-------------|
| `vimport <path> [name:tag]` | Import OCI directory or tarball (auto-detected) |
| `vrun [opts] <image> [cmd]` | Run with entrypoint cleared (command runs directly) |
| `clean` | Remove persistent state |
| `memres start [-p port:port]` | Start memory resident VM with optional port forwards |
| `memres stop` | Stop memory resident VM |
| `memres restart [--clean]` | Restart VM (optionally clean state) |
| `memres status` | Show memory resident VM status |
| `memres list` | List all running memres instances |
### run vs vrun
| Command | Behavior |
|---------|----------|
| `run` | Docker-compatible; the image entrypoint is honored |
| `vrun` | Clears the entrypoint when a command is given; the command runs directly |
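Conceptually, `vrun` behaves as if the Docker invocation inside the guest added `--entrypoint ""` (Docker's real flag for clearing an image's entrypoint) whenever a command is supplied. A hypothetical sketch of that translation (`build_docker_args` is an illustrative name, not part of vdkr):

```bash
# Hypothetical sketch: how vrun could translate to a docker invocation.
build_docker_args() {
    image=$1; shift
    if [ $# -gt 0 ]; then
        echo "run --entrypoint \"\" $image $*"   # vrun: command runs directly
    else
        echo "run $image"                        # no command: entrypoint honored
    fi
}
```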
## Options
| Option | Description |
|--------|-------------|
| `--arch, -a <arch>` | Target architecture (x86_64 or aarch64) |
| `--instance, -I <name>` | Use named instance (shortcut for `--state-dir ~/.vdkr/<name>`) |
| `--stateless` | Don't use persistent state |
| `--storage <file>` | Export Docker storage to tar after command |
| `--state-dir <path>` | Override state directory |
| `--no-kvm` | Disable KVM acceleration |
| `-v, --verbose` | Enable verbose output |
## Memory Resident Mode
Keep the QEMU VM running for fast command execution (~1s vs ~30s):
```bash
vdkr memres start # Start daemon
vdkr images # Fast!
vdkr pull alpine:latest # Fast!
vdkr run -it alpine /bin/sh # Interactive mode works via daemon!
vdkr memres stop # Stop daemon
```
Interactive mode (`run -it`, `vrun -it`, `exec -it`) works directly via the daemon using virtio-serial passthrough, so there is no need to stop or restart the daemon.
Note: Interactive mode combined with volume mounts (`-v`) still requires stopping the daemon temporarily.
## Port Forwarding
Forward ports from the host to containers (SSH, web servers, etc.):
```bash
# Start daemon with port forwarding
vdkr memres start -p 8080:80 # Host:8080 -> Guest:80
vdkr memres start -p 8080:80 -p 2222:22 # Multiple ports
# Run container with host networking (shares guest's network)
vdkr run -d --rm --network=host nginx:alpine
# Access from host
curl http://localhost:8080 # Access nginx
```
**How it works:**
```
Host:8080 → (QEMU hostfwd) → Guest:80 → (--network=host) → Container on port 80
```
Containers must use `--network=host` because Docker runs with `--bridge=none` inside the guest. This means the container shares the guest VM's network stack directly.
**Options:**
- `-p <host_port>:<guest_port>` - TCP forwarding (default)
- `-p <host_port>:<guest_port>/udp` - UDP forwarding
- Multiple `-p` options can be specified
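Under the hood, each `-p` option presumably maps onto a QEMU user-mode `hostfwd` rule. A hypothetical sketch of that translation (`to_hostfwd` is an illustrative name; the `hostfwd=tcp::HOST-:GUEST` syntax is QEMU's real slirp option format):

```bash
# Hypothetical sketch: turn "-p host:guest[/udp]" specs into QEMU hostfwd options.
to_hostfwd() {
    spec=${1%/*}                     # strip an optional /udp suffix
    proto=${1##*/}                   # whatever followed the slash, if anything
    [ "$proto" = "$1" ] && proto=tcp # no suffix means TCP
    echo "hostfwd=${proto}::${spec%%:*}-:${spec##*:}"
}
```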
**Managing instances:**
```bash
vdkr memres list # Show all running instances
vdkr memres start -p 9000:80 # Prompts if instance already running
vdkr -I web memres start -p 8080:80 # Start named instance "web"
vdkr -I web images # Use named instance
vdkr -I backend run -d --network=host my-api:latest
```
## Exporting Images
Two ways to export, for different purposes:
```bash
# Export a single image as Docker archive (portable, can be `docker load`ed)
vdkr save -o /tmp/myapp.tar myapp:latest
# Export entire Docker storage for deployment to target rootfs
vdkr --storage /tmp/docker-storage.tar images
```
| Method | Output | Use case |
|--------|--------|----------|
| `save -o file image:tag` | Docker archive | Share image, load on another Docker |
| `--storage file` | `/var/lib/docker` tar | Deploy to target rootfs |
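On the target, the exported storage tar would then be unpacked into `/var/lib/docker` before dockerd starts. A sketch under the assumption that the tar contains the contents of `/var/lib/docker` (`deploy_storage` and `TARGET_ROOTFS` are illustrative names, not part of vdkr):

```bash
# Hypothetical deployment step: unpack the exported storage into a target
# rootfs so dockerd finds the images on first boot.
deploy_storage() {
    storage_tar=$1; rootfs=$2
    mkdir -p "$rootfs/var/lib/docker"
    tar -xf "$storage_tar" -C "$rootfs/var/lib/docker"
}
# e.g. deploy_storage /tmp/docker-storage.tar "$TARGET_ROOTFS"
```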
## Persistent State
By default, Docker state persists in `~/.vdkr/<arch>/`. Images imported in one session are available in the next.
```bash
vdkr vimport ./container-oci/ myapp:latest
vdkr images # Shows myapp:latest
# Later...
vdkr images # Still shows myapp:latest
# Start fresh
vdkr --stateless images # Empty
# Clear state
vdkr clean
```
## Standalone SDK
Create a self-contained redistributable SDK that works without Yocto:
```bash
# Ensure multiconfig is enabled in local.conf:
# BBMULTICONFIG = "vruntime-aarch64 vruntime-x86-64"
# Step 1: Build blobs for desired architectures (sequentially to avoid deadlocks)
bitbake mc:vruntime-x86-64:vdkr-initramfs-create mc:vruntime-x86-64:vpdmn-initramfs-create
bitbake mc:vruntime-aarch64:vdkr-initramfs-create mc:vruntime-aarch64:vpdmn-initramfs-create
# Step 2: Build SDK (auto-detects available architectures)
MACHINE=qemux86-64 bitbake vcontainer-tarball
# Output: tmp/deploy/sdk/vcontainer-standalone.sh
```
To limit architectures, set in local.conf:
```bash
VCONTAINER_ARCHITECTURES = "x86_64" # x86_64 only
VCONTAINER_ARCHITECTURES = "aarch64" # aarch64 only
VCONTAINER_ARCHITECTURES = "x86_64 aarch64" # both (default if both built)
```
The SDK includes:
- `vdkr`, `vpdmn` - Main CLI scripts
- `vdkr-<arch>`, `vpdmn-<arch>` - Symlinks for each included architecture
- `vrunner.sh` - Shared QEMU runner
- `vdkr-blobs/`, `vpdmn-blobs/` - Kernel and initramfs per architecture
- `sysroots/` - SDK binaries (QEMU, socat, libraries)
- `init-env.sh` - Environment setup script
Usage:
```bash
# Install (self-extracting)
./vcontainer-standalone.sh -d /tmp/vcontainer -y
# Or extract tarball directly
tar -xf vcontainer-standalone.tar.xz -C /tmp/vcontainer
# Use
cd /tmp/vcontainer
source init-env.sh
vdkr-x86_64 images
vdkr-aarch64 images
```
## Interactive Mode
Run containers with an interactive shell:
```bash
# Interactive shell in a container
vdkr run -it alpine:latest /bin/sh
# Using vrun (clears entrypoint)
vdkr vrun -it alpine:latest /bin/sh
# Inside the container:
/ # apk add curl
/ # exit
```
## Networking
vdkr supports outbound networking via QEMU's slirp user-mode networking:
```bash
# Pull an image from a registry
vdkr pull alpine:latest
# Images persist in state directory
vdkr images # Shows alpine:latest
```
## Volume Mounts
Mount host directories into containers using `-v` (requires memory resident mode):
```bash
# Start memres first
vdkr memres start
# Mount a host directory
vdkr vrun -v /tmp/data:/data alpine cat /data/file.txt
# Mount multiple directories
vdkr vrun -v /home/user/src:/src -v /tmp/out:/out alpine /src/build.sh
# Read-only mount
vdkr vrun -v /etc/config:/config:ro alpine cat /config/settings.conf
# With run command (same syntax)
vdkr run -v ./local:/app --rm myapp:latest /app/run.sh
```
**How it works:**
- Host files are copied to the virtio-9p share directory before the container runs
- The container accesses them via the shared filesystem mount
- For `:rw` mounts (the default), changes are synced back to the host after the container exits
- For `:ro` mounts, changes made in the container are discarded
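The copy-in/copy-out behaviour can be pictured with a small sketch (hypothetical helper names and plain `cp` semantics, not the actual implementation):

```bash
# Hypothetical sketch of copy-in / copy-out volume handling over a shared dir.
share=$(mktemp -d)                       # stands in for the virtio-9p share

copy_in() {                              # before the container runs
    cp -a "$1" "$share/$(basename "$1")"
}
copy_out() {                             # after exit, only for :rw mounts
    cp -a "$share/$(basename "$1")/." "$1"
}
```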
**Limitations:**
- Requires daemon mode (memres) - volume mounts don't work in regular mode
- Interactive + volumes (`-it -v`) requires stopping daemon temporarily (share directory conflict)
- Changes sync after container exits (not real-time)
- Large directories may be slow to copy
**Debugging with volumes:**
```bash
# Run non-interactively with a shell command to inspect volume contents
vdkr vrun -v /tmp/data:/data alpine ls -la /data
# Or start the container detached and exec into it
vdkr run -d --name debug -v /tmp/data:/data alpine sleep 3600
vdkr exec debug ls -la /data
vdkr rm -f debug
```
## Testing
See `tests/README.md` for the pytest-based test suite:
```bash
# Build and install SDK
MACHINE=qemux86-64 bitbake vcontainer-tarball
./tmp/deploy/sdk/vcontainer-standalone.sh -d /tmp/vcontainer -y
# Run tests
cd /opt/bruce/poky/meta-virtualization
pytest tests/test_vdkr.py -v --vdkr-dir /tmp/vcontainer
```
## vpdmn (Podman)
vpdmn provides the same functionality as vdkr but uses Podman instead of Docker:
```bash
# Pull and run with Podman
vpdmn-x86_64 pull alpine:latest
vpdmn-x86_64 vrun alpine:latest echo hello
# Override entrypoint
vpdmn-x86_64 run --rm --entrypoint /bin/cat alpine:latest /etc/os-release
# Import OCI container
vpdmn-x86_64 vimport ./my-container-oci/ myapp:latest
```
Key differences from vdkr:
- **Daemonless** - No containerd/dockerd startup, faster boot (~5s vs ~10-15s)
- **Separate state** - Uses `~/.vpdmn/<arch>/` (images not shared with vdkr)
- **Same commands** - `images`, `pull`, `run`, `vrun`, `vimport`, etc. all work
## Recipes
| Recipe | Purpose |
|--------|---------|
| `vcontainer-tarball.bb` | Standalone SDK with vdkr and vpdmn |
| `vdkr-initramfs-create_1.0.bb` | Build vdkr initramfs blobs |
| `vpdmn-initramfs-create_1.0.bb` | Build vpdmn initramfs blobs |
## Files
| File | Purpose |
|------|---------|
| `vdkr.sh` | Docker CLI wrapper |
| `vpdmn.sh` | Podman CLI wrapper |
| `vrunner.sh` | Shared QEMU runner script |
| `vdkr-init.sh` | Docker init script (baked into initramfs) |
| `vpdmn-init.sh` | Podman init script (daemonless) |
## Testing Both Tools
```bash
# Build and install SDK (includes both vdkr and vpdmn)
MACHINE=qemux86-64 bitbake vcontainer-tarball
./tmp/deploy/sdk/vcontainer-standalone.sh -d /tmp/vcontainer -y
# Run tests for both tools
cd /opt/bruce/poky/meta-virtualization
pytest tests/test_vdkr.py tests/test_vpdmn.py -v --vdkr-dir /tmp/vcontainer
```
## See Also
- `classes/container-cross-install.bbclass` for bundling containers into Yocto images
- `classes/container-bundle.bbclass` for creating container bundle packages
- `tests/README.md` for test documentation