Diffstat (limited to 'meta/recipes-devtools/qemu/qemu/CVE-2020-27821.patch')
 meta/recipes-devtools/qemu/qemu/CVE-2020-27821.patch | 73
 1 file changed, 73 insertions, 0 deletions
diff --git a/meta/recipes-devtools/qemu/qemu/CVE-2020-27821.patch b/meta/recipes-devtools/qemu/qemu/CVE-2020-27821.patch
new file mode 100644
index 0000000000..e26bc31bbb
--- /dev/null
+++ b/meta/recipes-devtools/qemu/qemu/CVE-2020-27821.patch
@@ -0,0 +1,73 @@
From 15222d4636d742f3395fd211fad0cd7e36d9f43e Mon Sep 17 00:00:00 2001
From: Hitendra Prajapati <hprajapati@mvista.com>
Date: Tue, 16 Aug 2022 10:07:01 +0530
Subject: [PATCH] CVE-2020-27821

Upstream-Status: Backport [https://git.qemu.org/?p=qemu.git;a=commit;h=4bfb024bc76973d40a359476dc0291f46e435442]
CVE: CVE-2020-27821
Signed-off-by: Hitendra Prajapati <hprajapati@mvista.com>

memory: clamp cached translation in case it points to an MMIO region

In using the address_space_translate_internal API, address_space_cache_init
forgot one piece of advice that can be found in the code for
address_space_translate_internal:

    /* MMIO registers can be expected to perform full-width accesses based only
     * on their address, without considering adjacent registers that could
     * decode to completely different MemoryRegions. When such registers
     * exist (e.g. I/O ports 0xcf8 and 0xcf9 on most PC chipsets), MMIO
     * regions overlap wildly. For this reason we cannot clamp the accesses
     * here.
     *
     * If the length is small (as is the case for address_space_ldl/stl),
     * everything works fine. If the incoming length is large, however,
     * the caller really has to do the clamping through memory_access_size.
     */

address_space_cache_init is exactly one such case where "the incoming length
is large", therefore we need to clamp the resulting length---not to
memory_access_size though, since we are not doing an access yet, but to
the size of the resulting section. This ensures that subsequent accesses
to the cached MemoryRegionSection will be in range.

With this patch, the enclosed testcase notices that the used ring does
not fit into the MSI-X table and prints a "qemu-system-x86_64: Cannot map used"
error.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 exec.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/exec.c b/exec.c
index 2d6add46..1360051a 100644
--- a/exec.c
+++ b/exec.c
@@ -3632,6 +3632,7 @@ int64_t address_space_cache_init(MemoryRegionCache *cache,
     AddressSpaceDispatch *d;
     hwaddr l;
     MemoryRegion *mr;
+    Int128 diff;

     assert(len > 0);

@@ -3640,6 +3641,15 @@ int64_t address_space_cache_init(MemoryRegionCache *cache,
     d = flatview_to_dispatch(cache->fv);
     cache->mrs = *address_space_translate_internal(d, addr, &cache->xlat, &l, true);

+    /*
+     * cache->xlat is now relative to cache->mrs.mr, not to the section itself.
+     * Take that into account to compute how many bytes are there between
+     * cache->xlat and the end of the section.
+     */
+    diff = int128_sub(cache->mrs.size,
+                      int128_make64(cache->xlat - cache->mrs.offset_within_region));
+    l = int128_get64(int128_min(diff, int128_make64(l)));
+
     mr = cache->mrs.mr;
     memory_region_ref(mr);
     if (memory_access_is_direct(mr, is_write)) {
--
2.25.1