Diffstat (limited to 'recipes-kernel/linux/files/0003-shmem-CVE-2014-4171.patch')
-rw-r--r--  recipes-kernel/linux/files/0003-shmem-CVE-2014-4171.patch  134
1 file changed, 0 insertions(+), 134 deletions(-)
diff --git a/recipes-kernel/linux/files/0003-shmem-CVE-2014-4171.patch b/recipes-kernel/linux/files/0003-shmem-CVE-2014-4171.patch
deleted file mode 100644
index 2b70ec1..0000000
--- a/recipes-kernel/linux/files/0003-shmem-CVE-2014-4171.patch
+++ /dev/null
@@ -1,134 +0,0 @@
From a428dc008e435c5a36b1288fb5b8c4b58472e28c Mon Sep 17 00:00:00 2001
From: Hugh Dickins <hughd@google.com>
Date: Wed, 23 Jul 2014 14:00:13 -0700
Subject: [PATCH 3/3] shmem: fix splicing from a hole while it's punched

commit b1a366500bd537b50c3aad26dc7df083ec03a448 upstream.

shmem_fault() is the actual culprit in trinity's hole-punch starvation,
and the most significant cause of such problems: since a page faulted is
one that then appears page_mapped(), needing unmap_mapping_range() and
i_mmap_mutex to be unmapped again.

But it is not the only way in which a page can be brought into a hole in
the radix_tree while that hole is being punched; and Vlastimil's testing
implies that if enough other processors are busy filling in the hole,
then shmem_undo_range() can be kept from completing indefinitely.
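
Vlastimil's scenario can be pictured from userspace. A minimal sketch
(not part of this patch; the tmpfs path, file size and thread count are
arbitrary, and error handling is omitted): a few threads faulting pages
back in while another punches the same range is the kind of load that
can keep shmem_undo_range() rescanning on an unfixed kernel:

	#define _GNU_SOURCE		/* fallocate() and FALLOC_FL_* */
	#include <fcntl.h>
	#include <pthread.h>
	#include <string.h>
	#include <sys/mman.h>
	#include <unistd.h>

	#define LEN (64 << 20)		/* arbitrary 64MB file */

	static char *map;

	/* Fault pages back into the hole, as shmem_fault() does. */
	static void *faulter(void *unused)
	{
		for (;;)
			memset(map, 1, LEN);
		return NULL;
	}

	int main(void)
	{
		/* /dev/shm is normally tmpfs; any shmem file works */
		int fd = open("/dev/shm/punch-test", O_CREAT | O_RDWR, 0600);
		pthread_t t;
		int i;

		ftruncate(fd, LEN);
		map = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
			   MAP_SHARED, fd, 0);
		for (i = 0; i < 4; i++)
			pthread_create(&t, NULL, faulter, NULL);
		for (;;)
			fallocate(fd, FALLOC_FL_PUNCH_HOLE |
				  FALLOC_FL_KEEP_SIZE, 0, LEN);
	}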

shmem_file_splice_read() is the main other user of SGP_CACHE, which can
instantiate shmem pagecache pages in the read-only case (without holding
i_mutex, so perhaps concurrently with a hole-punch). Probably it's
silly not to use SGP_READ already (using the ZERO_PAGE for holes): which
ought to be safe, but might bring surprises - not a change to be rushed.
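
Concretely, the untaken SGP_READ route would look something like this in
the splice path (a hypothetical sketch, not part of this patch, modelled
on how the ordinary read path handles holes):

	/* hypothetical: SGP_READ does not instantiate pages for holes */
	error = shmem_getpage(inode, index, &page, SGP_READ, NULL);
	if (!error && !page) {
		page = ZERO_PAGE(0);	/* read the hole as zeroes */
		page_cache_get(page);
	}

whereas SGP_CACHE allocates and inserts a real pagecache page at index,
which is exactly what can refill a hole in mid-punch.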

shmem_read_mapping_page_gfp() is an internal interface used by
drivers/gpu/drm GEM (and next by uprobes): it should be okay. And
shmem_file_read_iter() uses the SGP_DIRTY variant of SGP_CACHE, when
called internally by the kernel (perhaps for a stacking filesystem,
which might rely on holes to be reserved): it's unclear whether it could
be provoked to keep hole-punch busy or not.

We could apply the same umbrella as now used in shmem_fault() to
shmem_file_splice_read() and the others; but it looks ugly, and use over
a range raises questions - should it actually be per page? can these
get starved themselves?

The origin of this part of the problem is my v3.1 commit d0823576bf4b
("mm: pincer in truncate_inode_pages_range"), once it was duplicated
into shmem.c. It seemed like a nice idea at the time, to ensure
(barring RCU lookup fuzziness) that there's an instant when the entire
hole is empty; but the indefinitely repeated scans to ensure that make
it vulnerable.
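
Reduced to a sketch (paraphrased from the pre-patch shmem_undo_range();
"nothing_found" stands in for the !pvec.nr test), that duplicated pincer
looks like this, and the restart is what concurrent faulters can trigger
forever:

	index = start;
	for ( ; ; ) {
		/* scan forward from index, dropping whatever is found */
		if (nothing_found) {
			if (index == start || unfalloc)
				break;	/* one clean pass: hole is empty */
			index = start;	/* else restart and rescan it all */
			continue;
		}
		...
	}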

Revert that "enhancement" to hole-punch from shmem_undo_range(), but
retain the unproblematic rescanning when it's truncating; add a couple
of comments there.

Remove the "indices[0] >= end" test: that is now handled satisfactorily
by the inner loop, and mem_cgroup_uncharge_start()/end() are too light
to be worth avoiding here.

But if we do not always loop indefinitely, we do need to handle the case
of swap swizzled back to page before shmem_free_swap() gets it: add a
retry for that case, as suggested by Konstantin Khlebnikov; and for the
case of page swizzled back to swap, as suggested by Johannes Weiner.
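
Both retries use the same small idiom, visible in the hunks below: step
index back one and break out of the pagevec walk. The outer loop ends
with an index++ (context not shown in the hunks), so the next lookup
starts at the very slot whose entry was swizzled; roughly:

	for (i = 0; i < pagevec_count(&pvec); i++) {
		index = indices[i];
		...
		index--;	/* entry changed under us: revisit this slot */
		break;
	}
	...
	index++;	/* bottom of the outer while loop */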

Upstream-Status: Backport

Signed-off-by: Hugh Dickins <hughd@google.com>
Reported-by: Sasha Levin <sasha.levin@oracle.com>
Suggested-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Konstantin Khlebnikov <koct9i@gmail.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Lukas Czerner <lczerner@redhat.com>
Cc: Dave Jones <davej@redhat.com>
Cc: <stable@vger.kernel.org> [3.1+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Signed-off-by: Sona Sarmadi <sona.sarmadi@enea.com>
---
 mm/shmem.c | 24 +++++++++++++++---------
 1 file changed, 15 insertions(+), 9 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index 6f5626f..0da81aa 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -534,22 +534,19 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
 		return;

 	index = start;
-	for ( ; ; ) {
+	while (index < end) {
 		cond_resched();
 		pvec.nr = shmem_find_get_pages_and_swap(mapping, index,
 				min(end - index, (pgoff_t)PAGEVEC_SIZE),
 							pvec.pages, indices);
 		if (!pvec.nr) {
-			if (index == start || unfalloc)
+			/* If all gone or hole-punch or unfalloc, we're done */
+			if (index == start || end != -1)
 				break;
+			/* But if truncating, restart to make sure all gone */
 			index = start;
 			continue;
 		}
-		if ((index == start || unfalloc) && indices[0] >= end) {
-			shmem_deswap_pagevec(&pvec);
-			pagevec_release(&pvec);
-			break;
-		}
 		mem_cgroup_uncharge_start();
 		for (i = 0; i < pagevec_count(&pvec); i++) {
 			struct page *page = pvec.pages[i];
@@ -561,8 +558,12 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
 			if (radix_tree_exceptional_entry(page)) {
 				if (unfalloc)
 					continue;
-				nr_swaps_freed += !shmem_free_swap(mapping,
-								index, page);
+				if (shmem_free_swap(mapping, index, page)) {
+					/* Swap was replaced by page: retry */
+					index--;
+					break;
+				}
+				nr_swaps_freed++;
 				continue;
 			}

@@ -571,6 +572,11 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
 			if (page->mapping == mapping) {
 				VM_BUG_ON(PageWriteback(page));
 				truncate_inode_page(mapping, page);
+			} else {
+				/* Page was replaced by swap: retry */
+				unlock_page(page);
+				index--;
+				break;
 			}
 		}
 		unlock_page(page);
132--
1.9.1