From 2d6e98e65e028f15019a085f5e705496a1c90175 Mon Sep 17 00:00:00 2001
From: Ting Liu
Date: Fri, 17 Jul 2015 15:17:38 +0800
Subject: linux-qoriq: update to revision f488de6

Minor version update to 3.12.37-rt51 with new features:

* e6500 hugepage TLB miss performance improvement
* T1023RDB support
* T1040D4RDB and T1042D4RDB support
* DIU [T1042]
* DPAA Ethernet: loadable module
* eMMC: DDR mode [T2080]
* eTSEC: Gianfar upstream updates and fixes
* fmlib: table statistics, stats extension
* IEEE802.1AE (MACSEC) and IEEE802.1X (port-based network access
  control) [T104x, T102x]
* IEEE1588: the ptpd open source stack now includes more DPAA
  processors: P1023, P2041, P3041, P5020, P5040, T4240, T1023
* LAG SGMII 2.5G ports support - IPv4 traffic forwarding on
  aggregated 2 x 2.5Gb L2 Switch FMAN ports [1040]
* LAG support for IPv6 traffic forwarding and TCP/UDP over IPv6
  traffic forwarding (2 x 2.5Gb L2 Switch WAN) [1040]
* LAG support for IPv6 traffic forwarding and TCP/UDP over IPv6
  traffic forwarding on both 1G RGMII and 1G SGMII ports [1040]
* Power Management: power-off feature for all QDS boards except
  B9132QDS and B4860QDS
* SEC: QI driver IPsec performance improvement
* SGMII 2.5G fixed link [T1024]
* USB: Dual UTMI

For detailed history, see
http://git.freescale.com/git/cgit.cgi/ppc/sdk/linux.git/tag/?id=fsl-sdk-v1.8

Also remove the patches that were already merged in 3.12.37-rt51.

Signed-off-by: Ting Liu
Acked-by: Otavio Salvador
---
 .../linux/files/0003-shmem-CVE-2014-4171.patch | 134 ---------------------
 1 file changed, 134 deletions(-)
 delete mode 100644 recipes-kernel/linux/files/0003-shmem-CVE-2014-4171.patch

diff --git a/recipes-kernel/linux/files/0003-shmem-CVE-2014-4171.patch b/recipes-kernel/linux/files/0003-shmem-CVE-2014-4171.patch
deleted file mode 100644
index 2b70ec1..0000000
--- a/recipes-kernel/linux/files/0003-shmem-CVE-2014-4171.patch
+++ /dev/null
@@ -1,134 +0,0 @@
-From a428dc008e435c5a36b1288fb5b8c4b58472e28c Mon Sep 17 00:00:00 2001
-From: Hugh Dickins
-Date: Wed, 23 Jul 2014 14:00:13 -0700
-Subject: [PATCH 3/3] shmem: fix splicing from a hole while it's punched
-
-commit b1a366500bd537b50c3aad26dc7df083ec03a448 upstream.
-
-shmem_fault() is the actual culprit in trinity's hole-punch starvation,
-and the most significant cause of such problems: since a page faulted is
-one that then appears page_mapped(), needing unmap_mapping_range() and
-i_mmap_mutex to be unmapped again.
-
-But it is not the only way in which a page can be brought into a hole in
-the radix_tree while that hole is being punched; and Vlastimil's testing
-implies that if enough other processors are busy filling in the hole,
-then shmem_undo_range() can be kept from completing indefinitely.
-
-shmem_file_splice_read() is the main other user of SGP_CACHE, which can
-instantiate shmem pagecache pages in the read-only case (without holding
-i_mutex, so perhaps concurrently with a hole-punch). Probably it's
-silly not to use SGP_READ already (using the ZERO_PAGE for holes): which
-ought to be safe, but might bring surprises - not a change to be rushed.
-
-shmem_read_mapping_page_gfp() is an internal interface used by
-drivers/gpu/drm GEM (and next by uprobes): it should be okay. And
-shmem_file_read_iter() uses the SGP_DIRTY variant of SGP_CACHE, when
-called internally by the kernel (perhaps for a stacking filesystem,
-which might rely on holes to be reserved): it's unclear whether it could
-be provoked to keep hole-punch busy or not.
-
-We could apply the same umbrella as now used in shmem_fault() to
-shmem_file_splice_read() and the others; but it looks ugly, and use over
-a range raises questions - should it actually be per page? can these get
-starved themselves?
-
-The origin of this part of the problem is my v3.1 commit d0823576bf4b
-("mm: pincer in truncate_inode_pages_range"), once it was duplicated
-into shmem.c. It seemed like a nice idea at the time, to ensure
-(barring RCU lookup fuzziness) that there's an instant when the entire
-hole is empty; but the indefinitely repeated scans to ensure that make
-it vulnerable.
-
-Revert that "enhancement" to hole-punch from shmem_undo_range(), but
-retain the unproblematic rescanning when it's truncating; add a couple
-of comments there.
-
-Remove the "indices[0] >= end" test: that is now handled satisfactorily
-by the inner loop, and mem_cgroup_uncharge_start()/end() are too light
-to be worth avoiding here.
-
-But if we do not always loop indefinitely, we do need to handle the case
-of swap swizzled back to page before shmem_free_swap() gets it: add a
-retry for that case, as suggested by Konstantin Khlebnikov; and for the
-case of page swizzled back to swap, as suggested by Johannes Weiner.
-
-Upstream-Status: Backport
-
-Signed-off-by: Hugh Dickins
-Reported-by: Sasha Levin
-Suggested-by: Vlastimil Babka
-Cc: Konstantin Khlebnikov
-Cc: Johannes Weiner
-Cc: Lukas Czerner
-Cc: Dave Jones
-Cc: [3.1+]
-Signed-off-by: Andrew Morton
-Signed-off-by: Linus Torvalds
-Signed-off-by: Jiri Slaby
-Signed-off-by: Sona Sarmadi
----
- mm/shmem.c | 24 +++++++++++++++---------
- 1 file changed, 15 insertions(+), 9 deletions(-)
-
-diff --git a/mm/shmem.c b/mm/shmem.c
-index 6f5626f..0da81aa 100644
---- a/mm/shmem.c
-+++ b/mm/shmem.c
-@@ -534,22 +534,19 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
- 		return;
- 
- 	index = start;
--	for ( ; ; ) {
-+	while (index < end) {
- 		cond_resched();
- 		pvec.nr = shmem_find_get_pages_and_swap(mapping, index,
- 				min(end - index, (pgoff_t)PAGEVEC_SIZE),
- 							pvec.pages, indices);
- 		if (!pvec.nr) {
--			if (index == start || unfalloc)
-+			/* If all gone or hole-punch or unfalloc, we're done */
-+			if (index == start || end != -1)
- 				break;
-+			/* But if truncating, restart to make sure all gone */
- 			index = start;
- 			continue;
- 		}
--		if ((index == start || unfalloc) && indices[0] >= end) {
--			shmem_deswap_pagevec(&pvec);
--			pagevec_release(&pvec);
--			break;
--		}
- 		mem_cgroup_uncharge_start();
- 		for (i = 0; i < pagevec_count(&pvec); i++) {
- 			struct page *page = pvec.pages[i];
-@@ -561,8 +558,12 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
- 			if (radix_tree_exceptional_entry(page)) {
- 				if (unfalloc)
- 					continue;
--				nr_swaps_freed += !shmem_free_swap(mapping,
--							index, page);
-+				if (shmem_free_swap(mapping, index, page)) {
-+					/* Swap was replaced by page: retry */
-+					index--;
-+					break;
-+				}
-+				nr_swaps_freed++;
- 				continue;
- 			}
- 
-@@ -571,6 +572,11 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
- 			if (page->mapping == mapping) {
- 				VM_BUG_ON(PageWriteback(page));
- 				truncate_inode_page(mapping, page);
-+			} else {
-+				/* Page was replaced by swap: retry */
-+				unlock_page(page);
-+				index--;
-+				break;
- 			}
- 		}
- 		unlock_page(page);
---
-1.9.1
-