Diffstat (limited to 'meta/recipes-devtools/gcc/gcc-9.3/0003-aarch64-Mitigate-SLS-for-BLR-instruction.patch')
-rw-r--r--  meta/recipes-devtools/gcc/gcc-9.3/0003-aarch64-Mitigate-SLS-for-BLR-instruction.patch | 659
1 files changed, 0 insertions, 659 deletions
diff --git a/meta/recipes-devtools/gcc/gcc-9.3/0003-aarch64-Mitigate-SLS-for-BLR-instruction.patch b/meta/recipes-devtools/gcc/gcc-9.3/0003-aarch64-Mitigate-SLS-for-BLR-instruction.patch
deleted file mode 100644
index 6dffef0a34..0000000000
--- a/meta/recipes-devtools/gcc/gcc-9.3/0003-aarch64-Mitigate-SLS-for-BLR-instruction.patch
+++ /dev/null
@@ -1,659 +0,0 @@
1CVE: CVE-2020-13844
2Upstream-Status: Backport
3Signed-off-by: Ross Burton <ross.burton@arm.com>
4
5From 2155170525f93093b90a1a065e7ed71a925566e9 Mon Sep 17 00:00:00 2001
6From: Matthew Malcomson <matthew.malcomson@arm.com>
7Date: Thu, 9 Jul 2020 09:11:59 +0100
8Subject: [PATCH 3/3] aarch64: Mitigate SLS for BLR instruction
9
10This patch introduces the mitigation for Straight Line Speculation past
11the BLR instruction.
12
13This mitigation replaces BLR instructions with a BL to a stub which uses
14a BR to jump to the original value. These function stubs are then
15appended with a speculation barrier to ensure no straight line
16speculation happens after these jumps.
17
18When optimising for speed we use a set of stubs for each function since
19this should help the branch predictor make more accurate predictions
20about where a stub should branch.
21
22When optimising for size we use one set of stubs for all functions.
23This set of stubs can have human readable names, and we are using
24`__call_indirect_x<N>` for register x<N>.
25
26When BTI branch protection is enabled the BLR instruction can jump to a
27`BTI c` instruction using any register, while the BR instruction can
28only jump to a `BTI c` instruction using the x16 or x17 registers.
29Hence, in order to ensure this transformation is safe we mov the value
30of the original register into x16 and use x16 for the BR.
31
32As an example when optimising for size:
33a
34 BLR x0
35instruction would get transformed to something like
36 BL __call_indirect_x0
37where __call_indirect_x0 labels a thunk that contains
38__call_indirect_x0:
39 MOV X16, X0
40 BR X16
41 <speculation barrier>
42
43The first version of this patch used local symbols specific to a
44compilation unit to try and avoid relocations.
45This was mistaken since functions coming from the same compilation unit
46can still be in different sections, and the assembler will insert
47relocations at jumps between sections.
48
49On any relocation the linker is permitted to emit a veneer to handle
50jumps between symbols that are very far apart. The registers x16 and
51x17 may be clobbered by these veneers.
52Hence the function stubs cannot rely on the values of x16 and x17 being
53the same as just before the function stub is called.
54
55Similar can be said for the hot/cold partitioning of single functions,
56so function-local stubs have the same restriction.
57
58This updated version of the patch never emits function stubs for x16 and
59x17, and instead forces other registers to be used.
60
61Given the above, there is now no benefit to local symbols (since they
62are not enough to avoid dealing with linker intricacies). This patch
63now uses global symbols with hidden visibility each stored in their own
64COMDAT section. This means stubs can be shared between compilation
65units while still avoiding the PLT indirection.
66
67This patch also removes the `__call_indirect_x30` stub (and
68function-local equivalent) which would simply jump back to the original
69location.
70
71The function-local stubs are emitted to the assembly output file in one
72chunk, which means we need not add the speculation barrier directly
73after each one.
74This is because we know for certain that the instructions directly after
75the BR in all but the last function stub will be from another one of
76these stubs and hence will not contain a speculation gadget.
77Instead we add a speculation barrier at the end of the sequence of
78stubs.
79
80The global stubs are emitted in COMDAT/.linkonce sections by
81themselves so that the linker can remove duplicates from multiple object
82files. This means they are not emitted in one chunk, and each one must
83include the speculation barrier.
84
85Another difference is that since the global stubs are shared across
86compilation units we do not know that all functions will be targeting an
87architecture supporting the SB instruction.
88Rather than provide multiple stubs for each architecture, we provide a
89stub that will work for all architectures -- using the DSB+ISB barrier.
90
91This mitigation does not apply for BLR instructions in the following
92places:
93- Some accesses to thread-local variables use a code sequence with a BLR
94 instruction. This code sequence is part of the binary interface between
95 compiler and linker. If this BLR instruction needs to be mitigated, it'd
96 probably be best to do so in the linker. It seems that the code sequence
97 for thread-local variable access is unlikely to lead to a Spectre Revelation
98 Gadget.
99- PLT stubs are produced by the linker and each contain a BLR instruction.
100 It seems that, at most, a Spectre Revelation Gadget might appear only after
101 the last PLT stub.
102
103Testing:
104 Bootstrap and regtest on AArch64
105 (with BOOT_CFLAGS="-mharden-sls=retbr,blr")
106 Used a temporary hack(1) in gcc-dg.exp to use these options on every
107 test in the testsuite, a slight modification to emit the speculation
108 barrier after every function stub, and a script to check that the
109 output never emitted a BLR, or unmitigated BR or RET instruction.
110 Similar on an aarch64-none-elf cross-compiler.
111
1121) Temporary hack emitted a speculation barrier at the end of every stub
113function, and used a script to ensure that:
114 a) Every RET or BR is immediately followed by a speculation barrier.
115 b) No BLR instruction is emitted by compiler.
116
117(cherry picked from 96b7f495f9269d5448822e4fc28882edb35a58d7)
118
119gcc/ChangeLog:
120
121 * config/aarch64/aarch64-protos.h (aarch64_indirect_call_asm):
122 New declaration.
123 * config/aarch64/aarch64.c (aarch64_regno_regclass): Handle new
124 stub registers class.
125 (aarch64_class_max_nregs): Likewise.
126 (aarch64_register_move_cost): Likewise.
127 (aarch64_sls_shared_thunks): Global array to store stub labels.
128 (aarch64_sls_emit_function_stub): New.
129 (aarch64_create_blr_label): New.
130 (aarch64_sls_emit_blr_function_thunks): New.
131 (aarch64_sls_emit_shared_blr_thunks): New.
132 (aarch64_asm_file_end): New.
133 (aarch64_indirect_call_asm): New.
134 (TARGET_ASM_FILE_END): Use aarch64_asm_file_end.
135 (TARGET_ASM_FUNCTION_EPILOGUE): Use
136 aarch64_sls_emit_blr_function_thunks.
137 * config/aarch64/aarch64.h (STUB_REGNUM_P): New.
138 (enum reg_class): Add STUB_REGS class.
139 (machine_function): Introduce `call_via` array for
140 function-local stub labels.
141 * config/aarch64/aarch64.md (*call_insn, *call_value_insn): Use
142 aarch64_indirect_call_asm to emit code when hardening BLR
143 instructions.
144 * config/aarch64/constraints.md (Ucr): New constraint
145 representing registers for indirect calls. Is GENERAL_REGS
146 usually, and STUB_REGS when hardening BLR instruction against
147 SLS.
148 * config/aarch64/predicates.md (aarch64_general_reg): STUB_REGS class
149 is also a general register.
150
151gcc/testsuite/ChangeLog:
152
153 * gcc.target/aarch64/sls-mitigation/sls-miti-blr-bti.c: New test.
154 * gcc.target/aarch64/sls-mitigation/sls-miti-blr.c: New test.
155---
156 gcc/config/aarch64/aarch64-protos.h | 1 +
157 gcc/config/aarch64/aarch64.c | 225 +++++++++++++++++-
158 gcc/config/aarch64/aarch64.h | 15 ++
159 gcc/config/aarch64/aarch64.md | 11 +-
160 gcc/config/aarch64/constraints.md | 9 +
161 gcc/config/aarch64/predicates.md | 3 +-
162 .../aarch64/sls-mitigation/sls-miti-blr-bti.c | 40 ++++
163 .../aarch64/sls-mitigation/sls-miti-blr.c | 33 +++
164 8 files changed, 328 insertions(+), 9 deletions(-)
165 create mode 100644 gcc/testsuite/gcc.target/aarch64/sls-mitigation/sls-miti-blr-bti.c
166 create mode 100644 gcc/testsuite/gcc.target/aarch64/sls-mitigation/sls-miti-blr.c
167
168diff --git a/gcc/config/aarch64/aarch64-protos.h b/gcc/config/aarch64/aarch64-protos.h
169index 885eae893..2676e43ae 100644
170--- a/gcc/config/aarch64/aarch64-protos.h
171+++ b/gcc/config/aarch64/aarch64-protos.h
172@@ -645,6 +645,7 @@ poly_uint64 aarch64_regmode_natural_size (machine_mode);
173 bool aarch64_high_bits_all_ones_p (HOST_WIDE_INT);
174
175 const char *aarch64_sls_barrier (int);
176+const char *aarch64_indirect_call_asm (rtx);
177 extern bool aarch64_harden_sls_retbr_p (void);
178 extern bool aarch64_harden_sls_blr_p (void);
179
180diff --git a/gcc/config/aarch64/aarch64.c b/gcc/config/aarch64/aarch64.c
181index dff61105c..bc6c02c3a 100644
182--- a/gcc/config/aarch64/aarch64.c
183+++ b/gcc/config/aarch64/aarch64.c
184@@ -8190,6 +8190,9 @@ aarch64_label_mentioned_p (rtx x)
185 enum reg_class
186 aarch64_regno_regclass (unsigned regno)
187 {
188+ if (STUB_REGNUM_P (regno))
189+ return STUB_REGS;
190+
191 if (GP_REGNUM_P (regno))
192 return GENERAL_REGS;
193
194@@ -8499,6 +8502,7 @@ aarch64_class_max_nregs (reg_class_t regclass, machine_mode mode)
195 unsigned int nregs;
196 switch (regclass)
197 {
198+ case STUB_REGS:
199 case TAILCALL_ADDR_REGS:
200 case POINTER_REGS:
201 case GENERAL_REGS:
202@@ -10693,10 +10697,12 @@ aarch64_register_move_cost (machine_mode mode,
203 = aarch64_tune_params.regmove_cost;
204
205 /* Caller save and pointer regs are equivalent to GENERAL_REGS. */
206- if (to == TAILCALL_ADDR_REGS || to == POINTER_REGS)
207+ if (to == TAILCALL_ADDR_REGS || to == POINTER_REGS
208+ || to == STUB_REGS)
209 to = GENERAL_REGS;
210
211- if (from == TAILCALL_ADDR_REGS || from == POINTER_REGS)
212+ if (from == TAILCALL_ADDR_REGS || from == POINTER_REGS
213+ || from == STUB_REGS)
214 from = GENERAL_REGS;
215
216 /* Moving between GPR and stack cost is the same as GP2GP. */
217@@ -19009,6 +19015,215 @@ aarch64_sls_barrier (int mitigation_required)
218 : "";
219 }
220
221+static GTY (()) tree aarch64_sls_shared_thunks[30];
222+static GTY (()) bool aarch64_sls_shared_thunks_needed = false;
223+const char *indirect_symbol_names[30] = {
224+ "__call_indirect_x0",
225+ "__call_indirect_x1",
226+ "__call_indirect_x2",
227+ "__call_indirect_x3",
228+ "__call_indirect_x4",
229+ "__call_indirect_x5",
230+ "__call_indirect_x6",
231+ "__call_indirect_x7",
232+ "__call_indirect_x8",
233+ "__call_indirect_x9",
234+ "__call_indirect_x10",
235+ "__call_indirect_x11",
236+ "__call_indirect_x12",
237+ "__call_indirect_x13",
238+ "__call_indirect_x14",
239+ "__call_indirect_x15",
240+ "", /* "__call_indirect_x16", */
241+ "", /* "__call_indirect_x17", */
242+ "__call_indirect_x18",
243+ "__call_indirect_x19",
244+ "__call_indirect_x20",
245+ "__call_indirect_x21",
246+ "__call_indirect_x22",
247+ "__call_indirect_x23",
248+ "__call_indirect_x24",
249+ "__call_indirect_x25",
250+ "__call_indirect_x26",
251+ "__call_indirect_x27",
252+ "__call_indirect_x28",
253+ "__call_indirect_x29",
254+};
255+
256+/* Function to create a BLR thunk. This thunk is used to mitigate straight
257+ line speculation. Instead of a simple BLR that can be speculated past,
258+ we emit a BL to this thunk, and this thunk contains a BR to the relevant
259+ register. These thunks have the relevant speculation barriers put after
260+ their indirect branch so that speculation is blocked.
261+
262+ We use such a thunk so the speculation barriers are kept off the
263+ architecturally executed path in order to reduce the performance overhead.
264+
265+ When optimizing for size we use stubs shared by the linked object.
266+ When optimizing for performance we emit stubs for each function in the hope
267+ that the branch predictor can better train on jumps specific for a given
268+ function. */
269+rtx
270+aarch64_sls_create_blr_label (int regnum)
271+{
272+ gcc_assert (STUB_REGNUM_P (regnum));
273+ if (optimize_function_for_size_p (cfun))
274+ {
275+ /* For the thunks shared between different functions in this compilation
276+ unit we use a named symbol -- this is just for users to more easily
277+ understand the generated assembly. */
278+ aarch64_sls_shared_thunks_needed = true;
279+ const char *thunk_name = indirect_symbol_names[regnum];
280+ if (aarch64_sls_shared_thunks[regnum] == NULL)
281+ {
282+ /* Build a decl representing this function stub and record it for
283+ later. We build a decl here so we can use the GCC machinery for
284+ handling sections automatically (through `get_named_section` and
285+ `make_decl_one_only`). That saves us a lot of trouble handling
286+ the specifics of different output file formats. */
287+ tree decl = build_decl (BUILTINS_LOCATION, FUNCTION_DECL,
288+ get_identifier (thunk_name),
289+ build_function_type_list (void_type_node,
290+ NULL_TREE));
291+ DECL_RESULT (decl) = build_decl (BUILTINS_LOCATION, RESULT_DECL,
292+ NULL_TREE, void_type_node);
293+ TREE_PUBLIC (decl) = 1;
294+ TREE_STATIC (decl) = 1;
295+ DECL_IGNORED_P (decl) = 1;
296+ DECL_ARTIFICIAL (decl) = 1;
297+ make_decl_one_only (decl, DECL_ASSEMBLER_NAME (decl));
298+ resolve_unique_section (decl, 0, false);
299+ aarch64_sls_shared_thunks[regnum] = decl;
300+ }
301+
302+ return gen_rtx_SYMBOL_REF (Pmode, thunk_name);
303+ }
304+
305+ if (cfun->machine->call_via[regnum] == NULL)
306+ cfun->machine->call_via[regnum]
307+ = gen_rtx_LABEL_REF (Pmode, gen_label_rtx ());
308+ return cfun->machine->call_via[regnum];
309+}
310+
311+/* Helper function for aarch64_sls_emit_blr_function_thunks and
312+ aarch64_sls_emit_shared_blr_thunks below. */
313+static void
314+aarch64_sls_emit_function_stub (FILE *out_file, int regnum)
315+{
316+ /* Save in x16 and branch to that function so this transformation does
317+ not prevent jumping to `BTI c` instructions. */
318+ asm_fprintf (out_file, "\tmov\tx16, x%d\n", regnum);
319+ asm_fprintf (out_file, "\tbr\tx16\n");
320+}
321+
322+/* Emit all BLR stubs for this particular function.
323+ Here we emit all the BLR stubs needed for the current function. Since we
324+ emit these stubs in a consecutive block we know there will be no speculation
325+ gadgets between each stub, and hence we only emit a speculation barrier at
326+ the end of the stub sequences.
327+
328+ This is called in the TARGET_ASM_FUNCTION_EPILOGUE hook. */
329+void
330+aarch64_sls_emit_blr_function_thunks (FILE *out_file)
331+{
332+ if (! aarch64_harden_sls_blr_p ())
333+ return;
334+
335+ bool any_functions_emitted = false;
336+ /* We must save and restore the current function section since this assembly
337+ is emitted at the end of the function. This means it can be emitted *just
338+ after* the cold section of a function. That cold part would be emitted in
339+ a different section. That switch would trigger a `.cfi_endproc` directive
340+ to be emitted in the original section and a `.cfi_startproc` directive to
341+ be emitted in the new section. Switching to the original section without
342+ restoring would mean that the `.cfi_endproc` emitted as a function ends
343+ would happen in a different section -- leaving an unmatched
344+ `.cfi_startproc` in the cold text section and an unmatched `.cfi_endproc`
345+ in the standard text section. */
346+ section *save_text_section = in_section;
347+ switch_to_section (function_section (current_function_decl));
348+ for (int regnum = 0; regnum < 30; ++regnum)
349+ {
350+ rtx specu_label = cfun->machine->call_via[regnum];
351+ if (specu_label == NULL)
352+ continue;
353+
354+ targetm.asm_out.print_operand (out_file, specu_label, 0);
355+ asm_fprintf (out_file, ":\n");
356+ aarch64_sls_emit_function_stub (out_file, regnum);
357+ any_functions_emitted = true;
358+ }
359+ if (any_functions_emitted)
360+ /* Can use the SB if needs be here, since this stub will only be used
361+ by the current function, and hence for the current target. */
362+ asm_fprintf (out_file, "\t%s\n", aarch64_sls_barrier (true));
363+ switch_to_section (save_text_section);
364+}
365+
366+/* Emit shared BLR stubs for the current compilation unit.
367+ Over the course of compiling this unit we may have converted some BLR
368+ instructions to a BL to a shared stub function. This is where we emit those
369+ stub functions.
370+ This function is for the stubs shared between different functions in this
371+ compilation unit. We share when optimizing for size instead of speed.
372+
373+ This function is called through the TARGET_ASM_FILE_END hook. */
374+void
375+aarch64_sls_emit_shared_blr_thunks (FILE *out_file)
376+{
377+ if (! aarch64_sls_shared_thunks_needed)
378+ return;
379+
380+ for (int regnum = 0; regnum < 30; ++regnum)
381+ {
382+ tree decl = aarch64_sls_shared_thunks[regnum];
383+ if (!decl)
384+ continue;
385+
386+ const char *name = indirect_symbol_names[regnum];
387+ switch_to_section (get_named_section (decl, NULL, 0));
388+ ASM_OUTPUT_ALIGN (out_file, 2);
389+ targetm.asm_out.globalize_label (out_file, name);
390+ /* Only emits if the compiler is configured for an assembler that can
391+ handle visibility directives. */
392+ targetm.asm_out.assemble_visibility (decl, VISIBILITY_HIDDEN);
393+ ASM_OUTPUT_TYPE_DIRECTIVE (out_file, name, "function");
394+ ASM_OUTPUT_LABEL (out_file, name);
395+ aarch64_sls_emit_function_stub (out_file, regnum);
396+ /* Use the most conservative target to ensure it can always be used by any
397+ function in the translation unit. */
398+ asm_fprintf (out_file, "\tdsb\tsy\n\tisb\n");
399+ ASM_DECLARE_FUNCTION_SIZE (out_file, name, decl);
400+ }
401+}
402+
403+/* Implement TARGET_ASM_FILE_END. */
404+void
405+aarch64_asm_file_end ()
406+{
407+ aarch64_sls_emit_shared_blr_thunks (asm_out_file);
408+ /* Since this function will be called for the ASM_FILE_END hook, we ensure
409+ that what would be called otherwise (e.g. `file_end_indicate_exec_stack`
410+ for FreeBSD) still gets called. */
411+#ifdef TARGET_ASM_FILE_END
412+ TARGET_ASM_FILE_END ();
413+#endif
414+}
415+
416+const char *
417+aarch64_indirect_call_asm (rtx addr)
418+{
419+ gcc_assert (REG_P (addr));
420+ if (aarch64_harden_sls_blr_p ())
421+ {
422+ rtx stub_label = aarch64_sls_create_blr_label (REGNO (addr));
423+ output_asm_insn ("bl\t%0", &stub_label);
424+ }
425+ else
426+ output_asm_insn ("blr\t%0", &addr);
427+ return "";
428+}
429+
430 /* Target-specific selftests. */
431
432 #if CHECKING_P
433@@ -19529,6 +19744,12 @@ aarch64_libgcc_floating_mode_supported_p
434 #define TARGET_RUN_TARGET_SELFTESTS selftest::aarch64_run_selftests
435 #endif /* #if CHECKING_P */
436
437+#undef TARGET_ASM_FILE_END
438+#define TARGET_ASM_FILE_END aarch64_asm_file_end
439+
440+#undef TARGET_ASM_FUNCTION_EPILOGUE
441+#define TARGET_ASM_FUNCTION_EPILOGUE aarch64_sls_emit_blr_function_thunks
442+
443 struct gcc_target targetm = TARGET_INITIALIZER;
444
445 #include "gt-aarch64.h"
446diff --git a/gcc/config/aarch64/aarch64.h b/gcc/config/aarch64/aarch64.h
447index 72ddc6fd9..60682a100 100644
448--- a/gcc/config/aarch64/aarch64.h
449+++ b/gcc/config/aarch64/aarch64.h
450@@ -540,6 +540,16 @@ extern unsigned aarch64_architecture_version;
451 #define GP_REGNUM_P(REGNO) \
452 (((unsigned) (REGNO - R0_REGNUM)) <= (R30_REGNUM - R0_REGNUM))
453
454+/* Registers known to be preserved over a BL instruction. This consists of the
455+ GENERAL_REGS without x16, x17, and x30. The x30 register is changed by the
456+ BL instruction itself, while the x16 and x17 registers may be used by
457+ veneers which can be inserted by the linker. */
458+#define STUB_REGNUM_P(REGNO) \
459+ (GP_REGNUM_P (REGNO) \
460+ && (REGNO) != R16_REGNUM \
461+ && (REGNO) != R17_REGNUM \
462+ && (REGNO) != R30_REGNUM) \
463+
464 #define FP_REGNUM_P(REGNO) \
465 (((unsigned) (REGNO - V0_REGNUM)) <= (V31_REGNUM - V0_REGNUM))
466
467@@ -561,6 +571,7 @@ enum reg_class
468 {
469 NO_REGS,
470 TAILCALL_ADDR_REGS,
471+ STUB_REGS,
472 GENERAL_REGS,
473 STACK_REG,
474 POINTER_REGS,
475@@ -580,6 +591,7 @@ enum reg_class
476 { \
477 "NO_REGS", \
478 "TAILCALL_ADDR_REGS", \
479+ "STUB_REGS", \
480 "GENERAL_REGS", \
481 "STACK_REG", \
482 "POINTER_REGS", \
483@@ -596,6 +608,7 @@ enum reg_class
484 { \
485 { 0x00000000, 0x00000000, 0x00000000 }, /* NO_REGS */ \
486 { 0x00030000, 0x00000000, 0x00000000 }, /* TAILCALL_ADDR_REGS */\
487+ { 0x3ffcffff, 0x00000000, 0x00000000 }, /* STUB_REGS */ \
488 { 0x7fffffff, 0x00000000, 0x00000003 }, /* GENERAL_REGS */ \
489 { 0x80000000, 0x00000000, 0x00000000 }, /* STACK_REG */ \
490 { 0xffffffff, 0x00000000, 0x00000003 }, /* POINTER_REGS */ \
491@@ -735,6 +748,8 @@ typedef struct GTY (()) machine_function
492 struct aarch64_frame frame;
493 /* One entry for each hard register. */
494 bool reg_is_wrapped_separately[LAST_SAVED_REGNUM];
495+ /* One entry for each general purpose register. */
496+ rtx call_via[SP_REGNUM];
497 bool label_is_assembled;
498 } machine_function;
499 #endif
500diff --git a/gcc/config/aarch64/aarch64.md b/gcc/config/aarch64/aarch64.md
501index 494aee964..ed8cf8ece 100644
502--- a/gcc/config/aarch64/aarch64.md
503+++ b/gcc/config/aarch64/aarch64.md
504@@ -908,15 +908,14 @@
505 )
506
507 (define_insn "*call_insn"
508- [(call (mem:DI (match_operand:DI 0 "aarch64_call_insn_operand" "r, Usf"))
509+ [(call (mem:DI (match_operand:DI 0 "aarch64_call_insn_operand" "Ucr, Usf"))
510 (match_operand 1 "" ""))
511 (clobber (reg:DI LR_REGNUM))]
512 ""
513 "@
514- blr\\t%0
515+ * return aarch64_indirect_call_asm (operands[0]);
516 bl\\t%c0"
517- [(set_attr "type" "call, call")]
518-)
519+ [(set_attr "type" "call, call")])
520
521 (define_expand "call_value"
522 [(parallel [(set (match_operand 0 "" "")
523@@ -934,12 +933,12 @@
524
525 (define_insn "*call_value_insn"
526 [(set (match_operand 0 "" "")
527- (call (mem:DI (match_operand:DI 1 "aarch64_call_insn_operand" "r, Usf"))
528+ (call (mem:DI (match_operand:DI 1 "aarch64_call_insn_operand" "Ucr, Usf"))
529 (match_operand 2 "" "")))
530 (clobber (reg:DI LR_REGNUM))]
531 ""
532 "@
533- blr\\t%1
534+ * return aarch64_indirect_call_asm (operands[1]);
535 bl\\t%c1"
536 [(set_attr "type" "call, call")]
537 )
538diff --git a/gcc/config/aarch64/constraints.md b/gcc/config/aarch64/constraints.md
539index 21f9549e6..7756dbe83 100644
540--- a/gcc/config/aarch64/constraints.md
541+++ b/gcc/config/aarch64/constraints.md
542@@ -24,6 +24,15 @@
543 (define_register_constraint "Ucs" "TAILCALL_ADDR_REGS"
544 "@internal Registers suitable for an indirect tail call")
545
546+(define_register_constraint "Ucr"
547+ "aarch64_harden_sls_blr_p () ? STUB_REGS : GENERAL_REGS"
548+ "@internal Registers to be used for an indirect call.
549+ This is usually the general registers, but when we are hardening against
550+ Straight Line Speculation we disallow x16, x17, and x30 so we can use
551+ indirection stubs. These indirection stubs cannot use the above registers
552+ since they will be reached by a BL that may have to go through a linker
553+ veneer.")
554+
555 (define_register_constraint "w" "FP_REGS"
556 "Floating point and SIMD vector registers.")
557
558diff --git a/gcc/config/aarch64/predicates.md b/gcc/config/aarch64/predicates.md
559index 8e1b78421..4250aecb3 100644
560--- a/gcc/config/aarch64/predicates.md
561+++ b/gcc/config/aarch64/predicates.md
562@@ -32,7 +32,8 @@
563
564 (define_predicate "aarch64_general_reg"
565 (and (match_operand 0 "register_operand")
566- (match_test "REGNO_REG_CLASS (REGNO (op)) == GENERAL_REGS")))
567+ (match_test "REGNO_REG_CLASS (REGNO (op)) == STUB_REGS
568+ || REGNO_REG_CLASS (REGNO (op)) == GENERAL_REGS")))
569
570 ;; Return true if OP a (const_int 0) operand.
571 (define_predicate "const0_operand"
572diff --git a/gcc/testsuite/gcc.target/aarch64/sls-mitigation/sls-miti-blr-bti.c b/gcc/testsuite/gcc.target/aarch64/sls-mitigation/sls-miti-blr-bti.c
573new file mode 100644
574index 000000000..b1fb754c7
575--- /dev/null
576+++ b/gcc/testsuite/gcc.target/aarch64/sls-mitigation/sls-miti-blr-bti.c
577@@ -0,0 +1,40 @@
578+/* { dg-do compile } */
579+/* { dg-additional-options "-mharden-sls=blr -mbranch-protection=bti" } */
580+/*
581+ Ensure that the SLS hardening of BLR leaves no BLR instructions.
582+ Here we also check that there are no BR instructions with anything except an
583+ x16 or x17 register. This is because a `BTI c` instruction can be branched
584+ to using a BLR instruction using any register, but can only be branched to
585+ with a BR using an x16 or x17 register.
586+ */
587+typedef int (foo) (int, int);
588+typedef void (bar) (int, int);
589+struct sls_testclass {
590+ foo *x;
591+ bar *y;
592+ int left;
593+ int right;
594+};
595+
596+/* We test both RTL patterns for a call which returns a value and a call which
597+ does not. */
598+int blr_call_value (struct sls_testclass x)
599+{
600+ int retval = x.x(x.left, x.right);
601+ if (retval % 10)
602+ return 100;
603+ return 9;
604+}
605+
606+int blr_call (struct sls_testclass x)
607+{
608+ x.y(x.left, x.right);
609+ if (x.left % 10)
610+ return 100;
611+ return 9;
612+}
613+
614+/* { dg-final { scan-assembler-not {\tblr\t} } } */
615+/* { dg-final { scan-assembler-not {\tbr\tx(?!16|17)} } } */
616+/* { dg-final { scan-assembler {\tbr\tx(16|17)} } } */
617+
618diff --git a/gcc/testsuite/gcc.target/aarch64/sls-mitigation/sls-miti-blr.c b/gcc/testsuite/gcc.target/aarch64/sls-mitigation/sls-miti-blr.c
619new file mode 100644
620index 000000000..88baffffe
621--- /dev/null
622+++ b/gcc/testsuite/gcc.target/aarch64/sls-mitigation/sls-miti-blr.c
623@@ -0,0 +1,33 @@
624+/* { dg-additional-options "-mharden-sls=blr -save-temps" } */
625+/* Ensure that the SLS hardening of BLR leaves no BLR instructions.
626+ We only test that all BLR instructions have been removed, not that the
627+ resulting code makes sense. */
628+typedef int (foo) (int, int);
629+typedef void (bar) (int, int);
630+struct sls_testclass {
631+ foo *x;
632+ bar *y;
633+ int left;
634+ int right;
635+};
636+
637+/* We test both RTL patterns for a call which returns a value and a call which
638+ does not. */
639+int blr_call_value (struct sls_testclass x)
640+{
641+ int retval = x.x(x.left, x.right);
642+ if (retval % 10)
643+ return 100;
644+ return 9;
645+}
646+
647+int blr_call (struct sls_testclass x)
648+{
649+ x.y(x.left, x.right);
650+ if (x.left % 10)
651+ return 100;
652+ return 9;
653+}
654+
655+/* { dg-final { scan-assembler-not {\tblr\t} } } */
656+/* { dg-final { scan-assembler {\tbr\tx[0-9][0-9]?} } } */
657--
6582.25.1
659
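
For reference, the sketch below is not part of the patch; it mirrors the testsuite cases above and annotates, in comments, the assembly shape the commit message describes for a hardened indirect call. It assumes an aarch64 GCC carrying this patch, invoked with -mharden-sls=blr and optimizing for size so the shared __call_indirect_x<N> stubs are used; the exact register and stub chosen in real output may differ.

/* Illustrative sketch only (assumed setup: aarch64 gcc with this patch,
   built with -Os -mharden-sls=blr).  */
typedef int (*callback_t) (int, int);

int
call_through_pointer (callback_t fn, int a, int b)
{
  /* Without hardening this indirect call is emitted roughly as:
         blr     x<N>
     With -mharden-sls=blr the commit message says it becomes:
         bl      __call_indirect_x<N>
     where the shared stub lives in its own COMDAT section so the linker can
     deduplicate it across object files, and has the form:
         __call_indirect_x<N>:
             mov     x16, x<N>
             br      x16
             dsb     sy
             isb                                                            */
  int r = fn (a, b);
  if (r % 10)   /* Use the result so the call cannot be turned into a tail call.  */
    return 100;
  return 9;
}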