* [PATCH 03/11] Move append_insns out of aarch64_relocate_instruction
2015-10-07 9:26 [PATCH 00/11] Displaced stepping on AArch64 GNU/Linux Yao Qi
From: Yao Qi @ 2015-10-07 9:26 UTC (permalink / raw)
To: gdb-patches
aarch64_relocate_instruction should only decode instructions; other
operations should be done outside of it. This patch moves append_insns
out of aarch64_relocate_instruction and into its caller.
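The refactored control flow can be sketched as follows. This is a minimal illustration of the new contract (the relocation routine fills a caller-supplied buffer and returns an instruction count, 0 on failure, and the caller appends the result), not GDB's actual code; all `_sketch` names are hypothetical.

```c
#include <stdint.h>
#include <stddef.h>
#include <assert.h>

/* Minimal sketch of the new contract: the relocation routine only
   fills BUF and returns how many 32-bit instructions it wrote; 0
   means the instruction could not be relocated.  The caller, not the
   decoder, appends the result to the jump pad.  */

static int
relocate_insn_sketch (uint32_t insn, uint32_t *buf)
{
  /* Trivial "relocation": copy the instruction unchanged.  A real
     implementation rewrites PC-relative instructions and returns 0
     when the displaced offset cannot be encoded.  */
  buf[0] = insn;
  return 1;
}

static int
install_sketch (uint32_t insn, uint32_t *out, size_t *nwritten)
{
  uint32_t buf[32];
  int i = relocate_insn_sketch (insn, buf);

  if (i == 0)
    return -1;                  /* report "could not relocate" */

  for (int k = 0; k < i; k++)   /* caller-side append of the output */
    out[k] = buf[k];
  *nwritten = (size_t) i;
  return 0;
}
```

With this shape, the "could not relocate" check in the caller reduces to testing the returned count against zero, which is what the hunk below does with `if (i == 0)`.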
gdb/gdbserver:
2015-10-05 Yao Qi <yao.qi@linaro.org>
* linux-aarch64-low.c (aarch64_relocate_instruction): Return
int. Add argument buf.
(aarch64_install_fast_tracepoint_jump_pad): Pass buf to
aarch64_relocate_instruction.
---
gdb/gdbserver/linux-aarch64-low.c | 37 +++++++++++++++++++------------------
1 file changed, 19 insertions(+), 18 deletions(-)
diff --git a/gdb/gdbserver/linux-aarch64-low.c b/gdb/gdbserver/linux-aarch64-low.c
index 909ba65..506fb9e 100644
--- a/gdb/gdbserver/linux-aarch64-low.c
+++ b/gdb/gdbserver/linux-aarch64-low.c
@@ -1924,8 +1924,8 @@ can_encode_int32 (int32_t val, unsigned bits)
return rest == 0 || rest == -1;
}
-/* Relocate an instruction INSN from OLDLOC to *TO. This function will
- also increment TO by the number of bytes the new instruction(s) take(s).
+/* Relocate an instruction INSN from OLDLOC to TO and save the relocated
+ instructions in BUF. The number of instructions in BUF is returned.
PC relative instructions need to be handled specifically:
@@ -1936,10 +1936,10 @@ can_encode_int32 (int32_t val, unsigned bits)
- ADR/ADRP
- LDR/LDRSW (literal) */
-static void
-aarch64_relocate_instruction (CORE_ADDR *to, CORE_ADDR oldloc, uint32_t insn)
+static int
+aarch64_relocate_instruction (const CORE_ADDR to, const CORE_ADDR oldloc,
+ uint32_t insn, uint32_t *buf)
{
- uint32_t buf[32];
uint32_t *p = buf;
int is_bl;
@@ -1957,16 +1957,16 @@ aarch64_relocate_instruction (CORE_ADDR *to, CORE_ADDR oldloc, uint32_t insn)
if (aarch64_decode_b (oldloc, insn, &is_bl, &offset))
{
- offset = (oldloc - *to + offset);
+ offset = (oldloc - to + offset);
if (can_encode_int32 (offset, 28))
p += emit_b (p, is_bl, offset);
else
- return;
+ return 0;
}
else if (aarch64_decode_bcond (oldloc, insn, &cond, &offset))
{
- offset = (oldloc - *to + offset);
+ offset = (oldloc - to + offset);
if (can_encode_int32 (offset, 21))
p += emit_bcond (p, cond, offset);
@@ -1989,11 +1989,11 @@ aarch64_relocate_instruction (CORE_ADDR *to, CORE_ADDR oldloc, uint32_t insn)
p += emit_b (p, 0, offset - 8);
}
else
- return;
+ return 0;
}
else if (aarch64_decode_cb (oldloc, insn, &is64, &is_cbnz, &rn, &offset))
{
- offset = (oldloc - *to + offset);
+ offset = (oldloc - to + offset);
if (can_encode_int32 (offset, 21))
p += emit_cb (p, is_cbnz, aarch64_register (rn, is64), offset);
@@ -2015,11 +2015,11 @@ aarch64_relocate_instruction (CORE_ADDR *to, CORE_ADDR oldloc, uint32_t insn)
p += emit_b (p, 0, offset - 8);
}
else
- return;
+ return 0;
}
else if (aarch64_decode_tb (oldloc, insn, &is_tbnz, &bit, &rt, &offset))
{
- offset = (oldloc - *to + offset);
+ offset = (oldloc - to + offset);
if (can_encode_int32 (offset, 16))
p += emit_tb (p, is_tbnz, bit, aarch64_register (rt, 1), offset);
@@ -2041,7 +2041,7 @@ aarch64_relocate_instruction (CORE_ADDR *to, CORE_ADDR oldloc, uint32_t insn)
p += emit_b (p, 0, offset - 8);
}
else
- return;
+ return 0;
}
else if (aarch64_decode_adr (oldloc, insn, &is_adrp, &rd, &offset))
{
@@ -2092,7 +2092,7 @@ aarch64_relocate_instruction (CORE_ADDR *to, CORE_ADDR oldloc, uint32_t insn)
p += emit_insn (p, insn);
}
- append_insns (to, p - buf, buf);
+ return (int) (p - buf);
}
/* Implementation of linux_target_ops method
@@ -2421,11 +2421,9 @@ aarch64_install_fast_tracepoint_jump_pad (CORE_ADDR tpoint,
/* Now emit the relocated instruction. */
*adjusted_insn_addr = buildaddr;
target_read_uint32 (tpaddr, &insn);
- aarch64_relocate_instruction (&buildaddr, tpaddr, insn);
- *adjusted_insn_addr_end = buildaddr;
-
+ i = aarch64_relocate_instruction (buildaddr, tpaddr, insn, buf);
/* We may not have been able to relocate the instruction. */
- if (*adjusted_insn_addr == *adjusted_insn_addr_end)
+ if (i == 0)
{
sprintf (err,
"E.Could not relocate instruction from %s to %s.",
@@ -2433,6 +2431,9 @@ aarch64_install_fast_tracepoint_jump_pad (CORE_ADDR tpoint,
core_addr_to_string_nz (buildaddr));
return 1;
}
+ else
+ append_insns (&buildaddr, i, buf);
+ *adjusted_insn_addr_end = buildaddr;
/* Go back to the start of the buffer. */
p = buf;
--
1.9.1
* [PATCH 08/11] New test case gdb.arch/disp-step-insn-reloc.exp
From: Yao Qi @ 2015-10-07 9:26 UTC (permalink / raw)
To: gdb-patches
This patch adds a new test case which reuses gdb.arch/insn-reloc.c to
test displaced stepping. Currently, the tests cover x86, x86_64, and
aarch64.
gdb/testsuite:
* gdb.arch/disp-step-insn-reloc.exp: New test case.
---
gdb/testsuite/gdb.arch/disp-step-insn-reloc.exp | 84 +++++++++++++++++++++++++
1 file changed, 84 insertions(+)
create mode 100644 gdb/testsuite/gdb.arch/disp-step-insn-reloc.exp
diff --git a/gdb/testsuite/gdb.arch/disp-step-insn-reloc.exp b/gdb/testsuite/gdb.arch/disp-step-insn-reloc.exp
new file mode 100644
index 0000000..2edb258
--- /dev/null
+++ b/gdb/testsuite/gdb.arch/disp-step-insn-reloc.exp
@@ -0,0 +1,84 @@
+# Copyright 2015 Free Software Foundation, Inc.
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program. If not, see <http://www.gnu.org/licenses/>.
+
+standard_testfile insn-reloc.c
+set executable $testfile
+set expfile $testfile.exp
+
+if { ![support_displaced_stepping] } {
+ unsupported "displaced stepping"
+ return -1
+}
+
+# Some targets have leading underscores on assembly symbols.
+set additional_flags [gdb_target_symbol_prefix_flags]
+
+if [prepare_for_testing $expfile $executable $srcfile \
+ [list debug $additional_flags]] {
+ untested "failed to prepare for tests"
+ return -1
+}
+
+if ![runto_main] {
+ fail "Can't run to main"
+ return -1
+}
+
+# Read function name from testcases[N].
+
+proc read_testcase { n } {
+ global gdb_prompt
+
+ set result -1
+ gdb_test_multiple "print testcases\[${n}\]" "read name of test case ${n}" {
+ -re "\[$\].*= .*<(.*)>.*$gdb_prompt $" {
+ set result $expect_out(1,string)
+ }
+ -re "$gdb_prompt $" { }
+ }
+
+ return $result
+}
+
+set n_testcases [get_integer_valueof "n_testcases" 0]
+if { ${n_testcases} == 0 } {
+ untested "No instruction relocation to test"
+ return 1
+}
+
+# Set a breakpoint on each set_point${i} symbol.  There is one for
+# each testcase.
+for { set i 0 } { ${i} < ${n_testcases} } { incr i } {
+ set testcase [read_testcase $i]
+
+ gdb_test "break *set_point$i" "Breakpoint .*" "breakpoint on ${testcase}"
+}
+
+gdb_test "break pass" ".*" ""
+gdb_test "break fail" ".*" ""
+
+gdb_test_no_output "set displaced-stepping on"
+
+# Make sure we have hit the pass breakpoint for each testcase.
+for { set i 0 } { ${i} < ${n_testcases} } { incr i } {
+ set testcase [read_testcase $i]
+
+ with_test_prefix "${testcase}" {
+ gdb_test "continue" ".*Breakpoint \[0-9\]+, .*" \
+ "go to breakpoint $i"
+
+ gdb_test "continue" ".*Breakpoint \[0-9\]+, pass \(\).*" \
+ "relocated instruction"
+ }
+}
--
1.9.1
* [PATCH 07/11] Support displaced stepping in support_displaced_stepping for aarch64*-*-linux*
From: Yao Qi @ 2015-10-07 9:26 UTC (permalink / raw)
To: gdb-patches
gdb/testsuite:
2015-10-05 Yao Qi <yao.qi@linaro.org>
* lib/gdb.exp (support_displaced_stepping): Return 1 if target
is aarch64*-*-linux*.
---
gdb/testsuite/lib/gdb.exp | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/gdb/testsuite/lib/gdb.exp b/gdb/testsuite/lib/gdb.exp
index 9eaf721..048070b 100644
--- a/gdb/testsuite/lib/gdb.exp
+++ b/gdb/testsuite/lib/gdb.exp
@@ -2472,7 +2472,8 @@ proc support_displaced_stepping {} {
if { [istarget "x86_64-*-linux*"] || [istarget "i\[34567\]86-*-linux*"]
|| [istarget "arm*-*-linux*"] || [istarget "powerpc-*-linux*"]
- || [istarget "powerpc64-*-linux*"] || [istarget "s390*-*-*"] } {
+ || [istarget "powerpc64-*-linux*"] || [istarget "s390*-*-*"]
+ || [istarget "aarch64*-*-linux*"] } {
return 1
}
--
1.9.1
* [PATCH 10/11] Rename emit_load_store to aarch64_emit_load_store
From: Yao Qi @ 2015-10-07 9:26 UTC (permalink / raw)
To: gdb-patches
Like the earlier rename of emit_insn, this patch renames
emit_load_store to aarch64_emit_load_store.
gdb:
2015-10-05 Yao Qi <yao.qi@linaro.org>
* arch/aarch64-insn.c (emit_load_store): Rename to ...
(aarch64_emit_load_store): ... it. All callers updated.
gdb/gdbserver:
2015-10-05 Yao Qi <yao.qi@linaro.org>
* linux-aarch64-low.c: Update all callers as emit_load_store
is renamed to aarch64_emit_load_store.
---
gdb/arch/aarch64-insn.c | 10 +++++-----
gdb/arch/aarch64-insn.h | 14 +++++++-------
gdb/gdbserver/linux-aarch64-low.c | 6 +++---
3 files changed, 15 insertions(+), 15 deletions(-)
diff --git a/gdb/arch/aarch64-insn.c b/gdb/arch/aarch64-insn.c
index 99f4fb9..0ec7269 100644
--- a/gdb/arch/aarch64-insn.c
+++ b/gdb/arch/aarch64-insn.c
@@ -342,11 +342,11 @@ aarch64_emit_insn (uint32_t *buf, uint32_t insn)
/* Helper function emitting a load or store instruction. */
int
-emit_load_store (uint32_t *buf, uint32_t size,
- enum aarch64_opcodes opcode,
- struct aarch64_register rt,
- struct aarch64_register rn,
- struct aarch64_memory_operand operand)
+aarch64_emit_load_store (uint32_t *buf, uint32_t size,
+ enum aarch64_opcodes opcode,
+ struct aarch64_register rt,
+ struct aarch64_register rn,
+ struct aarch64_memory_operand operand)
{
uint32_t op;
diff --git a/gdb/arch/aarch64-insn.h b/gdb/arch/aarch64-insn.h
index 37ef37e..d51cabc 100644
--- a/gdb/arch/aarch64-insn.h
+++ b/gdb/arch/aarch64-insn.h
@@ -269,7 +269,7 @@ void aarch64_relocate_instruction (uint32_t insn,
0 .. 32760 range (12 bits << 3). */
#define emit_ldr(buf, rt, rn, operand) \
- emit_load_store (buf, rt.is64 ? 3 : 2, LDR, rt, rn, operand)
+ aarch64_emit_load_store (buf, rt.is64 ? 3 : 2, LDR, rt, rn, operand)
/* Write a LDRSW instruction into *BUF. The register size is 64-bit.
@@ -283,7 +283,7 @@ void aarch64_relocate_instruction (uint32_t insn,
0 .. 16380 range (12 bits << 2). */
#define emit_ldrsw(buf, rt, rn, operand) \
- emit_load_store (buf, 3, LDRSW, rt, rn, operand)
+ aarch64_emit_load_store (buf, 3, LDRSW, rt, rn, operand)
/* Write a TBZ or TBNZ instruction into *BUF.
@@ -312,10 +312,10 @@ void aarch64_relocate_instruction (uint32_t insn,
int aarch64_emit_insn (uint32_t *buf, uint32_t insn);
-int emit_load_store (uint32_t *buf, uint32_t size,
- enum aarch64_opcodes opcode,
- struct aarch64_register rt,
- struct aarch64_register rn,
- struct aarch64_memory_operand operand);
+int aarch64_emit_load_store (uint32_t *buf, uint32_t size,
+ enum aarch64_opcodes opcode,
+ struct aarch64_register rt,
+ struct aarch64_register rn,
+ struct aarch64_memory_operand operand);
#endif
diff --git a/gdb/gdbserver/linux-aarch64-low.c b/gdb/gdbserver/linux-aarch64-low.c
index 963511b..9cefdda 100644
--- a/gdb/gdbserver/linux-aarch64-low.c
+++ b/gdb/gdbserver/linux-aarch64-low.c
@@ -902,7 +902,7 @@ emit_ldrh (uint32_t *buf, struct aarch64_register rt,
struct aarch64_register rn,
struct aarch64_memory_operand operand)
{
- return emit_load_store (buf, 1, LDR, rt, rn, operand);
+ return aarch64_emit_load_store (buf, 1, LDR, rt, rn, operand);
}
/* Write a LDRB instruction into *BUF.
@@ -921,7 +921,7 @@ emit_ldrb (uint32_t *buf, struct aarch64_register rt,
struct aarch64_register rn,
struct aarch64_memory_operand operand)
{
- return emit_load_store (buf, 0, LDR, rt, rn, operand);
+ return aarch64_emit_load_store (buf, 0, LDR, rt, rn, operand);
}
@@ -942,7 +942,7 @@ emit_str (uint32_t *buf, struct aarch64_register rt,
struct aarch64_register rn,
struct aarch64_memory_operand operand)
{
- return emit_load_store (buf, rt.is64 ? 3 : 2, STR, rt, rn, operand);
+ return aarch64_emit_load_store (buf, rt.is64 ? 3 : 2, STR, rt, rn, operand);
}
/* Helper function emitting an exclusive load or store instruction. */
--
1.9.1
* [PATCH 02/11] Move target_read_uint32 out of aarch64_relocate_instruction
From: Yao Qi @ 2015-10-07 9:26 UTC (permalink / raw)
To: gdb-patches
This patch moves target_read_uint32 out of
aarch64_relocate_instruction and passes INSN to
aarch64_relocate_instruction instead, so that the function is cleaner
and only decodes instructions.
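The separation can be sketched like this. `read_u32_sketch` stands in for gdbserver's target_read_uint32 and is an assumption for illustration only (here it reads host memory); the B/BL opcode check is a simplified example of a pure decode step, not GDB's decoder.

```c
#include <stdint.h>
#include <assert.h>

/* Sketch of the decoupling: the single target-memory read happens in
   the caller, and decoding becomes a pure function of the already-read
   instruction word.  read_u32_sketch stands in for target_read_uint32;
   a real target would read debuggee memory instead.  */

static void
read_u32_sketch (const uint32_t *memory, uint32_t *insn)
{
  *insn = *memory;
}

/* Pure decode step: B and BL share the opcode bits 0b00101 at [30:26],
   i.e. (insn & 0x7c000000) == 0x14000000.  */

static int
is_b_or_bl_sketch (uint32_t insn)
{
  return (insn & 0x7c000000) == 0x14000000;
}
```

Because the decode step no longer performs I/O, it can be unit-tested on bare instruction words without a live inferior.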
gdb/gdbserver:
2015-10-05 Yao Qi <yao.qi@linaro.org>
* linux-aarch64-low.c (aarch64_relocate_instruction): Add
argument insn. Remove local variable insn. Don't call
target_read_uint32.
(aarch64_install_fast_tracepoint_jump_pad): Call
target_read_uint32.
---
gdb/gdbserver/linux-aarch64-low.c | 13 ++++++-------
1 file changed, 6 insertions(+), 7 deletions(-)
diff --git a/gdb/gdbserver/linux-aarch64-low.c b/gdb/gdbserver/linux-aarch64-low.c
index 5592e61..909ba65 100644
--- a/gdb/gdbserver/linux-aarch64-low.c
+++ b/gdb/gdbserver/linux-aarch64-low.c
@@ -1924,8 +1924,8 @@ can_encode_int32 (int32_t val, unsigned bits)
return rest == 0 || rest == -1;
}
-/* Relocate an instruction from OLDLOC to *TO. This function will also
- increment TO by the number of bytes the new instruction(s) take(s).
+/* Relocate an instruction INSN from OLDLOC to *TO. This function will
+ also increment TO by the number of bytes the new instruction(s) take(s).
PC relative instructions need to be handled specifically:
@@ -1937,11 +1937,10 @@ can_encode_int32 (int32_t val, unsigned bits)
- LDR/LDRSW (literal) */
static void
-aarch64_relocate_instruction (CORE_ADDR *to, CORE_ADDR oldloc)
+aarch64_relocate_instruction (CORE_ADDR *to, CORE_ADDR oldloc, uint32_t insn)
{
uint32_t buf[32];
uint32_t *p = buf;
- uint32_t insn;
int is_bl;
int is64;
@@ -1956,8 +1955,6 @@ aarch64_relocate_instruction (CORE_ADDR *to, CORE_ADDR oldloc)
unsigned bit;
int32_t offset;
- target_read_uint32 (oldloc, &insn);
-
if (aarch64_decode_b (oldloc, insn, &is_bl, &offset))
{
offset = (oldloc - *to + offset);
@@ -2120,6 +2117,7 @@ aarch64_install_fast_tracepoint_jump_pad (CORE_ADDR tpoint,
uint32_t *p = buf;
int32_t offset;
int i;
+ uint32_t insn;
CORE_ADDR buildaddr = *jump_entry;
/* We need to save the current state on the stack both to restore it
@@ -2422,7 +2420,8 @@ aarch64_install_fast_tracepoint_jump_pad (CORE_ADDR tpoint,
/* Now emit the relocated instruction. */
*adjusted_insn_addr = buildaddr;
- aarch64_relocate_instruction (&buildaddr, tpaddr);
+ target_read_uint32 (tpaddr, &insn);
+ aarch64_relocate_instruction (&buildaddr, tpaddr, insn);
*adjusted_insn_addr_end = buildaddr;
/* We may not have been able to relocate the instruction. */
--
1.9.1
* [PATCH 01/11] More tests in gdb.arch/insn-reloc.c
From: Yao Qi @ 2015-10-07 9:26 UTC (permalink / raw)
To: gdb-patches
This patch adds more tests in gdb.arch/insn-reloc.c to cover the BL
instruction and B.COND when the condition is false. These newly added
tests can be used for displaced stepping too.
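The condition logic that the new B.cond test relies on can be modeled in a few lines. This is only a sketch of the flag semantics exercised by the test's asm (`tst x0, #8` with x0 = 8, then `b.eq`), using hypothetical `_sketch` helpers, not test or GDB code.

```c
#include <stdint.h>
#include <assert.h>

/* Model of the flag logic in the new test: "tst x0, #8" sets the Z
   flag iff (x0 & 8) == 0, and "b.eq" branches iff Z is set.  With
   x0 = 8, Z is clear, so the relocated b.eq must fall through at its
   new location for the test to reach pass ().  */

static int
z_after_tst_sketch (uint64_t xn, uint64_t imm)
{
  return (xn & imm) == 0;               /* Z flag after TST */
}

static int
beq_taken_sketch (uint64_t xn, uint64_t imm)
{
  return z_after_tst_sketch (xn, imm);  /* EQ condition == Z set */
}
```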
gdb/testsuite:
2015-10-05 Yao Qi <yao.qi@linaro.org>
* gdb.arch/insn-reloc.c (can_relocate_bcond): Rename to ...
(can_relocate_bcond_true): ... it.
(can_relocate_bcond_false): New function.
(foo): Likewise.
(can_relocate_bl): Likewise.
(testcases) [__aarch64__]: Add can_relocate_bcond_false and
can_relocate_bl.
---
gdb/testsuite/gdb.arch/insn-reloc.c | 50 ++++++++++++++++++++++++++++++++++---
1 file changed, 47 insertions(+), 3 deletions(-)
diff --git a/gdb/testsuite/gdb.arch/insn-reloc.c b/gdb/testsuite/gdb.arch/insn-reloc.c
index c7148a2..dc6d8b6 100644
--- a/gdb/testsuite/gdb.arch/insn-reloc.c
+++ b/gdb/testsuite/gdb.arch/insn-reloc.c
@@ -159,7 +159,7 @@ can_relocate_b (void)
*/
static void
-can_relocate_bcond (void)
+can_relocate_bcond_true (void)
{
int ok = 0;
@@ -469,6 +469,48 @@ can_relocate_ldr (void)
else
fail ();
}
+
+/* Make sure we can relocate a B.cond instruction when the condition is false.  */
+
+static void
+can_relocate_bcond_false (void)
+{
+ int ok = 0;
+
+ asm (" mov x0, #8\n"
+ " tst x0, #8\n" /* Clear the Z flag. */
+ "set_point10:\n" /* Set tracepoint here. */
+ " b.eq 0b\n" /* Condition is false. */
+ " mov %[ok], #1\n"
+ " b 1f\n"
+ "0:\n"
+ " mov %[ok], #0\n"
+ "1:\n"
+ : [ok] "=r" (ok)
+ :
+ : "0", "cc");
+
+ if (ok == 1)
+ pass ();
+ else
+ fail ();
+}
+
+static void
+foo (void)
+{
+}
+
+/* Make sure we can relocate a BL instruction. */
+
+static void
+can_relocate_bl (void)
+{
+ asm ("set_point11:\n"
+ " bl foo\n"
+ " bl pass\n"); /* Test that LR is updated correctly. */
+}
+
#endif
/* Functions testing relocations need to be placed here. GDB will read
@@ -482,7 +524,7 @@ static testcase_ftype testcases[] = {
can_relocate_jump
#elif (defined __aarch64__)
can_relocate_b,
- can_relocate_bcond,
+ can_relocate_bcond_true,
can_relocate_cbz,
can_relocate_cbnz,
can_relocate_tbz,
@@ -490,7 +532,9 @@ static testcase_ftype testcases[] = {
can_relocate_adr_forward,
can_relocate_adr_backward,
can_relocate_adrp,
- can_relocate_ldr
+ can_relocate_ldr,
+ can_relocate_bcond_false,
+ can_relocate_bl,
#endif
};
--
1.9.1
* [PATCH 09/11] Rename emit_insn to aarch64_emit_insn
From: Yao Qi @ 2015-10-07 9:27 UTC (permalink / raw)
To: gdb-patches
As emit_insn becomes extern, it needs the "aarch64_" prefix. This
patch renames emit_insn to aarch64_emit_insn.
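The convention this rename preserves can be sketched as follows: every emitter writes into the buffer and returns the number of 32-bit instructions written, so callers compose emitters with `p += emit (...)`. The `_sketch` names are illustrative stand-ins, not GDB's real API; 0xd503201f is the AArch64 architectural NOP encoding.

```c
#include <stdint.h>
#include <assert.h>

/* Each emitter returns an instruction count so callers can advance a
   cursor through the output buffer.  */

static int
emit_insn_sketch (uint32_t *buf, uint32_t insn)
{
  *buf = insn;
  return 1;
}

static int
emit_two_nops_sketch (uint32_t *buf)
{
  uint32_t *p = buf;

  p += emit_insn_sketch (p, 0xd503201f);  /* NOP */
  p += emit_insn_sketch (p, 0xd503201f);  /* NOP */
  return (int) (p - buf);                 /* total instruction count */
}
```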
gdb:
2015-10-05 Yao Qi <yao.qi@linaro.org>
* arch/aarch64-insn.c (emit_insn): Rename to ...
(aarch64_emit_insn): ... it. All callers updated.
gdb/gdbserver:
2015-10-05 Yao Qi <yao.qi@linaro.org>
* linux-aarch64-low.c: Update all callers of function renaming
from emit_insn to aarch64_emit_insn.
---
gdb/aarch64-tdep.c | 2 +-
gdb/arch/aarch64-insn.c | 25 ++++++------
gdb/arch/aarch64-insn.h | 40 ++++++++++----------
gdb/gdbserver/linux-aarch64-low.c | 80 ++++++++++++++++++++-------------------
4 files changed, 75 insertions(+), 72 deletions(-)
diff --git a/gdb/aarch64-tdep.c b/gdb/aarch64-tdep.c
index d9c4334..243f0f5 100644
--- a/gdb/aarch64-tdep.c
+++ b/gdb/aarch64-tdep.c
@@ -2771,7 +2771,7 @@ aarch64_displaced_step_others (const uint32_t insn,
struct aarch64_displaced_step_data *dsd
= (struct aarch64_displaced_step_data *) data;
- emit_insn (dsd->insn_buf, insn);
+ aarch64_emit_insn (dsd->insn_buf, insn);
dsd->insn_count = 1;
if ((insn & 0xfffffc1f) == 0xd65f0000)
diff --git a/gdb/arch/aarch64-insn.c b/gdb/arch/aarch64-insn.c
index 3bc0117..99f4fb9 100644
--- a/gdb/arch/aarch64-insn.c
+++ b/gdb/arch/aarch64-insn.c
@@ -333,7 +333,7 @@ aarch64_relocate_instruction (uint32_t insn,
instructions written (aka. 1). */
int
-emit_insn (uint32_t *buf, uint32_t insn)
+aarch64_emit_insn (uint32_t *buf, uint32_t insn)
{
*buf = insn;
return 1;
@@ -356,10 +356,10 @@ emit_load_store (uint32_t *buf, uint32_t size,
{
op = ENCODE (1, 1, 24);
- return emit_insn (buf, opcode | ENCODE (size, 2, 30) | op
- | ENCODE (operand.index >> 3, 12, 10)
- | ENCODE (rn.num, 5, 5)
- | ENCODE (rt.num, 5, 0));
+ return aarch64_emit_insn (buf, opcode | ENCODE (size, 2, 30) | op
+ | ENCODE (operand.index >> 3, 12, 10)
+ | ENCODE (rn.num, 5, 5)
+ | ENCODE (rt.num, 5, 0));
}
case MEMORY_OPERAND_POSTINDEX:
{
@@ -367,9 +367,10 @@ emit_load_store (uint32_t *buf, uint32_t size,
op = ENCODE (0, 1, 24);
- return emit_insn (buf, opcode | ENCODE (size, 2, 30) | op
- | post_index | ENCODE (operand.index, 9, 12)
- | ENCODE (rn.num, 5, 5) | ENCODE (rt.num, 5, 0));
+ return aarch64_emit_insn (buf, opcode | ENCODE (size, 2, 30) | op
+ | post_index | ENCODE (operand.index, 9, 12)
+ | ENCODE (rn.num, 5, 5)
+ | ENCODE (rt.num, 5, 0));
}
case MEMORY_OPERAND_PREINDEX:
{
@@ -377,10 +378,10 @@ emit_load_store (uint32_t *buf, uint32_t size,
op = ENCODE (0, 1, 24);
- return emit_insn (buf, opcode | ENCODE (size, 2, 30) | op
- | pre_index | ENCODE (operand.index, 9, 12)
- | ENCODE (rn.num, 5, 5)
- | ENCODE (rt.num, 5, 0));
+ return aarch64_emit_insn (buf, opcode | ENCODE (size, 2, 30) | op
+ | pre_index | ENCODE (operand.index, 9, 12)
+ | ENCODE (rn.num, 5, 5)
+ | ENCODE (rt.num, 5, 0));
}
default:
return 0;
diff --git a/gdb/arch/aarch64-insn.h b/gdb/arch/aarch64-insn.h
index 01a5d73..37ef37e 100644
--- a/gdb/arch/aarch64-insn.h
+++ b/gdb/arch/aarch64-insn.h
@@ -223,7 +223,7 @@ void aarch64_relocate_instruction (uint32_t insn,
+/- 128MB (26 bits << 2). */
#define emit_b(buf, is_bl, offset) \
- emit_insn (buf, ((is_bl) ? BL : B) | (ENCODE ((offset) >> 2, 26, 0)))
+ aarch64_emit_insn (buf, ((is_bl) ? BL : B) | (ENCODE ((offset) >> 2, 26, 0)))
/* Write a BCOND instruction into *BUF.
@@ -234,10 +234,10 @@ void aarch64_relocate_instruction (uint32_t insn,
byte-addressed but should be 4 bytes aligned. It has a limited range of
+/- 1MB (19 bits << 2). */
-#define emit_bcond(buf, cond, offset) \
- emit_insn (buf, \
- BCOND | ENCODE ((offset) >> 2, 19, 5) \
- | ENCODE ((cond), 4, 0))
+#define emit_bcond(buf, cond, offset) \
+ aarch64_emit_insn (buf, \
+ BCOND | ENCODE ((offset) >> 2, 19, 5) \
+ | ENCODE ((cond), 4, 0))
/* Write a CBZ or CBNZ instruction into *BUF.
@@ -250,12 +250,12 @@ void aarch64_relocate_instruction (uint32_t insn,
byte-addressed but should be 4 bytes aligned. It has a limited range of
+/- 1MB (19 bits << 2). */
-#define emit_cb(buf, is_cbnz, rt, offset) \
- emit_insn (buf, \
- ((is_cbnz) ? CBNZ : CBZ) \
- | ENCODE (rt.is64, 1, 31) /* sf */ \
- | ENCODE (offset >> 2, 19, 5) /* imm19 */ \
- | ENCODE (rt.num, 5, 0))
+#define emit_cb(buf, is_cbnz, rt, offset) \
+ aarch64_emit_insn (buf, \
+ ((is_cbnz) ? CBNZ : CBZ) \
+ | ENCODE (rt.is64, 1, 31) /* sf */ \
+ | ENCODE (offset >> 2, 19, 5) /* imm19 */ \
+ | ENCODE (rt.num, 5, 0))
/* Write a LDR instruction into *BUF.
@@ -298,19 +298,19 @@ void aarch64_relocate_instruction (uint32_t insn,
byte-addressed but should be 4 bytes aligned. It has a limited range of
+/- 32KB (14 bits << 2). */
-#define emit_tb(buf, is_tbnz, bit, rt, offset) \
- emit_insn (buf, \
- ((is_tbnz) ? TBNZ: TBZ) \
- | ENCODE (bit >> 5, 1, 31) /* b5 */ \
- | ENCODE (bit, 5, 19) /* b40 */ \
- | ENCODE (offset >> 2, 14, 5) /* imm14 */ \
- | ENCODE (rt.num, 5, 0))
+#define emit_tb(buf, is_tbnz, bit, rt, offset) \
+ aarch64_emit_insn (buf, \
+ ((is_tbnz) ? TBNZ: TBZ) \
+ | ENCODE (bit >> 5, 1, 31) /* b5 */ \
+ | ENCODE (bit, 5, 19) /* b40 */ \
+ | ENCODE (offset >> 2, 14, 5) /* imm14 */ \
+ | ENCODE (rt.num, 5, 0))
/* Write a NOP instruction into *BUF. */
-#define emit_nop(buf) emit_insn (buf, NOP)
+#define emit_nop(buf) aarch64_emit_insn (buf, NOP)
-int emit_insn (uint32_t *buf, uint32_t insn);
+int aarch64_emit_insn (uint32_t *buf, uint32_t insn);
int emit_load_store (uint32_t *buf, uint32_t size,
enum aarch64_opcodes opcode,
diff --git a/gdb/gdbserver/linux-aarch64-low.c b/gdb/gdbserver/linux-aarch64-low.c
index 9450449..963511b 100644
--- a/gdb/gdbserver/linux-aarch64-low.c
+++ b/gdb/gdbserver/linux-aarch64-low.c
@@ -743,7 +743,7 @@ enum aarch64_system_control_registers
static int
emit_blr (uint32_t *buf, struct aarch64_register rn)
{
- return emit_insn (buf, BLR | ENCODE (rn.num, 5, 5));
+ return aarch64_emit_insn (buf, BLR | ENCODE (rn.num, 5, 5));
}
/* Write a RET instruction into *BUF.
@@ -755,7 +755,7 @@ emit_blr (uint32_t *buf, struct aarch64_register rn)
static int
emit_ret (uint32_t *buf, struct aarch64_register rn)
{
- return emit_insn (buf, RET | ENCODE (rn.num, 5, 5));
+ return aarch64_emit_insn (buf, RET | ENCODE (rn.num, 5, 5));
}
static int
@@ -798,10 +798,10 @@ emit_load_store_pair (uint32_t *buf, enum aarch64_opcodes opcode,
return 0;
}
- return emit_insn (buf, opcode | opc | pre_index | write_back
- | ENCODE (operand.index >> 3, 7, 15)
- | ENCODE (rt2.num, 5, 10)
- | ENCODE (rn.num, 5, 5) | ENCODE (rt.num, 5, 0));
+ return aarch64_emit_insn (buf, opcode | opc | pre_index | write_back
+ | ENCODE (operand.index >> 3, 7, 15)
+ | ENCODE (rt2.num, 5, 10)
+ | ENCODE (rn.num, 5, 5) | ENCODE (rt.num, 5, 0));
}
/* Write a STP instruction into *BUF.
@@ -858,9 +858,10 @@ emit_ldp_q_offset (uint32_t *buf, unsigned rt, unsigned rt2,
uint32_t opc = ENCODE (2, 2, 30);
uint32_t pre_index = ENCODE (1, 1, 24);
- return emit_insn (buf, LDP_SIMD_VFP | opc | pre_index
- | ENCODE (offset >> 4, 7, 15) | ENCODE (rt2, 5, 10)
- | ENCODE (rn.num, 5, 5) | ENCODE (rt, 5, 0));
+ return aarch64_emit_insn (buf, LDP_SIMD_VFP | opc | pre_index
+ | ENCODE (offset >> 4, 7, 15)
+ | ENCODE (rt2, 5, 10)
+ | ENCODE (rn.num, 5, 5) | ENCODE (rt, 5, 0));
}
/* Write a STP (SIMD&VFP) instruction using Q registers into *BUF.
@@ -879,7 +880,7 @@ emit_stp_q_offset (uint32_t *buf, unsigned rt, unsigned rt2,
uint32_t opc = ENCODE (2, 2, 30);
uint32_t pre_index = ENCODE (1, 1, 24);
- return emit_insn (buf, STP_SIMD_VFP | opc | pre_index
+ return aarch64_emit_insn (buf, STP_SIMD_VFP | opc | pre_index
| ENCODE (offset >> 4, 7, 15)
| ENCODE (rt2, 5, 10)
| ENCODE (rn.num, 5, 5) | ENCODE (rt, 5, 0));
@@ -954,9 +955,9 @@ emit_load_store_exclusive (uint32_t *buf, uint32_t size,
struct aarch64_register rt2,
struct aarch64_register rn)
{
- return emit_insn (buf, opcode | ENCODE (size, 2, 30)
- | ENCODE (rs.num, 5, 16) | ENCODE (rt2.num, 5, 10)
- | ENCODE (rn.num, 5, 5) | ENCODE (rt.num, 5, 0));
+ return aarch64_emit_insn (buf, opcode | ENCODE (size, 2, 30)
+ | ENCODE (rs.num, 5, 16) | ENCODE (rt2.num, 5, 10)
+ | ENCODE (rn.num, 5, 5) | ENCODE (rt.num, 5, 0));
}
/* Write a LAXR instruction into *BUF.
@@ -1015,8 +1016,8 @@ emit_data_processing_reg (uint32_t *buf, enum aarch64_opcodes opcode,
{
uint32_t size = ENCODE (rd.is64, 1, 31);
- return emit_insn (buf, opcode | size | ENCODE (rm.num, 5, 16)
- | ENCODE (rn.num, 5, 5) | ENCODE (rd.num, 5, 0));
+ return aarch64_emit_insn (buf, opcode | size | ENCODE (rm.num, 5, 16)
+ | ENCODE (rn.num, 5, 5) | ENCODE (rd.num, 5, 0));
}
/* Helper function for data processing instructions taking either a register
@@ -1037,9 +1038,10 @@ emit_data_processing (uint32_t *buf, enum aarch64_opcodes opcode,
/* xxx1 000x xxxx xxxx xxxx xxxx xxxx xxxx */
operand_opcode = ENCODE (8, 4, 25);
- return emit_insn (buf, opcode | operand_opcode | size
- | ENCODE (operand.imm, 12, 10)
- | ENCODE (rn.num, 5, 5) | ENCODE (rd.num, 5, 0));
+ return aarch64_emit_insn (buf, opcode | operand_opcode | size
+ | ENCODE (operand.imm, 12, 10)
+ | ENCODE (rn.num, 5, 5)
+ | ENCODE (rd.num, 5, 0));
}
else
{
@@ -1112,9 +1114,9 @@ emit_mov (uint32_t *buf, struct aarch64_register rd,
/* Do not shift the immediate. */
uint32_t shift = ENCODE (0, 2, 21);
- return emit_insn (buf, MOV | size | shift
- | ENCODE (operand.imm, 16, 5)
- | ENCODE (rd.num, 5, 0));
+ return aarch64_emit_insn (buf, MOV | size | shift
+ | ENCODE (operand.imm, 16, 5)
+ | ENCODE (rd.num, 5, 0));
}
else
return emit_add (buf, rd, operand.reg, immediate_operand (0));
@@ -1134,8 +1136,8 @@ emit_movk (uint32_t *buf, struct aarch64_register rd, uint32_t imm,
{
uint32_t size = ENCODE (rd.is64, 1, 31);
- return emit_insn (buf, MOVK | size | ENCODE (shift, 2, 21) |
- ENCODE (imm, 16, 5) | ENCODE (rd.num, 5, 0));
+ return aarch64_emit_insn (buf, MOVK | size | ENCODE (shift, 2, 21) |
+ ENCODE (imm, 16, 5) | ENCODE (rd.num, 5, 0));
}
/* Write instructions into *BUF in order to move ADDR into a register.
@@ -1343,8 +1345,8 @@ static int
emit_mrs (uint32_t *buf, struct aarch64_register rt,
enum aarch64_system_control_registers system_reg)
{
- return emit_insn (buf, MRS | ENCODE (system_reg, 15, 5)
- | ENCODE (rt.num, 5, 0));
+ return aarch64_emit_insn (buf, MRS | ENCODE (system_reg, 15, 5)
+ | ENCODE (rt.num, 5, 0));
}
/* Write a MSR instruction into *BUF. The register size is 64-bit.
@@ -1358,8 +1360,8 @@ static int
emit_msr (uint32_t *buf, enum aarch64_system_control_registers system_reg,
struct aarch64_register rt)
{
- return emit_insn (buf, MSR | ENCODE (system_reg, 15, 5)
- | ENCODE (rt.num, 5, 0));
+ return aarch64_emit_insn (buf, MSR | ENCODE (system_reg, 15, 5)
+ | ENCODE (rt.num, 5, 0));
}
/* Write a SEVL instruction into *BUF.
@@ -1369,7 +1371,7 @@ emit_msr (uint32_t *buf, enum aarch64_system_control_registers system_reg,
static int
emit_sevl (uint32_t *buf)
{
- return emit_insn (buf, SEVL);
+ return aarch64_emit_insn (buf, SEVL);
}
/* Write a WFE instruction into *BUF.
@@ -1379,7 +1381,7 @@ emit_sevl (uint32_t *buf)
static int
emit_wfe (uint32_t *buf)
{
- return emit_insn (buf, WFE);
+ return aarch64_emit_insn (buf, WFE);
}
/* Write a SBFM instruction into *BUF.
@@ -1401,9 +1403,9 @@ emit_sbfm (uint32_t *buf, struct aarch64_register rd,
uint32_t size = ENCODE (rd.is64, 1, 31);
uint32_t n = ENCODE (rd.is64, 1, 22);
- return emit_insn (buf, SBFM | size | n | ENCODE (immr, 6, 16)
- | ENCODE (imms, 6, 10) | ENCODE (rn.num, 5, 5)
- | ENCODE (rd.num, 5, 0));
+ return aarch64_emit_insn (buf, SBFM | size | n | ENCODE (immr, 6, 16)
+ | ENCODE (imms, 6, 10) | ENCODE (rn.num, 5, 5)
+ | ENCODE (rd.num, 5, 0));
}
/* Write a SBFX instruction into *BUF.
@@ -1446,9 +1448,9 @@ emit_ubfm (uint32_t *buf, struct aarch64_register rd,
uint32_t size = ENCODE (rd.is64, 1, 31);
uint32_t n = ENCODE (rd.is64, 1, 22);
- return emit_insn (buf, UBFM | size | n | ENCODE (immr, 6, 16)
- | ENCODE (imms, 6, 10) | ENCODE (rn.num, 5, 5)
- | ENCODE (rd.num, 5, 0));
+ return aarch64_emit_insn (buf, UBFM | size | n | ENCODE (immr, 6, 16)
+ | ENCODE (imms, 6, 10) | ENCODE (rn.num, 5, 5)
+ | ENCODE (rd.num, 5, 0));
}
/* Write a UBFX instruction into *BUF.
@@ -1490,9 +1492,9 @@ emit_csinc (uint32_t *buf, struct aarch64_register rd,
{
uint32_t size = ENCODE (rd.is64, 1, 31);
- return emit_insn (buf, CSINC | size | ENCODE (rm.num, 5, 16)
- | ENCODE (cond, 4, 12) | ENCODE (rn.num, 5, 5)
- | ENCODE (rd.num, 5, 0));
+ return aarch64_emit_insn (buf, CSINC | size | ENCODE (rm.num, 5, 16)
+ | ENCODE (cond, 4, 12) | ENCODE (rn.num, 5, 5)
+ | ENCODE (rd.num, 5, 0));
}
/* Write a CSET instruction into *BUF.
@@ -1757,7 +1759,7 @@ aarch64_ftrace_insn_reloc_others (const uint32_t insn,
/* The instruction is not PC relative. Just re-emit it at the new
location. */
- insn_reloc->insn_ptr += emit_insn (insn_reloc->insn_ptr, insn);
+ insn_reloc->insn_ptr += aarch64_emit_insn (insn_reloc->insn_ptr, insn);
}
static const struct aarch64_insn_visitor visitor =
--
1.9.1
* [PATCH 04/11] Use visitor in aarch64_relocate_instruction
@ 2015-10-07 9:27 ` Yao Qi
From: Yao Qi @ 2015-10-07 9:27 UTC (permalink / raw)
To: gdb-patches
Nowadays, instruction decoding and instruction handling are mixed together
inside aarch64_relocate_instruction. This patch decouples instruction
decoding from instruction handling by using the visitor pattern. That is,
aarch64_relocate_instruction decodes instructions and visits each
instruction through the corresponding visitor method. Each visitor defines
the concrete handling for the different instructions. Fast tracepoint
instruction relocation and displaced stepping can define their own
visitors, each with its own data, a sub-class of struct aarch64_insn_data.
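The decode/handle split described above can be sketched in a few lines. All
names below are illustrative, not the ones in the patch, and only two of the
seven visitor methods are shown; the "sub-classing" works by embedding the
base struct as the first member so callbacks can downcast:

```c
#include <assert.h>
#include <stdint.h>

/* Base data passed to every visitor method (mirrors struct
   aarch64_insn_data in the patch).  */
struct insn_data
{
  uint32_t insn_addr;
};

/* The visitor: one callback per instruction class.  */
struct insn_visitor
{
  void (*b) (int32_t offset, struct insn_data *data);
  void (*others) (uint32_t insn, struct insn_data *data);
};

/* A client "sub-classes" the data by embedding the base first, so
   its callbacks can cast the base pointer back.  */
struct reloc_data
{
  struct insn_data base;
  int seen_branch;
};

static void
reloc_b (int32_t offset, struct insn_data *data)
{
  ((struct reloc_data *) data)->seen_branch = 1;
}

static void
reloc_others (uint32_t insn, struct insn_data *data)
{
  ((struct reloc_data *) data)->seen_branch = 0;
}

/* Decode INSN and dispatch to exactly one visitor method, the way
   aarch64_relocate_instruction does for the full instruction set.  */
static void
visit_insn (uint32_t insn, const struct insn_visitor *v,
            struct insn_data *data)
{
  if ((insn & 0x7c000000) == 0x14000000)        /* B/BL imm26 */
    v->b ((int32_t) ((insn & 0x03ffffff) << 2), data);
  else
    v->others (insn, data);
}
```

The key property is that the decoder needs no knowledge of what its callers
do with each instruction; fast tracepoint relocation and displaced stepping
each supply their own callback table and data.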
gdb/gdbserver:
2015-10-05 Yao Qi <yao.qi@linaro.org>
* linux-aarch64-low.c (struct aarch64_insn_data): New.
(struct aarch64_insn_visitor): New.
(struct aarch64_insn_relocation_data): New.
(aarch64_ftrace_insn_reloc_b): New function.
(aarch64_ftrace_insn_reloc_b_cond): Likewise.
(aarch64_ftrace_insn_reloc_cb): Likewise.
(aarch64_ftrace_insn_reloc_tb): Likewise.
(aarch64_ftrace_insn_reloc_adr): Likewise.
(aarch64_ftrace_insn_reloc_ldr_literal): Likewise.
(aarch64_ftrace_insn_reloc_others): Likewise.
(visitor): New.
(aarch64_relocate_instruction): Use visitor.
---
gdb/gdbserver/linux-aarch64-low.c | 442 ++++++++++++++++++++++++++------------
1 file changed, 299 insertions(+), 143 deletions(-)
diff --git a/gdb/gdbserver/linux-aarch64-low.c b/gdb/gdbserver/linux-aarch64-low.c
index 506fb9e..b4181ed 100644
--- a/gdb/gdbserver/linux-aarch64-low.c
+++ b/gdb/gdbserver/linux-aarch64-low.c
@@ -1924,175 +1924,323 @@ can_encode_int32 (int32_t val, unsigned bits)
return rest == 0 || rest == -1;
}
-/* Relocate an instruction INSN from OLDLOC to TO and save the relocated
- instructions in BUF. The number of instructions in BUF is returned.
+/* Data passed to each method of aarch64_insn_visitor. */
- PC relative instructions need to be handled specifically:
+struct aarch64_insn_data
+{
+ /* The instruction address. */
+ CORE_ADDR insn_addr;
+};
- - B/BL
- - B.COND
- - CBZ/CBNZ
- - TBZ/TBNZ
- - ADR/ADRP
- - LDR/LDRSW (literal) */
+/* Visit different instructions by different methods. */
-static int
-aarch64_relocate_instruction (const CORE_ADDR to, const CORE_ADDR oldloc,
- uint32_t insn, uint32_t *buf)
+struct aarch64_insn_visitor
{
- uint32_t *p = buf;
+ /* Visit instruction B/BL OFFSET. */
+ void (*b) (const int is_bl, const int32_t offset,
+ struct aarch64_insn_data *data);
- int is_bl;
- int is64;
- int is_sw;
- int is_cbnz;
- int is_tbnz;
- int is_adrp;
- unsigned rn;
- unsigned rt;
- unsigned rd;
- unsigned cond;
- unsigned bit;
- int32_t offset;
+ /* Visit instruction B.COND OFFSET. */
+ void (*b_cond) (const unsigned cond, const int32_t offset,
+ struct aarch64_insn_data *data);
- if (aarch64_decode_b (oldloc, insn, &is_bl, &offset))
- {
- offset = (oldloc - to + offset);
+ /* Visit instruction CBZ/CBNZ Rn, OFFSET. */
+ void (*cb) (const int32_t offset, const int is_cbnz,
+ const unsigned rn, int is64,
+ struct aarch64_insn_data *data);
- if (can_encode_int32 (offset, 28))
- p += emit_b (p, is_bl, offset);
- else
- return 0;
- }
- else if (aarch64_decode_bcond (oldloc, insn, &cond, &offset))
- {
- offset = (oldloc - to + offset);
+ /* Visit instruction TBZ/TBNZ Rt, #BIT, OFFSET. */
+ void (*tb) (const int32_t offset, int is_tbnz,
+ const unsigned rt, unsigned bit,
+ struct aarch64_insn_data *data);
- if (can_encode_int32 (offset, 21))
- p += emit_bcond (p, cond, offset);
- else if (can_encode_int32 (offset, 28))
- {
- /* The offset is out of range for a conditional branch
- instruction but not for a unconditional branch. We can use
- the following instructions instead:
+ /* Visit instruction ADR/ADRP Rd, OFFSET. */
+ void (*adr) (const int32_t offset, const unsigned rd,
+ const int is_adrp, struct aarch64_insn_data *data);
+
+ /* Visit instruction LDR/LDRSW Rt, OFFSET. */
+ void (*ldr_literal) (const int32_t offset, const int is_sw,
+ const unsigned rt, const int is64,
+ struct aarch64_insn_data *data);
- B.COND TAKEN ; If cond is true, then jump to TAKEN.
- B NOT_TAKEN ; Else jump over TAKEN and continue.
- TAKEN:
- B #(offset - 8)
- NOT_TAKEN:
+ /* Visit instruction INSN of other kinds. */
+ void (*others) (const uint32_t insn, struct aarch64_insn_data *data);
+};
- */
+/* Sub-class of struct aarch64_insn_data, store information of
+ instruction relocation for fast tracepoint. Visitor can
+ relocate an instruction from BASE.INSN_ADDR to NEW_ADDR and save
+ the relocated instructions in buffer pointed by INSN_PTR. */
- p += emit_bcond (p, cond, 8);
- p += emit_b (p, 0, 8);
- p += emit_b (p, 0, offset - 8);
- }
- else
- return 0;
+struct aarch64_insn_relocation_data
+{
+ struct aarch64_insn_data base;
+
+ /* The new address the instruction is relocated to. */
+ CORE_ADDR new_addr;
+ /* Pointer to the buffer of relocated instruction(s). */
+ uint32_t *insn_ptr;
+};
+
+/* Implementation of aarch64_insn_visitor method "b". */
+
+static void
+aarch64_ftrace_insn_reloc_b (const int is_bl, const int32_t offset,
+ struct aarch64_insn_data *data)
+{
+ struct aarch64_insn_relocation_data *insn_reloc
+ = (struct aarch64_insn_relocation_data *) data;
+ int32_t new_offset
+ = insn_reloc->base.insn_addr - insn_reloc->new_addr + offset;
+
+ if (can_encode_int32 (new_offset, 28))
+ insn_reloc->insn_ptr += emit_b (insn_reloc->insn_ptr, is_bl, new_offset);
+}
+
+/* Implementation of aarch64_insn_visitor method "b_cond". */
+
+static void
+aarch64_ftrace_insn_reloc_b_cond (const unsigned cond, const int32_t offset,
+ struct aarch64_insn_data *data)
+{
+ struct aarch64_insn_relocation_data *insn_reloc
+ = (struct aarch64_insn_relocation_data *) data;
+ int32_t new_offset
+ = insn_reloc->base.insn_addr - insn_reloc->new_addr + offset;
+
+ if (can_encode_int32 (new_offset, 21))
+ {
+ insn_reloc->insn_ptr += emit_bcond (insn_reloc->insn_ptr, cond,
+ new_offset);
}
- else if (aarch64_decode_cb (oldloc, insn, &is64, &is_cbnz, &rn, &offset))
+ else if (can_encode_int32 (new_offset, 28))
{
- offset = (oldloc - to + offset);
+ /* The offset is out of range for a conditional branch
+ instruction but not for a unconditional branch. We can use
+ the following instructions instead:
- if (can_encode_int32 (offset, 21))
- p += emit_cb (p, is_cbnz, aarch64_register (rn, is64), offset);
- else if (can_encode_int32 (offset, 28))
- {
- /* The offset is out of range for a compare and branch
- instruction but not for a unconditional branch. We can use
- the following instructions instead:
-
- CBZ xn, TAKEN ; xn == 0, then jump to TAKEN.
- B NOT_TAKEN ; Else jump over TAKEN and continue.
- TAKEN:
- B #(offset - 8)
- NOT_TAKEN:
-
- */
- p += emit_cb (p, is_cbnz, aarch64_register (rn, is64), 8);
- p += emit_b (p, 0, 8);
- p += emit_b (p, 0, offset - 8);
- }
- else
- return 0;
+ B.COND TAKEN ; If cond is true, then jump to TAKEN.
+ B NOT_TAKEN ; Else jump over TAKEN and continue.
+ TAKEN:
+ B #(offset - 8)
+ NOT_TAKEN:
+
+ */
+
+ insn_reloc->insn_ptr += emit_bcond (insn_reloc->insn_ptr, cond, 8);
+ insn_reloc->insn_ptr += emit_b (insn_reloc->insn_ptr, 0, 8);
+ insn_reloc->insn_ptr += emit_b (insn_reloc->insn_ptr, 0, new_offset - 8);
}
- else if (aarch64_decode_tb (oldloc, insn, &is_tbnz, &bit, &rt, &offset))
- {
- offset = (oldloc - to + offset);
+}
- if (can_encode_int32 (offset, 16))
- p += emit_tb (p, is_tbnz, bit, aarch64_register (rt, 1), offset);
- else if (can_encode_int32 (offset, 28))
- {
- /* The offset is out of range for a test bit and branch
- instruction but not for a unconditional branch. We can use
- the following instructions instead:
-
- TBZ xn, #bit, TAKEN ; xn[bit] == 0, then jump to TAKEN.
- B NOT_TAKEN ; Else jump over TAKEN and continue.
- TAKEN:
- B #(offset - 8)
- NOT_TAKEN:
-
- */
- p += emit_tb (p, is_tbnz, bit, aarch64_register (rt, 1), 8);
- p += emit_b (p, 0, 8);
- p += emit_b (p, 0, offset - 8);
- }
- else
- return 0;
+/* Implementation of aarch64_insn_visitor method "cb". */
+
+static void
+aarch64_ftrace_insn_reloc_cb (const int32_t offset, const int is_cbnz,
+ const unsigned rn, int is64,
+ struct aarch64_insn_data *data)
+{
+ struct aarch64_insn_relocation_data *insn_reloc
+ = (struct aarch64_insn_relocation_data *) data;
+ int32_t new_offset
+ = insn_reloc->base.insn_addr - insn_reloc->new_addr + offset;
+
+ if (can_encode_int32 (new_offset, 21))
+ {
+ insn_reloc->insn_ptr += emit_cb (insn_reloc->insn_ptr, is_cbnz,
+ aarch64_register (rn, is64), new_offset);
}
- else if (aarch64_decode_adr (oldloc, insn, &is_adrp, &rd, &offset))
+ else if (can_encode_int32 (new_offset, 28))
{
+ /* The offset is out of range for a compare and branch
+ instruction but not for a unconditional branch. We can use
+ the following instructions instead:
+
+ CBZ xn, TAKEN ; xn == 0, then jump to TAKEN.
+ B NOT_TAKEN ; Else jump over TAKEN and continue.
+ TAKEN:
+ B #(offset - 8)
+ NOT_TAKEN:
+
+ */
+ insn_reloc->insn_ptr += emit_cb (insn_reloc->insn_ptr, is_cbnz,
+ aarch64_register (rn, is64), 8);
+ insn_reloc->insn_ptr += emit_b (insn_reloc->insn_ptr, 0, 8);
+ insn_reloc->insn_ptr += emit_b (insn_reloc->insn_ptr, 0, new_offset - 8);
+ }
+}
- /* We know exactly the address the ADR{P,} instruction will compute.
- We can just write it to the destination register. */
- CORE_ADDR address = oldloc + offset;
+/* Implementation of aarch64_insn_visitor method "tb". */
- if (is_adrp)
- {
- /* Clear the lower 12 bits of the offset to get the 4K page. */
- p += emit_mov_addr (p, aarch64_register (rd, 1),
- address & ~0xfff);
- }
- else
- p += emit_mov_addr (p, aarch64_register (rd, 1), address);
+static void
+aarch64_ftrace_insn_reloc_tb (const int32_t offset, int is_tbnz,
+ const unsigned rt, unsigned bit,
+ struct aarch64_insn_data *data)
+{
+ struct aarch64_insn_relocation_data *insn_reloc
+ = (struct aarch64_insn_relocation_data *) data;
+ int32_t new_offset
+ = insn_reloc->base.insn_addr - insn_reloc->new_addr + offset;
+
+ if (can_encode_int32 (new_offset, 16))
+ {
+ insn_reloc->insn_ptr += emit_tb (insn_reloc->insn_ptr, is_tbnz, bit,
+ aarch64_register (rt, 1), new_offset);
}
- else if (aarch64_decode_ldr_literal (oldloc, insn, &is_sw, &is64, &rt,
- &offset))
+ else if (can_encode_int32 (new_offset, 28))
{
- /* We know exactly what address to load from, and what register we
- can use:
+ /* The offset is out of range for a test bit and branch
+ instruction but not for a unconditional branch. We can use
+ the following instructions instead:
+
+ TBZ xn, #bit, TAKEN ; xn[bit] == 0, then jump to TAKEN.
+ B NOT_TAKEN ; Else jump over TAKEN and continue.
+ TAKEN:
+ B #(offset - 8)
+ NOT_TAKEN:
+
+ */
+ insn_reloc->insn_ptr += emit_tb (insn_reloc->insn_ptr, is_tbnz, bit,
+ aarch64_register (rt, 1), 8);
+ insn_reloc->insn_ptr += emit_b (insn_reloc->insn_ptr, 0, 8);
+ insn_reloc->insn_ptr += emit_b (insn_reloc->insn_ptr, 0,
+ new_offset - 8);
+ }
+}
- MOV xd, #(oldloc + offset)
- MOVK xd, #((oldloc + offset) >> 16), lsl #16
- ...
+/* Implementation of aarch64_insn_visitor method "adr". */
- LDR xd, [xd] ; or LDRSW xd, [xd]
+static void
+aarch64_ftrace_insn_reloc_adr (const int32_t offset, const unsigned rd,
+ const int is_adrp,
+ struct aarch64_insn_data *data)
+{
+ struct aarch64_insn_relocation_data *insn_reloc
+ = (struct aarch64_insn_relocation_data *) data;
+ /* We know exactly the address the ADR{P,} instruction will compute.
+ We can just write it to the destination register. */
+ CORE_ADDR address = data->insn_addr + offset;
- */
- CORE_ADDR address = oldloc + offset;
+ if (is_adrp)
+ {
+ /* Clear the lower 12 bits of the offset to get the 4K page. */
+ insn_reloc->insn_ptr += emit_mov_addr (insn_reloc->insn_ptr,
+ aarch64_register (rd, 1),
+ address & ~0xfff);
+ }
+ else
+ insn_reloc->insn_ptr += emit_mov_addr (insn_reloc->insn_ptr,
+ aarch64_register (rd, 1), address);
+}
- p += emit_mov_addr (p, aarch64_register (rt, 1), address);
+/* Implementation of aarch64_insn_visitor method "ldr_literal". */
- if (is_sw)
- p += emit_ldrsw (p, aarch64_register (rt, 1),
- aarch64_register (rt, 1),
- offset_memory_operand (0));
- else
- p += emit_ldr (p, aarch64_register (rt, is64),
- aarch64_register (rt, 1),
- offset_memory_operand (0));
- }
+static void
+aarch64_ftrace_insn_reloc_ldr_literal (const int32_t offset, const int is_sw,
+ const unsigned rt, const int is64,
+ struct aarch64_insn_data *data)
+{
+ struct aarch64_insn_relocation_data *insn_reloc
+ = (struct aarch64_insn_relocation_data *) data;
+ CORE_ADDR address = data->insn_addr + offset;
+
+ insn_reloc->insn_ptr += emit_mov_addr (insn_reloc->insn_ptr,
+ aarch64_register (rt, 1), address);
+
+ /* We know exactly what address to load from, and what register we
+ can use:
+
+ MOV xd, #(oldloc + offset)
+ MOVK xd, #((oldloc + offset) >> 16), lsl #16
+ ...
+
+ LDR xd, [xd] ; or LDRSW xd, [xd]
+
+ */
+
+ if (is_sw)
+ insn_reloc->insn_ptr += emit_ldrsw (insn_reloc->insn_ptr,
+ aarch64_register (rt, 1),
+ aarch64_register (rt, 1),
+ offset_memory_operand (0));
else
- {
- /* The instruction is not PC relative. Just re-emit it at the new
- location. */
- p += emit_insn (p, insn);
- }
+ insn_reloc->insn_ptr += emit_ldr (insn_reloc->insn_ptr,
+ aarch64_register (rt, is64),
+ aarch64_register (rt, 1),
+ offset_memory_operand (0));
+}
+
+/* Implementation of aarch64_insn_visitor method "others". */
+
+static void
+aarch64_ftrace_insn_reloc_others (const uint32_t insn,
+ struct aarch64_insn_data *data)
+{
+ struct aarch64_insn_relocation_data *insn_reloc
+ = (struct aarch64_insn_relocation_data *) data;
- return (int) (p - buf);
+ /* The instruction is not PC relative. Just re-emit it at the new
+ location. */
+ insn_reloc->insn_ptr += emit_insn (insn_reloc->insn_ptr, insn);
+}
+
+static const struct aarch64_insn_visitor visitor =
+{
+ aarch64_ftrace_insn_reloc_b,
+ aarch64_ftrace_insn_reloc_b_cond,
+ aarch64_ftrace_insn_reloc_cb,
+ aarch64_ftrace_insn_reloc_tb,
+ aarch64_ftrace_insn_reloc_adr,
+ aarch64_ftrace_insn_reloc_ldr_literal,
+ aarch64_ftrace_insn_reloc_others,
+};
+
+/* Visit an instruction INSN by VISITOR with all needed information in DATA.
+
+ PC relative instructions need to be handled specifically:
+
+ - B/BL
+ - B.COND
+ - CBZ/CBNZ
+ - TBZ/TBNZ
+ - ADR/ADRP
+ - LDR/LDRSW (literal) */
+
+static void
+aarch64_relocate_instruction (uint32_t insn,
+ const struct aarch64_insn_visitor *visitor,
+ struct aarch64_insn_data *data)
+{
+ int is_bl;
+ int is64;
+ int is_sw;
+ int is_cbnz;
+ int is_tbnz;
+ int is_adrp;
+ unsigned rn;
+ unsigned rt;
+ unsigned rd;
+ unsigned cond;
+ unsigned bit;
+ int32_t offset;
+
+ if (aarch64_decode_b (data->insn_addr, insn, &is_bl, &offset))
+ visitor->b (is_bl, offset, data);
+ else if (aarch64_decode_bcond (data->insn_addr, insn, &cond, &offset))
+ visitor->b_cond (cond, offset, data);
+ else if (aarch64_decode_cb (data->insn_addr, insn, &is64, &is_cbnz, &rn,
+ &offset))
+ visitor->cb (offset, is_cbnz, rn, is64, data);
+ else if (aarch64_decode_tb (data->insn_addr, insn, &is_tbnz, &bit, &rt,
+ &offset))
+ visitor->tb (offset, is_tbnz, rt, bit, data);
+ else if (aarch64_decode_adr (data->insn_addr, insn, &is_adrp, &rd, &offset))
+ visitor->adr (offset, rd, is_adrp, data);
+ else if (aarch64_decode_ldr_literal (data->insn_addr, insn, &is_sw, &is64,
+ &rt, &offset))
+ visitor->ldr_literal (offset, is_sw, rt, is64, data);
+ else
+ visitor->others (insn, data);
}
/* Implementation of linux_target_ops method
@@ -2119,6 +2267,7 @@ aarch64_install_fast_tracepoint_jump_pad (CORE_ADDR tpoint,
int i;
uint32_t insn;
CORE_ADDR buildaddr = *jump_entry;
+ struct aarch64_insn_relocation_data insn_data;
/* We need to save the current state on the stack both to restore it
later and to collect register values when the tracepoint is hit.
@@ -2421,9 +2570,16 @@ aarch64_install_fast_tracepoint_jump_pad (CORE_ADDR tpoint,
/* Now emit the relocated instruction. */
*adjusted_insn_addr = buildaddr;
target_read_uint32 (tpaddr, &insn);
- i = aarch64_relocate_instruction (buildaddr, tpaddr, insn, buf);
+
+ insn_data.base.insn_addr = tpaddr;
+ insn_data.new_addr = buildaddr;
+ insn_data.insn_ptr = buf;
+
+ aarch64_relocate_instruction (insn, &visitor,
+ (struct aarch64_insn_data *) &insn_data);
+
/* We may not have been able to relocate the instruction. */
- if (i == 0)
+ if (insn_data.insn_ptr == buf)
{
sprintf (err,
"E.Could not relocate instruction from %s to %s.",
@@ -2432,7 +2588,7 @@ aarch64_install_fast_tracepoint_jump_pad (CORE_ADDR tpoint,
return 1;
}
else
- append_insns (&buildaddr, i, buf);
+ append_insns (&buildaddr, insn_data.insn_ptr - buf, buf);
*adjusted_insn_addr_end = buildaddr;
/* Go back to the start of the buffer. */
--
1.9.1
* [PATCH 05/11] Move aarch64_relocate_instruction to arch/aarch64-insn.c
@ 2015-10-07 9:27 ` Yao Qi
From: Yao Qi @ 2015-10-07 9:27 UTC (permalink / raw)
To: gdb-patches
This patch moves aarch64_relocate_instruction and the visitor type to
arch/aarch64-insn.c, so that both GDB and GDBserver can use them.
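One of the moved pieces, the sign-extension helper that travels with
aarch64_decode_ldr_literal, can be exercised standalone. The helper's logic
is copied from the patch; the `ldr_literal_offset` wrapper and the test
opcodes are illustrative additions, not part of the patch:

```c
#include <assert.h>
#include <stdint.h>

/* Sign-extend a WIDTH-bit field at bit OFFSET of INSN (logic copied
   from the patch): shift the field up to the top of the word, then
   arithmetic-shift it back down so the sign bit smears.  */
static int32_t
extract_signed_bitfield (uint32_t insn, unsigned width, unsigned offset)
{
  unsigned shift_l = sizeof (int32_t) * 8 - (offset + width);
  unsigned shift_r = sizeof (int32_t) * 8 - width;

  return ((int32_t) insn << shift_l) >> shift_r;
}

/* Illustrative wrapper: decode the literal offset of an LDR (literal)
   opcode the way aarch64_decode_ldr_literal does -- the 19-bit signed
   imm field at bit 5, scaled by 4.  */
static int32_t
ldr_literal_offset (uint32_t insn)
{
  return extract_signed_bitfield (insn, 19, 5) << 2;
}
```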
gdb:
2015-10-05 Yao Qi <yao.qi@linaro.org>
* arch/aarch64-insn.c (aarch64_decode_ldr_literal): Moved from
gdbserver/linux-aarch64-low.c.
(aarch64_relocate_instruction): Likewise.
* arch/aarch64-insn.h (aarch64_decode_ldr_literal): Declare.
(struct aarch64_insn_data): Moved from
gdbserver/linux-aarch64-low.c.
(struct aarch64_insn_visitor): Likewise.
(aarch64_relocate_instruction): Declare.
gdb/gdbserver:
2015-10-05 Yao Qi <yao.qi@linaro.org>
* linux-aarch64-low.c (extract_signed_bitfield): Remove.
(aarch64_decode_ldr_literal): Move to gdb/arch/aarch64-insn.c.
(aarch64_relocate_instruction): Likewise.
(struct aarch64_insn_data): Move to gdb/arch/aarch64-insn.h.
(struct aarch64_insn_visitor): Likewise.
---
gdb/arch/aarch64-insn.c | 93 +++++++++++++++++++++++
gdb/arch/aarch64-insn.h | 50 ++++++++++++
gdb/gdbserver/linux-aarch64-low.c | 155 --------------------------------------
3 files changed, 143 insertions(+), 155 deletions(-)
diff --git a/gdb/arch/aarch64-insn.c b/gdb/arch/aarch64-insn.c
index 13d0013..d0e88fa 100644
--- a/gdb/arch/aarch64-insn.c
+++ b/gdb/arch/aarch64-insn.c
@@ -235,3 +235,96 @@ aarch64_decode_tb (CORE_ADDR addr, uint32_t insn, int *is_tbnz,
}
return 0;
}
+
+/* Decode an opcode if it represents an LDR or LDRSW instruction taking a
+ literal offset from the current PC.
+
+ ADDR specifies the address of the opcode.
+ INSN specifies the opcode to test.
+ IS_W is set if the instruction is LDRSW.
+ IS64 receives size field from the decoded instruction.
+ RT receives the 'rt' field from the decoded instruction.
+ OFFSET receives the 'imm' field from the decoded instruction.
+
+ Return 1 if the opcodes matches and is decoded, otherwise 0. */
+
+int
+aarch64_decode_ldr_literal (CORE_ADDR addr, uint32_t insn, int *is_w,
+ int *is64, unsigned *rt, int32_t *offset)
+{
+ /* LDR 0T01 1000 iiii iiii iiii iiii iiir rrrr */
+ /* LDRSW 1001 1000 iiii iiii iiii iiii iiir rrrr */
+ if ((insn & 0x3f000000) == 0x18000000)
+ {
+ *is_w = (insn >> 31) & 0x1;
+
+ if (*is_w)
+ {
+ /* LDRSW always takes a 64-bit destination registers. */
+ *is64 = 1;
+ }
+ else
+ *is64 = (insn >> 30) & 0x1;
+
+ *rt = (insn >> 0) & 0x1f;
+ *offset = extract_signed_bitfield (insn, 19, 5) << 2;
+
+ if (aarch64_debug)
+ debug_printf ("decode: %s 0x%x %s %s%u, #?\n",
+ core_addr_to_string_nz (addr), insn,
+ *is_w ? "ldrsw" : "ldr",
+ *is64 ? "x" : "w", *rt);
+
+ return 1;
+ }
+
+ return 0;
+}
+
+/* Visit an instruction INSN by VISITOR with all needed information in DATA.
+
+ PC relative instructions need to be handled specifically:
+
+ - B/BL
+ - B.COND
+ - CBZ/CBNZ
+ - TBZ/TBNZ
+ - ADR/ADRP
+ - LDR/LDRSW (literal) */
+
+void
+aarch64_relocate_instruction (uint32_t insn,
+ const struct aarch64_insn_visitor *visitor,
+ struct aarch64_insn_data *data)
+{
+ int is_bl;
+ int is64;
+ int is_sw;
+ int is_cbnz;
+ int is_tbnz;
+ int is_adrp;
+ unsigned rn;
+ unsigned rt;
+ unsigned rd;
+ unsigned cond;
+ unsigned bit;
+ int32_t offset;
+
+ if (aarch64_decode_b (data->insn_addr, insn, &is_bl, &offset))
+ visitor->b (is_bl, offset, data);
+ else if (aarch64_decode_bcond (data->insn_addr, insn, &cond, &offset))
+ visitor->b_cond (cond, offset, data);
+ else if (aarch64_decode_cb (data->insn_addr, insn, &is64, &is_cbnz, &rn,
+ &offset))
+ visitor->cb (offset, is_cbnz, rn, is64, data);
+ else if (aarch64_decode_tb (data->insn_addr, insn, &is_tbnz, &bit, &rt,
+ &offset))
+ visitor->tb (offset, is_tbnz, rt, bit, data);
+ else if (aarch64_decode_adr (data->insn_addr, insn, &is_adrp, &rd, &offset))
+ visitor->adr (offset, rd, is_adrp, data);
+ else if (aarch64_decode_ldr_literal (data->insn_addr, insn, &is_sw, &is64,
+ &rt, &offset))
+ visitor->ldr_literal (offset, is_sw, rt, is64, data);
+ else
+ visitor->others (insn, data);
+}
diff --git a/gdb/arch/aarch64-insn.h b/gdb/arch/aarch64-insn.h
index 2facb44..47f6715 100644
--- a/gdb/arch/aarch64-insn.h
+++ b/gdb/arch/aarch64-insn.h
@@ -36,4 +36,54 @@ int aarch64_decode_cb (CORE_ADDR addr, uint32_t insn, int *is64,
int aarch64_decode_tb (CORE_ADDR addr, uint32_t insn, int *is_tbnz,
unsigned *bit, unsigned *rt, int32_t *imm);
+int aarch64_decode_ldr_literal (CORE_ADDR addr, uint32_t insn, int *is_w,
+ int *is64, unsigned *rt, int32_t *offset);
+
+/* Data passed to each method of aarch64_insn_visitor. */
+
+struct aarch64_insn_data
+{
+ /* The instruction address. */
+ CORE_ADDR insn_addr;
+};
+
+/* Visit different instructions by different methods. */
+
+struct aarch64_insn_visitor
+{
+ /* Visit instruction B/BL OFFSET. */
+ void (*b) (const int is_bl, const int32_t offset,
+ struct aarch64_insn_data *data);
+
+ /* Visit instruction B.COND OFFSET. */
+ void (*b_cond) (const unsigned cond, const int32_t offset,
+ struct aarch64_insn_data *data);
+
+ /* Visit instruction CBZ/CBNZ Rn, OFFSET. */
+ void (*cb) (const int32_t offset, const int is_cbnz,
+ const unsigned rn, int is64,
+ struct aarch64_insn_data *data);
+
+ /* Visit instruction TBZ/TBNZ Rt, #BIT, OFFSET. */
+ void (*tb) (const int32_t offset, int is_tbnz,
+ const unsigned rt, unsigned bit,
+ struct aarch64_insn_data *data);
+
+ /* Visit instruction ADR/ADRP Rd, OFFSET. */
+ void (*adr) (const int32_t offset, const unsigned rd,
+ const int is_adrp, struct aarch64_insn_data *data);
+
+ /* Visit instruction LDR/LDRSW Rt, OFFSET. */
+ void (*ldr_literal) (const int32_t offset, const int is_sw,
+ const unsigned rt, const int is64,
+ struct aarch64_insn_data *data);
+
+ /* Visit instruction INSN of other kinds. */
+ void (*others) (const uint32_t insn, struct aarch64_insn_data *data);
+};
+
+void aarch64_relocate_instruction (uint32_t insn,
+ const struct aarch64_insn_visitor *visitor,
+ struct aarch64_insn_data *data);
+
#endif
diff --git a/gdb/gdbserver/linux-aarch64-low.c b/gdb/gdbserver/linux-aarch64-low.c
index b4181ed..1241434 100644
--- a/gdb/gdbserver/linux-aarch64-low.c
+++ b/gdb/gdbserver/linux-aarch64-low.c
@@ -584,70 +584,6 @@ aarch64_get_thread_area (int lwpid, CORE_ADDR *addrp)
return 0;
}
-/* Extract a signed value from a bit field within an instruction
- encoding.
-
- INSN is the instruction opcode.
-
- WIDTH specifies the width of the bit field to extract (in bits).
-
- OFFSET specifies the least significant bit of the field where bits
- are numbered zero counting from least to most significant. */
-
-static int32_t
-extract_signed_bitfield (uint32_t insn, unsigned width, unsigned offset)
-{
- unsigned shift_l = sizeof (int32_t) * 8 - (offset + width);
- unsigned shift_r = sizeof (int32_t) * 8 - width;
-
- return ((int32_t) insn << shift_l) >> shift_r;
-}
-
-/* Decode an opcode if it represents an LDR or LDRSW instruction taking a
- literal offset from the current PC.
-
- ADDR specifies the address of the opcode.
- INSN specifies the opcode to test.
- IS_W is set if the instruction is LDRSW.
- IS64 receives size field from the decoded instruction.
- RT receives the 'rt' field from the decoded instruction.
- OFFSET receives the 'imm' field from the decoded instruction.
-
- Return 1 if the opcodes matches and is decoded, otherwise 0. */
-
-int
-aarch64_decode_ldr_literal (CORE_ADDR addr, uint32_t insn, int *is_w,
- int *is64, unsigned *rt, int32_t *offset)
-{
- /* LDR 0T01 1000 iiii iiii iiii iiii iiir rrrr */
- /* LDRSW 1001 1000 iiii iiii iiii iiii iiir rrrr */
- if ((insn & 0x3f000000) == 0x18000000)
- {
- *is_w = (insn >> 31) & 0x1;
-
- if (*is_w)
- {
- /* LDRSW always takes a 64-bit destination registers. */
- *is64 = 1;
- }
- else
- *is64 = (insn >> 30) & 0x1;
-
- *rt = (insn >> 0) & 0x1f;
- *offset = extract_signed_bitfield (insn, 19, 5) << 2;
-
- if (aarch64_debug)
- debug_printf ("decode: %s 0x%x %s %s%u, #?\n",
- core_addr_to_string_nz (addr), insn,
- *is_w ? "ldrsw" : "ldr",
- *is64 ? "x" : "w", *rt);
-
- return 1;
- }
-
- return 0;
-}
-
/* List of opcodes that we need for building the jump pad and relocating
an instruction. */
@@ -1924,49 +1860,6 @@ can_encode_int32 (int32_t val, unsigned bits)
return rest == 0 || rest == -1;
}
-/* Data passed to each method of aarch64_insn_visitor. */
-
-struct aarch64_insn_data
-{
- /* The instruction address. */
- CORE_ADDR insn_addr;
-};
-
-/* Visit different instructions by different methods. */
-
-struct aarch64_insn_visitor
-{
- /* Visit instruction B/BL OFFSET. */
- void (*b) (const int is_bl, const int32_t offset,
- struct aarch64_insn_data *data);
-
- /* Visit instruction B.COND OFFSET. */
- void (*b_cond) (const unsigned cond, const int32_t offset,
- struct aarch64_insn_data *data);
-
- /* Visit instruction CBZ/CBNZ Rn, OFFSET. */
- void (*cb) (const int32_t offset, const int is_cbnz,
- const unsigned rn, int is64,
- struct aarch64_insn_data *data);
-
- /* Visit instruction TBZ/TBNZ Rt, #BIT, OFFSET. */
- void (*tb) (const int32_t offset, int is_tbnz,
- const unsigned rt, unsigned bit,
- struct aarch64_insn_data *data);
-
- /* Visit instruction ADR/ADRP Rd, OFFSET. */
- void (*adr) (const int32_t offset, const unsigned rd,
- const int is_adrp, struct aarch64_insn_data *data);
-
- /* Visit instruction LDR/LDRSW Rt, OFFSET. */
- void (*ldr_literal) (const int32_t offset, const int is_sw,
- const unsigned rt, const int is64,
- struct aarch64_insn_data *data);
-
- /* Visit instruction INSN of other kinds. */
- void (*others) (const uint32_t insn, struct aarch64_insn_data *data);
-};
-
/* Sub-class of struct aarch64_insn_data, store information of
instruction relocation for fast tracepoint. Visitor can
relocate an instruction from BASE.INSN_ADDR to NEW_ADDR and save
@@ -2195,54 +2088,6 @@ static const struct aarch64_insn_visitor visitor =
aarch64_ftrace_insn_reloc_others,
};
-/* Visit an instruction INSN by VISITOR with all needed information in DATA.
-
- PC relative instructions need to be handled specifically:
-
- - B/BL
- - B.COND
- - CBZ/CBNZ
- - TBZ/TBNZ
- - ADR/ADRP
- - LDR/LDRSW (literal) */
-
-static void
-aarch64_relocate_instruction (uint32_t insn,
- const struct aarch64_insn_visitor *visitor,
- struct aarch64_insn_data *data)
-{
- int is_bl;
- int is64;
- int is_sw;
- int is_cbnz;
- int is_tbnz;
- int is_adrp;
- unsigned rn;
- unsigned rt;
- unsigned rd;
- unsigned cond;
- unsigned bit;
- int32_t offset;
-
- if (aarch64_decode_b (data->insn_addr, insn, &is_bl, &offset))
- visitor->b (is_bl, offset, data);
- else if (aarch64_decode_bcond (data->insn_addr, insn, &cond, &offset))
- visitor->b_cond (cond, offset, data);
- else if (aarch64_decode_cb (data->insn_addr, insn, &is64, &is_cbnz, &rn,
- &offset))
- visitor->cb (offset, is_cbnz, rn, is64, data);
- else if (aarch64_decode_tb (data->insn_addr, insn, &is_tbnz, &bit, &rt,
- &offset))
- visitor->tb (offset, is_tbnz, rt, bit, data);
- else if (aarch64_decode_adr (data->insn_addr, insn, &is_adrp, &rd, &offset))
- visitor->adr (offset, rd, is_adrp, data);
- else if (aarch64_decode_ldr_literal (data->insn_addr, insn, &is_sw, &is64,
- &rt, &offset))
- visitor->ldr_literal (offset, is_sw, rt, is64, data);
- else
- visitor->others (insn, data);
-}
-
/* Implementation of linux_target_ops method
"install_fast_tracepoint_jump_pad". */
--
1.9.1
* [PATCH 06/11] Support displaced stepping in aarch64-linux
@ 2015-10-07 9:27 ` Yao Qi
From: Yao Qi @ 2015-10-07 9:27 UTC (permalink / raw)
To: gdb-patches
This patch supports displaced stepping on aarch64-linux. A visitor is
implemented for displaced stepping, and is used to record the information
needed to fix up the PC after the displaced step, if necessary. Some
emit_* functions are converted to macros and moved to
arch/aarch64-insn.{c,h} so that they can be shared.
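The PC-fixup idea can be sketched as follows. This is a hypothetical
simplification, not GDB's actual aarch64_displaced_step_fixup signature: the
instruction is copied from its original location to a scratch area and
single-stepped there, and in the real code the visitor records per-instruction
state to decide how the resulting PC maps back:

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t core_addr;     /* stand-in for GDB's CORE_ADDR */

/* Sketch of the fixup: the 4-byte instruction was copied from FROM to
   the scratch area at TO and single-stepped there.  If execution
   simply fell through the copy, map the PC back to just after the
   original instruction; if a branch fired, the relocated instruction
   was emitted so that the PC is already correct.  */
static core_addr
displaced_step_fixup_pc (core_addr from, core_addr to, core_addr pc)
{
  if (pc == to + 4)
    return from + 4;            /* fell through: resume after FROM */

  return pc;                    /* branched away: already correct */
}
```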
gdb:
2015-10-05 Yao Qi <yao.qi@linaro.org>
* aarch64-linux-tdep.c: Include arch-utils.h.
(aarch64_linux_init_abi): Call set_gdbarch_max_insn_length,
set_gdbarch_displaced_step_copy_insn,
set_gdbarch_displaced_step_fixup,
set_gdbarch_displaced_step_free_closure,
set_gdbarch_displaced_step_location,
and set_gdbarch_displaced_step_hw_singlestep.
* aarch64-tdep.c (struct displaced_step_closure): New.
(struct aarch64_displaced_step_data): New.
(aarch64_displaced_step_b): New function.
(aarch64_displaced_step_b_cond): Likewise.
(aarch64_register): Likewise.
(aarch64_displaced_step_cb): Likewise.
(aarch64_displaced_step_tb): Likewise.
(aarch64_displaced_step_adr): Likewise.
(aarch64_displaced_step_ldr_literal): Likewise.
(aarch64_displaced_step_others): Likewise.
(aarch64_displaced_step_copy_insn): Likewise.
(aarch64_displaced_step_fixup): Likewise.
(aarch64_displaced_step_hw_singlestep): Likewise.
* aarch64-tdep.h (DISPLACED_MODIFIED_INSNS): New macro.
(aarch64_displaced_step_copy_insn): Declare.
(aarch64_displaced_step_fixup): Declare.
(aarch64_displaced_step_hw_singlestep): Declare.
* arch/aarch64-insn.c (emit_insn): Moved from
gdbserver/linux-aarch64-low.c.
(emit_load_store): Likewise.
* arch/aarch64-insn.h (enum aarch64_opcodes): Moved from
gdbserver/linux-aarch64-low.c.
(struct aarch64_register): Likewise.
(struct aarch64_memory_operand): Likewise.
(ENCODE): Likewise.
(can_encode_int32): New macro.
(emit_b, emit_bcond, emit_cb, emit_ldr, emit_ldrsw): Likewise.
(emit_tb, emit_nop): Likewise.
(emit_insn): Declare.
(emit_load_store): Declare.
gdb/gdbserver:
2015-10-05 Yao Qi <yao.qi@linaro.org>
* linux-aarch64-low.c (enum aarch64_opcodes): Move to
arch/aarch64-insn.h.
(struct aarch64_memory_operand): Likewise.
(ENCODE): Likewise.
(emit_insn): Move to arch/aarch64-insn.c.
(emit_b, emit_bcond, emit_cb, emit_tb): Remove.
(emit_load_store): Move to arch/aarch64-insn.c.
(emit_ldr, emit_ldrb, emit_ldrsw, emit_nop): Remove.
(can_encode_int32): Remove.
---
gdb/aarch64-linux-tdep.c | 12 ++
gdb/aarch64-tdep.c | 337 ++++++++++++++++++++++++++++++++++++++
gdb/aarch64-tdep.h | 17 ++
gdb/arch/aarch64-insn.c | 58 +++++++
gdb/arch/aarch64-insn.h | 232 ++++++++++++++++++++++++++
gdb/gdbserver/linux-aarch64-low.c | 323 +-----------------------------------
6 files changed, 659 insertions(+), 320 deletions(-)
diff --git a/gdb/aarch64-linux-tdep.c b/gdb/aarch64-linux-tdep.c
index aaf6608..272aafe 100644
--- a/gdb/aarch64-linux-tdep.c
+++ b/gdb/aarch64-linux-tdep.c
@@ -21,6 +21,7 @@
#include "defs.h"
#include "gdbarch.h"
+#include "arch-utils.h"
#include "glibc-tdep.h"
#include "linux-tdep.h"
#include "aarch64-tdep.h"
@@ -1151,6 +1152,17 @@ aarch64_linux_init_abi (struct gdbarch_info info, struct gdbarch *gdbarch)
/* `catch syscall' */
set_xml_syscall_file_name (gdbarch, "syscalls/aarch64-linux.xml");
set_gdbarch_get_syscall_number (gdbarch, aarch64_linux_get_syscall_number);
+
+ /* Displaced stepping. */
+ set_gdbarch_max_insn_length (gdbarch, 4 * DISPLACED_MODIFIED_INSNS);
+ set_gdbarch_displaced_step_copy_insn (gdbarch,
+ aarch64_displaced_step_copy_insn);
+ set_gdbarch_displaced_step_fixup (gdbarch, aarch64_displaced_step_fixup);
+ set_gdbarch_displaced_step_free_closure (gdbarch,
+ simple_displaced_step_free_closure);
+ set_gdbarch_displaced_step_location (gdbarch, linux_displaced_step_location);
+ set_gdbarch_displaced_step_hw_singlestep (gdbarch,
+ aarch64_displaced_step_hw_singlestep);
}
/* Provide a prototype to silence -Wmissing-prototypes. */
diff --git a/gdb/aarch64-tdep.c b/gdb/aarch64-tdep.c
index df67e12..d9c4334 100644
--- a/gdb/aarch64-tdep.c
+++ b/gdb/aarch64-tdep.c
@@ -2559,6 +2559,343 @@ aarch64_software_single_step (struct frame_info *frame)
return 1;
}
+struct displaced_step_closure
+{
+ /* True if a conditional instruction, such as B.COND or TBZ, is
+ being displaced stepped. */
+ int cond;
+
+ /* PC adjustment offset after displaced stepping. */
+ int32_t pc_adjust;
+};
+
+/* Data when visiting instructions for displaced stepping. */
+
+struct aarch64_displaced_step_data
+{
+ struct aarch64_insn_data base;
+
+ /* The address at which the instruction will be executed. */
+ CORE_ADDR new_addr;
+ /* Buffer of instructions to be copied to NEW_ADDR to execute. */
+ uint32_t insn_buf[DISPLACED_MODIFIED_INSNS];
+ /* Number of instructions in INSN_BUF. */
+ unsigned insn_count;
+ /* Registers when doing displaced stepping. */
+ struct regcache *regs;
+
+ struct displaced_step_closure *dsc;
+};
+
+/* Implementation of aarch64_insn_visitor method "b". */
+
+static void
+aarch64_displaced_step_b (const int is_bl, const int32_t offset,
+ struct aarch64_insn_data *data)
+{
+ struct aarch64_displaced_step_data *dsd
+ = (struct aarch64_displaced_step_data *) data;
+ int32_t new_offset = data->insn_addr - dsd->new_addr + offset;
+
+ if (can_encode_int32 (new_offset, 28))
+ {
+ /* Emit B rather than BL, because executing BL at the new address
+ would store the wrong return address in LR. To avoid this, we
+ emit B, and update LR ourselves if the instruction is BL. */
+ emit_b (dsd->insn_buf, 0, new_offset);
+ dsd->insn_count++;
+ }
+ else
+ {
+ /* Write NOP. */
+ emit_nop (dsd->insn_buf);
+ dsd->insn_count++;
+ dsd->dsc->pc_adjust = offset;
+ }
+
+ if (is_bl)
+ {
+ /* Update LR. */
+ regcache_cooked_write_unsigned (dsd->regs, AARCH64_LR_REGNUM,
+ data->insn_addr + 4);
+ }
+}
+
+/* Implementation of aarch64_insn_visitor method "b_cond". */
+
+static void
+aarch64_displaced_step_b_cond (const unsigned cond, const int32_t offset,
+ struct aarch64_insn_data *data)
+{
+ struct aarch64_displaced_step_data *dsd
+ = (struct aarch64_displaced_step_data *) data;
+ int32_t new_offset = data->insn_addr - dsd->new_addr + offset;
+
+ /* GDB has to fix up the PC after displaced stepping this instruction
+ differently, depending on whether the condition is true or false.
+ Instead of checking COND against the condition flags, we can emit
+ the following instructions, and then GDB can tell how to fix up
+ the PC from the resulting PC value.
+
+ B.COND TAKEN ; If cond is true, then jump to TAKEN.
+ INSN1 ;
+ TAKEN:
+ INSN2
+ */
+
+ emit_bcond (dsd->insn_buf, cond, 8);
+ dsd->dsc->cond = 1;
+ dsd->dsc->pc_adjust = offset;
+ dsd->insn_count = 1;
+}
+
+/* Build an aarch64_register operand for register number NUM. IS64 is
+ non-zero for a 64-bit (X) register, zero for a 32-bit (W) one. */
+
+static struct aarch64_register
+aarch64_register (unsigned num, int is64)
+{
+ return (struct aarch64_register) { num, is64 };
+}
+
+/* Implementation of aarch64_insn_visitor method "cb". */
+
+static void
+aarch64_displaced_step_cb (const int32_t offset, const int is_cbnz,
+ const unsigned rn, int is64,
+ struct aarch64_insn_data *data)
+{
+ struct aarch64_displaced_step_data *dsd
+ = (struct aarch64_displaced_step_data *) data;
+ int32_t new_offset = data->insn_addr - dsd->new_addr + offset;
+
+ /* The offset may be out of range for a compare and branch
+ instruction, so we always use the following sequence instead:
+
+ CBZ xn, TAKEN ; xn == 0, then jump to TAKEN.
+ INSN1 ;
+ TAKEN:
+ INSN2
+ */
+ emit_cb (dsd->insn_buf, is_cbnz, aarch64_register (rn, is64), 8);
+ dsd->insn_count = 1;
+ dsd->dsc->cond = 1;
+ dsd->dsc->pc_adjust = offset;
+}
+
+/* Implementation of aarch64_insn_visitor method "tb". */
+
+static void
+aarch64_displaced_step_tb (const int32_t offset, int is_tbnz,
+ const unsigned rt, unsigned bit,
+ struct aarch64_insn_data *data)
+{
+ struct aarch64_displaced_step_data *dsd
+ = (struct aarch64_displaced_step_data *) data;
+ int32_t new_offset = data->insn_addr - dsd->new_addr + offset;
+
+ /* The offset may be out of range for a test bit and branch
+ instruction, so we always use the following sequence instead:
+
+ TBZ xn, #bit, TAKEN ; xn[bit] == 0, then jump to TAKEN.
+ INSN1 ;
+ TAKEN:
+ INSN2
+ */
+ emit_tb (dsd->insn_buf, is_tbnz, bit, aarch64_register (rt, 1), 8);
+ dsd->insn_count = 1;
+ dsd->dsc->cond = 1;
+ dsd->dsc->pc_adjust = offset;
+}
+
+/* Implementation of aarch64_insn_visitor method "adr". */
+
+static void
+aarch64_displaced_step_adr (const int32_t offset, const unsigned rd,
+ const int is_adrp, struct aarch64_insn_data *data)
+{
+ struct aarch64_displaced_step_data *dsd
+ = (struct aarch64_displaced_step_data *) data;
+ /* We know exactly the address the ADR{P,} instruction will compute.
+ We can just write it to the destination register. */
+ CORE_ADDR address = data->insn_addr + offset;
+
+ if (is_adrp)
+ {
+ /* Clear the lower 12 bits of the offset to get the 4K page. */
+ regcache_cooked_write_unsigned (dsd->regs, AARCH64_X0_REGNUM + rd,
+ address & ~0xfff);
+ }
+ else
+ regcache_cooked_write_unsigned (dsd->regs, AARCH64_X0_REGNUM + rd,
+ address);
+
+ dsd->dsc->pc_adjust = 4;
+ emit_nop (dsd->insn_buf);
+ dsd->insn_count = 1;
+}
+
+/* Implementation of aarch64_insn_visitor method "ldr_literal". */
+
+static void
+aarch64_displaced_step_ldr_literal (const int32_t offset, const int is_sw,
+ const unsigned rt, const int is64,
+ struct aarch64_insn_data *data)
+{
+ struct aarch64_displaced_step_data *dsd
+ = (struct aarch64_displaced_step_data *) data;
+ CORE_ADDR address = data->insn_addr + offset;
+ struct aarch64_memory_operand zero = { MEMORY_OPERAND_OFFSET, 0 };
+
+ regcache_cooked_write_unsigned (dsd->regs, AARCH64_X0_REGNUM + rt,
+ address);
+
+ if (is_sw)
+ dsd->insn_count = emit_ldrsw (dsd->insn_buf, aarch64_register (rt, 1),
+ aarch64_register (rt, 1), zero);
+ else
+ dsd->insn_count = emit_ldr (dsd->insn_buf, aarch64_register (rt, is64),
+ aarch64_register (rt, 1), zero);
+
+ dsd->dsc->pc_adjust = 4;
+}
+
+/* Implementation of aarch64_insn_visitor method "others". */
+
+static void
+aarch64_displaced_step_others (const uint32_t insn,
+ struct aarch64_insn_data *data)
+{
+ struct aarch64_displaced_step_data *dsd
+ = (struct aarch64_displaced_step_data *) data;
+
+ emit_insn (dsd->insn_buf, insn);
+ dsd->insn_count = 1;
+
+ if ((insn & 0xfffffc1f) == 0xd65f0000)
+ {
+ /* RET */
+ dsd->dsc->pc_adjust = 0;
+ }
+ else
+ dsd->dsc->pc_adjust = 4;
+}
+
+static const struct aarch64_insn_visitor visitor =
+{
+ aarch64_displaced_step_b,
+ aarch64_displaced_step_b_cond,
+ aarch64_displaced_step_cb,
+ aarch64_displaced_step_tb,
+ aarch64_displaced_step_adr,
+ aarch64_displaced_step_ldr_literal,
+ aarch64_displaced_step_others,
+};
+
+/* Implement the "displaced_step_copy_insn" gdbarch method. */
+
+struct displaced_step_closure *
+aarch64_displaced_step_copy_insn (struct gdbarch *gdbarch,
+ CORE_ADDR from, CORE_ADDR to,
+ struct regcache *regs)
+{
+ struct displaced_step_closure *dsc = NULL;
+ enum bfd_endian byte_order_for_code = gdbarch_byte_order_for_code (gdbarch);
+ uint32_t insn = read_memory_unsigned_integer (from, 4, byte_order_for_code);
+ struct aarch64_displaced_step_data dsd;
+
+ /* Look for a Load Exclusive instruction which begins the sequence. */
+ if (decode_masked_match (insn, 0x3fc00000, 0x08400000))
+ {
+ /* We can't displaced step atomic sequences. */
+ return NULL;
+ }
+
+ dsc = XCNEW (struct displaced_step_closure);
+ dsd.base.insn_addr = from;
+ dsd.new_addr = to;
+ dsd.regs = regs;
+ dsd.dsc = dsc;
+ aarch64_relocate_instruction (insn, &visitor,
+ (struct aarch64_insn_data *) &dsd);
+ gdb_assert (dsd.insn_count <= DISPLACED_MODIFIED_INSNS);
+
+ if (dsd.insn_count != 0)
+ {
+ int i;
+
+ /* The instruction can be relocated to the scratch pad. Copy the
+ relocated instruction(s) there. */
+ for (i = 0; i < dsd.insn_count; i++)
+ {
+ if (debug_displaced)
+ {
+ debug_printf ("displaced: writing insn ");
+ debug_printf ("%.8x", dsd.insn_buf[i]);
+ debug_printf (" at %s\n", paddress (gdbarch, to + i * 4));
+ }
+ write_memory_unsigned_integer (to + i * 4, 4, byte_order_for_code,
+ (ULONGEST) dsd.insn_buf[i]);
+ }
+ }
+ else
+ {
+ xfree (dsc);
+ dsc = NULL;
+ }
+
+ return dsc;
+}
+
+/* Implement the "displaced_step_fixup" gdbarch method. */
+
+void
+aarch64_displaced_step_fixup (struct gdbarch *gdbarch,
+ struct displaced_step_closure *dsc,
+ CORE_ADDR from, CORE_ADDR to,
+ struct regcache *regs)
+{
+ if (dsc->cond)
+ {
+ ULONGEST pc;
+
+ regcache_cooked_read_unsigned (regs, AARCH64_PC_REGNUM, &pc);
+ if (pc - to == 8)
+ {
+ /* Condition is true. */
+ }
+ else if (pc - to == 4)
+ {
+ /* Condition is false. */
+ dsc->pc_adjust = 4;
+ }
+ else
+ gdb_assert_not_reached ("Unexpected PC value after displaced stepping");
+ }
+
+ if (dsc->pc_adjust != 0)
+ {
+ if (debug_displaced)
+ {
+ debug_printf ("displaced: fixup: set PC to %s:%d\n",
+ paddress (gdbarch, from), dsc->pc_adjust);
+ }
+ regcache_cooked_write_unsigned (regs, AARCH64_PC_REGNUM,
+ from + dsc->pc_adjust);
+ }
+}
+
+/* Implement the "displaced_step_hw_singlestep" gdbarch method. */
+
+int
+aarch64_displaced_step_hw_singlestep (struct gdbarch *gdbarch,
+ struct displaced_step_closure *closure)
+{
+ return 1;
+}
+
/* Initialize the current architecture based on INFO. If possible,
re-use an architecture from ARCHES, which is a list of
architectures already created during this debugging session.
diff --git a/gdb/aarch64-tdep.h b/gdb/aarch64-tdep.h
index af209a9..6297170 100644
--- a/gdb/aarch64-tdep.h
+++ b/gdb/aarch64-tdep.h
@@ -69,6 +69,10 @@ enum aarch64_regnum
/* Total number of general (X) registers. */
#define AARCH64_X_REGISTER_COUNT 32
+/* The maximum number of modified instructions generated for one
+ single-stepped instruction. */
+#define DISPLACED_MODIFIED_INSNS 1
+
/* Target-dependent structure in gdbarch. */
struct gdbarch_tdep
{
@@ -98,4 +102,17 @@ extern struct target_desc *tdesc_aarch64;
extern int aarch64_process_record (struct gdbarch *gdbarch,
struct regcache *regcache, CORE_ADDR addr);
+struct displaced_step_closure *
+ aarch64_displaced_step_copy_insn (struct gdbarch *gdbarch,
+ CORE_ADDR from, CORE_ADDR to,
+ struct regcache *regs);
+
+void aarch64_displaced_step_fixup (struct gdbarch *gdbarch,
+ struct displaced_step_closure *dsc,
+ CORE_ADDR from, CORE_ADDR to,
+ struct regcache *regs);
+
+int aarch64_displaced_step_hw_singlestep (struct gdbarch *gdbarch,
+ struct displaced_step_closure *closure);
+
#endif /* aarch64-tdep.h */
diff --git a/gdb/arch/aarch64-insn.c b/gdb/arch/aarch64-insn.c
index d0e88fa..3bc0117 100644
--- a/gdb/arch/aarch64-insn.c
+++ b/gdb/arch/aarch64-insn.c
@@ -328,3 +328,61 @@ aarch64_relocate_instruction (uint32_t insn,
else
visitor->others (insn, data);
}
+
+/* Write a 32-bit unsigned integer INSN into *BUF. Return the number of
+ instructions written (always 1). */
+
+int
+emit_insn (uint32_t *buf, uint32_t insn)
+{
+ *buf = insn;
+ return 1;
+}
+
+/* Helper function emitting a load or store instruction. */
+
+int
+emit_load_store (uint32_t *buf, uint32_t size,
+ enum aarch64_opcodes opcode,
+ struct aarch64_register rt,
+ struct aarch64_register rn,
+ struct aarch64_memory_operand operand)
+{
+ uint32_t op;
+
+ switch (operand.type)
+ {
+ case MEMORY_OPERAND_OFFSET:
+ {
+ op = ENCODE (1, 1, 24);
+
+ return emit_insn (buf, opcode | ENCODE (size, 2, 30) | op
+ | ENCODE (operand.index >> 3, 12, 10)
+ | ENCODE (rn.num, 5, 5)
+ | ENCODE (rt.num, 5, 0));
+ }
+ case MEMORY_OPERAND_POSTINDEX:
+ {
+ uint32_t post_index = ENCODE (1, 2, 10);
+
+ op = ENCODE (0, 1, 24);
+
+ return emit_insn (buf, opcode | ENCODE (size, 2, 30) | op
+ | post_index | ENCODE (operand.index, 9, 12)
+ | ENCODE (rn.num, 5, 5) | ENCODE (rt.num, 5, 0));
+ }
+ case MEMORY_OPERAND_PREINDEX:
+ {
+ uint32_t pre_index = ENCODE (3, 2, 10);
+
+ op = ENCODE (0, 1, 24);
+
+ return emit_insn (buf, opcode | ENCODE (size, 2, 30) | op
+ | pre_index | ENCODE (operand.index, 9, 12)
+ | ENCODE (rn.num, 5, 5)
+ | ENCODE (rt.num, 5, 0));
+ }
+ default:
+ return 0;
+ }
+}
diff --git a/gdb/arch/aarch64-insn.h b/gdb/arch/aarch64-insn.h
index 47f6715..01a5d73 100644
--- a/gdb/arch/aarch64-insn.h
+++ b/gdb/arch/aarch64-insn.h
@@ -21,6 +21,129 @@
extern int aarch64_debug;
+/* List of opcodes that we need for building the jump pad and relocating
+ an instruction. */
+
+enum aarch64_opcodes
+{
+ /* B 0001 01ii iiii iiii iiii iiii iiii iiii */
+ /* BL 1001 01ii iiii iiii iiii iiii iiii iiii */
+ /* B.COND 0101 0100 iiii iiii iiii iiii iii0 cccc */
+ /* CBZ s011 0100 iiii iiii iiii iiii iiir rrrr */
+ /* CBNZ s011 0101 iiii iiii iiii iiii iiir rrrr */
+ /* TBZ b011 0110 bbbb biii iiii iiii iiir rrrr */
+ /* TBNZ b011 0111 bbbb biii iiii iiii iiir rrrr */
+ B = 0x14000000,
+ BL = 0x80000000 | B,
+ BCOND = 0x40000000 | B,
+ CBZ = 0x20000000 | B,
+ CBNZ = 0x21000000 | B,
+ TBZ = 0x36000000 | B,
+ TBNZ = 0x37000000 | B,
+ /* BLR 1101 0110 0011 1111 0000 00rr rrr0 0000 */
+ BLR = 0xd63f0000,
+ /* RET 1101 0110 0101 1111 0000 00rr rrr0 0000 */
+ RET = 0xd65f0000,
+ /* STP s010 100o o0ii iiii irrr rrrr rrrr rrrr */
+ /* LDP s010 100o o1ii iiii irrr rrrr rrrr rrrr */
+ /* STP (SIMD&VFP) ss10 110o o0ii iiii irrr rrrr rrrr rrrr */
+ /* LDP (SIMD&VFP) ss10 110o o1ii iiii irrr rrrr rrrr rrrr */
+ STP = 0x28000000,
+ LDP = 0x28400000,
+ STP_SIMD_VFP = 0x04000000 | STP,
+ LDP_SIMD_VFP = 0x04000000 | LDP,
+ /* STR ss11 100o 00xi iiii iiii xxrr rrrr rrrr */
+ /* LDR ss11 100o 01xi iiii iiii xxrr rrrr rrrr */
+ /* LDRSW 1011 100o 10xi iiii iiii xxrr rrrr rrrr */
+ STR = 0x38000000,
+ LDR = 0x00400000 | STR,
+ LDRSW = 0x80800000 | STR,
+ /* LDAXR ss00 1000 0101 1111 1111 11rr rrrr rrrr */
+ LDAXR = 0x085ffc00,
+ /* STXR ss00 1000 000r rrrr 0111 11rr rrrr rrrr */
+ STXR = 0x08007c00,
+ /* STLR ss00 1000 1001 1111 1111 11rr rrrr rrrr */
+ STLR = 0x089ffc00,
+ /* MOV s101 0010 1xxi iiii iiii iiii iiir rrrr */
+ /* MOVK s111 0010 1xxi iiii iiii iiii iiir rrrr */
+ MOV = 0x52800000,
+ MOVK = 0x20000000 | MOV,
+ /* ADD s00o ooo1 xxxx xxxx xxxx xxxx xxxx xxxx */
+ /* SUB s10o ooo1 xxxx xxxx xxxx xxxx xxxx xxxx */
+ /* SUBS s11o ooo1 xxxx xxxx xxxx xxxx xxxx xxxx */
+ ADD = 0x01000000,
+ SUB = 0x40000000 | ADD,
+ SUBS = 0x20000000 | SUB,
+ /* AND s000 1010 xx0x xxxx xxxx xxxx xxxx xxxx */
+ /* ORR s010 1010 xx0x xxxx xxxx xxxx xxxx xxxx */
+ /* ORN s010 1010 xx1x xxxx xxxx xxxx xxxx xxxx */
+ /* EOR s100 1010 xx0x xxxx xxxx xxxx xxxx xxxx */
+ AND = 0x0a000000,
+ ORR = 0x20000000 | AND,
+ ORN = 0x00200000 | ORR,
+ EOR = 0x40000000 | AND,
+ /* LSLV s001 1010 110r rrrr 0010 00rr rrrr rrrr */
+ /* LSRV s001 1010 110r rrrr 0010 01rr rrrr rrrr */
+ /* ASRV s001 1010 110r rrrr 0010 10rr rrrr rrrr */
+ LSLV = 0x1ac02000,
+ LSRV = 0x00000400 | LSLV,
+ ASRV = 0x00000800 | LSLV,
+ /* SBFM s001 0011 0nii iiii iiii iirr rrrr rrrr */
+ SBFM = 0x13000000,
+ /* UBFM s101 0011 0nii iiii iiii iirr rrrr rrrr */
+ UBFM = 0x40000000 | SBFM,
+ /* CSINC s001 1010 100r rrrr cccc 01rr rrrr rrrr */
+ CSINC = 0x9a800400,
+ /* MUL s001 1011 000r rrrr 0111 11rr rrrr rrrr */
+ MUL = 0x1b007c00,
+ /* MSR (register) 1101 0101 0001 oooo oooo oooo ooor rrrr */
+ /* MRS 1101 0101 0011 oooo oooo oooo ooor rrrr */
+ MSR = 0xd5100000,
+ MRS = 0x00200000 | MSR,
+ /* HINT 1101 0101 0000 0011 0010 oooo ooo1 1111 */
+ HINT = 0xd503201f,
+ SEVL = (5 << 5) | HINT,
+ WFE = (2 << 5) | HINT,
+ NOP = (0 << 5) | HINT,
+};
+
+/* Representation of a general purpose register of the form xN or wN.
+
+ This type is used by emitting functions that take registers as operands. */
+
+struct aarch64_register
+{
+ unsigned num;
+ int is64;
+};
+
+/* Representation of a memory operand, used for load and store
+ instructions.
+
+ The types correspond to the following variants:
+
+ MEMORY_OPERAND_OFFSET: LDR rt, [rn, #offset]
+ MEMORY_OPERAND_PREINDEX: LDR rt, [rn, #index]!
+ MEMORY_OPERAND_POSTINDEX: LDR rt, [rn], #index */
+
+struct aarch64_memory_operand
+{
+ /* Type of the operand. */
+ enum
+ {
+ MEMORY_OPERAND_OFFSET,
+ MEMORY_OPERAND_PREINDEX,
+ MEMORY_OPERAND_POSTINDEX,
+ } type;
+ /* Index from the base register. */
+ int32_t index;
+};
+
+/* Helper macro to mask and shift a value into a bitfield. */
+
+#define ENCODE(val, size, offset) \
+ ((uint32_t) ((val & ((1ULL << size) - 1)) << offset))
+
int aarch64_decode_adr (CORE_ADDR addr, uint32_t insn, int *is_adrp,
unsigned *rd, int32_t *offset);
@@ -86,4 +209,113 @@ void aarch64_relocate_instruction (uint32_t insn,
const struct aarch64_insn_visitor *visitor,
struct aarch64_insn_data *data);
+/* Return non-zero if VAL can be encoded in BITS bits. Relies on the
+ right shift of a negative value being arithmetic. */
+
+#define can_encode_int32(val, bits) \
+ (((val) >> (bits)) == 0 || ((val) >> (bits)) == -1)
+
+/* Write a B or BL instruction into *BUF.
+
+ B #offset
+ BL #offset
+
+ IS_BL specifies if the link register should be updated.
+ OFFSET is the immediate offset from the current PC. It is
+ byte-addressed but should be 4 bytes aligned. It has a limited range of
+ +/- 128MB (26 bits << 2). */
+
+#define emit_b(buf, is_bl, offset) \
+ emit_insn (buf, ((is_bl) ? BL : B) | (ENCODE ((offset) >> 2, 26, 0)))
+
+/* Write a BCOND instruction into *BUF.
+
+ B.COND #offset
+
+ COND specifies the condition field.
+ OFFSET is the immediate offset from the current PC. It is
+ byte-addressed but should be 4 bytes aligned. It has a limited range of
+ +/- 1MB (19 bits << 2). */
+
+#define emit_bcond(buf, cond, offset) \
+ emit_insn (buf, \
+ BCOND | ENCODE ((offset) >> 2, 19, 5) \
+ | ENCODE ((cond), 4, 0))
+
+/* Write a CBZ or CBNZ instruction into *BUF.
+
+ CBZ rt, #offset
+ CBNZ rt, #offset
+
+ IS_CBNZ distinguishes between CBZ and CBNZ instructions.
+ RT is the register to test.
+ OFFSET is the immediate offset from the current PC. It is
+ byte-addressed but should be 4 bytes aligned. It has a limited range of
+ +/- 1MB (19 bits << 2). */
+
+#define emit_cb(buf, is_cbnz, rt, offset) \
+ emit_insn (buf, \
+ ((is_cbnz) ? CBNZ : CBZ) \
+ | ENCODE (rt.is64, 1, 31) /* sf */ \
+ | ENCODE (offset >> 2, 19, 5) /* imm19 */ \
+ | ENCODE (rt.num, 5, 0))
+
+/* Write a LDR instruction into *BUF.
+
+ LDR rt, [rn, #offset]
+ LDR rt, [rn, #index]!
+ LDR rt, [rn], #index
+
+ RT is the destination register.
+ RN is the base address register.
+ OFFSET is the immediate to add to the base address. It is limited to
+ 0 .. 32760 range (12 bits << 3). */
+
+#define emit_ldr(buf, rt, rn, operand) \
+ emit_load_store (buf, rt.is64 ? 3 : 2, LDR, rt, rn, operand)
+
+/* Write a LDRSW instruction into *BUF. The register size is 64-bit.
+
+ LDRSW xt, [rn, #offset]
+ LDRSW xt, [rn, #index]!
+ LDRSW xt, [rn], #index
+
+ RT is the destination register.
+ RN is the base address register.
+ OFFSET is the immediate to add to the base address. It is limited to
+ 0 .. 16380 range (12 bits << 2). */
+
+#define emit_ldrsw(buf, rt, rn, operand) \
+ emit_load_store (buf, 3, LDRSW, rt, rn, operand)
+
+
+/* Write a TBZ or TBNZ instruction into *BUF.
+
+ TBZ rt, #bit, #offset
+ TBNZ rt, #bit, #offset
+
+ IS_TBNZ distinguishes between TBZ and TBNZ instructions.
+ RT is the register to test.
+ BIT is the index of the bit to test in register RT.
+ OFFSET is the immediate offset from the current PC. It is
+ byte-addressed but should be 4 bytes aligned. It has a limited range of
+ +/- 32KB (14 bits << 2). */
+
+#define emit_tb(buf, is_tbnz, bit, rt, offset) \
+ emit_insn (buf, \
+ ((is_tbnz) ? TBNZ : TBZ) \
+ | ENCODE (bit >> 5, 1, 31) /* b5 */ \
+ | ENCODE (bit, 5, 19) /* b40 */ \
+ | ENCODE (offset >> 2, 14, 5) /* imm14 */ \
+ | ENCODE (rt.num, 5, 0))
+
+/* Write a NOP instruction into *BUF. */
+
+#define emit_nop(buf) emit_insn (buf, NOP)
+
+int emit_insn (uint32_t *buf, uint32_t insn);
+
+int emit_load_store (uint32_t *buf, uint32_t size,
+ enum aarch64_opcodes opcode,
+ struct aarch64_register rt,
+ struct aarch64_register rn,
+ struct aarch64_memory_operand operand);
+
#endif
diff --git a/gdb/gdbserver/linux-aarch64-low.c b/gdb/gdbserver/linux-aarch64-low.c
index 1241434..9450449 100644
--- a/gdb/gdbserver/linux-aarch64-low.c
+++ b/gdb/gdbserver/linux-aarch64-low.c
@@ -584,92 +584,6 @@ aarch64_get_thread_area (int lwpid, CORE_ADDR *addrp)
return 0;
}
-/* List of opcodes that we need for building the jump pad and relocating
- an instruction. */
-
-enum aarch64_opcodes
-{
- /* B 0001 01ii iiii iiii iiii iiii iiii iiii */
- /* BL 1001 01ii iiii iiii iiii iiii iiii iiii */
- /* B.COND 0101 0100 iiii iiii iiii iiii iii0 cccc */
- /* CBZ s011 0100 iiii iiii iiii iiii iiir rrrr */
- /* CBNZ s011 0101 iiii iiii iiii iiii iiir rrrr */
- /* TBZ b011 0110 bbbb biii iiii iiii iiir rrrr */
- /* TBNZ b011 0111 bbbb biii iiii iiii iiir rrrr */
- B = 0x14000000,
- BL = 0x80000000 | B,
- BCOND = 0x40000000 | B,
- CBZ = 0x20000000 | B,
- CBNZ = 0x21000000 | B,
- TBZ = 0x36000000 | B,
- TBNZ = 0x37000000 | B,
- /* BLR 1101 0110 0011 1111 0000 00rr rrr0 0000 */
- BLR = 0xd63f0000,
- /* RET 1101 0110 0101 1111 0000 00rr rrr0 0000 */
- RET = 0xd65f0000,
- /* STP s010 100o o0ii iiii irrr rrrr rrrr rrrr */
- /* LDP s010 100o o1ii iiii irrr rrrr rrrr rrrr */
- /* STP (SIMD&VFP) ss10 110o o0ii iiii irrr rrrr rrrr rrrr */
- /* LDP (SIMD&VFP) ss10 110o o1ii iiii irrr rrrr rrrr rrrr */
- STP = 0x28000000,
- LDP = 0x28400000,
- STP_SIMD_VFP = 0x04000000 | STP,
- LDP_SIMD_VFP = 0x04000000 | LDP,
- /* STR ss11 100o 00xi iiii iiii xxrr rrrr rrrr */
- /* LDR ss11 100o 01xi iiii iiii xxrr rrrr rrrr */
- /* LDRSW 1011 100o 10xi iiii iiii xxrr rrrr rrrr */
- STR = 0x38000000,
- LDR = 0x00400000 | STR,
- LDRSW = 0x80800000 | STR,
- /* LDAXR ss00 1000 0101 1111 1111 11rr rrrr rrrr */
- LDAXR = 0x085ffc00,
- /* STXR ss00 1000 000r rrrr 0111 11rr rrrr rrrr */
- STXR = 0x08007c00,
- /* STLR ss00 1000 1001 1111 1111 11rr rrrr rrrr */
- STLR = 0x089ffc00,
- /* MOV s101 0010 1xxi iiii iiii iiii iiir rrrr */
- /* MOVK s111 0010 1xxi iiii iiii iiii iiir rrrr */
- MOV = 0x52800000,
- MOVK = 0x20000000 | MOV,
- /* ADD s00o ooo1 xxxx xxxx xxxx xxxx xxxx xxxx */
- /* SUB s10o ooo1 xxxx xxxx xxxx xxxx xxxx xxxx */
- /* SUBS s11o ooo1 xxxx xxxx xxxx xxxx xxxx xxxx */
- ADD = 0x01000000,
- SUB = 0x40000000 | ADD,
- SUBS = 0x20000000 | SUB,
- /* AND s000 1010 xx0x xxxx xxxx xxxx xxxx xxxx */
- /* ORR s010 1010 xx0x xxxx xxxx xxxx xxxx xxxx */
- /* ORN s010 1010 xx1x xxxx xxxx xxxx xxxx xxxx */
- /* EOR s100 1010 xx0x xxxx xxxx xxxx xxxx xxxx */
- AND = 0x0a000000,
- ORR = 0x20000000 | AND,
- ORN = 0x00200000 | ORR,
- EOR = 0x40000000 | AND,
- /* LSLV s001 1010 110r rrrr 0010 00rr rrrr rrrr */
- /* LSRV s001 1010 110r rrrr 0010 01rr rrrr rrrr */
- /* ASRV s001 1010 110r rrrr 0010 10rr rrrr rrrr */
- LSLV = 0x1ac02000,
- LSRV = 0x00000400 | LSLV,
- ASRV = 0x00000800 | LSLV,
- /* SBFM s001 0011 0nii iiii iiii iirr rrrr rrrr */
- SBFM = 0x13000000,
- /* UBFM s101 0011 0nii iiii iiii iirr rrrr rrrr */
- UBFM = 0x40000000 | SBFM,
- /* CSINC s001 1010 100r rrrr cccc 01rr rrrr rrrr */
- CSINC = 0x9a800400,
- /* MUL s001 1011 000r rrrr 0111 11rr rrrr rrrr */
- MUL = 0x1b007c00,
- /* MSR (register) 1101 0101 0001 oooo oooo oooo ooor rrrr */
- /* MRS 1101 0101 0011 oooo oooo oooo ooor rrrr */
- MSR = 0xd5100000,
- MRS = 0x00200000 | MSR,
- /* HINT 1101 0101 0000 0011 0010 oooo ooo1 1111 */
- HINT = 0xd503201f,
- SEVL = (5 << 5) | HINT,
- WFE = (2 << 5) | HINT,
- NOP = (0 << 5) | HINT,
-};
-
/* List of condition codes that we need. */
enum aarch64_condition_codes
@@ -683,16 +597,6 @@ enum aarch64_condition_codes
LE = 0xd,
};
-/* Representation of a general purpose register of the form xN or wN.
-
- This type is used by emitting functions that take registers as operands. */
-
-struct aarch64_register
-{
- unsigned num;
- int is64;
-};
-
/* Representation of an operand. At this time, it only supports register
and immediate types. */
@@ -779,28 +683,6 @@ immediate_operand (uint32_t imm)
return operand;
}
-/* Representation of a memory operand, used for load and store
- instructions.
-
- The types correspond to the following variants:
-
- MEMORY_OPERAND_OFFSET: LDR rt, [rn, #offset]
- MEMORY_OPERAND_PREINDEX: LDR rt, [rn, #index]!
- MEMORY_OPERAND_POSTINDEX: LDR rt, [rn], #index */
-
-struct aarch64_memory_operand
-{
- /* Type of the operand. */
- enum
- {
- MEMORY_OPERAND_OFFSET,
- MEMORY_OPERAND_PREINDEX,
- MEMORY_OPERAND_POSTINDEX,
- } type;
- /* Index from the base register. */
- int32_t index;
-};
-
/* Helper function to create an offset memory operand.
For example:
@@ -852,108 +734,6 @@ enum aarch64_system_control_registers
TPIDR_EL0 = (0x1 << 14) | (0x3 << 11) | (0xd << 7) | (0x0 << 3) | 0x2
};
-/* Helper macro to mask and shift a value into a bitfield. */
-
-#define ENCODE(val, size, offset) \
- ((uint32_t) ((val & ((1ULL << size) - 1)) << offset))
-
-/* Write a 32-bit unsigned integer INSN info *BUF. Return the number of
- instructions written (aka. 1). */
-
-static int
-emit_insn (uint32_t *buf, uint32_t insn)
-{
- *buf = insn;
- return 1;
-}
-
-/* Write a B or BL instruction into *BUF.
-
- B #offset
- BL #offset
-
- IS_BL specifies if the link register should be updated.
- OFFSET is the immediate offset from the current PC. It is
- byte-addressed but should be 4 bytes aligned. It has a limited range of
- +/- 128MB (26 bits << 2). */
-
-static int
-emit_b (uint32_t *buf, int is_bl, int32_t offset)
-{
- uint32_t imm26 = ENCODE (offset >> 2, 26, 0);
-
- if (is_bl)
- return emit_insn (buf, BL | imm26);
- else
- return emit_insn (buf, B | imm26);
-}
-
-/* Write a BCOND instruction into *BUF.
-
- B.COND #offset
-
- COND specifies the condition field.
- OFFSET is the immediate offset from the current PC. It is
- byte-addressed but should be 4 bytes aligned. It has a limited range of
- +/- 1MB (19 bits << 2). */
-
-static int
-emit_bcond (uint32_t *buf, unsigned cond, int32_t offset)
-{
- return emit_insn (buf, BCOND | ENCODE (offset >> 2, 19, 5)
- | ENCODE (cond, 4, 0));
-}
-
-/* Write a CBZ or CBNZ instruction into *BUF.
-
- CBZ rt, #offset
- CBNZ rt, #offset
-
- IS_CBNZ distinguishes between CBZ and CBNZ instructions.
- RN is the register to test.
- OFFSET is the immediate offset from the current PC. It is
- byte-addressed but should be 4 bytes aligned. It has a limited range of
- +/- 1MB (19 bits << 2). */
-
-static int
-emit_cb (uint32_t *buf, int is_cbnz, struct aarch64_register rt,
- int32_t offset)
-{
- uint32_t imm19 = ENCODE (offset >> 2, 19, 5);
- uint32_t sf = ENCODE (rt.is64, 1, 31);
-
- if (is_cbnz)
- return emit_insn (buf, CBNZ | sf | imm19 | ENCODE (rt.num, 5, 0));
- else
- return emit_insn (buf, CBZ | sf | imm19 | ENCODE (rt.num, 5, 0));
-}
-
-/* Write a TBZ or TBNZ instruction into *BUF.
-
- TBZ rt, #bit, #offset
- TBNZ rt, #bit, #offset
-
- IS_TBNZ distinguishes between TBZ and TBNZ instructions.
- RT is the register to test.
- BIT is the index of the bit to test in register RT.
- OFFSET is the immediate offset from the current PC. It is
- byte-addressed but should be 4 bytes aligned. It has a limited range of
- +/- 32KB (14 bits << 2). */
-
-static int
-emit_tb (uint32_t *buf, int is_tbnz, unsigned bit,
- struct aarch64_register rt, int32_t offset)
-{
- uint32_t imm14 = ENCODE (offset >> 2, 14, 5);
- uint32_t b40 = ENCODE (bit, 5, 19);
- uint32_t b5 = ENCODE (bit >> 5, 1, 31);
-
- if (is_tbnz)
- return emit_insn (buf, TBNZ | b5 | b40 | imm14 | ENCODE (rt.num, 5, 0));
- else
- return emit_insn (buf, TBZ | b5 | b40 | imm14 | ENCODE (rt.num, 5, 0));
-}
-
/* Write a BLR instruction into *BUF.
BLR rn
@@ -1100,70 +880,9 @@ emit_stp_q_offset (uint32_t *buf, unsigned rt, unsigned rt2,
uint32_t pre_index = ENCODE (1, 1, 24);
return emit_insn (buf, STP_SIMD_VFP | opc | pre_index
- | ENCODE (offset >> 4, 7, 15) | ENCODE (rt2, 5, 10)
- | ENCODE (rn.num, 5, 5) | ENCODE (rt, 5, 0));
-}
-
-/* Helper function emitting a load or store instruction. */
-
-static int
-emit_load_store (uint32_t *buf, uint32_t size, enum aarch64_opcodes opcode,
- struct aarch64_register rt, struct aarch64_register rn,
- struct aarch64_memory_operand operand)
-{
- uint32_t op;
-
- switch (operand.type)
- {
- case MEMORY_OPERAND_OFFSET:
- {
- op = ENCODE (1, 1, 24);
-
- return emit_insn (buf, opcode | ENCODE (size, 2, 30) | op
- | ENCODE (operand.index >> 3, 12, 10)
- | ENCODE (rn.num, 5, 5) | ENCODE (rt.num, 5, 0));
- }
- case MEMORY_OPERAND_POSTINDEX:
- {
- uint32_t post_index = ENCODE (1, 2, 10);
-
- op = ENCODE (0, 1, 24);
-
- return emit_insn (buf, opcode | ENCODE (size, 2, 30) | op
- | post_index | ENCODE (operand.index, 9, 12)
- | ENCODE (rn.num, 5, 5) | ENCODE (rt.num, 5, 0));
- }
- case MEMORY_OPERAND_PREINDEX:
- {
- uint32_t pre_index = ENCODE (3, 2, 10);
-
- op = ENCODE (0, 1, 24);
-
- return emit_insn (buf, opcode | ENCODE (size, 2, 30) | op
- | pre_index | ENCODE (operand.index, 9, 12)
- | ENCODE (rn.num, 5, 5) | ENCODE (rt.num, 5, 0));
- }
- default:
- return 0;
- }
-}
-
-/* Write a LDR instruction into *BUF.
-
- LDR rt, [rn, #offset]
- LDR rt, [rn, #index]!
- LDR rt, [rn], #index
-
- RT is the register to store.
- RN is the base address register.
- OFFSET is the immediate to add to the base address. It is limited to
- 0 .. 32760 range (12 bits << 3). */
-
-static int
-emit_ldr (uint32_t *buf, struct aarch64_register rt,
- struct aarch64_register rn, struct aarch64_memory_operand operand)
-{
- return emit_load_store (buf, rt.is64 ? 3 : 2, LDR, rt, rn, operand);
+ | ENCODE (offset >> 4, 7, 15)
+ | ENCODE (rt2, 5, 10)
+ | ENCODE (rn.num, 5, 5) | ENCODE (rt, 5, 0));
}
/* Write a LDRH instruction into *BUF.
@@ -1204,24 +923,7 @@ emit_ldrb (uint32_t *buf, struct aarch64_register rt,
return emit_load_store (buf, 0, LDR, rt, rn, operand);
}
-/* Write a LDRSW instruction into *BUF. The register size is 64-bit.
- LDRSW xt, [rn, #offset]
- LDRSW xt, [rn, #index]!
- LDRSW xt, [rn], #index
-
- RT is the register to store.
- RN is the base address register.
- OFFSET is the immediate to add to the base address. It is limited to
- 0 .. 16380 range (12 bits << 2). */
-
-static int
-emit_ldrsw (uint32_t *buf, struct aarch64_register rt,
- struct aarch64_register rn,
- struct aarch64_memory_operand operand)
-{
- return emit_load_store (buf, 3, LDRSW, rt, rn, operand);
-}
/* Write a STR instruction into *BUF.
@@ -1816,14 +1518,6 @@ emit_cset (uint32_t *buf, struct aarch64_register rd, unsigned cond)
return emit_csinc (buf, rd, xzr, xzr, cond ^ 0x1);
}
-/* Write a NOP instruction into *BUF. */
-
-static int
-emit_nop (uint32_t *buf)
-{
- return emit_insn (buf, NOP);
-}
-
/* Write LEN instructions from BUF into the inferior memory at *TO.
Note instructions are always little endian on AArch64, unlike data. */
@@ -1849,17 +1543,6 @@ append_insns (CORE_ADDR *to, size_t len, const uint32_t *buf)
*to += byte_len;
}
-/* Helper function. Return 1 if VAL can be encoded in BITS bits. */
-
-static int
-can_encode_int32 (int32_t val, unsigned bits)
-{
- /* This must be an arithemic shift. */
- int32_t rest = val >> bits;
-
- return rest == 0 || rest == -1;
-}
-
/* Sub-class of struct aarch64_insn_data, store information of
instruction relocation for fast tracepoint. Visitor can
relocate an instruction from BASE.INSN_ADDR to NEW_ADDR and save
--
1.9.1
* Re: [PATCH 06/11] Support displaced stepping in aarch64-linux
2015-10-07 9:27 ` [PATCH 06/11] Support displaced stepping in aarch64-linux Yao Qi
@ 2015-10-13 20:26 ` Sergio Durigan Junior
2015-10-14 8:37 ` Yao Qi
0 siblings, 1 reply; 16+ messages in thread
From: Sergio Durigan Junior @ 2015-10-13 20:26 UTC (permalink / raw)
To: Yao Qi; +Cc: gdb-patches
On Wednesday, October 07 2015, Yao Qi wrote:
> This patch is to support displaced stepping in aarch64-linux. A
> visitor is implemented for displaced stepping, and used to record
> information to fixup pc after displaced stepping if needed. Some
> emit_* functions are converted to macros, and moved to
> arch/aarch64-insn.{c,h} so that they can be shared.
Hi Yao,
This patch broke GDB when compiling with --enable-build-with-cxx:
<http://gdb-build.sergiodj.net/builders/Fedora-x86_64-cxx-build-m64/builds/1046/steps/compile%20gdb/logs/stdio>
You should also have received a message from the BuildBot, but I decided
to send this one just to be safe.
Thanks,
--
Sergio
GPG key ID: 237A 54B1 0287 28BF 00EF 31F4 D0EB 7628 65FC 5E36
Please send encrypted e-mail if possible
http://sergiodj.net/
* Re: [PATCH 06/11] Support displaced stepping in aarch64-linux
2015-10-13 20:26 ` Sergio Durigan Junior
@ 2015-10-14 8:37 ` Yao Qi
0 siblings, 0 replies; 16+ messages in thread
From: Yao Qi @ 2015-10-14 8:37 UTC (permalink / raw)
To: Sergio Durigan Junior; +Cc: Yao Qi, gdb-patches
Sergio Durigan Junior <sergiodj@redhat.com> writes:
> This patch broke GDB when compiling with --enable-build-with-cxx:
>
> <http://gdb-build.sergiodj.net/builders/Fedora-x86_64-cxx-build-m64/builds/1046/steps/compile%20gdb/logs/stdio>
>
> You should also have received a message from the BuildBot, but I decided
> to send this one just to be safe.
Hi Sergio,
I don't see such a message on my Linaro box. I only saw one message
before, and I've fixed that build failure:
"Your commit '[aarch64] use aarch64_decode_insn to decode instructions in GDB' broke GDB"
Anyway, the patch below fixes the build failure. I've pushed it in.
--
Yao (齐尧)
From 6448a3e4daecbdba25e5c76b0fbb0c21583a1347 Mon Sep 17 00:00:00 2001
From: Yao Qi <yao.qi@linaro.org>
Date: Wed, 14 Oct 2015 09:23:14 +0100
Subject: [PATCH] Define enum out of struct
This patch moves the enum definition out of the scope of struct
aarch64_memory_operand; otherwise it breaks the GDB build in C++ mode.
gdb:
2015-10-14 Yao Qi <yao.qi@linaro.org>
* arch/aarch64-insn.h (struct aarch64_memory_operand): Move enum
out of it.
(enum aarch64_memory_operand_type): New.
diff --git a/gdb/ChangeLog b/gdb/ChangeLog
index cabfe36..4b8ffb7 100644
--- a/gdb/ChangeLog
+++ b/gdb/ChangeLog
@@ -1,3 +1,9 @@
+2015-10-14 Yao Qi <yao.qi@linaro.org>
+
+ * arch/aarch64-insn.h (struct aarch64_memory_operand): Move enum
+ out of it.
+ (enum aarch64_memory_operand_type): New.
+
2015-10-13 David Edelsohn <dje.gcc@gmail.com>
* xcoffread.c (dwarf2_xcoff_names): Add .dwmac and .dwpbtyp.
diff --git a/gdb/arch/aarch64-insn.h b/gdb/arch/aarch64-insn.h
index d51cabc..cc7ec48 100644
--- a/gdb/arch/aarch64-insn.h
+++ b/gdb/arch/aarch64-insn.h
@@ -117,6 +117,13 @@ struct aarch64_register
int is64;
};
+enum aarch64_memory_operand_type
+{
+ MEMORY_OPERAND_OFFSET,
+ MEMORY_OPERAND_PREINDEX,
+ MEMORY_OPERAND_POSTINDEX,
+};
+
/* Representation of a memory operand, used for load and store
instructions.
@@ -129,12 +136,8 @@ struct aarch64_register
struct aarch64_memory_operand
{
/* Type of the operand. */
- enum
- {
- MEMORY_OPERAND_OFFSET,
- MEMORY_OPERAND_PREINDEX,
- MEMORY_OPERAND_POSTINDEX,
- } type;
+ enum aarch64_memory_operand_type type;
+
/* Index from the base register. */
int32_t index;
};
* [PATCH 11/11] Mention the change in NEWS
2015-10-07 9:26 [PATCH 00/11] Displaced stepping on AArch64 GNU/Linux Yao Qi
` (9 preceding siblings ...)
2015-10-07 9:27 ` [PATCH 06/11] Support displaced stepping in aarch64-linux Yao Qi
@ 2015-10-07 9:27 ` Yao Qi
2015-10-07 15:38 ` Eli Zaretskii
2015-10-12 10:35 ` [PATCH 00/11] Displaced stepping on AArch64 GNU/Linux Yao Qi
11 siblings, 1 reply; 16+ messages in thread
From: Yao Qi @ 2015-10-07 9:27 UTC (permalink / raw)
To: gdb-patches
gdb:
2015-10-07 Yao Qi <yao.qi@linaro.org>
* NEWS: Mention the change.
---
gdb/NEWS | 2 ++
1 file changed, 2 insertions(+)
diff --git a/gdb/NEWS b/gdb/NEWS
index 2e38d9a..b2b1e99 100644
--- a/gdb/NEWS
+++ b/gdb/NEWS
@@ -22,6 +22,8 @@
including JIT compiling fast tracepoint's conditional expression bytecode
into native code.
+* GDB now supports displaced stepping on AArch64 GNU/Linux.
+
* New commands
maint set target-non-stop (on|off|auto)
--
1.9.1
* Re: [PATCH 00/11] Displaced stepping on AArch64 GNU/Linux
2015-10-07 9:26 [PATCH 00/11] Displaced stepping on AArch64 GNU/Linux Yao Qi
` (10 preceding siblings ...)
2015-10-07 9:27 ` [PATCH 11/11] Mention the change in NEWS Yao Qi
@ 2015-10-12 10:35 ` Yao Qi
11 siblings, 0 replies; 16+ messages in thread
From: Yao Qi @ 2015-10-12 10:35 UTC (permalink / raw)
To: Yao Qi; +Cc: gdb-patches
Yao Qi <qiyaoltc@gmail.com> writes:
> This patch series adds displaced stepping on aarch64-linux. The
> series refactors and reuses some aarch64 fast tracepoint instruction
> relocation code in GDBserver, because both of fast tracepoint and
> displaced stepping need to handle instruction relocation.
>
> Patches #2 - #4 are about refactoring aarch64_relocate_instruction in
> GDBserver in order to share it between GDB and GDBserver. A visitor
> pattern is used, and aarch64_relocate_instruction decodes instructions
> and visits different instructions by different methods of visitor.
> See more details in patch #4. Patch #5 moves all visitor pattern stuff
> and aarch64_relocate_instruction to arch/aarch64-insn.c, and patch #6
> adds the displaced stepping support.
>
> Patch #8 adds a new test case gdb.arch/disp-step-insn-reloc.exp which
> uses insn-reloc.c too for displaced stepping. Patch #9 and #10 add
> "aarch64_" prefix to function names, as a clean up of this series.
>
> The whole series is regression tested on aarch64-linux, both native and
> gdbserver.
I pushed them in.
--
Yao (齐尧)