From: Yao Qi <qiyaoltc@gmail.com>
To: Kees Cook <keescook@chromium.org>
Cc: gdb-patches@sourceware.org, brian.murray@canonical.com,
matthias.klose@canonical.com
Subject: Re: [PATCH] Fix PTRACE_GETREGSET failure for compat inferiors on arm64
Date: Fri, 02 Dec 2016 22:49:00 -0000 [thread overview]
Message-ID: <20161202224952.panaxwmmrx4emord@localhost> (raw)
In-Reply-To: <20161202214613.GA54717@beast>
On 16-12-02 13:46:13, Kees Cook wrote:
> When running a 32-bit ARM inferior on a 64-bit ARM host, only the hardware
> floating-point registers (NT_ARM_VFP) are available. If the inferior
> uses hard-float, do not request soft-float registers (NT_PRFPREG) and
> run the risk of failing with EINVAL. This is most noticeably exposed
"soft-float" is not accurate. FPA is a coprocessor; both VFP and FPA
are implemented in a combination of software and hardware. I'd like to
rewrite the commit log like this:
"When running a 32-bit ARM inferior with a 32-bit ARM GDB on a 64-bit
AArch64 host, only the VFP registers (NT_ARM_VFP) are available. The
FPA registers (NT_PRFPREG) are not available."
> when running "generate-core-file":
>
> (gdb) generate-core-file myprog.core
> Unable to fetch the floating point registers.: Invalid argument.
>
> ptrace(PTRACE_GETREGSET, 27642, NT_FPREGSET, 0xffcc67f0) = -1 EINVAL (Invalid argument)
>
> gdb/ChangeLog:
>
> 2016-12-02 Kees Cook <keescook@google.com>
You don't have an FSF copyright assignment.
>
> * gdb/arm-linux-nat.c: Skip soft-float registers when using hard-float.
>
> ---
> gdb/arm-linux-nat.c | 14 +++++++++-----
> 1 file changed, 9 insertions(+), 5 deletions(-)
>
> diff --git a/gdb/arm-linux-nat.c b/gdb/arm-linux-nat.c
> index d11bdc6..2126cd7 100644
> --- a/gdb/arm-linux-nat.c
> +++ b/gdb/arm-linux-nat.c
> @@ -384,17 +384,19 @@ arm_linux_fetch_inferior_registers (struct target_ops *ops,
> if (-1 == regno)
> {
> fetch_regs (regcache);
> - fetch_fpregs (regcache);
We should only call fetch_fpregs if tdep->have_fpa_registers is true.
> if (tdep->have_wmmx_registers)
> fetch_wmmx_regs (regcache);
> if (tdep->vfp_register_count > 0)
> fetch_vfp_regs (regcache);
> + else
> + fetch_fpregs (regcache);
> }
> - else
> + else
> {
> if (regno < ARM_F0_REGNUM || regno == ARM_PS_REGNUM)
> fetch_regs (regcache);
> - else if (regno >= ARM_F0_REGNUM && regno <= ARM_FPS_REGNUM)
> + else if (tdep->vfp_register_count == 0
> + && regno >= ARM_F0_REGNUM && regno <= ARM_FPS_REGNUM)
> fetch_fpregs (regcache);
Do we really need this change? If FPA registers are not available,
REGNO can't fall in the range [ARM_F0_REGNUM, ARM_FPS_REGNUM].
Both comments above also apply to the store-registers path.
--
Yao