Mirror of the gdb-patches mailing list
From: Tom Tromey <tromey@redhat.com>
To: Pedro Alves <pedro@codesourcery.com>
Cc: gdb-patches@sourceware.org,
	"Ulrich Weigand" <uweigand@de.ibm.com>,
	        Jan Kratochvil <jkratoch@redhat.com>
Subject: Re: [rfc] Infrastructure to disable breakpoints during inferior startup
Date: Fri, 24 Jul 2009 22:49:00 -0000	[thread overview]
Message-ID: <m3my6tohy0.fsf@fleche.redhat.com> (raw)
In-Reply-To: <200907231751.01413.pedro@codesourcery.com> (Pedro Alves's message of "Thu, 23 Jul 2009 17:51:00 +0100")

>>>>> "Pedro" == Pedro Alves <pedro@codesourcery.com> writes:

Pedro> BTW, I haven't had much of a chance to touch the multi-exec
Pedro> patches since I posted them last.  I was mostly waiting to see if
Pedro> people had comments on the general design, and on the user
Pedro> interface before proceeding further with it.  If there's anything
Pedro> I should do to make that (testing, review, comments) easier on
Pedro> others, please let me know.

I've been meaning to try this for a while.

Today I finally got around to applying it.  The patch didn't apply
cleanly (nothing serious), and there were also a couple of compile
problems once I did apply it.

I've appended my cleanup patch.  One of the rs6000-tdep.c hunks is just
a temporary workaround for unrelated build breakage.

I put all this on a local git branch.  I can push it to the archer
repository if you, or anybody, wants to see it there.

I still haven't actually tried it, but I hope to do so soon.

Tom

diff --git a/gdb/mips-tdep.c b/gdb/mips-tdep.c
index 51e8bbd..9cf5057 100644
--- a/gdb/mips-tdep.c
+++ b/gdb/mips-tdep.c
@@ -2490,7 +2490,7 @@ mips_software_single_step (struct frame_info *frame)
   CORE_ADDR pc, next_pc;
 
   pc = get_frame_pc (frame);
-  if (deal_with_atomic_sequence (gdbarch, pc))
+  if (deal_with_atomic_sequence (gdbarch, aspace, pc))
     return 1;
 
   next_pc = mips_next_pc (frame, pc);
diff --git a/gdb/monitor.c b/gdb/monitor.c
index 4cdfaae..4f258a9 100644
--- a/gdb/monitor.c
+++ b/gdb/monitor.c
@@ -705,6 +705,7 @@ monitor_open (char *args, struct monitor_ops *mon_ops, int from_tty)
 {
   char *name;
   char **p;
+  struct inferior *inf;
 
   if (mon_ops->magic != MONITOR_OPS_MAGIC)
     error (_("Magic number of monitor_ops struct wrong."));
diff --git a/gdb/rs6000-tdep.c b/gdb/rs6000-tdep.c
index bc787f3..1d37eda 100644
--- a/gdb/rs6000-tdep.c
+++ b/gdb/rs6000-tdep.c
@@ -1075,7 +1075,7 @@ int
 ppc_deal_with_atomic_sequence (struct frame_info *frame)
 {
   struct gdbarch *gdbarch = get_frame_arch (frame);
-  struct gdbarch *aspace = get_frame_address_space (frame);
+  struct address_space *aspace = get_frame_address_space (frame);
   enum bfd_endian byte_order = gdbarch_byte_order (gdbarch);
   CORE_ADDR pc = get_frame_pc (frame);
   CORE_ADDR breaks[2] = {-1, -1};
@@ -2950,8 +2950,8 @@ static struct variant variants[] =
    bfd_mach_rs6k, &tdesc_rs6000},
   {"403", "IBM PowerPC 403", bfd_arch_powerpc,
    bfd_mach_ppc_403, &tdesc_powerpc_403},
-  {"405", "IBM PowerPC 405", bfd_arch_powerpc,
-   bfd_mach_ppc_405, &tdesc_powerpc_405},
+/*   {"405", "IBM PowerPC 405", bfd_arch_powerpc, */
+/*    bfd_mach_ppc_405, &tdesc_powerpc_405}, */
   {"601", "Motorola PowerPC 601", bfd_arch_powerpc,
    bfd_mach_ppc_601, &tdesc_powerpc_601},
   {"602", "Motorola PowerPC 602", bfd_arch_powerpc,
diff --git a/gdb/spu-tdep.c b/gdb/spu-tdep.c
index 4725757..0733824 100644
--- a/gdb/spu-tdep.c
+++ b/gdb/spu-tdep.c
@@ -1338,7 +1338,7 @@ spu_software_single_step (struct frame_info *frame)
 
       target = target & (SPU_LS_SIZE - 1);
       if (target != next_pc)
-	insert_single_step_breakpoint (gdbarch, target);
+	insert_single_step_breakpoint (gdbarch, aspace, target);
     }
 
   return 1;



Thread overview: 22+ messages
2009-07-22 17:14 Ulrich Weigand
2009-07-22 20:32 ` Tom Tromey
2009-07-23 15:49   ` Ulrich Weigand
2009-07-23 16:51     ` Tom Tromey
2009-07-23 18:06       ` Ulrich Weigand
2009-07-23 18:57         ` Pedro Alves
2009-07-24 22:49           ` Tom Tromey [this message]
2009-07-24 23:32             ` Multi-exec patches (Was: [rfc] Infrastructure to disable breakpoints during inferior startup) Tom Tromey
2009-07-25 16:05               ` Pedro Alves
2009-07-25 19:31                 ` Pedro Alves
2009-07-27 17:39                 ` Multi-exec patches Tom Tromey
2009-07-27 18:45                   ` Tom Tromey
2009-07-28 14:28                   ` Pedro Alves
2009-07-29 22:03                     ` Tom Tromey
2009-07-31 15:45           ` [rfc] Infrastructure to disable breakpoints during inferior startup Ulrich Weigand
2009-08-03  3:07             ` Thiago Jung Bauermann
2009-08-03 18:13               ` Eli Zaretskii
2009-08-05 18:14                 ` Ulrich Weigand
2009-08-05 18:58                   ` Eli Zaretskii
2009-08-06 17:46                   ` Tom Tromey
2009-08-06 18:42                     ` Eli Zaretskii
2009-08-06 19:12                       ` Michael Snyder
