Mirror of the gdb-patches mailing list
* [RFC][PATCH] Extend qAttached for shared program spaces
@ 2022-08-03 11:26 Keno Fischer via Gdb-patches
  0 siblings, 0 replies; only message in thread
From: Keno Fischer via Gdb-patches @ 2022-08-03 11:26 UTC (permalink / raw)
  To: gdb-patches; +Cc: khuey, robert, tim, gabriel.baraldi

Currently, when connecting to an existing extended-remote target
and the target reports a stop in a previously-unseen process, GDB
will create a new inferior and assign it a fresh pspace/aspace.
However, there is no particular reason why this is necessarily
the correct answer. The newly connected process might easily be
in a vfork, or worse, be a long-running process that happened to
be created via clone with CLONE_VM (some GC and instrumentation
tooling works this way). Without properly accounting for the
shared pspace/aspace, GDB will e.g. ignore watchpoint events in
the shared address space, degrading the user experience and
performing redundant work.

This patch proposes an extension to the remote protocol that lets
a gdb server inform the gdb client which processes a newly
attached process shares a pspace with. This is accomplished
through a new `v thread-id-list` reply option for `qAttached`.
If support for this option has been negotiated, the server may
use it to indicate that the attached inferior shares a pspace
with the processes of the listed threads, which the client can
then use to find an existing pspace (if any).

Signed-off-by: Keno Fischer <keno@juliacomputing.com>
---

This is a quick-and-dirty implementation of the above proposal,
mostly as a starting point for discussion. I would appreciate
feedback on the remote protocol semantics, as well as on any
implementation issues I need to consider; I'm not particularly
well versed in this code. That said, I did implement the other
side of this protocol in a branch of
https://github.com/rr-debugger/rr and verified that watchpoints
etc. were working well between recorded processes that shared an
address space. Comments appreciated!

 gdb/doc/gdb.texinfo |  3 +++
 gdb/remote.c        | 59 ++++++++++++++++++++++++++++++++++++++++-----
 2 files changed, 56 insertions(+), 6 deletions(-)

diff --git a/gdb/doc/gdb.texinfo b/gdb/doc/gdb.texinfo
index 382df00ee7d..05069eb6abf 100644
--- a/gdb/doc/gdb.texinfo
+++ b/gdb/doc/gdb.texinfo
@@ -43690,6 +43690,9 @@ the @code{quit} command.
 
 Reply:
 @table @samp
+@item v @var{thread-id-list}
+The remote server attached to an existing process that shares its address space
+with the threads in @var{thread-id-list} (e.g.@: through a previous vfork).
 @item 1
 The remote server attached to an existing process.
 @item 0
diff --git a/gdb/remote.c b/gdb/remote.c
index 49d26c2039e..158883ebc91 100644
--- a/gdb/remote.c
+++ b/gdb/remote.c
@@ -784,7 +784,7 @@ class remote_target : public process_stratum_target
   void remote_interrupt_ns ();
 
   char *remote_get_noisy_reply ();
-  int remote_query_attached (int pid);
+  int remote_query_attached (int pid, std::vector<ptid_t> &threads_sharing_as);
   inferior *remote_add_inferior (bool fake_pid_p, int pid, int attached,
 				 int try_open_exec);
 
@@ -2220,6 +2220,9 @@ enum {
      packets and the tag violation stop replies.  */
   PACKET_memory_tagging_feature,
 
+  /* Support for `v` reply in qAttached.  */
+  PACKET_qattached_v,
+
   PACKET_MAX
 };
 
@@ -2442,7 +2445,7 @@ static const ptid_t any_thread_ptid (42000, 0, 1);
    detach instead of killing it when bailing out).  */
 
 int
-remote_target::remote_query_attached (int pid)
+remote_target::remote_query_attached (int pid, std::vector<ptid_t> &threads_sharing_pspace)
 {
   struct remote_state *rs = get_remote_state ();
   size_t size = get_remote_packet_size ();
@@ -2457,11 +2460,22 @@ remote_target::remote_query_attached (int pid)
 
   putpkt (rs->buf);
   getpkt (&rs->buf, 0);
+  const char *bufp = rs->buf.data ();
 
   switch (packet_ok (rs->buf,
 		     &remote_protocol_packets[PACKET_qAttached]))
     {
     case PACKET_OK:
+      if (bufp[0] == 'v' && bufp[1] == ' ') {
+	bufp += 2;	/* Skip "v ".  */
+	do
+	{
+	  ptid_t ptid = read_ptid (bufp, &bufp);
+	  threads_sharing_pspace.emplace_back (ptid);
+	}
+	while (*bufp++ == ',');	/* Comma-separated list.  */
+	return 1;
+      }
       if (strcmp (rs->buf.data (), "1") == 0)
 	return 1;
       break;
@@ -2491,12 +2505,13 @@ remote_target::remote_add_inferior (bool fake_pid_p, int pid, int attached,
 				    int try_open_exec)
 {
   struct inferior *inf;
+  std::vector<ptid_t> threads_sharing_pspace;
 
   /* Check whether this process we're learning about is to be
      considered attached, or if is to be considered to have been
      spawned by the stub.  */
   if (attached == -1)
-    attached = remote_query_attached (pid);
+    attached = remote_query_attached (pid, threads_sharing_pspace);
 
   if (gdbarch_has_global_solist (target_gdbarch ()))
     {
@@ -2516,6 +2531,8 @@ remote_target::remote_add_inferior (bool fake_pid_p, int pid, int attached,
 	 between program/address spaces.  We simply bind the inferior
 	 to the program space's address space.  */
       inf = current_inferior ();
+      address_space *aspace = nullptr;
+      program_space *pspace = nullptr;
 
       /* However, if the current inferior is already bound to a
 	 process, find some other empty inferior.  */
@@ -2523,17 +2540,39 @@ remote_target::remote_add_inferior (bool fake_pid_p, int pid, int attached,
 	{
 	  inf = nullptr;
 	  for (inferior *it : all_inferiors ())
-	    if (it->pid == 0)
+	    {
+	      if (!inf && it->pid == 0)
 	      {
 		inf = it;
-		break;
+		if (pspace || threads_sharing_pspace.empty ())
+		  break;
 	      }
+	      if (!pspace)
+		for (size_t i = 0; i < threads_sharing_pspace.size (); i++)
+		  if (threads_sharing_pspace[i].pid () == it->pid)
+		    {
+			pspace = it->pspace;
+			aspace = it->aspace;
+			break;
+		    }
+	      if (inf && pspace)
+		break;
+	    }
 	}
       if (inf == nullptr)
 	{
 	  /* Since all inferiors were already bound to a process, add
 	     a new inferior.  */
-	  inf = add_inferior_with_spaces ();
+	  if (!pspace)
+	    inf = add_inferior_with_spaces ();
+	  else
+	    {
+	      inf = add_inferior (0);
+	      inf->pspace = pspace;
+	      inf->aspace = aspace;
+	      gdbarch_info info;
+	      inf->gdbarch = gdbarch_find_by_info (info);
+	    }
 	}
       switch_to_inferior_no_thread (inf);
       inf->push_target (this);
@@ -5427,6 +5466,7 @@ static const struct protocol_feature remote_protocol_features[] = {
   { "no-resumed", PACKET_DISABLE, remote_supported_packet, PACKET_no_resumed },
   { "memory-tagging", PACKET_DISABLE, remote_supported_packet,
     PACKET_memory_tagging_feature },
+  { "qAttachedV", PACKET_DISABLE, remote_supported_packet, PACKET_qattached_v }
 };
 
 static char *remote_support_xml;
@@ -5525,6 +5565,10 @@ remote_target::remote_query_supported ()
 	  != AUTO_BOOLEAN_FALSE)
 	remote_query_supported_append (&q, "memory-tagging+");
 
+      if (packet_set_cmd_state (PACKET_qattached_v)
+	  != AUTO_BOOLEAN_FALSE)
+	remote_query_supported_append (&q, "qAttachedV+");
+
       /* Keep this one last to work around a gdbserver <= 7.10 bug in
 	 the qSupported:xmlRegisters=i386 handling.  */
       if (remote_support_xml != NULL
@@ -15347,6 +15391,9 @@ Show the maximum size of the address (in bits) in a memory packet."), NULL,
   add_packet_config_cmd (&remote_protocol_packets[PACKET_memory_tagging_feature],
 			 "memory-tagging-feature", "memory-tagging-feature", 0);
 
+  add_packet_config_cmd (&remote_protocol_packets[PACKET_qattached_v],
			 "qattached-v", "qattached-v", 0);
+
   /* Assert that we've registered "set remote foo-packet" commands
      for all packet configs.  */
   {
-- 
2.25.1

