Mirror of the gdb-patches mailing list
* [rfc][3/3] Remote core file generation: memory map
@ 2011-10-21 19:54 Ulrich Weigand
  2011-11-01 18:41 ` Jan Kratochvil
From: Ulrich Weigand @ 2011-10-21 19:54 UTC
  To: gdb-patches

Hello,

the final remaining piece needed to support remote core file generation
is the target_find_memory_regions hook.  This is properly a target
hook, but unfortunately it is not implemented for the remote target.

The patch below adds support by adding a new TARGET_OBJECT_PROCESS_MAP
xfer type implemented via a qXfer:process-map:read packet.
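
For illustration, with the format implemented below, the data returned
for a qXfer:process-map:read request might look like this (addresses and
sizes are made up; note that the generator in linux-osdata.c emits
start/length in decimal):

  <process-map>
  <vma start="4194304" length="45056" may_read="yes" may_write="no" may_exec="yes"/>
  <vma start="6291456" length="8192" may_read="yes" may_write="yes" may_exec="no"/>
  </process-map>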

Note that there already is a qXfer:memory-map:read packet, but this
is not usable as-is to implement target_find_memory_regions, since
it is really intended for a *system* memory map for some naked
embedded targets instead of a per-process virtual address space map.

For example:

- the memory map is read into a single global mem_region list; it is not
  switched for multiple inferiors

- native or gdbserver Linux targets do not have a memory map today,
  and just enabling it changes memory access behaviour in unexpected
  ways, e.g. accesses outside of memory regions in /proc/PID/maps are
  now no longer possible; also caching behaviour is different

- the memory attribute format is insufficient to express properties
  of a virtual memory mapping (e.g. permissions; mapped filename ...)
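
For comparison, the existing memory-map format (if I read its DTD
correctly) only classifies regions as ram, rom or flash and has no place
for permissions or a backing file name, e.g. (addresses made up):

  <memory-map>
    <memory type="ram" start="0x10000000" length="0x10000"/>
    <memory type="flash" start="0x0" length="0x4000">
      <property name="blocksize">0x400</property>
    </memory>
  </memory-map>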


I guess longer term it might be nicer to always have a memory map,
and also use it for native targets, and then use the same map also
for core file generation ...

I'd appreciate suggestions how to move forward on this; is having a
new qXfer type just for core file generation OK, or should we rather
attempt to move towards an always-active memory map -- if the latter,
how can we get there?

Thanks,
Ulrich


ChangeLog:

	gdb/
	* target.h (TARGET_OBJECT_PROCESS_MAP): New enum.
	* remote.c (PACKET_qXfer_process_map): New packet type.
	(remote_protocol_features): Add support.
	(remote_xfer_partial): Handle TARGET_OBJECT_PROCESS_MAP.

	(struct find_memory_regions_args): New data structure.
	(process_map_start_vma): New function.
	(vma_attributes, process_map_children, process_map_elements):
	New global variables.
	(remote_find_memory_regions): New function.
	(init_remote_ops): Install it.

	* common/buffer.c (buffer_xml_printf): Support %l and %ll
	length modifiers.
	* common/linux-osdata.c: Include "sys/param.h".
	(read_mapping): New function.
	(linux_common_xfer_process_map): Likewise.
	* common/linux-osdata.h (linux_common_xfer_process_map): Add prototype.

	gdb/gdbserver/
	* target.h (struct target_ops): New callback "qxfer_process_map".
	* linux-low.c (linux_qxfer_process_map): New function.
	(linux_target_ops): Install it.
	* server.c (handle_qxfer_process_map): New function.
	(qxfer_packets): Use it to implement "process-map".


Index: gdb/remote.c
===================================================================
RCS file: /cvs/src/src/gdb/remote.c,v
retrieving revision 1.465
diff -u -p -r1.465 remote.c
--- gdb/remote.c	13 Oct 2011 13:15:16 -0000	1.465
+++ gdb/remote.c	21 Oct 2011 16:24:19 -0000
@@ -1237,6 +1237,7 @@ enum {
   PACKET_qXfer_features,
   PACKET_qXfer_libraries,
   PACKET_qXfer_memory_map,
+  PACKET_qXfer_process_map,
   PACKET_qXfer_spu_read,
   PACKET_qXfer_spu_write,
   PACKET_qXfer_osdata,
@@ -2800,6 +2801,102 @@ remote_threads_extra_info (struct thread
       }
   return NULL;
 }
+
+
+#if defined(HAVE_LIBEXPAT)
+
+struct find_memory_regions_args
+{
+  find_memory_region_ftype func;
+  void *obfd;
+};
+
+static void
+process_map_start_vma (struct gdb_xml_parser *parser,
+		       const struct gdb_xml_element *element,
+		       void *user_data, VEC(gdb_xml_value_s) *attributes)
+{
+  struct find_memory_regions_args *data = user_data;
+  struct gdb_xml_value *attr;
+
+  CORE_ADDR start;
+  unsigned long length;
+  int may_read, may_write, may_exec;
+
+  attr = xml_find_attribute (attributes, "start");
+  start = *(ULONGEST *) attr->value;
+
+  attr = xml_find_attribute (attributes, "length");
+  length = *(ULONGEST *) attr->value;
+
+  attr = xml_find_attribute (attributes, "may_read");
+  may_read = attr && *(ULONGEST *) attr->value;
+
+  attr = xml_find_attribute (attributes, "may_write");
+  may_write = attr && *(ULONGEST *) attr->value;
+
+  attr = xml_find_attribute (attributes, "may_exec");
+  may_exec = attr && *(ULONGEST *) attr->value;
+
+  data->func (start, length, may_read, may_write, may_exec, data->obfd);
+}
+
+const struct gdb_xml_attribute vma_attributes[] = {
+  { "start", GDB_XML_AF_NONE, gdb_xml_parse_attr_ulongest, NULL },
+  { "length", GDB_XML_AF_NONE, gdb_xml_parse_attr_ulongest, NULL },
+  { "may_read", GDB_XML_AF_OPTIONAL,
+    gdb_xml_parse_attr_enum, gdb_xml_enums_boolean },
+  { "may_write", GDB_XML_AF_OPTIONAL,
+    gdb_xml_parse_attr_enum, gdb_xml_enums_boolean },
+  { "may_exec", GDB_XML_AF_OPTIONAL,
+    gdb_xml_parse_attr_enum, gdb_xml_enums_boolean },
+  { NULL, GDB_XML_AF_NONE, NULL, NULL }
+};
+
+const struct gdb_xml_element process_map_children[] = {
+  { "vma", vma_attributes, NULL, GDB_XML_EF_REPEATABLE,
+    process_map_start_vma, NULL },
+  { NULL, NULL, NULL, GDB_XML_EF_NONE, NULL, NULL }
+};
+
+const struct gdb_xml_element process_map_elements[] = {
+  { "process-map", NULL, process_map_children, GDB_XML_EF_NONE,
+    NULL, NULL },
+  { NULL, NULL, NULL, GDB_XML_EF_NONE, NULL, NULL }
+};
+
+#endif
+
+static int
+remote_find_memory_regions (find_memory_region_ftype func, void *obfd)
+{
+#if defined(HAVE_LIBEXPAT)
+  if (remote_protocol_packets[PACKET_qXfer_process_map].support
+      == PACKET_ENABLE)
+    {
+      int ret = -1;
+      char *xml = target_read_stralloc (&current_target,
+					TARGET_OBJECT_PROCESS_MAP, NULL);
+
+      struct cleanup *back_to = make_cleanup (xfree, xml);
+
+      if (xml && *xml)
+	{
+	  struct find_memory_regions_args args;
+	  args.func = func;
+	  args.obfd = obfd;
+
+	  ret = gdb_xml_parse_quick (_("process-map"), NULL,
+				     process_map_elements, xml, &args);
+	}
+
+      do_cleanups (back_to);
+      return ret;
+    }
+#endif
+
+  return -1;
+}
 \f
 
 static int
@@ -3722,6 +3819,8 @@ static struct protocol_feature remote_pr
     PACKET_qXfer_libraries },
   { "qXfer:memory-map:read", PACKET_DISABLE, remote_supported_packet,
     PACKET_qXfer_memory_map },
+  { "qXfer:process-map:read", PACKET_DISABLE, remote_supported_packet,
+    PACKET_qXfer_process_map },
   { "qXfer:spu:read", PACKET_DISABLE, remote_supported_packet,
     PACKET_qXfer_spu_read },
   { "qXfer:spu:write", PACKET_DISABLE, remote_supported_packet,
@@ -8321,6 +8420,11 @@ remote_xfer_partial (struct target_ops *
       return remote_read_qxfer (ops, "memory-map", annex, readbuf, offset, len,
 				&remote_protocol_packets[PACKET_qXfer_memory_map]);
 
+    case TARGET_OBJECT_PROCESS_MAP:
+      gdb_assert (annex == NULL);
+      return remote_read_qxfer (ops, "process-map", annex, readbuf, offset, len,
+				&remote_protocol_packets[PACKET_qXfer_process_map]);
+
     case TARGET_OBJECT_OSDATA:
       /* Should only get here if we're connected.  */
       gdb_assert (remote_desc);
@@ -10444,6 +10548,7 @@ Specify the serial device it is connecte
   remote_ops.to_has_thread_control = tc_schedlock;    /* can lock scheduler */
   remote_ops.to_can_execute_reverse = remote_can_execute_reverse;
   remote_ops.to_magic = OPS_MAGIC;
+  remote_ops.to_find_memory_regions = remote_find_memory_regions;
   remote_ops.to_memory_map = remote_memory_map;
   remote_ops.to_flash_erase = remote_flash_erase;
   remote_ops.to_flash_done = remote_flash_done;
Index: gdb/target.h
===================================================================
RCS file: /cvs/src/src/gdb/target.h,v
retrieving revision 1.213
diff -u -p -r1.213 target.h
--- gdb/target.h	7 Oct 2011 12:06:46 -0000	1.213
+++ gdb/target.h	21 Oct 2011 16:24:20 -0000
@@ -242,6 +242,8 @@ enum target_object
   TARGET_OBJECT_WCOOKIE,
   /* Target memory map in XML format.  */
   TARGET_OBJECT_MEMORY_MAP,
+  /* Target process memory map in XML format.  */
+  TARGET_OBJECT_PROCESS_MAP,
   /* Flash memory.  This object can be used to write contents to
      a previously erased flash memory.  Using it without erasing
      flash can have unexpected results.  Addresses are physical
Index: gdb/common/buffer.c
===================================================================
RCS file: /cvs/src/src/gdb/common/buffer.c,v
retrieving revision 1.1
diff -u -p -r1.1 buffer.c
--- gdb/common/buffer.c	21 Jul 2011 23:46:09 -0000	1.1
+++ gdb/common/buffer.c	21 Oct 2011 16:24:20 -0000
@@ -89,7 +89,8 @@ buffer_xml_printf (struct buffer *buffer
   va_list ap;
   const char *f;
   const char *prev;
-  int percent = 0;
+  const char *percent = NULL;
+  int length = 0;
 
   va_start (ap, format);
 
@@ -108,17 +109,40 @@ buffer_xml_printf (struct buffer *buffer
 	      str = va_arg (ap, char *);
 	      break;
 	    case 'd':
-	      sprintf (str, "%d", va_arg (ap, int));
+	      if (length == 2)
+		sprintf (str, "%lld", va_arg (ap, long long));
+	      else if (length == 1)
+		sprintf (str, "%ld", va_arg (ap, long));
+	      else
+		sprintf (str, "%d", va_arg (ap, int));
 	      break;
 	    case 'u':
-	      sprintf (str, "%u", va_arg (ap, unsigned int));
+	      if (length == 2)
+		sprintf (str, "%llu", va_arg (ap, unsigned long long));
+	      else if (length == 1)
+		sprintf (str, "%lu", va_arg (ap, unsigned long));
+	      else
+		sprintf (str, "%u", va_arg (ap, unsigned int));
 	      break;
 	    case 'x':
-	      sprintf (str, "%x", va_arg (ap, unsigned int));
+	      if (length == 2)
+		sprintf (str, "%llx", va_arg (ap, unsigned long long));
+	      else if (length == 1)
+		sprintf (str, "%lx", va_arg (ap, unsigned long));
+	      else
+		sprintf (str, "%x", va_arg (ap, unsigned int));
 	      break;
 	    case 'o':
-	      sprintf (str, "%o", va_arg (ap, unsigned int));
-	      break;
+	      if (length == 2)
+		sprintf (str, "%llo", va_arg (ap, unsigned long long));
+	      else if (length == 1)
+		sprintf (str, "%lo", va_arg (ap, unsigned long));
+	      else
+		sprintf (str, "%o", va_arg (ap, unsigned int));
+	      break;
+	    case 'l':
+	      length++;
+	      continue;
 	    default:
 	      str = 0;
 	      break;
@@ -126,16 +150,17 @@ buffer_xml_printf (struct buffer *buffer
 	  
 	  if (str)
 	    {
-	      buffer_grow (buffer, prev, f - prev - 1);
+	      buffer_grow (buffer, prev, percent - prev);
 	      p = xml_escape_text (str);
 	      buffer_grow_str (buffer, p);
 	      xfree (p);
 	      prev = f + 1;
 	    }
 	  percent = 0;
+	  length = 0;
 	}
       else if (*f == '%')
-	percent = 1;
+	percent = f;
     }
 
   buffer_grow_str (buffer, prev);
Index: gdb/common/linux-osdata.c
===================================================================
RCS file: /cvs/src/src/gdb/common/linux-osdata.c,v
retrieving revision 1.2
diff -u -p -r1.2 linux-osdata.c
--- gdb/common/linux-osdata.c	26 Aug 2011 18:58:04 -0000	1.2
+++ gdb/common/linux-osdata.c	21 Oct 2011 16:24:21 -0000
@@ -27,6 +27,7 @@
 
 #include <sys/types.h>
 #include <sys/stat.h>
+#include <sys/param.h>
 #include <dirent.h>
 #include <ctype.h>
 #include <stdlib.h>
@@ -586,3 +587,109 @@ linux_common_xfer_osdata (const char *an
     }
 }
 
+static int
+read_mapping (FILE *mapfile,
+	      long long *addr,
+	      long long *endaddr,
+	      char *permissions,
+	      long long *offset,
+	      char *device, long long *inode, char *filename)
+{
+  int ret = fscanf (mapfile, "%llx-%llx %s %llx %s %llx",
+                    addr, endaddr, permissions, offset, device, inode);
+
+  filename[0] = '\0';
+  if (ret > 0 && ret != EOF)
+    {
+      /* Eat everything up to EOL for the filename.  This will prevent
+         weird filenames (such as one with embedded whitespace) from
+         confusing this code.  It also makes this code more robust in
+         respect to annotations the kernel may add after the filename.
+
+         Note the filename is used for informational purposes
+         only.  */
+      ret += fscanf (mapfile, "%[^\n]\n", filename);
+    }
+
+  return (ret != 0 && ret != EOF);
+}
+
+LONGEST
+linux_common_xfer_process_map (int pid, gdb_byte *readbuf,
+			       ULONGEST offset, LONGEST len)
+{
+  /* We make the process map snapshot when the object starts to be read.  */
+  static int cached_pid;
+  static const char *buf;
+  static LONGEST len_avail;
+  static struct buffer buffer;
+
+  if (cached_pid != pid)
+    {
+      buffer_free (&buffer);
+      len_avail = 0;
+      cached_pid = pid;
+    }
+
+  if (offset == 0)
+    {
+      char *maps_path;
+      FILE *maps_file;
+
+      if (len_avail != 0)
+	buffer_free (&buffer);
+      len_avail = 0;
+      buf = NULL;
+      buffer_init (&buffer);
+      buffer_grow_str (&buffer, "<process-map>\n");
+
+      maps_path = xstrprintf ("/proc/%d/maps", cached_pid); 
+      maps_file = fopen (maps_path, "r");
+      if (maps_file)
+	{
+	  long long addr, endaddr, offset, inode;
+	  char permissions[8], device[8], filename[MAXPATHLEN];
+
+	  while (read_mapping (maps_file, &addr, &endaddr, &permissions[0],
+			       &offset, &device[0], &inode, &filename[0]))
+	    {
+	      int may_read = (strchr (permissions, 'r') != 0);
+	      int may_write = (strchr (permissions, 'w') != 0);
+	      int may_exec = (strchr (permissions, 'x') != 0);
+
+	      buffer_xml_printf (&buffer,
+				 "<vma start=\"%lld\" length=\"%lld\" "
+				 "may_read=\"%s\" may_write=\"%s\" "
+				 "may_exec=\"%s\"/>\n",
+				 addr, endaddr - addr,
+				 may_read? "yes" : "no",
+				 may_write? "yes" : "no",
+				 may_exec? "yes" : "no");
+	    }
+
+	  fclose (maps_file);
+	}
+
+      xfree (maps_path);
+
+      buffer_grow_str0 (&buffer, "</process-map>\n");
+      buf = buffer_finish (&buffer);
+      len_avail = strlen (buf);
+    }
+
+  if (offset >= len_avail)
+    {
+      /* Done.  Get rid of the buffer.  */
+      buffer_free (&buffer);
+      buf = NULL;
+      len_avail = 0;
+      return 0;
+    }
+
+  if (len > len_avail - offset)
+    len = len_avail - offset;
+  memcpy (readbuf, buf + offset, len);
+
+  return len;
+}
+
Index: gdb/common/linux-osdata.h
===================================================================
RCS file: /cvs/src/src/gdb/common/linux-osdata.h,v
retrieving revision 1.1
diff -u -p -r1.1 linux-osdata.h
--- gdb/common/linux-osdata.h	21 Jul 2011 23:46:09 -0000	1.1
+++ gdb/common/linux-osdata.h	21 Oct 2011 16:24:21 -0000
@@ -26,4 +26,7 @@ extern int linux_common_core_of_thread (
 extern LONGEST linux_common_xfer_osdata (const char *annex, gdb_byte *readbuf,
 					 ULONGEST offset, LONGEST len);
 
+extern LONGEST linux_common_xfer_process_map (int pid, gdb_byte *readbuf,
+					      ULONGEST offset, LONGEST len);
+
 #endif
Index: gdb/gdbserver/linux-low.c
===================================================================
RCS file: /cvs/src/src/gdb/gdbserver/linux-low.c,v
retrieving revision 1.178
diff -u -p -r1.178 linux-low.c
--- gdb/gdbserver/linux-low.c	7 Oct 2011 12:06:48 -0000	1.178
+++ gdb/gdbserver/linux-low.c	21 Oct 2011 16:24:21 -0000
@@ -4505,6 +4505,13 @@ linux_qxfer_osdata (const char *annex,
   return linux_common_xfer_osdata (annex, readbuf, offset, len);
 }
 
+static int
+linux_qxfer_process_map (unsigned char *readbuf, CORE_ADDR offset, int len)
+{
+  int pid = pid_of (get_thread_lwp (current_inferior));
+  return linux_common_xfer_process_map (pid, readbuf, offset, len);
+}
+
 /* Convert a native/host siginfo object, into/from the siginfo in the
    layout of the inferiors' architecture.  */
 
@@ -5014,6 +5021,7 @@ static struct target_ops linux_target_op
   linux_install_fast_tracepoint_jump_pad,
   linux_emit_ops,
   linux_supports_disable_randomization,
+  linux_qxfer_process_map,
 };
 
 static void
Index: gdb/gdbserver/server.c
===================================================================
RCS file: /cvs/src/src/gdb/gdbserver/server.c,v
retrieving revision 1.149
diff -u -p -r1.149 server.c
--- gdb/gdbserver/server.c	7 Oct 2011 12:06:48 -0000	1.149
+++ gdb/gdbserver/server.c	21 Oct 2011 16:24:22 -0000
@@ -992,6 +992,22 @@ handle_qxfer_libraries (const char *anne
   return len;
 }
 
+/* Handle qXfer:process-map:read.  */
+
+static int
+handle_qxfer_process_map (const char *annex,
+			 gdb_byte *readbuf, const gdb_byte *writebuf,
+			 ULONGEST offset, LONGEST len)
+{
+  if (the_target->qxfer_process_map == NULL || writebuf != NULL)
+    return -2;
+
+  if (annex[0] != '\0' || !target_running ())
+    return -1;
+
+  return (*the_target->qxfer_process_map) (readbuf, offset, len);
+}
+
 /* Handle qXfer:osadata:read.  */
 
 static int
@@ -1216,6 +1232,7 @@ static const struct qxfer qxfer_packets[
     { "fdpic", handle_qxfer_fdpic},
     { "features", handle_qxfer_features },
     { "libraries", handle_qxfer_libraries },
+    { "process-map", handle_qxfer_process_map },
     { "osdata", handle_qxfer_osdata },
     { "siginfo", handle_qxfer_siginfo },
     { "spu", handle_qxfer_spu },
Index: gdb/gdbserver/target.h
===================================================================
RCS file: /cvs/src/src/gdb/gdbserver/target.h,v
retrieving revision 1.57
diff -u -p -r1.57 target.h
--- gdb/gdbserver/target.h	7 Oct 2011 12:06:48 -0000	1.57
+++ gdb/gdbserver/target.h	21 Oct 2011 16:24:22 -0000
@@ -380,6 +380,9 @@ struct target_ops
 
   /* Returns true if the target supports disabling randomization.  */
   int (*supports_disable_randomization) (void);
+
+  /* Read process map using qXfer packets.  */
+  int (*qxfer_process_map) (unsigned char *readbuf, CORE_ADDR offset, int len);
 };
 
 extern struct target_ops *the_target;
-- 
  Dr. Ulrich Weigand
  GNU Toolchain for Linux on System z and Cell BE
  Ulrich.Weigand@de.ibm.com



* Re: [rfc][3/3] Remote core file generation: memory map
  2011-10-21 19:54 [rfc][3/3] Remote core file generation: memory map Ulrich Weigand
@ 2011-11-01 18:41 ` Jan Kratochvil
  2011-11-01 21:28   ` Ulrich Weigand
From: Jan Kratochvil @ 2011-11-01 18:41 UTC
  To: Ulrich Weigand; +Cc: gdb-patches

Hi Ulrich,

not sure if there is anything to act on in this mail, but at least some comments:


On Fri, 21 Oct 2011 20:57:04 +0200, Ulrich Weigand wrote:
> Note that there already is a qXfer:memory-map:read packet, but this
> is not usable as-is to implement target_find_memory_regions, since
> it is really intended for a *system* memory map for some naked
> embedded targets instead of a per-process virtual address space map.
> 
> For example:
> 
> - the memory map is read into a single global mem_region list; it is not
>   switched for multiple inferiors

Without extended-remote there is a single address map only.  Is the memory map
already useful with extended-remote using separate address spaces?

I do not have the embedded memory map experience but it seems to me the memory
map should be specified for each address map, therefore for each inferior it
is OK (maybe only possibly more duplicates are sent if the address spaces are
the same).  If GDB uses the memory map it uses it already for some inferior
and therefore its address space.


> - native or gdbserver Linux targets do not have a memory map today,
>   and just enabling it changes memory access behaviour in unexpected
>   ways, e.g. accesses outside of memory regions in /proc/PID/maps are
>   now no longer possible; also caching behaviour is different
> 
> - the memory attribute format is insufficient to express properties
>   of a virtual memory mapping (e.g. permissions; mapped filename ...)
> 
> 
> I guess longer term it might be nicer to always have a memory map,
> and also use it for native targets, and then use the same map also
> for core file generation ...
> 
> I'd appreciate suggestions how to move forward on this; is having a
> new qXfer type just for core file generation OK, or should we rather
> attempt to move towards an always-active memory map -- if the latter,
> how can we get there?

I need to implement core file reading support in gdbserver in the foreseeable
future, for performance reasons.  For the core file case everything can be
indefinitely cached (and it is more significant to cache it than in the local
core file case).  The caching can+should be improved even in the normal live
process case (by setting default_mem_attrib->cache = 1) but it needs to be
temporary (with the prepare_execute_command flushing).  For embedded targets
the caching should be disabled for memory-I/O regions even if it would get
enabled otherwise.

The caching should probably stay in the memory map and not be moved into the
process map.  This all suggests to me that the separation in the submitted
patch may complicate things a bit.


> +const struct gdb_xml_attribute vma_attributes[] = {
> +const struct gdb_xml_element process_map_children[] = {
> +const struct gdb_xml_element process_map_elements[] = {

These should be static; this is already a bug in memory-map.c, but there are
too many such bugs.  Someone could spend some time fixing them; one could use my:
	http://git.jankratochvil.net/?p=nethome.git;a=blob_plain;hb=HEAD;f=bin/checkstatic


> +static int
> +read_mapping (FILE *mapfile,
> +	      long long *addr,
> +	      long long *endaddr,
> +	      char *permissions,
> +	      long long *offset,
> +	      char *device, long long *inode, char *filename)
> +{
> +  int ret = fscanf (mapfile, "%llx-%llx %s %llx %s %llx",
> +                    addr, endaddr, permissions, offset, device, inode);

There will be a /proc/PID/maps reader for gdbserver too, from:
	[patch 3/3] Implement qXfer:libraries for Linux/gdbserver #3
	http://sourceware.org/ml/gdb-patches/2011-10/msg00511.html

But maybe the code is simple enough, and it has different output anyway, so
it does not matter much to unify it.
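
For reference, a typical /proc/PID/maps line such a reader has to parse
looks like the following (the particular values are of course made up):

  00400000-0040b000 r-xp 00000000 08:01 1048602        /bin/cat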


Thanks,
Jan



* Re: [rfc][3/3] Remote core file generation: memory map
  2011-11-01 18:41 ` Jan Kratochvil
@ 2011-11-01 21:28   ` Ulrich Weigand
  2011-11-08 17:25     ` Ulrich Weigand
From: Ulrich Weigand @ 2011-11-01 21:28 UTC
  To: Jan Kratochvil; +Cc: gdb-patches

Jan Kratochvil wrote:
> On Fri, 21 Oct 2011 20:57:04 +0200, Ulrich Weigand wrote:
> > Note that there already is a qXfer:memory-map:read packet, but this
> > is not usable as-is to implement target_find_memory_regions, since
> > it is really intended for a *system* memory map for some naked
> > embedded targets instead of a per-process virtual address space map.
> > 
> > For example:
> > 
> > - the memory map is read into a single global mem_region list; it is not
> >   switched for multiple inferiors
> 
> Without extended-remote there is a single address map only.  Is the memory map
> already useful with extended-remote using separate address spaces?
> 
> I do not have the embedded memory map experience but it seems to me the memory
> map should be specified for each address map, therefore for each inferior it
> is OK (maybe only possibly more duplicates are sent if the address spaces are
> the same).  If GDB uses the memory map it uses it already for some inferior
> and therefore its address space.

The problem is that the way GDB uses the memory map is completely
incompatible with the presence of multiple address spaces.

There is a single instance of the map (kept in a global variable
mem_region_list in memattr.c), which is used for any access in
any address space.  lookup_mem_region takes only a CORE_ADDR;
the "info mem" commands only operate on addresses with no notion
of address spaces.  The remote protocol also does not specify
which address space a map is requested for.
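
To make that concrete: today the lookup interface is simply

  struct mem_region *lookup_mem_region (CORE_ADDR addr);

and making the map address-space aware would mean something along the
lines of

  /* Hypothetical, address-space-aware variant; nothing like this
     exists in memattr.h today.  */
  struct mem_region *lookup_mem_region (struct address_space *aspace,
                                        CORE_ADDR addr);

plus per-aspace storage instead of the single global mem_region_list.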

This doesn't appear to matter much in practice, since the native
targets and gdbserver do not implement memory maps at all.  Just
some special-purpose remote stubs apparently do; and those are
probably for targets that do not support multiple address spaces.

However, this means that it isn't easily possible to just switch
to providing memory maps for native/gdbserver targets, because we
now run into those problems ...

> I need to implement core file reading support in gdbserver in the foreseeable
> future, for performance reasons.  For the core file case everything can be
> indefinitely cached (and it is more significant to cache it than in the local
> core file case).  The caching can+should be improved even in the normal live
> process case (by setting default_mem_attrib->cache = 1) but it needs to be
> temporary (with the prepare_execute_command flushing).  For embedded targets
> the caching should be disabled for memory-I/O regions even if it would get
> enabled otherwise.
> 
> The caching should probably stay in the memory map and not be moved into the
> process map.  This all suggests to me that the separation in the submitted
> patch may complicate things a bit.

Yes, if you want to enable memory-map features on gdbserver targets, then
those problems will need to be fixed.  In *that* case, it would make more
sense to avoid introducing a new map.

> > +const struct gdb_xml_attribute vma_attributes[] = {
> > +const struct gdb_xml_element process_map_children[] = {
> > +const struct gdb_xml_element process_map_elements[] = {
> 
> These should be static; this is already a bug in memory-map.c, but there are
> too many such bugs.  Someone could spend some time fixing them; one could use my:
> 	http://git.jankratochvil.net/?p=nethome.git;a=blob_plain;hb=HEAD;f=bin/checkstatic

Fixed, thanks.

Bye,
Ulrich

-- 
  Dr. Ulrich Weigand
  GNU Toolchain for Linux on System z and Cell BE
  Ulrich.Weigand@de.ibm.com



* Re: [rfc][3/3] Remote core file generation: memory map
  2011-11-01 21:28   ` Ulrich Weigand
@ 2011-11-08 17:25     ` Ulrich Weigand
  2011-11-09 16:37       ` Pedro Alves
From: Ulrich Weigand @ 2011-11-08 17:25 UTC
  To: jan.kratochvil; +Cc: gdb-patches

I wrote:
> Jan Kratochvil wrote:
> > On Fri, 21 Oct 2011 20:57:04 +0200, Ulrich Weigand wrote:
> > > Note that there already is a qXfer:memory-map:read packet, but this
> > > is not usable as-is to implement target_find_memory_regions, since
> > > it is really intended for a *system* memory map for some naked
> > > embedded targets instead of a per-process virtual address space map.
> > > 
> > > For example:
> > > 
> > > - the memory map is read into a single global mem_region list; it is not
> > >   switched for multiple inferiors
> > 
> > Without extended-remote there is a single address map only.  Is the memory map
> > already useful with extended-remote using separate address spaces?
> > 
> > I do not have the embedded memory map experience but it seems to me the memory
> > map should be specified for each address map, therefore for each inferior it
> > is OK (maybe only possibly more duplicates are sent if the address spaces are
> > the same).  If GDB uses the memory map it uses it already for some inferior
> > and therefore its address space.
> 
> The problem is that the way GDB uses the memory map is completely
> incompatible with the presence of multiple address spaces.
> 
> There is a single instance of the map (kept in a global variable
> mem_region_list in memattr.c), which is used for any access in
> any address space.  lookup_mem_region takes only a CORE_ADDR;
> the "info mem" commands only operate on addresses with no notion
> of address spaces.  The remote protocol also does not specify
> which address space a map is requested for.

Another problem just occurred to me: the memory region list is
cached for the entire lifetime of the inferior.
This caching is really necessary, since the map is consulted
during each single memory access.  And it seems quite valid to
cache the map as long as it describes fixed features of the
architecture (i.e. RAM/ROM/Flash layout).

However, once the map describes VMA mappings in a process context,
it becomes highly dynamic as memory maps come and go ...  It is
no longer really feasible to cache the map contents then.

This seems to me to be an argument *for* splitting the contents into
two maps; the system map which is static and cached (and used for
each memory access), and the per-process map which is dynamic
and uncached (and only used rarely, in response to infrequently
used user commands) ...

Thoughts?

Bye,
Ulrich

-- 
  Dr. Ulrich Weigand
  GNU Toolchain for Linux on System z and Cell BE
  Ulrich.Weigand@de.ibm.com



* Re: [rfc][3/3] Remote core file generation: memory map
  2011-11-08 17:25     ` Ulrich Weigand
@ 2011-11-09 16:37       ` Pedro Alves
  2011-11-09 18:27         ` Ulrich Weigand
From: Pedro Alves @ 2011-11-09 16:37 UTC
  To: gdb-patches; +Cc: Ulrich Weigand, jan.kratochvil

On Tuesday 08 November 2011 17:25:36, Ulrich Weigand wrote:
> I wrote:
> > Jan Kratochvil wrote:
> > > On Fri, 21 Oct 2011 20:57:04 +0200, Ulrich Weigand wrote:
> > > > Note that there already is a qXfer:memory-map:read packet, but this
> > > > is not usable as-is to implement target_find_memory_regions, since
> > > > it is really intended for a *system* memory map for some naked
> > > > embedded targets instead of a per-process virtual address space map.
> > > > 
> > > > For example:
> > > > 
> > > > - the memory map is read into a single global mem_region list; it is not
> > > >   switched for multiple inferiors
> > > 
> > > Without extended-remote there is a single address map only.  Is the memory map
> > > already useful with extended-remote using separate address spaces?
> > > 
> > > I do not have the embedded memory map experience but it seems to me the memory
> > > map should be specified for each address map, therefore for each inferior it
> > > is OK (maybe only possibly more duplicates are sent if the address spaces are
> > > the same).  If GDB uses the memory map it uses it already for some inferior
> > > and therefore its address space.
> > 
> > The problem is that the way GDB uses the memory map is completely
> > incompatible with the presence of multiple address spaces.
> > 
> > There is a single instance of the map (kept in a global variable
> > mem_region_list in memattr.c), which is used for any access in
> > any address space.  lookup_mem_region takes only a CORE_ADDR;
> > the "info mem" commands only operate on addresses with no notion
> > of address spaces.  

That's mostly because we never really needed to consider making it
per multi-process/inferior/exec before, and managed to just look the
other way.  Targets that do multi-process don't use the map presently.
I'm sure there are other things that live in globals but that should
be per-inferior or address space, waiting for someone to trip on
them, and eventually get fixed.  :-)

> Another problem just occurred to me: the memory region list is
> cached for the entire lifetime of the inferior.
> This caching is really necessary, since the map is consulted
> during each single memory access.  And it seems quite valid to
> cache the map as long as it describes fixed features of the
> architecture (i.e. RAM/ROM/Flash layout).
> 
> However, once the map describes VMA mappings in a process context,
> it becomes highly dynamic as memory maps come and go ...  It is
> no longer really feasible to cache the map contents then.

Agreed.

> This seems to me to be an argument *for* splitting the contents into
> two maps; the system map which is static and cached (and used for
> each memory access), and the per-process map which is dynamic
> and uncached (and only used rarely, in response to infrequently
> used user commands) ...

On e.g., uclinux / no mmu, you could have both the
system memory map returning the properties of memory of the
whole system, and gdb could use that for all memory accesses,
but, when generating a core of a single process, we're only
interested in the memory "mapped" to that process.  So I tend
to agree.

We could also make the existing memory map be per-process/aspace,
and define it to describe only the process's map (a process is a means
of virtualizing the system's resources, after all).  The dynamic
nature of a process's memory map then becomes a cache management
policy decision.  E.g., at times we know the map can't change (all is
stopped, or by user knob), and this would automatically enable the dcache
for all RO regions (mostly .text).  We can still do this while having
a two-map mechanism, though.

It doesn't seem there's a true answer to this, but I'm leaning
towards a new target object.

-- 
Pedro Alves



* Re: [rfc][3/3] Remote core file generation: memory map
  2011-11-09 16:37       ` Pedro Alves
@ 2011-11-09 18:27         ` Ulrich Weigand
  2011-11-09 19:31           ` Pedro Alves
From: Ulrich Weigand @ 2011-11-09 18:27 UTC
  To: Pedro Alves; +Cc: gdb-patches, jan.kratochvil, sergiodj

Pedro Alves wrote:
> On Tuesday 08 November 2011 17:25:36, Ulrich Weigand wrote:
> > > The problem is that the way GDB uses the memory map is completely
> > > incompatible with the presence of multiple address spaces.
> > > 
> > > There is a single instance of the map (kept in a global variable
> > > mem_region_list in memattr.c), which is used for any access in
> > > any address space.  lookup_mem_region takes only a CORE_ADDR;
> > > the "info mem" commands only operate on addresses with no notion
> > > of address spaces.  
> 
> That's mostly because we never really needed to consider making it
> per multi-process/inferior/exec before, and managed to just look the
> other way.  Targets that do multi-process don't use the map presently.
> I'm sure there are other things that live in globals but that should
> be per-inferior or address space, waiting for someone to trip on
> them, and eventually get fixed.  :-)

Yes, that's what I thought :-)

> > This seems to me to be an argument *for* splitting the contents into
> > two maps; the system map which is static and cached (and used for
> > each memory access), and the per-process map which is dynamic
> > and uncached (and only used rarely, in response to unfrequently
> > used user commands) ...
> 
> On e.g., uclinux / no mmu, you could have both the
> system memory map returning the properties of memory of the
> whole system, and gdb could use that for all memory accesses,
> but, when generating a core of a single process, we're only
> interested in the memory "mapped" to that process.  So I tend
> to agree.

OK, another good point.

> We could also make the existing memory map be per-process/aspace,
> and define it to describe only the process's map (a process is a means
> of virtualizing the system's resources, after all).  The dynamic
> nature of a process's memory map then becomes a cache management
> policy decision.  E.g., at times we know the map can't change (all is
> stopped, or by user knob), and this would automatically enable the dcache
> for all RO regions (mostly .text).  We can still do this while having
> a two-map mechanism, though.
> 
> It doesn't seem there's a true answer to this, but I'm leaning
> towards a new target object.

OK.  In the meantime, I've noticed the discussion going on in parallel
on the "info core mappings" commands.  If we implement this, we have
the somewhat weird situation that we can show mappings for native
processes and for core files, but not for processes attached to remotely,
even if the target is also Linux ...

It would appear to me that this command actually just needs the very
same data I need here for the generate-core-file command, namely the
current list of memory mappings.

If we create a new target object for VMA memory mappings, maybe we
ought to then have a standard "info mappings" (or the like) command
implemented in GDB *common code* that works likewise on native,
core file, *and* also gdbserver targets; in fact, on all targets
that provide that new target object (which may need to be a bit
richer, e.g. provide mapped file names as well)?
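
Just to sketch what I have in mind (this is not part of the posted patch,
and a real implementation would parse the XML and pretty-print each <vma>
rather than dumping the raw document), such a command could start out as
simple as:

  /* Sketch only: dump the raw process-map XML.  A real command would
     parse it, e.g. with gdb_xml_parse_quick, and format each mapping.  */
  static void
  info_mappings_command (char *args, int from_tty)
  {
    char *xml = target_read_stralloc (&current_target,
                                      TARGET_OBJECT_PROCESS_MAP, NULL);
    struct cleanup *back_to = make_cleanup (xfree, xml);

    if (xml == NULL || *xml == '\0')
      error (_("Target does not provide a process memory map."));

    puts_filtered (xml);
    do_cleanups (back_to);
  }

registered via add_info ("mappings", ...), so that it works on every
target that implements the new object.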

Bye,
Ulrich

-- 
  Dr. Ulrich Weigand
  GNU Toolchain for Linux on System z and Cell BE
  Ulrich.Weigand@de.ibm.com



* Re: [rfc][3/3] Remote core file generation: memory map
  2011-11-09 18:27         ` Ulrich Weigand
@ 2011-11-09 19:31           ` Pedro Alves
  2011-11-09 20:04             ` Sergio Durigan Junior
From: Pedro Alves @ 2011-11-09 19:31 UTC
  To: Ulrich Weigand; +Cc: gdb-patches, jan.kratochvil, sergiodj

On Wednesday 09 November 2011 18:27:07, Ulrich Weigand wrote:

> OK.  In the meantime, I've noticed the discussion going on in parallel
> on the "info core mappings" commands.  If we implement this, we have
> the somewhat weird situation that we can show mappings for native
> processes and for core files, but not for processes attached to remotely,
> even if the target is also Linux ...
> 
> It would appear to me that this command actually just needs the very
> same data I need here for the generate-core-file command, namely the
> current list of memory mappings.
> 
> If we create a new target object for VMA memory mappings, maybe we
> ought to then have a standard "info mappings" (or the like) command
> implemented in GDB *common code* that works likewise on native,
> core file, *and* also gdbserver targets; in fact, on all targets
> that provide that new target object (which may need to be a bit
> richer, e.g. provide mapped file names as well)?

Sounds like a good idea indeed.

-- 
Pedro Alves



* Re: [rfc][3/3] Remote core file generation: memory map
  2011-11-09 19:31           ` Pedro Alves
@ 2011-11-09 20:04             ` Sergio Durigan Junior
From: Sergio Durigan Junior @ 2011-11-09 20:04 UTC
  To: Pedro Alves; +Cc: Ulrich Weigand, gdb-patches, jan.kratochvil

Pedro Alves <pedro@codesourcery.com> writes:

> On Wednesday 09 November 2011 18:27:07, Ulrich Weigand wrote:
>
>> OK.  In the meantime, I've noticed the discussion going on in parallel
>> on the "info core mappings" commands.  If we implement this, we have
>> the somewhat weird situation that we can show mappings for native
>> processes and for core files, but not for processes attached to remotely,
>> even if the target is also Linux ...
>> 
>> It would appear to me that this command actually just needs the very
>> same data I need here for the generate-core-file command, namely the
>> current list of memory mappings.
>> 
>> If we create a new target object for VMA memory mappings, maybe we
>> ought to then have a standard "info mappings" (or the like) command
>> implemented in GDB *common code* that works likewise on native,
>> core file, *and* also gdbserver targets; in fact, on all targets
>> that provide that new target object (which may need to be a bit
>> richer, e.g. provide mapped file names as well)?
>
> Sounds like a good idea indeed.

I totally agree.  When doing the `info core' patch, I often found myself
thinking about how `info proc mappings' needed to be reworked.

Anyway, just to be clear, will this new command be implemented
differently from the current `info proc mappings'?  I don't know about
you, but cat'ing /proc/PID/maps is something that could be improved.

Thanks.


