Subject: [rfc] [18/18] Cell multi-arch: Automatically flush software-managed cache
To: gdb-patches@sourceware.org
Date: Sun, 07 Sep 2008 21:18:00 -0000
From: "Ulrich Weigand"
Message-Id: <200809072117.m87LHGEp003814@d12av02.megacenter.de.ibm.com>

Hello,

the __ea pointers (see previous patch) are implemented in SPU code with
the help of a software-managed cache.  This causes some challenges for
debugging: if you access a PowerPC variable via an __ea pointer from SPU
code, changes to that variable are not actually visible in PowerPC memory
until the SPU software-managed cache has written back the cache line.

This patch has GDB perform an inferior call to __cache_flush every time
the inferior stops in SPU code that uses the software-managed cache.
Thus, the user is able to inspect PowerPC variables and see their current
values.

However, there are situations where this is counter-productive, e.g. when
you are actually trying to debug the cache manager itself.  Therefore, the
patch also adds a command to disable that feature.

Bye,
Ulrich


ChangeLog:

	* spu-tdep.c: Include "infcall.h".
	(spu_auto_flush_cache_p): New static variable.
	(spu_objfile_from_context): New function.
	(flush_ea_cache, spu_attach_normal_stop): Likewise.
	(show_spu_auto_flush_cache): Likewise.
	(_initialize_spu_tdep): Attach to normal_stop observer.
	Install "set spu auto-flush-cache" / "show spu auto-flush-cache"
	commands.

doc/ChangeLog:

	* gdb.texinfo (Cell Broadband Engine SPU architecture): Document
	the "set spu auto-flush-cache" and "show spu auto-flush-cache"
	commands.


Index: src/gdb/spu-tdep.c
===================================================================
--- src.orig/gdb/spu-tdep.c
+++ src/gdb/spu-tdep.c
@@ -42,6 +42,7 @@
 #include "floatformat.h"
 #include "block.h"
 #include "observer.h"
+#include "infcall.h"
 
 #include "spu-tdep.h"
@@ -52,6 +53,8 @@ static struct cmd_list_element *showspuc
 
 /* Whether to stop for new SPE contexts.  */
 static int spu_stop_on_load_p = 0;
 
+/* Whether to automatically flush the SW-managed cache.  */
+static int spu_auto_flush_cache_p = 1;
 
 /* The tdep structure.  */
@@ -1772,6 +1775,76 @@ spu_catch_start (struct objfile *objfile
   tbreak_command (buf, 0);
 }
 
+/* Lookup OBJFILE corresponding to the current SPU context.
+   */
+static struct objfile *
+spu_objfile_from_context (void)
+{
+  struct frame_info *frame = get_current_frame ();
+  struct gdbarch *gdbarch = get_frame_arch (frame);
+  struct gdbarch_tdep *tdep = gdbarch_tdep (gdbarch);
+  struct objfile *obj;
+
+  if (gdbarch_bfd_arch_info (gdbarch)->arch != bfd_arch_spu)
+    return NULL;
+
+  ALL_OBJFILES (obj)
+    {
+      if (obj->sections != obj->sections_end
+	  && SPUADDR_SPU (obj_section_addr (obj->sections)) == tdep->id)
+	return obj;
+    }
+
+  return NULL;
+}
+
+/* Flush the software-managed cache used for __ea pointer accesses by
+   performing an inferior call to __cache_flush, if that function is
+   available.  Do nothing if it cannot be found.  */
+static void
+flush_ea_cache (void)
+{
+  struct value *ea_flush_fn = NULL;
+  struct minimal_symbol *msymbol;
+  struct objfile *obj;
+
+  obj = spu_objfile_from_context ();
+  if (obj == NULL)
+    return;
+
+  /* Lookup inferior function __cache_flush.  */
+  msymbol = lookup_minimal_symbol ("__cache_flush", NULL, obj);
+  if (msymbol != NULL)
+    {
+      struct type *type;
+      CORE_ADDR addr;
+
+      type = builtin_type_void;
+      type = lookup_function_type (type);
+      type = lookup_pointer_type (type);
+      addr = SYMBOL_VALUE_ADDRESS (msymbol);
+      ea_flush_fn = value_from_pointer (type, addr);
+    }
+
+  if (ea_flush_fn)
+    call_function_by_hand (ea_flush_fn, 0, NULL);
+}
+
+/* This handler is called when the inferior has stopped.  If it stopped
+   in SPU code, flush the ea cache if it is in use.  */
+static void
+spu_attach_normal_stop (struct bpstats *bs)
+{
+  if (!spu_auto_flush_cache_p)
+    return;
+
+  if (!target_has_registers || !target_has_stack || !target_has_memory)
+    return;
+
+  /* Temporarily reset spu_auto_flush_cache_p to avoid recursively
+     re-entering this function when __cache_flush stops.  */
+  spu_auto_flush_cache_p = 0;
+  flush_ea_cache ();
+  spu_auto_flush_cache_p = 1;
+}
 
 /* "info spu" commands.
    */
@@ -2344,6 +2417,14 @@ show_spu_stop_on_load (struct ui_file *f
 		    value);
 }
 
+static void
+show_spu_auto_flush_cache (struct ui_file *file, int from_tty,
+			   struct cmd_list_element *c, const char *value)
+{
+  fprintf_filtered (file, _("Automatic software-cache flush is %s.\n"),
+		    value);
+}
+
 
 /* Set up gdbarch struct.  */
@@ -2464,6 +2545,9 @@ _initialize_spu_tdep (void)
   /* Install spu stop-on-load handler.  */
   observer_attach_new_objfile (spu_catch_start);
 
+  /* Add ourselves to normal_stop event chain.  */
+  observer_attach_normal_stop (spu_attach_normal_stop);
+
   /* Add root prefix command for all "set spu"/"show spu" commands.  */
   add_prefix_cmd ("spu", no_class, set_spu_command,
 		  _("Various SPU specific commands."),
@@ -2486,6 +2570,20 @@ Use \"off\" to disable stopping for new 
 		  show_spu_stop_on_load,
 		  &setspucmdlist, &showspucmdlist);
 
+  /* Toggle whether or not to automatically flush the software-managed
+     cache whenever SPE execution stops.  */
+  add_setshow_boolean_cmd ("auto-flush-cache", class_support,
+			   &spu_auto_flush_cache_p, _("\
+Set whether to automatically flush SW-managed cache."), _("\
+Show whether to automatically flush SW-managed cache."), _("\
+Use \"on\" to automatically flush the software-managed cache whenever SPE execution stops.\n\
+Use \"off\" to never automatically flush the software-managed cache."),
+			   NULL,
+			   show_spu_auto_flush_cache,
+			   &setspucmdlist, &showspucmdlist);
+
   /* Add root prefix command for all "info spu" commands.  */
   add_prefix_cmd ("spu", class_info, info_spu_command,
 		  _("Various SPU specific commands."),

Index: src/gdb/doc/gdb.texinfo
===================================================================
--- src.orig/gdb/doc/gdb.texinfo
+++ src/gdb/doc/gdb.texinfo
@@ -16446,6 +16446,16 @@ function.
 @kindex show spu
 Show whether to stop for new SPE threads.
 
+@item set spu auto-flush-cache @var{arg}
+Set whether to automatically flush the software-managed cache.
+When set to
+@code{on}, @value{GDBN} will automatically cause the SPE software-managed
+cache to be flushed whenever SPE execution stops.  This provides a
+consistent view of PowerPC memory that is accessed via the cache.  If an
+application does not use the software-managed cache, this option has no
+effect.
+
+@item show spu auto-flush-cache
+Show whether to automatically flush the software-managed cache.
+
 @end table
 
 @node PowerPC

-- 
  Dr. Ulrich Weigand
  GNU Toolchain for Linux on System z and Cell BE
  Ulrich.Weigand@de.ibm.com