From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Metzger, Markus T"
To: Yao Qi
Cc: "gdb-patches@sourceware.org", "Wiederhake, Tim", "xdje42@gmail.com", "Joel Brobecker"
Subject: RE: GDB 8.0 release/branching 2017-03-20 update
Date: Tue, 28 Mar 2017 15:08:00 -0000
References: <20170320201629.pbjzaqsnvs7dx7f2@adacore.com> <86zigevkv0.fsf@gmail.com> <86inn1utkp.fsf@gmail.com> <86inmzvrbx.fsf@gmail.com> <86shm2u47t.fsf@gmail.com> <86wpbbnf1f.fsf@gmail.com> <86shlyoggb.fsf@gmail.com> <86mvc5o7o4.fsf@gmail.com>
In-Reply-To: <86mvc5o7o4.fsf@gmail.com>
X-SW-Source: 2017-03/txt/msg00485.txt.bz2
Hello Yao,

> > How would this look in our python implementation?
>
> I am not sure.  One approach in my mind is that sub-class can overwrite
> by defining its own getset.
>
> struct PyGetSetDef py_insn_getset[] =
> {
>   { "data", py_insn_data, NULL, "raw instruction data", NULL },
>   { "decoded", py_insn_decode, NULL, "decoded instruction", NULL },
>   { "size", py_insn_size, NULL, "instruction size in bytes", NULL },
>   { "pc", py_insn_pc, NULL, "instruction address", NULL },
>   { NULL }
> };

This is for the (abstract) base class, I assume.

We don't store any data in the base class, so the Python object would
contain the PyObject header and nothing else, correct?

And the above functions would throw an exception or return None,
correct?

> struct PyGetSetDef btpy_insn_getset[] =
> {
>   { "data", btpy_insn_data, NULL, "raw instruction data", NULL },
>   { "decoded", btpy_insn_decode, NULL, "decoded instruction", NULL },
>   { "size", btpy_insn_size, NULL, "instruction size in bytes", NULL },
>   { "pc", btpy_insn_pc, NULL, "instruction address", NULL },
>
>   { "number", btpy_number, NULL, "instruction number", NULL },
>   { "sal", btpy_sal, NULL, "symtab and line", NULL },
>   { NULL }
> };

This is for the BtraceInstruction derived class, I assume.  That's
essentially what Tim implemented, correct?

This doesn't look too far away from what we have in GDB today.
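For readers less familiar with getset tables, the behaviour under discussion (a derived class that shadows and extends its base class's attribute table) can be sketched in plain C, independent of CPython. This is a hypothetical miniature, not GDB's or CPython's actual code; all names here (`getset_def`, `lookup`, the getter functions) are invented for illustration:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical stand-in for a PyGetSetDef entry: a name plus a getter.  */
typedef const char *(*getter_fn) (void);

struct getset_def
{
  const char *name;
  getter_fn get;
};

static const char *base_data (void) { return "base: no data"; }
static const char *btrace_data (void) { return "raw instruction data"; }
static const char *btrace_number (void) { return "instruction number"; }

/* Base-class table: attributes every instruction object exposes.  */
static struct getset_def base_getset[] = {
  { "data", base_data },
  { NULL, NULL }
};

/* Derived-class table: overrides "data" and adds "number".  */
static struct getset_def btrace_getset[] = {
  { "data", btrace_data },
  { "number", btrace_number },
  { NULL, NULL }
};

/* Consult the derived table first, then fall back to the base table,
   mimicking how a sub-class's getset shadows the base class's.  */
static getter_fn
lookup (const char *name)
{
  struct getset_def *tables[] = { btrace_getset, base_getset };
  for (size_t t = 0; t < 2; t++)
    for (struct getset_def *d = tables[t]; d->name != NULL; d++)
      if (strcmp (d->name, name) == 0)
	return d->get;
  return NULL;
}
```

In CPython the same effect falls out of normal attribute resolution along the type hierarchy; the sketch only shows the shadowing order being discussed.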
Thanks,
Markus.

From gdb-patches-return-137842-listarch-gdb-patches=sources.redhat.com@sourceware.org Tue Mar 28 15:29:43 2017
From: Roman Pen
To:
Cc: Roman Pen, Pedro Alves, Daniel Jacobowitz, Jan Kratochvil, gdb-patches@sourceware.org, Stefan Hajnoczi, Paolo Bonzini
Subject: [PATCH 1/1] gdb: corelow: make it possible to modify registers for a corefile
Date: Tue, 28 Mar 2017 15:29:00 -0000
Message-Id: <20170328152918.301-1-roman.penyaev@profitbricks.com>
X-SW-Source: 2017-03/txt/msg00486.txt.bz2

This change eases debugging of a jmp_buf (setjmp()) and of user contexts
(makecontext()), which are heavily used in the QEMU project as part of its
coroutines.  It allows setting registers for a corefile, which makes it
possible to investigate backtraces of preempted contexts simply by setting
the correct registers taken from the jmp_buf or ucontext_t structures.
Previously, only live processes could be debugged this way.

This patch caches all registers on the first attempt to modify a register
with '(gdb) set $REG = ADDR'; from then on, the cached copy is always
returned from get_core_registers().  No harmful impact on previous
behaviour is expected: since it was not allowed to set registers for a
corefile, obviously nobody did that before.  If registers are not cached
(the default behaviour), the old execution path is taken and registers
are reread.
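The caching scheme the commit message describes (a small fixed-size hash table keyed by ptid, with chained buckets, filled lazily on the first store) can be sketched independently of GDB. Everything below is a simplified stand-in: `ptid_t`, `bucket_of`, `find_cached`, and `store_cached` are invented names mirroring the patch's `ptid_to_hashtbl_index` / `find_cached_regs` logic, not GDB's real API:

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical stand-ins for GDB's ptid and raw-register storage.  */
typedef struct { long lwp; } ptid_t;
typedef char cachedreg_t[16];

struct cached_regs
{
  cachedreg_t *regs;
  ptid_t ptid;
  struct cached_regs *next;
};

#define NBUCKETS 128
static struct cached_regs *cache[NBUCKETS];

/* Bucket selection: lwp modulo table size, as in the patch.  */
static unsigned
bucket_of (ptid_t ptid)
{
  return (unsigned) ptid.lwp % NBUCKETS;
}

/* Find a thread's cached registers, or NULL if never stored.  */
static struct cached_regs *
find_cached (ptid_t ptid)
{
  struct cached_regs *r = cache[bucket_of (ptid)];
  for (; r != NULL; r = r->next)
    if (r->ptid.lwp == ptid.lwp)
      return r;
  return NULL;
}

/* First store allocates an entry and links it at the head of its
   bucket's chain; later stores reuse the existing entry.  */
static struct cached_regs *
store_cached (ptid_t ptid, int nregs)
{
  struct cached_regs *r = find_cached (ptid);
  if (r == NULL)
    {
      unsigned b = bucket_of (ptid);
      r = malloc (sizeof *r);
      r->ptid = ptid;
      r->regs = calloc (nregs, sizeof *r->regs);
      r->next = cache[b];
      cache[b] = r;
    }
  return r;
}
```

Two ptids whose lwp values differ by a multiple of 128 land in the same bucket but remain distinct entries on that bucket's chain, which is why the lookup compares the full ptid rather than relying on the bucket index alone.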
Signed-off-by: Roman Pen
Cc: Pedro Alves
Cc: Daniel Jacobowitz
Cc: Jan Kratochvil
Cc: gdb-patches@sourceware.org
QEMU guys who can be interested in this new gdb behaviour:
Cc: Stefan Hajnoczi
Cc: Paolo Bonzini
---
 gdb/corelow.c | 107 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 107 insertions(+)

diff --git a/gdb/corelow.c b/gdb/corelow.c
index c46af0a8a59d..8463717a23c9 100644
--- a/gdb/corelow.c
+++ b/gdb/corelow.c
@@ -74,6 +74,24 @@ static struct gdbarch *core_gdbarch = NULL;
    unix child targets.  */
 static struct target_section_table *core_data;
 
+/* Cached registers.  Once registers are modified (set) for a corefile,
+   they are cached and then are always fetched from get_core_registers().
+   This hairy hack is used only for one purpose: give a possibility to
+   investigate backtraces and debug jmp_buf (setjmp()) and user contexts
+   (makecontext()).  */
+
+typedef char cachedreg_t[MAX_REGISTER_SIZE];
+
+struct cached_regs {
+  cachedreg_t *regs;
+  ptid_t ptid;
+  struct cached_regs *next;
+};
+
+/* Hash table of cached registers for each thread, where key is a ptid.  */
+
+static struct cached_regs *core_cachedregs[128];
+
 static void core_files_info (struct target_ops *);
 
 static struct core_fns *sniff_core_bfd (bfd *);
@@ -183,12 +201,35 @@ gdb_check_format (bfd *abfd)
   return (0);
 }
 
+static inline unsigned
+ptid_to_hashtbl_index (ptid_t ptid)
+{
+  return ptid_get_lwp (ptid) % ARRAY_SIZE (core_cachedregs);
+}
+
+static inline struct cached_regs *
+find_cached_regs (ptid_t ptid)
+{
+  struct cached_regs *regs;
+
+  regs = core_cachedregs[ptid_to_hashtbl_index (ptid)];
+  for ( ; regs; regs = regs->next)
+    {
+      if (ptid_equal (regs->ptid, ptid))
+	break;
+    }
+
+  return regs;
+}
+
 /* Discard all vestiges of any previous core file and mark data and
    stack spaces as empty.  */
 
 static void
 core_close (struct target_ops *self)
 {
+  int i;
+
   if (core_bfd)
     {
       int pid = ptid_get_pid (inferior_ptid);
@@ -213,6 +254,21 @@ core_close (struct target_ops *self)
     }
   core_vec = NULL;
   core_gdbarch = NULL;
+
+  for (i = 0; i < ARRAY_SIZE (core_cachedregs); i++)
+    {
+      struct cached_regs *regs, *next;
+
+      regs = core_cachedregs[i];
+      while (regs)
+	{
+	  next = regs->next;
+	  xfree (regs->regs);
+	  xfree (regs);
+	  regs = next;
+	}
+      core_cachedregs[i] = NULL;
+    }
 }
 
 static void
@@ -610,6 +666,7 @@ get_core_registers (struct target_ops *ops,
 {
   int i;
   struct gdbarch *gdbarch;
+  struct cached_regs *regs;
 
   if (!(core_gdbarch && gdbarch_iterate_over_regset_sections_p (core_gdbarch))
       && (core_vec == NULL || core_vec->core_read_registers == NULL))
     {
@@ -620,6 +677,19 @@
     }
 
   gdbarch = get_regcache_arch (regcache);
+  regs = find_cached_regs (inferior_ptid);
+
+  if (regs)
+    {
+      /* If registers were once modified (set) for a corefile,
+	 follow this path and always return cached registers.  */
+
+      for (i = 0; i < gdbarch_num_regs (gdbarch); i++)
+	regcache_raw_supply (regcache, i, &regs->regs[i]);
+
+      return;
+    }
+
   if (gdbarch_iterate_over_regset_sections_p (gdbarch))
     gdbarch_iterate_over_regset_sections (gdbarch,
					  get_core_registers_cb,
@@ -639,6 +709,41 @@
 }
 
 static void
+set_core_registers (struct target_ops *self, struct regcache *regcache,
+		    int regnum)
+{
+  struct cached_regs *regs;
+  struct gdbarch *gdbarch;
+  int i;
+
+  gdbarch = get_regcache_arch (regcache);
+  regs = find_cached_regs (inferior_ptid);
+
+  if (regs == NULL)
+    {
+      unsigned hind;
+
+      regs = (struct cached_regs *) xmalloc (sizeof (*regs));
+      regs->ptid = inferior_ptid;
+      regs->regs = (cachedreg_t *) xcalloc (gdbarch_num_regs (gdbarch),
+					    sizeof (*regs->regs));
+      hind = ptid_to_hashtbl_index (inferior_ptid);
+      /* Add new cached registers to the head of the list.  */
+      regs->next = core_cachedregs[hind];
+      core_cachedregs[hind] = regs;
+    }
+
+  for (i = 0; i < gdbarch_num_regs (gdbarch); i++)
+    regcache_raw_collect (regcache, i, &regs->regs[i]);
+}
+
+static void
+prepare_core_registers (struct target_ops *self, struct regcache *arg1)
+{
+  /* Nothing here.  */
+}
+
+static void
 core_files_info (struct target_ops *t)
 {
   print_section_info (core_data, core_bfd);
@@ -1050,6 +1155,8 @@ init_core_ops (void)
   core_ops.to_close = core_close;
   core_ops.to_detach = core_detach;
   core_ops.to_fetch_registers = get_core_registers;
+  core_ops.to_store_registers = set_core_registers;
+  core_ops.to_prepare_to_store = prepare_core_registers;
   core_ops.to_xfer_partial = core_xfer_partial;
   core_ops.to_files_info = core_files_info;
   core_ops.to_insert_breakpoint = ignore;
-- 
2.11.0
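The wiring at the end of the patch (filling two previously unset `target_ops` slots) comes down to function-pointer dispatch with a NULL check: before the patch, the core target simply had no store hook, so stores were refused. A minimal sketch of that dispatch pattern, using a made-up `mini_target_ops` table rather than GDB's real one:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical miniature of GDB's target_ops function-pointer table.  */
struct mini_target_ops
{
  void (*to_fetch_registers) (int regno);
  int (*to_store_registers) (int regno);   /* NULL means unsupported.  */
};

static int stored_regno = -1;

static void demo_fetch (int regno) { (void) regno; }
static int demo_store (int regno) { stored_regno = regno; return 0; }

/* Two table states: before the patch (no store slot wired) and after
   (store slot wired, as init_core_ops now does).  */
static struct mini_target_ops core_before = { demo_fetch, NULL };
static struct mini_target_ops core_after  = { demo_fetch, demo_store };

/* Dispatch helper: refuse when the slot is unwired, otherwise call
   through the table.  */
static int
target_store (struct mini_target_ops *ops, int regno)
{
  if (ops->to_store_registers == NULL)
    return -1;                 /* "not supported for this target" */
  return ops->to_store_registers (regno);
}
```

The patch's one-line additions to `init_core_ops` correspond to moving from the `core_before` state to the `core_after` state: once `to_store_registers` is non-NULL, `set $REG = ADDR` on a corefile reaches `set_core_registers` instead of failing.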