From mboxrd@z Thu Jan  1 00:00:00 1970
Message-ID: <4C86B3AA.8010503@intrepid.com>
Date: Tue, 07 Sep 2010 21:50:00 -0000
From: Nenad Vukicevic
To: Tom Tromey, gdb@sourceware.org
Subject: Re: multi-{inferior,exec}
References: <4C83D050.7010903@intrepid.com> <20100907213421.GA21182@caradoc.them.org>
In-Reply-To: <20100907213421.GA21182@caradoc.them.org>
Mailing-List: contact gdb-help@sourceware.org; run by ezmlm
Sender: gdb-owner@sourceware.org
X-SW-Source: 2010-09/txt/msg00049.txt.bz2

On 9/7/2010 2:34 PM, Daniel Jacobowitz wrote:
> On Tue, Sep 07, 2010 at 11:03:48AM -0600, Tom Tromey wrote:
>> This sounds like the "barrier" feature in HPD.
>> A patch for this would be interesting, too.
>>
>> I am curious though -- what is it useful for?
> Think about this in terms of, for example, automatically parallelized
> OpenMP code.
> If you have a parallel region, it's handy to examine all
> threads at that point. I don't know if this would work better in UPC
> than it would in OpenMP though; you can have code executed in only
> some threads...
> Also, think about 32 or more threads printing breakpoint announcements. :)

Since all UPC threads execute the same code but operate on different
sets of data, you can look at all of them as a single program. UPC has
a "upc_barrier" statement that every thread must reach before any of
them proceeds. At that point the shared space is consistent, so it is
a natural place to stop.

Nenad