From: Tom de Vries <tdevries@suse.de>
To: gdb-patches@sourceware.org
Date: Mon, 20 Jul 2020 14:42:28 +0200
Subject: [committed][gdb/testsuite] Stabilize execution order in omp-par-scope.c
Message-ID: <20200720124227.GA14573@delia>

Hi,

In openmp test-case gdb.threads/omp-par-scope.exp we xfail and kfail
depending on omp_get_thread_num ().  Since the execution order of the
threads can vary from execution to execution, this can cause changes in
test results.  F.i., we can see this difference between two test runs:
...
-KFAIL: single_scope: first thread: print i3 (PRMS: gdb/22214)
+PASS: single_scope: first thread: print i3
-PASS: single_scope: second thread: print i3
+KFAIL: single_scope: second thread: print i3 (PRMS: gdb/22214)
...

In both cases, the KFAIL is for omp_get_thread_num () == 1, but in one
case that corresponds to the first thread executing that bit of code,
and in the other case to the second thread.

Get rid of this difference by stabilizing the execution order.

Tested on x86_64-linux.

Committed to trunk.

Thanks,
- Tom

[gdb/testsuite] Stabilize execution order in omp-par-scope.c

gdb/testsuite/ChangeLog:

2020-07-20  Tom de Vries  <tdevries@suse.de>

	* gdb.threads/omp-par-scope.c (lock, lock2): New variable.
	(omp_set_lock_in_order): New function.
	(single_scope, multi_scope, nested_func, nested_parallel): Use
	omp_set_lock_in_order and omp_unset_lock.
	(main): Init and destroy lock and lock2.

---
 gdb/testsuite/gdb.threads/omp-par-scope.c | 47 +++++++++++++++++++++++++++++++
 1 file changed, 47 insertions(+)

diff --git a/gdb/testsuite/gdb.threads/omp-par-scope.c b/gdb/testsuite/gdb.threads/omp-par-scope.c
index 987fb34426..57b0beb7b6 100644
--- a/gdb/testsuite/gdb.threads/omp-par-scope.c
+++ b/gdb/testsuite/gdb.threads/omp-par-scope.c
@@ -18,6 +18,28 @@
 #include <stdio.h>
 #include <omp.h>
+omp_lock_t lock;
+omp_lock_t lock2;
+
+/* Enforce execution order between two threads using a lock.  */
+
+static void
+omp_set_lock_in_order (int num, omp_lock_t *lock)
+{
+  /* Ensure that thread num 0 first sets the lock.  */
+  if (num == 0)
+    omp_set_lock (lock);
+  #pragma omp barrier
+
+  /* Block thread num 1 until it can set the lock.  */
+  if (num == 1)
+    omp_set_lock (lock);
+
+  /* This bit here is guaranteed to be executed first by thread num 0, and
+     once thread num 0 unsets the lock, to be executed by thread num 1.  */
+  ;
+}
+
 /* Testcase for checking access to variables in a single / outer scope.
    Make sure that variables not referred to in the parallel section are
    accessible from the debugger.
 */
@@ -31,6 +53,7 @@ single_scope (void)
 #pragma omp parallel num_threads (2) shared (s1, i1) private (s2, i2)
   {
     int thread_num = omp_get_thread_num ();
+    omp_set_lock_in_order (thread_num, &lock);
 
     s2 = 100 * (thread_num + 1) + 2;
     i2 = s2 + 10;
@@ -38,6 +61,8 @@
 #pragma omp critical
     printf ("single_scope: thread_num=%d, s1=%d, i1=%d, s2=%d, i2=%d\n",
 	    thread_num, s1, i1, s2, i2);
+
+    omp_unset_lock (&lock);
   }
 
   printf ("single_scope: s1=%d, s2=%d, s3=%d, i1=%d, i2=%d, i3=%d\n",
@@ -67,11 +92,15 @@ multi_scope (void)
     private (i21)
   {
     int thread_num = omp_get_thread_num ();
+    omp_set_lock_in_order (thread_num, &lock);
+
     i21 = 100 * (thread_num + 1) + 21;
 
 #pragma omp critical
     printf ("multi_scope: thread_num=%d, i01=%d, i11=%d, i21=%d\n",
 	    thread_num, i01, i11, i21);
+
+    omp_unset_lock (&lock);
   }
 
   printf ("multi_scope: i01=%d, i02=%d, i11=%d, "
@@ -105,6 +134,7 @@ nested_func (void)
 #pragma omp parallel num_threads (2) shared (i, p, x) private (j, q, y)
     {
       int tn = omp_get_thread_num ();
+      omp_set_lock_in_order (tn, &lock);
 
       j = 1000 * (tn + 1);
       q = j + 1;
@@ -112,6 +142,8 @@
 #pragma omp critical
       printf ("nested_func: tn=%d: i=%d, p=%d, x=%d, j=%d, q=%d, y=%d\n",
 	      tn, i, p, x, j, q, y);
+
+      omp_unset_lock (&lock);
     }
   }
 }
@@ -137,6 +169,8 @@ nested_parallel (void)
 #pragma omp parallel num_threads (2) private (l)
   {
     int num = omp_get_thread_num ();
+    omp_set_lock_in_order (num, &lock);
+
     int nthr = omp_get_num_threads ();
     int off = num * nthr;
     int k = off + 101;
@@ -144,23 +178,36 @@
 #pragma omp parallel num_threads (2) shared (num)
       {
 	int inner_num = omp_get_thread_num ();
+	omp_set_lock_in_order (inner_num, &lock2);
+
 #pragma omp critical
 	printf ("nested_parallel (inner threads): outer thread num = %d, "
 		"thread num = %d\n", num, inner_num);
+
+	omp_unset_lock (&lock2);
      }
 #pragma omp critical
     printf ("nested_parallel (outer threads) %d: k = %d, l = %d\n", num, k, l);
+
+    omp_unset_lock (&lock);
  }
 }
 
 int
 main (int argc, char **argv)
 {
+  omp_init_lock (&lock);
+  omp_init_lock (&lock2);
+
   single_scope ();
   multi_scope ();
 #if HAVE_NESTED_FUNCTION_SUPPORT
   nested_func ();
 #endif
   nested_parallel ();
+
+  omp_destroy_lock (&lock);
+  omp_destroy_lock (&lock2);
+
   return 0;
 }