From: Dominique Toupin
To: Mark Wielaard
CC: "Frank Ch. Eigler", gdb@sourceware.org, systemtap@sourceware.org
Date: Thu, 22 Apr 2010 19:46:00 -0000
Subject: RE: Static/dynamic userspace/kernel trace

> Having a very low-overhead pre-filter of the trace output using full
> expressions based on context variables, keeping statistics through
> aggregate state variables, and deciding what to push through the trace
> output buffer using formatted output and data kept in associative
> arrays helps a lot. Since all of this can be done without incurring
> extra I/O, context switches, or external post-filtering, it makes
> interpreting/analyzing the actual trace data a lot easier and lowers
> overhead.
> It might also help in your use case, since you don't have to push
> multiple megabytes of trace data off a machine but can tailor the
> trace buffers to hold only a couple of kilobytes of targeted output.

We can use conditional tracing with LTTng/kprobes and GDB tracepoints; it could be good to have more elaborate conditional tracing. It could be worthwhile to compare the different Linux conditional tracing options (both user space/kernel and dynamic/static) and see how we can improve some of them.

Our problem is that if we do a very fancy condition, or live analysis of the data before logging, we incur too much overhead in CPU cycles.
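
For concreteness, the kind of in-handler pre-filtering and aggregation described above might be sketched in a SystemTap script roughly like this (a sketch only; the binary path, function name, and the 1000 us threshold are made-up placeholders, not from any real setup):

```systemtap
global start, lat

probe process("/usr/bin/myserver").function("handle_request")
{
  # remember entry time per thread; no trace record emitted yet
  start[tid()] = gettimeofday_us()
}

probe process("/usr/bin/myserver").function("handle_request").return
{
  if (tid() in start) {
    t = gettimeofday_us() - start[tid()]
    # keep statistics in an aggregate instead of logging every event
    lat[execname()] <<< t
    # only push an actual trace record for outliers
    if (t > 1000)
      printf("slow request: %d us in %s\n", t, execname())
    delete start[tid()]
  }
}

probe end
{
  foreach (name in lat)
    printf("%s: count=%d avg=%d us\n",
           name, @count(lat[name]), @avg(lat[name]))
}
```

Everything except the outlier printf stays in kernel-side aggregates, so the data actually pushed off the machine is only the targeted output.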
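
On the GDB tracepoint side, a basic conditional trace with selective collection is already expressible along these lines (function and variable names here are hypothetical, just to show the command shape):

```gdb
(gdb) trace handle_request if (priority > 5)
(gdb) actions
> collect client_id, request_len
> end
(gdb) tstart
(gdb) tstop
```

The condition and the collect list are evaluated on the target, so only matching events with the named data end up in the trace buffer; the question is how far this can be pushed before the condition evaluation itself costs too many cycles.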